Updates from: 09/28/2023 01:19:03
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Ad Auth No Join Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/ad-auth-no-join-linux-vm.md
The final step is to check that the flow works properly. To check this, try logg
[centosuser@centos8 ~]$ su - ADUser@contoso.com
Last login: Wed Oct 12 15:13:39 UTC 2022 on pts/0
[ADUser@Centos8 ~]$ exit
Now you are ready to use AD authentication on your Linux VM. <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization.md
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory.md
[create-azure-ad-ds-instance]: tutorial-create-instance.md
active-directory-domain-services Administration Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/administration-concepts.md
To get started, [create a Domain Services managed domain][create-instance].
[password-policy]: password-policy.md [hybrid-phs]: tutorial-configure-password-hash-sync.md#enable-synchronization-of-password-hashes [secure-domain]: secure-your-domain.md
-[azure-ad-password-sync]: ../active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md#password-hash-sync-process-for-azure-ad-domain-services
+[azure-ad-password-sync]: /azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization#password-hash-sync-process-for-azure-ad-domain-services
[create-instance]: tutorial-create-instance.md [tutorial-create-instance-advanced]: tutorial-create-instance-advanced.md [concepts-forest]: ./concepts-forest-trust.md
active-directory-domain-services Alert Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-ldaps.md
Create a replacement secure LDAP certificate by following the steps to [create a
If you still have issues, [open an Azure support request][azure-support] for more troubleshooting help. <!-- INTERNAL LINKS -->
-[azure-support]: ../active-directory/fundamentals/how-to-get-support.md
+[azure-support]: /azure/active-directory/fundamentals/how-to-get-support
active-directory-domain-services Alert Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-nsg.md
It takes a few moments for the security rule to be added and appear in the list.
If you still have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance. <!-- INTERNAL LINKS -->
-[azure-support]: ../active-directory/fundamentals/how-to-get-support.md
-[configure-ldaps]: tutorial-configure-ldaps.md
+[azure-support]: /azure/active-directory/fundamentals/how-to-get-support
+[configure-ldaps]: ./tutorial-configure-ldaps.md
active-directory-domain-services Alert Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md
# Known issues: Service principal alerts in Microsoft Entra Domain Services
-[Service principals](../active-directory/develop/app-objects-and-service-principals.md) are applications that the Azure platform uses to manage, update, and maintain a Microsoft Entra Domain Services managed domain. If a service principal is deleted, functionality in the managed domain is impacted.
+[Service principals](/azure/active-directory/develop/app-objects-and-service-principals) are applications that the Azure platform uses to manage, update, and maintain a Microsoft Entra Domain Services managed domain. If a service principal is deleted, functionality in the managed domain is impacted.
This article helps you troubleshoot and resolve service principal-related configuration alerts.
After you delete both applications, the Azure platform automatically recreates t
If you still have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance. <!-- INTERNAL LINKS -->
-[azure-support]: ../active-directory/fundamentals/how-to-get-support.md
+[azure-support]: /azure/active-directory/fundamentals/how-to-get-support
<!-- EXTERNAL LINKS --> [New-AzureAdServicePrincipal]: /powershell/module/azuread/new-azureadserviceprincipal
active-directory-domain-services Change Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/change-sku.md
It can take a minute or two to change the SKU type.
If you have a resource forest and want to create additional trusts after the SKU change, see [Create an outbound forest trust to an on-premises domain in Domain Services][create-trust]. <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [concepts-sku]: administration-concepts.md#azure-ad-ds-skus [create-trust]: tutorial-create-forest-trust.md
active-directory-domain-services Check Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/check-health.md
This article shows you how to view the Domain Services health status and underst
The health status for a managed domain is viewed using the Microsoft Entra admin center. Information on the last backup time and synchronization with Microsoft Entra ID can be seen, along with any alerts that indicate a problem with the managed domain's health. To view the health status for a managed domain, complete the following steps:
-1. Sign in to [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator).
+1. Sign in to [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator).
1. Search for and select **Microsoft Entra Domain Services**. 1. Select your managed domain, such as *aaddscontoso.com*. 1. On the left-hand side of the Domain Services resource window, select **Health**. The following example screenshot shows a healthy managed domain and the status of the last backup and Azure AD synchronization:
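Alongside the portal steps above, the same managed domain resource can be inspected from the Azure CLI. A minimal sketch, assuming a resource group named *myResourceGroup* and a managed domain named *aaddscontoso.com*; the health-related property names in the output may vary:

```bash
# Show the Domain Services resource and its properties, which include health information.
az resource show \
  --resource-group myResourceGroup \
  --name aaddscontoso.com \
  --resource-type "Microsoft.AAD/domainServices" \
  --query properties
```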
Health status alerts are categorized into the following levels of severity:
For more information on alerts that are shown in the health status page, see [Resolve alerts on your managed domain][troubleshoot-alerts] <!-- INTERNAL LINKS -->
-[azure-support]: ../active-directory/fundamentals/how-to-get-support.md
+[azure-support]: /azure/active-directory/fundamentals/how-to-get-support
[troubleshoot-alerts]: troubleshoot-alerts.md
active-directory-domain-services Compare Identity Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/compare-identity-solutions.md
You can also learn more about
[manage-gpos]: manage-group-policy.md [tutorial-ldaps]: tutorial-configure-ldaps.md [tutorial-create]: tutorial-create-instance.md
-[whatis-azuread]: ../active-directory/fundamentals/whatis.md
+[whatis-azuread]: /azure/active-directory/fundamentals/whatis
[overview-adds]: /windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview [create-forest-trust]: tutorial-create-forest-trust.md [administration-concepts]: administration-concepts.md
active-directory-domain-services Concepts Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-custom-attributes.md
Select **+ Add** to choose which custom attributes to synchronize. The list show
If you don't see the directory extension you are looking for, enter the extension’s associated application appId and click **Search** to load only that application’s defined extension properties. This search helps when multiple applications define many extensions in your tenant. >[!NOTE]
->If you would like to see directory extensions synchronized by Microsoft Entra Connect, click **Enterprise App** and look for the Application ID of the **Tenant Schema Extension App**. For more information, see [Microsoft Entra Connect Sync: Directory extensions](../active-directory/hybrid/connect/how-to-connect-sync-feature-directory-extensions.md#configuration-changes-in-azure-ad-made-by-the-wizard).
+>If you would like to see directory extensions synchronized by Microsoft Entra Connect, click **Enterprise App** and look for the Application ID of the **Tenant Schema Extension App**. For more information, see [Microsoft Entra Connect Sync: Directory extensions](/azure/active-directory/hybrid/connect/how-to-connect-sync-feature-directory-extensions#configuration-changes-in-azure-ad-made-by-the-wizard).
Click **Select**, and then **Save** to confirm the change.
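For the appId-based search described above, the extension properties an application defines can also be listed directly from Microsoft Graph. A hedged sketch using the Azure CLI, where the application object ID is a placeholder:

```bash
# List the directory extension properties defined by a single application.
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/applications/<application-object-id>/extensionProperties"
```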
To check the backfilling status, click **Domain Services Health** and verify the
## Next steps
-To configure onPremisesExtensionAttributes or directory extensions for cloud-only users in Microsoft Entra ID, see [Custom data options in Microsoft Graph](/graph/extensibility-overview?tabs=http#custom-data-options-in-microsoft-graph).
+To configure onPremisesExtensionAttributes or directory extensions for cloud-only users in Microsoft Entra ID, see [Custom data options in Microsoft Graph](/graph/extensibility-overview?tabs=http#custom-data-options-in-microsoft-graph).
-To sync onPremisesExtensionAttributes or directory extensions from on-premises to Microsoft Entra ID, [configure Microsoft Entra Connect](../active-directory/hybrid/connect/how-to-connect-sync-feature-directory-extensions.md).
+To sync onPremisesExtensionAttributes or directory extensions from on-premises to Microsoft Entra ID, [configure Microsoft Entra Connect](/azure/active-directory/hybrid/connect/how-to-connect-sync-feature-directory-extensions).
active-directory-domain-services Create Forest Trust Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-forest-trust-powershell.md
To complete this article, you need the following resources and privileges:
* Install and configure Azure AD PowerShell. * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Microsoft Entra ID](/powershell/azure/active-directory/install-adv2). * Make sure that you sign in to your Microsoft Entra tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
-* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#contributor) Azure role to create the required Domain Services resources.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
+* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#contributor) Azure role to create the required Domain Services resources.
## Sign in to the Microsoft Entra admin center
Before you start, make sure you understand the [network considerations and recom
1. Create the hybrid connectivity from your on-premises network to Azure using an Azure VPN or Azure ExpressRoute connection. The hybrid network configuration is beyond the scope of this documentation, and may already exist in your environment. For details on specific scenarios, see the following articles:
- * [Azure Site-to-Site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
- * [Azure ExpressRoute Overview](../expressroute/expressroute-introduction.md).
+ * [Azure Site-to-Site VPN](/azure/vpn-gateway/vpn-gateway-about-vpngateways).
+ * [Azure ExpressRoute Overview](/azure/expressroute/expressroute-introduction).
> [!IMPORTANT] > If you create the connection directly to your managed domain's virtual network, use a separate gateway subnet. Don't create the gateway in the managed domain's subnet.
You should have a Windows Server virtual machine joined to the managed domain reso
1. Connect to the Windows Server VM joined to the managed domain using Remote Desktop and your managed domain administrator credentials. If you get a Network Level Authentication (NLA) error, check the user account you used is not a domain user account. > [!TIP]
- > To securely connect to your VMs joined to Microsoft Entra Domain Services, you can use the [Azure Bastion Host Service](../bastion/bastion-overview.md) in supported Azure regions.
+ > To securely connect to your VMs joined to Microsoft Entra Domain Services, you can use the [Azure Bastion Host Service](/azure/bastion/bastion-overview) in supported Azure regions.
1. Open a command prompt and use the `whoami` command to show the distinguished name of the currently authenticated user:
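For example, at the VM's command prompt (the `/fqdn` switch prints the fully qualified name; the account and output shown here are illustrative placeholders):

```console
C:\> whoami /fqdn
CN=contosoadmin,OU=AADDC Users,DC=aaddscontoso,DC=com
```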
Using the Windows Server VM joined to the managed domain, you can test the scena
1. Connect to the Windows Server VM joined to the managed domain using Remote Desktop and your managed domain administrator credentials. If you get a Network Level Authentication (NLA) error, check the user account you used is not a domain user account. > [!TIP]
- > To securely connect to your VMs joined to Microsoft Entra Domain Services, you can use the [Azure Bastion Host Service](../bastion/bastion-overview.md) in supported Azure regions.
+ > To securely connect to your VMs joined to Microsoft Entra Domain Services, you can use the [Azure Bastion Host Service](/azure/bastion/bastion-overview) in supported Azure regions.
1. Open **Windows Settings**, then search for and select **Network and Sharing Center**. 1. Choose the option for **Change advanced sharing** settings.
For more conceptual information about forest types in Domain Services, see [How
<!-- INTERNAL LINKS --> [concepts-trust]: concepts-forest-trust.md
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance-advanced]: tutorial-create-instance-advanced.md [Connect-AzAccount]: /powershell/module/az.accounts/connect-azaccount [Connect-AzureAD]: /powershell/module/azuread/connect-azuread [New-AzResourceGroup]: /powershell/module/az.resources/new-azresourcegroup
-[network-peering]: ../virtual-network/virtual-network-peering-overview.md
-[New-AzureADServicePrincipal]: /powershell/module/AzureAD/New-AzureADServicePrincipal
+[network-peering]: /azure/virtual-network/virtual-network-peering-overview
+[New-AzureADServicePrincipal]: /powershell/module/azuread/new-azureadserviceprincipal
[Get-AzureRMSubscription]: /powershell/module/azurerm.profile/get-azurermsubscription [Install-Script]: /powershell/module/powershellget/install-script
active-directory-domain-services Create Gmsa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-gmsa.md
Applications and services can now be configured to use the gMSA as needed.
For more information about gMSAs, see [Getting started with group managed service accounts][gmsa-start]. <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [tutorial-create-management-vm]: tutorial-create-management-vm.md [create-custom-ou]: create-ou.md
active-directory-domain-services Create Ou https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-ou.md
For more information on using the administrative tools or creating and using ser
* [Service Accounts Step-by-Step Guide](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd548356(v=ws.10)) <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [tutorial-create-management-vm]: tutorial-create-management-vm.md [connect-windows-server-vm]: join-windows-vm.md#connect-to-the-windows-server-vm
active-directory-domain-services Delete Aadds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/delete-aadds.md
This article shows you how to use the Microsoft Entra admin center to delete a m
To delete a managed domain, complete the following steps:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator).
1. Search for and select **Microsoft Entra Domain Services**. 1. Select the name of your managed domain, such as *aaddscontoso.com*. 1. On the **Overview** page, select **Delete**. To confirm the deletion, type the domain name of the managed domain again, then select **Delete**.
active-directory-domain-services Deploy Azure App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-azure-app-proxy.md
With Microsoft Entra Domain Services, you can lift-and-shift legacy applications running on-premises into Azure. Microsoft Entra application proxy then helps you support remote workers by securely publishing those internal applications that are part of a Domain Services managed domain so they can be accessed over the internet.
-If you're new to the Microsoft Entra application proxy and want to learn more, see [How to provide secure remote access to internal applications](../active-directory/app-proxy/application-proxy.md).
+If you're new to the Microsoft Entra application proxy and want to learn more, see [How to provide secure remote access to internal applications](/azure/active-directory/app-proxy/application-proxy).
This article shows you how to create and configure a Microsoft Entra application proxy connector to provide secure access to applications in a managed domain.
To create a VM for the Microsoft Entra application proxy connector, complete the
Perform the following steps to download the Microsoft Entra application proxy connector. The setup file you download is copied to your App Proxy VM in the next section.
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator).
1. Search for and select **Enterprise applications**. 1. Select **Application proxy** from the menu on the left-hand side. To create your first connector and enable App Proxy, select the link to **download a connector**. 1. On the download page, accept the license terms and privacy agreement, then select **Accept terms & Download**.
With a VM ready to be used as the Microsoft Entra application proxy connector, n
> For example, if the Microsoft Entra domain is *contoso.com*, the global administrator should be `admin@contoso.com` or another valid alias on that domain. * If Internet Explorer Enhanced Security Configuration is turned on for the VM where you install the connector, the registration screen might be blocked. To allow access, follow the instructions in the error message, or turn off Internet Explorer Enhanced Security during the install process.
- * If connector registration fails, see [Troubleshoot Application Proxy](../active-directory/app-proxy/application-proxy-troubleshoot.md).
+ * If connector registration fails, see [Troubleshoot Application Proxy](/azure/active-directory/app-proxy/application-proxy-troubleshoot).
1. At the end of the setup, a note is shown for environments with an outbound proxy. To configure the Microsoft Entra application proxy connector to work through the outbound proxy, run the provided script, such as `C:\Program Files\Microsoft AAD App Proxy connector\ConfigureOutBoundProxy.ps1`. 1. On the Application proxy page in the Microsoft Entra admin center, the new connector is listed with a status of *Active*, as shown in the following example:
If you deploy multiple Microsoft Entra application proxy connectors, you must co
## Next steps
-With the Microsoft Entra application proxy integrated with Domain Services, publish applications for users to access. For more information, see [publish applications using Microsoft Entra application proxy](../active-directory/app-proxy/application-proxy-add-on-premises-application.md).
+With the Microsoft Entra application proxy integrated with Domain Services, publish applications for users to access. For more information, see [publish applications using Microsoft Entra application proxy](/azure/active-directory/app-proxy/application-proxy-add-on-premises-application).
<!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [create-join-windows-vm]: join-windows-vm.md
-[azure-bastion]: ../bastion/tutorial-create-host-portal.md
+[azure-bastion]: /azure/bastion/tutorial-create-host-portal
[Get-ADComputer]: /powershell/module/activedirectory/get-adcomputer [Set-ADComputer]: /powershell/module/activedirectory/set-adcomputer
active-directory-domain-services Deploy Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-kcd.md
In this scenario, let's assume you have a web app that runs as a service account
To learn more about how delegation works in Active Directory Domain Services, see [Kerberos Constrained Delegation Overview][kcd-technet]. <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [create-join-windows-vm]: join-windows-vm.md [tutorial-create-management-vm]: tutorial-create-management-vm.md
active-directory-domain-services Deploy Sp Profile Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-sp-profile-sync.md
From your Domain Services management VM, complete the following steps:
<!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [tutorial-create-management-vm]: tutorial-create-management-vm.md
active-directory-domain-services Fleet Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/fleet-metrics.md
The following table describes the metrics that are available for Domain Services
## Azure Monitor alert
-You can configure metric alerts for Domain Services to be notified of possible problems. Metric alerts are one type of alert for Azure Monitor. For more information about other types of alerts, see [What are Azure Monitor Alerts?](../azure-monitor/alerts/alerts-overview.md).
+You can configure metric alerts for Domain Services to be notified of possible problems. Metric alerts are one type of alert for Azure Monitor. For more information about other types of alerts, see [What are Azure Monitor Alerts?](/azure/azure-monitor/alerts/alerts-overview).
-To view and manage Azure Monitor alert, a user needs to be assigned [Azure Monitor roles](../azure-monitor/roles-permissions-security.md).
+To view and manage Azure Monitor alert, a user needs to be assigned [Azure Monitor roles](/azure/azure-monitor/roles-permissions-security).
In Azure Monitor or Domain Services Metrics, click **New alert** and configure a Domain Services instance as the scope. Then choose the metrics you want to measure from the list of available signals:
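As a non-portal counterpart to the steps above, a metric alert can also be created from the Azure CLI. A hedged sketch where the scope, metric name, and threshold are placeholders rather than values taken from the article:

```bash
# Create a metric alert rule scoped to a Domain Services instance.
az monitor metrics alert create \
  --name aadds-metric-alert \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.AAD/domainServices/aaddscontoso.com" \
  --condition "avg <metric-name> > 90" \
  --description "Alert when a Domain Services metric crosses its threshold"
```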
active-directory-domain-services How To Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/how-to-data-retrieval.md
You can create a user in the Microsoft Entra admin center or by using Graph Powe
You can create a new user using the Microsoft Entra admin center. To add a new user, follow these steps:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../active-directory/roles/permissions-reference.md#user-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](/azure/active-directory/roles/permissions-reference#user-administrator).
1. Browse to **Identity** > **Users**, and then select **New user**.
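Besides the admin center and Graph PowerShell, the Azure CLI can create a cloud-only user as well. A minimal sketch with placeholder values:

```bash
# Create a new Microsoft Entra user; replace the display name, UPN, and password.
az ad user create \
  --display-name "Contoso User" \
  --user-principal-name contosouser@contoso.com \
  --password "<initial-password>"
```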
active-directory-domain-services Join Centos Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-centos-linux-vm.md
If you have an existing CentOS Linux VM in Azure, connect to it using SSH, then
If you need to create a CentOS Linux VM, or want to create a test VM for use with this article, you can use one of the following methods:
-* [Microsoft Entra admin center](../virtual-machines/linux/quick-create-portal.md)
-* [Azure CLI](../virtual-machines/linux/quick-create-cli.md)
-* [Azure PowerShell](../virtual-machines/linux/quick-create-powershell.md)
+* [Microsoft Entra admin center](/azure/virtual-machines/linux/quick-create-portal)
+* [Azure CLI](/azure/virtual-machines/linux/quick-create-cli)
+* [Azure PowerShell](/azure/virtual-machines/linux/quick-create-powershell)
When you create the VM, pay attention to the virtual network settings to make sure that the VM can communicate with the managed domain:
Now that the required packages are installed on the VM, join the VM to the manag
* Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available. * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain.
-1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md).
+1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](/azure/active-directory/fundamentals/how-to-manage-groups).
Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
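A minimal sketch of that initialization, using the article's example account (replace it with your own) and verifying the ticket afterward:

```bash
# Request a Kerberos ticket; the managed domain name must be in uppercase.
kinit contosoadmin@AADDSCONTOSO.COM

# Confirm that a ticket was granted.
klist
```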
To verify that the VM has been successfully joined to the managed domain, start
If you have problems connecting the VM to the managed domain or signing in with a domain account, see [Troubleshooting domain join issues](join-windows-vm.md#troubleshoot-domain-join-issues). <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md
active-directory-domain-services Join Coreos Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-coreos-linux-vm.md
If you have an existing CoreOS Linux VM in Azure, connect to it using SSH, then
If you need to create a CoreOS Linux VM, or want to create a test VM for use with this article, you can use one of the following methods:
-* [Microsoft Entra admin center](../virtual-machines/linux/quick-create-portal.md)
-* [Azure CLI](../virtual-machines/linux/quick-create-cli.md)
-* [Azure PowerShell](../virtual-machines/linux/quick-create-powershell.md)
+* [Microsoft Entra admin center](/azure/virtual-machines/linux/quick-create-portal)
+* [Azure CLI](/azure/virtual-machines/linux/quick-create-cli)
+* [Azure PowerShell](/azure/virtual-machines/linux/quick-create-powershell)
When you create the VM, pay attention to the virtual network settings to make sure that the VM can communicate with the managed domain:
With the SSSD configuration file updated, now join the virtual machine to the ma
* Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available. * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain.
-1. Now join the VM to the managed domain using the `adcli join` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md).
+1. Now join the VM to the managed domain using the `adcli join` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](/azure/active-directory/fundamentals/how-to-manage-groups).
Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain.
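A minimal sketch of that join, using the article's example account; additional `adcli` options (keytab, host name, and so on) may be required in your environment:

```bash
# Join the CoreOS VM to the managed domain; you're prompted for the account's password.
sudo adcli join aaddscontoso.com -U contosoadmin@AADDSCONTOSO.COM
```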
To verify that the VM has been successfully joined to the managed domain, start
If you have problems connecting the VM to the managed domain or signing in with a domain account, see [Troubleshooting domain join issues](join-windows-vm.md#troubleshoot-domain-join-issues). <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md
active-directory-domain-services Join Rhel Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-rhel-linux-vm.md
If you have an existing RHEL Linux VM in Azure, connect to it using SSH, then co
If you need to create a RHEL Linux VM, or want to create a test VM for use with this article, you can use one of the following methods:
-* [Microsoft Entra admin center](../virtual-machines/linux/quick-create-portal.md)
-* [Azure CLI](../virtual-machines/linux/quick-create-cli.md)
-* [Azure PowerShell](../virtual-machines/linux/quick-create-powershell.md)
+* [Microsoft Entra admin center](/azure/virtual-machines/linux/quick-create-portal)
+* [Azure CLI](/azure/virtual-machines/linux/quick-create-cli)
+* [Azure PowerShell](/azure/virtual-machines/linux/quick-create-powershell)
When you create the VM, pay attention to the virtual network settings to make sure that the VM can communicate with the managed domain:
Now that the required packages are installed on the VM, join the VM to the manag
* Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available. * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain.
-1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md).
+1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](/azure/active-directory/fundamentals/how-to-manage-groups).
Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
To verify that the VM has been successfully joined to the managed domain, start
If you have problems connecting the VM to the managed domain or signing in with a domain account, see [Troubleshooting domain join issues](join-windows-vm.md#troubleshoot-domain-join-issues). <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md
active-directory-domain-services Join Suse Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-suse-linux-vm.md
If you have an existing SLE Linux VM in Azure, connect to it using SSH, then con
If you need to create a SLE Linux VM, or want to create a test VM for use with this article, you can use one of the following methods:
-* [Microsoft Entra admin center](../virtual-machines/linux/quick-create-portal.md)
-* [Azure CLI](../virtual-machines/linux/quick-create-cli.md)
-* [Azure PowerShell](../virtual-machines/linux/quick-create-powershell.md)
+* [Microsoft Entra admin center](/azure/virtual-machines/linux/quick-create-portal)
+* [Azure CLI](/azure/virtual-machines/linux/quick-create-cli)
+* [Azure PowerShell](/azure/virtual-machines/linux/quick-create-powershell)
When you create the VM, pay attention to the virtual network settings to make sure that the VM can communicate with the managed domain:
To join the VM to the managed domain, complete the following steps:
![Example screenshot of the Active Directory enrollment window in YaST](./media/join-suse-linux-vm/enroll-window.png)
-1. In the dialog, specify the *Username* and *Password* of a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md).
+1. In the dialog, specify the *Username* and *Password* of a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](/azure/active-directory/fundamentals/how-to-manage-groups).
To make sure that the current domain is enabled for Samba, activate *Overwrite Samba configuration to work with this AD*.
To verify that the VM has been successfully joined to the managed domain, start
If you have problems connecting the VM to the managed domain or signing in with a domain account, see [Troubleshooting domain join issues](join-windows-vm.md#troubleshoot-domain-join-issues). <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md
active-directory-domain-services Join Ubuntu Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-ubuntu-linux-vm.md
If you have an existing Ubuntu Linux VM in Azure, connect to it using SSH, then
If you need to create an Ubuntu Linux VM, or want to create a test VM for use with this article, you can use one of the following methods:
-* [Microsoft Entra admin center](../virtual-machines/linux/quick-create-portal.md)
-* [Azure CLI](../virtual-machines/linux/quick-create-cli.md)
-* [Azure PowerShell](../virtual-machines/linux/quick-create-powershell.md)
+* [Microsoft Entra admin center](/azure/virtual-machines/linux/quick-create-portal)
+* [Azure CLI](/azure/virtual-machines/linux/quick-create-cli)
+* [Azure PowerShell](/azure/virtual-machines/linux/quick-create-powershell)
When you create the VM, pay attention to the virtual network settings to make sure that the VM can communicate with the managed domain:
Now that the required packages are installed on the VM and NTP is configured, jo
* Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available. * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain.
-1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md).
+1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Microsoft Entra ID](/azure/active-directory/fundamentals/how-to-manage-groups).
Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
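A minimal sketch of that initialization followed by the join itself; the `realm` options shown are illustrative and may differ from the article's exact steps:

```bash
# Request a Kerberos ticket for the managed domain account (domain name in uppercase).
kinit contosoadmin@AADDSCONTOSO.COM

# Join the VM to the managed domain using realmd/SSSD.
sudo realm join --verbose AADDSCONTOSO.COM -U 'contosoadmin@AADDSCONTOSO.COM'
```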
To verify that the VM has been successfully joined to the managed domain, start
If you have problems connecting the VM to the managed domain or signing in with a domain account, see [Troubleshooting domain join issues](join-windows-vm.md#troubleshoot-domain-join-issues). <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md
active-directory-domain-services Join Windows Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm-template.md
It takes a few moments for the deployment to complete successfully. When finishe
In this article, you used the Azure portal to configure and deploy resources using templates. You can also deploy resources with Resource Manager templates using [Azure PowerShell][deploy-powershell] or the [Azure CLI][deploy-cli]. <!-- INTERNAL LINKS -->
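For example, a minimal Azure CLI sketch of such a template deployment, where the resource group and file names are placeholders rather than the article's exact files:

```bash
# Deploy a Resource Manager template that joins a Windows VM to the managed domain.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json
```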
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md
-[template-overview]: ../azure-resource-manager/templates/overview.md
-[deploy-powershell]: ../azure-resource-manager/templates/deploy-powershell.md
-[deploy-cli]: ../azure-resource-manager/templates/deploy-cli.md
+[template-overview]: /azure/azure-resource-manager/templates/overview
+[deploy-powershell]: /azure/azure-resource-manager/templates/deploy-powershell
+[deploy-cli]: /azure/azure-resource-manager/templates/deploy-cli
active-directory-domain-services Join Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm.md
To administer your managed domain, configure a management VM using the Active Di
> [Install administration tools on a management VM](tutorial-create-management-vm.md) <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md
-[vnet-peering]: ../virtual-network/virtual-network-peering-overview.md
+[vnet-peering]: /azure/virtual-network/virtual-network-peering-overview
[password-sync]: ./tutorial-create-instance.md [add-computer]: /powershell/module/microsoft.powershell.management/add-computer
-[azure-bastion]: ../bastion/tutorial-create-host-portal.md
+[azure-bastion]: /azure/bastion/tutorial-create-host-portal
[set-azvmaddomainextension]: /powershell/module/az.compute/set-azvmaddomainextension
active-directory-domain-services Manage Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-dns.md
Name resolution of the resources in other namespaces from VMs connected to the m
For more information about managing DNS, see the [DNS tools article on Technet](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753579(v=ws.11)). <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md
-[expressroute]: ../expressroute/expressroute-introduction.md
-[vpn-gateway]: ../vpn-gateway/vpn-gateway-about-vpngateways.md
+[expressroute]: /azure/expressroute/expressroute-introduction
+[vpn-gateway]: /azure/vpn-gateway/vpn-gateway-about-vpngateways
[create-join-windows-vm]: join-windows-vm.md [tutorial-create-management-vm]: tutorial-create-management-vm.md [connect-windows-server-vm]: join-windows-vm.md#connect-to-the-windows-server-vm
active-directory-domain-services Manage Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-group-policy.md
In a hybrid environment, group policies configured in an on-premises AD DS envir
This article shows you how to install the Group Policy Management tools, then edit the built-in GPOs and create custom GPOs. If you are interested in a server management strategy, including machines in Azure and
-[hybrid connected](../azure-arc/servers/overview.md),
+[hybrid connected](/azure/azure-arc/servers/overview),
consider reading about the
-[guest configuration](../governance/machine-configuration/overview.md)
+[guest configuration](/azure/governance/machine-configuration/overview)
feature of
-[Azure Policy](../governance/policy/overview.md).
+[Azure Policy](/azure/governance/policy/overview).
## Before you begin
To group similar policy settings, you often create additional GPOs instead of ap
For more information on the available Group Policy settings that you can configure using the Group Policy Management Console, see [Work with Group Policy preference items][group-policy-console]. <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [create-join-windows-vm]: join-windows-vm.md [tutorial-create-management-vm]: tutorial-create-management-vm.md
active-directory-domain-services Mismatched Tenant Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/mismatched-tenant-error.md
The managed domain and the virtual network belong to two different Microsoft Ent
The following two options resolve the mismatched directory error: * First, [delete the managed domain](delete-aadds.md) from your existing Microsoft Entra directory. Then, [create a replacement managed domain](tutorial-create-instance.md) in the same Microsoft Entra directory as the virtual network you wish to use. When ready, join all machines previously joined to the deleted domain to the recreated managed domain.
-* [Move the Azure subscription](../cost-management-billing/manage/billing-subscription-transfer.md) containing the virtual network to the same Microsoft Entra directory as the managed domain.
+* [Move the Azure subscription](/azure/cost-management-billing/manage/billing-subscription-transfer) containing the virtual network to the same Microsoft Entra directory as the managed domain.
## Next steps
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md
Virtual network peering is a mechanism that connects two virtual networks in the
![Virtual network connectivity using peering](./media/active-directory-domain-services-design-guide/vnet-peering.png)
-For more information, see [Azure virtual network peering overview](../virtual-network/virtual-network-peering-overview.md).
+For more information, see [Azure virtual network peering overview](/azure/virtual-network/virtual-network-peering-overview).
### Virtual Private Networking (VPN)
You can connect a virtual network to another virtual network (VNet-to-VNet) in t
![Virtual network connectivity using a VPN Gateway](./media/active-directory-domain-services-design-guide/vnet-connection-vpn-gateway.jpg)
-For more information on using virtual private networking, read [Configure a VNet-to-VNet VPN gateway connection by using the Microsoft Entra admin center](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
+For more information on using virtual private networking, read [Configure a VNet-to-VNet VPN gateway connection by using the Microsoft Entra admin center](/azure/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal).
## Name resolution when connecting virtual networks
Don't lock the networking resources used by Domain Services. If networking resou
| Azure resource | Description | |:-|:| | Network interface card | Domain Services hosts the managed domain on two domain controllers (DCs) that run on Windows Server as Azure VMs. Each VM has a virtual network interface that connects to your virtual network subnet. |
-| Dynamic standard public IP address | Domain Services communicates with the synchronization and management service using a Standard SKU public IP address. For more information about public IP addresses, see [IP address types and allocation methods in Azure](../virtual-network/ip-services/public-ip-addresses.md). |
-| Azure standard load balancer | Domain Services uses a Standard SKU load balancer for network address translation (NAT) and load balancing (when used with secure LDAP). For more information about Azure load balancers, see [What is Azure Load Balancer?](../load-balancer/load-balancer-overview.md) |
+| Dynamic standard public IP address | Domain Services communicates with the synchronization and management service using a Standard SKU public IP address. For more information about public IP addresses, see [IP address types and allocation methods in Azure](/azure/virtual-network/ip-services/public-ip-addresses). |
+| Azure standard load balancer | Domain Services uses a Standard SKU load balancer for network address translation (NAT) and load balancing (when used with secure LDAP). For more information about Azure load balancers, see [What is Azure Load Balancer?](/azure/load-balancer/load-balancer-overview) |
| Network address translation (NAT) rules | Domain Services creates and uses two Inbound NAT rules on the load balancer for secure PowerShell remoting. If a Standard SKU load balancer is used, it will have an Outbound NAT Rule too. For the Basic SKU load balancer, no Outbound NAT rule is required. | | Load balancer rules | When a managed domain is configured for secure LDAP on TCP port 636, three rules are created and used on a load balancer to distribute the traffic. |
Don't lock the networking resources used by Domain Services. If networking resou
## Network security groups and required ports
-A [network security group (NSG)](../virtual-network/network-security-groups-overview.md) contains a list of rules that allow or deny network traffic in an Azure virtual network. When you deploy a managed domain, a network security group is created with a set of rules that let the service provide authentication and management functions. This default network security group is associated with the virtual network subnet your managed domain is deployed into.
+A [network security group (NSG)](/azure/virtual-network/network-security-groups-overview) contains a list of rules that allow or deny network traffic in an Azure virtual network. When you deploy a managed domain, a network security group is created with a set of rules that let the service provide authentication and management functions. This default network security group is associated with the virtual network subnet your managed domain is deployed into.
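For example, an extra rule can be added to that network security group from the Azure CLI. A hedged sketch for the secure LDAP port mentioned above, where the NSG name and allowed source range are placeholders:

```bash
# Allow inbound secure LDAP (TCP 636) from a known address range only.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name aadds-nsg \
  --name AllowLDAPS \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 636 \
  --source-address-prefixes <allowed-address-range>
```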
The following sections cover network security groups and Inbound and Outbound port requirements.
You must also route inbound traffic from the IP addresses included in the respec
For more information about some of the network resources and connection options used by Domain Services, see the following articles:
-* [Azure virtual network peering](../virtual-network/virtual-network-peering-overview.md)
-* [Azure VPN gateways](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md)
-* [Azure network security groups](../virtual-network/network-security-groups-overview.md)
+* [Azure virtual network peering](/azure/virtual-network/virtual-network-peering-overview)
+* [Azure VPN gateways](/azure/vpn-gateway/vpn-gateway-about-vpn-gateway-settings)
+* [Azure network security groups](/azure/virtual-network/network-security-groups-overview)
active-directory-domain-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/notifications.md
You can also choose to have all *Global Administrators* of the Microsoft Entra d
To review the existing email notification recipients, or add recipients, complete the following steps:
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../active-directory/roles/permissions-reference.md#authentication-policy-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](/azure/active-directory/roles/permissions-reference#authentication-policy-administrator).
1. Search for and select **Microsoft Entra Domain Services**. 1. Select your managed domain, such as *aaddscontoso.com*. 1. On the left-hand side of the Domain Services resource window, select **Notification settings**. The existing recipients for email notifications are shown.
active-directory-domain-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/overview.md
To get started, [create a managed domain using the Microsoft Entra admin center]
[compare]: compare-identity-solutions.md [synchronization]: synchronization.md [tutorial-create]: tutorial-create-instance.md
-[azure-ad-connect]: ../active-directory/hybrid/whatis-azure-ad-connect.md
-[password-hash-sync]: ../active-directory/hybrid/how-to-connect-password-hash-synchronization.md
-[availability-zones]: ../reliability/availability-zones-overview.md
-[forest-trusts]: concepts-resource-forest.md
+[azure-ad-connect]: /azure/active-directory/hybrid/connect/whatis-azure-ad-connect
+[password-hash-sync]: /azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization
+[availability-zones]: /azure/reliability/availability-zones-overview
+[forest-trusts]: ./concepts-forest-trust.md
[administration-concepts]: administration-concepts.md [synchronization]: synchronization.md [concepts-replica-sets]: concepts-replica-sets.md
active-directory-domain-services Password Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/password-policy.md
For more information about password policies and using the Active Directory Admi
* [Configure fine-grained password policies using AD Administration Center](/windows-server/identity/ad-ds/get-started/adac/introduction-to-active-directory-administrative-center-enhancements--level-100-#fine_grained_pswd_policy_mgmt) <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [tutorial-create-management-vm]: tutorial-create-management-vm.md
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
To see the managed domain in action, you can [domain-join a Windows VM][windows-
[windows-join]: join-windows-vm.md [tutorial-ldaps]: tutorial-configure-ldaps.md [tutorial-phs]: tutorial-configure-password-hash-sync.md
-[nsg-overview]: ../virtual-network/network-security-groups-overview.md
+[nsg-overview]: /azure/virtual-network/network-security-groups-overview
[network-ports]: network-considerations.md#network-security-groups-and-required-ports <!-- EXTERNAL LINKS -->
-[Connect-AzAccount]: /powershell/module/Az.Accounts/Connect-AzAccount
-[Connect-AzureAD]: /powershell/module/AzureAD/Connect-AzureAD
+[Connect-AzAccount]: /powershell/module/az.accounts/connect-azaccount
+[Connect-AzureAD]: /powershell/module/azuread/connect-azuread
[New-AzureADServicePrincipal]: /powershell/module/AzureAD/New-AzureADServicePrincipal
-[New-AzureADGroup]: /powershell/module/AzureAD/New-AzureADGroup
-[Add-AzureADGroupMember]: /powershell/module/AzureAD/Add-AzureADGroupMember
-[Get-AzureADGroup]: /powershell/module/AzureAD/Get-AzureADGroup
-[Get-AzureADUser]: /powershell/module/AzureAD/Get-AzureADUser
-[Register-AzResourceProvider]: /powershell/module/Az.Resources/Register-AzResourceProvider
-[New-AzResourceGroup]: /powershell/module/Az.Resources/New-AzResourceGroup
-[New-AzVirtualNetworkSubnetConfig]: /powershell/module/Az.Network/New-AzVirtualNetworkSubnetConfig
-[New-AzVirtualNetwork]: /powershell/module/Az.Network/New-AzVirtualNetwork
-[Get-AzSubscription]: /powershell/module/Az.Accounts/Get-AzSubscription
-[cloud-shell]: ../cloud-shell/cloud-shell-windows-users.md
-[availability-zones]: ../reliability/availability-zones-overview.md
+[New-AzureADGroup]: /powershell/module/azuread/new-azureadgroup
+[Add-AzureADGroupMember]: /powershell/module/azuread/add-azureadgroupmember
+[Get-AzureADGroup]: /powershell/module/azuread/get-azureadgroup
+[Get-AzureADUser]: /powershell/module/azuread/get-azureaduser
+[Register-AzResourceProvider]: /powershell/module/az.resources/register-azresourceprovider
+[New-AzResourceGroup]: /powershell/module/az.resources/new-azresourcegroup
+[New-AzVirtualNetworkSubnetConfig]: /powershell/module/az.network/new-azvirtualnetworksubnetconfig
+[New-AzVirtualNetwork]: /powershell/module/az.network/new-azvirtualnetwork
+[Get-AzSubscription]: /powershell/module/az.accounts/get-azsubscription
+[cloud-shell]: /azure/active-directory/develop/configure-app-multi-instancing
+[availability-zones]: /azure/reliability/availability-zones-overview
[New-AzNetworkSecurityRuleConfig]: /powershell/module/az.network/new-aznetworksecurityruleconfig [New-AzNetworkSecurityGroup]: /powershell/module/az.network/new-aznetworksecuritygroup [Set-AzVirtualNetworkSubnetConfig]: /powershell/module/az.network/set-azvirtualnetworksubnetconfig
active-directory-domain-services Powershell Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md
To complete this article, you need the following resources and privileges:
* If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant]. * A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant. * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][tutorial-create-instance].
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to change the Domain Services synchronization scope.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to change the Domain Services synchronization scope.
## Scoped synchronization overview
To learn more about the synchronization process, see [Understand synchronization
[scoped-sync]: scoped-synchronization.md [concepts-sync]: synchronization.md [tutorial-create-instance]: tutorial-create-instance.md
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
<!-- EXTERNAL LINKS --> [Connect-AzureAD]: /powershell/module/azuread/connect-azuread
active-directory-domain-services Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scenarios.md
For more information about this deployment scenario, see [how to configure domai
To get started, [Create and configure a Microsoft Entra Domain Services managed domain][tutorial-create-instance]. <!-- INTERNAL LINKS -->
-[hdinsight]: ../hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.md
+[hdinsight]: /azure/hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds
[tutorial-create-instance]: tutorial-create-instance.md [custom-ou]: create-ou.md [create-gpo]: manage-group-policy.md
-[sspr]: ../active-directory/authentication/overview-authentication.md#self-service-password-reset
+[sspr]: /azure/active-directory/authentication/overview-authentication#self-service-password-reset
[compare]: compare-identity-solutions.md
-[azure-ad-connect]: ../active-directory/hybrid/whatis-azure-ad-connect.md
+[azure-ad-connect]: /azure/active-directory/hybrid/connect/whatis-azure-ad-connect
<!-- EXTERNAL LINKS --> [windows-rds]: /windows-server/remote/remote-desktop-services/rds-azure-adds
active-directory-domain-services Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/scoped-synchronization.md
To complete this article, you need the following resources and privileges:
* If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant]. * A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant. * If needed, complete the tutorial to [create and configure a Microsoft Entra Domain Services managed domain][tutorial-create-instance].
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to change the Domain Services synchronization scope.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to change the Domain Services synchronization scope.
## Scoped synchronization overview
To learn more about the synchronization process, see [Understand synchronization
[scoped-sync-powershell]: powershell-scoped-synchronization.md [concepts-sync]: synchronization.md [tutorial-create-instance]: tutorial-create-instance.md
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
active-directory-domain-services Secure Remote Vm Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-remote-vm-access.md
For more information on improving resiliency of your deployment, see [Remote Des
For more information about securing user sign-in, see [How it works: Microsoft Entra multifactor authentication][concepts-mfa]. <!-- INTERNAL LINKS -->
-[bastion-overview]: ../bastion/bastion-overview.md
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[bastion-overview]: /azure/bastion/bastion-overview
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [configure-azureadds-vnet]: tutorial-configure-networking.md [tutorial-create-join-vm]: join-windows-vm.md
-[user-mfa-registration]: ../active-directory/authentication/howto-mfa-nps-extension.md#register-users-for-mfa
-[nps-extension]: ../active-directory/authentication/howto-mfa-nps-extension.md
-[azure-mfa-nps-integration]: ../active-directory/authentication/howto-mfa-nps-extension-rdg.md
-[register-nps-ad]:../active-directory/authentication/howto-mfa-nps-extension-rdg.md#register-server-in-active-directory
-[create-nps-policy]: ../active-directory/authentication/howto-mfa-nps-extension-rdg.md#configure-network-policy
-[concepts-mfa]: ../active-directory/authentication/concept-mfa-howitworks.md
+[user-mfa-registration]: /azure/active-directory/authentication/howto-mfa-nps-extension#register-users-for-mfa
+[nps-extension]: /azure/active-directory/authentication/howto-mfa-nps-extension
+[azure-mfa-nps-integration]: /azure/active-directory/authentication/howto-mfa-nps-extension-rdg
+[register-nps-ad]:/azure/active-directory/authentication/howto-mfa-nps-extension-rdg#register-server-in-active-directory
+[create-nps-policy]: /azure/active-directory/authentication/howto-mfa-nps-extension-rdg#configure-network-policy
+[concepts-mfa]: /azure/active-directory/authentication/concept-mfa-howitworks
<!-- EXTERNAL LINKS --> [deploy-remote-desktop]: /windows-server/remote/remote-desktop-services/rds-deploy-infrastructure
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md
It takes a few moments for the security settings to be applied to the managed do
To learn more about the synchronization process, see [How objects and credentials are synchronized in a managed domain][synchronization]. <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md
-[global-admin]: ../role-based-access-control/elevate-access-global-admin.md
+[global-admin]: /azure/role-based-access-control/elevate-access-global-admin
[synchronization]: synchronization.md <!-- EXTERNAL LINKS -->
-[Get-AzResource]: /powershell/module/az.resources/Get-AzResource
-[Set-AzResource]: /powershell/module/Az.Resources/Set-AzResource
+[Get-AzResource]: /powershell/module/az.resources/get-azresource
+[Set-AzResource]: /powershell/module/az.resources/set-azresource
[Connect-AzAccount]: /powershell/module/Az.Accounts/Connect-AzAccount [Connect-AzureAD]: /powershell/module/AzureAD/Connect-AzureAD
active-directory-domain-services Security Audit Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/security-audit-events.md
The following table outlines scenarios for each destination resource type.
| Target Resource | Scenario | |:|:|
-|Azure Storage| This target should be used when your primary need is to store security audit events for archival purposes. Other targets can be used for archival purposes, however those targets provide capabilities beyond the primary need of archiving. <br /><br />Before you enable Domain Services security audit events, first [Create an Azure Storage account](../storage/common/storage-account-create.md).|
-|Azure Event Hubs| This target should be used when your primary need is to share security audit events with additional software such as data analysis software or security information & event management (SIEM) software.<br /><br />Before you enable Domain Services security audit events, [Create an event hub using Microsoft Entra admin center](../event-hubs/event-hubs-create.md)|
-|Azure Log Analytics Workspace| This target should be used when your primary need is to analyze and review secure audits from the Microsoft Entra admin center directly.<br /><br />Before you enable Domain Services security audit events, [Create a Log Analytics workspace in the Microsoft Entra admin center.](../azure-monitor/logs/quick-create-workspace.md)|
+|Azure Storage| This target should be used when your primary need is to store security audit events for archival purposes. Other targets can be used for archival purposes; however, those targets provide capabilities beyond the primary need of archiving. <br /><br />Before you enable Domain Services security audit events, first [Create an Azure Storage account](/azure/storage/common/storage-account-create).|
+|Azure Event Hubs| This target should be used when your primary need is to share security audit events with additional software such as data analysis software or security information & event management (SIEM) software.<br /><br />Before you enable Domain Services security audit events, [Create an event hub using the Microsoft Entra admin center](/azure/event-hubs/event-hubs-create).|
+|Azure Log Analytics Workspace| This target should be used when your primary need is to analyze and review security audit events from the Microsoft Entra admin center directly.<br /><br />Before you enable Domain Services security audit events, [Create a Log Analytics workspace in the Microsoft Entra admin center](/azure/azure-monitor/logs/quick-create-workspace).|
## Enable security audit events using the Microsoft Entra admin center
To enable Domain Services security and DNS audit events using Azure PowerShell,
1. Create the target resource for the audit events.
- * **Azure Log Analytic workspaces** - [Create a Log Analytics workspace with Azure PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md).
- * **Azure storage** - [Create a storage account using Azure PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell)
- * **Azure event hubs** - [Create an event hub using Azure PowerShell](../event-hubs/event-hubs-quickstart-powershell.md). You may also need to use the [New-AzEventHubAuthorizationRule](/powershell/module/az.eventhub/new-azeventhubauthorizationrule) cmdlet to create an authorization rule that grants Domain Services permissions to the event hub *namespace*. The authorization rule must include the **Manage**, **Listen**, and **Send** rights.
+ * **Azure Log Analytics workspaces** - [Create a Log Analytics workspace with Azure PowerShell](/azure/azure-monitor/logs/powershell-workspace-configuration).
+ * **Azure storage** - [Create a storage account using Azure PowerShell](/azure/storage/common/storage-account-create?tabs=azure-powershell)
+ * **Azure event hubs** - [Create an event hub using Azure PowerShell](/azure/event-hubs/event-hubs-quickstart-powershell). You may also need to use the [New-AzEventHubAuthorizationRule](/powershell/module/az.eventhub/new-azeventhubauthorizationrule) cmdlet to create an authorization rule that grants Domain Services permissions to the event hub *namespace*. The authorization rule must include the **Manage**, **Listen**, and **Send** rights.
> [!IMPORTANT] > Ensure you set the authorization rule on the event hub namespace and not the event hub itself.
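To make the namespace-level rule concrete, here's a minimal sketch, assuming a hypothetical resource group named *myResourceGroup* and an Event Hubs namespace named *myEventHubNamespace* (both names are placeholders, not from the article):

```azurepowershell
# Sketch only: create an authorization rule on the Event Hubs *namespace*
# (not on the event hub itself) with the Manage, Listen, and Send rights
# that Domain Services needs. Resource names are placeholders.
New-AzEventHubAuthorizationRule `
    -ResourceGroupName "myResourceGroup" `
    -NamespaceName "myEventHubNamespace" `
    -Name "aadds-audit-rule" `
    -Rights @("Manage", "Listen", "Send")
```

Older versions of the Az.EventHub module expose the namespace parameter as `-Namespace` instead of `-NamespaceName`, so check `Get-Help New-AzEventHubAuthorizationRule` for the version you have installed.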
-1. Get the resource ID for your Domain Services managed domain using the [Get-AzResource](/powershell/module/Az.Resources/Get-AzResource) cmdlet. Create a variable named *$aadds.ResourceId* to hold the value:
+1. Get the resource ID for your Domain Services managed domain using the [Get-AzResource](/powershell/module/az.resources/get-azresource) cmdlet. Store the result in a variable named *$aadds*; its *ResourceId* property holds the value used in the next step:
```azurepowershell $aadds = Get-AzResource -name aaddsDomainName ```
-1. Configure the Azure Diagnostic settings using the [Set-AzDiagnosticSetting](/powershell/module/Az.Monitor/Set-AzDiagnosticSetting) cmdlet to use the target resource for Microsoft Entra Domain Services audit events. In the following examples, the variable *$aadds.ResourceId* is used from the previous step.
+1. Configure the Azure Diagnostic settings using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet to use the target resource for Microsoft Entra Domain Services audit events. In the following examples, the variable *$aadds.ResourceId* is used from the previous step.
* **Azure storage** - Replace *storageAccountId* with your storage account name:
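A minimal sketch of what that setting can look like, assuming the *$aadds* variable from the previous step and a hypothetical storage account named *aaddsauditstore* in a resource group named *myResourceGroup* (both placeholders, not from the article):

```azurepowershell
# Sketch only: send Domain Services audit events to an Azure Storage account.
# Look up the storage account so its resource ID can be used as the target.
$storageAccount = Get-AzStorageAccount `
    -ResourceGroupName "myResourceGroup" `
    -Name "aaddsauditstore"

# Enable the diagnostic setting on the managed domain, using the storage
# account as the destination for the audit events.
Set-AzDiagnosticSetting `
    -ResourceId $aadds.ResourceId `
    -StorageAccountId $storageAccount.Id `
    -Enabled $true
```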
To enable Domain Services security and DNS audit events using Azure PowerShell,
Log Analytics workspaces let you view and analyze the security and DNS audit events using Azure Monitor and the Kusto query language. This query language is designed for read-only use and offers powerful analytic capabilities with an easy-to-read syntax. For more information about getting started with the Kusto query language, see the following articles:
-* [Azure Monitor documentation](../azure-monitor/index.yml)
-* [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md)
-* [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
-* [Create and share dashboards of Log Analytics data](../azure-monitor/visualize/tutorial-logs-dashboards.md)
+* [Azure Monitor documentation](/azure/azure-monitor/)
+* [Get started with Log Analytics in Azure Monitor](/azure/azure-monitor/logs/log-analytics-tutorial)
+* [Get started with log queries in Azure Monitor](/azure/azure-monitor/logs/get-started-queries)
+* [Create and share dashboards of Log Analytics data](/azure/azure-monitor/visualize/tutorial-logs-dashboards)
The following sample queries can be used to start analyzing audit events from Domain Services.
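As one hedged illustration of running such a query from PowerShell, assuming a hypothetical workspace named *aadds-audit-workspace* in *myResourceGroup* and the *AADDomainServicesAccountLogon* table (the table appears only after audit events start flowing, and column names can vary):

```azurepowershell
# Sketch only: query account logon audit events from the Log Analytics
# workspace that receives the Domain Services diagnostic logs.
$workspace = Get-AzOperationalInsightsWorkspace `
    -ResourceGroupName "myResourceGroup" `
    -Name "aadds-audit-workspace"

# Count logon events per operation over the last seven days.
$query = @"
AADDomainServicesAccountLogon
| where TimeGenerated >= ago(7d)
| summarize EventCount = count() by OperationName
"@

$result = Invoke-AzOperationalInsightsQuery `
    -WorkspaceId $workspace.CustomerId `
    -Query $query

$result.Results
```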
The following audit event categories are available:
For specific information on Kusto, see the following articles:
-* [Overview](/azure/kusto/query/) of the Kusto query language.
-* [Kusto tutorial](/azure/kusto/query/tutorial) to familiarize you with query basics.
-* [Sample queries](/azure/kusto/query/samples) that help you learn new ways to see your data.
-* Kusto [best practices](/azure/kusto/query/best-practices) to optimize your queries for success.
+* [Overview](/azure/data-explorer/kusto/query/) of the Kusto query language.
+* [Kusto tutorial](/azure/data-explorer/kusto/query/tutorials/learn-common-operators) to familiarize you with query basics.
+* [Sample queries](/azure/data-explorer/kusto/query/tutorials/learn-common-operators) that help you learn new ways to see your data.
+* Kusto [best practices](/azure/data-explorer/kusto/query/best-practices) to optimize your queries for success.
active-directory-domain-services Suspension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/suspension.md
To keep your managed domain healthy and minimize the risk of it becoming suspend
<!-- INTERNAL LINKS --> [alert-nsg]: alert-nsg.md
-[azure-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
+[azure-support]: /azure/active-directory/fundamentals/how-to-get-support
[resolve-alerts]: troubleshoot-alerts.md
active-directory-domain-services Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md
For hybrid user accounts synced from on-premises AD DS environment using Microso
## Next steps
-For more information on the specifics of password synchronization, see [How password hash synchronization works with Microsoft Entra Connect](../active-directory/hybrid/how-to-connect-password-hash-synchronization.md?context=/azure/active-directory-domain-services/context/azure-ad-ds-context).
+For more information on the specifics of password synchronization, see [How password hash synchronization works with Microsoft Entra Connect](/azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization?context=/azure/active-directory-domain-services/context/azure-ad-ds-context).
To get started with Domain Services, [create a managed domain](tutorial-create-instance.md).
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
To complete this article, you need the following resources:
* Install and configure Azure AD PowerShell. * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Microsoft Entra ID](/powershell/azure/active-directory/install-adv2). * Make sure that you sign in to your Microsoft Entra tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
* You need Domain Services Contributor Azure role to create the required Domain Services resources. ## DNS naming requirements
To see the managed domain in action, you can [domain-join a Windows VM][windows-
[windows-join]: join-windows-vm.md [tutorial-ldaps]: tutorial-configure-ldaps.md [tutorial-phs]: tutorial-configure-password-hash-sync.md
-[availability-zones]: ../reliability/availability-zones-overview.md
-[portal-deploy]: ../azure-resource-manager/templates/deploy-portal.md
-[powershell-deploy]: ../azure-resource-manager/templates/deploy-powershell.md
+[availability-zones]: /azure/reliability/availability-zones-overview
+[portal-deploy]: /azure/azure-resource-manager/templates/deploy-portal
+[powershell-deploy]: /azure/azure-resource-manager/templates/deploy-powershell
[scoped-sync]: scoped-synchronization.md
-[resource-forests]: concepts-resource-forest.md
+[resource-forests]: ./concepts-forest-trust.md
<!-- EXTERNAL LINKS --> [Connect-AzAccount]: /powershell/module/Az.Accounts/Connect-AzAccount
To see the managed domain in action, you can [domain-join a Windows VM][windows-
[Register-AzResourceProvider]: /powershell/module/Az.Resources/Register-AzResourceProvider [New-AzResourceGroup]: /powershell/module/Az.Resources/New-AzResourceGroup [Get-AzSubscription]: /powershell/module/Az.Accounts/Get-AzSubscription
-[cloud-shell]: ../cloud-shell/cloud-shell-windows-users.md
+[cloud-shell]: /azure/active-directory/develop/configure-app-multi-instancing
[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain
-[New-AzResourceGroupDeployment]: /powershell/module/Az.Resources/New-AzResourceGroupDeployment
+[New-AzResourceGroupDeployment]: /powershell/module/az.resources/new-azresourcegroupdeployment
active-directory-domain-services Troubleshoot Account Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-account-lockout.md
If you still have problems joining your VM to the managed domain, [find help and
<!-- INTERNAL LINKS --> [configure-fgpp]: password-policy.md [security-audit-events]: security-audit-events.md
-[azure-ad-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
+[azure-ad-support]: /azure/active-directory/fundamentals/how-to-get-support
active-directory-domain-services Troubleshoot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-alerts.md
The managed domain's health automatically updates itself within two hours and re
Domain Services requires an active subscription, and can't be moved to a different subscription. If the Azure subscription that the managed domain was associated with is deleted, you must recreate an Azure subscription and managed domain.
-1. [Create an Azure subscription](../cost-management-billing/manage/create-subscription.md).
+1. [Create an Azure subscription](/azure/cost-management-billing/manage/create-subscription).
1. [Delete the managed domain](delete-aadds.md) from your existing Microsoft Entra directory. 1. [Create a replacement managed domain](tutorial-create-instance.md).
Domain Services requires an active subscription, and can't be moved to a differe
Domain Services requires an active subscription. If the Azure subscription that the managed domain was associated with isn't active, you must renew it to reactivate the subscription.
-1. [Renew your Azure subscription](../cost-management-billing/manage/subscription-disabled.md).
+1. [Renew your Azure subscription](/azure/cost-management-billing/manage/subscription-disabled).
2. Once the subscription is renewed, a Domain Services notification lets you re-enable the managed domain. When the managed domain is enabled again, the managed domain's health automatically updates itself within two hours and removes the alert.
This error is unrecoverable. To resolve the alert, [delete your existing managed
Some automatically generated service principals are used to manage and create resources for a managed domain. If the access permissions for one of these service principals are changed, the managed domain is unable to correctly manage resources. The following steps show you how to understand and then grant access permissions to a service principal:
-1. Read about [Azure role-based access control and how to grant access to applications in the Microsoft Entra admin center](../role-based-access-control/role-assignments-portal.md).
+1. Read about [Azure role-based access control and how to grant access to applications in the Microsoft Entra admin center](/azure/role-based-access-control/role-assignments-portal).
2. Review the access that the service principal with the ID *abba844e-bc0e-44b0-947a-dc74e5d09022* has and grant the access that was denied at an earlier date. ## AADDS112: Not enough IP addresses in the managed domain
The following common reasons cause synchronization to stop in a managed domain:
Domain Services requires an active subscription. If the Azure subscription that the managed domain was associated with isn't active, you must renew it to reactivate the subscription.
-1. [Renew your Azure subscription](../cost-management-billing/manage/subscription-disabled.md).
+1. [Renew your Azure subscription](/azure/cost-management-billing/manage/subscription-disabled).
2. Once the subscription is renewed, a Domain Services notification lets you re-enable the managed domain. When the managed domain is enabled again, the managed domain's health automatically updates itself within two hours and removes the alert.
When the managed domain is enabled again, the managed domain's health automatica
If you still have issues, [open an Azure support request][azure-support] for more troubleshooting help. <!-- INTERNAL LINKS -->
-[azure-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
+[azure-support]: /azure/active-directory/fundamentals/how-to-get-support
active-directory-domain-services Troubleshoot Domain Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-domain-join.md
If you still have problems joining your VM to the managed domain, [find help and
<!-- INTERNAL LINKS --> [enable-password-sync]: tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds [network-ports]: network-considerations.md#network-security-groups-and-required-ports
-[azure-ad-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
+[azure-ad-support]: /azure/active-directory/fundamentals/how-to-get-support
[configure-dns]: tutorial-create-instance.md#update-dns-settings-for-the-azure-virtual-network <!-- EXTERNAL LINKS -->
active-directory-domain-services Troubleshoot Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-sign-in.md
If you still have problems joining your VM to the managed domain, [find help and
[troubleshoot-account-lockout]: troubleshoot-account-lockout.md [azure-ad-connect-phs]: ./tutorial-configure-password-hash-sync.md [enable-user-accounts]: tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds
-[phs-process]: ../active-directory/hybrid/how-to-connect-password-hash-synchronization.md#password-hash-sync-process-for-azure-ad-domain-services
-[azure-ad-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
+[phs-process]: /azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization#password-hash-sync-process-for-azure-ad-domain-services
+[azure-ad-support]: /azure/active-directory/fundamentals/how-to-get-support
active-directory-domain-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md
Check if you've disabled an application with the identifier *00000002-0000-0000-
To check the status of this application and enable it if needed, complete the following steps:
-1. In the [Microsoft Entra admin center](https://entra.microsoft.com), seearch for and select **Enterprise applications**.
+1. In the [Microsoft Entra admin center](https://entra.microsoft.com), search for and select **Enterprise applications**.
1. Choose *All applications* from the **Application Type** drop-down menu, then select **Apply**. 1. In the search box, enter *00000002-0000-0000-c000-000000000000*. Select the application, then choose **Properties**. 1. If **Enabled for users to sign-in** is set to *No*, set the value to *Yes*, then select **Save**.
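If you prefer to script this check instead of using the portal steps above, a hedged sketch with the Microsoft Graph PowerShell SDK (an alternative approach, not part of the original steps; the scope shown is an assumption) might look like this:

```azurepowershell
# Sketch only: re-enable sign-in for the service principal of the application
# with app ID 00000002-0000-0000-c000-000000000000 using Microsoft Graph PowerShell.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Find the service principal by its application (client) ID.
$sp = Get-MgServicePrincipal -Filter "appId eq '00000002-0000-0000-c000-000000000000'"

# Re-enable it if sign-in is currently disabled.
if (-not $sp.AccountEnabled) {
    Update-MgServicePrincipal -ServicePrincipalId $sp.Id -AccountEnabled:$true
}
```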
If you continue to have issues, [open an Azure support request][azure-support] f
[password-policy]: password-policy.md [check-health]: check-health.md [troubleshoot-alerts]: troubleshoot-alerts.md
-[Remove-MsolUser]: /powershell/module/MSOnline/Remove-MsolUser
-[azure-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
+[Remove-MsolUser]: /powershell/module/msonline/remove-msoluser
+[azure-support]: /azure/active-directory/fundamentals/how-to-get-support
active-directory-domain-services Tshoot Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tshoot-ldaps.md
If you have trouble connecting to a Microsoft Entra DS managed domain using secu
If you still have issues, [open an Azure support request][azure-support] for additional troubleshooting assistance. <!-- INTERNAL LINKS -->
-[azure-support]: ../active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
+[azure-support]: /azure/active-directory/fundamentals/how-to-get-support
[configure-ldaps]: tutorial-configure-ldaps.md [certs-prereqs]: tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap [client-cert]: tutorial-configure-ldaps.md#export-a-certificate-for-client-computers
active-directory-domain-services Tutorial Configure Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-ldaps.md
To complete this tutorial, you need the following resources and privileges:
* If needed, [create and configure a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance]. * The *LDP.exe* tool installed on your computer. * If needed, [install the Remote Server Administration Tools (RSAT)][rsat] for *Active Directory Domain Services and LDAP*.
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable secure LDAP.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable secure LDAP.
## Sign in to the Microsoft Entra admin center
In this tutorial, you learned how to:
> [Configure password hash synchronization for a hybrid Microsoft Entra environment](tutorial-configure-password-hash-sync.md) <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [secure-domain]: secure-your-domain.md <!-- EXTERNAL LINKS --> [rsat]: /windows-server/remote/remote-server-administration-tools
-[ldap-query-basics]: /windows/desktop/ad/creating-a-query-filter
+[ldap-query-basics]: /windows/win32/ad/creating-a-query-filter
[New-SelfSignedCertificate]: /powershell/module/pki/new-selfsignedcertificate
active-directory-domain-services Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-networking.md
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A Microsoft Entra tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
* You need Domain Services Contributor Azure role to create the required Domain Services resources. * A Microsoft Entra Domain Services managed domain enabled and configured in your Microsoft Entra tenant. * If needed, the first tutorial [creates and configures a Microsoft Entra Domain Services managed domain][create-azure-ad-ds-instance].
To see this managed domain in action, create and join a virtual machine to the d
> [Join a Windows Server virtual machine to your managed domain](join-windows-vm.md) <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [create-join-windows-vm]: join-windows-vm.md
-[peering-overview]: ../virtual-network/virtual-network-peering-overview.md
+[peering-overview]: /azure/virtual-network/virtual-network-peering-overview
[network-considerations]: network-considerations.md
active-directory-domain-services Tutorial Configure Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-configure-password-hash-sync.md
In this tutorial, you learned:
> [Learn how synchronization works in a Microsoft Entra Domain Services managed domain](synchronization.md) <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md
-[enable-azure-ad-connect]: ../active-directory/hybrid/how-to-connect-install-express.md
+[enable-azure-ad-connect]: /azure/active-directory/hybrid/connect/how-to-connect-install-express
<!-- EXTERNAL LINKS --> [azure-ad-connect-download]: https://www.microsoft.com/download/details.aspx?id=47594
active-directory-domain-services Tutorial Create Forest Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-forest-trust.md
To complete this tutorial, you need the following resources and privileges:
## Sign in to the Microsoft Entra admin center
-In this tutorial, you create and configure the outbound forest trust from Domain Services using the Microsoft Entra admin center. To get started, first sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to modify a Domain Services instance.
+In this tutorial, you create and configure the outbound forest trust from Domain Services using the Microsoft Entra admin center. To get started, first sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to modify a Domain Services instance.
## Networking considerations
The following common scenarios let you validate that forest trust correctly auth
You should have a Windows Server virtual machine joined to the managed domain. Use this virtual machine to test that your on-premises user can authenticate on a virtual machine. If needed, [create a Windows VM and join it to the managed domain][join-windows-vm].
-1. Connect to the Windows Server VM joined to the Domain Services forest using [Azure Bastion](../bastion/bastion-overview.md) and your Domain Services administrator credentials.
+1. Connect to the Windows Server VM joined to the Domain Services forest using [Azure Bastion](/azure/bastion/bastion-overview) and your Domain Services administrator credentials.
1. Open a command prompt and use the `whoami` command to show the distinguished name of the currently authenticated user: ```console
Using the Windows Server VM joined to the Domain Services forest, you can test t
#### Enable file and printer sharing
-1. Connect to the Windows Server VM joined to the Domain Services forest using [Azure Bastion](../bastion/bastion-overview.md) and your Domain Services administrator credentials.
+1. Connect to the Windows Server VM joined to the Domain Services forest using [Azure Bastion](/azure/bastion/bastion-overview) and your Domain Services administrator credentials.
1. Open **Windows Settings**, then search for and select **Network and Sharing Center**. 1. Choose the option for **Change advanced sharing** settings.
For more conceptual information about forest in Domain Services, see [How do for
<!-- INTERNAL LINKS --> [concepts-trust]: concepts-forest-trust.md
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance-advanced]: tutorial-create-instance-advanced.md [howto-change-sku]: change-sku.md
-[vpn-gateway]: ../vpn-gateway/vpn-gateway-about-vpngateways.md
-[expressroute]: ../expressroute/expressroute-introduction.md
+[vpn-gateway]: /azure/vpn-gateway/vpn-gateway-about-vpngateways
+[expressroute]: /azure/expressroute/expressroute-introduction
[join-windows-vm]: join-windows-vm.md
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A Microsoft Entra tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
-* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Domain Services resources.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
+* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) Azure role to create the required Domain Services resources.
Although not required for Domain Services, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Microsoft Entra tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
To see this managed domain in action, create and join a virtual machine to the d
<!-- INTERNAL LINKS --> [tutorial-create-instance]: tutorial-create-instance.md
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[network-considerations]: network-considerations.md
-[create-dedicated-subnet]: ../virtual-network/virtual-network-manage-subnet.md#add-a-subnet
+[create-dedicated-subnet]: /azure/virtual-network/virtual-network-manage-subnet#add-a-subnet
[scoped-sync]: scoped-synchronization.md [on-prem-sync]: tutorial-configure-password-hash-sync.md
-[configure-sspr]: ../active-directory/authentication/tutorial-enable-sspr.md
-[password-hash-sync-process]: ../active-directory/hybrid/how-to-connect-password-hash-synchronization.md#password-hash-sync-process-for-azure-ad-domain-services
-[resource-forests]: concepts-resource-forest.md
-[availability-zones]: ../reliability/availability-zones-overview.md
+[configure-sspr]: /azure/active-directory/authentication/tutorial-enable-sspr
+[password-hash-sync-process]: /azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization#password-hash-sync-process-for-azure-ad-domain-services
+[resource-forests]: ./concepts-forest-trust.md
+[availability-zones]: /azure/reliability/availability-zones-overview
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus <!-- EXTERNAL LINKS -->
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
To complete this tutorial, you need the following resources and privileges:
* If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A Microsoft Entra tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory. * If needed, [create a Microsoft Entra tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
-* You need [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) and [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
-* You need [Domain Services Contributor](../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Domain Services resources.
+* You need [Application Administrator](/azure/active-directory/roles/permissions-reference#application-administrator) and [Groups Administrator](/azure/active-directory/roles/permissions-reference#groups-administrator) Microsoft Entra roles in your tenant to enable Domain Services.
+* You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) Azure role to create the required Domain Services resources.
* A virtual network with DNS servers that can query necessary infrastructure such as storage. DNS servers that can't perform general internet queries might block the ability to create a managed domain. Although not required for Domain Services, it's recommended to [configure self-service password reset (SSPR)][configure-sspr] for the Microsoft Entra tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
To authenticate users on the managed domain, Domain Services needs password hash
> > Synchronized credential information in Microsoft Entra ID can't be re-used if you later create a managed domain - you must reconfigure the password hash synchronization to store the password hashes again. Previously domain-joined VMs or users won't be able to immediately authenticate - Microsoft Entra ID needs to generate and store the password hashes in the new managed domain. >
-> [Microsoft Entra Connect Cloud Sync is not supported with Domain Services](../active-directory/cloud-sync/what-is-cloud-sync.md#comparison-between-azure-ad-connect-and-cloud-sync). On-premises users need to be synced using Microsoft Entra Connect in order to be able to access domain-joined VMs. For more information, see [Password hash sync process for Domain Services and Microsoft Entra Connect][password-hash-sync-process].
+> [Microsoft Entra Connect Cloud Sync is not supported with Domain Services](/azure/active-directory/hybrid/cloud-sync/what-is-cloud-sync#comparison-between-azure-ad-connect-and-cloud-sync). On-premises users need to be synced using Microsoft Entra Connect in order to be able to access domain-joined VMs. For more information, see [Password hash sync process for Domain Services and Microsoft Entra Connect][password-hash-sync-process].
The steps to generate and store these password hashes are different for cloud-only user accounts created in Microsoft Entra ID versus user accounts that are synchronized from your on-premises directory using Microsoft Entra Connect.
Before you domain-join VMs and deploy applications that use the managed domain,
<!-- INTERNAL LINKS --> [tutorial-create-instance-advanced]: tutorial-create-instance-advanced.md
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[network-considerations]: network-considerations.md
-[create-dedicated-subnet]: ../virtual-network/virtual-network-manage-subnet.md#add-a-subnet
+[create-dedicated-subnet]: /azure/virtual-network/virtual-network-manage-subnet#add-a-subnet
[scoped-sync]: scoped-synchronization.md [on-prem-sync]: tutorial-configure-password-hash-sync.md
-[configure-sspr]: ../active-directory/authentication/tutorial-enable-sspr.md
-[password-hash-sync-process]: ../active-directory/hybrid/how-to-connect-password-hash-synchronization.md#password-hash-sync-process-for-azure-ad-domain-services
+[configure-sspr]: /azure/active-directory/authentication/tutorial-enable-sspr
+[password-hash-sync-process]: /azure/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization#password-hash-sync-process-for-azure-ad-domain-services
[tutorial-create-instance-advanced]: tutorial-create-instance-advanced.md [skus]: overview.md
-[resource-forests]: concepts-resource-forest.md
-[availability-zones]: ../reliability/availability-zones-overview.md
+[resource-forests]: ./concepts-forest-trust.md
+[availability-zones]: /azure/reliability/availability-zones-overview
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus <!-- EXTERNAL LINKS -->
active-directory-domain-services Tutorial Create Management Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-management-vm.md
To safely interact with your managed domain from other applications, enable secu
> [Configure secure LDAP for your managed domain](tutorial-configure-ldaps.md) <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [create-join-windows-vm]: join-windows-vm.md
-[azure-bastion]: ../bastion/tutorial-create-host-portal.md
+[azure-bastion]: /azure/bastion/tutorial-create-host-portal
active-directory-domain-services Tutorial Create Replica Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-replica-set.md
For more conceptual information, learn how replica sets work in Domain Services.
<!-- INTERNAL LINKS --> [replica-sets]: concepts-replica-sets.md [tutorial-create-instance]: tutorial-create-instance-advanced.md
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[howto-change-sku]: change-sku.md [concepts-replica-sets]: concepts-replica-sets.md
active-directory-domain-services Tutorial Perform Disaster Recovery Drill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-perform-disaster-recovery-drill.md
For more conceptual information, learn how replica sets work in Domain Services.
<!-- INTERNAL LINKS --> [replica-sets]: concepts-replica-sets.md [tutorial-create-instance]: tutorial-create-instance-advanced.md
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[howto-change-sku]: change-sku.md [concepts-replica-sets]: concepts-replica-sets.md
active-directory-domain-services Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/use-azure-monitor-workbooks.md
Domain Services includes the following two workbook templates:
* Security overview report * Account activity report
-For more information about how to edit and manage workbooks, see [Azure Monitor Workbooks overview](../azure-monitor/visualize/workbooks-overview.md).
+For more information about how to edit and manage workbooks, see [Azure Monitor Workbooks overview](/azure/azure-monitor/visualize/workbooks-overview).
## Use the security overview report workbook
If you need to adjust password and lockout policies, see [Password and account l
For problems with users, learn how to troubleshoot [account sign-in problems][troubleshoot-sign-in] or [account lockout problems][troubleshoot-account-lockout]. <!-- INTERNAL LINKS -->
-[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
-[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
+[create-azure-ad-tenant]: /azure/active-directory/fundamentals/sign-up-organization
+[associate-azure-ad-tenant]: /azure/active-directory/fundamentals/how-subscriptions-associated-directory
[create-azure-ad-ds-instance]: tutorial-create-instance.md [enable-security-audits]: security-audit-events.md [password-policy]: password-policy.md [troubleshoot-sign-in]: troubleshoot-sign-in.md [troubleshoot-account-lockout]: troubleshoot-account-lockout.md [azure-monitor-queries]: /azure/data-explorer/kusto/query/
-[kusto-queries]: /azure/kusto/query/tutorial?pivots=azuredataexplorer
+[kusto-queries]: /azure/data-explorer/kusto/query/tutorials/learn-common-operators?pivots=azuredataexplorer
active-directory Concept Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md
Previously updated : 09/14/2023 Last updated : 09/27/2023
An authentication strength Conditional Access policy works together with [MFA tr
- **Authentication methods that aren't currently supported by authentication strength** - The **Email one-time pass (Guest)** authentication method isn't included in the available combinations. -- **Windows Hello for Business** – If the user signed in with Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. But if the user signed in with another method like password as their primary authenticating method, and the authentication strength requires Windows Hello for Business, they get prompted to sign in with Windows Hello for Business.
+- **Windows Hello for Business** – If the user signed in with Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. However, if the user signed in with another method like password as their primary authentication method, and the authentication strength requires Windows Hello for Business, they aren't prompted to sign in with Windows Hello for Business. The user needs to restart the session, choose **Sign-in options**, and select a method required by the authentication strength.
## Known issues
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Previously updated : 09/25/2023 Last updated : 09/27/2023
Now we'll walk through each step:
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-alt.png" alt-text="Screenshot of the Sign-in if FIDO2 is also enabled.":::
-1. Once the user selects certificate-based authentication, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) or [`https://t<tenant id>.certauth.login.microsoftonline.com`](`https://t<tenant id>.certauth.login.microsoftonline.com`) for Azure Global. For [Azure Government](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us).
+1. Once the user selects certificate-based authentication, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](../../azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us).
- The endpoint performs TLS mutual authentication, and requests the client certificate as part of the TLS handshake. You'll see an entry for this request in the Sign-ins log.
+However, with the issuer hints feature enabled (coming soon), the new certauth endpoint will change to `https://t{tenantid}.certauth.login.microsoftonline.com`.
+
+The endpoint performs TLS mutual authentication, and requests the client certificate as part of the TLS handshake. You'll see an entry for this request in the Sign-ins log.
- :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png" alt-text="Screenshot of the Sign-ins log in Microsoft Entra ID." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png":::
-
>[!NOTE]
 - >The network administrator should allow access to the User sign-in page and certauth endpoint *.certauth.login.microsoftonline.com for the customer's cloud environment. Disable TLS inspection on the certauth endpoint to make sure the client certificate request succeeds as part of the TLS handshake.
+ >The network administrator should allow access to the User sign-in page and certauth endpoint `*.certauth.login.microsoftonline.com` for the customer's cloud environment. Disable TLS inspection on the certauth endpoint to make sure the client certificate request succeeds as part of the TLS handshake.
+
+ Customers should make sure that their TLS inspection exclusion also works for the new URL with issuer hints. We recommend that you don't hardcode the URL with the tenant ID, because the tenant ID can change for B2B users. Use a regular expression that allows both the old and new URLs to bypass TLS inspection. For example, use `*.certauth.login.microsoftonline.com` or `*certauth.login.microsoftonline.com` for Azure Global tenants, and `*.certauth.login.microsoftonline.us` or `*certauth.login.microsoftonline.us` for Azure Government tenants, depending on the proxy used.
+ Without this change, certificate-based authentication fails when you enable the issuer hints feature.
+
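As a quick sanity check before rolling out a proxy exclusion, here's a minimal PowerShell sketch; the pattern, the regex flavor, and the tenant ID are assumptions, so adapt them to the syntax your proxy actually accepts:

```powershell
# Illustration only: check that one pattern covers both the classic certauth hostname
# and the tenant-specific one. The tenant ID below is a made-up placeholder.
$pattern = '(^|\.)certauth\.login\.microsoftonline\.com$'

$hostnames = @(
    'certauth.login.microsoftonline.com',
    't11112222-aaaa-bbbb-cccc-444455556666.certauth.login.microsoftonline.com'
)

foreach ($name in $hostnames) {
    '{0} -> {1}' -f $name, ($name -match $pattern)
}
```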
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png" alt-text="Screenshot of the Sign-ins log in Microsoft Entra ID." lightbox="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-log.png":::
+
Click the log entry to bring up **Activity Details** and click **Authentication Details**. You'll see an entry for the X.509 certificate. :::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/entry.png" alt-text="Screenshot of the entry for X.509 certificate.":::
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Previously updated : 09/26/2023 Last updated : 09/27/2023
Here are a few sample JSONs you can use to get started!
{ "registrationEnforcement": { "authenticationMethodsRegistrationCampaign": {
- "snoozeDurationInDays": 0,
+ "snoozeDurationInDays": 1,
+ "enforceRegistrationAfterAllowedSnoozes": true,
"state": "enabled", "excludeTargets": [], "includeTargets": [
Here are a few sample JSONs you can use to get started!
{ "registrationEnforcement": { "authenticationMethodsRegistrationCampaign": {
- "snoozeDurationInDays": 0,
+ "snoozeDurationInDays": 1,
+ "enforceRegistrationAfterAllowedSnoozes": true,
"state": "enabled", "excludeTargets": [], "includeTargets": [
Here are a few sample JSONs you can use to get started!
{ "registrationEnforcement": { "authenticationMethodsRegistrationCampaign": {
- "snoozeDurationInDays": 0,
+ "snoozeDurationInDays": 1,
+ "enforceRegistrationAfterAllowedSnoozes": true,
"state": "enabled", "excludeTargets": [ {
active-directory Howto Mfa Userdevicesettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userdevicesettings.md
If you're assigned the *Authentication Administrator* role, you can require user
1. Browse to **Identity** > **Users** > **All users**. 1. Choose the user you wish to perform an action on and select **Authentication methods**. At the top of the window, then choose one of the following options for the user: - **Reset Password** resets the user's password and assigns a temporary password that must be changed on the next sign-in.
- - **Require Re-register MFA** makes it so that when the user signs in next time, they're requested to set up a new MFA authentication method.
- > [!NOTE]
- > The user's currently registered authentication methods aren't deleted when an admin requires re-registration for MFA. After a user re-registers for MFA, we recommend they review their security info and delete any previously registered authentication methods that are no longer usable.
+ - **Require Re-register MFA** deactivates the user's hardware OATH tokens and deletes the following authentication methods from this user: phone numbers, Microsoft Authenticator apps and software OATH tokens. If needed, the user is requested to set up a new MFA authentication method the next time they sign in.
- **Revoke MFA Sessions** clears the user's remembered MFA sessions and requires them to perform MFA the next time it's required by the policy on the device. :::image type="content" source="media/howto-mfa-userdevicesettings/manage-authentication-methods-in-azure.png" alt-text="Manage authentication methods from the Microsoft Entra admin center":::
To delete a user's app passwords, complete the following steps:
This article showed you how to configure individual user settings. To configure overall Microsoft Entra multifactor authentication service settings, see [Configure Microsoft Entra multifactor authentication settings](howto-mfa-mfasettings.md). If your users need help, see the [User guide for Microsoft Entra multifactor authentication](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc).+
active-directory Troubleshoot Authentication Strengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-authentication-strengths.md
Previously updated : 06/02/2023 Last updated : 09/27/2023 -+
To verify if a method can be used:
1. As needed, check if the tenant is enabled for any method required for the authentication strength. Click **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings**. 1. Check which authentication methods are registered for the user in the Authentication methods policy. Click **Users and groups** > _username_ > **Authentication methods**.
-If the user is registered for an enabled method that meets the authentication strength, they might need to use another method that isn't available after primary authentication, such as Windows Hello for Business or certificate-based authentication. For more information, see [How each authentication method works](concept-authentication-methods.md#how-each-authentication-method-works). The user needs to restart the session, choose **Sign-in options** , and select a method required by the authentication strength.
+If the user is registered for an enabled method that meets the authentication strength, they might need to use another method that isn't available after primary authentication, such as Windows Hello for Business. For more information, see [How each authentication method works](concept-authentication-methods.md#how-each-authentication-method-works). The user needs to restart the session, choose **Sign-in options**, and select a method required by the authentication strength.
:::image type="content" border="true" source="./media/troubleshoot-authentication-strengths/choose-another-method.png" alt-text="Screenshot of how to choose another sign-in method.":::
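To check a user's registered methods from a script instead of the portal, here's a minimal sketch with the Microsoft Graph PowerShell SDK; it assumes `Connect-MgGraph` has been run with a scope such as `UserAuthenticationMethod.Read.All`, and the UPN is a placeholder:

```powershell
# Minimal sketch: list the authentication methods registered for a user, to compare
# against the combinations allowed by the required authentication strength.
Import-Module Microsoft.Graph.Identity.SignIns

Get-MgUserAuthenticationMethod -UserId 'user@contoso.com' |
    Select-Object Id, @{ Name = 'MethodType'; Expression = { $_.AdditionalProperties['@odata.type'] } }
```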
active-directory Test Throttle Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-throttle-service-limits.md
The following table lists Microsoft Entra throttling limits to consider when run
| Limit type | Resource unit quota | Write quota |
|-|-|-|
| application+tenant pair | S: 3500, M:5000, L:8000 per 10 seconds | 3000 per 2 minutes and 30 seconds |
-| application | 150,000 per 20 seconds | 70,000 per 5 minutes |
+| application | 150,000 per 20 seconds | 35,000 per 5 minutes |
| tenant | Not Applicable | 18,000 per 5 minutes |

The application + tenant pair limit varies based on the number of users in the tenant that requests are run against. The tenant sizes are defined as follows: S - under 50 users, M - between 50 and 500 users, and L - above 500 users.
active-directory Manage Device Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-device-identities.md
Previously updated : 06/12/2023 Last updated : 09/27/2023
You can access the devices overview by completing these steps:
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader). 1. Go to **Identity** > **Devices** > **Overview**.
-In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. You'll also find links to Intune, Conditional Access, BitLocker keys, and basic monitoring.
+In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. It provides links to Intune, Conditional Access, BitLocker keys, and basic monitoring.
Device counts on the overview page don't update in real time. Changes should be reflected every few hours.
From there, you can go to **All devices** to:
## Manage an Intune device
-If you have rights to manage devices in Intune, you can manage devices for which mobile device management is listed as **Microsoft Intune**. If the device isn't enrolled with Microsoft Intune, the **Manage** option won't be available.
-
-<a name='enable-or-disable-an-azure-ad-device'></a>
+If you have rights to manage devices in Intune, you can manage devices for which mobile device management is listed as **Microsoft Intune**. If the device isn't enrolled with Microsoft Intune, the **Manage** option isn't available.
## Enable or disable a Microsoft Entra device
There are two ways to enable or disable devices:
> - Disabling a device revokes the Primary Refresh Token (PRT) and any refresh tokens on the device. > - Printers can't be enabled or disabled in Microsoft Entra ID.
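If you prefer to script this instead of using the portal, here's a minimal sketch with the Microsoft Graph PowerShell SDK; it assumes `Connect-MgGraph` has been run with sufficient directory permissions, and the object ID is a placeholder:

```powershell
# Minimal sketch: disable, then later re-enable, a device by its object ID.
Import-Module Microsoft.Graph.Identity.DirectoryManagement

$deviceObjectId = '00000000-0000-0000-0000-000000000000'   # placeholder

# Disable the device (revokes the PRT and refresh tokens, as noted above)
Update-MgDevice -DeviceId $deviceObjectId -BodyParameter @{ accountEnabled = $false }

# Re-enable it later
Update-MgDevice -DeviceId $deviceObjectId -BodyParameter @{ accountEnabled = $true }
```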
-<a name='delete-an-azure-ad-device'></a>
- ## Delete a Microsoft Entra device There are two ways to delete a device:
There are two ways to delete a device:
> - Removes all details attached to the device. For example, BitLocker keys for Windows devices. > - Is a nonrecoverable activity. We don't recommended it unless it's required.
-If a device is managed by another management authority, like Microsoft Intune, be sure it's wiped or retired before you delete it. See [How to manage stale devices](manage-stale-devices.md) before you delete a device.
+If a device is managed in another management authority, like Microsoft Intune, be sure it's wiped or retired before you delete it. See [How to manage stale devices](manage-stale-devices.md) before you delete a device.
## View or copy a device ID
You can use a device ID to verify the device ID details on the device or to trou
## View or copy BitLocker keys
-You can view and copy BitLocker keys to allow users to recover encrypted drives. These keys are available only for Windows devices that are encrypted and store their keys in Microsoft Entra ID. You can find these keys when you view a device's details by selecting **Show Recovery Key**. Selecting **Show Recovery Key** will generate an audit log, which you can find in the `KeyManagement` category.
+You can view and copy BitLocker keys to allow users to recover encrypted drives. These keys are available only for Windows devices that are encrypted and store their keys in Microsoft Entra ID. You can find these keys when you view a device's details by selecting **Show Recovery Key**. Selecting **Show Recovery Key** generates an audit log entry, which you can find in the `KeyManagement` category.
![Screenshot that shows how to view BitLocker keys.](./media/manage-device-identities/show-bitlocker-key.png)
You can now experience the enhanced **All devices** view.
## Download devices
-Global readers, Cloud Device Administrators, Intune Administrators, and Global Administrators can use the **Download devices** option to export a CSV file that lists devices. You can apply filters to determine which devices to list. If you don't apply any filters, all devices will be listed. An export task might run for as long as an hour, depending on your selections. If the export task exceeds 1 hour, it fails, and no file is output.
+Global readers, Cloud Device Administrators, Intune Administrators, and Global Administrators can use the **Download devices** option to export a CSV file that lists devices. You can apply filters to determine which devices to list. If you don't apply any filters, all devices are listed. An export task might run for as long as an hour, depending on your selections. If the export task exceeds 1 hour, it fails, and no file is output.
The exported list includes these device identity attributes:
You must be assigned one of the following roles to manage device settings:
> [!NOTE] > The **Require multifactor authentication to register or join devices with Microsoft Entra ID** setting applies to devices that are either Microsoft Entra joined (with some exceptions) or Microsoft Entra registered. This setting doesn't apply to Microsoft Entra hybrid joined devices, [Microsoft Entra joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enable-azure-ad-login-for-a-windows-vm-in-azure), or Microsoft Entra joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying). -- **Maximum number of devices**: This setting enables you to select the maximum number of Microsoft Entra joined or Microsoft Entra registered devices that a user can have in Microsoft Entra ID. If users reach this limit, they can't add more devices until one or more of the existing devices are removed. The default value is **50**. You can increase the value up to 100. If you enter a value above 100, Microsoft Entra ID will set it to 100. You can also use **Unlimited** to enforce no limit other than existing quota limits.
+- **Maximum number of devices**: This setting enables you to select the maximum number of Microsoft Entra joined or Microsoft Entra registered devices that a user can have in Microsoft Entra ID. If users reach this limit, they can't add more devices until one or more of the existing devices are removed. The default value is **50**. You can increase the value up to 100. If you enter a value above 100, Microsoft Entra ID sets it to 100. You can also use **Unlimited** to enforce no limit other than existing quota limits.
> [!NOTE] > The **Maximum number of devices** setting applies to devices that are either Microsoft Entra joined or Microsoft Entra registered. This setting doesn't apply to Microsoft Entra hybrid joined devices.
You must be assigned one of the following roles to manage device settings:
This option is a premium edition capability available through products like Microsoft Entra ID P1 or P2 and Enterprise Mobility + Security. - **Enable Microsoft Entra Local Administrator Password Solution (LAPS) (preview)**: LAPS is the management of local account passwords on Windows devices. LAPS provides a solution to securely manage and retrieve the built-in local admin password. With cloud version of LAPS, customers can enable storing and rotation of local admin passwords for both Microsoft Entra ID and Microsoft Entra hybrid join devices. To learn how to manage LAPS in Microsoft Entra ID, see [the overview article](howto-manage-local-admin-passwords.md). -- **Restrict non-admin users from recovering the BitLocker key(s) for their owned devices**: Admins can block self-service BitLocker key access to the registered owner of the device. Default users without the BitLocker read permission will be unable to view or copy their BitLocker key(s) for their owned devices. You must be a Global Administrator or Privileged Role Administrator to update this setting.
+- **Restrict non-admin users from recovering the BitLocker key(s) for their owned devices**: Admins can block self-service BitLocker key access to the registered owner of the device. Default users without the BitLocker read permission are unable to view or copy their BitLocker key(s) for their owned devices. You must be a Global Administrator or Privileged Role Administrator to update this setting.
- **Enterprise State Roaming**: For information about this setting, see [the overview article](./enterprise-state-roaming-enable.md).
active-directory Tenant Restrictions V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md
Last updated 09/12/2023
-+
active-directory Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-defaults.md
After registration is finished, the following administrator roles will be requir
- Global Administrator - Application Administrator - Authentication Administrator
+- Authentication Policy Administrator
- Billing Administrator - Cloud Application Administrator - Conditional Access Administrator - Exchange Administrator - Helpdesk Administrator
+- Identity Governance Administrator
- Password Administrator - Privileged Authentication Administrator - Privileged Role Administrator
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
$policy = $accesspackage.AssignmentPolicies[0]
$req = New-MgBetaEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetEmail "sample@example.com" ```
+## Configure access assignment as part of a lifecycle workflow
+
+In the Microsoft Entra Lifecycle Workflows feature, you can add a [Request user access package assignment](lifecycle-workflow-tasks.md#request-user-access-package-assignment) task to an onboarding workflow. The task can specify an access package that users should have. When the workflow runs for a user, an access package assignment request is created automatically.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a global administrator.
+
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**.
+
+1. Select an employee onboarding or move workflow.
+
+1. Select **Tasks** and select **Add task**.
+
+1. Select **Request user access package assignment** and select **Add**.
+
+1. Select the newly added task.
+
+1. Select **Select Access package**, and choose the access package that new or moving users should be assigned to.
+
+1. Select **Select Policy**, and choose the access package assignment policy in that access package.
+
+1. Select **Save**.
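If you build the workflow through Microsoft Graph instead of the portal, you first need the ID of the matching task definition. Here's a minimal lookup sketch; it assumes the Microsoft Graph PowerShell SDK and `Connect-MgGraph` with a Lifecycle Workflows read scope such as `LifecycleWorkflows.Read.All`, and the display-name filter is an assumption:

```powershell
# Minimal sketch: find lifecycle workflow task definitions related to access packages.
Import-Module Microsoft.Graph.Authentication

$taskDefinitions = Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/v1.0/identityGovernance/lifecycleWorkflows/taskDefinitions'

$taskDefinitions.value |
    Where-Object { $_.displayName -like '*access package*' } |
    ForEach-Object { '{0}  {1}' -f $_.id, $_.displayName }
```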
+ ## Remove an assignment You can remove an assignment that a user or an administrator had previously requested.
if ($assignment -ne $null) {
} ```
+## Configure assignment removal as part of a lifecycle workflow
+
+In the Microsoft Entra Lifecycle Workflows feature, you can add a [Remove access package assignment for user](lifecycle-workflow-tasks.md#remove-access-package-assignment-for-user) task to an offboarding workflow. That task can specify an access package that the user might be assigned to. When the workflow runs for a user, their access package assignment is removed automatically.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a global administrator.
+
+1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**.
+
+1. Select an employee offboarding workflow.
+
+1. Select **Tasks** and select **Add task**.
+
+1. Select **Remove access package assignment for user** and select **Add**.
+
+1. Select the newly added task.
+
+1. Select **Select Access packages**, and choose one or more access packages that users being offboarded should be removed from.
+
+1. Select **Save**.
+ ## Next steps - [Change request and settings for an access package](entitlement-management-access-package-request-policy.md)
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
You can have policies for users to request access. In these kinds of policies, a
- The approval process and the users that can approve or deny access - The duration of a user's access assignment, once approved, before the assignment expires
-You can also have policies for users to be assigned access, either by an administrator or [automatically](entitlement-management-access-package-auto-assignment-policy.md).
+You can also have policies for users to be assigned access, either [by an administrator](entitlement-management-access-package-assignments.md#directly-assign-a-user), [automatically based on rules](entitlement-management-access-package-auto-assignment-policy.md), or through lifecycle workflows.
The following diagram shows an example of the different elements in entitlement management. It shows one catalog with two example access packages.
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md
There are several ways that you can configure entitlement management for your or
## Govern access for users in your organization
-### Administrator: Assign employees access automatically (preview)
+### Administrator: Assign employees access automatically
1. [Create a new access package](entitlement-management-access-package-create.md#start-the-creation-process) 1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-create.md#select-resource-roles) 1. [Add an automatic assignment policy](entitlement-management-access-package-auto-assignment-policy.md)
+### Administrator: Assign employees access from lifecycle workflows
+
+1. [Create a new access package](entitlement-management-access-package-create.md#start-the-creation-process)
+1. [Add groups, Teams, applications, or SharePoint sites to access package](entitlement-management-access-package-create.md#select-resource-roles)
+1. [Add a direct assignment policy](entitlement-management-access-package-request-policy.md#none-administrator-direct-assignments-only)
+1. Add a task to [Request user access package assignment](lifecycle-workflow-tasks.md#request-user-access-package-assignment) to a workflow when a user joins
+1. Add a task to [Remove access package assignment for user](lifecycle-workflow-tasks.md#remove-access-package-assignment-for-user) to a workflow when a user leaves
+ ### Access package 1. [Create a new access package](entitlement-management-access-package-create.md#start-the-creation-process)
There are several ways that you can configure entitlement management for your or
## Day-to-day management
-### Administrator: View the connected organziations that are proposed and configured
+### Administrator: View the connected organizations that are proposed and configured
1. [View the list of connected organizations](entitlement-management-organization.md)
active-directory Pim Powershell Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-powershell-migration.md
Last updated 07/11/2023 -+ # PIM PowerShell for Azure Resources Migration Guidance
active-directory Pim Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-roles.md
For more information about the classic subscription administrator roles, see [Az
We support all Microsoft 365 roles in the Microsoft Entra roles and Administrators portal experience, such as Exchange Administrator and SharePoint Administrator, but we don't support specific roles within Exchange RBAC or SharePoint RBAC. For more information about these Microsoft 365 services, see [Microsoft 365 admin roles](/office365/admin/add-users/about-admin-roles). > [!NOTE]
-> - Eligible users for the SharePoint administrator role, the Device administrator role, and any roles trying to access the Microsoft Security & Compliance Center might experience delays of up to a few hours after activating their role. We are working with those teams to fix the issues.
-> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Microsoft Entra joined devices](../devices/assign-local-admin.md#manage-the-azure-ad-joined-device-local-administrator-role).
+> For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Microsoft Entra joined devices](../devices/assign-local-admin.md#manage-the-azure-ad-joined-device-local-administrator-role).
## Next steps
active-directory Concept Usage Insights Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md
Previously updated : 08/24/2023 Last updated : 09/26/2023
Viewing the AD FS application activity using Microsoft Graph retrieves a list of
Add the following query, then select the **Run query** button. ```http
- GET https://graph.microsoft.com/beta/reports/getRelyingPartyDetailedSummary
+ GET https://graph.microsoft.com/beta/reports/getRelyingPartyDetailedSummary(period='{period}')
``` For more information, see [AD FS application activity in Microsoft Graph](/graph/api/resources/relyingpartydetailedsummary?view=graph-rest-beta&preserve-view=true).
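For example, to run the same report from PowerShell with a concrete period value, here's a minimal sketch; it assumes `Connect-MgGraph` with a reports scope such as `Reports.Read.All`, that `D7` is an accepted period value, and that the property names used below exist on the response:

```powershell
# Minimal sketch: pull the AD FS relying party summary for the last seven days.
Import-Module Microsoft.Graph.Authentication

$summary = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/reports/getRelyingPartyDetailedSummary(period='D7')"

$summary.value | ForEach-Object {
    '{0}: {1} total sign-ins' -f $_.relyingPartyName, $_.totalSignInCount
}
```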
active-directory Reference Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-sla-performance.md
+
+ Title: Microsoft Entra SLA performance
+description: Learn about the Microsoft Entra service level performance and attainment
+++++++ Last updated : 09/27/2023+++++
+# Microsoft Entra SLA performance
+
+As an identity admin, you may need to track the Microsoft Entra service-level agreement (SLA) performance to make sure Microsoft Entra ID can support your vital apps. This article shows how the Microsoft Entra service has performed according to the [SLA for Microsoft Entra ID](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/).
+
+You can use this article in discussions with app or business owners to help them understand the performance they can expect from Microsoft Entra ID.
+
+## Service availability commitment
+
+Microsoft offers Premium Microsoft Entra customers the opportunity to get a service credit if Microsoft Entra ID fails to meet the documented SLA. When you request a service credit, Microsoft evaluates the SLA for your specific tenant; however, this global SLA can give you an indication of the general health of Microsoft Entra ID over time.
+
+The SLA covers the following scenarios that are vital to businesses:
+
+- **User authentication:** Users are able to sign in to the Microsoft Entra service.
+
+- **App access:** Microsoft Entra ID successfully emits the authentication and authorization tokens required for users to sign in to applications connected to the service.
+
+For full details on SLA coverage and instructions on requesting a service credit, see the [SLA for Microsoft Entra ID](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/).
++
+## No planned downtime
+
+You rely on Microsoft Entra ID to provide identity and access management for your vital systems. To ensure Microsoft Entra ID is available when business operations require it, Microsoft doesn't plan downtime for Microsoft Entra system maintenance. Instead, maintenance is performed as the service runs, without customer impact.
+
+## Recent worldwide SLA performance
+
+To help you plan for moving workloads to Microsoft Entra ID, we publish past SLA performance. These numbers show the level at which Microsoft Entra ID met the requirements in the [SLA for Microsoft Entra ID](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/), for all tenants.
+
+SLA attainment is truncated to three decimal places. Numbers aren't rounded up, so actual SLA attainment is higher than indicated.
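As a small illustration of the truncation rule (the value is made up):

```powershell
# Minimal sketch: truncation at three decimal places, compared with rounding.
$attainment = 99.9996                                    # example value only
$truncated  = [math]::Floor($attainment * 1000) / 1000   # 99.999 (what gets reported)
$rounded    = [math]::Round($attainment, 3)              # 100 (rounding would overstate)
'{0} is reported as {1}, not {2}' -f $attainment, $truncated, $rounded
```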
+
+| Month | 2021 | 2022 | 2023 |
+| | | | |
+| January | | 99.998% | 99.998% |
+| February | 99.999% | 99.999% | 99.999% |
+| March | 99.568% | 99.998% | 99.999% |
+| April | 99.999% | 99.999% | 99.999% |
+| May | 99.999% | 99.999% | 99.999% |
+| June | 99.999% | 99.999% | 99.999% |
+| July | 99.999% | 99.999% | 99.999% |
+| August | 99.999% | 99.999% | 99.999% |
+| September | 99.999% | 99.998% | |
+| October | 99.999% | 99.999% | |
+| November | 99.998% | 99.999% | |
+| December | 99.978% | 99.999% | |
+
+<a name='how-is-azure-ad-sla-measured-'></a>
+
+### How is Microsoft Entra SLA measured?
+
+The Microsoft Entra SLA is measured in a way that reflects the customer authentication experience, rather than simply reporting on whether the system is available to outside connections. This distinction means that the calculation is based on whether:
+
+- Users can authenticate
+- Microsoft Entra ID successfully issues tokens for target apps after authentication
+
+The numbers in the table are a global total of Microsoft Entra authentications across all customers and geographies.
+
+## Incident history
+
+All incidents that seriously impact Microsoft Entra performance are documented in the [Azure status history](https://azure.status.microsoft/status/history/). Not all events documented in Azure status history are serious enough to cause Microsoft Entra ID to go below its SLA. You can view information about the impact of incidents, and a root cause analysis of what caused the incident and what steps Microsoft took to prevent future incidents.
+
+## Tenant-level SLA (preview)
+
+In addition to providing global SLA performance, Microsoft Entra ID now provides tenant-level SLA performance. This feature is currently in preview.
+
+To access your tenant-level SLA performance:
+
+1. Navigate to the [Microsoft Entra admin center](https://entra.microsoft.com) using the Reports Reader role (or higher).
+1. Browse to **Identity** > **Monitoring & health** > **Scenario Health** from the side menu.
+1. Select the **SLA Monitoring** tab.
+1. Hover over the graph to see the SLA performance for that month.
+
+![Screenshot of the tenant-level SLA results.](media/reference-azure-ad-sla-performance/tenent-level-sla.png)
+
+## Next steps
+
+* [Microsoft Entra monitoring and health overview](overview-monitoring-health.md)
+* [Programmatic access to Microsoft Entra reports](./howto-configure-prerequisites-for-reporting-api.md)
+* [Microsoft Entra ID risk detections](../identity-protection/overview-identity-protection.md)
active-directory Govwin Iq Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/govwin-iq-tutorial.md
+
+ Title: Microsoft Entra SSO integration with GovWin IQ
+description: Learn how to configure single sign-on between Microsoft Entra ID and GovWin IQ.
++++++++ Last updated : 09/27/2023++++
+# Microsoft Entra SSO integration with GovWin IQ
+
+In this tutorial, you'll learn how to integrate GovWin IQ with Microsoft Entra ID. GovWin IQ by Deltek is the industry-leading platform providing the most comprehensive market intelligence for U.S. federal, state and local, and Canadian governments. When you integrate GovWin IQ with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to GovWin IQ.
+* Enable your users to be automatically signed-in to GovWin IQ with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with GovWin IQ, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* An active GovWin IQ Subscription. Single sign-on can be enabled at no cost. Make sure your Customer Success Manager has enabled a user at your organization as a SAML SSO Admin to perform the following steps.
+* All users must have the same email address in Microsoft Entra ID as provisioned in GovWin IQ.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* GovWin IQ supports only **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Adding GovWin IQ from the gallery
+
+To configure the integration of GovWin IQ into Microsoft Entra ID, you need to add GovWin IQ from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **GovWin IQ** in the search box.
+1. Select **GovWin IQ** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for GovWin IQ
+
+Configure and test Microsoft Entra SSO with GovWin IQ using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in GovWin IQ.
+
+To configure and test Microsoft Entra SSO with GovWin IQ, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure GovWin IQ SSO](#configure-govwin-iq-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Assign GovWin IQ test user to SSO](#assign-govwin-iq-test-user-to-sso)** - to have a counterpart of B.Simon in GovWin IQ that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **GovWin IQ** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://iq.govwin.com/cas`
+
+ b. In the **Reply URL** textbox, enter the value from the GovWin IQ Reply URL field.
+
+ Reply URL will be of the following pattern:
+ `https://iq.govwin.com/cas/login?client_name=ORG_<ID>`
+
+    c. In the **Sign on URL** textbox, enter the value from the GovWin IQ Sign On URL field.
+
+ Sign on URL will be of the following pattern:
+ `https://iq.govwin.com/cas/clientredirect?client_name=ORG_<ID>`
+
+ > [!NOTE]
+ > Update these values with the actual Reply URL and Sign on URL found in the GovWin SAML Single Sign-On Configuration page, accessible by your designated SAML SSO Admin. Reach out to your [Customer Success Manager](mailto:CustomerSuccess@iq.govwin.com) for assistance. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
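Optionally, before handing the copied value to GovWin IQ, you can confirm that it resolves. This is a minimal sketch only; the URL shown is a placeholder, so paste the exact value you copied from the portal:

```powershell
# Minimal sketch: check that the copied App Federation Metadata Url is reachable
# and inspect the issuer it advertises.
$metadataUrl = '<paste the App Federation Metadata Url copied from the portal>'

$response = Invoke-WebRequest -Uri $metadataUrl -UseBasicParsing
$response.StatusCode                                   # expect 200
([xml]$response.Content).EntityDescriptor.entityID     # issuer advertised by Microsoft Entra ID
```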
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable a test user to use Microsoft Entra single sign-on by granting access to GovWin IQ.
+
+ > [!Note]
+ > The user selected for testing must have an existing active GovWin IQ account.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **GovWin IQ**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select a test user from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure GovWin IQ SSO
+
+1. Log in to GovWin IQ company site as the SAML SSO Admin user.
+
+1. Navigate to [**SAML Single Sign-On Configuration** page](https://iq.govwin.com/neo/authenticationConfiguration/viewSamlSSOConfig) and perform the following steps:
+
+ ![Screenshot shows settings of the configuration.](./media/govwin-iq-tutorial/settings.png "Account")
+
+ 1. Select **Azure** from the Identity Provider (IdP) dropdown.
+ 1. Copy **Identifier (EntityID)** value, paste this value into the **Identifier** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center.
+ 1. Copy **Reply URL** value, paste this value into the **Reply URL** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center.
+ 1. Copy **Sign On URL** value, paste this value into the **Sign on URL** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center.
+
+1. In the **Metadata URL** textbox, paste the **App Federation Metadata Url**, which you have copied from the Microsoft Entra admin center.
+
+ ![Screenshot shows metadata of the configuration.](./media/govwin-iq-tutorial/values.png "Folder")
+
+1. Click **Submit IDP Metadata**.
+
+### Assign GovWin IQ test user to SSO
+
+1. In the [**SAML Single Sign-On Configuration** page](https://iq.govwin.com/neo/authenticationConfiguration/viewSamlSSOConfig), navigate to **Excluded Users** tab and click **Select Users to Exclude from SSO**.
+
+ ![Screenshot shows how to exclude users from the page.](./media/govwin-iq-tutorial/data.png "Users")
+
+ > [!Note]
+ > Here you can select users to include or exclude from SSO. If you have a webservices subscription, the webservices user should always be excluded from SSO.
+
+1. Next, click **Exclude All Users from SSO** for testing purposes. This is to prevent any impact to existing access for users while testing SSO.
+
+1. Next, select the test user and click **Add Selected Users to SSO**.
+
+1. Once testing is successful, add the rest of the users you want to enable for SSO.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with following options.
+
+> [!Note]
+> It may take up to 10 minutes for the configuration to sync.
+
+* Click on **Test this application** in Microsoft Entra admin center. This will redirect to GovWin IQ Sign-on URL where you can initiate the login flow.
+
+* Go to GovWin IQ Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the GovWin IQ tile in the My Apps, this will redirect to GovWin IQ Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next Steps
+
+Add all remaining users to the Microsoft Entra ID GovWin IQ app to enable SSO access. Once you configure GovWin IQ you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory The People Experience Hub Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/the-people-experience-hub-tutorial.md
+
+ Title: Microsoft Entra SSO integration with The People Experience Hub
+description: Learn how to configure single sign-on between Microsoft Entra ID and The People Experience Hub.
++++++++ Last updated : 09/22/2023++++
+# Microsoft Entra SSO integration with The People Experience Hub
+
+In this tutorial, you'll learn how to integrate The People Experience Hub with Microsoft Entra ID. When you integrate The People Experience Hub with Microsoft Entra ID, you can:
+
+* Control in Microsoft Entra ID who has access to The People Experience Hub.
+* Enable your users to be automatically signed-in to The People Experience Hub with their Microsoft Entra accounts.
+* Manage your accounts in one central location.
+
+## Prerequisites
+
+To integrate Microsoft Entra ID with The People Experience Hub, you need:
+
+* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* The People Experience Hub single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Microsoft Entra SSO in a test environment.
+
+* The People Experience Hub supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Adding The People Experience Hub from the gallery
+
+To configure the integration of The People Experience Hub into Microsoft Entra ID, you need to add The People Experience Hub from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**.
+1. In the **Add from the gallery** section, type **The People Experience Hub** in the search box.
+1. Select **The People Experience Hub** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Microsoft Entra SSO for The People Experience Hub
+
+Configure and test Microsoft Entra SSO with The People Experience Hub using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in The People Experience Hub.
+
+To configure and test Microsoft Entra SSO with The People Experience Hub, perform the following steps:
+
+1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature.
+ 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon.
+ 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on.
+1. **[Configure The People Experience Hub SSO](#configure-the-people-experience-hub-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create The People Experience Hub test user](#create-the-people-experience-hub-test-user)** - to have a counterpart of B.Simon in The People Experience Hub that is linked to the Microsoft Entra ID representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Microsoft Entra SSO
+
+Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **The People Experience Hub** > **Single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://app.pxhub.io`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://auth.api.pxhub.io/v1/auth/saml/<COMPANY_ID>/assert`
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://auth.api.pxhub.io/v1/auth/saml/<COMPANY_ID>/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [The People Experience Hub support team](mailto:it@pxhub.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link](common/certificatebase64.png "Certificate")
+
+1. On the **Set up The People Experience Hub** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create a Microsoft Entra ID test user
+
+In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator).
+1. Browse to **Identity** > **Users** > **All users**.
+1. Select **New user** > **Create new user**, at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Display name** field, enter `B.Simon`.
+ 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Select **Review + create**.
+1. Select **Create**.
+
+### Assign the Microsoft Entra ID test user
+
+In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to The People Experience Hub.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
+1. Browse to **Identity** > **Applications** > **Enterprise applications** > **The People Experience Hub**.
+1. In the app's overview page, select **Users and groups**.
+1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog.
+ 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+ 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+ 1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure The People Experience Hub SSO
+
+1. Log in to The People Experience Hub company site as an administrator.
+
+1. Go to **Admin Settings** > **Integrations** > **Single Sign-On** and click **Manage**.
+
+ ![Screenshot shows settings of the configuration.](./media/the-people-experience-hub-tutorial/settings.png "Account")
+
+1. In the **SAML 2.0 Single sign-on** page, perform the following steps:
+
+ ![Screenshot shows configuration of the page.](./media/the-people-experience-hub-tutorial/values.png "Page")
+
+ 1. **Enable SAML 2.0 Single sign-on** toggle on.
+
+ 1. Copy **EntityID** value, paste this value into the **Identifier** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center.
+
+ 1. Copy **Login URL** value, paste this value into the **Sign on URL** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center.
+
+ 1. Copy **Reply URL** value, paste this value into the **Reply URL** textbox in the **Basic SAML Configuration** section in Microsoft Entra admin center.
+
+ 1. In the **SSO Login URL** textbox, paste the **Login URL** value, which you copied from the Microsoft Entra admin center.
+
+ 1. Open the downloaded **Certificate (Base64)** into Notepad and paste the content into the **X509 Certificate** textbox.
+
+ 1. Click **Save Configuration**.
+
+### Create The People Experience Hub test user
+
+1. In a different web browser window, sign into The People Experience Hub website as an administrator.
+
+1. Navigate to **Admin Settings** > **Users** and click **Create**.
+
+ ![Screenshot shows how to create users in application.](./media/the-people-experience-hub-tutorial/create.png "Users")
+
+1. In the **Create a new admin users** section, perform the following steps:
+
+ ![Screenshot shows how to create new users in the page.](./media/the-people-experience-hub-tutorial/details.png "Creating Users")
+
+ 1. In the **Email** textbox, enter a valid email address of the user.
+
+ 1. In the **First Name** textbox, enter the first name of the user.
+
+ 1. In the **Last Name** textbox, enter the last name of the user.
+
+ 1. Click **Create User**.
+
+## Test SSO
+
+In this section, you test your Microsoft Entra single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Microsoft Entra admin center. This will redirect to The People Experience Hub Sign-on URL where you can initiate the login flow.
+
+* Go to The People Experience Hub Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Microsoft Entra admin center and you should be automatically signed in to The People Experience Hub for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click The People Experience Hub tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to The People Experience Hub for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next Steps
+
+Once you configure The People Experience Hub you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
advisor Advisor Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-get-started.md
Title: Get started with Azure Advisor description: Get started with Azure Advisor.+++ Previously updated : 02/01/2019 Last updated : 09/15/2023 # Get started with Azure Advisor
-Learn how to access Advisor through the Azure portal, get recommendations, and implement recommendations.
+Learn how to access Advisor through the Azure portal, get and manage recommendations, and configure Advisor settings.
> [!NOTE]
-> Azure Advisor automatically runs in the background to find newly created resources. It can take up to 24 hours to provide recommendations on those resources.
+> Azure Advisor runs in the background to find newly created resources. It can take up to 24 hours to provide recommendations on those resources.
-## Get recommendations
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the left pane, click **Advisor**. If you do not see Advisor in the left pane, click **All services**. In the service menu pane, under **Monitoring and Management**, click **Advisor**. The Advisor dashboard is displayed.
-
- ![Access Azure Advisor using the Azure portal](./media/advisor-get-started/advisor-portal-menu.png)
-
-1. The Advisor dashboard will display a summary of your recommendations for all selected subscriptions. You can choose the subscriptions that you want recommendations to be displayed for using the subscription filter dropdown.
-
-1. To get recommendations for a specific category, click one of the tabs: **Cost**, **Security**, **Reliability**, **Operational Excellence**, or **Performance**.
-
- ![Azure Advisor dashboard](./media/advisor-overview/advisor-dashboard.png)
+## Open Advisor
-## Get recommendation details and implement a solution
+To access Azure Advisor, sign in to the [Azure portal](https://portal.azure.com) and open [Advisor](https://aka.ms/azureadvisordashboard). The Advisor score page opens by default.
-You can select a recommendation in Advisor to view additional details – such as the recommendation actions and impacted resources – and to implement the solution to the recommendation.
+You can also use the search bar at the top, or the left navigation pane, to find Advisor.
-1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard).
-1. Select a recommendation category to display the list of recommendations within that category, or select the **All** tab to view all your recommendations.
+## Read your score
+See how your system configuration measures against Azure best practices.
-1. Click a recommendation that you want to review in detail.
-1. Review the information about the recommendation and the resources that the recommendation applies to.
+* The far-left graphic shows your overall Advisor score measured against Azure best practices. The **Learn more** link opens the [Optimize Azure workloads by using Advisor score](azure-advisor-score.md) page.
-1. Click on the **Recommended Action** to implement the recommendation.
+* The middle graphic shows the trend of your Advisor score over time. Roll over the graphic to activate a slider that shows your score at different points in time. Use the drop-down menu to pick a trend time frame.
-## Filter recommendations
+* The far-right graphic shows a breakdown of your best practices Advisor score per category. Click a category bar to open the recommendations page for that category.
-You can filter recommendations to drill down to what is most important to you. You can filter by subscription, resource type, or recommendation status.
-
-1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard).
-
-1. Use the dropdowns on the Advisor dashboard to filter by subscription, resource type, or recommendation status.
-
- ![Advisor search-filter criteria](./media/advisor-get-started/advisor-filters.png)
-
-## Postpone or dismiss recommendations
-
-1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard).
+## Get recommendations
-1. Navigate to the recommendation you want to postpone or dismiss.
+To display a specific list of recommendations, click a category tile.
-1. Click the recommendation.
+The tiles on the Advisor score page show the different categories of recommendations per subscription:
-1. Click **Postpone**.
+* To get recommendations for a specific category, click one of the tiles. To open a list of all recommendations for all categories, click the **All recommendations** tile. By default, the **Cost** tile is selected.
-1. Specify a postpone time period, or select **Never** to dismiss the recommendation.
+* You can filter the display using the buttons at the top of the page:
+ * **Subscription**: Choose *All* for Advisor recommendations on all subscriptions. Alternatively, select specific subscriptions. Apply changes by clicking outside of the button.
+ * **Recommendation Status**: *Active* (the default; recommendations that you haven't postponed or dismissed), *Postponed*, or *Dismissed*. Apply changes by clicking outside of the button.
+ * **Resource Group**: Choose *All* (the default) or specific resource groups. Apply changes by clicking outside of the button.
+ * **Type**: Choose *All* (the default) or specific resources. Apply changes by clicking outside of the button.
+ * **Commitments**: Applicable only to cost recommendations. Adjust your subscription **Cost** recommendations to reflect your committed **Term (years)** and chosen **Look-back period (days)**. Apply changes by clicking **Apply**.
+ * For more advanced filtering, click **Add filter**.
-## Exclude subscriptions or resource groups
+* The **Commitments** button lets you adjust your subscription **Cost** recommendations to reflect your committed **Term (years)** and chosen **Look-back period (days)**.
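If you prefer to script against Advisor instead of using the portal filters, the Azure CLI exposes similar options. The following is a minimal sketch, assuming the `az advisor recommendation list` command and its `--category` and `--resource-group` parameters are available in your CLI version; the resource group name is a placeholder.

```azurecli-interactive
# List only Cost recommendations, scoped to an example resource group.
az advisor recommendation list \
    --category Cost \
    --resource-group myResourceGroup \
    --output table
```

Switch `--output table` to `--output json` and redirect the result to a file if you want to analyze the data outside the portal.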
-You may have resource groups or subscriptions for which you do not want to receive Advisor recommendations, such as 'test' resources. You can configure Advisor to only generate recommendations for specific subscriptions and resource groups.
+## Get recommendation details and solution options
-> [!NOTE]
-> To include or exclude a subscription or resource group from Advisor, you must be a subscription Owner. If you do not have the required permissions for a subscription or resource group, the option to include or exclude it is disabled in the user interface.
+View recommendation details, such as the recommended actions and impacted resources, and the solution options, including postponing or dismissing a recommendation.
-1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard).
+1. To review details of a recommendation, including the affected resources, open the recommendation list for a category and then click the **Description** or the **Impacted resources** link for a specific recommendation. The following screenshot shows a **Reliability** recommendation details page.
-1. Click **Configure** in the action bar.
+ :::image type="content" source="./media/advisor-get-started/advisor-score-reliability-recommendation-page.png" alt-text="Screenshot of Azure Advisor reliability recommendation details example." lightbox="./media/advisor-get-started/advisor-score-reliability-recommendation-page.png":::
-1. Uncheck any subscriptions or resource groups you do not want to receive Advisor recommendations for.
+1. To see action details, click a **Recommended actions** link. The Azure page where you can take the action opens. Alternatively, open the page for the affected resources to take the recommended action (the two pages may be the same).
+
+ Understand the recommendation before you act by clicking the **Learn more** link on the recommended action page, or at the top of the recommendation details page.
- ![Advisor configure resources example](./media/advisor-get-started/advisor-configure-resources.png)
+1. You can postpone the recommendation.
+
+ :::image type="content" source="./media/advisor-get-started/advisor-recommendation-postpone.png" alt-text="Screenshot of Azure Advisor recommendation postpone option." lightbox="./media/advisor-get-started/advisor-recommendation-postpone.png":::
-1. Click the **Apply** button.
+ You can't dismiss the recommendation without certain privileges. For information on permissions, see [Permissions in Azure Advisor](permissions.md).
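If you manage recommendations programmatically, postponing can also be scripted. The sketch below assumes the `az advisor recommendation disable` command with a `--days` parameter is available in your CLI version; the query used to pick a recommendation ID is only illustrative.

```azurecli-interactive
# Grab the ID of the first returned recommendation (illustrative), then postpone it for 30 days.
recommendationId=$(az advisor recommendation list --query "[0].id" --output tsv)
az advisor recommendation disable --ids "$recommendationId" --days 30
```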
-## Configure low usage VM recommendation
+## Download recommendations
-This procedure configures the average CPU utilization rule for the low usage virtual machine recommendation.
+To download your recommendations from the Advisor score or any recommendation details page, click **Download as CSV** or **Download as PDF** on the action bar at the top. The download option respects any filters you have applied to Advisor. If you select the download option while viewing a specific recommendation category or recommendation, the downloaded summary only includes information for that category or recommendation.
-Advisor monitors your virtual machine usage for 7 days by default and then identifies low-utilization virtual machines.
-Virtual machines are considered low-utilization if their CPU utilization is 5% or less and their network utilization is less than 2% or if the current workload can be accommodated by a smaller virtual machine size.
+## Configure recommendations
-If you would like to be more aggressive at identifying low usage virtual machines, you can adjust the average CPU utilization rule and the look back period on a per subscription basis.
-The CPU utilization rule can be set to 5%, 10%, 15%, 20%, or 100%(Default). In case the trigger is selected as 100%, it will present recommendations for virtual machines with less than 5%, 10%, 15%, and 20% of CPU utilization.
-You can select how far back in historical data you want to analyze: 7 days (default), 14, 21, 30, 60, or 90 days.
+You can exclude subscriptions or resources, such as 'test' resources, from Advisor recommendations and configure Advisor to generate recommendations only for specific subscriptions and resource groups.
> [!NOTE]
-> To adjust the average CPU utilization rule for identifying low usage virtual machines, you must be a subscription *Owner*. If you do not have the required permissions for a subscription or resource group, the option to include or exclude it will be disabled in the user interface.
-
-1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard).
-
-1. Click **Configure** in the action bar.
-
-1. Click the **Rules** tab.
-
-1. Select the subscriptions you'd like to adjust the average CPU utilization rule for, and then click **Edit**.
-
-1. Select the desired average CPU utilization value, and click **Apply**.
-
-1. Click **Refresh recommendations** to update your existing recommendations to use the new average CPU utilization rule.
+> To change subscriptions or Advisor compute rules, you must be a subscription Owner. If you do not have the required permissions, the option is disabled in the user interface. For information on permissions, see [Permissions in Azure Advisor](permissions.md). For details on right sizing VMs, see [Reduce service costs by using Azure Advisor](advisor-cost-recommendations.md).
- ![Advisor configure recommendation rules example](./media/advisor-get-started/advisor-configure-rules.png)
+From any Azure Advisor page, click **Configuration** in the left navigation pane. The Advisor Configuration page opens with the **Resources** tab selected by default.
-## Download recommendations
-Advisor enables you to download a summary of your recommendations. You can download your recommendations as a PDF file or a CSV file. Downloading your recommendations enables you to easily share with your colleagues or perform your own analysis on top of the recommendation data.
+* **Resources**: Uncheck any subscriptions you don't want to receive Advisor recommendations for, and then click **Apply**. The page refreshes.
-1. Sign in to the [Azure portal](https://portal.azure.com), and then open [Advisor](https://aka.ms/azureadvisordashboard).
+* **VM/VMSS right sizing**: You can adjust the average CPU utilization rule and the look-back period on a per-subscription basis. Right sizing virtual machines (VMs) requires specialized knowledge.
-1. Click **Download as CSV** or **Download as PDF** on the action bar.
+ 1. Select the subscriptions you'd like to adjust the average CPU utilization rule for, and then click **Edit**. Not all subscriptions can be edited for VM/VMSS right sizing, and certain privileges are required; for more information on permissions, see [Permissions in Azure Advisor](permissions.md).
+
+ 1. Select the desired average CPU utilization value and click **Apply**. It can take up to 24 hours for the new settings to be reflected in recommendations.
-The download option respects any filters you have applied to the Advisor dashboard. If you select the download option while viewing a specific recommendation category or recommendation, the downloaded summary only includes information for that category or recommendation.
+ :::image type="content" source="./media/advisor-get-started/advisor-configure-rules.png" alt-text="Screenshot of Azure Advisor configuration option for VM/VMSS sizing rules." lightbox="./media/advisor-get-started/advisor-configure-rules.png":::
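These settings can also be scripted. Here's a minimal sketch, assuming the `az advisor configuration` command group (with a `--low-cpu-threshold` parameter on `update`) is available in your CLI version; the 5% value is just an example.

```azurecli-interactive
# Review the current Advisor configuration for the subscription.
az advisor configuration list --output table

# Lower the average CPU utilization threshold used for low-usage VM recommendations to 5%.
az advisor configuration update --low-cpu-threshold 5
```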
## Next steps

To learn more about Advisor, see:

- [Introduction to Azure Advisor](advisor-overview.md)
-- [Advisor Reliability recommendations](advisor-high-availability-recommendations.md)
-- [Advisor Security recommendations](advisor-security-recommendations.md)
-- [Advisor Performance recommendations](advisor-performance-recommendations.md)
- [Advisor Cost recommendations](advisor-cost-recommendations.md)
+- [Advisor Security recommendations](advisor-security-recommendations.md)
+- [Advisor Reliability recommendations](advisor-high-availability-recommendations.md)
- [Advisor Operational Excellence recommendations](advisor-operational-excellence-recommendations.md)
+- [Advisor Performance recommendations](advisor-performance-recommendations.md)
ai-services Call Analyze Image 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-analyze-image-40.md
Last updated 08/01/2023-+ zone_pivot_groups: programming-languages-computer-vision-40
ai-services Image Analysis Client Library 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40.md
Last updated 01/24/2023 ms.devlang: csharp, golang, java, javascript, python-+ zone_pivot_groups: programming-languages-computer-vision-40 keywords: Azure AI Vision, Azure AI Vision service
ai-services Install Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/install-sdk.md
Last updated 08/01/2023 -+ zone_pivot_groups: programming-languages-vision-40-sdk
ai-services Overview Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/overview-sdk.md
Last updated 08/01/2023 -+ # Vision SDK overview
Before you create a new issue:
## Next steps - [Install the SDK](./install-sdk.md)-- [Try the Image Analysis Quickstart](../quickstarts-sdk/image-analysis-client-library-40.md)
+- [Try the Image Analysis Quickstart](../quickstarts-sdk/image-analysis-client-library-40.md)
ai-services Multi Service Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/multi-service-resource.md
keywords: Azure AI services, cognitive
-+ Last updated 08/02/2023
The multi-service resource enables access to the following Azure AI services wit
## Next steps
-* Now that you have a resource, you can authenticate your API requests to one of the [supported Azure AI services](#supported-services-with-a-multi-service-resource).
+* Now that you have a resource, you can authenticate your API requests to one of the [supported Azure AI services](#supported-services-with-a-multi-service-resource).
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
The `gpt-4` model supports 8192 max input tokens and the `gpt-4-32k` model suppo
## GPT-3.5
-GPT-3.5 models can understand and generate natural language or code. The most capable and cost effective model in the GPT-3.5 family is GPT-3.5 Turbo, which has been optimized for chat and works well for traditional completions tasks as well. We recommend using GPT-3.5 Turbo over [legacy GPT-3.5 and GPT-3 models](./legacy-models.md).
+GPT-3.5 models can understand and generate natural language or code. The most capable and cost-effective model in the GPT-3.5 family is GPT-3.5 Turbo, which has been optimized for chat and also works well for traditional completion tasks. GPT-3.5 Turbo is available for use with the Chat Completions API. GPT-3.5 Turbo Instruct has similar capabilities to `text-davinci-003` but uses the Completions API instead of the Chat Completions API. We recommend using GPT-3.5 Turbo and GPT-3.5 Turbo Instruct over [legacy GPT-3.5 and GPT-3 models](./legacy-models.md).
- `gpt-35-turbo` - `gpt-35-turbo-16k`
+- `gpt-35-turbo-instruct`
-The `gpt-35-turbo` model supports 4096 max input tokens and the `gpt-35-turbo-16k` model supports up to 16,384 tokens.
+The `gpt-35-turbo` model supports 4096 max input tokens and the `gpt-35-turbo-16k` model supports up to 16,384 tokens. `gpt-35-turbo-instruct` supports 4097 max input tokens.
-Like GPT-4, use the Chat Completions API to use GPT-3.5 Turbo. To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API check out our [in-depth how-to](../how-to/chatgpt.md).
+To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API, check out our [in-depth how-to](../how-to/chatgpt.md).
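As an illustration of the request shape that the Completions API expects for a `gpt-35-turbo-instruct` deployment, here's a hedged sketch; the resource name, deployment name, and API version are placeholders you need to replace with your own values.

```bash
# Call the Azure OpenAI Completions endpoint for a gpt-35-turbo-instruct deployment.
curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"prompt": "Write a tagline for an ice cream shop.", "max_tokens": 50}'
```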
## Embeddings models
GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als
| `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 | | `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 4,096 | Sep 2021 | | `gpt-35-turbo-16k` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 16,384 | Sep 2021 |
+| `gpt-35-turbo-instruct` (0914) | East US, Sweden Central | N/A | 4,097 | Sep 2021 |
<sup>1</sup> Version `0301` of gpt-35-turbo will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
While Power Virtual Agents has features that leverage Azure OpenAI such as [gene
> [!NOTE] > Deploying to Power Virtual Agents from Azure OpenAI is only available to US regions.
+> Power Virtual Agents supports Azure Cognitive Search indexes with keyword or semantic search only. Other data sources and advanced features may not be supported.
#### Using the web app
ai-services Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/create-resource.md
description: Learn how to get started with Azure OpenAI Service and create your
-+ Last updated 08/25/2023 zone_pivot_groups: openai-create-resource
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
description: Use this article to import and use your data in Azure OpenAI.
+
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
keywords:
## September 2023
+### GPT-3.5 Turbo Instruct
+
+Azure OpenAI Service now supports the GPT-3.5 Turbo Instruct model. This model has performance comparable to `text-davinci-003` and is available to use with the Completions API. Check the [models page](concepts/models.md), for the latest information on model availability in each region.
+ ### Whisper public preview Azure OpenAI Service now supports speech to text APIs powered by OpenAI's Whisper model. Get AI-generated text based on the speech audio you provide. To learn more, check out the [quickstart](./whisper-quickstart.md).
aks Access Private Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/access-private-cluster.md
Title: Access a private Azure Kubernetes Service (AKS) cluster description: Learn how to access a private Azure Kubernetes Service (AKS) cluster using the Azure CLI or Azure portal. + Last updated 09/15/2023
aks App Routing Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-migration.md
description: Learn how to migrate from the HTTP application routing feature to t
-+ Last updated 08/18/2023
After migrating to the application routing add-on, learn how to [monitor ingress
<!-- EXTERNAL LINKS --> [dns-pricing]: https://azure.microsoft.com/pricing/details/dns/ [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
+[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
aks App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md
Title: Azure Kubernetes Service (AKS) ingress with the application routing add-on (preview)
+ Title: Azure Kubernetes Service (AKS) managed nginx ingress with the application routing add-on (preview)
description: Use the application routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS).
Last updated 08/07/2023
-# Azure Kubernetes Service (AKS) ingress with the application routing add-on (preview)
+# Managed nginx ingress with the application routing add-on (preview)
-The application routing add-on configures an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your Azure Kubernetes Service (AKS) cluster with SSL termination through certificates stored in Azure Key Vault. It can optionally integrate with Open Service Mesh (OSM) for end-to-end encryption of inter-cluster communication using mutual TLS (mTLS). When you deploy ingresses, the add-on creates publicly accessible DNS names for endpoints on an Azure DNS zone.
+The application routing add-on configures an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your Azure Kubernetes Service (AKS) cluster with SSL termination through certificates stored in Azure Key Vault. When you deploy ingresses, the add-on creates publicly accessible DNS names for endpoints on an Azure DNS zone.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-## Application routing add-on overview
+## Application routing add-on with nginx overview
The application routing add-on deploys the following components:
aks Dapr Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-migration.md
Title: Migrate from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS)
-description: Learn how to migrate your managed clusters from Dapr OSS to the Dapr extension for AKS
+description: Learn how to migrate your managed clusters from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS).
Previously updated : 11/21/2022 Last updated : 09/26/2023 # Migrate from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS)
-You've installed and configured Dapr OSS (using Dapr CLI or Helm) on your Kubernetes cluster, and want to start using the Dapr extension on AKS. In this guide, you'll learn how the Dapr extension for AKS can use the Kubernetes resources created by Dapr OSS and start managing them, by either:
+This article shows you how to migrate from Dapr OSS to the Dapr extension for AKS.
-- Checking for an existing Dapr installation via Azure CLI prompts (default method), or-- Using the release name and namespace from `--configuration-settings` to explicitly point to an existing Dapr installation.
+You can configure the Dapr extension to use and manage the Kubernetes resources created by Dapr OSS by [checking for an existing Dapr installation using the Azure CLI](#check-for-an-existing-dapr-installation) (*default method*) or [configuring the existing Dapr installation using `--configuration-settings`](#configure-the-existing-dapr-installation-usingconfiguration-settings).
+
+For more information, see [Dapr extension for AKS][dapr-extension-aks].
## Check for an existing Dapr installation
-The Dapr extension, by default, checks for existing Dapr installations when you run the `az k8s-extension create` command. To list the details of your current Dapr installation, run the following command and save the Dapr release name and namespace:
+When you [create the Dapr extension](./dapr.md), the extension checks for an existing Dapr installation on your cluster. If Dapr exists, the extension uses and manages the Kubernetes resources created by Dapr OSS.
-```bash
-helm list -A
-```
+1. List the details of your current Dapr installation using the `helm list -A` command and save the Dapr release name and namespace from the output.
-When [installing the extension][dapr-create], you'll receive a prompt asking if Dapr is already installed:
+ ```azurecli-interactive
+ helm list -A
+ ```
-```bash
-Is Dapr already installed in the cluster? (y/N): y
-```
+2. Enter the Helm release name and namespace (from `helm list -A`) when prompted with the following questions:
-If Dapr is already installed, please enter the Helm release name and namespace (from `helm list -A`) when prompted the following:
+ ```azurecli-interactive
+ Enter the Helm release name for Dapr, or press Enter to use the default name [dapr]:
+ Enter the namespace where Dapr is installed, or press Enter to use the default namespace [dapr-system]:
+ ```
-```bash
-Enter the Helm release name for Dapr, or press Enter to use the default name [dapr]:
-Enter the namespace where Dapr is installed, or press Enter to use the default namespace [dapr-system]:
-```
+## Configure the existing Dapr installation using `--configuration-settings`
-## Configuring the existing Dapr installation using `--configuration-settings`
+When you [create the Dapr extension](./dapr.md), you can configure the extension to use and manage the Kubernetes resources created by Dapr OSS using the `--configuration-settings` flag.
-Alternatively, when creating the Dapr extension, you can configure the above settings via `--configuration-settings`. This method is useful when you are automating the installation via bash scripts, CI pipelines, etc.
+1. List the details of your current Dapr installation using the `helm list -A` command and save the Dapr release name and namespace from the output.
-If you don't have an existing Dapr installation on your cluster, set `skipExistingDaprCheck` to `true`:
+ ```azurecli-interactive
+ helm list -A
+ ```
-```azurecli-interactive
-az k8s-extension create --cluster-type managedClusters \
    --cluster-name myAKScluster \
    --resource-group myResourceGroup \
    --name dapr \
    --extension-type Microsoft.Dapr \
    --configuration-settings "skipExistingDaprCheck=true"
-```
+2. Create the Dapr extension using the [`az k8s-extension create`][az-k8s-extension-create] and use the `--configuration-settings` flags to set the Dapr release name and namespace.
-If Dapr exists on your cluster, set the Helm release name and namespace (from `helm list -A`) via `--configuration-settings`:
-
-```azurecli-interactive
-az k8s-extension create --cluster-type managedClusters \
    --cluster-name myAKScluster \
    --resource-group myResourceGroup \
    --name dapr \
    --extension-type Microsoft.Dapr \
    --configuration-settings "existingDaprReleaseName=dapr" \
    --configuration-settings "existingDaprReleaseNamespace=dapr-system"
-```
+ ```azurecli-interactive
+ az k8s-extension create --cluster-type managedClusters \
+ --cluster-name myAKSCluster \
+ --resource-group myResourceGroup \
+ --name dapr \
+ --extension-type Microsoft.Dapr \
+ --configuration-settings "existingDaprReleaseName=dapr" \
+ --configuration-settings "existingDaprReleaseNamespace=dapr-system"
+ ```
## Update HA mode or placement service settings
-When you install the Dapr extension on top of an existing Dapr installation, you'll see the following prompt:
+When installing the Dapr extension on top of an existing Dapr installation, you receive the following message:
-> ```The extension will be installed on your existing Dapr installation. Note, if you have updated the default values for global.ha.* or dapr_placement.* in your existing Dapr installation, you must provide them in the configuration settings. Failing to do so will result in an error, since Helm upgrade will try to modify the StatefulSet. See <link> for more information.```
+```output
+The extension will be installed on your existing Dapr installation. Note, if you have updated the default values for global.ha.* or dapr_placement.* in your existing Dapr installation, you must provide them in the configuration settings. Failing to do so will result in an error, since Helm upgrade will try to modify the StatefulSet. See <link> for more information.
+```
-Kubernetes only allows for limited fields in StatefulSets to be patched, subsequently failing upgrade of the placement service if any of the mentioned settings are configured. You can follow the steps below to update those settings:
+Kubernetes only allows a limited set of StatefulSet fields to be patched, so the upgrade fails if any of the HA mode or placement service settings are configured. To update those settings, delete the stateful set and then update the HA mode.
-1. Delete the stateful set.
+1. Delete the stateful set using the `kubectl delete` command.
```azurecli-interactive kubectl delete statefulset.apps/dapr-placement-server -n dapr-system ```
-1. Update the HA mode:
-
+2. Update the HA mode using the [`az k8s-extension update`][az-k8s-extension-update] command.
+ ```azurecli-interactive az k8s-extension update --cluster-type managedClusters \ --cluster-name myAKSCluster \
Kubernetes only allows for limited fields in StatefulSets to be patched, subsequ
--configuration-settings "global.ha.enabled=true" \ ```
-For more information, see [Dapr Production Guidelines][dapr-prod-guidelines].
-
+For more information, see the [Dapr production guidelines][dapr-prod-guidelines].
## Next steps Learn more about [Dapr][dapr-overview] and [how to use it][dapr-howto]. - <!-- LINKS INTERNAL --> [dapr-overview]: ./dapr-overview.md [dapr-howto]: ./dapr.md
-[dapr-create]: ./dapr.md#create-the-extension-and-install-dapr-on-your-aks-or-arc-enabled-kubernetes-cluster
+[dapr-extension-aks]: ./dapr-overview.md
+[az-k8s-extension-create]: /cli/azure/k8s-extension#az-k8s-extension-create
+[az-k8s-extension-update]: /cli/azure/k8s-extension#az-k8s-extension-update
<!-- LINKS EXTERNAL --> [dapr-prod-guidelines]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production/#enabling-high-availability-in-an-existing-dapr-deployment
aks Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/events.md
+
+ Title: Use Kubernetes events for troubleshooting
+description: Learn about Kubernetes events, which provide details on pods, nodes, and other Kubernetes objects.
+++ Last updated : 09/07/2023++
+# Kubernetes events for troubleshooting
+
+Events are one of the most prominent sources for monitoring and troubleshooting issues in Kubernetes. They capture and record information about the lifecycle of various Kubernetes objects, such as pods, nodes, services, and deployments. By monitoring events, you can gain visibility into your cluster's activities, identify issues, and troubleshoot problems effectively.
+
+Kubernetes events do not persist throughout your cluster life cycle, as there is no mechanism for retention. They are short-lived, only available for one hour after the event is generated. To store events for a longer time period, enable [Container Insights][container-insights].
+
+## Kubernetes event objects
+
+The following table describes important fields of a Kubernetes event. For a comprehensive list of all fields, see the official [Kubernetes documentation][k8s-events].
+
+|Field name|Significance|
+|--|--|
+|type |Significance changes based on the severity of the event:<br/>**Warning:** these events signal potentially problematic situations, such as a pod repeatedly failing or a node running out of resources. They require attention, but might not result in immediate failure.<br/>**Normal:** These events represent routine operations, such as a pod being scheduled or a deployment scaling up. They usually indicate healthy cluster behavior.|
+|reason|The reason why the event was generated. For example, *FailedScheduling* or *CrashLoopBackoff*.|
+|message|A human-readable message that describes the event.|
+|namespace|The namespace of the Kubernetes object that the event is associated with.|
+|firstSeen|Timestamp when the event was first observed.|
+|lastSeen|Timestamp of when the event was last observed.|
+|reportingController|The name of the controller that reported the event. For example, `kubernetes.io/kubelet`|
+|object|The name of the Kubernetes object that the event is associated with.|
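To surface just these fields from the command line, a `kubectl` sketch like the following can help; the column expressions are illustrative and use the standard core/v1 Event fields.

```azurecli-interactive
# Print events sorted by when they were last seen, showing the fields described above.
kubectl get events --sort-by=.lastTimestamp \
  -o custom-columns=TYPE:.type,REASON:.reason,OBJECT:.involvedObject.name,MESSAGE:.message
```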
+
+## Accessing events
+
+# [Azure CLI](#tab/azure-cli)
+
+You can find events for your cluster and its components by using `kubectl`.
+
+```azurecli-interactive
+kubectl get events
+```
+
+To look at a specific pod's events, first find the name of the pod and then use `kubectl describe` to list events.
+
+```azurecli-interactive
+kubectl get pods
+
+kubectl describe pods <pod-name>
+```
+
+# [Portal](#tab/azure-portal)
+
+You can browse the events for your cluster by navigating to **Events** under **Kubernetes resources** from the Azure portal overview page for your cluster. By default, all events are shown.
+
+You can also filter by event type, by reason, or by pods or nodes.
+
+These filters can be combined to scope the query to your specific needs.
+++
+## Best practices for troubleshooting with events
+
+### Filtering events for relevance
+
+In your AKS cluster, you might have various namespaces and services running. Filtering events based on object type, namespace, or reason can help you narrow down your focus to what's most relevant to your applications. For instance, you can use the following command to filter events within a specific namespace:
+
+```azurecli-interactive
+kubectl get events -n <namespace>
+```
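You can also narrow events by type or reason with a field selector. A short sketch follows; the reason value shown is only an example.

```azurecli-interactive
# Show only Warning events, optionally narrowed to a specific reason and namespace.
kubectl get events --field-selector type=Warning
kubectl get events --field-selector type=Warning,reason=FailedScheduling -n <namespace>
```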
+
+### Automating event notifications
+
+To ensure timely response to critical events in your AKS cluster, set up automated notifications. Azure offers integration with monitoring and alerting services like [Azure Monitor][aks-azure-monitor]. You can configure alerts to trigger based on specific event patterns. This way, you're immediately informed about crucial issues that require attention.
+
+### Regularly reviewing events
+
+Make a habit of regularly reviewing events in your AKS cluster. This proactive approach can help you identify trends, catch potential problems early, and prevent escalations. By staying on top of events, you can maintain the stability and performance of your applications.
+
+## Next steps
+
+Now that you understand Kubernetes events, you can continue your monitoring and observability journey by [enabling Container Insights][container-insights].
+
+<!-- LINKS -->
+[aks-azure-monitor]: ./monitor-aks.md
+[container-insights]: ../azure-monitor/containers/container-insights-enable-aks.md
+[k8s-events]: https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/
aks Http Application Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md
Title: HTTP application routing add-on for Azure Kubernetes Service (AKS)
-description: Use the HTTP application routing add-on to access applications deployed on Azure Kubernetes Service (AKS).
+ Title: HTTP application routing add-on for Azure Kubernetes Service (AKS) (retired)
+description: Use the HTTP application routing add-on to access applications deployed on Azure Kubernetes Service (AKS) (retired).
Last updated 04/05/2023
-# HTTP application routing add-on for Azure Kubernetes Service (AKS)
+# HTTP application routing add-on for Azure Kubernetes Service (AKS) (retired)
> [!CAUTION]
-> The HTTP application routing add-on is in the process of being retired and isn't recommended for production use. We recommend migrating to the [Application Routing add-on](./app-routing-migration.md) instead.
+> HTTP application routing add-on (preview) for Azure Kubernetes Service (AKS) will be [retired](https://azure.microsoft.com/updates/retirement-http-application-routing-addon-preview-for-aks-will-retire-03032025) on 03 March 2025. We recommend migrating to the [Application Routing add-on](./app-routing-migration.md) by that date.
The HTTP application routing add-on makes it easy to access applications that are deployed to your Azure Kubernetes Service (AKS) cluster by:
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
AKS uses the following rules for applying updates to installed add-ons:
| Name | Description | More details | ||||
-| http_application_routing | Configure ingress with automatic public DNS name creation for your AKS cluster. | [HTTP application routing add-on on Azure Kubernetes Service (AKS)][http-app-routing] |
+| web_application_routing | Use a managed NGINX ingress controller with your AKS cluster.| [Application Routing Overview][app-routing] |
+| ingress-appgw | Use Application Gateway Ingress Controller with your AKS cluster. | [What is Application Gateway Ingress Controller?][agic] |
+| keda | Use event-driven autoscaling for the applications on your AKS cluster. | [Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on][keda]|
| monitoring | Use Container Insights monitoring with your AKS cluster. | [Container insights overview][container-insights] |
-| virtual-node | Use virtual nodes with your AKS cluster. | [Use virtual nodes][virtual-nodes] |
| azure-policy | Use Azure Policy for AKS, which enables at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. | [Understand Azure Policy for Kubernetes clusters][azure-policy-aks] |
-| ingress-appgw | Use Application Gateway Ingress Controller with your AKS cluster. | [What is Application Gateway Ingress Controller?][agic] |
-| open-service-mesh | Use Open Service Mesh with your AKS cluster. | [Open Service Mesh AKS add-on][osm] |
| azure-keyvault-secrets-provider | Use Azure Keyvault Secrets Provider addon.| [Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster][keyvault-secret-provider] |
-| web_application_routing | Use a managed NGINX ingress controller with your AKS cluster.| [Application Routing Overview][app-routing] |
-| keda | Use event-driven autoscaling for the applications on your AKS cluster. | [Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on][keda]|
+| virtual-node | Use virtual nodes with your AKS cluster. | [Use virtual nodes][virtual-nodes] |
+| http_application_routing | Configure ingress with automatic public DNS name creation for your AKS cluster (retired). | [HTTP application routing add-on on Azure Kubernetes Service (AKS) (retired)][http-app-routing] |
+| open-service-mesh | Use Open Service Mesh with your AKS cluster (retired). | [Open Service Mesh AKS add-on (retired)][osm] |
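As a reminder of how the add-ons in this table are turned on, here's a minimal sketch using the `az aks enable-addons` command; the cluster and resource group names are placeholders, and some add-ons take extra parameters not shown here.

```azurecli-interactive
# Enable the Container Insights (monitoring) add-on on an existing cluster.
az aks enable-addons \
    --addons monitoring \
    --name myAKSCluster \
    --resource-group myResourceGroup
```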
## Extensions
For more details, see [Windows AKS partner solutions][windows-aks-partner-soluti
[spark-kubernetes]: https://spark.apache.org/docs/latest/running-on-kubernetes.html [managed-grafana]: ../managed-grafan [keda]: keda-about.md
-[web-app-routing]: web-app-routing.md
+[app-routing]: app-routing.md
[maintenance-windows]: planned-maintenance.md [release-tracker]: release-tracker.md [github-actions]: /azure/developer/github/github-actions [github-actions-aks]: kubernetes-action.md [az-aks-enable-addons]: /cli/azure/aks#az-aks-enable-addons
-[windows-aks-partner-solutions]: windows-aks-partner-solutions.md
+[windows-aks-partner-solutions]: windows-aks-partner-solutions.md
aks Keda Deploy Add On Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md
Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using Azure CLI
-description: Use Azure CLI to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS).
+ Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on using the Azure CLI
+description: Use the Azure CLI to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS).
Previously updated : 10/10/2022 Last updated : 09/26/2023
-# Install the Kubernetes Event-driven Autoscaling (KEDA) add-on by using Azure CLI
+# Install the Kubernetes Event-driven Autoscaling (KEDA) add-on using the Azure CLI
-This article shows you how to install the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS) by using Azure CLI. The article includes steps to verify that it's installed and running.
+This article shows you how to install the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS) using the Azure CLI.
[!INCLUDE [Current version callout](./includes/ked)]
-## Prerequisites
+## Before you begin
-- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).-- [Azure CLI installed](/cli/azure/install-azure-cli).-- Firewall rules are configured to allow access to the Kubernetes API server. ([learn more][aks-firewall-requirements])
+- You need an Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+- You need the [Azure CLI installed](/cli/azure/install-azure-cli).
+- Ensure you have firewall rules configured to allow access to the Kubernetes API server. For more information, see [Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters][aks-firewall-requirements].
+- [Install the `aks-preview` Azure CLI extension](#install-the-aks-preview-azure-cli-extension).
+- [Register the `AKS-KedaPreview` feature flag](#register-the-aks-kedapreview-feature-flag).
-## Install the aks-preview Azure CLI extension
+### Install the `aks-preview` Azure CLI extension
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-To install the aks-preview extension, run the following command:
+1. Install the `aks-preview` extension using the [`az extension add`][az-extension-add] command.
-```azurecli
-az extension add --name aks-preview
-```
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
-Run the following command to update to the latest version of the extension released:
+2. Update to the latest version of the `aks-preview` extension using the [`az extension update`][az-extension-update] command.
-```azurecli
-az extension update --name aks-preview
-```
+ ```azurecli-interactive
+ az extension update --name aks-preview
+ ```
-## Register the 'AKS-KedaPreview' feature flag
+### Register the `AKS-KedaPreview` feature flag
-Register the `AKS-KedaPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+1. Register the `AKS-KedaPreview` feature flag using the [`az feature register`][az-feature-register] command.
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
-```
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
+ ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+ It takes a few minutes for the status to show *Registered*.
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
-```
+2. Verify the registration status using the [`az feature show`][az-feature-show] command.
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"
+ ```
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-## Install the KEDA add-on with Azure CLI
-To install the KEDA add-on, use `--enable-keda` when creating or updating a cluster.
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
-The following example creates a *myResourceGroup* resource group. Then it creates a *myAKSCluster* cluster with the KEDA add-on.
+## Enable the KEDA add-on on your AKS cluster
-```azurecli-interactive
-az group create --name myResourceGroup --location eastus
+> [!NOTE]
+> While KEDA provides various customization options, the KEDA add-on currently provides basic common configuration.
+>
+> If you require custom configurations, you can manually edit the KEDA YAML files to customize the installation. **Azure doesn't offer support for custom configurations**.
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --enable-keda
-```
+### Create a new AKS cluster with KEDA add-on enabled
-For existing clusters, use `az aks update` with `--enable-keda` option. The following code shows an example.
+1. Create a resource group using the [`az group create`][az-group-create] command.
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --enable-keda
-```
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location eastus
+ ```
+
+2. Create a new AKS cluster using the [`az aks create`][az-aks-create] command and enable the KEDA add-on using the `--enable-keda` flag.
+
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --enable-keda
+ ```
+
+### Enable the KEDA add-on on an existing AKS cluster
+
+- Update an existing cluster using the [`az aks update`][az-aks-update] command and enable the KEDA add-on using the `--enable-keda` flag.
+
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --enable-keda
+ ```
## Get the credentials for your cluster
-Get the credentials for your AKS cluster by using the `az aks get-credentials` command. The following example command gets the credentials for *myAKSCluster* in the *myResourceGroup* resource group:
-
-```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
-```
-
-## Verify that the KEDA add-on is installed on your cluster
-
-To see if the KEDA add-on is installed on your cluster, verify that the `enabled` value is `true` for `keda` under `workloadAutoScalerProfile`.
-
-The following example shows the status of the KEDA add-on for *myAKSCluster* in *myResourceGroup*:
-
-```azurecli-interactive
-az aks show -g "myResourceGroup" --name myAKSCluster --query "workloadAutoScalerProfile.keda.enabled"
-```
-## Verify that KEDA is running on your cluster
-
-You can verify KEDA that's running on your cluster. Use `kubectl` to display the operator and metrics server installed in the AKS cluster under kube-system namespace. For example:
-
-```azurecli-interactive
-kubectl get pods -n kube-system
-```
-
-The following example output shows that the KEDA operator and metrics API server are installed in the AKS cluster along with its status.
-
-```output
-kubectl get pods -n kube-system
-
-keda-operator-********-k5rfv 1/1 Running 0 43m
-keda-operator-metrics-apiserver-*******-sj857 1/1 Running 0 43m
-```
-To verify the version of your KEDA, use `kubectl get crd/scaledobjects.keda.sh -o yaml `. For example:
-
-```azurecli-interactive
-kubectl get crd/scaledobjects.keda.sh -o yaml
-```
-The following example output shows the configuration of KEDA in the `app.kubernetes.io/version` label:
-
-```yaml
-kind: CustomResourceDefinition
-metadata:
- annotations:
- controller-gen.kubebuilder.io/version: v0.8.0
- creationTimestamp: "2022-06-08T10:31:06Z"
- generation: 1
- labels:
- addonmanager.kubernetes.io/mode: Reconcile
- app.kubernetes.io/component: operator
- app.kubernetes.io/name: keda-operator
- app.kubernetes.io/part-of: keda-operator
- app.kubernetes.io/version: 2.7.0
- name: scaledobjects.keda.sh
- resourceVersion: "2899"
- uid: 85b8dec7-c3da-4059-8031-5954dc888a0b
-spec:
- conversion:
- strategy: None
- group: keda.sh
- names:
- kind: ScaledObject
- listKind: ScaledObjectList
- plural: scaledobjects
- shortNames:
- - so
- singular: scaledobject
- scope: Namespaced
- # Redacted for simplicity
- ```
-
-While KEDA provides various customization options, the KEDA add-on currently provides basic common configuration.
-
-If you have requirement to run with another custom configurations, such as namespaces that should be watched or tweaking the log level, then you may edit the KEDA YAML manually and deploy it.
-
-However, when the installation is customized there will no support offered for custom configurations.
-
-## Disable KEDA add-on from your AKS cluster
-
-When you no longer need KEDA add-on in the cluster, use the `az aks update` command with--disable-keda option. This execution will disable KEDA workload auto-scaler.
-
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --disable-keda
-```
+- Get the credentials for your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+## Verify the KEDA add-on is installed on your cluster
+
+- Verify the KEDA add-on is installed on your cluster using the [`az aks show`][az-aks-show] command and set the `--query` parameter to `workloadAutoScalerProfile.keda.enabled`.
+
+ ```azurecli-interactive
+ az aks show -g myResourceGroup --name myAKSCluster --query "workloadAutoScalerProfile.keda.enabled"
+ ```
+
+ The following example output shows the KEDA add-on is installed on the cluster:
+
+ ```output
+ true
+ ```
+
+## Verify KEDA is running on your cluster
+
+- Verify the KEDA add-on is running on your cluster using the [`kubectl get pods`][kubectl] command.
+
+ ```azurecli-interactive
+ kubectl get pods -n kube-system
+ ```
+
+ The following example output shows the KEDA operator and metrics API server are installed on the cluster:
+
+ ```output
+ keda-operator-********-k5rfv 1/1 Running 0 43m
+ keda-operator-metrics-apiserver-*******-sj857 1/1 Running 0 43m
+ ```
+
+## Verify the KEDA version on your cluster
+
+- Verify the KEDA version using the `kubectl get crd/scaledobjects.keda.sh -o yaml` command.
+
+ ```azurecli-interactive
+ kubectl get crd/scaledobjects.keda.sh -o yaml
+ ```
+
+ The following condensed example output shows the configuration of KEDA in the `app.kubernetes.io/version` label:
+
+ ```output
+ apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.9.0
+ meta.helm.sh/release-name: aks-managed-keda
+ meta.helm.sh/release-namespace: kube-system
+ creationTimestamp: "2023-09-26T10:31:06Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/component: operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/name: keda-operator
+ app.kubernetes.io/part-of: keda-operator
+ app.kubernetes.io/version: 2.10.1
+ ...
+ ```
+
+## Disable the KEDA add-on on your AKS cluster
+
+- Disable the KEDA add-on on your cluster using the [`az aks update`][az-aks-update] command with the `--disable-keda` flag.
+
+ ```azurecli-interactive
+ az aks update \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --disable-keda
+ ```
## Next steps
-This article showed you how to install the KEDA add-on on an AKS cluster using Azure CLI. The steps to verify that KEDA add-on is installed and running are included. With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
-You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
+This article showed you how to install the KEDA add-on on an AKS cluster using the Azure CLI.
+
+With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps.
+
+For information on KEDA troubleshooting, see [Troubleshoot the Kubernetes Event-driven Autoscaling (KEDA) add-on][keda-troubleshoot].
<!-- LINKS - internal --> [az-provider-register]: /cli/azure/provider#az-provider-register [az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show [az-aks-create]: /cli/azure/aks#az-aks-create
-[az aks install-cli]: /cli/azure/aks#az-aks-install-cli
-[az aks get-credentials]: /cli/azure/aks#az-aks-get-credentials
-[az aks update]: /cli/azure/aks#az-aks-update
-[az-group-delete]: /cli/azure/group#az-group-delete
[keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context [aks-firewall-requirements]: outbound-rules-control-egress.md#azure-global-required-network-rules-
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[az-group-create]: /cli/azure/group#az-group-create
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+
+<!-- LINKS - external -->
[kubectl]: https://kubernetes.io/docs/user-guide/kubectl
-[keda]: https://keda.sh/
-[keda-scalers]: https://keda.sh/docs/scalers/
[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
aks Open Ai Secure Access Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-secure-access-quickstart.md
Title: Secure access to Azure OpenAI from Azure Kubernetes Service (AKS) description: Learn how to secure access to Azure OpenAI from Azure Kubernetes Service (AKS). + Last updated 09/18/2023
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
description: Learn how to create and use a static IP address with the Azure Kube
+ Last updated 09/22/2023- #Customer intent: As a cluster operator or developer, I want to create and manage static IP address resources in Azure that I can use beyond the lifecycle of an individual Kubernetes service deployed in an AKS cluster.
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workl
description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity. Previously updated : 07/26/2023 Last updated : 09/27/2023 # Deploy and configure workload identity on an Azure Kubernetes Service (AKS) cluster
EOF
``` > [!IMPORTANT]
-> Ensure your application pods using workload identity have added the following label [azure.workload.identity/use: "true"] to your running pods/deployments, otherwise the pods will fail once restarted.
+> Ensure your application pods that use workload identity include the label `azure.workload.identity/use: "true"` in the pod spec. Otherwise the pods fail after they're restarted.
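For reference, here's a minimal sketch of a pod spec that carries the required label; the pod name, namespace, service account, and image are placeholders rather than values from this article.

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sample-workload-identity-pod        # placeholder name
  namespace: default
  labels:
    azure.workload.identity/use: "true"     # required so the webhook mutates the pod
spec:
  serviceAccountName: workload-identity-sa  # placeholder: your federated service account
  containers:
  - name: app
    image: mcr.microsoft.com/azure-cli:latest   # placeholder image
    command: ["sleep", "3600"]
EOF
```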
```bash kubectl apply -f <your application>
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
This article helps you understand this new authentication feature, and reviews t
In the Azure Identity client libraries, choose one of the following approaches: -- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`.
+- Use `DefaultAzureCredential`, which attempts to use the `WorkloadIdentityCredential`.
- Create a `ChainedTokenCredential` instance that includes `WorkloadIdentityCredential`. - Use `WorkloadIdentityCredential` directly.
The following table provides the **minimum** package version required for each l
| Node.js | [@azure/identity](/javascript/api/overview/azure/identity-readme) | 3.2.0 | | Python | [azure-identity](/python/api/overview/azure/identity-readme) | 1.13.0 |
-In the following code samples, `DefaultAzureCredential` is used. This credential type will use the environment variables injected by the Azure Workload Identity mutating webhook to authenticate with Azure Key Vault.
+In the following code samples, `DefaultAzureCredential` is used. This credential type uses the environment variables injected by the Azure Workload Identity mutating webhook to authenticate with Azure Key Vault.
## [.NET](#tab/dotnet)
The following diagram summarizes the authentication sequence using OpenID Connec
### Webhook Certificate Auto Rotation
-Similar to other webhook addons, the certificate will be rotated by cluster certificate [auto rotation][auto-rotation] operation.
+Similar to other webhook addons, the certificate is rotated by cluster certificate [auto rotation][auto-rotation] operation.
## Service account labels and annotations
All annotations are optional. If the annotation isn't specified, the default val
### Pod labels > [!NOTE]
-> For applications using Workload Identity it is now required to add the label 'azure.workload.identity/use: "true"' pod label in order for AKS to move Workload Identity to a "Fail Close" scenario before GA to provide a consistent and reliable behavior for pods that need to use workload identity.
+> For applications using workload identity, you must add the label `azure.workload.identity/use: "true"` to the pod spec so that AKS can move workload identity to a *Fail Close* scenario and provide consistent, reliable behavior for pods that need to use workload identity. Otherwise the pods fail after they're restarted.
|Label |Description |Recommended value |Required | |||||
-|`azure.workload.identity/use` | This label is required in the pod template spec. Only pods with this label will be mutated by the azure-workload-identity mutating admission webhook to inject the Azure specific environment variables and the projected service account token volume. |true |Yes |
+|`azure.workload.identity/use` | This label is required in the pod template spec. Only pods with this label are mutated by the azure-workload-identity mutating admission webhook to inject the Azure specific environment variables and the projected service account token volume. |true |Yes |
### Pod annotations
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
Last updated 08/25/2023 -+ # How to integrate Azure API Management with Azure Application Insights
application-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview.md
Previously updated : 11/15/2022 Last updated : 09/27/2023 #Customer intent: As an IT administrator, I want to learn about Azure Application Gateways and what I can use them for. # What is Azure Application Gateway?
-Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 - TCP and UDP) and route traffic based on source IP address and port, to a destination IP address and port.
+Azure Application Gateway is a web traffic (OSI layer 7) load balancer that enables you to manage traffic to your web applications. Traditional load balancers operate at the transport layer (OSI layer 4 - TCP and UDP) and route traffic based on source IP address and port, to a destination IP address and port.
Application Gateway can make routing decisions based on additional attributes of an HTTP request, for example URI path or host headers. For example, you can route traffic based on the incoming URL. So if `/images` is in the incoming URL, you can route traffic to a specific set of servers (known as a pool) configured for images. If `/video` is in the URL, that traffic is routed to another pool that's optimized for videos.
application-gateway Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-terraform.md
Title: 'Quickstart: Direct web traffic using Terraform'
+ Title: 'Quickstart: Direct web traffic with Azure Application Gateway - Terraform'
description: In this quickstart, you learn how to use Terraform to create an Azure Application Gateway that directs web traffic to virtual machines in a backend pool.
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
description: Article describes how to upgrade a directly connected Azure Arc dat
-+
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0
Changes made for this version: -- Updated SSH key entry to use the [updated RSA SSH host key](https://bitbucket.org/blog/ssh-host-key-changes) to prevent failures in configurations with `ssh` authentication type for Bitbucket.
+- Updated SSH key entry to use the [Ed25519 SSH host key](https://bitbucket.org/blog/ssh-host-key-changes) to prevent failures in configurations with `ssh` authentication type for Bitbucket.
### 1.7.6 (August 2023)
azure-arc License Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md
Title: License provisioning guidelines for Extended Security Updates for Windows Server 2012 description: Learn about license provisioning guidelines for Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 09/14/2023 Last updated : 09/27/2023
An additional scenario (scenario 1, below) is a candidate for VM/Virtual core li
> In all cases, you are required to attest to their conformance with SA or SPLA. There is no exception for these requirements. Software Assurance or an equivalent Server Subscription is required for you to purchase Extended Security Updates on-premises and in hosted environments. You will be able to purchase Extended Security Updates from Enterprise Agreement (EA), Enterprise Subscription Agreement (EAS), a Server & Cloud Enrollment (SCE), and Enrollment for Education Solutions (EES). On Azure, you do not need Software Assurance to get free Extended Security Updates, but Software Assurance or Server Subscription is required to take advantage of the Azure Hybrid Benefit. >
+## Cost savings with migration and modernization of workloads
+
+As you migrate and modernize your Windows Server 2012 and Windows Server 2012 R2 infrastructure through the end of 2023, you can take advantage of the flexible monthly billing of Windows Server 2012 ESUs enabled by Azure Arc to reduce costs.
+
+When servers no longer require ESUs because they've been migrated to Azure, Azure VMware Solution (AVS), or Azure Stack HCI (where they're eligible for free ESUs), or upgraded to Windows Server 2016 or later, you can modify the number of cores associated with a license or delete/deactivate licenses. You can also link the license to a new scope of additional servers. See [Programmatically deploy and manage Azure Arc Extended Security Updates licenses](api-extended-security-updates.md) to learn more.
+
+> [!NOTE]
+> This process is not automatic; billing is tied to the activated licenses and you are responsible for modifying your provisioned licensing to take advantage of cost savings.
+>
## Scenario based examples: Compliant and Cost Effective Licensing ### Scenario 1: Eight modern 32-core hosts (not Windows Server 2012). While each of these hosts are running four 8-core VMs, only one VM on each host is running Windows Server 2012 R2
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Title: How to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 07/12/2023 Last updated : 09/27/2023
Other Azure services through Azure Arc-enabled servers are available, with offer
## Prepare delivery of ESUs
-To prepare for this new offer, you need to plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) and establishing a connection to Azure.
+To prepare for this new offer, you need to plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) and establishing a connection to Azure.
- **Deployment options:** There are several at-scale onboarding options for Azure Arc-enabled servers, including running a [Custom Task Sequence](onboard-configuration-manager-custom-task.md) through Configuration Manager and deploying a [Scheduled Task through Group Policy](onboard-group-policy-powershell.md).
azure-cache-for-redis Cache Best Practices Enterprise Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-enterprise-tiers.md
Title: Best practices for the Enterprise tiers
-description: Learn about the Azure Cache for Redis Enterprise and Enterprise Flash tiers
+description: Learn best practices for using the high-performance Azure Cache for Redis Enterprise and Enterprise Flash tiers
Previously updated : 03/09/2023 Last updated : 09/26/2023
-# Best Practices for the Enterprise and Enterprise Flash tiers of Azure Cache for Redis
+# What are the best practices for the Enterprise and Enterprise Flash tiers?
+
+Here are best practices for using the Enterprise and Enterprise Flash tiers of Azure Cache for Redis.
## Zone Redundancy We strongly recommend that you deploy new caches in a [zone redundant](cache-high-availability.md) configuration. Zone redundancy ensures that Redis Enterprise nodes are spread among three availability zones, boosting redundancy from data center-level outages. Using zone redundancy increases availability. For more information, see [Service Level Agreements (SLA) for Online Services](https://azure.microsoft.com/support/legal/sla/cache/v1_1/).
-Zone redundancy is important on the Enterprise tier because your cache instance always uses at least three nodes. Two nodes are data nodes, which hold your data, and a _quorum node_. Increasing capacity scales the number of data nodes in even-number increments.
+Zone redundancy is important on the Enterprise tier because your cache instance always uses at least three nodes: two data nodes, which hold your data, and a _quorum node_. Increasing capacity scales the number of data nodes in even-number increments.
The quorum node monitors the data nodes and automatically selects a new primary node if a failover occurs. Zone redundancy ensures that the nodes are distributed evenly across three availability zones, minimizing the potential for quorum loss. Customers aren't charged for the quorum node and there's no other charge for using zone redundancy beyond [intra-zonal bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/). ## Scaling
-In the Enterprise and Enterprise Flash tiers of Azure Cache for Redis, we recommend prioritizing scaling up over scaling out. Prioritize scaling up because the Enterprise tiers are built on Redis Enterprise, which is able to utilize more CPU cores in larger VMs.
-
-Conversely, the opposite recommendation is true for the Basic, Standard, and Premium tiers, which are built on open-source Redis. In those tiers, prioritizing scaling out over scaling up is recommended in most cases.
+In the Enterprise and Enterprise Flash tiers of Azure Cache for Redis, we recommend prioritizing _scaling up_ over _scaling out_. Prioritize scaling up because the Enterprise tiers are built on Redis Enterprise, which is able to utilize more CPU cores in larger VMs.
+Conversely, the opposite recommendation is true for the Basic, Standard, and Premium tiers, which are built on open-source Redis. In those tiers, prioritizing _scaling out_ over _scaling up_ is recommended in most cases.
## Sharding and CPU utilization
-In the Basic, Standard, and Premium tiers of Azure Cache for Redis, determining the number of virtual CPUs (vCPUs) utilized is straightforward. Each Redis node runs on a dedicated VM. The Redis server process is single-threaded, utilizing one vCPU on each primary and each replica node. The other vCPUs on the VM are still used for other activities, such as workflow coordination for different tasks, health monitoring, and TLS load, among others.
+In the Basic, Standard, and Premium tiers of Azure Cache for Redis, determining the number of virtual CPUs (vCPUs) utilized is straightforward. Each Redis node runs on a dedicated VM. The Redis server process is single-threaded, utilizing one vCPU on each primary and each replica node. The other vCPUs on the VM are still used for other activities, such as workflow coordination for different tasks, health monitoring, and TLS load, among others.
-When you use clustering, the effect is to spread data across more nodes with one shard per node. By increasing the number of shards, you linearly increase the number of vCPUs you use, based on the number of shards in the cluster.
+When you use clustering, the effect is to spread data across more nodes with one shard per node. By increasing the number of shards, you linearly increase the number of vCPUs you use, based on the number of shards in the cluster.
-Redis Enterprise, on the other hand, can use multiple vCPUs for the Redis instance itself. In other words, all tiers of Azure Cache for Redis can use multiple vCPUs for background and monitoring tasks, but only the Enterprise and Enterprise Flash tiers are able to utilize multiple vCPUs per VM for Redis shards. The table shows the number of effective vCPUs used for each SKU and capacity (that is, scale-out) configuration.
+Redis Enterprise, on the other hand, can use multiple vCPUs for the Redis instance itself. In other words, all tiers of Azure Cache for Redis can use multiple vCPUs for background and monitoring tasks, but only the Enterprise and Enterprise Flash tiers are able to utilize multiple vCPUs per VM for Redis shards. The table shows the number of effective vCPUs used for each SKU and capacity (that is, scale-out) configuration.
-The tables show the number of vCPUs used for the primary shards, not the replica shards. Shards don't map one-to-one to the number of vCPUs. The tables only illustrate vCPUs, not shards. Some configurations use more shards than available vCPUs to boost performance in some usage scenarios.
+The tables show the number of vCPUs used for the primary shards, not the replica shards. Shards don't map one-to-one to the number of vCPUs. The tables only illustrate vCPUs, not shards. Some configurations use more shards than available vCPUs to boost performance in some usage scenarios.
-### E10
+### E5
+|Capacity|Effective vCPUs|
+|:|:|
+| 2 | 1 |
+| 4 | 2 |
+| 6 | 6 |
+### E10
|Capacity|Effective vCPUs| |:|:| | 2 | 2 |
The tables show the number of vCPUs used for the primary shards, not the replica
| 8 | 16 | | 10 | 20 | - ### E20+ |Capacity|Effective vCPUs| |:|:| |2| 2|
The tables show the number of vCPUs used for the primary shards, not the replica
|8|30 | |10|30| - ### E100+ |Capacity|Effective vCPUs| |:|:| |2| 6|
The tables show the number of vCPUs used for the primary shards, not the replica
|8|30| |10|30|
+### E200
+|Capacity|Effective vCPUs|
+|:|:|
+|2|30|
+|4|60|
+|6|60|
+|8|120|
+|10|120|
+
+### E400
+|Capacity|Effective vCPUs|
+|:|:|
+|2|60|
+|4|120|
+|6|120|
+|8|240|
+|10|240|
+ ### F300+ |Capacity|Effective vCPUs| |:|:| |3| 6| |9|30| ### F700+ |Capacity|Effective vCPUs| |:|:| |3| 30| |9| 30| ### F1500+ |Capacity|Effective vCPUs | |:|:| |3| 30 | |9| 90 | - ## Clustering on Enterprise Enterprise and Enterprise Flash tiers are inherently clustered, in contrast to the Basic, Standard, and Premium tiers. The implementation depends on the clustering policy that is selected.
-The Enterprise tiers offer two choices for Clustering Policy: _OSS_ and _Enterprise_. _OSS_ cluster policy is recommended for most applications because it supports higher maximum throughput, but there are advantages and disadvantages to each version.
+The Enterprise tiers offer two choices for Clustering Policy: _OSS_ and _Enterprise_. _OSS_ cluster policy is recommended for most applications because it supports higher maximum throughput, but there are advantages and disadvantages to each version.
-The _OSS clustering policy_ implements the same [Redis Cluster API](https://redis.io/docs/reference/cluster-spec/) as open-source Redis. The Redis Cluster API allows the Redis client to connect directly to each Redis node, minimizing latency and optimizing network throughput. As a result, near-linear scalability is obtained when scaling out the cluster with more nodes. The OSS clustering policy generally provides the best latency and throughput performance, but requires your client library to support Redis Clustering. OSS clustering policy also can't be used with the [RediSearch module](cache-redis-modules.md).
+The _OSS clustering policy_ implements the same [Redis Cluster API](https://redis.io/docs/reference/cluster-spec/) as open-source Redis. The Redis Cluster API allows the Redis client to connect directly to each Redis node, minimizing latency and optimizing network throughput. As a result, near-linear scalability is obtained when scaling out the cluster with more nodes. The OSS clustering policy generally provides the best latency and throughput performance, but requires your client library to support Redis Clustering. OSS clustering policy also can't be used with the [RediSearch module](cache-redis-modules.md).
-The _Enterprise clustering policy_ is a simpler configuration that utilizes a single endpoint for all client connections. Using the Enterprise clustering policy routes all requests to a single Redis node that is then used as a proxy, internally routing requests to the correct node in the cluster. The advantage of this approach is that Redis client libraries donΓÇÖt need to support Redis Clustering to take advantage of multiple nodes. The downside is that the single node proxy can be a bottleneck, in either compute utilization or network throughput. The Enterprise clustering policy is the only one that can be used with the [RediSearch module](cache-redis-modules.md).
+The _Enterprise clustering policy_ is a simpler configuration that utilizes a single endpoint for all client connections. Using the Enterprise clustering policy routes all requests to a single Redis node that is then used as a proxy, internally routing requests to the correct node in the cluster. The advantage of this approach is that Redis client libraries don't need to support Redis Clustering to take advantage of multiple nodes. The downside is that the single node proxy can be a bottleneck, in either compute utilization or network throughput. The Enterprise clustering policy is the only one that can be used with the [RediSearch module](cache-redis-modules.md).
## Multi-key commands
-Because the Enterprise tiers use a clustered configuration, you might see `CROSSSLOT` exceptions on commands that operate on multiple keys. Behavior varies depending on the clustering policy used. If you use the OSS clustering policy, multi-key commands require all keys to be mapped to [the same hash slot](https://docs.redis.com/latest/rs/databases/configure/oss-cluster-api/#multi-key-command-support).
+Because the Enterprise tiers use a clustered configuration, you might see `CROSSSLOT` exceptions on commands that operate on multiple keys. Behavior varies depending on the clustering policy used. If you use the OSS clustering policy, multi-key commands require all keys to be mapped to [the same hash slot](https://docs.redis.com/latest/rs/databases/configure/oss-cluster-api/#multi-key-command-support).
You might also see `CROSSSLOT` errors with Enterprise clustering policy. Only the following multi-key commands are allowed across slots with Enterprise clustering: `DEL`, `MSET`, `MGET`, `EXISTS`, `UNLINK`, and `TOUCH`.
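As an illustrative sketch (not part of the article), keys that share a _hash tag_ (the part in braces) always map to the same hash slot, so multi-key commands on those keys avoid `CROSSSLOT` errors under the OSS clustering policy. The host name and access key are placeholders:

```python
from redis.cluster import RedisCluster

# OSS cluster policy exposes the Redis Cluster API; Enterprise-tier caches
# listen on port 10000 and require TLS. Replace the placeholders.
rc = RedisCluster(
    host="<cache-name>.<region>.redisenterprise.cache.azure.net",
    port=10000,
    password="<access-key>",
    ssl=True,
)

# Both keys contain the hash tag "{user:42}", so they hash to the same slot
# and MGET succeeds without a CROSSSLOT error.
rc.set("{user:42}:name", "Ada")
rc.set("{user:42}:email", "ada@example.com")
print(rc.mget("{user:42}:name", "{user:42}:email"))
```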
For example, consider these tips:
- Identify in advance which other cache in the geo-replication group to switch over to if a region goes down. - Ensure that firewalls are set so that any applications and clients can access the identified backup cache.-- Each cache in the geo-replication group has its own access key. Determine how the application will switch access keys when targeting a backup cache.
+- Each cache in the geo-replication group has its own access key. Determine how the application switches to different access keys when targeting a backup cache.
- If a cache in the geo-replication group goes down, a buildup of metadata starts to occur in all the caches in the geo-replication group. The metadata can't be discarded until writes can be synced again to all caches. You can prevent the metadata build-up by _force unlinking_ the cache that is down. Consider monitoring the available memory in the cache and unlinking if there's memory pressure, especially for write-heavy workloads. It's also possible to use a [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker). Use the pattern to automatically redirect traffic away from a cache experiencing a region outage, and towards a backup cache in the same geo-replication group. Use Azure services such as [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) or [Azure Load Balancer](../load-balancer/load-balancer-overview.md) to enable the redirection.
It's also possible to use a [circuit breaker pattern](/azure/architecture/patter
The [data persistence](cache-how-to-premium-persistence.md) feature in the Enterprise and Enterprise Flash tiers is designed to automatically provide a quick recovery point for data when a cache goes down. The quick recovery is made possible by storing the RDB or AOF file in a managed disk that is mounted to the cache instance. Persistence files on the disk aren't accessible to users.
-Many customers want to use persistence to take periodic backups of the data on their cache. We don't recommend that you use data persistence in this way. Instead, use the [import/export](cache-how-to-import-export-data.md) feature. You can export copies of cache data in RDB format directly into your chosen storage account and trigger the data export as frequently as you require. Export can be triggered either from the portal or by using the CLI, PowerShell, or SDK tools.
+Many customers want to use persistence to take periodic backups of the data on their cache. We don't recommend that you use data persistence in this way. Instead, use the [import/export](cache-how-to-import-export-data.md) feature. You can export copies of cache data in RDB format directly into your chosen storage account and trigger the data export as frequently as you require. Export can be triggered either from the portal or by using the CLI, PowerShell, or SDK tools.
-## Next steps
+## Related content
- [Development](cache-best-practices-development.md)--
azure-cache-for-redis Cache Overview Vector Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview-vector-similarity.md
+
+ Title: About Vector Embeddings and Vector Search in Azure Cache for Redis
+description: Learn about Azure Cache for Redis to store vector embeddings and provide similarity search.
++++ Last updated : 09/18/2023++
+# About Vector Embeddings and Vector Search in Azure Cache for Redis
+
+Vector similarity search (VSS) has become a popular use-case for AI-driven applications. Azure Cache for Redis can be used to store vector embeddings and compare them through vector similarity search. This article is a high-level introduction to the concept of vector embeddings, vector comparison, and how Redis can be used as a seamless part of a vector similarity workflow.
+
+For a tutorial on how to use Azure Cache for Redis and Azure OpenAI to perform vector similarity search, see [Tutorial: Conduct vector similarity search on Azure OpenAI embeddings using Azure Cache for Redis](./cache-tutorial-vector-similarity.md).
+
+## Scope of Availability
+
+|Tier | Basic / Standard | Premium |Enterprise | Enterprise Flash |
+| |::|:-:|::|::|
+|Available | No | No | Yes | Yes (preview) |
+
+Vector search capabilities in Redis require [Redis Stack](https://redis.io/docs/about/about-stack/), specifically the [RediSearch](https://redis.io/docs/interact/search-and-query/) module. This capability is only available in the [Enterprise tiers of Azure Cache for Redis](./cache-redis-modules.md).
+
+## What are vector embeddings?
+
+### Concept
+
+Vector embeddings are a fundamental concept in machine learning and natural language processing that enable the representation of data, such as words, documents, or images, as numerical vectors in a high-dimensional vector space. The primary idea behind vector embeddings is to capture the underlying relationships and semantics of the data by mapping them to points in this vector space. In simpler terms, that means converting your text or images into a sequence of numbers that represents the data, and then comparing the different number sequences. This allows complex data to be manipulated and analyzed mathematically, making it easier to perform tasks like similarity comparison, recommendation, and classification.
+
+<!-- TODO - Add image example -->
+
+Each machine learning model classifies data and produces the vector in a different manner. Furthermore, it's typically not possible to determine exactly what semantic meaning each vector dimension represents. But because the model is consistent between each block of input data, similar words, documents, or images have vectors that are also similar. For example, the words `basketball` and `baseball` have embeddings vectors much closer to each other than a word like `rainforest`.
+
+### Vector comparison
+
+Vectors can be compared using various metrics. The most popular way to compare vectors is to use [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity), which measures the cosine of the angle between two vectors in a multi-dimensional space. The closer the vectors, the smaller the angle. Other common distance metrics include [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) and [inner product](https://en.wikipedia.org/wiki/Inner_product_space).
+
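+As a quick illustrative sketch (not part of the original article), cosine similarity can be computed directly with NumPy. The three-dimensional vectors are made up for illustration; real embeddings have far more dimensions:
+
+```python
+import numpy as np
+
+def cosine_similarity(a, b):
+    """Cosine of the angle between vectors a and b (1.0 means identical direction)."""
+    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+# Tiny made-up "embeddings" for illustration only.
+basketball = [0.9, 0.1, 0.2]
+baseball   = [0.85, 0.15, 0.25]
+rainforest = [0.1, 0.9, 0.7]
+
+print(cosine_similarity(basketball, baseball))    # close to 1.0
+print(cosine_similarity(basketball, rainforest))  # noticeably smaller
+```
+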
+### Generating embeddings
+
+Many machine learning models support embeddings APIs. For an example of how to create vector embeddings using Azure OpenAI Service, see [Learn how to generate embeddings with Azure OpenAI](../ai-services/openai/how-to/embeddings.md).
+
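+As a hedged sketch (not part of the original article), the pre-1.0 `openai` Python package can generate an embedding against an Azure OpenAI deployment; the endpoint, key, and deployment name are placeholders:
+
+```python
+import openai
+
+openai.api_type = "azure"
+openai.api_base = "https://<your-resource-name>.openai.azure.com"
+openai.api_key = "<your-azure-openai-key>"
+openai.api_version = "2023-05-15"
+
+# "engine" is the name of your text-embedding-ada-002 deployment.
+response = openai.Embedding.create(
+    input="The quick brown fox jumps over the lazy dog",
+    engine="<your-embedding-deployment-name>",
+)
+embedding = response["data"][0]["embedding"]
+print(len(embedding))  # 1536 dimensions for text-embedding-ada-002
+```
+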
+## What is a vector database?
+
+A vector database is a database that can store, manage, retrieve, and compare vectors. Vector databases must be able to efficiently store a high-dimensional vector and retrieve it with minimal latency and high throughput. Non-relational datastores are most commonly used as vector databases, although it's possible to use relational databases like PostgreSQL, for example, with the [pgvector](https://github.com/pgvector/pgvector) extension.
+
+### Index method
+
+Vector databases need to index data for fast search and retrieval. There are several common indexing methods, including:
+
+- **K-Nearest Neighbors (KNN)** - an exhaustive method that provides the most precision but with higher computational cost.
+- **Approximate Nearest Neighbors (ANN)** - a more efficient method that trades some precision for greater speed and lower processing overhead. A sketch of both index types follows this list.
+
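+For a hedged illustration (not from the article), redis-py can create both index types through the RediSearch commands; the connection values, index names, and field names are placeholders, and the dimension assumes a 1536-dimension embedding model:
+
+```python
+import redis
+from redis.commands.search.field import TextField, VectorField
+from redis.commands.search.indexDefinition import IndexDefinition, IndexType
+
+r = redis.Redis(
+    host="<cache-name>.<region>.redisenterprise.cache.azure.net",
+    port=10000,
+    password="<access-key>",
+    ssl=True,
+)
+
+vector_params = {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"}
+
+# Exhaustive KNN index: most precise, higher query cost.
+r.ft("idx:knn").create_index(
+    fields=[TextField("title"), VectorField("embedding", "FLAT", vector_params)],
+    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
+)
+
+# Approximate (ANN) index using HNSW: faster queries, slightly less precise.
+r.ft("idx:ann").create_index(
+    fields=[TextField("title"), VectorField("embedding", "HNSW", vector_params)],
+    definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
+)
+```
+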
+### Search capabilities
+
+Finally, vector databases execute vector searches by using the chosen vector comparison method to return the most similar vectors. Some vector databases can also perform _hybrid_ searches by first narrowing results based on characteristics or metadata also stored in the database before conducting the vector search. This is a way to make the vector search more effective and customizable. For example, a vector search could be limited to only vectors with a specific tag in the database, or vectors with geolocation data in a certain region.
+
+## Vector search key scenarios
+
+Vector similarity search can be used in multiple applications. Some common use-cases include:
+
+- **Semantic Q&A**. Create a chatbot that can respond to questions about your own data. For instance, a chatbot that can respond to employee questions on their healthcare coverage. Hundreds of pages of dense healthcare coverage documentation can be split into chunks, converted into embeddings vectors, and searched based on vector similarity. The resulting documents can then be summarized for employees using another large language model (LLM). [Semantic Q&A Example](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/vector-similarity-search-with-azure-cache-for-redis-enterprise/ba-p/3822059)
+- **Document Retrieval**. Use the deeper semantic understanding of text provided by LLMs to provide a richer document search experience where traditional keyword-based search falls short. [Document Retrieval Example](https://github.com/RedisVentures/redis-arXiv-search)
+- **Product Recommendation**. Find similar products or services to recommend based on past user activities, like search history or previous purchases. [Product Recommendation Example](https://github.com/RedisVentures/LLM-Recommender)
+- **Visual Search**. Search for products that look similar to a picture taken by a user or a picture of another product. [Visual Search Example](https://github.com/RedisVentures/redis-product-search)
+- **Semantic Caching**. Reduce the cost and latency of LLMs by caching LLM completions. LLM queries are compared using vector similarity. If a new query is similar enough to a previously cached query, the cached query is returned. [Semantic Caching example using LangChain](https://python.langchain.com/docs/integrations/llms/llm_caching#redis-cache)
+- **LLM Conversation Memory**. Persist conversation history with an LLM as embeddings in a vector database. Your application can use vector search to pull relevant history or "memories" into the response from the LLM. [LLM Conversation Memory example](https://github.com/continuum-llms/chatgpt-memory)
+
+## Why choose Azure Cache for Redis for storing and searching vectors?
+
+Azure Cache for Redis can be used effectively as a vector database to store embeddings vectors and to perform vector similarity searches. In many ways, Redis is naturally a great choice in this area. It's extremely fast because it runs in-memory, unlike other vector databases that run on-disk. This can be useful when processing large datasets! Redis is also battle-hardened. Support for vector storage and search has been available for years, and many key machine learning frameworks like [LangChain](https://python.langchain.com/docs/integrations/vectorstores/redis) and [LlamaIndex](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/RedisIndexDemo.html) feature rich integrations with Redis. For example, the Redis LangChain integration [automatically generates an index schema for metadata](https://python.langchain.com/docs/integrations/vectorstores/redis#inspecting-the-created-index) passed in when using Redis as a vector store. This makes it much easier to filter results based on metadata.
+
+Redis has a wide range of vector search capabilities through the [RediSearch module](cache-redis-modules.md#redisearch), which is available in the Enterprise tier of Azure Cache for Redis. These include:
+
+- Multiple distance metrics, including `Euclidean`, `Cosine`, and `Inner Product`.
+- Support for both KNN (using `FLAT`) and ANN (using `HNSW`) indexing methods.
+- Vector storage in hash or JSON data structures
+- Top K queries
+- [Vector range queries](https://redis.io/docs/interact/search-and-query/search/vectors/#creating-a-vss-range-query) (i.e., find all items within a specific vector distance)
+- Hybrid search with [powerful query features](https://redis.io/docs/interact/search-and-query/) such as:
+ - Geospatial filtering
+ - Numeric and text filters
+ - Prefix and fuzzy matching
+ - Phonetic matching
+ - Boolean queries
+
+Additionally, Redis is often an economical choice because it's already so commonly used for caching or session store applications. In these scenarios, it can pull double-duty by serving a typical caching role while simultaneously handling vector search applications.
+
+## What are my other options for storing and searching for vectors?
+
+There are multiple other solutions on Azure for vector storage and search. These include:
+
+- [Azure Cognitive Search](../search/vector-search-overview.md)
+- [Azure Cosmos DB](../cosmos-db/mongodb/vcore/vector-search.md) using the MongoDB vCore API
+- [Azure Database for PostgreSQL - Flexible Server](../postgresql/flexible-server/how-to-use-pgvector.md) using `pgvector`
+
+## Next Steps
+
+The best way to get started with embeddings and vector search is to try it yourself!
+
+> [!div class="nextstepaction"]
+> [Tutorial: Conduct vector similarity search on Azure OpenAI embeddings using Azure Cache for Redis](./cache-tutorial-vector-similarity.md)
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
Previously updated : 03/28/2023 Last updated : 09/26/2023
-# About Azure Cache for Redis
+# What is Azure Cache for Redis?
Azure Cache for Redis provides an in-memory data store based on the [Redis](https://redis.io/) software. Redis improves the performance and scalability of an application that uses backend data stores heavily. It's able to process large volumes of application requests by keeping frequently accessed data in the server memory, which can be written to and read from quickly. Redis brings a critical low-latency and high-throughput data storage solution to modern applications.
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
Consider the following options when choosing an Azure Cache for Redis tier: -- **Memory**: The Basic and Standard tiers offer 250 MB ΓÇô 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tiers 12 GB - 14 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).-- **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).-- **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis uses other cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation that cause timeouts in your application.-- **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).
+- **Memory**: The Basic and Standard tiers offer 250 MB ΓÇô 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tier 4 GB - 2 TB, and the Enterprise Flash tier 300 GB - 4.5 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
+- **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. The Enterprise tier typically has the best performance for most workloads, especially with larger cache instances. For more information, see [Performance testing](cache-best-practices-performance.md).
+- **Dedicated core for Redis server**: All caches except C0 run dedicated vCPUs. The Basic, Standard, and Premium tiers run open source Redis, which by design uses only one thread for command processing. On these tiers, having more vCPUs usually improves throughput performance because Azure Cache for Redis uses other vCPUs for I/O processing or for OS processes. However, adding more vCPUs per instance may not produce linear performance increases. Scaling out usually boosts performance more than scaling up in these tiers. Enterprise and Enterprise Flash tier caches run on Redis Enterprise, which is able to utilize multiple vCPUs per instance and can significantly increase performance over the other tiers. For the Enterprise and Enterprise Flash tiers, scaling up is recommended before scaling out. For more information, see [Sharding and CPU utilization](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization).
+- **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. Higher bandwidth limits help you avoid network saturation that causes timeouts in your application. For more information, see [Performance testing](cache-best-practices-performance.md).
- **Maximum number of client connections**: The Premium and Enterprise tiers offer the maximum numbers of clients that can connect to Redis, offering higher numbers of connections for larger sized caches. Clustering increases the total amount of network bandwidth available for a clustered cache. - **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss. - **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Data persistence can be enabled through Azure portal and CLI. For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md).-- **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).
+- **Network isolation**: Azure Private Link and Virtual Network (VNet) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNet allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).
- **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/), [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/), and [RedisJSON](https://docs.redis.com/latest/modules/redisjson/). These modules add new data types and functionality to Redis. You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to scale - Basic, Standard, and Premium tiers](cache-how-to-scale.md#how-to-scalebasic-standard-and-premium-tiers). ### Special considerations for Enterprise tiers
-The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis from Redis Inc. Customers obtain and pay for a license to this software through an Azure Marketplace offer. Azure Cache for Redis manages the license acquisition so that you won't have to do it separately. To purchase in the Azure Marketplace, you must have the following prerequisites:
+The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis from Redis Inc. Customers obtain and pay for a license to this software through an Azure Marketplace offer. Azure Cache for Redis manages the license acquisition so that you don't have to do it separately. To purchase in the Azure Marketplace, you must have the following prerequisites:
- Your Azure subscription has a valid payment instrument. Azure credits or free MSDN subscriptions aren't supported. - Your organization allows [Azure Marketplace purchases](../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases).
The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis fro
Azure Cache for Redis is continually expanding into new regions. To check the availability by region, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=redis-cache&regions=all).
-## Next steps
+## Related content
- [Create an open-source Redis cache](quickstart-create-redis.md) - [Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md)
azure-cache-for-redis Cache Tutorial Vector Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-vector-similarity.md
+
+ Title: 'Tutorial: Conduct vector similarity search on Azure OpenAI embeddings using Azure Cache for Redis'
+description: In this tutorial, you learn how to use Azure Cache for Redis to store and search for vector embeddings.
++++ Last updated : 09/15/2023+
+#CustomerIntent: As a < type of user >, I want < what? > so that < why? >.
++
+# Tutorial: Conduct vector similarity search on Azure OpenAI embeddings using Azure Cache for Redis
+
+In this tutorial, you'll walk through a basic vector similarity search use-case. You'll use embeddings generated by Azure OpenAI Service and the built-in vector search capabilities of the Enterprise tier of Azure Cache for Redis to query a dataset of movies to find the most relevant match.
+
+The tutorial uses the [Wikipedia Movie Plots dataset](https://www.kaggle.com/datasets/jrobischon/wikipedia-movie-plots) that features plot descriptions of over 35,000 movies from Wikipedia covering the years 1901 to 2017.
+The dataset includes a plot summary for each movie, plus metadata such as the year the film was released, the director(s), main cast, and genre. You'll follow the steps of the tutorial to generate embeddings based on the plot summary and use the other metadata to run hybrid queries.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create an Azure Cache for Redis instance configured for vector search
+> * Install Azure OpenAI and other required Python libraries.
+> * Download the movie dataset and prepare it for analysis.
+> * Use the **text-embedding-ada-002 (Version 2)** model to generate embeddings.
+> * Create a vector index in Azure Cache for Redis
+> * Use cosine similarity to rank search results.
+> * Use hybrid query functionality through [RediSearch](https://redis.io/docs/interact/search-and-query/) to prefilter the data and make the vector search even more powerful.
+
+>[!IMPORTANT]
+>This tutorial walks you through building a Jupyter notebook. You can follow this tutorial with a Python code file (`.py`) and get *similar* results, but you need to add all of the code blocks in this tutorial into the `.py` file and execute it once to see results. In other words, Jupyter notebooks provide intermediate results as you execute cells, but a Python code file doesn't.
+
+>[!IMPORTANT]
+>If you would like to follow along in a completed Jupyter notebook instead, [download the Jupyter notebook file named *tutorial.ipynb*](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/tutorial/vector-similarity-search-open-ai) and save it into the new *redis-vector* folder.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
+* Access granted to Azure OpenAI in the desired Azure subscription
+ Currently, you must apply for access to Azure OpenAI. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>.
+* <a href="https://www.python.org/" target="_blank">Python 3.7.1 or later version</a>
+* [Jupyter Notebooks](https://jupyter.org/) (optional)
+* An Azure OpenAI resource with the **text-embedding-ada-002 (Version 2)** model deployed. This model is currently only available in [certain regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). See the [resource deployment guide](../ai-services/openai/how-to/create-resource.md) for instructions on how to deploy the model.
+
+## Create an Azure Cache for Redis Instance
+
+1. Follow the [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md) guide. On the **Advanced** page, make sure that you've added the **RediSearch** module and have chosen the **Enterprise** Cluster Policy. All other settings can match the default described in the quickstart.
+
+ It takes a few minutes for the cache to create. You can move on to the next step in the meantime.
++
+## Set up your development environment
+
+1. Create a folder on your local computer named *redis-vector* in the location where you typically save your projects.
+
+1. Create a new python file (*tutorial.py*) or Jupyter notebook (*tutorial.ipynb*) in the folder.
+
+1. Install the required Python packages:
+
+    ```bash
+ pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken redis langchain
+ ```
+
+## Download the dataset
+
+1. In a web browser, navigate to [https://www.kaggle.com/datasets/jrobischon/wikipedia-movie-plots](https://www.kaggle.com/datasets/jrobischon/wikipedia-movie-plots).
+
+1. Sign in or register with Kaggle. Registration is required to download the file.
+
+1. Select the **Download** link on Kaggle to download the *archive.zip* file.
+
+1. Extract the *archive.zip* file and move the *wiki_movie_plots_deduped.csv* into the *redis-vector* folder.
+
+## Import libraries and set up connection information
+
+To successfully make a call against Azure OpenAI, you need an **endpoint** and a **key**. You also need an **endpoint** and a **key** to connect to Azure Cache for Redis.
+
+1. Go to your Azure OpenAI resource in the Azure portal.
+
+1. Locate **Endpoint and Keys** in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. An example endpoint is: `https://docs-test-001.openai.azure.com`. You can use either `KEY1` or `KEY2`.
+
+1. Go to the **Overview** page of your Azure Cache for Redis resource in the Azure portal. Copy your endpoint.
+
+1. Locate **Access keys** in the **Settings** section. Copy your access key. You can use either `Primary` or `Secondary`.
+
+1. Add the following code to a new code cell:
+
+ ```python
+ # Code cell 2
+
+ import re
+ from num2words import num2words
+ import os
+ import pandas as pd
+ from openai.embeddings_utils import get_embedding
+ import tiktoken
+ from typing import List
+ from langchain.embeddings import OpenAIEmbeddings
+ from langchain.vectorstores.redis import Redis as RedisVectorStore
+ from langchain.document_loaders import DataFrameLoader
+
+ API_KEY = "<your-azure-openai-key>"
+ RESOURCE_ENDPOINT = "<your-azure-openai-endpoint>"
+ DEPLOYMENT_NAME = "<name-of-your-model-deployment>"
+ MODEL_NAME = "text-embedding-ada-002"
+ REDIS_ENDPOINT = "<your-azure-redis-endpoint>"
+ REDIS_PASSWORD = "<your-azure-redis-password>"
+ ```
+
+1. Update the value of `API_KEY` and `RESOURCE_ENDPOINT` with the key and endpoint values from your Azure OpenAI deployment. `DEPLOYMENT_NAME` should be set to the name of your deployment using the `text-embedding-ada-002 (Version 2)` embeddings model, and `MODEL_NAME` should be the specific embeddings model used.
+
+1. Update `REDIS_ENDPOINT` and `REDIS_PASSWORD` with the endpoint and key value from your Azure Cache for Redis instance.
+
+ > [!Important]
+ > We strongly recommend using environmental variables or a secret manager like [Azure Key Vault](../key-vault/general/overview.md) to pass in the API key, endpoint, and deployment name information. These variables are set in plaintext here for the sake of simplicity.
+
+1. Execute code cell 2.
+
+## Import dataset into pandas and process data
+
+Next, you'll read the csv file into a pandas DataFrame.
+
+1. Add the following code to a new code cell:
+
+ ```python
+ # Code cell 3
+
+ df=pd.read_csv(os.path.join(os.getcwd(),'wiki_movie_plots_deduped.csv'))
+ df
+ ```
+
+1. Execute code cell 3. You should see the following output:
+
+ :::image type="content" source="media/cache-tutorial-vector-similarity/code-cell-3.png" alt-text="Screenshot of results from executing code cell 3, displaying eight columns and a sampling of 10 rows of data." lightbox="media/cache-tutorial-vector-similarity/code-cell-3.png":::
+
+1. Next, process the data by adding an `id` index, removing spaces from the column titles, and filtering the movies to include only movies made after 1970 from English-speaking countries. This filtering step reduces the number of movies in the dataset, which lowers the cost and time required to generate embeddings. You're free to change or remove the filter parameters based on your preferences.
+
+ To filter the data, add the following code to a new code cell:
+
+ ```python
+ # Code cell 4
+
+ df.insert(0, 'id', range(0, len(df)))
+ df['year'] = df['Release Year'].astype(int)
+ df['origin'] = df['Origin/Ethnicity'].astype(str)
+ del df['Release Year']
+ del df['Origin/Ethnicity']
+ df = df[df.year > 1970] # only movies made after 1970
+ df = df[df.origin.isin(['American','British','Canadian'])] # only movies from English-speaking cinema
+ df
+ ```
+
+1. Execute code cell 4. You should see the following results:
+
+ :::image type="content" source="media/cache-tutorial-vector-similarity/code-cell-4.png" alt-text="Screenshot of results from executing code cell 4, displaying nine columns and a sampling of 10 rows of data." lightbox="media/cache-tutorial-vector-similarity/code-cell-4.png":::
+
+1. Create a function to clean the data by removing extra whitespace and stray punctuation, then apply it to the DataFrame column containing the plot.
+
+ Add the following code to a new code cell and execute it:
+
+ ```python
+ # Code cell 5
+
+ pd.options.mode.chained_assignment = None
+
+ # s is input text
+ def normalize_text(s, sep_token = " \n "):
+ s = re.sub(r'\s+', ' ', s).strip()
+ s = re.sub(r". ,","",s)
+        # tidy up stray periods and remove newline characters
+ s = s.replace("..",".")
+ s = s.replace(". .",".")
+ s = s.replace("\n", "")
+ s = s.strip()
+
+ return s
+
+ df['Plot']= df['Plot'].apply(lambda x : normalize_text(x))
+ ```
+
+1. Finally, remove any entries with plot descriptions that are too long for the embeddings model (that is, they require more tokens than the 8192-token limit), and then calculate the number of tokens required to generate embeddings. The token count also affects the pricing of embedding generation.
+
+ Add the following code to a new code cell:
+
+ ```python
+ # Code cell 6
+
+ tokenizer = tiktoken.get_encoding("cl100k_base")
+ df['n_tokens'] = df["Plot"].apply(lambda x: len(tokenizer.encode(x)))
+ df = df[df.n_tokens<8192]
+ print('Number of movies: ' + str(len(df)))
+ print('Number of tokens required:' + str(df['n_tokens'].sum()))
+ ```
+
+1. Execute code cell 6. You should see this output:
+
+ ```output
+ Number of movies: 11125
+ Number of tokens required:7044844
+ ```
+
+ > [!Important]
+    > Refer to [Azure OpenAI Service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) to calculate the cost of generating embeddings based on the number of tokens required.
+
+## Load DataFrame into LangChain
+
+Load the DataFrame into LangChain using the `DataFrameLoader` class. Once the data is in LangChain documents, it's far easier to use LangChain libraries to generate embeddings and conduct similarity searches. Set *Plot* as the `page_content_column` so that embeddings are generated on this column.
+
+1. Add the following code to a new code cell and execute it:
+
+ ```python
+ # Code cell 7
+
+ loader = DataFrameLoader(df, page_content_column="Plot" )
+ movie_list = loader.load()
+ ```
+
+## Generate embeddings and load them into Redis
+
+Now that the data has been filtered and loaded into LangChain, you'll create embeddings so you can query on the plot for each movie. The following code configures Azure OpenAI, generates embeddings, and loads the embeddings vectors into Azure Cache for Redis.
+
+1. Add the following code to a new code cell:
+
+ ```python
+ # Code cell 8
+
+ embedding = OpenAIEmbeddings(
+ deployment=DEPLOYMENT_NAME,
+ model=MODEL_NAME,
+ openai_api_base=RESOURCE_ENDPOINT,
+ openai_api_type="azure",
+ openai_api_key=API_KEY,
+ openai_api_version="2023-05-15",
+ chunk_size=16 # current limit with Azure OpenAI service. This will likely increase in the future.
+ )
+
+ # name of the Redis search index to create
+ index_name = "movieindex"
+
+ # create a connection string for the Redis Vector Store. Uses Redis-py format: https://redis-py.readthedocs.io/en/stable/connections.html#redis.Redis.from_url
+ # This example assumes TLS is enabled. If not, use "redis://" instead of "rediss://
+ redis_url = "rediss://:" + REDIS_PASSWORD + "@"+ REDIS_ENDPOINT
+
+ # create and load redis with documents
+ vectorstore = RedisVectorStore.from_documents(
+ documents=movie_list,
+ embedding=embedding,
+ index_name=index_name,
+ redis_url=redis_url
+ )
+
+ # save index schema so you can reload in the future without re-generating embeddings
+ vectorstore.write_schema("redis_schema.yaml")
+ ```
+
+1. Execute code cell 8. It can take up to 10 minutes to complete. A `redis_schema.yaml` file is also generated. This file is useful if you later want to connect to the index in your Azure Cache for Redis instance without regenerating embeddings, as shown in the sketch that follows.
+
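+    If you later want to reuse the index without regenerating the embeddings, a hedged sketch (parameter names can vary between LangChain versions) to reconnect by using the saved schema file looks like this:
+
+    ```python
+    # Hypothetical follow-up cell: reconnect to the existing index by using the
+    # schema file saved earlier instead of re-generating the embeddings.
+    vectorstore = RedisVectorStore.from_existing_index(
+        embedding=embedding,
+        index_name=index_name,
+        redis_url=redis_url,
+        schema="redis_schema.yaml",
+    )
+    ```
+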
+## Run vector search queries
+
+Now that your dataset, Azure OpenAI service API, and Redis instance are set up, you can search using vectors. In this example, the top 10 results for a given query are returned.
+
+1. Add the following code to your Python code file:
+
+ ```python
+ # Code cell 9
+
+ query = "Spaceships, aliens, and heroes saving America"
+ results = vectorstore.similarity_search_with_score(query, k=10)
+
+ for i, j in enumerate(results):
+ movie_title = str(results[i][0].metadata['Title'])
+ similarity_score = str(round((1 - results[i][1]),4))
+ print(movie_title + ' (Score: ' + similarity_score + ')')
+ ```
+
+1. Execute code cell 9. You should see the following output:
+
+ ```output
+ Independence Day (Score: 0.8348)
+ The Flying Machine (Score: 0.8332)
+ Remote Control (Score: 0.8301)
+ Bravestarr: The Legend (Score: 0.83)
+ Xenogenesis (Score: 0.8291)
+ Invaders from Mars (Score: 0.8291)
+ Apocalypse Earth (Score: 0.8287)
+ Invasion from Inner Earth (Score: 0.8287)
+ Thru the Moebius Strip (Score: 0.8283)
+ Solar Crisis (Score: 0.828)
+ ```
+
+    The similarity score is returned along with the ordinal ranking of movies by similarity. Notice that for more specific queries, similarity scores decrease faster down the list.
+
+## Hybrid searches
+
+1. Since RediSearch also features rich search functionality on top of vector search, it's possible to filter results by the metadata in the data set, such as film genre, cast, release year, or director. In this case, filter based on the genre `comedy`.
+
+ Add the following code to a new code cell:
+
+ ```python
+ # Code cell 10
+
+ from langchain.vectorstores.redis import RedisText
+
+ query = "Spaceships, aliens, and heroes saving America"
+ genre_filter = RedisText("Genre") == "comedy"
+ results = vectorstore.similarity_search_with_score(query, filter=genre_filter, k=10)
+ for i, j in enumerate(results):
+ movie_title = str(results[i][0].metadata['Title'])
+ similarity_score = str(round((1 - results[i][1]),4))
+ print(movie_title + ' (Score: ' + similarity_score + ')')
+ ```
+
+1. Execute code cell 10. You should see the following output:
+
+ ```output
+ Remote Control (Score: 0.8301)
+ Meet Dave (Score: 0.8236)
+ Elf-Man (Score: 0.8208)
+ Fifty/Fifty (Score: 0.8167)
+ Mars Attacks! (Score: 0.8165)
+ Strange Invaders (Score: 0.8143)
+ Amanda and the Alien (Score: 0.8136)
+ Suburban Commando (Score: 0.8129)
+ Coneheads (Score: 0.8129)
+ Morons from Outer Space (Score: 0.8121)
+ ```
+
+With Azure Cache for Redis and Azure OpenAI Service, you can use embeddings and vector search to add powerful search capabilities to your application.
++
+## Related content
+
+* [Learn more about Azure Cache for Redis](cache-overview.md)
+* Learn more about Azure Cache for Redis [vector search capabilities](./cache-overview-vector-similarity.md)
+* Learn more about [embeddings generated by Azure OpenAI Service](../ai-services/openai/concepts/understand-embeddings.md)
+* Learn more about [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity)
+* [Read how to build an AI-powered app with OpenAI and Redis](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/vector-similarity-search-with-azure-cache-for-redis-enterprise/ba-p/3822059)
+* [Build a Q&A app with semantic answers](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna)
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
recommendations: false
#Customer intent: As a developer, I need to understand the differences between running in-process and running in an isolated worker process so that I can choose the best process model for my functions.
-# Differences between in-process and isolated worker process .NET Azure Functions
+# Differences between isolated worker model and in-process model .NET Azure Functions
There are two process models for .NET functions:
This article describes the current state of the functional and behavioral differ
Use the following table to compare feature and functional differences between the two models:
-| Feature/behavior | In-process<sup>3</sup> | Isolated worker process |
+| Feature/behavior | Isolated worker process | In-process<sup>3</sup> |
| - | - | - |
-| [Supported .NET versions](#supported-versions) | Long Term Support (LTS) versions<sup>6</sup> | Long Term Support (LTS) versions<sup>6</sup>,<br/>Standard Term Support (STS) versions,<br/>.NET Framework |
-| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
-| Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) |
-| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported](durable/durable-functions-isolated-create-first-csharp.md?pivots=code-editor-visualstudio) (Support does not yet include Durable Entities) |
-| Model types exposed by bindings | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types<sup>4</sup> | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>[Service SDK types](dotnet-isolated-process-guide.md#sdk-types)<sup>4</sup> |
-| HTTP trigger model types| [HttpRequest] / [IActionResult]<sup>5</sup><br/>[HttpRequestMessage] / [HttpResponseMessage] | [HttpRequestData] / [HttpResponseData]<br/>[HttpRequest] / [IActionResult] (using [ASP.NET Core integration][aspnetcore-integration])<sup>5</sup>|
-| Output binding interactions | Return values (single output only),<br/>`out` parameters,<br/>`IAsyncCollector` | Return values in an expanded model with:<br/> - single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)<br/> - arrays of outputs|
-| Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported - instead [work with SDK types directly](./dotnet-isolated-process-guide.md#register-azure-clients) |
-| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](dotnet-isolated-process-guide.md#dependency-injection) (improved model consistent with .NET ecosystem) |
-| Middleware | Not supported | [Supported](dotnet-isolated-process-guide.md#middleware) |
-| Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via [dependency injection](functions-dotnet-dependency-injection.md) | [ILogger&lt;T&gt;]/[ILogger] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)|
-| Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported](./dotnet-isolated-process-guide.md#application-insights) |
-| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) |
-| Cold start times<sup>2</sup> | Optimized | [Configurable optimizations (preview)](./dotnet-isolated-process-guide.md#performance-optimizations) |
-| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | [Supported](dotnet-isolated-process-guide.md#readytorun) |
+| [Supported .NET versions](#supported-versions) | Long Term Support (LTS) versions<sup>6</sup>,<br/>Standard Term Support (STS) versions,<br/>.NET Framework | Long Term Support (LTS) versions<sup>6</sup> |
+| Core packages | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) |
+| Binding extension packages | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) |
+| Durable Functions | [Supported](durable/durable-functions-isolated-create-first-csharp.md?pivots=code-editor-visualstudio) (Support does not yet include Durable Entities) | [Supported](durable/durable-functions-overview.md) |
+| Model types exposed by bindings | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>[Service SDK types](dotnet-isolated-process-guide.md#sdk-types)<sup>4</sup> | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types<sup>4</sup> |
+| HTTP trigger model types| [HttpRequestData] / [HttpResponseData]<br/>[HttpRequest] / [IActionResult] (using [ASP.NET Core integration][aspnetcore-integration])<sup>5</sup>| [HttpRequest] / [IActionResult]<sup>5</sup><br/>[HttpRequestMessage] / [HttpResponseMessage] |
+| Output binding interactions | Return values in an expanded model with:<br/> - single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)<br/> - arrays of outputs| Return values (single output only),<br/>`out` parameters,<br/>`IAsyncCollector` |
+| Imperative bindings<sup>1</sup> | Not supported - instead [work with SDK types directly](./dotnet-isolated-process-guide.md#register-azure-clients) | [Supported](functions-dotnet-class-library.md#binding-at-runtime) |
+| Dependency injection | [Supported](dotnet-isolated-process-guide.md#dependency-injection) (improved model consistent with .NET ecosystem) | [Supported](functions-dotnet-dependency-injection.md) |
+| Middleware | [Supported](dotnet-isolated-process-guide.md#middleware) | Not supported |
+| Logging | [ILogger&lt;T&gt;]/[ILogger] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)| [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via [dependency injection](functions-dotnet-dependency-injection.md) |
+| Application Insights dependencies | [Supported](./dotnet-isolated-process-guide.md#application-insights) | [Supported](functions-monitoring.md#dependencies) |
+| Cancellation tokens | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) | [Supported](functions-dotnet-class-library.md#cancellation-tokens) |
+| Cold start times<sup>2</sup> | [Configurable optimizations (preview)](./dotnet-isolated-process-guide.md#performance-optimizations) | Optimized |
+| ReadyToRun | [Supported](dotnet-isolated-process-guide.md#readytorun) | [Supported](functions-dotnet-class-library.md#readytorun) |
<sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models.
Use the following table to compare feature and functional differences between th
<sup>5</sup> ASP.NET Core types are not supported for .NET Framework.
-<sup>6</sup> The isolated worker model supports .NET 8 as a preview, currently for Linux applications only. .NET 8 is not yet available for the in-process model. See the [Azure Functions Roadmap Update post](https://aka.ms/azure-functions-dotnet-roadmap) for more information about .NET 8 plans.
+<sup>6</sup> The isolated worker model supports .NET 8 [as a preview](./dotnet-isolated-process-guide.md#preview-net-versions). For information about .NET 8 plans, including future options for the in-process model, see the [Azure Functions Roadmap Update post](https://aka.ms/azure-functions-dotnet-roadmap).
[HttpRequest]: /dotnet/api/microsoft.aspnetcore.http.httprequest [IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
After the debugger is attached, the process execution resumes, and you'll be abl
Because your isolated worker process app runs outside the Functions runtime, you need to attach the remote debugger to a separate process. To learn more about debugging using Visual Studio, see [Remote Debugging](functions-develop-vs.md?tabs=isolated-process#remote-debugging).
+## Preview .NET versions
+
+Azure Functions can currently be used with the following preview versions of .NET:
+
+| Operating system | .NET preview version |
+| - | - |
+| Windows | .NET 8 Preview 7 |
+| Linux | .NET 8 RC1 |
+
+### Using a preview .NET SDK
+
+To use Azure Functions with a preview version of .NET, you need to update your project by:
+
+1. Installing the relevant .NET SDK version in your development environment
+1. Changing the `TargetFramework` setting in your `.csproj` file
+
+When deploying to a function app in Azure, you also need to ensure that the framework is made available to the app. To do so on Windows, you can use the following CLI command. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as "v8.0".
+
+```azurecli
+az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
+```
+
+### Considerations for using .NET preview versions
+
+Keep these considerations in mind when using Functions with preview versions of .NET:
+
+If you author your functions in Visual Studio, you must use [Visual Studio Preview](https://visualstudio.microsoft.com/vs/preview/), which supports building Azure Functions projects with .NET preview SDKs. You should also ensure you have the latest Functions tools and templates. To update these, navigate to `Tools->Options`, select `Azure Functions` under `Projects and Solutions`, and then click the `Check for updates` button, installing updates as prompted.
+
+During the preview period, your development environment might have a more recent version of the .NET preview than the hosted service, which can cause the application to fail when deployed. To address this, you can configure which version of the SDK to use in [`global.json`](/dotnet/core/tools/global-json). First, run `dotnet --list-sdks` to identify the installed versions and note the one that matches what the service supports. Then run `dotnet new globaljson --sdk-version <sdk-version> --force`, replacing `<sdk-version>` with the version you noted. For example, `dotnet new globaljson --sdk-version dotnet-sdk-8.0.100-preview.7.23376.3 --force` causes the .NET 8 Preview 7 SDK to be used when building your project.
+
+Note that because preview frameworks are loaded just in time, function apps running on Windows may experience increased cold start times compared with earlier GA versions.
+ ## Next steps > [!div class="nextstepaction"]
azure-functions Functions Add Output Binding Azure Sql Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-azure-sql-vs-code.md
Because you're using an Azure SQL output binding, you must have the correspondin
With the exception of HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Azure SQL extension package to your project.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
```bash
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql
```
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
```bash
-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql
``` ::: zone-end
Open the *HttpExample.cs* project file and add the following `ToDoItem` class, w
In a C# class library project, the bindings are defined as binding attributes on the function method. The *function.json* file required by Functions is then auto-generated based on these attributes.
-# [In-process](#tab/in-process)
-Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition:
--
-The `toDoItems` parameter is an `IAsyncCollector<ToDoItem>` type, which represents a collection of ToDo items that are written to your Azure SQL Database when the function completes. Specific attributes indicate the names of the database table (`dbo.ToDo`) and the connection string for your Azure SQL Database (`SqlConnectionString`).
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Open the *HttpExample.cs* project file and add the following output type class, which defines the combined objects that will be output from our function for both the HTTP response and the SQL output:
Add a using statement to the `Microsoft.Azure.Functions.Worker.Extensions.Sql` l
using Microsoft.Azure.Functions.Worker.Extensions.Sql; ```
+# [In-process model](#tab/in-process)
+Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition:
++
+The `toDoItems` parameter is an `IAsyncCollector<ToDoItem>` type, which represents a collection of ToDo items that are written to your Azure SQL Database when the function completes. Specific attributes indicate the names of the database table (`dbo.ToDo`) and the connection string for your Azure SQL Database (`SqlConnectionString`).
+ ::: zone-end
In this code, `arg_name` identifies the binding parameter referenced in your cod
::: zone pivot="programming-language-csharp"
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+Replace the existing Run method with the following code:
+
+```cs
+[Function("HttpExample")]
+public static OutputType Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
+ FunctionContext executionContext)
+{
+ var logger = executionContext.GetLogger("HttpExample");
+ logger.LogInformation("C# HTTP trigger function processed a request.");
+
+ var message = "Welcome to Azure Functions!";
+
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
+ response.WriteString(message);
+
+ // Return a response to both HTTP trigger and Azure SQL output binding.
+ return new OutputType()
+ {
+ ToDoItem = new ToDoItem
+ {
+ id = System.Guid.NewGuid().ToString(),
+ title = message,
+ completed = false,
+ url = ""
+ },
+ HttpResponse = response
+ };
+}
+```
+
+# [In-process model](#tab/in-process)
Add code that uses the `toDoItems` output binding object to create a new `ToDoItem`. Add this code before the method returns.
public static async Task<IActionResult> Run(
} ```
-# [Isolated process](#tab/isolated-process)
-
-Replace the existing Run method with the following code:
-
-```cs
-[Function("HttpExample")]
-public static OutputType Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
- FunctionContext executionContext)
-{
- var logger = executionContext.GetLogger("HttpExample");
- logger.LogInformation("C# HTTP trigger function processed a request.");
-
- var message = "Welcome to Azure Functions!";
-
- var response = req.CreateResponse(HttpStatusCode.OK);
- response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
- response.WriteString(message);
-
- // Return a response to both HTTP trigger and Azure SQL output binding.
- return new OutputType()
- {
- ToDoItem = new ToDoItem
- {
- id = System.Guid.NewGuid().ToString(),
- title = message,
- completed = false,
- url = ""
- },
- HttpResponse = response
- };
-}
-```
- ::: zone-end
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
Because you're using an Azure Cosmos DB output binding, you must have the corres
Except for HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Azure Cosmos DB extension package to your project.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
```command
-dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 3.0.10
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB --version 3.0.9
```
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
```command
-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB --version 3.0.9
+dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 3.0.10
``` ::: zone-end
Now, you can add the Azure Cosmos DB output binding to your project.
::: zone pivot="programming-language-csharp" In a C# class library project, the bindings are defined as binding attributes on the function method.
-# [In-process](#tab/in-process)
-Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition:
--
-The `documentsOut` parameter is an `IAsyncCollector<T>` type, which represents a collection of JSON documents that are written to your Azure Cosmos DB container when the function completes. Specific attributes indicate the names of the container and its parent database. The connection string for your Azure Cosmos DB account is set by the `ConnectionStringSettingAttribute`.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Open the *HttpExample.cs* project file and add the following classes:
The `MyDocument` class defines an object that gets written to the database. The
The `MultiResponse` class allows you to both write to the specified collection in the Azure Cosmos DB and return an HTTP success message. Because you need to return a `MultiResponse` object, you need to also update the method signature.
+# [In-process model](#tab/in-process)
+Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition:
++
+The `documentsOut` parameter is an `IAsyncCollector<T>` type, which represents a collection of JSON documents that are written to your Azure Cosmos DB container when the function completes. Specific attributes indicate the names of the container and its parent database. The connection string for your Azure Cosmos DB account is set by the `ConnectionStringSettingAttribute`.
+ Specific attributes specify the name of the container and the name of its parent database. The connection string for your Azure Cosmos DB account is set by the `CosmosDbConnectionString` setting.
In this code, `arg_name` identifies the binding parameter referenced in your cod
::: zone pivot="programming-language-csharp"
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+Replace the existing Run method with the following code:
++
+# [In-process model](#tab/in-process)
Add code that uses the `documentsOut` output binding object to create a JSON document. Add this code before the method returns.
public static async Task<IActionResult> Run(
} ```
-# [Isolated process](#tab/isolated-process)
-
-Replace the existing Run method with the following code:
-- ::: zone-end
azure-functions Functions Add Output Binding Storage Queue Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs.md
Because you're using a Queue storage output binding, you need the Storage bindin
1. In the console, run the following [Install-Package](/nuget/tools/ps-ref-install-package) command to install the Storage extensions:
- # [In-process](#tab/in-process)
+ # [Isolated worker model](#tab/isolated-process)
```bash
- Install-Package Microsoft.Azure.WebJobs.Extensions.Storage
+ Install-Package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues -IncludePrerelease
```
- # [Isolated process](#tab/isolated-process)
+ # [In-process model](#tab/in-process)
```bash
- Install-Package /dotnet/api/microsoft.azure.webjobs.blobattribute.Queues -IncludePrerelease
+ Install-Package Microsoft.Azure.WebJobs.Extensions.Storage
```
azure-functions Functions Bindings Azure Data Explorer Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-input.md
The Azure Data Explorer input binding retrieves data from a database.
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+More samples for the Azure Data Explorer input binding (out of process) are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc).
+
+This section contains the following examples:
+
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c-oop)
+* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c-oop)
+
+The examples refer to a `Product` class and the Products table, both of which are defined in the previous sections.
+
+<a id="http-trigger-look-up-id-from-query-string-c-oop"></a>
+
+### HTTP trigger, get row by ID from query string
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `Product` record with the specified query.
+
+> [!NOTE]
+> The HTTP query string parameter is case sensitive.
+>
+
+```cs
+using System.Text.Json.Nodes;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Kusto;
+using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.InputBindingSamples
+{
+ public static class GetProductsQuery
+ {
+ [Function("GetProductsQuery")]
+ public static JsonArray Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproductsquery")] HttpRequestData req,
+ [KustoInput(Database: "productsdb",
+ KqlCommand = "declare query_parameters (productId:long);Products | where ProductID == productId",
+ KqlParameters = "@productId={Query.productId}",Connection = "KustoConnectionString")] JsonArray products)
+ {
+ return products;
+ }
+ }
+}
+```
+
+<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a>
+
+### HTTP trigger, get multiple rows from route parameter
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves records returned by the query (based on the name of the product, in this case). The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `Product` records in the specified query.
+
+```cs
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Kusto;
+using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.InputBindingSamples
+{
+ public static class GetProductsFunction
+ {
+ [Function("GetProductsFunction")]
+ public static IEnumerable<Product> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproductsfn/{name}")] HttpRequestData req,
+ [KustoInput(Database: "productsdb",
+ KqlCommand = "declare query_parameters (name:string);GetProductsByName(name)",
+ KqlParameters = "@name={name}",Connection = "KustoConnectionString")] IEnumerable<Product> products)
+ {
+ return products;
+ }
+ }
+}
+```
+
+# [In-process model](#tab/in-process)
More samples for the Azure Data Explorer input binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/blob/main/samples/samples-csharp).
namespace Microsoft.Azure.WebJobs.Extensions.Kusto.Samples.InputBindingSamples
} ```
-# [Isolated process](#tab/isolated-process)
-
-More samples for the Azure Data Explorer input binding (out of process) are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc).
-
-This section contains the following examples:
-
-* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c-oop)
-* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c-oop)
-
-The examples refer to a `Product` class and the Products table, both of which are defined in the previous sections.
-
-<a id="http-trigger-look-up-id-from-query-string-c-oop"></a>
-
-### HTTP trigger, get row by ID from query string
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `Product` record with the specified query.
-
-> [!NOTE]
-> The HTTP query string parameter is case sensitive.
->
-
-```cs
-using System.Text.Json.Nodes;
-using Microsoft.Azure.Functions.Worker;
-using Microsoft.Azure.Functions.Worker.Extensions.Kusto;
-using Microsoft.Azure.Functions.Worker.Http;
-using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common;
-
-namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.InputBindingSamples
-{
- public static class GetProductsQuery
- {
- [Function("GetProductsQuery")]
- public static JsonArray Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproductsquery")] HttpRequestData req,
- [KustoInput(Database: "productsdb",
- KqlCommand = "declare query_parameters (productId:long);Products | where ProductID == productId",
- KqlParameters = "@productId={Query.productId}",Connection = "KustoConnectionString")] JsonArray products)
- {
- return products;
- }
- }
-}
-```
-
-<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a>
-
-### HTTP trigger, get multiple rows from route parameter
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves records returned by the query (based on the name of the product, in this case). The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `Product` records in the specified query.
-
-```cs
-using Microsoft.Azure.Functions.Worker;
-using Microsoft.Azure.Functions.Worker.Extensions.Kusto;
-using Microsoft.Azure.Functions.Worker.Http;
-using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common;
-
-namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.InputBindingSamples
-{
- public static class GetProductsFunction
- {
- [Function("GetProductsFunction")]
- public static IEnumerable<Product> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproductsfn/{name}")] HttpRequestData req,
- [KustoInput(Database: "productsdb",
- KqlCommand = "declare query_parameters (name:string);GetProductsByName(name)",
- KqlParameters = "@name={name}",Connection = "KustoConnectionString")] IEnumerable<Product> products)
- {
- return products;
- }
- }
-}
-```
-
-<!-- Uncomment to support C# script examples.
-# [C# Script](#tab/csharp-script)
-> ::: zone-end
azure-functions Functions Bindings Azure Data Explorer Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-output.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-### [In-process](#tab/in-process)
+### [Isolated worker model](#tab/isolated-process)
+
+More samples for the Azure Data Explorer output binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc).
+
+This section contains the following examples:
+
+* [HTTP trigger, write one record](#http-trigger-write-one-record-c-oop)
+* [HTTP trigger, write records with mapping](#http-trigger-write-records-with-mapping-oop)
+
+The examples refer to `Product` class and a corresponding database table:
+
+```cs
+public class Product
+{
+ [JsonProperty(nameof(ProductID))]
+ public long ProductID { get; set; }
+
+ [JsonProperty(nameof(Name))]
+ public string Name { get; set; }
+
+ [JsonProperty(nameof(Cost))]
+ public double Cost { get; set; }
+}
+```
+
+```kusto
+.create-merge table Products (ProductID:long, Name:string, Cost:double)
+```
+
+<a id="http-trigger-write-one-record-c-oop"></a>
+
+#### HTTP trigger, write one record
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database. The function uses data provided in an HTTP POST request as a JSON body.
+
+```cs
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Kusto;
+using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples
+{
+ public static class AddProduct
+ {
+ [Function("AddProduct")]
+ [KustoOutput(Database: "productsdb", Connection = "KustoConnectionString", TableName = "Products")]
+ public static async Task<Product> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproductuni")]
+ HttpRequestData req)
+ {
+ Product? prod = await req.ReadFromJsonAsync<Product>();
+ return prod ?? new Product { };
+ }
+ }
+}
+
+```
+
+<a id="http-trigger-write-records-with-mapping-oop"></a>
+
+#### HTTP trigger, write records with mapping
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database. The function uses mapping that transforms a `Product` to `Item`.
+
+To transform data from `Product` to `Item`, the function uses a mapping reference:
+
+```kusto
+.create-merge table Item (ItemID:long, ItemName:string, ItemCost:float)
++
+-- Create a mapping that transforms an Item to a Product
+
+.create-or-alter table Product ingestion json mapping "item_to_product_json" '[{"column":"ProductID","path":"$.ItemID"},{"column":"Name","path":"$.ItemName"},{"column":"Cost","path":"$.ItemCost"}]'
+```
+
+```cs
+namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common
+{
+ public class Item
+ {
+ public long ItemID { get; set; }
+
+ public string? ItemName { get; set; }
+
+ public double ItemCost { get; set; }
+ }
+}
+```
+
+```cs
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Kusto;
+using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common;
+
+namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples
+{
+ public static class AddProductsWithMapping
+ {
+ [Function("AddProductsWithMapping")]
+ [KustoOutput(Database: "productsdb", Connection = "KustoConnectionString", TableName = "Products", MappingRef = "item_to_product_json")]
+ public static async Task<Item> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproductswithmapping")]
+ HttpRequestData req)
+ {
+ Item? item = await req.ReadFromJsonAsync<Item>();
+ return item ?? new Item { };
+ }
+ }
+}
+```
+### [In-process model](#tab/in-process)
More samples for the Azure Data Explorer output binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-csharp).
namespace Microsoft.Azure.WebJobs.Extensions.Kusto.Samples.OutputBindingSamples
} ```
-### [Isolated process](#tab/isolated-process)
-
-More samples for the Azure Data Explorer output binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc).
-
-This section contains the following examples:
-
-* [HTTP trigger, write one record](#http-trigger-write-one-record-c-oop)
-* [HTTP trigger, write records with mapping](#http-trigger-write-records-with-mapping-oop)
-
-The examples refer to `Product` class and a corresponding database table:
-
-```cs
-public class Product
-{
- [JsonProperty(nameof(ProductID))]
- public long ProductID { get; set; }
-
- [JsonProperty(nameof(Name))]
- public string Name { get; set; }
-
- [JsonProperty(nameof(Cost))]
- public double Cost { get; set; }
-}
-```
-
-```kusto
-.create-merge table Products (ProductID:long, Name:string, Cost:double)
-```
-
-<a id="http-trigger-write-one-record-c-oop"></a>
-
-#### HTTP trigger, write one record
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database. The function uses data provided in an HTTP POST request as a JSON body.
-
-```cs
-using Microsoft.Azure.Functions.Worker;
-using Microsoft.Azure.Functions.Worker.Extensions.Kusto;
-using Microsoft.Azure.Functions.Worker.Http;
-using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common;
-
-namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples
-{
- public static class AddProduct
- {
- [Function("AddProduct")]
- [KustoOutput(Database: "productsdb", Connection = "KustoConnectionString", TableName = "Products")]
- public static async Task<Product> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproductuni")]
- HttpRequestData req)
- {
- Product? prod = await req.ReadFromJsonAsync<Product>();
- return prod ?? new Product { };
- }
- }
-}
-
-```
-
-<a id="http-trigger-write-records-with-mapping-oop"></a>
-
-#### HTTP trigger, write records with mapping
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database. The function uses mapping that transforms a `Product` to `Item`.
-
-To transform data from `Product` to `Item`, the function uses a mapping reference:
-
-```kusto
-.create-merge table Item (ItemID:long, ItemName:string, ItemCost:float)
-- Create a mapping that transforms an Item to a Product-
-.create-or-alter table Product ingestion json mapping "item_to_product_json" '[{"column":"ProductID","path":"$.ItemID"},{"column":"Name","path":"$.ItemName"},{"column":"Cost","path":"$.ItemCost"}]'
-```
-
-```cs
-namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common
-{
- public class Item
- {
- public long ItemID { get; set; }
-
- public string? ItemName { get; set; }
-
- public double ItemCost { get; set; }
- }
-}
-```
-
-```cs
-using Microsoft.Azure.Functions.Worker;
-using Microsoft.Azure.Functions.Worker.Extensions.Kusto;
-using Microsoft.Azure.Functions.Worker.Http;
-using Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples.Common;
-
-namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindingSamples
-{
- public static class AddProductsWithMapping
- {
- [Function("AddProductsWithMapping")]
- [KustoOutput(Database: "productsdb", Connection = "KustoConnectionString", TableName = "Products", MappingRef = "item_to_product_json")]
- public static async Task<Item> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproductswithmapping")]
- HttpRequestData req)
- {
- Item? item = await req.ReadFromJsonAsync<Item>();
- return item ?? new Item { };
- }
- }
-}
-```
::: zone-end
azure-functions Functions Bindings Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer.md
This set of articles explains how to work with [Azure Data Explorer](/azure/data
The extension NuGet package you install depends on the C# mode you're using in your function app.
-# [In-process](#tab/in-process)
-
-Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-
-Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kusto).
-
-```bash
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Kusto --prerelease
-```
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Functions run in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
Add the extension to your project by installing [this NuGet package](https://www
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Kusto --prerelease ```
-<!-- awaiting bundle support
-# [C# script](#tab/csharp-script)
+# [In-process model](#tab/in-process)
+
+Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kusto).
-You can install this version of the extension in your function app by registering the [extension bundle], version 4.x, or a later version.
>
+```bash
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Kusto --prerelease
+```
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
For information on setup and configuration details, see the [overview](./functio
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc).
This section contains the following examples:
-* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c)
-* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c)
-* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c-oop)
+* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c-oop)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c-oop)
The examples refer to a `ToDoItem` class and a corresponding database table:
The examples refer to a `ToDoItem` class and a corresponding database table:
:::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7":::
-<a id="http-trigger-look-up-id-from-query-string-c"></a>
+<a id="http-trigger-look-up-id-from-query-string-c-oop"></a>
### HTTP trigger, get row by ID from query string The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
using System.Collections.Generic;
using System.Linq; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Sql;
+using Microsoft.Azure.Functions.Worker.Http;
namespace AzureSQLSamples {
namespace AzureSQLSamples
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")] HttpRequest req,
- [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ [SqlInput(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
commandType: System.Data.CommandType.Text, parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")]
namespace AzureSQLSamples
} ```
-<a id="http-trigger-get-multiple-items-from-route-data-c"></a>
+<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a>
### HTTP trigger, get multiple rows from route parameter The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query.
The following example shows a [C# function](functions-dotnet-class-library.md) t
using System.Collections.Generic; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Sql;
+using Microsoft.Azure.Functions.Worker.Http;
namespace AzureSQLSamples {
namespace AzureSQLSamples
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")] HttpRequest req,
- [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority",
+ [SqlInput(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority",
commandType: System.Data.CommandType.Text, parameters: "@Priority={priority}", connectionStringSetting: "SqlConnectionString")]
namespace AzureSQLSamples
} ```
-<a id="http-trigger-delete-one-or-multiple-rows-c"></a>
+<a id="http-trigger-delete-one-or-multiple-rows-c-oop"></a>
### HTTP trigger, delete rows The following example shows a [C# function](functions-dotnet-class-library.md) that executes a stored procedure with input from the HTTP request query parameter.
The stored procedure `dbo.DeleteToDo` must be created on the SQL database. In t
:::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="11-25":::
+```cs
+namespace AzureSQL.ToDo
+{
+ public static class DeleteToDo
+ {
+ // delete all items or a specific item from querystring
+ // returns remaining items
+ // uses input binding with a stored procedure DeleteToDo to delete items and return remaining items
+ [FunctionName("DeleteToDo")]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "DeleteFunction")] HttpRequest req,
+ ILogger log,
+ [SqlInput(commandText: "DeleteToDo", commandType: System.Data.CommandType.StoredProcedure,
+ parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")]
+ IEnumerable<ToDoItem> toDoItems)
+ {
+ return new OkObjectResult(toDoItems);
+ }
+ }
+}
+```
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
-More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc).
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
This section contains the following examples:
-* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c-oop)
-* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c-oop)
-* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c-oop)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c)
+* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c)
The examples refer to a `ToDoItem` class and a corresponding database table:
The examples refer to a `ToDoItem` class and a corresponding database table:
:::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7":::
-<a id="http-trigger-look-up-id-from-query-string-c-oop"></a>
+<a id="http-trigger-look-up-id-from-query-string-c"></a>
### HTTP trigger, get row by ID from query string The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
using System.Collections.Generic;
using System.Linq; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc;
-using Microsoft.Azure.Functions.Worker;
-using Microsoft.Azure.Functions.Worker.Extensions.Sql;
-using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
namespace AzureSQLSamples {
namespace AzureSQLSamples
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")] HttpRequest req,
- [SqlInput(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
commandType: System.Data.CommandType.Text, parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")]
namespace AzureSQLSamples
} ```
-<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a>
+<a id="http-trigger-get-multiple-items-from-route-data-c"></a>
### HTTP trigger, get multiple rows from route parameter The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query.
The following example shows a [C# function](functions-dotnet-class-library.md) t
using System.Collections.Generic; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc;
-using Microsoft.Azure.Functions.Worker;
-using Microsoft.Azure.Functions.Worker.Extensions.Sql;
-using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
namespace AzureSQLSamples {
namespace AzureSQLSamples
public static IActionResult Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")] HttpRequest req,
- [SqlInput(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority",
+ [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority",
commandType: System.Data.CommandType.Text, parameters: "@Priority={priority}", connectionStringSetting: "SqlConnectionString")]
namespace AzureSQLSamples
} ```
-<a id="http-trigger-delete-one-or-multiple-rows-c-oop"></a>
+<a id="http-trigger-delete-one-or-multiple-rows-c"></a>
### HTTP trigger, delete rows The following example shows a [C# function](functions-dotnet-class-library.md) that executes a stored procedure with input from the HTTP request query parameter.
The stored procedure `dbo.DeleteToDo` must be created on the SQL database. In t
:::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="11-25":::
-```cs
-namespace AzureSQL.ToDo
-{
- public static class DeleteToDo
- {
- // delete all items or a specific item from querystring
- // returns remaining items
- // uses input binding with a stored procedure DeleteToDo to delete items and return remaining items
- [FunctionName("DeleteToDo")]
- public static IActionResult Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "DeleteFunction")] HttpRequest req,
- ILogger log,
- [SqlInput(commandText: "DeleteToDo", commandType: System.Data.CommandType.StoredProcedure,
- parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")]
- IEnumerable<ToDoItem> toDoItems)
- {
- return new OkObjectResult(toDoItems);
- }
- }
-}
-```
-
-# [C# Script](#tab/csharp-script)
--
-More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx).
-
-This section contains the following examples:
-
-* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-csharpscript)
-* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-csharpscript)
-
-The examples refer to a `ToDoItem` class and a corresponding database table:
---
-<a id="http-trigger-look-up-id-from-query-string-csharpscript"></a>
-### HTTP trigger, get row by ID from query string
-
-The following example shows an Azure SQL input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
-
-> [!NOTE]
-> The HTTP query string parameter is case-sensitive.
->
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get"
- ]
-},
-{
- "type": "http",
- "direction": "out",
- "name": "res"
-},
-{
- "name": "todoItem",
- "type": "sql",
- "direction": "in",
- "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
- "commandType": "Text",
- "parameters": "@Id = {Query.id}",
- "connectionStringSetting": "SqlConnectionString"
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-using Newtonsoft.Json;
-using System.Collections.Generic;
-
-public static IActionResult Run(HttpRequest req, ILogger log, IEnumerable<ToDoItem> todoItem)
-{
- return new OkObjectResult(todoItem);
-}
-```
--
-<a id="http-trigger-delete-one-or-multiple-rows-csharpscript"></a>
-### HTTP trigger, delete rows
-
-The following example shows an Azure SQL input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding to execute a stored procedure with input from the HTTP request query parameter. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
-
-The stored procedure `dbo.DeleteToDo` must be created on the SQL database.
--
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get"
- ]
-},
-{
- "type": "http",
- "direction": "out",
- "name": "res"
-},
-{
- "name": "todoItems",
- "type": "sql",
- "direction": "in",
- "commandText": "DeleteToDo",
- "commandType": "StoredProcedure",
- "parameters": "@Id = {Query.id}",
- "connectionStringSetting": "SqlConnectionString"
-}
-```
- :::code language="csharp" source="~/functions-sql-todo-sample/DeleteToDo.cs" range="4-30":::
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-using Newtonsoft.Json;
-using System.Collections.Generic;
-
-public static IActionResult Run(HttpRequest req, ILogger log, IEnumerable<ToDoItem> todoItems)
-{
- return new OkObjectResult(todoItems);
-}
-```
- ::: zone-end
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
For information on setup and configuration details, see the [overview](./functio
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
-
-More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
-
-This section contains the following examples:
-
-* [HTTP trigger, write one record](#http-trigger-write-one-record-c)
-* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-c)
-* [HTTP trigger, write records using IAsyncCollector](#http-trigger-write-records-using-iasynccollector-c)
-
-The examples refer to a `ToDoItem` class and a corresponding database table:
----
-<a id="http-trigger-write-one-record-c"></a>
-
-### HTTP trigger, write one record
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body.
--
-<a id="http-trigger-write-to-two-tables-c"></a>
-
-### HTTP trigger, write to two tables
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
-
-```sql
-CREATE TABLE dbo.RequestLog (
- Id int identity(1,1) primary key,
- RequestTimeStamp datetime2 not null,
- ItemCount int not null
-)
-```
--
-```cs
-namespace AzureSQL.ToDo
-{
- public static class PostToDo
- {
- // create a new ToDoItem from body object
- // uses output binding to insert new item into ToDo table
- [FunctionName("PostToDo")]
- public static async Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req,
- ILogger log,
- [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
- [Sql(commandText: "dbo.RequestLog", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs)
- {
- string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
- ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
-
- // generate a new id for the todo item
- toDoItem.Id = Guid.NewGuid();
-
- // set Url from env variable ToDoUri
- toDoItem.url = Environment.GetEnvironmentVariable("ToDoUri")+"?id="+toDoItem.Id.ToString();
-
- // if completed is not provided, default to false
- if (toDoItem.completed == null)
- {
- toDoItem.completed = false;
- }
-
- await toDoItems.AddAsync(toDoItem);
- await toDoItems.FlushAsync();
- List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem };
-
- requestLog = new RequestLog();
- requestLog.RequestTimeStamp = DateTime.Now;
- requestLog.ItemCount = 1;
- await requestLogs.AddAsync(requestLog);
- await requestLogs.FlushAsync();
-
- return new OkObjectResult(toDoItemList);
- }
- }
-
- public class RequestLog {
- public DateTime RequestTimeStamp { get; set; }
- public int ItemCount { get; set; }
- }
-}
-```
-
-<a id="http-trigger-write-records-using-iasynccollector-c"></a>
-
-### HTTP trigger, write records using IAsyncCollector
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database, using data provided in an HTTP POST body JSON array.
-
-```cs
-using Microsoft.AspNetCore.Http;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Extensions.Http;
-using Newtonsoft.Json;
-using System.IO;
-using System.Threading.Tasks;
-
-namespace AzureSQLSamples
-{
- public static class WriteRecordsAsync
- {
- [FunctionName("WriteRecordsAsync")]
- public static async Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")]
- HttpRequest req,
- [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems)
- {
- string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
- var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody);
- foreach (ToDoItem newItem in incomingItems)
- {
- await newItems.AddAsync(newItem);
- }
- // Rows are upserted here
- await newItems.FlushAsync();
-
- return new CreatedResult($"/api/addtodo-asynccollector", "done");
- }
- }
-}
-```
--
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc).
namespace AzureSQL.ToDo
} ```
-# [C# Script](#tab/csharp-script)
+# [In-process model](#tab/in-process)
-More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx).
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
This section contains the following examples:
-* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-csharpscript)
-* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-csharpscript)
+* [HTTP trigger, write one record](#http-trigger-write-one-record-c)
+* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-c)
+* [HTTP trigger, write records using IAsyncCollector](#http-trigger-write-records-using-iasynccollector-c)
The examples refer to a `ToDoItem` class and a corresponding database table:
:::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7":::
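
The `ToDoItem` class itself is pulled in from the referenced sample project. As a minimal sketch only, inferred from the properties the samples in this section touch (`Id`, `title`, `url`, `completed`), it might look like the following:

```csharp
using System;

public class ToDoItem
{
    public Guid Id { get; set; }         // assigned a new Guid by the output samples
    public string title { get; set; }
    public string url { get; set; }      // built from the ToDoUri app setting in the samples
    public bool? completed { get; set; } // defaults to false when omitted from the request body
}
```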
-<a id="http-trigger-write-records-to-table-csharpscript"></a>
-### HTTP trigger, write records to a table
-
-The following example shows a SQL output binding in a function.json file and a [C# script function](functions-reference-csharp.md) that adds records to a table, using data provided in an HTTP POST request as a JSON body.
-
-The following is binding data in the function.json file:
-
-```json
-{
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "post"
- ]
-},
-{
- "type": "http",
- "direction": "out",
- "name": "res"
-},
-{
- "name": "todoItem",
- "type": "sql",
- "direction": "out",
- "commandText": "dbo.ToDo",
- "connectionStringSetting": "SqlConnectionString"
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-The following is sample C# script code:
-
-```cs
-#r "Newtonsoft.Json"
+<a id="http-trigger-write-one-record-c"></a>
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-using Newtonsoft.Json;
+### HTTP trigger, write one record
-public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body.
- string requestBody = new StreamReader(req.Body).ReadToEnd();
- todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
- return new OkObjectResult(todoItem);
-}
-```
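
The record-insert sample itself is included from the sample repository. A minimal sketch of what such an in-process function could look like, reusing the `ToDoItem` sketch above, the `SqlConnectionString` setting, and the same `[Sql]` attribute shape used elsewhere in this article (the function name and route here are illustrative):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Newtonsoft.Json;

namespace AzureSQL.ToDo
{
    public static class PostOneToDoSketch
    {
        [FunctionName("PostOneToDoSketch")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostOneItem")] HttpRequest req,
            // Items added to the collector are upserted into dbo.ToDo when the function completes.
            [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems)
        {
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);

            await toDoItems.AddAsync(toDoItem);
            return new OkObjectResult(toDoItem);
        }
    }
}
```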
+<a id="http-trigger-write-to-two-tables-c"></a>
-<a id="http-trigger-write-to-two-tables-csharpscript"></a>
### HTTP trigger, write to two tables
-The following example shows a SQL output binding in a function.json file and a [C# script function](functions-reference-csharp.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
-
-The second table, `dbo.RequestLog`, corresponds to the following definition:
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
```sql CREATE TABLE dbo.RequestLog (
CREATE TABLE dbo.RequestLog (
) ```
-The following is binding data in the function.json file:
-```json
-{
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "post"
- ]
-},
-{
- "type": "http",
- "direction": "out",
- "name": "res"
-},
-{
- "name": "todoItem",
- "type": "sql",
- "direction": "out",
- "commandText": "dbo.ToDo",
- "connectionStringSetting": "SqlConnectionString"
-},
+```cs
+using System;
+using System.Collections.Generic;
+using System.IO;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Extensions.Sql;
+using Microsoft.Extensions.Logging;
+using Newtonsoft.Json;
+
+namespace AzureSQL.ToDo
{
- "name": "requestLog",
- "type": "sql",
- "direction": "out",
- "commandText": "dbo.RequestLog",
- "connectionStringSetting": "SqlConnectionString"
+ public static class PostToDo
+ {
+ // create a new ToDoItem from body object
+ // uses output binding to insert new item into ToDo table
+ [FunctionName("PostToDo")]
+ public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req,
+ ILogger log,
+ [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
+ [Sql(commandText: "dbo.RequestLog", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs)
+ {
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
+
+ // generate a new id for the todo item
+ toDoItem.Id = Guid.NewGuid();
+
+ // set Url from env variable ToDoUri
+ toDoItem.url = Environment.GetEnvironmentVariable("ToDoUri")+"?id="+toDoItem.Id.ToString();
+
+ // if completed is not provided, default to false
+ if (toDoItem.completed == null)
+ {
+ toDoItem.completed = false;
+ }
+
+ await toDoItems.AddAsync(toDoItem);
+ await toDoItems.FlushAsync();
+ List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem };
+
+            RequestLog requestLog = new RequestLog();
+ requestLog.RequestTimeStamp = DateTime.Now;
+ requestLog.ItemCount = 1;
+ await requestLogs.AddAsync(requestLog);
+ await requestLogs.FlushAsync();
+
+ return new OkObjectResult(toDoItemList);
+ }
+ }
+
+ public class RequestLog {
+ public DateTime RequestTimeStamp { get; set; }
+ public int ItemCount { get; set; }
+ }
} ```
-The [configuration](#configuration) section explains these properties.
+<a id="http-trigger-write-records-using-iasynccollector-c"></a>
-The following is sample C# script code:
+### HTTP trigger, write records using IAsyncCollector
-```cs
-#r "Newtonsoft.Json"
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database, using data provided in an HTTP POST body JSON array.
-using System.Net;
+```cs
+using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
using Newtonsoft.Json;
+using System.IO;
+using System.Threading.Tasks;
-public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem, out RequestLog requestLog)
+namespace AzureSQLSamples
{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- string requestBody = new StreamReader(req.Body).ReadToEnd();
- todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
-
- requestLog = new RequestLog();
- requestLog.RequestTimeStamp = DateTime.Now;
- requestLog.ItemCount = 1;
-
- return new OkObjectResult(todoItem);
-}
+ public static class WriteRecordsAsync
+ {
+ [FunctionName("WriteRecordsAsync")]
+ public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")]
+ HttpRequest req,
+ [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems)
+ {
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody);
+ foreach (ToDoItem newItem in incomingItems)
+ {
+ await newItems.AddAsync(newItem);
+ }
+ // Rows are upserted here
+ await newItems.FlushAsync();
-public class RequestLog {
- public DateTime RequestTimeStamp { get; set; }
- public int ItemCount { get; set; }
+ return new CreatedResult($"/api/addtodo-asynccollector", "done");
+ }
+ }
} ``` - ::: zone-end
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
For more information on change tracking and how it's used by applications such a
<a id="example"></a>
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc).
The example refers to a `ToDoItem` class and a corresponding database table:
The SQL trigger binds to an `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange`
The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are changes to the `ToDo` table: ```cs
+using System;
using System.Collections.Generic;
-using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Sql;
using Microsoft.Extensions.Logging;
-using Microsoft.Azure.WebJobs.Extensions.Sql;
+using Newtonsoft.Json;
+ namespace AzureSQL.ToDo { public static class ToDoTrigger {
- [FunctionName("ToDoTrigger")]
+ [Function("ToDoTrigger")]
public static void Run( [SqlTrigger("[dbo].[ToDo]", "SqlConnectionString")] IReadOnlyList<SqlChange<ToDoItem>> changes,
- ILogger logger)
+ FunctionContext context)
{
+ var logger = context.GetLogger("ToDoTrigger");
foreach (SqlChange<ToDoItem> change in changes) { ToDoItem toDoItem = change.Item;
namespace AzureSQL.ToDo
} ```
-# [Isolated process](#tab/isolated-process)
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc).
+# [In-process model](#tab/in-process)
+
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
The example refers to a `ToDoItem` class and a corresponding database table:
The SQL trigger binds to an `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange`
The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are changes to the `ToDo` table: ```cs
-using System;
using System.Collections.Generic;
-using Microsoft.Azure.Functions.Worker;
-using Microsoft.Azure.Functions.Worker.Extensions.Sql;
+using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
-using Newtonsoft.Json;
-
+using Microsoft.Azure.WebJobs.Extensions.Sql;
namespace AzureSQL.ToDo { public static class ToDoTrigger {
- [Function("ToDoTrigger")]
+ [FunctionName("ToDoTrigger")]
public static void Run( [SqlTrigger("[dbo].[ToDo]", "SqlConnectionString")] IReadOnlyList<SqlChange<ToDoItem>> changes,
- FunctionContext context)
+ ILogger logger)
{
- var logger = context.GetLogger("ToDoTrigger");
foreach (SqlChange<ToDoItem> change in changes) { ToDoItem toDoItem = change.Item;
namespace AzureSQL.ToDo
} ``` -
-# [C# Script](#tab/csharp-script)
-
-More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx).
--
-The example refers to a `ToDoItem` class and a corresponding database table:
---
-[Change tracking](#set-up-change-tracking-required) is enabled on the database and on the table:
-
-```sql
-ALTER DATABASE [SampleDatabase]
-SET CHANGE_TRACKING = ON
-(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
-
-ALTER TABLE [dbo].[ToDo]
-ENABLE CHANGE_TRACKING;
-```
-
-The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with two properties:
-- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class.-- **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`.-
-The following example shows a SQL trigger in a function.json file and a [C# script function](functions-reference-csharp.md) that is invoked when there are changes to the `ToDo` table:
-
-The following is binding data in the function.json file:
-
-```json
-{
- "name": "todoChanges",
- "type": "sqlTrigger",
- "direction": "in",
- "tableName": "dbo.ToDo",
- "connectionStringSetting": "SqlConnectionString"
-}
-```
-The following is the C# script function:
-
-```csharp
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-using Newtonsoft.Json;
-
-public static void Run(IReadOnlyList<SqlChange<ToDoItem>> todoChanges, ILogger log)
-{
- log.LogInformation($"C# SQL trigger function processed a request.");
-
- foreach (SqlChange<ToDoItem> change in todoChanges)
- {
- ToDoItem toDoItem = change.Item;
- log.LogInformation($"Change operation: {change.Operation}");
- log.LogInformation($"Id: {toDoItem.Id}, Title: {toDoItem.title}, Url: {toDoItem.url}, Completed: {toDoItem.completed}");
- }
-}
-```
- - ::: zone-end ::: zone pivot="programming-language-java"
param($todoChanges)
$changesJson = $todoChanges | ConvertTo-Json -Compress
Write-Host "SQL Changes: $changesJson"
```-- ::: zone-end---- ::: zone pivot="programming-language-javascript" ## Example usage <a id="example"></a>
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
This set of articles explains how to work with [Azure SQL](/azure/azure-sql/inde
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
-
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-
-Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Sql).
-
-```bash
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql
-```
-
-To use a preview version of the Microsoft.Azure.WebJobs.Extensions.Sql package for [SQL trigger](functions-bindings-azure-sql-trigger.md) functionality, add the `--prerelease` flag to the command.
-
-```bash
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql --prerelease
-```
-
-> [!NOTE]
-> Breaking changes between preview releases of the Azure SQL trigger for Functions requires that all Functions targeting the same database use the same version of the SQL extension package.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql --prerelease
> [!NOTE]
> Breaking changes between preview releases of the Azure SQL trigger for Functions require that all Functions targeting the same database use the same version of the SQL extension package.
-# [C# script](#tab/csharp-script)
+# [In-process model](#tab/in-process)
-Functions run as C# script, which is supported primarily for C# portal editing. The SQL bindings extension is part of the v4 [extension bundle], which is specified in your host.json project file.
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-This extension is available from the extension bundle v4, which is specified in your `host.json` file by:
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Sql).
-```json
-{
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[4.*, 5.0.0)"
- }
-}
+```bash
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql
```
+To use a preview version of the Microsoft.Azure.WebJobs.Extensions.Sql package for [SQL trigger](functions-bindings-azure-sql-trigger.md) functionality, add the `--prerelease` flag to the command.
-You can add the preview extension bundle to use the [SQL trigger](functions-bindings-azure-sql-trigger.md) by adding or replacing the following code in your `host.json` file:
-
-```json
-{
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
- "version": "[4.*, 5.0.0)"
- }
-}
+```bash
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql --prerelease
``` > [!NOTE]
-> Breaking changes between preview releases of the Azure SQL trigger for Functions requires that all Functions targeting the same database use the same version of the extension bundle.
-
+> Breaking changes between preview releases of the Azure SQL trigger for Functions require that all Functions targeting the same database use the same version of the SQL extension package.
azure-functions Functions Bindings Cache Trigger Redislist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redislist.md
The `RedisListTrigger` pops new elements from a list and surfaces those entries
The following sample polls the key `listTest` at a localhost Redis instance at `127.0.0.1:6379`:
-### [In-process](#tab/in-process)
+### [Isolated worker model](#tab/isolated-process)
+
+The isolated process examples aren't available in preview.
+
+### [In-process model](#tab/in-process)
```csharp
[FunctionName(nameof(ListsTrigger))]
public static void ListsTrigger(
} ```
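
The in-process snippet above is truncated in this digest. Pieced together as a sketch only, and assuming an app setting (here called `redisConnectionString`) that holds the cache connection string and that `RedisListTrigger` takes a connection string setting name and a list key:

```csharp
[FunctionName(nameof(ListsTrigger))]
public static void ListsTrigger(
    // Assumed attribute shape: (connection string setting name, list key).
    [RedisListTrigger("redisConnectionString", "listTest")] string entry,
    ILogger logger)
{
    logger.LogInformation($"The entry pushed to the list listTest: '{entry}'");
}
```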
-### [Isolated process](#tab/isolated-process)
-
-The isolated process examples aren't available in preview.
- ::: zone-end
azure-functions Functions Bindings Cache Trigger Redispubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redispubsub.md
Redis features [publish/subscribe functionality](https://redis.io/docs/interact/
[!INCLUDE [dotnet-execution](../../includes/functions-dotnet-execution-model.md)]
-### [In-process](#tab/in-process)
+### [Isolated worker model](#tab/isolated-process)
+
+The isolated process examples aren't available in preview.
+
+```csharp
+//TBD
+```
+
+### [In-process model](#tab/in-process)
This sample listens to the channel `pubsubTest`.
public static void KeyeventTrigger(
} ```
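
The channel-listener sample described above is likewise truncated here. A hedged sketch, again assuming a `redisConnectionString` app setting and that `RedisPubSubTrigger` takes a connection string setting name and a channel name:

```csharp
[FunctionName(nameof(PubSubTrigger))]
public static void PubSubTrigger(
    // Assumed attribute shape: (connection string setting name, channel name).
    [RedisPubSubTrigger("redisConnectionString", "pubsubTest")] string message,
    ILogger logger)
{
    logger.LogInformation($"The message broadcast on channel pubsubTest: '{message}'");
}
```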
-### [Isolated process](#tab/isolated-process)
-
-The isolated process examples aren't available in preview.
-
-```csharp
-//TBD
-```
- ::: zone-end
azure-functions Functions Bindings Cache Trigger Redisstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redisstream.md
The `RedisStreamTrigger` reads new entries from a stream and surfaces those elem
[!INCLUDE [dotnet-execution](../../includes/functions-dotnet-execution-model.md)]
-### [In-process](#tab/in-process)
+### [Isolated worker model](#tab/isolated-process)
+
+The isolated process examples aren't available in preview.
+
+```csharp
+//TBD
+```
+
+### [In-process model](#tab/in-process)
```csharp
public static void StreamsTrigger(
} ```
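
As with the other Redis triggers, the in-process stream sample is truncated in this digest. A rough sketch, assuming a `redisConnectionString` app setting, a stream key named `streamTest`, and that a stream entry can bind to `string`:

```csharp
[FunctionName(nameof(StreamsTrigger))]
public static void StreamsTrigger(
    // Assumed attribute shape: (connection string setting name, stream key).
    [RedisStreamTrigger("redisConnectionString", "streamTest")] string entry,
    ILogger logger)
{
    logger.LogInformation($"The entry pushed to the stream streamTest: '{entry}'");
}
```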
-### [Isolated process](#tab/isolated-process)
-
-The isolated process examples aren't available in preview.
-
-```csharp
-//TBD
-```
- ::: zone-end
azure-functions Functions Bindings Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache.md
You can integrate Azure Cache for Redis and Azure Functions to build functions t
## Install extension
-### [In-process](#tab/in-process)
+### [Isolated worker model](#tab/isolated-process)
-Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Functions run in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Redis).
+Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Redis).
```bash
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Redis --prerelease
```
-### [Isolated process](#tab/isolated-process)
+### [In-process model](#tab/in-process)
-Functions run in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+Functions run in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Redis).
+Add the extension to your project by installing [this NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Redis).
```bash
-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Redis --prerelease
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
```
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
Unless otherwise noted, examples in this article target version 3.x of the [Azur
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+This section contains examples that require version 3.x of the Azure Cosmos DB extension and version 5.x of the Azure Storage extension. If they're not already present in your function app, add a reference to the following NuGet packages:
+
+ * [Microsoft.Azure.Functions.Worker.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB)
+ * [Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues/5.0.0)
+
+* [Queue trigger, look up ID from JSON](#queue-trigger-look-up-id-from-json-isolated)
+
+The examples refer to a simple `ToDoItem` type:
++
+<a id="queue-trigger-look-up-id-from-json-isolated"></a>
+
+### Queue trigger, look up ID from JSON
+
+The following example shows a function that retrieves a single document. The function is triggered by a JSON message in the storage queue. The queue trigger parses the JSON into an object of type `ToDoItemLookup`, which contains the ID and partition key value to retrieve. That ID and partition key value are used to return a `ToDoItem` document from the specified database and collection.
++
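
The published sample is pulled in from a snippet. As a rough illustration only, such an isolated worker function might look like the sketch below; the `ToDoItemLookup` property names, the queue name, and the `CosmosDBInput` attribute settings are assumptions based on the 3.x worker extension:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace CosmosDbSamples
{
    // Assumed shapes; the published sample defines the real types.
    public class ToDoItemLookup
    {
        public string ToDoItemId { get; set; }
        public string ToDoItemPartitionKeyValue { get; set; }
    }

    public class ToDoItem
    {
        public string Id { get; set; }
        public string Description { get; set; }
    }

    public class DocByIdFromJsonSketch
    {
        [Function("DocByIdFromJsonSketch")]
        public void Run(
            // The queue message JSON is deserialized into the lookup type.
            [QueueTrigger("todoqueueforlookup", Connection = "AzureWebJobsStorage")] ToDoItemLookup lookup,
            // The ID and partition key binding expressions resolve from the lookup object.
            [CosmosDBInput("ToDoItems", "Items",
                ConnectionStringSetting = "CosmosDBConnection",
                Id = "{ToDoItemId}",
                PartitionKey = "{ToDoItemPartitionKeyValue}")] ToDoItem toDoItem,
            FunctionContext context)
        {
            var logger = context.GetLogger("DocByIdFromJsonSketch");
            logger.LogInformation($"Found ToDo item, Description={toDoItem?.Description}");
        }
    }
}
```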
+# [In-process model](#tab/in-process)
This section contains the following examples for using [in-process C# class library functions](functions-dotnet-class-library.md) with extension version 3.x:
namespace CosmosDBSamplesV2
} ```
-# [Isolated process](#tab/isolated-process)
-
-This section contains examples that require version 3.x of Azure Cosmos DB extension and 5.x of Azure Storage extension. If not already present in your function app, add reference to the following NuGet packages:
-
- * [Microsoft.Azure.Functions.Worker.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB)
- * [Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues/5.0.0)
-
-* [Queue trigger, look up ID from JSON](#queue-trigger-look-up-id-from-json-isolated)
-
-The examples refer to a simple `ToDoItem` type:
--
-<a id="queue-trigger-look-up-id-from-json-isolated"></a>
-
-### Queue trigger, look up ID from JSON
-
-The following example shows a function that retrieves a single document. The function is triggered by a JSON message in the storage queue. The queue trigger parses the JSON into an object of type `ToDoItemLookup`, which contains the ID and partition key value to retrieve. That ID and partition key value are used to return a `ToDoItem` document from the specified database and collection.
-- ::: zone-end
Here's the binding data in the *function.json* file:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-input).
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#azure-cosmos-db-v2-input).
# [Extension 4.x+](#tab/extensionv4/in-process)
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
The Python v1 programming model requires you to define bindings in a separate *f
This article supports both programming models. ::: zone-end- ## Example Unless otherwise noted, examples in this article target version 3.x of the [Azure Cosmos DB extension](functions-bindings-cosmosdb-v2.md). For use with extension version 4.x, you need to replace the string `collection` in property and attribute names with `container`. ::: zone pivot="programming-language-csharp"
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following code defines a `MyDocument` type:
++
+In the following example, the return type is an [`IReadOnlyList<T>`](/dotnet/api/system.collections.generic.ireadonlylist-1), which is a modified list of documents from the trigger binding parameter:
++
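
Both snippets are included from the sample project. As a hedged sketch of the pattern described, assuming the 3.x worker attribute names (`CosmosDBTrigger`, `CosmosDBOutput` with `ConnectionStringSetting`) and an illustrative `MyDocument` shape:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Functions.Worker;

namespace CosmosDbSamples
{
    // Assumed shape of the MyDocument type referenced above.
    public class MyDocument
    {
        public string Id { get; set; }
        public string Text { get; set; }
    }

    public class WriteDocsFromTriggerSketch
    {
        [Function("WriteDocsFromTriggerSketch")]
        // Output binding writes the returned list to a second container to avoid re-triggering.
        [CosmosDBOutput("MyDatabase", "MyCollectionOut", ConnectionStringSetting = "CosmosDBConnection")]
        public IReadOnlyList<MyDocument> Run(
            [CosmosDBTrigger("MyDatabase", "MyCollection",
                ConnectionStringSetting = "CosmosDBConnection",
                LeaseCollectionName = "leases")] IReadOnlyList<MyDocument> input)
        {
            // Modify the changed documents and return them; the output binding persists the list.
            foreach (var doc in input)
            {
                doc.Text = $"Processed: {doc.Id}";
            }
            return input;
        }
    }
}
```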
+# [In-process model](#tab/in-process)
This section contains the following examples:
namespace CosmosDBSamplesV2
} ``` -
-# [Isolated process](#tab/isolated-process)
-
-The following code defines a `MyDocument` type:
--
-In the following example, the return type is an [`IReadOnlyList<T>`](/dotnet/api/system.collections.generic.ireadonlylist-1), which is a modified list of documents from trigger binding parameter:
-- ::: zone-end
def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpRespon
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-output).
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#azure-cosmos-db-v2-output).
# [Extension 4.x+](#tab/extensionv4/in-process)
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
[!INCLUDE [functions-cosmosdb-output-attributes-v3](../../includes/functions-cosmosdb-output-attributes-v3.md)]
-# [Extension 4.x+](#tab/functionsv4/isolated-process)
+# [Extension 4.x+](#tab/extensionv4/isolated-process)
[!INCLUDE [functions-cosmosdb-output-attributes-v4](../../includes/functions-cosmosdb-output-attributes-v4.md)]
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
This article supports both programming models.
The usage of the trigger depends on the extension package version and the C# modality used in your function app, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
The following examples depend on the extension version for the given C# mode.
Here's the Python code:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-trigger).
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#azure-cosmos-db-v2-trigger).
# [Extension 4.x+](#tab/extensionv4/in-process)
azure-functions Functions Bindings Cosmosdb V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md
This set of articles explains how to work with [Azure Cosmos DB](../cosmos-db/se
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+# [In-process model](#tab/in-process)
-# [Isolated process](#tab/isolated-process)
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
You can install this version of the extension in your function app by registerin
The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see binding type details for the mode and version.
azure-functions Functions Bindings Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb.md
The Azure Cosmos DB Trigger uses the [Azure Cosmos DB Change Feed](../cosmos-db/
# [C#](#tab/csharp)
-The following example shows a [C# function](functions-dotnet-class-library.md) that is invoked when there are inserts or updates in the specified database and collection.
+The following example shows an [in-process C# function](functions-dotnet-class-library.md) that is invoked when there are inserts or updates in the specified database and collection.
```cs using Microsoft.Azure.Documents;
namespace CosmosDBSamplesV1
} ```
-# [C# Script](#tab/csharp-script)
-
-The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "type": "cosmosDBTrigger",
- "name": "documents",
- "direction": "in",
- "leaseCollectionName": "leases",
- "connectionStringSetting": "<connection-app-setting>",
- "databaseName": "Tasks",
- "collectionName": "Items",
- "createLeaseCollectionIfNotExists": true
-}
-```
-
-Here's the C# script code:
-
-```cs
- #r "Microsoft.Azure.Documents.Client"
-
- using System;
- using Microsoft.Azure.Documents;
- using System.Collections.Generic;
-
-
- public static void Run(IReadOnlyList<Document> documents, TraceWriter log)
- {
- log.Info("Documents modified " + documents.Count);
- log.Info("First document Id " + documents[0].Id);
- }
-```
- # [JavaScript](#tab/javascript) The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified.
Here's the JavaScript code:
# [C#](#tab/csharp)
-In [C# class libraries](functions-dotnet-class-library.md), use the [CosmosDBTrigger](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) attribute.
+For [in-process C# class libraries](functions-dotnet-class-library.md), use the [CosmosDBTrigger](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) attribute.
The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [Trigger - configuration](#triggerconfiguration). Here's a `CosmosDBTrigger` attribute example in a method signature:
The attribute's constructor takes the database name and collection name. For inf
For a complete example, see [Trigger - C# example](#trigger).
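
The signature example itself is collapsed in this digest. As an illustrative sketch only, using the database, collection, and lease names from the *function.json* sample above and a placeholder connection setting name:

```csharp
[FunctionName("CosmosTrigger")]
public static void Run(
    [CosmosDBTrigger("Tasks", "Items",
        ConnectionStringSetting = "myCosmosDBConnection",  // placeholder app setting name
        LeaseCollectionName = "leases")] IReadOnlyList<Document> documents,
    TraceWriter log)
{
    log.Info("Documents modified " + documents.Count);
}
```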
-# [C# Script](#tab/csharp-script)
-
-Attributes are not supported by C# Script.
- # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
namespace CosmosDBSamplesV1
} ```
-# [C# Script](#tab/csharp-script)
-
-This section contains the following examples:
-
-* [Queue trigger, look up ID from string](#queue-trigger-look-up-id-from-string-c-script)
-* [Queue trigger, get multiple docs, using SqlQuery](#queue-trigger-get-multiple-docs-using-sqlquery-c-script)
-* [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c-script)
-* [HTTP trigger, look up ID from route data](#http-trigger-look-up-id-from-route-data-c-script)
-* [HTTP trigger, get multiple docs, using SqlQuery](#http-trigger-get-multiple-docs-using-sqlquery-c-script)
-* [HTTP trigger, get multiple docs, using DocumentClient](#http-trigger-get-multiple-docs-using-documentclient-c-script)
-
-The HTTP trigger examples refer to a simple `ToDoItem` type:
-
-```cs
-namespace CosmosDBSamplesV1
-{
- public class ToDoItem
- {
- public string Id { get; set; }
- public string Description { get; set; }
- }
-}
-```
-
-<a id="queue-trigger-look-up-id-from-string-c-script"></a>
-
-### Queue trigger, look up ID from string
-
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "inputDocument",
- "type": "documentDB",
- "databaseName": "MyDatabase",
- "collectionName": "MyCollection",
- "id" : "{queueTrigger}",
- "partitionKey": "{partition key value}",
- "connection": "MyAccount_COSMOSDB",
- "direction": "in"
-}
-```
-
-The [configuration](#inputconfiguration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
- using System;
-
- // Change input document contents using Azure Cosmos DB input binding
- public static void Run(string myQueueItem, dynamic inputDocument)
- {
- inputDocument.text = "This has changed.";
- }
-```
-
-<a id="queue-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
-
-### Queue trigger, get multiple docs, using SqlQuery
-
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
-
-The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "documents",
- "type": "documentdb",
- "direction": "in",
- "databaseName": "MyDb",
- "collectionName": "MyCollection",
- "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
- "connection": "CosmosDBConnection"
-}
-```
-
-The [configuration](#inputconfiguration) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
- public static void Run(QueuePayload myQueueItem, IEnumerable<dynamic> documents)
- {
- foreach (var doc in documents)
- {
- // operate on each document
- }
- }
-
- public class QueuePayload
- {
- public string departmentId { get; set; }
- }
-```
-
-<a id="http-trigger-look-up-id-from-query-string-c-script"></a>
-
-### HTTP trigger, look up ID from query string
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID to look up. That ID is used to retrieve a `ToDoItem` document from the specified database and collection.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "documentDB",
- "name": "toDoItem",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connection": "CosmosDBConnection",
- "direction": "in",
- "Id": "{Query.id}"
- }
- ],
- "disabled": true
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System.Net;
-
-public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, TraceWriter log)
-{
- log.Info("C# HTTP trigger function processed a request.");
-
- if (toDoItem == null)
- {
- log.Info($"ToDo item not found");
- }
- else
- {
- log.Info($"Found ToDo item, Description={toDoItem.Description}");
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
-
-<a id="http-trigger-look-up-id-from-route-data-c-script"></a>
-
-### HTTP trigger, look up ID from route data
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID to look up. That ID is used to retrieve a `ToDoItem` document from the specified database and collection.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ],
- "route":"todoitems/{id}"
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "documentDB",
- "name": "toDoItem",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connection": "CosmosDBConnection",
- "direction": "in",
- "Id": "{id}"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System.Net;
-
-public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, TraceWriter log)
-{
- log.Info("C# HTTP trigger function processed a request.");
-
- if (toDoItem == null)
- {
- log.Info($"ToDo item not found");
- }
- else
- {
- log.Info($"Found ToDo item, Description={toDoItem.Description}");
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
-
-<a id="http-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
-
-### HTTP trigger, get multiple docs, using SqlQuery
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The query is specified in the `SqlQuery` attribute property.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "documentDB",
- "name": "toDoItems",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connection": "CosmosDBConnection",
- "direction": "in",
- "sqlQuery": "SELECT top 2 * FROM c order by c._ts desc"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System.Net;
-
-public static HttpResponseMessage Run(HttpRequestMessage req, IEnumerable<ToDoItem> toDoItems, TraceWriter log)
-{
- log.Info("C# HTTP trigger function processed a request.");
-
- foreach (ToDoItem toDoItem in toDoItems)
- {
- log.Info(toDoItem.Description);
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
-
-<a id="http-trigger-get-multiple-docs-using-documentclient-c-script"></a>
-
-### HTTP trigger, get multiple docs, using DocumentClient
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `DocumentClient` instance provided by the Azure Cosmos DB binding to read a list of documents. The `DocumentClient` instance could also be used for write operations.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "documentDB",
- "name": "client",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connection": "CosmosDBConnection",
- "direction": "inout"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-#r "Microsoft.Azure.Documents.Client"
-
-using System.Net;
-using Microsoft.Azure.Documents.Client;
-using Microsoft.Azure.Documents.Linq;
-
-public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, DocumentClient client, TraceWriter log)
-{
- log.Info("C# HTTP trigger function processed a request.");
-
- Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoItems", "Items");
- string searchterm = req.GetQueryNameValuePairs()
- .FirstOrDefault(q => string.Compare(q.Key, "searchterm", true) == 0)
- .Value;
-
- if (searchterm == null)
- {
- return req.CreateResponse(HttpStatusCode.NotFound);
- }
-
- log.Info($"Searching for word: {searchterm} using Uri: {collectionUri.ToString()}");
- IDocumentQuery<ToDoItem> query = client.CreateDocumentQuery<ToDoItem>(collectionUri)
- .Where(p => p.Description.Contains(searchterm))
- .AsDocumentQuery();
-
- while (query.HasMoreResults)
- {
- foreach (ToDoItem result in await query.ExecuteNextAsync())
- {
- log.Info(result.Description);
- }
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
- # [JavaScript](#tab/javascript) This section contains the following examples:
Here's the JavaScript code:
# [C#](#tab/csharp)
-In [C# class libraries](functions-dotnet-class-library.md), use the [DocumentDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.DocumentDB/DocumentDBAttribute.cs) attribute.
+In [in-process C# class libraries](functions-dotnet-class-library.md), use the [DocumentDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.DocumentDB/DocumentDBAttribute.cs) attribute.
The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [the following configuration section](#inputconfiguration).
-# [C# Script](#tab/csharp-script)
-
-Attributes are not supported by C# Script.
- # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
The following table explains the binding configuration properties that you set i
When the function exits successfully, any changes made to the input document via named input parameters are automatically persisted.
-# [C# Script](#tab/csharp-script)
-
-When the function exits successfully, any changes made to the input document via named input parameters are automatically persisted.
- # [JavaScript](#tab/javascript) Updates are not made automatically upon function exit. Instead, use `context.bindings.<documentName>In` and `context.bindings.<documentName>Out` to make updates. See the [input example](#input).
namespace CosmosDBSamplesV1
} ```
-# [C# Script](#tab/csharp-script)
-
-This section contains the following examples:
-
-* Queue trigger, write one doc
-* Queue trigger, write docs using `IAsyncCollector`
-
-### Queue trigger, write one doc
-
-The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format:
-
-```json
-{
- "name": "John Henry",
- "employeeId": "123456",
- "address": "A town nearby"
-}
-```
-
-The function creates Azure Cosmos DB documents in the following format for each record:
-
-```json
-{
- "id": "John Henry-123456",
- "name": "John Henry",
- "employeeId": "123456",
- "address": "A town nearby"
-}
-```
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "employeeDocument",
- "type": "documentDB",
- "databaseName": "MyDatabase",
- "collectionName": "MyCollection",
- "createIfNotExists": true,
- "connection": "MyAccount_COSMOSDB",
- "direction": "out"
-}
-```
-
-The [configuration](#outputconfiguration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
- #r "Newtonsoft.Json"
-
- using Microsoft.Azure.WebJobs.Host;
- using Newtonsoft.Json.Linq;
-
- public static void Run(string myQueueItem, out object employeeDocument, TraceWriter log)
- {
- log.Info($"C# Queue trigger function processed: {myQueueItem}");
-
- dynamic employee = JObject.Parse(myQueueItem);
-
- employeeDocument = new {
- id = employee.name + "-" + employee.employeeId,
- name = employee.name,
- employeeId = employee.employeeId,
- address = employee.address
- };
- }
-```
-
-### Queue trigger, write docs using IAsyncCollector
-
-To create multiple documents, you can bind to `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the supported types.
-
-This example refers to a simple `ToDoItem` type:
-
-```cs
-namespace CosmosDBSamplesV1
-{
- public class ToDoItem
- {
- public string Id { get; set; }
- public string Description { get; set; }
- }
-}
-```
-
-Here's the function.json file:
-
-```json
-{
- "bindings": [
- {
- "name": "toDoItemsIn",
- "type": "queueTrigger",
- "direction": "in",
- "queueName": "todoqueueforwritemulti",
- "connection": "AzureWebJobsStorage"
- },
- {
- "type": "documentDB",
- "name": "toDoItemsOut",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connection": "CosmosDBConnection",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System;
-
-public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> toDoItemsOut, TraceWriter log)
-{
- log.Info($"C# Queue trigger function processed {toDoItemsIn?.Length} items");
-
- foreach (ToDoItem toDoItem in toDoItemsIn)
- {
- log.Info($"Description={toDoItem.Description}");
- await toDoItemsOut.AddAsync(toDoItem);
- }
-}
-```
- # [JavaScript](#tab/javascript) The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format:
Here's the JavaScript code:
# [C#](#tab/csharp)
-In [C# class libraries](functions-dotnet-class-library.md), use the [DocumentDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.DocumentDB/DocumentDBAttribute.cs) attribute.
+In [in-process C# class libraries](functions-dotnet-class-library.md), use the [DocumentDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.DocumentDB/DocumentDBAttribute.cs) attribute.
The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [Output - configuration](#outputconfiguration). Here's a `DocumentDB` attribute example in a method signature:
The attribute's constructor takes the database name and collection name. For inf
For a complete example, see [Output](#output).
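
The signature example is likewise collapsed here. A minimal sketch, reusing the database, collection, and connection setting names from the *function.json* samples in this article; the queue trigger and the document shape are illustrative:

```csharp
[FunctionName("QueueToDocDB")]
public static void Run(
    [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
    [DocumentDB("MyDatabase", "MyCollection", ConnectionStringSetting = "MyAccount_COSMOSDB")] out object employeeDocument)
{
    // The object assigned to the out parameter is created as a new document.
    employeeDocument = new { id = Guid.NewGuid().ToString(), payload = myQueueItem };
}
```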
-# [C# Script](#tab/csharp-script)
-
-Attributes are not supported by C# Script.
- # [JavaScript](#tab/javascript) Attributes are not supported by JavaScript.
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
This behavior means that the maximum retry count is a best effort. In some rare
::: zone pivot="programming-language-csharp"
-# [In-process](#tab/in-process/fixed-delay)
-
-Retries require NuGet package [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) >= 3.0.23
-
-```csharp
-[FunctionName("EventHubTrigger")]
-[FixedDelayRetry(5, "00:00:10")]
-public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubConnection")] EventData[] events, ILogger log)
-{
-// ...
-}
-```
-
-|Property | Description |
-||-|
-|MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|DelayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.|
-
-# [Isolated process](#tab/isolated-process/fixed-delay)
+# [Isolated worker model](#tab/isolated-process/fixed-delay)
Function-level retries are supported with the following NuGet packages:
Function-level retries are supported with the following NuGet packages:
|MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| |DelayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.| -
-# [C# script](#tab/csharp-script/fixed-delay)
-
-Here's the retry policy in the *function.json* file:
-
-```json
-{
- "disabled": false,
- "bindings": [
- {
- ....
- }
- ],
- "retry": {
- "strategy": "fixedDelay",
- "maxRetryCount": 4,
- "delayInterval": "00:00:10"
- }
-}
-```
-
-|*function.json*&nbsp;property | Description |
-||-|
-|strategy|Use `fixedDelay`.|
-|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|delayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.|
-
-# [In-process](#tab/in-process/exponential-backoff)
+# [In-process model](#tab/in-process/fixed-delay)
Retries require NuGet package [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) >= 3.0.23 ```csharp [FunctionName("EventHubTrigger")]
-[ExponentialBackoffRetry(5, "00:00:04", "00:15:00")]
+[FixedDelayRetry(5, "00:00:10")]
public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubConnection")] EventData[] events, ILogger log) { // ...
public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon
|Property | Description | ||-| |MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|MinimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.|
-|MaximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.|
+|DelayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.|
-# [Isolated process](#tab/isolated-process/exponential-backoff)
+# [Isolated worker model](#tab/isolated-process/exponential-backoff)
Function-level retries are supported with the following NuGet packages:
Function-level retries are supported with the following NuGet packages:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDBFunction.cs" id="docsnippet_exponential_backoff_retry_example" :::
-# [C# script](#tab/csharp-script/exponential-backoff)
+# [In-process model](#tab/in-process/exponential-backoff)
-Here's the retry policy in the *function.json* file:
+Retries require NuGet package [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) >= 3.0.23
-```json
+```csharp
+[FunctionName("EventHubTrigger")]
+[ExponentialBackoffRetry(5, "00:00:04", "00:15:00")]
+public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubConnection")] EventData[] events, ILogger log)
{
- "disabled": false,
- "bindings": [
- {
- ....
- }
- ],
- "retry": {
- "strategy": "exponentialBackoff",
- "maxRetryCount": 5,
- "minimumInterval": "00:00:10",
- "maximumInterval": "00:15:00"
- }
+// ...
} ```
-|*function.json*&nbsp;property | Description |
+|Property | Description |
||-|
-|strategy|Use `exponentialBackoff`.|
-|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|minimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.|
-|maximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.|
+|MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
+|MinimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.|
+|MaximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.|
::: zone-end
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
The type of the output parameter used with an Event Grid output binding depends
* [In-process class library](functions-dotnet-class-library.md): compiled C# function that runs in the same process as the Functions runtime.
* [Isolated worker process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a worker process isolated from the runtime.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following example shows how the custom type is used in both the trigger and an Event Grid output binding:
++
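The referenced snippet isn't included in this digest, so here's a minimal sketch of an Event Grid output binding in the isolated worker model using the `EventGridOutput` attribute described below; the `MyEventType` class, topic setting names, and timer schedule are illustrative assumptions:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;

public class MyEventType
{
    public string Id { get; set; }
    public string Subject { get; set; }
    public string EventType { get; set; }
    public DateTime EventTime { get; set; }
    public object Data { get; set; }
}

public class EventGridOutputFunction
{
    // Publishes one event to the custom topic referenced by the placeholder
    // app settings "MyTopicEndpointUri" and "MyTopicKeySetting".
    [Function("EventGridOutput")]
    [EventGridOutput(TopicEndpointUri = "MyTopicEndpointUri", TopicKeySetting = "MyTopicKeySetting")]
    public MyEventType Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        return new MyEventType
        {
            Id = Guid.NewGuid().ToString(),
            Subject = "sample/subject",
            EventType = "sample.event",
            EventTime = DateTime.UtcNow,
            Data = new { message = "Hello from the isolated worker model" }
        };
    }
}
```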
+# [In-process model](#tab/in-process)
The following example shows a C# function that publishes a `CloudEvent` using version 3.x of the extension:
When you use the `Connection` property, the `topicEndpointUri` must be specified
```

When deployed, you must add this same information to application settings for the function app. For more information, see [Identity-based authentication](#identity-based-authentication).
-# [Isolated process](#tab/isolated-process)
-
-The following example shows how the custom type is used in both the trigger and an Event Grid output binding:
-- ::: zone-end
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
The attribute's constructor takes the name of an application setting that contains the name of the custom topic, and the name of an application setting that contains the topic key.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-The following table explains the parameters for the `EventGridAttribute`.
+The following table explains the parameters for the `EventGridOutputAttribute`.
|Parameter | Description|
-|||-|
+|||
|**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
|**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
-|**Connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
+|**connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
-The following table explains the parameters for the `EventGridOutputAttribute`.
+The following table explains the parameters for the `EventGridAttribute`.
|Parameter | Description|
-|||-|
+|||
|**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
|**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
-|**connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
+|**Connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
The type of the input parameter used with an Event Grid trigger depends on these
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+When running your C# function in an isolated worker process, you need to define a custom type for event properties. The following example defines a `MyEventType` class.
++
+The following example shows how the custom type is used in both the trigger and an Event Grid output binding:
++
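As the snippets are includes that aren't shown here, the following is a minimal sketch of an Event Grid trigger in the isolated worker model that binds to a custom type; the `MyEventType` class and its properties are illustrative assumptions:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class MyEventType
{
    public string Id { get; set; }
    public string Topic { get; set; }
    public string Subject { get; set; }
    public string EventType { get; set; }
    public DateTime EventTime { get; set; }
    public object Data { get; set; }
}

public class EventGridTriggerFunction
{
    private readonly ILogger<EventGridTriggerFunction> _logger;

    public EventGridTriggerFunction(ILogger<EventGridTriggerFunction> logger) => _logger = logger;

    // The worker deserializes the Event Grid event payload into MyEventType.
    [Function("EventGridTriggerFunction")]
    public void Run([EventGridTrigger] MyEventType input)
    {
        _logger.LogInformation("Event {Id} received with subject {Subject}", input.Id, input.Subject);
    }
}
```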
+# [In-process model](#tab/in-process)
The following example shows a Functions version 4.x function that uses a `CloudEvent` binding parameter:
namespace Company.Function
} } ```
-# [Isolated process](#tab/isolated-process)
-
-When running your C# function in an isolated worker process, you need to define a custom type for event properties. The following example defines a `MyEventType` class.
--
-The following example shows how the custom type is used in both the trigger and an Event Grid output binding:
-- ::: zone-end
def main(event: func.EventGridEvent):
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#event-grid-trigger).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+Here's an `EventGridTrigger` attribute in a method signature:
++
+# [In-process model](#tab/in-process)
Here's an `EventGridTrigger` attribute in a method signature:
Here's an `EventGridTrigger` attribute in a method signature:
public static void EventGridTest([EventGridTrigger] JObject eventGridEvent, ILogger log) { ```
-# [Isolated process](#tab/isolated-process)
-
-Here's an `EventGridTrigger` attribute in a method signature:
-- ::: zone-end
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
This reference shows how to connect to Azure Event Grid using Azure Functions tr
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+# [In-process model](#tab/in-process)
-# [Isolated process](#tab/isolated-process)
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
The Event Grid output binding is only available for Functions 2.x and higher. Ev
The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see binding type details for the mode and version.
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
This article supports both programming models.
::: zone pivot="programming-language-csharp"
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that writes a message string to an event hub, using the method return value as the output:
++
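Since the snippet itself isn't reproduced here, this is a minimal sketch of an Event Hubs output binding in the isolated worker model that uses the method return value; it assumes the Worker extension's `EventHubOutput` attribute and the Timer extension's `TimerInfo` type, and the hub and setting names are placeholders:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;

public class EventHubOutputFunction
{
    // The string returned by the method is sent as a message to the placeholder
    // event hub "myeventhub" using the "EventHubConnectionAppSetting" connection.
    [Function("TimerToEventHub")]
    [EventHubOutput("myeventhub", Connection = "EventHubConnectionAppSetting")]
    public string Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        return $"Message generated at {DateTime.UtcNow:O}";
    }
}
```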
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that writes a message to an event hub, using the method return value as the output:
public static async Task Run(
} } ```
-# [Isolated process](#tab/isolated-process)
-
-The following example shows a [C# function](dotnet-isolated-process-guide.md) that writes a message string to an event hub, using the method return value as the output:
-- ::: zone-end
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#event-hubs-output).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-Use the [EventHubAttribute] to define an output binding to an event hub, which supports the following properties.
+Use the [EventHubOutputAttribute] to define an output binding to an event hub, which supports the following properties.
| Parameters | Description|
||-|
|**EventHubName** | The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. |
|**Connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).|
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
-Use the [EventHubOutputAttribute] to define an output binding to an event hub, which supports the following properties.
+Use the [EventHubAttribute] to define an output binding to an event hub, which supports the following properties.
| Parameters | Description|
||-|
azure-functions Functions Bindings Http Webhook Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-output.md
The default return value for an HTTP-triggered function is:
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries don't require an attribute. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#http-output).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
A return value attribute isn't required. To learn more, see [Usage](#usage).
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
A return value attribute isn't required. To learn more, see [Usage](#usage).
To send an HTTP response, use the language-standard response patterns.
::: zone pivot="programming-language-csharp"

The response type depends on the C# mode:
-# [In-process](#tab/in-process)
-
-The HTTP triggered function returns a type of [IActionResult] or `Task<IActionResult>`.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
The HTTP triggered function returns an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object or a `Task<HttpResponseData>`. If the app uses [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration), it could also use [IActionResult], `Task<IActionResult>`, [HttpResponse], or `Task<HttpResponse>`.

[IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult
[HttpResponse]: /dotnet/api/microsoft.aspnetcore.http.httpresponse
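For illustration, here's a minimal sketch of a function that returns an `HttpResponseData` object, modeled on the standard isolated worker HTTP template; the function name is a placeholder:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class HttpResponseExample
{
    // Builds and returns an HttpResponseData object from the incoming request.
    [Function("HttpResponseExample")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
        response.WriteString("Hello from the isolated worker model");
        return response;
    }
}
```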
+# [In-process model](#tab/in-process)
+
+The HTTP triggered function returns a type of [IActionResult] or `Task<IActionResult>`.
+ ::: zone-end
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
This article supports both programming models.
The code in this article defaults to .NET Core syntax, used in Functions version 2.x and higher. For information on the 1.x syntax, see the [1.x functions templates](https://github.com/Azure/azure-functions-templates/tree/v1.x/Functions.Templates/Templates).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following example shows an HTTP trigger that returns a "hello world" response as an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object:
++
+The following example shows an HTTP trigger that returns a "hello, world" response as an [IActionResult], using [ASP.NET Core integration in .NET Isolated]:
+
+```csharp
+[Function("HttpFunction")]
+public IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
+{
+ return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
+}
+```
+
+[IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult
+
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that looks for a `name` parameter either in the query string or the body of the HTTP request. Notice that the return value is used for the output binding, but a return value attribute isn't required.
public static async Task<IActionResult> Run(
} ```
-# [Isolated process](#tab/isolated-process)
-
-The following example shows an HTTP trigger that returns a "hello world" response as an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object:
--
-The following example shows an HTTP trigger that returns a "hello, world" response as an [IActionResult], using [ASP.NET Core integration in .NET Isolated]:
-
-```csharp
-[Function("HttpFunction")]
-public IActionResult Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
-{
- return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
-}
-```
-
-[IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult
- ::: zone-end
def main(req: func.HttpRequest) -> func.HttpResponse:
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#http-trigger).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-In [in-process functions](functions-dotnet-class-library.md), the `HttpTriggerAttribute` supports the following parameters:
+In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `HttpTriggerAttribute` supports the following parameters:
| Parameters | Description|
||-|
| **AuthLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). |
| **Methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
| **Route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
-| **WebHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](#webhook-type).|
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
-In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `HttpTriggerAttribute` supports the following parameters:
+In [in-process functions](functions-dotnet-class-library.md), the `HttpTriggerAttribute` supports the following parameters:
| Parameters | Description|
||-|
| **AuthLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). |
| **Methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
| **Route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
+| **WebHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](#webhook-type).|
The [HttpTrigger](/java/api/com.microsoft.azure.functions.annotation.httptrigger
### Payload
-# [In-process](#tab/in-process)
-
-The trigger input type is declared as either `HttpRequest` or a custom type. If you choose `HttpRequest`, you get full access to the request object. For a custom type, the runtime tries to parse the JSON request body to set the object properties.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
The trigger input type is declared as one of the following types:
namespace AspNetIntegration
} ```
+# [In-process model](#tab/in-process)
+
+The trigger input type is declared as either `HttpRequest` or a custom type. If you choose `HttpRequest`, you get full access to the request object. For a custom type, the runtime tries to parse the JSON request body to set the object properties.
+ ::: zone-end
You can customize this route using the optional `route` property on the HTTP tri
::: zone pivot="programming-language-csharp"
-# [In-process](#tab/in-process)
-
-The following C# function code accepts two parameters `category` and `id` in the route and writes a response using both parameters.
-
-```csharp
-[FunctionName("Function1")]
-public static IActionResult Run(
-[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "products/{category:alpha}/{id:int?}")] HttpRequest req,
-string category, int? id, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- var message = String.Format($"Category: {category}, ID: {id}");
- return (ActionResult)new OkObjectResult(message);
-}
-```
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
The following function code accepts two parameters `category` and `id` in the route and writes a response using both parameters.
FunctionContext executionContext)
} ```
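Because the isolated worker snippet is an include that isn't shown here, the following is a minimal sketch under the assumption that route parameters bind to method parameters the same way as in the in-process example below:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class RouteExample
{
    // {category:alpha} is required and alphabetic; {id:int?} is an optional integer.
    [Function("RouteExample")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post",
            Route = "products/{category:alpha}/{id:int?}")] HttpRequestData req,
        string category,
        int? id)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString($"Category: {category}, ID: {id}");
        return response;
    }
}
```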
+# [In-process model](#tab/in-process)
+
+The following C# function code accepts two parameters `category` and `id` in the route and writes a response using both parameters.
+
+```csharp
+[FunctionName("Function1")]
+public static IActionResult Run(
+[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "products/{category:alpha}/{id:int?}")] HttpRequest req,
+string category, int? id, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ var message = String.Format($"Category: {category}, ID: {id}");
+ return (ActionResult)new OkObjectResult(message);
+}
+```
::: zone-end
You can also read this information from binding data. This capability is only av
::: zone pivot="programming-language-csharp"

Information regarding authenticated clients is available as a [ClaimsPrincipal], which is included in the request context as shown in the following example:
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
+
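Purely as an illustrative sketch, assuming App Service authentication is enabled and forwards the signed-in user name in the `x-ms-client-principal-name` header described in the linked article:

```csharp
using System.Linq;
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class WhoAmI
{
    // App Service authentication forwards the signed-in user name in the
    // x-ms-client-principal-name request header.
    [Function("WhoAmI")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        req.Headers.TryGetValues("x-ms-client-principal-name", out var values);
        var name = values?.FirstOrDefault() ?? "anonymous";

        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString($"Hello, {name}");
        return response;
    }
}
```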
+# [In-process model](#tab/in-process)
```csharp using System.Net;
public static void Run(JObject input, ClaimsPrincipal principal, ILogger log)
return; } ```
-# [Isolated process](#tab/isolated-process)
-
-The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
- ::: zone-end
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Azure Functions may be invoked via HTTP requests to build serverless APIs and re
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
azure-functions Functions Bindings Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-output.md
The output binding allows an Azure Functions app to write messages to a Kafka to
The usage of the binding depends on the C# modality used in your function app, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An [isolated worker process class library](dotnet-isolated-process-guide.md) is a compiled C# function that runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime.
+
The attributes you use depend on the specific event provider.
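As a rough illustration of the isolated worker model, here's a minimal sketch of a Kafka output binding; it assumes the Worker Kafka extension's `KafkaOutput` attribute takes a (broker list setting, topic) constructor and that the Timer extension provides `TimerInfo`, and all names are placeholders:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;

public class KafkaOutputFunction
{
    // Writes one message to the placeholder topic "mytopic" on the broker
    // referenced by the "BrokerList" app setting each time the timer fires.
    [Function("KafkaOutput")]
    [KafkaOutput("BrokerList", "mytopic")]
    public string Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        return $"Kafka message generated at {DateTime.UtcNow:O}";
    }
}
```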
azure-functions Functions Bindings Kafka Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-trigger.md
You can use the Apache Kafka trigger in Azure Functions to run your function cod
The usage of the trigger depends on the C# modality used in your function app, which can be one of the following modes:
-# [In-process](#tab/in-process)
-
-An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An [isolated worker process class library](dotnet-isolated-process-guide.md) is a compiled C# function that runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime.
+
The attributes you use depend on the specific event provider.
The following table explains the binding configuration properties that you set i
::: zone pivot="programming-language-csharp"
-# [In-process](#tab/in-process)
-
-Kafka events are passed to the function as `KafkaEventData<string>` objects or arrays. Strings and string arrays that are JSON payloads are also supported.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Kafka events are currently supported as strings and string arrays that are JSON payloads.
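Here's a minimal sketch of a Kafka trigger in the isolated worker model receiving a single string event; it assumes the Worker Kafka extension's `KafkaTrigger` attribute with a (broker list setting, topic) constructor and a `ConsumerGroup` property, and all names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class KafkaTriggerFunction
{
    private readonly ILogger<KafkaTriggerFunction> _logger;

    public KafkaTriggerFunction(ILogger<KafkaTriggerFunction> logger) => _logger = logger;

    // Each Kafka event is delivered as a string payload.
    [Function("KafkaTrigger")]
    public void Run(
        [KafkaTrigger("BrokerList", "mytopic", ConsumerGroup = "$Default")] string kafkaEvent)
    {
        _logger.LogInformation("Kafka event: {Event}", kafkaEvent);
    }
}
```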
+# [In-process model](#tab/in-process)
+
+Kafka events are passed to the function as `KafkaEventData<string>` objects or arrays. Strings and string arrays that are JSON payloads are also supported.
+
::: zone-end
azure-functions Functions Bindings Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md
The Kafka extension for Azure Functions lets you write values out to [Apache Kaf
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
-
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-
-Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kafka).
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).

Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Kafka).
-<!--
-# [C# script](#tab/csharp-script)
-Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+# [In-process model](#tab/in-process)
+
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Kafka).
-The Kafka extension is part of an [extension bundle], which is specified in your host.json project file. When you create a project that targets version 2.x or later, you should already have this bundle installed. To learn more, see [extension bundle].
>
azure-functions Functions Bindings Rabbitmq Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+++
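The example snippets are includes that aren't shown here, so the following is a minimal sketch of a RabbitMQ output binding in the isolated worker model, assuming the Worker RabbitMQ extension's `RabbitMQOutput` and `RabbitMQTrigger` attributes with `QueueName`/`ConnectionStringSetting` properties; the queue and setting names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class RabbitMQOutputFunction
{
    private readonly ILogger<RabbitMQOutputFunction> _logger;

    public RabbitMQOutputFunction(ILogger<RabbitMQOutputFunction> logger) => _logger = logger;

    // Forwards each message from the source queue to the placeholder "outputQueue".
    [Function("RabbitMQOutput")]
    [RabbitMQOutput(QueueName = "outputQueue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")]
    public string Run(
        [RabbitMQTrigger("sourceQueue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")] string inputMessage)
    {
        _logger.LogInformation("Forwarding message: {Message}", inputMessage);
        return inputMessage;
    }
}
```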
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that sends a RabbitMQ message when triggered by a TimerTrigger every 5 minutes using the method return value as the output:
namespace Company.Function
} ```
-# [Isolated process](#tab/isolated-process)
---
-# [C# Script](#tab/csharp-script)
-
-The following example shows a RabbitMQ output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads in the message from an HTTP trigger and outputs it to the RabbitMQ queue.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "type": "httpTrigger",
- "direction": "in",
- "authLevel": "function",
- "name": "input",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "type": "rabbitMQ",
- "name": "outputMessage",
- "queueName": "outputQueue",
- "connectionStringSetting": "rabbitMQConnectionAppSetting",
- "direction": "out"
- }
- ]
-}
-```
-
-Here's the C# script code:
-
-```C#
-using System;
-using Microsoft.Extensions.Logging;
-
-public static void Run(string input, out string outputMessage, ILogger log)
-{
- log.LogInformation(input);
- outputMessage = input;
-}
-```
::: zone-end
def main(req: func.HttpRequest, outputMessage: func.Out[str]) -> func.HttpRespon
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a [function.json configuration file](#configuration).
The attribute's constructor takes the following parameters:
The attribute's constructor takes the following parameters:
|**ConnectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. The trigger won't work when you specify the connection string directly instead of through an app setting. For example, when you have set `ConnectionStringSetting: "rabbitMQConnection"`, then in both the *local.settings.json* and in your function app you need a setting like `"RabbitMQConnection" : "< ActualConnectionstring >"`.|
|**Port**|Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of `5672`. |
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
+
+Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library:
+++
+# [In-process model](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQAttribute](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/RabbitMQAttribute.cs).
ILogger log)
} ```
-# [Isolated process](#tab/isolated-process)
-
-In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
-
-Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library:
---
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `RabbitMQ`.|
-|**direction** | Must be set to `out`.|
-|**name** | The name of the variable that represents the queue in function code. |
-|**queueName**| See the **QueueName** attribute above.|
-|**hostName**|See the **HostName** attribute above.|
-|**userNameSetting**|See the **UserNameSetting** attribute above.|
-|**passwordSetting**|See the **PasswordSetting** attribute above.|
-|**connectionStringSetting**|See the **ConnectionStringSetting** attribute above.|
-|**port**|See the **Port** attribute above.|
- ::: zone-end
See the [Example section](#example) for complete examples.
::: zone pivot="programming-language-csharp"

The parameter type supported by the RabbitMQ trigger depends on the Functions runtime version, the extension package version, and the C# modality used.
-# [In-process](#tab/in-process)
-
-Use the following parameter types for the output binding:
-
-* `byte[]` - If the parameter value is null when the function exits, Functions doesn't create a message.
-* `string` - If the parameter value is null when the function exits, Functions doesn't create a message.
-* `POCO` - The message is formatted as a C# object.
-
-When working with C# functions:
-
-* Async functions need a return value or `IAsyncCollector` instead of an `out` parameter.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
The RabbitMQ bindings currently support only string and serializable object types when running in an isolated worker process.
-# [C# script](#tab/csharp-script)
+# [In-process model](#tab/in-process)
Use the following parameter types for the output binding:

* `byte[]` - If the parameter value is null when the function exits, Functions doesn't create a message.
* `string` - If the parameter value is null when the function exits, Functions doesn't create a message.
-* `POCO` - If the parameter value isn't formatted as a C# object, an error will be received. For a complete example, see C# Script [example](#example).
+* `POCO` - The message is formatted as a C# object.
-When working with C# Script functions:
+When working with C# functions:
* Async functions need a return value or `IAsyncCollector` instead of an `out` parameter.
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
++
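Since the snippet itself isn't reproduced here, this is a minimal sketch of a RabbitMQ trigger in the isolated worker model, assuming the Worker RabbitMQ extension's `RabbitMQTrigger` attribute with a queue name constructor and a `ConnectionStringSetting` property; the names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class RabbitMQTriggerFunction
{
    private readonly ILogger<RabbitMQTriggerFunction> _logger;

    public RabbitMQTriggerFunction(ILogger<RabbitMQTriggerFunction> logger) => _logger = logger;

    // Logs each message read from the placeholder "queue".
    [Function("RabbitMQTrigger")]
    public void Run(
        [RabbitMQTrigger("queue", ConnectionStringSetting = "rabbitMQConnectionAppSetting")] string myQueueItem)
    {
        _logger.LogInformation("RabbitMQ message: {Message}", myQueueItem);
    }
}
```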
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that reads and logs the RabbitMQ message as a [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq-dotnet-client/api/RabbitMQ.Client.Events.BasicDeliverEventArgs.html):
namespace Company.Function
Like with JSON objects, an error will occur if the message isn't properly formatted as a C# object. If it is, it's then bound to the variable pocObj, which can be used for whatever it's needed for.
-# [Isolated process](#tab/isolated-process)
--
-# [C# Script](#tab/csharp-script)
-
-The following example shows a RabbitMQ trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads and logs the RabbitMQ message.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "name": "myQueueItem",
- "type": "rabbitMQTrigger",
- "direction": "in",
- "queueName": "queue",
- "connectionStringSetting": "rabbitMQConnectionAppSetting"
- }
- ]
-}
-```
-
-Here's the C# script code:
-
-```C#
-using System;
-
-public static void Run(string myQueueItem, ILogger log)
-{
- log.LogInformation($"C# Script RabbitMQ trigger function processed: {myQueueItem}");
-}
-```
::: zone-end
def main(myQueueItem) -> None:
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the <!--attribute API here--> attribute to define the function. C# script instead uses a [function.json configuration file](#configuration).
The attribute's constructor takes the following parameters:
The attribute's constructor takes the following parameters:
|**ConnectionStringSetting**|The name of the app setting that contains the RabbitMQ message queue connection string. The trigger won't work when you specify the connection string directly instead of through an app setting. For example, when you have set `ConnectionStringSetting: "rabbitMQConnection"`, then in both the *local.settings.json* and in your function app you need a setting like `"RabbitMQConnection" : "< ActualConnectionstring >"`.|
|**Port**|Gets or sets the port used. Defaults to 0, which points to the RabbitMQ client's default port setting of `5672`. |
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
+
+Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library:
++
+# [In-process model](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
public static void RabbitMQTest([RabbitMQTrigger("queue")] string message, ILogg
} ```
-# [Isolated process](#tab/isolated-process)
-
-In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
-
-Here's a `RabbitMQTrigger` attribute in a method signature for an isolated worker process library:
--
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `RabbitMQTrigger`.|
-|**direction** | Must be set to "in".|
-|**name** | The name of the variable that represents the queue in function code. |
-|**queueName**| See the **QueueName** attribute above.|
-|**hostName**|See the **HostName** attribute above.|
-|**userNameSetting**|See the **UserNameSetting** attribute above.|
-|**passwordSetting**|See the **PasswordSetting** attribute above.|
-|**connectionStringSetting**|See the **ConnectionStringSetting** attribute above.|
-|**port**|See the **Port** attribute above.|
- ::: zone-end
See the [Example section](#example) for complete examples.
::: zone pivot="programming-language-csharp"

The parameter type supported by the RabbitMQ trigger depends on the C# modality used.
-# [In-process](#tab/in-process)
-
-The default message type is [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq-dotnet-client/api/RabbitMQ.Client.Events.BasicDeliverEventArgs.html), and the `Body` property of the RabbitMQ Event can be read as the types listed below:
-
-* `An object serializable as JSON` - The message is delivered as a valid JSON string.
-* `string`
-* `byte[]`
-* `POCO` - The message is formatted as a C# object. For complete code, see C# [example](#example).
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
The RabbitMQ bindings currently support only string and serializable object types when running in an isolated process.
-# [C# script](#tab/csharp-script)
+# [In-process model](#tab/in-process)
The default message type is [RabbitMQ Event](https://rabbitmq.github.io/rabbitmq-dotnet-client/api/RabbitMQ.Client.Events.BasicDeliverEventArgs.html), and the `Body` property of the RabbitMQ Event can be read as the types listed below:

* `An object serializable as JSON` - The message is delivered as a valid JSON string.
* `string`
* `byte[]`
-* `POCO` - The message is formatted as a C# object. For a complete example, see C# Script [example](#example).
+* `POCO` - The message is formatted as a C# object. For complete code, see C# [example](#example).
azure-functions Functions Bindings Rabbitmq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq.md
Before working with the RabbitMQ extension, you must [set up your RabbitMQ endpo
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
-
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-
-Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.RabbitMQ).
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).

Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Rabbitmq).
-# [C# script](#tab/csharp-script)
+# [In-process model](#tab/in-process)
-Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-You can install this version of the extension in your function app by registering the [extension bundle], version 2.x, or a later version.
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.RabbitMQ).
azure-functions Functions Bindings Return Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-return-value.md
Set the `name` property in *function.json* to `$return`. If there are multiple o
How return values are used depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+See [Output bindings in the .NET worker guide](./dotnet-isolated-process-guide.md#output-bindings) for details and examples.
+
+# [In-process model](#tab/in-process)
In a C# class library, apply the output binding attribute to the method return value. In C# and C# script, alternative ways to send data to an output binding are `out` parameters and [collector objects](functions-reference-csharp.md#writing-multiple-output-values).
public static Task<string> Run([QueueTrigger("inputqueue")]WorkItem input, ILogg
} ```
-# [Isolated process](#tab/isolated-process)
-
-See [Output bindings in the .NET worker guide](./dotnet-isolated-process-guide.md#output-bindings) for details and examples.
- ::: zone-end
azure-functions Functions Bindings Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-sendgrid.md
This article explains how to send email by using [SendGrid](https://sendgrid.com
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
-
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-# [C# script](#tab/csharp-script)
+# [In-process model](#tab/in-process)
-Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
Add the extension to your project by installing the [NuGet package](https://www.
Functions 1.x doesn't support running in an isolated worker process.
-# [Functions v2.x+](#tab/functionsv2/csharp-script)
-
-This version of the extension should already be available to your function app with [extension bundle], version 2.x.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-You can add the extension to your project by explicitly installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.SendGrid), version 2.x. To learn more, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions).
- ::: zone-end
You can add the extension to your project by explicitly installing the [NuGet pa
::: zone pivot="programming-language-csharp"

[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+We don't currently have an example for using the SendGrid binding in a function app running in an isolated worker process.
+
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that uses a Service Bus queue trigger and a SendGrid output binding.
public class OutgoingEmail
You can omit setting the attribute's `ApiKey` property if you have your API key in an app setting named "AzureWebJobsSendGridApiKey".
-# [Isolated process](#tab/isolated-process)
-
-We don't currently have an example for using the SendGrid binding in a function app running in an isolated worker process.
-
-# [C# Script](#tab/csharp-script)
-
-The following example shows a SendGrid output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "type": "queueTrigger",
- "name": "mymsg",
- "queueName": "myqueue",
- "connection": "AzureWebJobsStorage",
- "direction": "in"
- },
- {
- "type": "sendGrid",
- "name": "$return",
- "direction": "out",
- "apiKey": "SendGridAPIKeyAsAppSetting",
- "from": "{FromEmail}",
- "to": "{ToEmail}"
- }
- ]
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
-#r "SendGrid"
-
-using System;
-using SendGrid.Helpers.Mail;
-using Microsoft.Azure.WebJobs.Host;
-
-public static SendGridMessage Run(Message mymsg, ILogger log)
-{
- SendGridMessage message = new SendGridMessage()
- {
- Subject = $"{mymsg.Subject}"
- };
-
- message.AddContent("text/plain", $"{mymsg.Content}");
-
- return message;
-}
-public class Message
-{
- public string ToEmail { get; set; }
- public string FromEmail { get; set; }
- public string Subject { get; set; }
- public string Content { get; set; }
-}
-```
::: zone-end
public class HttpTriggerSendGrid {
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-In [in-process](functions-dotnet-class-library.md) function apps, use the [SendGridAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.SendGrid/SendGridAttribute.cs), which supports the following parameters.
+In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `SendGridOutputAttribute` supports the following parameters:
| Attribute/annotation property | Description |
|-|-|
In [in-process](functions-dotnet-class-library.md) function apps, use the [SendG
| **Subject** | (Optional) The subject of the email. |
| **Text** | (Optional) The email content. |
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
-In [isolated worker process](dotnet-isolated-process-guide.md) function apps, the `SendGridOutputAttribute` supports the following parameters:
+In [in-process](functions-dotnet-class-library.md) function apps, use the [SendGridAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.SendGrid/SendGridAttribute.cs), which supports the following parameters.
| Attribute/annotation property | Description |
|-|-|
In [isolated worker process](dotnet-isolated-process-guide.md) function apps, th
| **Subject** | (Optional) The subject of the email. |
| **Text** | (Optional) The email content. |
-# [C# Script](#tab/csharp-script)
-
-The following table explains the trigger configuration properties that you set in the *function.json* file:
-
-| *function.json* property | Description |
-|--||
-| **type** | Must be set to `sendGrid`.|
-| **direction** | Must be set to `out`.|
-| **name** | The variable name used in function code for the request or request body. This value is `$return` when there is only one return value. |
-| **apiKey** | The name of an app setting that contains your API key. If not set, the default app setting name is *AzureWebJobsSendGridApiKey*.|
-| **to**| (Optional) The recipient's email address. |
-| **from**| (Optional) The sender's email address. |
-| **subject**| (Optional) The subject of the email. |
-| **text**| (Optional) The email content. |
- ::: zone-end
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
This article supports both programming models.
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue:
+++
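Because the snippet is an include that isn't shown here, the following is a minimal sketch of the scenario described above (receive from one queue, log, send to another) in the isolated worker model, using the `ServiceBusOutput` attribute documented below; queue and setting names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ServiceBusOutputFunction
{
    private readonly ILogger<ServiceBusOutputFunction> _logger;

    public ServiceBusOutputFunction(ILogger<ServiceBusOutputFunction> logger) => _logger = logger;

    // Receives a message from "sourcequeue" and sends it on to "destinationqueue".
    [Function("ServiceBusForwarder")]
    [ServiceBusOutput("destinationqueue", Connection = "ServiceBusConnection")]
    public string Run(
        [ServiceBusTrigger("sourcequeue", Connection = "ServiceBusConnection")] string message)
    {
        _logger.LogInformation("Forwarding Service Bus message: {Message}", message);
        return message;
    }
}
```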
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that sends a Service Bus queue message:
public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log)
return input.Text; } ```
-# [Isolated process](#tab/isolated-process)
-
-The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue:
--- ::: zone-end
def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#service-bus-output).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+In [C# class libraries](dotnet-isolated-process-guide.md), use the [ServiceBusOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.ServiceBus/src/ServiceBusOutputAttribute.cs) to define the queue or topic written to by the output.
+
+The following table explains the properties you can set using the attribute:
+
+| Property |Description|
+| | |
+|**EntityType**|Sets the entity type as either `Queue` for sending messages to a queue or `Topic` when sending messages to a topic. |
+|**QueueOrTopicName**|Name of the topic or queue to send messages to. Use `EntityType` to set the destination type.|
+|**Connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
+
+# [In-process model](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), use the [ServiceBusAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusAttribute.cs).
For a complete example, see [Example](#example).
You can use the `ServiceBusAccount` attribute to specify the Service Bus account to use at class, method, or parameter level. For more information, see [Attributes](functions-bindings-service-bus-trigger.md#attributes) in the trigger reference.
-# [Isolated process](#tab/isolated-process)
-
-In [C# class libraries](dotnet-isolated-process-guide.md), use the [ServiceBusOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.ServiceBus/src/ServiceBusOutputAttribute.cs) to define the queue or topic written to by the output.
-
-The following table explains the properties you can set using the attribute:
-
-| Property |Description|
-| | |
-|**EntityType**|Sets the entity type as either `Queue` for sending messages to a queue or `Topic` when sending messages to a topic. |
-|**QueueOrTopicName**|Name of the topic or queue to send messages to. Use `EntityType` to set the destination type.|
-|**Connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
- ::: zone-end
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
This article supports both programming models.
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue:
+++
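As the snippet isn't reproduced in this digest, here's a minimal sketch of a Service Bus queue trigger in the isolated worker model that logs the message body; the queue and connection setting names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ServiceBusTriggerFunction
{
    private readonly ILogger<ServiceBusTriggerFunction> _logger;

    public ServiceBusTriggerFunction(ILogger<ServiceBusTriggerFunction> logger) => _logger = logger;

    // Logs the body of each message that arrives on the placeholder queue "myqueue".
    [Function("ServiceBusQueueTrigger")]
    public void Run([ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message)
    {
        _logger.LogInformation("Service Bus message body: {Body}", message);
    }
}
```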
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that reads [message metadata](#message-metadata) and logs a Service Bus queue message:
public static void Run(
log.LogInformation($"MessageId={messageId}"); } ```
-# [Isolated process](#tab/isolated-process)
-
-The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue:
--- ::: zone-end
def main(msg: azf.ServiceBusMessage) -> str:
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#service-bus-trigger).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
The following table explains the properties you can set using this trigger attribute:
The following table explains the properties you can set using this trigger attri
|**TopicName**|Name of the topic to monitor. Set only if monitoring a topic, not for a queue.| |**SubscriptionName**|Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.| |**Connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
-|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
|**IsBatched**| Messages are delivered in batches. Requires an array or collection type. | |**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
-|**AutoComplete**|`true` Whether the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed. |
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
The following table explains the properties you can set using this trigger attribute:
The following table explains the properties you can set using this trigger attri
|**TopicName**|Name of the topic to monitor. Set only if monitoring a topic, not for a queue.| |**SubscriptionName**|Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.| |**Connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
+|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
|**IsBatched**| Messages are delivered in batches. Requires an array or collection type. | |**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
+|**AutoComplete**|Whether the trigger should automatically call complete after processing, or whether the function code manually calls complete.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed. |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
Azure Functions integrates with [Azure Service Bus](https://azure.microsoft.com/
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x or later._
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.servicebus).
-Add the extension to your project installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus).
+# [In-process model](#tab/in-process)
-# [Isolated process](#tab/isolated-process)
+_This section describes using a [class library](./functions-dotnet-class-library.md). For [C# scripting], you would need to instead [install the extension bundle][Update your extensions], version 2.x or later._
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-Add the extension to your project installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.servicebus).
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus).
Functions 1.x apps automatically have a reference to the extension.
The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following:
+# [Isolated worker model](#tab/isolated-process)
+
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
+ # [In-process class library](#tab/in-process) An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
-# [Isolated process](#tab/isolated-process)
-
-An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
- Choose a version to see binding type details for the mode and version.
azure-functions Functions Bindings Signalr Service Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that acquires SignalR connection information using the input binding and returns it over HTTP.
-
-```cs
-[FunctionName("negotiate")]
-public static SignalRConnectionInfo Negotiate(
- [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req,
- [SignalRConnectionInfo(HubName = "chat")]SignalRConnectionInfo connectionInfo)
-{
- return connectionInfo;
-}
-```
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
The following example shows a [C# function](dotnet-isolated-process-guide.md) that acquires SignalR connection information using the input binding and returns it over HTTP. :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/SignalR/SignalRNegotiationFunctions.cs" id="snippet_negotiate":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a SignalR connection info input binding in a *function.json* file and a [C# Script function](functions-reference-csharp.md) that uses the binding to return the connection information.
-
-Here's binding data in the *function.json* file:
-
-Example function.json:
+# [In-process model](#tab/in-process)
-```json
-{
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "hubName": "chat",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "direction": "in"
-}
-```
-
-Here's the C# Script code:
+The following example shows a [C# function](functions-dotnet-class-library.md) that acquires SignalR connection information using the input binding and returns it over HTTP.
```cs
-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
-using Microsoft.Azure.WebJobs.Extensions.SignalRService;
-
-public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo connectionInfo)
+[FunctionName("negotiate")]
+public static SignalRConnectionInfo Negotiate(
+ [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req,
+ [SignalRConnectionInfo(HubName = "chat")]SignalRConnectionInfo connectionInfo)
{ return connectionInfo; }
App Service authentication sets HTTP headers named `x-ms-client-principal-id` an
::: zone pivot="programming-language-csharp"
-# [In-process](#tab/in-process)
-
-You can set the `UserId` property of the binding to the value from either header using a [binding expression](#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
-
-```cs
-[FunctionName("negotiate")]
-public static SignalRConnectionInfo Negotiate(
- [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req,
- [SignalRConnectionInfo
- (HubName = "chat", UserId = "{headers.x-ms-client-principal-id}")]
- SignalRConnectionInfo connectionInfo)
-{
- // connectionInfo contains an access key token with a name identifier claim set to the authenticated user
- return connectionInfo;
-}
-```
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
```cs [Function("Negotiate")]
public static string Negotiate([HttpTrigger(AuthorizationLevel.Anonymous)] HttpR
} ```
-# [C# Script](#tab/csharp-script)
-
-You can set the `userId` property of the binding to the value from either header using a [binding expression](#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
-
-Example function.json:
+# [In-process model](#tab/in-process)
-```json
-{
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "hubName": "chat",
- "userId": "{headers.x-ms-client-principal-id}",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "direction": "in"
-}
-```
-
-Here's the C# Script code:
+You can set the `UserId` property of the binding to the value from either header using a [binding expression](#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
```cs
-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
-using Microsoft.Azure.WebJobs.Extensions.SignalRService;
-
-public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo connectionInfo)
+[FunctionName("negotiate")]
+public static SignalRConnectionInfo Negotiate(
+ [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req,
+ [SignalRConnectionInfo
+ (HubName = "chat", UserId = "{headers.x-ms-client-principal-id}")]
+ SignalRConnectionInfo connectionInfo)
{
- // connectionInfo contains an access key token with a name identifier
- // claim set to the authenticated user
+ // connectionInfo contains an access key token with a name identifier claim set to the authenticated user
return connectionInfo; } ```+ ::: zone-end
public SignalRConnectionInfo negotiate(
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-The following table explains the properties of the `SignalRConnectionInfo` attribute:
+The following table explains the properties of the `SignalRConnectionInfoInput` attribute:
| Attribute property |Description| ||-|
The following table explains the properties of the `SignalRConnectionInfo` attri
|**IdToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **ClaimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. | |**ClaimTypeList**| Optional. A list of claim types, which filter the claims in **IdToken** . |
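A minimal sketch of a negotiate function using this attribute, assuming a hub named `chat` (the hub name and function name are placeholders):

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class NegotiateExample
{
    // Returns the serialized SignalR connection information (URL and access token)
    // supplied by the input binding for the assumed "chat" hub.
    [Function("negotiate")]
    public static string Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req,
        [SignalRConnectionInfoInput(HubName = "chat")] string connectionInfo)
    {
        return connectionInfo;
    }
}
```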
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
-The following table explains the properties of the `SignalRConnectionInfoInput` attribute:
+The following table explains the properties of the `SignalRConnectionInfo` attribute:
| Attribute property |Description| ||-|
The following table explains the properties of the `SignalRConnectionInfoInput`
|**IdToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **ClaimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. | |**ClaimTypeList**| Optional. A list of claim types, which filter the claims in **IdToken** . |
-# [C# Script](#tab/csharp-script)
-
-The following table explains the binding configuration properties that you set in the *function.json* file.
-
-|function.json property | Description|
-||--|
-|**type**| Must be set to `signalRConnectionInfo`.|
-|**direction**| Must be set to `in`.|
-|**name**| Variable name used in function code for connection info object. |
-|**hubName**| Required. The hub name. |
-|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
-|**userId**| Optional. The user identifier of a SignalR connection. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
-|**idToken**| Optional. A JWT token whose claims will be added to the user claims. It should be used together with **claimTypeList**. You can use a [binding expression](#binding-expressions-for-http-trigger) to bind the value to an HTTP request header or query. |
-|**claimTypeList**| Optional. A list of claim types, which filter the claims in **idToken** . |
- ::: zone-end
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
For information on setup and configuration details, see the [overview](functions
### Broadcast to all clients
-# [In-process](#tab/in-process)
-
-The following example shows a function that sends a message using the output binding to all connected clients. The *target* is the name of the method to be invoked on each client. The *Arguments* property is an array of zero or more objects to be passed to the client method.
-
-```cs
-[FunctionName("SendMessage")]
-public static Task SendMessage(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message,
- [SignalR(HubName = "chat")]IAsyncCollector<SignalRMessage> signalRMessages)
-{
- return signalRMessages.AddAsync(
- new SignalRMessage
- {
- Target = "newMessage",
- Arguments = new [] { message }
- });
-}
-```
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
The following example shows a function that sends a message using the output binding to all connected clients. The *newMessage* is the name of the method to be invoked on each client. :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/SignalR/SignalROutputBindingFunctions2.cs" id="snippet_broadcast_to_all":::
-# [C# Script](#tab/csharp-script)
-
-Here's binding data in the *function.json* file:
-
-Example function.json:
-
-```json
-{
- "type": "signalR",
- "name": "signalRMessages",
- "hubName": "<hub_name>",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "direction": "out"
-}
-```
+# [In-process model](#tab/in-process)
-Here's the C# Script code:
+The following example shows a function that sends a message using the output binding to all connected clients. The *target* is the name of the method to be invoked on each client. The *Arguments* property is an array of zero or more objects to be passed to the client method.
```cs
-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
-using Microsoft.Azure.WebJobs.Extensions.SignalRService;
-
-public static Task Run(
- object message,
- IAsyncCollector<SignalRMessage> signalRMessages)
+[FunctionName("SendMessage")]
+public static Task SendMessage(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post")]object message,
+ [SignalR(HubName = "chat")]IAsyncCollector<SignalRMessage> signalRMessages)
{ return signalRMessages.AddAsync( new SignalRMessage
public SignalRMessage sendMessage(
You can send a message only to connections that have been authenticated to a user by setting the *user ID* in the SignalR message.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
++
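The referenced snippet isn't shown in this diff; a sketch of what it might look like, assuming a hub named `chat` and a placeholder user ID:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class SendToUserExample
{
    // The returned action invokes "newMessage" only on connections
    // authenticated as the placeholder user "userId1".
    [Function("SendMessageToUser")]
    [SignalROutput(HubName = "chat")]
    public static SignalRMessageAction Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        return new SignalRMessageAction("newMessage")
        {
            UserId = "userId1",
            Arguments = new object[] { "Hello, user." }
        };
    }
}
```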
+# [In-process model](#tab/in-process)
```cs [FunctionName("SendMessage")]
public static Task SendMessage(
} ```
-# [Isolated process](#tab/isolated-process)
--
-# [C# Script](#tab/csharp-script)
-
-Example function.json:
-
-```json
-{
- "type": "signalR",
- "name": "signalRMessages",
- "hubName": "<hub_name>",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "direction": "out"
-}
-```
-
-Here's the C# script code:
-
-```cs
-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
-using Microsoft.Azure.WebJobs.Extensions.SignalRService;
-
-public static Task Run(
- object message,
- IAsyncCollector<SignalRMessage> signalRMessages)
-{
- return signalRMessages.AddAsync(
- new SignalRMessage
- {
- // the message will only be sent to this user ID
- UserId = "userId1",
- Target = "newMessage",
- Arguments = new [] { message }
- });
-}
-```
- ::: zone-end
public SignalRMessage sendMessage(
You can send a message only to connections that have been added to a group by setting the *group name* in the SignalR message.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
++
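The referenced snippet isn't shown in this diff; a sketch of what it might look like, assuming a hub named `chat` and a placeholder group name:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class SendToGroupExample
{
    // The returned action invokes "newMessage" only on connections
    // that belong to the placeholder group "myGroup".
    [Function("SendMessageToGroup")]
    [SignalROutput(HubName = "chat")]
    public static SignalRMessageAction Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        return new SignalRMessageAction("newMessage")
        {
            GroupName = "myGroup",
            Arguments = new object[] { "Hello, group." }
        };
    }
}
```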
+# [In-process model](#tab/in-process)
```cs [FunctionName("SendMessage")]
public static Task SendMessage(
}); } ```
-# [Isolated process](#tab/isolated-process)
--
-# [C# Script](#tab/csharp-script)
-
-Example function.json:
-
-```json
-{
- "type": "signalR",
- "name": "signalRMessages",
- "hubName": "<hub_name>",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "direction": "out"
-}
-```
-
-Here's the C# Script code:
-
-```cs
-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
-using Microsoft.Azure.WebJobs.Extensions.SignalRService;
-
-public static Task Run(
- object message,
- IAsyncCollector<SignalRMessage> signalRMessages)
-{
- return signalRMessages.AddAsync(
- new SignalRMessage
- {
- // the message will be sent to the group with this name
- GroupName = "myGroup",
- Target = "newMessage",
- Arguments = new [] { message }
- });
-}
-```
- ::: zone-end
public SignalRMessage sendMessage(
SignalR Service allows users or connections to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage groups.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+Specify `SignalRGroupActionType` to add or remove a member. The following example removes a user from a group.
++
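A sketch of removing a user from a group through the output binding, assuming a hub named `chat`, a placeholder group `myGroup`, and a placeholder user ID; the `SignalRGroupAction` type and `SignalRGroupActionType` enum are assumed to come from the isolated worker SignalR extension:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class RemoveFromGroupExample
{
    // Removes the placeholder user "userId1" from the placeholder group "myGroup".
    [Function("RemoveFromGroup")]
    [SignalROutput(HubName = "chat")]
    public static SignalRGroupAction Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        return new SignalRGroupAction(SignalRGroupActionType.Remove)
        {
            GroupName = "myGroup",
            UserId = "userId1"
        };
    }
}
```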
+# [In-process model](#tab/in-process)
Specify `GroupAction` to add or remove a member. The following example adds a user to a group.
public static Task AddToGroup(
} ```
-# [Isolated process](#tab/isolated-process)
-
-Specify `SignalRGroupActionType` to add or remove a member. The following example removes a user from a group.
--
-# [C# Script](#tab/csharp-script)
-
-The following example adds a user to a group.
-
-Example *function.json*
-
-```json
-{
- "type": "signalR",
- "name": "signalRGroupActions",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "hubName": "chat",
- "direction": "out"
-}
-```
-
-*Run.csx*
-
-```cs
-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
-using Microsoft.Azure.WebJobs.Extensions.SignalRService;
-
-public static Task Run(
- HttpRequest req,
- ClaimsPrincipal claimsPrincipal,
- IAsyncCollector<SignalRGroupAction> signalRGroupActions)
-{
- var userIdClaim = claimsPrincipal.FindFirst(ClaimTypes.NameIdentifier);
- return signalRGroupActions.AddAsync(
- new SignalRGroupAction
- {
- UserId = userIdClaim.Value,
- GroupName = "myGroup",
- Action = GroupAction.Add
- });
-}
-```
-
-The following example removes a user from a group.
-
-Example *function.json*
-
-```json
-{
- "type": "signalR",
- "name": "signalRGroupActions",
- "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
- "hubName": "chat",
- "direction": "out"
-}
-```
-
-*Run.csx*
-
-```cs
-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
-using Microsoft.Azure.WebJobs.Extensions.SignalRService;
-
-public static Task Run(
- HttpRequest req,
- ClaimsPrincipal claimsPrincipal,
- IAsyncCollector<SignalRGroupAction> signalRGroupActions)
-{
- var userIdClaim = claimsPrincipal.FindFirst(ClaimTypes.NameIdentifier);
- return signalRGroupActions.AddAsync(
- new SignalRGroupAction
- {
- UserId = userIdClaim.Value,
- GroupName = "myGroup",
- Action = GroupAction.Remove
- });
-}
-```
- > [!NOTE]
public SignalRGroupAction removeFromGroup(
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a [function.json configuration file](#configuration).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-The following table explains the properties of the `SignalR` output attribute.
+The following table explains the properties of the `SignalROutput` attribute.
| Attribute property |Description| ||-| |**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.| |**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
+# [In-process model](#tab/in-process)
-
-# [Isolated process](#tab/isolated-process)
-
-The following table explains the properties of the `SignalROutput` attribute.
+The following table explains the properties of the `SignalR` output attribute.
| Attribute property |Description| ||-| |**HubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.| |**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
-# [C# Script](#tab/csharp-script)
-
-The following table explains the binding configuration properties that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type**| Must be set to `signalR`.|
-|**direction**|Must be set to `out`.|
-|**name**| Variable name used in function code for connection info object. |
-|**hubName**| This value must be set to the name of the SignalR hub for which the connection information is generated.|
-|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following sample shows a C# function that receives a message event from clients and logs the message content.
+++
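The sample itself isn't reproduced in the diff; a sketch under the isolated worker model, assuming a hub named `SignalRTest` and a single bound `message` parameter:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class MessageFunctions
{
    private readonly ILogger<MessageFunctions> _logger;

    public MessageFunctions(ILogger<MessageFunctions> logger)
    {
        _logger = logger;
    }

    // Fires when a client invokes "SendMessage" on the assumed "SignalRTest" hub;
    // the "message" parameter name is bound from the invocation arguments.
    [Function("SendMessage")]
    public void SendMessage(
        [SignalRTrigger("SignalRTest", "messages", "SendMessage", "message")]
            SignalRInvocationContext invocationContext, string message)
    {
        _logger.LogInformation("Receive {message} from {connectionId}.",
            message, invocationContext.ConnectionId);
    }
}
```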
+# [In-process model](#tab/in-process)
SignalR Service trigger binding for C# has two programming models: the class-based model and the traditional model. The class-based model provides a consistent SignalR server-side programming experience. The traditional model provides more flexibility and is similar to other function bindings.
public static async Task Run([SignalRTrigger("SignalRTest", "messages", "SendMes
} ``` -
-# [Isolated process](#tab/isolated-process)
-
-The following sample shows a C# function that receives a message event from clients and logs the message content.
---
-# [C# Script](#tab/csharp-script)
-
-Here's example binding data in the *function.json* file:
-
-```json
-{
- "type": "signalRTrigger",
- "name": "invocation",
- "hubName": "SignalRTest",
- "category": "messages",
- "event": "SendMessage",
- "parameterNames": [
- "message"
- ],
- "direction": "in"
-}
-```
-
-And, here's the code:
-
-```cs
-#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
-using System;
-using Microsoft.Azure.WebJobs.Extensions.SignalRService;
-using Microsoft.Extensions.Logging;
-
-public static void Run(InvocationContext invocation, string message, ILogger logger)
-{
- logger.LogInformation($"Receive {message} from {invocationContext.ConnectionId}.");
-}
-```
- ::: zone-end
def main(invocation) -> None:
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `SignalRTrigger` attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `SignalRTrigger` attribute to define the function. C# script instead uses a [function.json configuration file](#configuration).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
The following table explains the properties of the `SignalRTrigger` attribute.
The following table explains the properties of the `SignalRTrigger` attribute.
|**ParameterNames**| (Optional) A list of names that binds to the parameters. | |**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
The following table explains the properties of the `SignalRTrigger` attribute.
The following table explains the properties of the `SignalRTrigger` attribute.
|**ParameterNames**| (Optional) A list of names that binds to the parameters. | |**ConnectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property |Description|
-||--|
-|**type**| Must be set to `SignalRTrigger`.|
-|**direction**| Must be set to `in`.|
-|**name**| Variable name used in function code for trigger invocation context object. |
-|**hubName**| This value must be set to the name of the SignalR hub for the function to be triggered.|
-|**category**| This value must be set as the category of messages for the function to be triggered. The category can be one of the following values: <ul><li>**connections**: Including *connected* and *disconnected* events</li><li>**messages**: Including all other events except those in *connections* category</li></ul> |
-|**event**| This value must be set as the event of messages for the function to be triggered. For *messages* category, event is the *target* in [invocation message](https://github.com/dotnet/aspnetcore/blob/master/src/SignalR/docs/specs/HubProtocol.md#invocation-message-encoding) that clients send. For *connections* category, only *connected* and *disconnected* is used. |
-|**parameterNames**| (Optional) A list of names that binds to the parameters. |
-|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |
- ::: zone-end
azure-functions Functions Bindings Signalr Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service.md
This set of articles explains how to authenticate and send real-time messages to
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
-
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-
-Add the extension to your project by installing this [NuGet package].
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md). Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.SignalRService/).
-# [C# script](#tab/csharp-script)
+# [In-process model](#tab/in-process)
-Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-You can install this version of the extension in your function app by registering the [extension bundle], version 2.x, or a later version.
+Add the extension to your project by installing this [NuGet package].
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
This article supports both programming models.
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
+# [Isolated process](#tab/isolated-process)
+
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
++ # [In-process](#tab/in-process) The following example is a [C# function](functions-dotnet-class-library.md) that uses a queue trigger and an input blob binding. The queue message contains the name of the blob, and the function logs the size of the blob.
public static void Run(
} ```
-# [Isolated process](#tab/isolated-process)
-
-The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
-- ::: zone-end
def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#blob-input).
+# [Isolated process](#tab/isolated-process)
+
+An isolated worker process defines an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
+
+|Parameter | Description|
+||-|
+|**BlobPath** | The path to the blob.|
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
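A minimal sketch of the attribute in use; the queue name, container name, and connection setting below are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class BlobInputExample
{
    // A queue message carries a blob name; the input binding reads that blob's
    // text from the assumed "sample-container" container.
    [Function("ReadBlobContent")]
    public static void Run(
        [QueueTrigger("blob-name-queue", Connection = "AzureWebJobsStorage")] string blobName,
        [BlobInput("sample-container/{queueTrigger}", Connection = "AzureWebJobsStorage")] string blobContent,
        FunctionContext context)
    {
        context.GetLogger("ReadBlobContent")
            .LogInformation("Blob {name} is {length} characters long.", blobName, blobContent.Length);
    }
}
```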
+ # [In-process](#tab/in-process) In [C# class libraries](functions-dotnet-class-library.md), use the [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute), which takes the following parameters:
public static void Run(
[!INCLUDE [functions-bindings-storage-attribute](../../includes/functions-bindings-storage-attribute.md)]
-# [Isolated process](#tab/isolated-process)
-
-isolated worker process defines an input binding by using a `BlobInputAttribute` attribute, which takes the following parameters:
-
-|Parameter | Description|
-||-|
-|**BlobPath** | The path to the blob.|
-|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
See the [Example section](#example) for complete examples.
The binding types supported by Blob input depend on the extension package version and the C# modality used in your function app.
-# [In-process](#tab/in-process)
-
-See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
- # [Isolated process](#tab/isolated-process) [!INCLUDE [functions-bindings-storage-blob-input-dotnet-isolated-types](../../includes/functions-bindings-storage-blob-input-dotnet-isolated-types.md)]
+# [In-process](#tab/in-process)
+
+See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
+ Binding to `string`, or `Byte[]` is only recommended when the blob size is small. This is recommended because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage).
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
This article supports both programming models.
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
++
+# [In-process model](#tab/in-process)
The following example is a [C# function](functions-dotnet-class-library.md) that runs in-process and uses a blob trigger and two output blob bindings. The function is triggered by the creation of an image blob in the *sample-images* container. It creates small and medium size copies of the image blob.
public class ResizeImages
} ```
-# [Isolated process](#tab/isolated-process)
-
-The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
-- ::: zone-end
def main(queuemsg: func.QueueMessage, inputblob: bytes, outputblob: func.Out[byt
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#blob-output).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The `BlobOutputAttribute` constructor takes the following parameters:
+
+|Parameter | Description|
+||-|
+|**BlobPath** | The path to the blob.|
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
+
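A sketch of the attribute applied to a function's return value; the container names and connection setting are placeholder assumptions:

```csharp
using Microsoft.Azure.Functions.Worker;

public static class BlobOutputExample
{
    // Triggered by new blobs in the assumed "input-container"; the returned
    // string is written to a blob of the same name in "output-container".
    [Function("CopyBlob")]
    [BlobOutput("output-container/{name}", Connection = "AzureWebJobsStorage")]
    public static string Run(
        [BlobTrigger("input-container/{name}", Connection = "AzureWebJobsStorage")] string content,
        string name)
    {
        return content;
    }
}
```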
+# [In-process model](#tab/in-process)
The [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute) attribute's constructor takes the following parameters:
public static void Run(
[!INCLUDE [functions-bindings-storage-attribute](../../includes/functions-bindings-storage-attribute.md)]
-# [Isolated process](#tab/isolated-process)
-
-The `BlobOutputAttribute` constructor takes the following parameters:
-
-|Parameter | Description|
-||-|
-|**BlobPath** | The path to the blob.|
-|**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
See the [Example section](#example) for complete examples.
The binding types supported by blob output depend on the extension package version and the C# modality used in your function app.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
+See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
This article supports both programming models.
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
++
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that writes a log when a blob is added or updated in the `samples-workitems` container.
The string `{name}` in the blob trigger path `samples-workitems/{name}` creates
For more information about the `BlobTrigger` attribute, see [Attributes](#attributes).
-# [Isolated process](#tab/isolated-process)
-
-The following example is a [C# function](dotnet-isolated-process-guide.md) that runs in an isolated worker process and uses a blob trigger with both blob input and blob output blob bindings. The function is triggered by the creation of a blob in the *test-samples-trigger* container. It reads a text file from the *test-samples-input* container and creates a new text file in an output container based on the name of the triggered file.
-- ::: zone-end
The attribute's constructor takes the following parameters:
|**Access** | Indicates whether you will be reading or writing.| |**Source** | Sets the source of the triggering event. Use `BlobTriggerSource.EventGrid` for an [Event Grid-based blob trigger](functions-event-grid-blob-trigger.md), which provides much lower latency. The default is `BlobTriggerSource.LogsAndContainerScan`, which uses the standard polling mechanism to detect changes in the container. |
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+Here's a `BlobTrigger` attribute in a method signature:
++
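A sketch of such a signature; the container name follows the `samples-workitems` example mentioned in this article, and the connection setting is a placeholder:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class BlobTriggerExample
{
    // Runs whenever a blob is added or updated in the assumed "samples-workitems" container.
    [Function("BlobTriggerFunction")]
    public static void Run(
        [BlobTrigger("samples-workitems/{name}", Connection = "AzureWebJobsStorage")] string blobContent,
        string name,
        FunctionContext context)
    {
        context.GetLogger("BlobTriggerFunction")
            .LogInformation("Blob {name} contains {length} characters.", name, blobContent.Length);
    }
}
```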
+# [In-process model](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes a path string that indicates the container to watch and optionally a [blob name pattern](#blob-name-patterns). Here's an example:
public static void Run(
[!INCLUDE [functions-bindings-storage-attribute](../../includes/functions-bindings-storage-attribute.md)]
-# [Isolated process](#tab/isolated-process)
-
-Here's an `BlobTrigger` attribute in a method signature:
-- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
Metadata is available through the `$TriggerMetadata` parameter.
The binding types supported by Blob trigger depend on the extension package version and the C# modality used in your function app.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
+See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
Azure Functions integrates with [Azure Storage](../storage/index.yml) via [trigg
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+# [In-process model](#tab/in-process)
-# [Isolated process](#tab/isolated-process)
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
Functions 1.x apps automatically have a reference to the extension.
The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see binding type details for the mode and version.
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
This article supports both programming models.
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
++
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that creates a queue message for each HTTP request received.
public static class QueueFunctions
} ```
-# [Isolated process](#tab/isolated-process)
-- ::: zone-end
def main(req: func.HttpRequest, msg: func.Out[typing.List[str]]) -> func.HttpRes
The attribute that defines an output binding in C# libraries depends on the mode in which the C# class library runs.
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+When running in an isolated worker process, you use the [QueueOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Storage.Queues/src/QueueOutputAttribute.cs), which takes the name of the queue, as shown in the following example:
++
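The snippet isn't reproduced in the diff; a sketch with a placeholder queue name and connection setting:

```csharp
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class QueueOutputExample
{
    // The return value is written as a single message to the assumed "outqueue" queue.
    [Function("HttpToQueue")]
    [QueueOutput("outqueue", Connection = "AzureWebJobsStorage")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        return $"Request received at {DateTime.UtcNow:O}";
    }
}
```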
+Only returned variables are supported when running in an isolated worker process. Output parameters can't be used.
+
+# [In-process model](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), use the [QueueAttribute](/dotnet/api/microsoft.azure.webjobs.queueattribute). C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#queue-output).
public static string Run([HttpTrigger] dynamic input, ILogger log)
You can use the `StorageAccount` attribute to specify the storage account at class, method, or parameter level. For more information, see Trigger - attributes.
-# [Isolated process](#tab/isolated-process)
-
-When running in an isolated worker process, you use the [QueueOutputAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Storage.Queues/src/QueueOutputAttribute.cs), which takes the name of the queue, as shown in the following example:
--
-Only returned variables are supported when running in an isolated worker process. Output parameters can't be used.
- ::: zone-end
See the [Example section](#example) for complete examples.
::: zone pivot="programming-language-csharp" The usage of the Queue output binding depends on the extension package version and the C# modality used in your function app, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see usage details for the mode and version.
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
Use the queue trigger to start a function when a new item is received on a queue
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that polls the `input-queue` queue and writes several messages to an output queue each time a queue item is processed.
++
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that polls the `myqueue-items` queue and writes a log each time a queue item is processed.
public static class QueueFunctions
} ```
-# [Isolated process](#tab/isolated-process)
-
-The following example shows a [C# function](dotnet-isolated-process-guide.md) that polls the `input-queue` queue and writes several messages to an output queue each time a queue item is processed.
-- ::: zone-end
def main(msg: func.QueueMessage):
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#queue-trigger).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+In [C# class libraries](dotnet-isolated-process-guide.md), the attribute's constructor takes the name of the queue to monitor, as shown in the following example:
++
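A sketch matching that description; the queue name and the `StorageConnection` setting name are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class QueueTriggerExample
{
    // Runs for each message on the assumed "myqueue-items" queue, using the
    // storage connection named by the assumed "StorageConnection" app setting.
    [Function("QueueTriggerFunction")]
    public static void Run(
        [QueueTrigger("myqueue-items", Connection = "StorageConnection")] string queueMessage,
        FunctionContext context)
    {
        context.GetLogger("QueueTriggerFunction")
            .LogInformation("Queue message: {message}", queueMessage);
    }
}
```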
+This example also demonstrates setting the [connection string setting](#connections) in the attribute itself.
+
+# [In-process model](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), the attribute's constructor takes the name of the queue to monitor, as shown in the following example:
public static void Run(
} ```
-# [Isolated process](#tab/isolated-process)
-
-In [C# class libraries](dotnet-isolated-process-guide.md), the attribute's constructor takes the name of the queue to monitor, as shown in the following example:
--
-This example also demonstrates setting the [connection string setting](#connections) in the attribute itself.
- ::: zone-end
See the [Example section](#example) for complete examples.
The usage of the Queue trigger depends on the extension package version, and the C# modality used in your function app, which can be one of the following:
+# [Isolated worker model](#tab/isolated-process)
+
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
 # [In-process class library](#tab/in-process) An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
-# [Isolated process](#tab/isolated-process)
-
-An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
- Choose a version to see usage details for the mode and version.
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md
Azure Functions can run as new Azure Queue storage messages are created and can
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+# [In-process model](#tab/in-process)
-# [Isolated process](#tab/isolated-process)
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
Functions 1.x apps automatically have a reference to the extension.
The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see binding type details for the mode and version.
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
For information on setup and configuration details, see the [overview](./functio
The usage of the binding depends on the extension package version and the C# modality used in your function app, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An [isolated worker process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An [in-process class library](functions-dotnet-class-library.md) is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see examples for the mode and version.
With this simple binding, you can't programmatically handle a case in which no r
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#table-input).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttribute` supports the following properties:
+
+| Attribute property |Description|
+|||
+| **TableName** | The name of the table.|
+| **PartitionKey** |Optional. The partition key of the table entity to read. |
+|**RowKey** | Optional. The row key of the table entity to read. |
+| **Take** | Optional. The maximum number of entities to read into an [`IEnumerable<T>`]. Can't be used with `RowKey`.|
+|**Filter** | Optional. An OData filter expression for entities to read into an [`IEnumerable<T>`]. Can't be used with `RowKey`. |
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
+
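To make these properties concrete, here's a minimal sketch of a table input binding in the isolated worker model. It assumes the `Microsoft.Azure.Functions.Worker.Extensions.Tables` and `Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues` packages; the table, queue, and `MyTableEntity` names are illustrative only and not part of the article.

```csharp
// Hypothetical sketch: read one entity whose row key comes from the queue message.
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class MyTableEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Text { get; set; }
}

public class TableInputExample
{
    [Function("TableInputExample")]
    public void Run(
        [QueueTrigger("my-queue")] string rowKey,
        [TableInput("MyTable", "MyPartition", "{queueTrigger}")] MyTableEntity entity,
        FunctionContext context)
    {
        // The binding resolves {queueTrigger} to the incoming queue message (the row key).
        context.GetLogger("TableInputExample").LogInformation($"Read entity text: {entity?.Text}");
    }
}
```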
+# [In-process model](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), the `TableAttribute` supports the following properties:
public static void Run(
[!INCLUDE [functions-bindings-storage-attribute](../../includes/functions-bindings-storage-attribute.md)]
-# [Isolated process](#tab/isolated-process)
-
-In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttribute` supports the following properties:
-
-| Attribute property |Description|
-|||
-| **TableName** | The name of the table.|
-| **PartitionKey** |Optional. The partition key of the table entity to read. |
-|**RowKey** | Optional. The row key of the table entity to read. |
-| **Take** | Optional. The maximum number of entities to read into an [`IEnumerable<T>`]. Can't be used with `RowKey`.|
-|**Filter** | Optional. An OData filter expression for entities to read into an [`IEnumerable<T>`]. Can't be used with `RowKey`. |
-|**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
- ::: zone-end
The following table explains the binding configuration properties that you set i
The usage of the binding depends on the extension package version, and the C# modality used in your function app, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see usage details for the mode and version.
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
For information on setup and configuration details, see the [overview](./functio
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
-# [In-process](#tab/in-process)
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that uses an HTTP trigger to write a single table row.
-
-```csharp
-public class TableStorage
-{
- public class MyPoco
- {
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public string Text { get; set; }
- }
-
- [FunctionName("TableOutput")]
- [return: Table("MyTable")]
- public static MyPoco TableOutput([HttpTrigger] dynamic input, ILogger log)
- {
- log.LogInformation($"C# http trigger function processed: {input.Text}");
- return new MyPoco { PartitionKey = "Http", RowKey = Guid.NewGuid().ToString(), Text = input.Text };
- }
-}
-```
--
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
The following `MyTableData` class represents a row of data in the table:
public static MyTableData Run(
}
```
+# [In-process model](#tab/in-process)
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that uses an HTTP trigger to write a single table row.
+
+```csharp
+public class TableStorage
+{
+ public class MyPoco
+ {
+ public string PartitionKey { get; set; }
+ public string RowKey { get; set; }
+ public string Text { get; set; }
+ }
+
+ [FunctionName("TableOutput")]
+ [return: Table("MyTable")]
+ public static MyPoco TableOutput([HttpTrigger] dynamic input, ILogger log)
+ {
+ log.LogInformation($"C# http trigger function processed: {input.Text}");
+ return new MyPoco { PartitionKey = "Http", RowKey = Guid.NewGuid().ToString(), Text = input.Text };
+ }
+}
+```
++ ::: zone-end
def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#table-output).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+In [C# class libraries](dotnet-isolated-process-guide.md), the `TableOutputAttribute` supports the following properties:
+
+| Attribute property |Description|
+|||
+|**TableName** | The name of the table to which to write.|
+|**PartitionKey** | The partition key of the table entity to write. |
+|**RowKey** | The row key of the table entity to write. |
+|**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
+
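As a hedged illustration of these properties, here's a minimal sketch of a table output binding in the isolated worker model, assuming the `Microsoft.Azure.Functions.Worker.Extensions.Tables` package; the table, queue, and POCO names are placeholders.

```csharp
// Hypothetical sketch: write one row per queue message by returning a POCO.
using System;
using Microsoft.Azure.Functions.Worker;

public class MyTableRow
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Text { get; set; }
}

public class TableOutputExample
{
    [Function("TableOutputExample")]
    [TableOutput("MyTable", Connection = "AzureWebJobsStorage")]
    public MyTableRow Run([QueueTrigger("my-queue")] string message)
    {
        // The returned object is written to the bound table as a single entity.
        return new MyTableRow
        {
            PartitionKey = "Queue",
            RowKey = Guid.NewGuid().ToString(),
            Text = message
        };
    }
}
```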
+# [In-process model](#tab/in-process)
In [C# class libraries](functions-dotnet-class-library.md), the `TableAttribute` supports the following properties:
public static MyPoco TableOutput(
[!INCLUDE [functions-bindings-storage-attribute](../../includes/functions-bindings-storage-attribute.md)]
-# [Isolated process](#tab/isolated-process)
-
-In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttribute` supports the following properties:
-
-| Attribute property |Description|
-|||
-|**TableName** | The name of the table to which to write.|
-|**PartitionKey** | The partition key of the table entity to write. |
-|**RowKey** | The row key of the table entity to write. |
-|**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
- ::: zone-end
The following table explains the binding configuration properties that you set i
The usage of the binding depends on the extension package version, and the C# modality used in your function app, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see usage details for the mode and version.
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Azure Functions integrates with [Azure Tables](../cosmos-db/table/introduction.m
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+# [In-process model](#tab/in-process)
-# [Isolated process](#tab/isolated-process)
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+In a variation of this model, Functions can be run using [C# scripting], which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
Functions 1.x apps automatically have a reference to the extension.
The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following:
-# [In-process](#tab/in-process)
-
-An in-process class library is a compiled C# function runs in the same process as the Functions runtime.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
+# [In-process model](#tab/in-process)
+
+An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
+
Choose a version to see binding type details for the mode and version.
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
This example shows a C# function that executes each time the minutes have a valu
[!INCLUDE [functions-bindings-csharp-intro](../../includes/functions-bindings-csharp-intro.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
++
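The isolated worker example isn't reproduced in this changelog; as a rough sketch of what such a function can look like (assuming the `Microsoft.Azure.Functions.Worker.Extensions.Timer` package, with an illustrative function name):

```csharp
// Hypothetical sketch: run every five minutes on an NCRONTAB schedule.
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class TimerExample
{
    [Function("TimerExample")]
    public void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, FunctionContext context)
    {
        context.GetLogger("TimerExample").LogInformation($"Timer fired at {DateTime.UtcNow:O}");
    }
}
```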
+# [In-process model](#tab/in-process)
```csharp
[FunctionName("TimerTriggerCSharp")]
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger
}
```
-# [Isolated process](#tab/isolated-process)
-- ::: zone-end
Write-Host "PowerShell timer trigger function ran! TIME: $currentU
::: zone pivot="programming-language-csharp"

## Attributes
-[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [isolated worker process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#timer-trigger).
+[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [isolated worker process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function. C# script instead uses a [function.json configuration file](#configuration).
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
|Attribute property | Description|
|---|---|
Write-Host "PowerShell timer trigger function ran! TIME: $currentU
|**RunOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **RunOnStartup** should rarely if ever be set to `true`, especially in production. |
|**UseMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
-# [Isolated process](#tab/isolated-process)
+# [In-process model](#tab/in-process)
|Attribute property | Description|
|---|---|
azure-functions Functions Bindings Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md
This article explains how to send text messages by using [Twilio](https://www.tw
The extension NuGet package you install depends on the C# mode you're using in your function app:
-# [In-process](#tab/in-process)
-
-Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-# [C# script](#tab/csharp-script)
+# [In-process model](#tab/in-process)
-Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
There is currently no support for Twilio for an isolated worker process app.
Functions 1.x doesn't support running in an isolated worker process.
-# [Functions v2.x+](#tab/functionsv2/csharp-script)
-
-This version of the extension should already be available to your function app with [extension bundle], version 2.x.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-You can add the extension to your project by explicitly installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Twilio), version 1.x. To learn more, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions).
- ::: zone-end
Unless otherwise noted, these examples are specific to version 2.x and later ver
::: zone pivot="programming-language-csharp"

[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The Twilio binding isn't currently supported for a function app running in an isolated worker process.
+
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that sends a text message when triggered by a queue message.
namespace TwilioQueueOutput
This example uses the `TwilioSms` attribute with the method return value. An alternative is to use the attribute with an `out CreateMessageOptions` parameter or an `ICollector<CreateMessageOptions>` or `IAsyncCollector<CreateMessageOptions>` parameter.
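Because the full example isn't reproduced in this changelog, the following is a minimal sketch of the return-value pattern, assuming the `Microsoft.Azure.WebJobs.Extensions.Twilio` package; the app setting names, queue name, and phone numbers are placeholders.

```csharp
// Hypothetical sketch: send an SMS by returning CreateMessageOptions from the function.
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

public static class SendSms
{
    [FunctionName("SendSms")]
    [return: TwilioSms(AccountSidSetting = "TwilioAccountSid", AuthTokenSetting = "TwilioAuthToken", From = "+1425XXXXXXX")]
    public static CreateMessageOptions Run(
        [QueueTrigger("order-queue")] string myQueueItem, ILogger log)
    {
        log.LogInformation($"Queue message received: {myQueueItem}");

        // CreateMessageOptions must be constructed with the "To" phone number.
        return new CreateMessageOptions(new PhoneNumber("+1704XXXXXXX"))
        {
            Body = "Thank you for your order."
        };
    }
}
```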
-# [Isolated process](#tab/isolated-process)
-
-The Twilio binding isn't currently supported for a function app running in an isolated worker process.
-
-# [C# Script](#tab/csharp-script)
-
-The following example shows a Twilio output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses an `out` parameter to send a text message.
-
-Here's binding data in the *function.json* file:
-
-Example function.json:
-
-```json
-{
- "type": "twilioSms",
- "name": "message",
- "accountSidSetting": "TwilioAccountSid",
- "authTokenSetting": "TwilioAuthToken",
- "from": "+1425XXXXXXX",
- "direction": "out",
- "body": "Azure Functions Testing"
-}
-```
-
-Here's C# script code:
-
-```cs
-#r "Newtonsoft.Json"
-#r "Twilio"
-#r "Microsoft.Azure.WebJobs.Extensions.Twilio"
-
-using System;
-using Microsoft.Extensions.Logging;
-using Newtonsoft.Json;
-using Microsoft.Azure.WebJobs.Extensions.Twilio;
-using Twilio.Rest.Api.V2010.Account;
-using Twilio.Types;
-
-public static void Run(string myQueueItem, out CreateMessageOptions message, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
-
- // In this example the queue item is a JSON string representing an order that contains the name of a
- // customer and a mobile number to send text updates to.
- dynamic order = JsonConvert.DeserializeObject(myQueueItem);
- string msg = "Hello " + order.name + ", thank you for your order.";
-
- // You must initialize the CreateMessageOptions variable with the "To" phone number.
- message = new CreateMessageOptions(new PhoneNumber("+1704XXXXXXX"));
-
- // A dynamic message can be set instead of the body in the output binding. In this example, we use
- // the order information to personalize a text message.
- message.Body = msg;
-}
-```
-
-You can't use out parameters in asynchronous code. Here's an asynchronous C# script code example:
-
-```cs
-#r "Newtonsoft.Json"
-#r "Twilio"
-#r "Microsoft.Azure.WebJobs.Extensions.Twilio"
-
-using System;
-using Microsoft.Extensions.Logging;
-using Newtonsoft.Json;
-using Microsoft.Azure.WebJobs.Extensions.Twilio;
-using Twilio.Rest.Api.V2010.Account;
-using Twilio.Types;
-
-public static async Task Run(string myQueueItem, IAsyncCollector<CreateMessageOptions> message, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
-
- // In this example the queue item is a JSON string representing an order that contains the name of a
- // customer and a mobile number to send text updates to.
- dynamic order = JsonConvert.DeserializeObject(myQueueItem);
- string msg = "Hello " + order.name + ", thank you for your order.";
-
- // You must initialize the CreateMessageOptions variable with the "To" phone number.
- CreateMessageOptions smsText = new CreateMessageOptions(new PhoneNumber("+1704XXXXXXX"));
-
- // A dynamic message can be set instead of the body in the output binding. In this example, we use
- // the order information to personalize a text message.
- smsText.Body = msg;
-
- await message.AddAsync(smsText);
-}
-```
- ::: zone-end
public class TwilioOutput {
::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a [function.json configuration file](#configuration).
+
+# [Isolated worker model](#tab/isolated-process)
+
+The Twilio binding isn't currently supported for a function app running in an isolated worker process.
-# [In-process](#tab/in-process)
+# [In-process model](#tab/in-process)
In [in-process](functions-dotnet-class-library.md) function apps, use the [TwilioSmsAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs), which supports the following parameters.
In [in-process](functions-dotnet-class-library.md) function apps, use the [Twili
| **Body**| This value can be used to hard code the SMS text message if you don't need to set it dynamically in the code for your function. |
-# [Isolated process](#tab/isolated-process)
-
-The Twilio binding isn't currently supported for a function app running in an isolated worker process.
-
-# [C# Script](#tab/csharp-script)
- ::: zone-end
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
The following considerations apply when using a warmup trigger:
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
+# [Isolated worker model](#tab/isolated-process)
+
+The following example shows a [C# function](dotnet-isolated-process-guide.md) that runs on each new instance when it's added to your app.
++
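The isolated worker example is included from another file and not shown in this changelog; as a minimal sketch (assuming the `Microsoft.Azure.Functions.Worker.Extensions.Warmup` package):

```csharp
// Hypothetical sketch: warm up shared dependencies when a new instance starts.
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class WarmupExample
{
    [Function("Warmup")]
    public void Run([WarmupTrigger] object warmupContext, FunctionContext context)
    {
        // Initialize caches, connections, or other shared dependencies here.
        context.GetLogger("Warmup").LogInformation("Function App instance is warm.");
    }
}
```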
+# [In-process model](#tab/in-process)
The following example shows a [C# function](functions-dotnet-class-library.md) that runs on each new instance when it's added to your app.
namespace WarmupSample
        {
            //Initialize shared dependencies here
- log.LogInformation("Function App instance is warm 🌞🌞🌞");
+ log.LogInformation("Function App instance is warm.");
        }
    }
}
```
-# [Isolated process](#tab/isolated-process)
-
-The following example shows a [C# function](dotnet-isolated-process-guide.md) that runs on each new instance when it's added to your app.
--
-# [C# Script](#tab/csharp-script)
-
-The following example shows a warmup trigger in a *function.json* file and a [C# script function](functions-reference-csharp.md) that runs on each new instance when it's added to your app.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "type": "warmupTrigger",
- "direction": "in",
- "name": "warmupContext"
- }
- ]
-}
-```
-
-For more information, see [Attributes](#attributes).
-
-```cs
-public static void Run(WarmupContext warmupContext, ILogger log)
-{
- log.LogInformation("Function App instance is warm 🌞🌞🌞");
-}
-```
- ::: zone-end
The following example shows a warmup trigger that runs when each new instance is
```java
@FunctionName("Warmup")
public void warmup( @WarmupTrigger Object warmupContext, ExecutionContext context) {
- context.getLogger().info("Function App instance is warm 🌞🌞🌞");
+ context.getLogger().info("Function App instance is warm.");
}
```
Here's the JavaScript code:
```javascript
module.exports = async function (context, warmupContext) {
- context.log('Function App instance is warm 🌞🌞🌞');
+ context.log('Function App instance is warm.');
};
```
import azure.functions as func
def main(warmupContext: func.Context) -> None:
- logging.info('Function App instance is warm 🌞🌞🌞')
+ logging.info('Function App instance is warm.')
```

::: zone-end

::: zone pivot="programming-language-csharp"

## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTrigger` attribute to define the function. C# script instead uses a *function.json* configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `WarmupTrigger` attribute to define the function. C# script instead uses a [function.json configuration file](#configuration).
-# [In-process](#tab/in-process)
-
-Use the `WarmupTrigger` attribute to define the function. This attribute has no parameters.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
Use the `WarmupTrigger` attribute to define the function. This attribute has no parameters.
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
+# [In-process model](#tab/in-process)
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property |Description |
-||-|
-| **type** | Required - must be set to `warmupTrigger`. |
-| **direction** | Required - must be set to `in`. |
-| **name** | Required - the name of the binding parameter, which is usually `warmupContext`. |
+Use the `WarmupTrigger` attribute to define the function. This attribute has no parameters.
See the [Example section](#example) for complete examples.
::: zone pivot="programming-language-csharp"

The following considerations apply to using a warmup function in C#:
-# [In-process](#tab/in-process)
-
-- Your function must be named `warmup` (case-insensitive) using the `FunctionName` attribute.
-- A return value attribute isn't required.
-- You must be using version `3.0.5` of the `Microsoft.Azure.WebJobs.Extensions` package, or a later version.
-- You can pass a `WarmupContext` instance to the function.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
- Your function must be named `warmup` (case-insensitive) using the `Function` attribute.
- A return value attribute isn't required.
- Use the `Microsoft.Azure.Functions.Worker.Extensions.Warmup` package
- You can pass an object instance to the function.
-# [C# script](#tab/csharp-script)
+# [In-process model](#tab/in-process)
-Not supported for version 1.x of the Functions runtime.
+- Your function must be named `warmup` (case-insensitive) using the `FunctionName` attribute.
+- A return value attribute isn't required.
+- You must be using version `3.0.5` of the `Microsoft.Azure.WebJobs.Extensions` package, or a later version.
+- You can pass a `WarmupContext` instance to the function.
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
Title: Develop Azure Functions by using Visual Studio Code
description: Learn how to develop and test Azure Functions by using the Azure Functions extension for Visual Studio Code. ms.devlang: csharp, java, javascript, powershell, python-+ Last updated 09/01/2023 zone_pivot_groups: programming-languages-set-functions #Customer intent: As an Azure Functions developer, I want to understand how Visual Studio Code supports Azure Functions so that I can more efficiently create, publish, and maintain my Functions projects.
You can connect your function to other Azure services by adding input and output
::: zone pivot="programming-language-csharp" For example, the way you define an output binding that writes data to a storage queue depends on your process model:
-### [In-process](#tab/in-process)
-
-Update the function method to add a binding parameter defined by using the `Queue` attribute. You can use an `ICollector<T>` type to represent a collection of messages.
-
### [Isolated process](#tab/isolated-process)

Update the function method to add a binding parameter defined by using the `QueueOutput` attribute. You can use a `MultiResponse` object to return multiple messages or multiple output streams.
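As a hedged illustration of the `MultiResponse` pattern (assuming the `Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues` and `Microsoft.Azure.Functions.Worker.Extensions.Http` packages; the queue name and type names are placeholders):

```csharp
// Hypothetical sketch: return an HTTP response and write a queue message from one function.
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class MultiResponse
{
    [QueueOutput("outqueue")]
    public string[] Messages { get; set; }

    public HttpResponseData HttpResponse { get; set; }
}

public class HttpExample
{
    [Function("HttpExample")]
    public MultiResponse Run([HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);

        return new MultiResponse
        {
            Messages = new[] { "queued message" }, // written to the storage queue
            HttpResponse = response                // returned to the HTTP caller
        };
    }
}
```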
+### [In-process](#tab/in-process)
+
+Update the function method to add a binding parameter defined by using the `Queue` attribute. You can use an `ICollector<T>` type to represent a collection of messages.
+ ::: zone-end
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
As with triggers, input and output bindings are added to your function as bindin
1. Use the following command in the Package Manager Console to install a specific package:
- # [In-process](#tab/in-process)
+ # [Isolated worker model](#tab/isolated-process)
```powershell
- Install-Package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION>
+ Install-Package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION>
```
- # [Isolated process](#tab/isolated-process)
+ # [In-process model](#tab/in-process)
```powershell
- Install-Package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION>
+ Install-Package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION>
```
The way you attach the debugger depends on your execution mode. When debugging a
When you're done, you should [disable remote debugging](#disable-remote-debugging).
-# [In-process](#tab/in-process)
-
-To attach a remote debugger to a function app running in-process with the Functions host:
-
-+ From the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Attach debugger**.
-
- :::image type="content" source="media/functions-develop-vs/attach-to-process-in-process.png" alt-text="Screenshot of attaching the debugger from Visual Studio.":::
-
-Visual Studio connects to your function app and enables remote debugging, if it's not already enabled. It also locates and attaches the debugger to the host process for the app. At this point, you can debug your function app as normal.
-
-# [Isolated process](#tab/isolated-process)
+# [Isolated worker model](#tab/isolated-process)
To attach a remote debugger to a function app running in a process separate from the Functions host:
To attach a remote debugger to a function app running in a process separate from
1. Check **Show process from all users** and then choose **dotnet.exe** and select **Attach**. When the operation completes, you're attached to your C# class library code running in an isolated worker process. At this point, you can debug your function app as normal.
+# [In-process model](#tab/in-process)
+
+To attach a remote debugger to a function app running in-process with the Functions host:
+
++ From the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Attach debugger**.
+
+ :::image type="content" source="media/functions-develop-vs/attach-to-process-in-process.png" alt-text="Screenshot of attaching the debugger from Visual Studio.":::
+
+Visual Studio connects to your function app and enables remote debugging, if it's not already enabled. It also locates and attaches the debugger to the host process for the app. At this point, you can debug your function app as normal.
+ ### Disable remote debugging
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
The following assemblies are automatically added by the Azure Functions hosting
The following assemblies may be referenced by simple-name, by runtime version:
-# [v2.x+](#tab/functionsv2)
+### [v2.x+](#tab/functionsv2)
* `Newtonsoft.Json`
* `Microsoft.WindowsAzure.Storage`<sup>*</sup>

<sup>*</sup>Removed in version 4.x of the runtime.
-# [v1.x](#tab/functionsv1)
+### [v1.x](#tab/functionsv1)
* `Newtonsoft.Json`
* `Microsoft.WindowsAzure.Storage`
The directory that contains the function script file is automatically watched fo
The way that both binding extension packages and other NuGet packages are added to your function app depends on the [targeted version of the Functions runtime](functions-versions.md).
-# [v2.x+](#tab/functionsv2)
+### [v2.x+](#tab/functionsv2)
By default, the [supported set of Functions extension NuGet packages](functions-triggers-bindings.md#supported-bindings) are made available to your C# script function app by using extension bundles. To learn more, see [Extension bundles](functions-bindings-register.md#extension-bundles).
By default, Core Tools reads the function.json files and adds the required packa
> [!NOTE] > For C# script (.csx), you must set `TargetFramework` to a value of `netstandard2.0`. Other target frameworks, such as `net6.0`, aren't supported.
-# [v1.x](#tab/functionsv1)
+### [v1.x](#tab/functionsv1)
Version 1.x of the Functions runtime uses a *project.json* file to define dependencies. Here's an example *project.json* file:
public static string GetEnvironmentVariable(string name)
} ```
+## Retry policies
+
+Functions supports two built-in retry policies. For more information, see [Retry policies](functions-bindings-error-pages.md#retry-policies).
+
+### [Fixed delay](#tab/fixed-delay)
+
+Here's the retry policy in the *function.json* file:
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ ....
+ }
+ ],
+ "retry": {
+ "strategy": "fixedDelay",
+ "maxRetryCount": 4,
+ "delayInterval": "00:00:10"
+ }
+}
+```
+
+|*function.json*&nbsp;property | Description |
+||-|
+|strategy|Use `fixedDelay`.|
+|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
+|delayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.|
+
+### [Exponential backoff](#tab/exponential-backoff)
+
+Here's the retry policy in the *function.json* file:
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ ....
+ }
+ ],
+ "retry": {
+ "strategy": "exponentialBackoff",
+ "maxRetryCount": 5,
+ "minimumInterval": "00:00:10",
+ "maximumInterval": "00:15:00"
+ }
+}
+```
+
+|*function.json*&nbsp;property | Description |
+||-|
+|strategy|Use `exponentialBackoff`.|
+|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
+|minimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.|
+|maximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.|
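+
For comparison, compiled C# class libraries can declare equivalent policies with attributes rather than *function.json*; a minimal sketch, assuming a recent `Microsoft.Azure.WebJobs` package that provides the retry attributes (the queue name is a placeholder):

```csharp
// Hypothetical sketch: the same fixed-delay policy declared on a class library function.
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class RetryExample
{
    [FunctionName("RetryExample")]
    [FixedDelayRetry(4, "00:00:10")] // 4 retries, 10 seconds apart
    public static void Run([QueueTrigger("my-queue")] string myQueueItem, ILogger log)
    {
        log.LogInformation($"Processing: {myQueueItem}");
    }
}
```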
+++

<a name="imperative-bindings"></a>
## Binding at runtime
public static void Run(string myQueueItem, string myInputBlob, out string myOutp
}
```
+### RabbitMQ trigger
+
+The following example shows a RabbitMQ trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads and logs the RabbitMQ message.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+  "bindings": [
+    {
+      "name": "myQueueItem",
+      "type": "rabbitMQTrigger",
+      "direction": "in",
+      "queueName": "queue",
+      "connectionStringSetting": "rabbitMQConnectionAppSetting"
+    }
+  ]
+}
+```
+
+Here's the C# script code:
+
+```C#
+using System;
+
+public static void Run(string myQueueItem, ILogger log)
+{
+    log.LogInformation($"C# Script RabbitMQ trigger function processed: {myQueueItem}");
+}
+```
+ ### Queue trigger The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
public static async Task Run(TimerInfo myTimer, ILogger log, IAsyncCollector<str
}
```
-### Cosmos DB trigger
+### Azure Cosmos DB v2 trigger
This section outlines support for the [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only.
Here's the C# script code:
}
```
-### Cosmos DB input
+### Azure Cosmos DB v2 input
This section outlines support for the [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only.
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, Docume
}
```
-### Cosmos DB output
+### Azure Cosmos DB v2 output
This section outlines support for the [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only.
public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> t
}
```
+### Azure Cosmos DB v1 trigger
+
+The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "type": "cosmosDBTrigger",
+ "name": "documents",
+ "direction": "in",
+ "leaseCollectionName": "leases",
+ "connectionStringSetting": "<connection-app-setting>",
+ "databaseName": "Tasks",
+ "collectionName": "Items",
+ "createLeaseCollectionIfNotExists": true
+}
+```
+
+Here's the C# script code:
+
+```cs
+ #r "Microsoft.Azure.Documents.Client"
+
+ using System;
+ using Microsoft.Azure.Documents;
+ using System.Collections.Generic;
+
+
+ public static void Run(IReadOnlyList<Document> documents, TraceWriter log)
+ {
+ log.Info("Documents modified " + documents.Count);
+ log.Info("First document Id " + documents[0].Id);
+ }
+```
+
+### Azure Cosmos DB v1 input
+
+This section contains the following examples:
+
+* [Queue trigger, look up ID from string](#queue-trigger-look-up-id-from-string-c-script)
+* [Queue trigger, get multiple docs, using SqlQuery](#queue-trigger-get-multiple-docs-using-sqlquery-c-script)
+* [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c-script)
+* [HTTP trigger, look up ID from route data](#http-trigger-look-up-id-from-route-data-c-script)
+* [HTTP trigger, get multiple docs, using SqlQuery](#http-trigger-get-multiple-docs-using-sqlquery-c-script)
+* [HTTP trigger, get multiple docs, using DocumentClient](#http-trigger-get-multiple-docs-using-documentclient-c-script)
+
+The HTTP trigger examples refer to a simple `ToDoItem` type:
+
+```cs
+namespace CosmosDBSamplesV1
+{
+ public class ToDoItem
+ {
+ public string Id { get; set; }
+ public string Description { get; set; }
+ }
+}
+```
+
+<a id="queue-trigger-look-up-id-from-string-c-script"></a>
+
+#### Queue trigger, look up ID from string
+
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "inputDocument",
+ "type": "documentDB",
+ "databaseName": "MyDatabase",
+ "collectionName": "MyCollection",
+ "id" : "{queueTrigger}",
+ "partitionKey": "{partition key value}",
+ "connection": "MyAccount_COSMOSDB",
+ "direction": "in"
+}
+```
+
+Here's the C# script code:
+
+```cs
+ using System;
+
+ // Change input document contents using Azure Cosmos DB input binding
+ public static void Run(string myQueueItem, dynamic inputDocument)
+ {
+ inputDocument.text = "This has changed.";
+ }
+```
+
+<a id="queue-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
+
+#### Queue trigger, get multiple docs, using SqlQuery
+
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
+
+The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "documents",
+ "type": "documentdb",
+ "direction": "in",
+ "databaseName": "MyDb",
+ "collectionName": "MyCollection",
+ "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
+ "connection": "CosmosDBConnection"
+}
+```
+
+Here's the C# script code:
+
+```csharp
+ public static void Run(QueuePayload myQueueItem, IEnumerable<dynamic> documents)
+ {
+ foreach (var doc in documents)
+ {
+ // operate on each document
+ }
+ }
+
+ public class QueuePayload
+ {
+ public string departmentId { get; set; }
+ }
+```
+
+<a id="http-trigger-look-up-id-from-query-string-c-script"></a>
+
+#### HTTP trigger, look up ID from query string
+
+The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID to look up. That ID is used to retrieve a `ToDoItem` document from the specified database and collection.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "documentDB",
+ "name": "toDoItem",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connection": "CosmosDBConnection",
+ "direction": "in",
+ "Id": "{Query.id}"
+ }
+ ],
+ "disabled": true
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System.Net;
+
+public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, TraceWriter log)
+{
+ log.Info("C# HTTP trigger function processed a request.");
+
+ if (toDoItem == null)
+ {
+ log.Info($"ToDo item not found");
+ }
+ else
+ {
+ log.Info($"Found ToDo item, Description={toDoItem.Description}");
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
+<a id="http-trigger-look-up-id-from-route-data-c-script"></a>
+
+#### HTTP trigger, look up ID from route data
+
+The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID to look up. That ID is used to retrieve a `ToDoItem` document from the specified database and collection.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ],
+ "route":"todoitems/{id}"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "documentDB",
+ "name": "toDoItem",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connection": "CosmosDBConnection",
+ "direction": "in",
+ "Id": "{id}"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System.Net;
+
+public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, TraceWriter log)
+{
+ log.Info("C# HTTP trigger function processed a request.");
+
+ if (toDoItem == null)
+ {
+ log.Info($"ToDo item not found");
+ }
+ else
+ {
+ log.Info($"Found ToDo item, Description={toDoItem.Description}");
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
+<a id="http-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
+
+#### HTTP trigger, get multiple docs, using SqlQuery
+
+The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The query is specified in the `SqlQuery` attribute property.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "documentDB",
+ "name": "toDoItems",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connection": "CosmosDBConnection",
+ "direction": "in",
+ "sqlQuery": "SELECT top 2 * FROM c order by c._ts desc"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System.Net;
+
+public static HttpResponseMessage Run(HttpRequestMessage req, IEnumerable<ToDoItem> toDoItems, TraceWriter log)
+{
+ log.Info("C# HTTP trigger function processed a request.");
+
+ foreach (ToDoItem toDoItem in toDoItems)
+ {
+ log.Info(toDoItem.Description);
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
+<a id="http-trigger-get-multiple-docs-using-documentclient-c-script"></a>
+
+#### HTTP trigger, get multiple docs, using DocumentClient
+
+The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `DocumentClient` instance provided by the Azure Cosmos DB binding to read a list of documents. The `DocumentClient` instance could also be used for write operations.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "documentDB",
+ "name": "client",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connection": "CosmosDBConnection",
+ "direction": "inout"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Microsoft.Azure.Documents.Client"
+
+using System.Net;
+using Microsoft.Azure.Documents.Client;
+using Microsoft.Azure.Documents.Linq;
+
+public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, DocumentClient client, TraceWriter log)
+{
+ log.Info("C# HTTP trigger function processed a request.");
+
+ Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoItems", "Items");
+ string searchterm = req.GetQueryNameValuePairs()
+ .FirstOrDefault(q => string.Compare(q.Key, "searchterm", true) == 0)
+ .Value;
+
+ if (searchterm == null)
+ {
+ return req.CreateResponse(HttpStatusCode.NotFound);
+ }
+
+ log.Info($"Searching for word: {searchterm} using Uri: {collectionUri.ToString()}");
+ IDocumentQuery<ToDoItem> query = client.CreateDocumentQuery<ToDoItem>(collectionUri)
+ .Where(p => p.Description.Contains(searchterm))
+ .AsDocumentQuery();
+
+ while (query.HasMoreResults)
+ {
+ foreach (ToDoItem result in await query.ExecuteNextAsync())
+ {
+ log.Info(result.Description);
+ }
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
+### Azure Cosmos DB v1 output
+
+This section contains the following examples:
+
+* Queue trigger, write one doc
+* Queue trigger, write docs using `IAsyncCollector`
+
+#### Queue trigger, write one doc
+
+The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format:
+
+```json
+{
+ "name": "John Henry",
+ "employeeId": "123456",
+ "address": "A town nearby"
+}
+```
+
+The function creates Azure Cosmos DB documents in the following format for each record:
+
+```json
+{
+ "id": "John Henry-123456",
+ "name": "John Henry",
+ "employeeId": "123456",
+ "address": "A town nearby"
+}
+```
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "employeeDocument",
+ "type": "documentDB",
+ "databaseName": "MyDatabase",
+ "collectionName": "MyCollection",
+ "createIfNotExists": true,
+ "connection": "MyAccount_COSMOSDB",
+ "direction": "out"
+}
+```
+
+Here's the C# script code:
+
+```cs
+ #r "Newtonsoft.Json"
+
+ using Microsoft.Azure.WebJobs.Host;
+ using Newtonsoft.Json.Linq;
+
+ public static void Run(string myQueueItem, out object employeeDocument, TraceWriter log)
+ {
+ log.Info($"C# Queue trigger function processed: {myQueueItem}");
+
+ dynamic employee = JObject.Parse(myQueueItem);
+
+ employeeDocument = new {
+ id = employee.name + "-" + employee.employeeId,
+ name = employee.name,
+ employeeId = employee.employeeId,
+ address = employee.address
+ };
+ }
+```
+
+#### Queue trigger, write docs using IAsyncCollector
+
+To create multiple documents, you can bind to `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the supported types.
+
+This example refers to a simple `ToDoItem` type:
+
+```cs
+namespace CosmosDBSamplesV1
+{
+ public class ToDoItem
+ {
+ public string Id { get; set; }
+ public string Description { get; set; }
+ }
+}
+```
+
+Here's the function.json file:
+
+```json
+{
+ "bindings": [
+ {
+ "name": "toDoItemsIn",
+ "type": "queueTrigger",
+ "direction": "in",
+ "queueName": "todoqueueforwritemulti",
+ "connection": "AzureWebJobsStorage"
+ },
+ {
+ "type": "documentDB",
+ "name": "toDoItemsOut",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connection": "CosmosDBConnection",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System;
+
+public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> toDoItemsOut, TraceWriter log)
+{
+ log.Info($"C# Queue trigger function processed {toDoItemsIn?.Length} items");
+
+ foreach (ToDoItem toDoItem in toDoItemsIn)
+ {
+ log.Info($"Description={toDoItem.Description}");
+ await toDoItemsOut.AddAsync(toDoItem);
+ }
+}
+```
+
+### Azure SQL trigger
+
+More samples for the Azure SQL trigger are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx).
++
+The example refers to a `ToDoItem` class and a corresponding database table:
+++
+[Change tracking](./functions-bindings-azure-sql-trigger.md#set-up-change-tracking-required) is enabled on the database and on the table:
+
+```sql
+ALTER DATABASE [SampleDatabase]
+SET CHANGE_TRACKING = ON
+(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
+
+ALTER TABLE [dbo].[ToDo]
+ENABLE CHANGE_TRACKING;
+```
+
+The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with two properties:
+- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class.
+- **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`.
+
+The following example shows a SQL trigger in a function.json file and a [C# script function](functions-reference-csharp.md) that is invoked when there are changes to the `ToDo` table:
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "name": "todoChanges",
+ "type": "sqlTrigger",
+ "direction": "in",
+ "tableName": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+The following is the C# script function:
+
+```csharp
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static void Run(IReadOnlyList<SqlChange<ToDoItem>> todoChanges, ILogger log)
+{
+ log.LogInformation($"C# SQL trigger function processed a request.");
+
+ foreach (SqlChange<ToDoItem> change in todoChanges)
+ {
+ ToDoItem toDoItem = change.Item;
+ log.LogInformation($"Change operation: {change.Operation}");
+ log.LogInformation($"Id: {toDoItem.Id}, Title: {toDoItem.title}, Url: {toDoItem.url}, Completed: {toDoItem.completed}");
+ }
+}
+```
+
+### Azure SQL input
+
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx).
+
+This section contains the following examples:
+
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-csharpscript)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-csharpscript)
+
+The examples refer to a `ToDoItem` class and a corresponding database table:
+++
+<a id="http-trigger-look-up-id-from-query-string-csharpscript"></a>
+#### HTTP trigger, get row by ID from query string
+
+The following example shows an Azure SQL input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
+
+> [!NOTE]
+> The HTTP query string parameter is case-sensitive.
+>
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItem",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ "commandType": "Text",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+using System.Collections.Generic;
+
+public static IActionResult Run(HttpRequest req, ILogger log, IEnumerable<ToDoItem> todoItem)
+{
+ return new OkObjectResult(todoItem);
+}
+```
++
+<a id="http-trigger-delete-one-or-multiple-rows-csharpscript"></a>
+#### HTTP trigger, delete rows
+
+The following example shows an Azure SQL input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding to execute a stored procedure with input from the HTTP request query parameter. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+
+The stored procedure `dbo.DeleteToDo` must be created on the SQL database.
++
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItems",
+ "type": "sql",
+ "direction": "in",
+ "commandText": "DeleteToDo",
+ "commandType": "StoredProcedure",
+ "parameters": "@Id = {Query.id}",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
++
+Here's the C# script code:
+
+```cs
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+using System.Collections.Generic;
+
+public static IActionResult Run(HttpRequest req, ILogger log, IEnumerable<ToDoItem> todoItems)
+{
+ return new OkObjectResult(todoItems);
+}
+```
+
+### Azure SQL output
+
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csx).
+
+This section contains the following examples:
+
+* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-csharpscript)
+* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-csharpscript)
+
+The examples refer to a `ToDoItem` class and a corresponding database table:
++++
+<a id="http-trigger-write-records-to-table-csharpscript"></a>
+#### HTTP trigger, write records to a table
+
+The following example shows a SQL output binding in a function.json file and a [C# script function](functions-reference-csharp.md) that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItem",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The following is sample C# script code:
+
+```cs
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string requestBody = new StreamReader(req.Body).ReadToEnd();
+ todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
+
+ return new OkObjectResult(todoItem);
+}
+```
+
+<a id="http-trigger-write-to-two-tables-csharpscript"></a>
+#### HTTP trigger, write to two tables
+
+The following example shows a SQL output binding in a function.json file and a [C# script function](functions-reference-csharp.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+
+The second table, `dbo.RequestLog`, corresponds to the following definition:
+
+```sql
+CREATE TABLE dbo.RequestLog (
+ Id int identity(1,1) primary key,
+ RequestTimeStamp datetime2 not null,
+ ItemCount int not null
+)
+```
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+},
+{
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+},
+{
+ "name": "todoItem",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+},
+{
+ "name": "requestLog",
+ "type": "sql",
+ "direction": "out",
+ "commandText": "dbo.RequestLog",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+The following is sample C# script code:
+
+```cs
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static IActionResult Run(HttpRequest req, ILogger log, out ToDoItem todoItem, out RequestLog requestLog)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string requestBody = new StreamReader(req.Body).ReadToEnd();
+ todoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
+
+ requestLog = new RequestLog();
+ requestLog.RequestTimeStamp = DateTime.Now;
+ requestLog.ItemCount = 1;
+
+ return new OkObjectResult(todoItem);
+}
+
+public class RequestLog {
+ public DateTime RequestTimeStamp { get; set; }
+ public int ItemCount { get; set; }
+}
+```
+
+### RabbitMQ output
+
+The following example shows a RabbitMQ output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads in the message from an HTTP trigger and outputs it to the RabbitMQ queue.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "httpTrigger",
+ "direction": "in",
+ "authLevel": "function",
+ "name": "input",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "type": "rabbitMQ",
+ "name": "outputMessage",
+ "queueName": "outputQueue",
+ "connectionStringSetting": "rabbitMQConnectionAppSetting",
+ "direction": "out"
+ }
+ ]
+}
+```
+
+Here's the C# script code:
+
+```C#
+using System;
+using Microsoft.Extensions.Logging;
+
+public static void Run(string input, out string outputMessage, ILogger log)
+{
+ log.LogInformation(input);
+ outputMessage = input;
+}
+```
+
+### SendGrid output
+
+The following example shows a SendGrid output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "queueTrigger",
+ "name": "mymsg",
+ "queueName": "myqueue",
+ "connection": "AzureWebJobsStorage",
+ "direction": "in"
+ },
+ {
+ "type": "sendGrid",
+ "name": "$return",
+ "direction": "out",
+ "apiKey": "SendGridAPIKeyAsAppSetting",
+ "from": "{FromEmail}",
+ "to": "{ToEmail}"
+ }
+ ]
+}
+```
+
+Here's the C# script code:
+
+```csharp
+#r "SendGrid"
+
+using System;
+using SendGrid.Helpers.Mail;
+using Microsoft.Azure.WebJobs.Host;
+
+public static SendGridMessage Run(Message mymsg, ILogger log)
+{
+ SendGridMessage message = new SendGridMessage()
+ {
+ Subject = $"{mymsg.Subject}"
+ };
+
+ message.AddContent("text/plain", $"{mymsg.Content}");
+
+ return message;
+}
+
+public class Message
+{
+ public string ToEmail { get; set; }
+ public string FromEmail { get; set; }
+ public string Subject { get; set; }
+ public string Content { get; set; }
+}
+```
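+
+The preceding example returns a single message through the `$return` binding. When you need asynchronous code, or more than one message per invocation, the SendGrid binding can also be used with `IAsyncCollector<SendGridMessage>`. This is a minimal sketch that assumes the output binding's `name` is changed from `$return` to `messages`:
+
+```csharp
+#r "SendGrid"
+
+using System;
+using SendGrid.Helpers.Mail;
+
+public static async Task Run(Message mymsg, IAsyncCollector<SendGridMessage> messages, ILogger log)
+{
+    // Build the message from the queue payload, as in the previous example.
+    var message = new SendGridMessage()
+    {
+        Subject = $"{mymsg.Subject}"
+    };
+    message.AddContent("text/plain", $"{mymsg.Content}");
+
+    // Add the message to the collector so the binding sends it.
+    await messages.AddAsync(message);
+}
+
+public class Message
+{
+    public string ToEmail { get; set; }
+    public string FromEmail { get; set; }
+    public string Subject { get; set; }
+    public string Content { get; set; }
+}
+```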
+
+### SignalR trigger
+
+Here's example binding data in the *function.json* file:
+
+```json
+{
+ "type": "signalRTrigger",
+ "name": "invocation",
+ "hubName": "SignalRTest",
+ "category": "messages",
+ "event": "SendMessage",
+ "parameterNames": [
+ "message"
+ ],
+ "direction": "in"
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
+using System;
+using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+using Microsoft.Extensions.Logging;
+
+public static void Run(InvocationContext invocation, string message, ILogger logger)
+{
+    logger.LogInformation($"Received {message} from {invocation.ConnectionId}.");
+}
+```
+
+### SignalR input
+
+The following example shows a SignalR connection info input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding to return the connection information.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "hubName": "chat",
+ "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
+ "direction": "in"
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
+using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+
+public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo connectionInfo)
+{
+ return connectionInfo;
+}
+```
+
+You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-signalr-service-input.md#binding-expressions-for-http-trigger): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
+
+Example function.json:
+
+```json
+{
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "hubName": "chat",
+ "userId": "{headers.x-ms-client-principal-id}",
+ "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
+ "direction": "in"
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
+using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+
+public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo connectionInfo)
+{
+ // connectionInfo contains an access key token with a name identifier
+ // claim set to the authenticated user
+ return connectionInfo;
+}
+```
+
+### SignalR output
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "type": "signalR",
+ "name": "signalRMessages",
+ "hubName": "<hub_name>",
+ "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
+ "direction": "out"
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
+using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+
+public static Task Run(
+ object message,
+ IAsyncCollector<SignalRMessage> signalRMessages)
+{
+ return signalRMessages.AddAsync(
+ new SignalRMessage
+ {
+ Target = "newMessage",
+ Arguments = new [] { message }
+ });
+}
+```
+
+You can send a message only to connections that have been authenticated to a user by setting the *user ID* in the SignalR message.
+
+Example function.json:
+
+```json
+{
+ "type": "signalR",
+ "name": "signalRMessages",
+ "hubName": "<hub_name>",
+ "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
+ "direction": "out"
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
+using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+
+public static Task Run(
+ object message,
+ IAsyncCollector<SignalRMessage> signalRMessages)
+{
+ return signalRMessages.AddAsync(
+ new SignalRMessage
+ {
+ // the message will only be sent to this user ID
+ UserId = "userId1",
+ Target = "newMessage",
+ Arguments = new [] { message }
+ });
+}
+```
+
+You can send a message only to connections that have been added to a group by setting the *group name* in the SignalR message.
+
+Example function.json:
+
+```json
+{
+ "type": "signalR",
+ "name": "signalRMessages",
+ "hubName": "<hub_name>",
+ "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
+ "direction": "out"
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
+using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+
+public static Task Run(
+ object message,
+ IAsyncCollector<SignalRMessage> signalRMessages)
+{
+ return signalRMessages.AddAsync(
+ new SignalRMessage
+ {
+ // the message will be sent to the group with this name
+ GroupName = "myGroup",
+ Target = "newMessage",
+ Arguments = new [] { message }
+ });
+}
+```
+
+SignalR Service allows users or connections to be added to groups. Messages can then be sent to a group. You can use the `SignalR` output binding to manage groups.
+
+The following example adds a user to a group.
+
+Example *function.json*
+
+```json
+{
+ "type": "signalR",
+ "name": "signalRGroupActions",
+ "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
+ "hubName": "chat",
+ "direction": "out"
+}
+```
+
+*Run.csx*
+
+```cs
+#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
+using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+using System.Security.Claims;
+
+public static Task Run(
+ HttpRequest req,
+ ClaimsPrincipal claimsPrincipal,
+ IAsyncCollector<SignalRGroupAction> signalRGroupActions)
+{
+ var userIdClaim = claimsPrincipal.FindFirst(ClaimTypes.NameIdentifier);
+ return signalRGroupActions.AddAsync(
+ new SignalRGroupAction
+ {
+ UserId = userIdClaim.Value,
+ GroupName = "myGroup",
+ Action = GroupAction.Add
+ });
+}
+```
+
+The following example removes a user from a group.
+
+Example *function.json*
+
+```json
+{
+ "type": "signalR",
+ "name": "signalRGroupActions",
+ "connectionStringSetting": "<name of setting containing SignalR Service connection string>",
+ "hubName": "chat",
+ "direction": "out"
+}
+```
+
+*Run.csx*
+
+```cs
+#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
+using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+using System.Security.Claims;
+
+public static Task Run(
+ HttpRequest req,
+ ClaimsPrincipal claimsPrincipal,
+ IAsyncCollector<SignalRGroupAction> signalRGroupActions)
+{
+ var userIdClaim = claimsPrincipal.FindFirst(ClaimTypes.NameIdentifier);
+ return signalRGroupActions.AddAsync(
+ new SignalRGroupAction
+ {
+ UserId = userIdClaim.Value,
+ GroupName = "myGroup",
+ Action = GroupAction.Remove
+ });
+}
+```
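+
+The previous examples manage group membership by user ID. Membership can also be managed for an individual connection. The following sketch uses the same *function.json* binding and assumes that your version of the SignalR Service extension exposes a `ConnectionId` property on `SignalRGroupAction`, with the connection ID supplied in a hypothetical `connectionId` query parameter:
+
+*Run.csx*
+
+```cs
+#r "Microsoft.Azure.WebJobs.Extensions.SignalRService"
+using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+
+public static Task Run(
+    HttpRequest req,
+    IAsyncCollector<SignalRGroupAction> signalRGroupActions)
+{
+    // Read the connection ID from a hypothetical "connectionId" query parameter.
+    string connectionId = req.Query["connectionId"];
+
+    // Add this specific connection to the group.
+    return signalRGroupActions.AddAsync(
+        new SignalRGroupAction
+        {
+            ConnectionId = connectionId,
+            GroupName = "myGroup",
+            Action = GroupAction.Add
+        });
+}
+```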
+
+### Twilio output
+
+The following example shows a Twilio output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses an `out` parameter to send a text message.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "type": "twilioSms",
+ "name": "message",
+ "accountSidSetting": "TwilioAccountSid",
+ "authTokenSetting": "TwilioAuthToken",
+ "from": "+1425XXXXXXX",
+ "direction": "out",
+ "body": "Azure Functions Testing"
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Newtonsoft.Json"
+#r "Twilio"
+#r "Microsoft.Azure.WebJobs.Extensions.Twilio"
+
+using System;
+using Microsoft.Extensions.Logging;
+using Newtonsoft.Json;
+using Microsoft.Azure.WebJobs.Extensions.Twilio;
+using Twilio.Rest.Api.V2010.Account;
+using Twilio.Types;
+
+public static void Run(string myQueueItem, out CreateMessageOptions message, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+
+ // In this example the queue item is a JSON string representing an order that contains the name of a
+ // customer and a mobile number to send text updates to.
+ dynamic order = JsonConvert.DeserializeObject(myQueueItem);
+ string msg = "Hello " + order.name + ", thank you for your order.";
+
+ // You must initialize the CreateMessageOptions variable with the "To" phone number.
+ message = new CreateMessageOptions(new PhoneNumber("+1704XXXXXXX"));
+
+ // A dynamic message can be set instead of the body in the output binding. In this example, we use
+ // the order information to personalize a text message.
+ message.Body = msg;
+}
+```
+
+You can't use `out` parameters in asynchronous code. Instead, bind to `IAsyncCollector<CreateMessageOptions>`, as shown in the following asynchronous C# script example:
+
+```cs
+#r "Newtonsoft.Json"
+#r "Twilio"
+#r "Microsoft.Azure.WebJobs.Extensions.Twilio"
+
+using System;
+using Microsoft.Extensions.Logging;
+using Newtonsoft.Json;
+using Microsoft.Azure.WebJobs.Extensions.Twilio;
+using Twilio.Rest.Api.V2010.Account;
+using Twilio.Types;
+
+public static async Task Run(string myQueueItem, IAsyncCollector<CreateMessageOptions> message, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+
+ // In this example the queue item is a JSON string representing an order that contains the name of a
+ // customer and a mobile number to send text updates to.
+ dynamic order = JsonConvert.DeserializeObject(myQueueItem);
+ string msg = "Hello " + order.name + ", thank you for your order.";
+
+ // You must initialize the CreateMessageOptions variable with the "To" phone number.
+ CreateMessageOptions smsText = new CreateMessageOptions(new PhoneNumber("+1704XXXXXXX"));
+
+ // A dynamic message can be set instead of the body in the output binding. In this example, we use
+ // the order information to personalize a text message.
+ smsText.Body = msg;
+
+ await message.AddAsync(smsText);
+}
+```
+
+### Warmup trigger
+
+The following example shows a warmup trigger in a *function.json* file and a [C# script function](functions-reference-csharp.md) that runs on each new instance when it's added to your app.
+
+The warmup trigger isn't supported in version 1.x of the Functions runtime.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "warmupTrigger",
+ "direction": "in",
+ "name": "warmupContext"
+ }
+ ]
+}
+```
+
+Here's the C# script code:
+
+```cs
+public static void Run(WarmupContext warmupContext, ILogger log)
+{
+ log.LogInformation("Function App instance is warm.");
+}
+```
++ ## Next steps > [!div class="nextstepaction"]
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
description: Learn the Azure Functions concepts and techniques that you need to
ms.assetid: d8efe41a-bef8-4167-ba97-f3e016fcd39e Last updated 09/06/2023-+ zone_pivot_groups: programming-languages-set-functions
You need to create a role assignment that provides access to Azure SignalR Servi
An identity-based connection for an Azure service accepts the following common properties, where `<CONNECTION_NAME_PREFIX>` is the value of your `connection` property in the trigger or binding definition: | Property | Environment variable template | Description |
-|||||
+||||
| Token Credential | `<CONNECTION_NAME_PREFIX>__credential` | Defines how a token should be obtained for the connection. This setting should be set to `managedidentity` if your deployed Azure Function intends to use managed identity authentication. This value is only valid when a managed identity is available in the hosting environment. | | Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to `managedidentity`, this property can be set to specify the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. It's invalid to specify both a Resource ID and a client ID. If not specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set. | | Resource ID | `<CONNECTION_NAME_PREFIX>__managedIdentityResourceId` | When `credential` is set to `managedidentity`, this property can be set to specify the resource Identifier to be used when obtaining a token. The property accepts a resource identifier corresponding to the resource ID of the user-defined managed identity. It's invalid to specify both a resource ID and a client ID. If neither are specified, the system-assigned identity is used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` shouldn't be set.
azure-functions Migrate Cosmos Db Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md
This article walks you through the process of migrating your function app to run
Update your `.csproj` project file to use the latest extension version for your process model. The following `.csproj` file uses version 4 of the Azure Cosmos DB extension.
-### [In-process](#tab/in-process)
+### [Isolated worker model](#tab/isolated-process)
```xml <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net7.0</TargetFramework> <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ <OutputType>Exe</OutputType>
</PropertyGroup> <ItemGroup>
- <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="4.3.0" />
- <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.14.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.4.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.10.0" />
</ItemGroup> <ItemGroup> <None Update="host.json">
Update your `.csproj` project file to use the latest extension version for your
</Project> ```
-### [Isolated process](#tab/isolated-process)
+### [In-process model](#tab/in-process)
```xml <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net7.0</TargetFramework> <AzureFunctionsVersion>v4</AzureFunctionsVersion>
- <OutputType>Exe</OutputType>
</PropertyGroup> <ItemGroup>
- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.14.1" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.4.1" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.10.0" />
+ <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="4.3.0" />
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" />
</ItemGroup> <ItemGroup> <None Update="host.json">
azure-functions Recover Python Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md
zone_pivot_groups: python-mode-functions
# Troubleshoot Python errors in Azure Functions
-This article provides information to help you troubleshoot errors with your Python functions in Azure Functions. This article supports both the v1 and v2 programming models. Choose the model you want to use from the selector at the top of the article. The v2 model is currently in preview. For more information on Python programming models, see the [Python developer guide](./functions-reference-python.md).
+This article provides information to help you troubleshoot errors with your Python functions in Azure Functions. This article supports both the v1 and v2 programming models. Choose the model you want to use from the selector at the top of the article.
> [!NOTE] > The Python v2 programming model is only supported in the 4.x functions runtime. For more information, see [Azure Functions runtime versions overview](./functions-versions.md).
azure-functions Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/supported-languages.md
Title: Supported languages in Azure Functions description: Learn which languages are supported for developing your Functions in Azure, the support level of the various language versions, and potential end-of-life dates. + Last updated 08/27/2023 zone_pivot_groups: programming-languages-set-functions
Starting with version 2.x, the runtime is designed to offer [language extensibil
## Next steps ::: zone pivot="programming-language-csharp"
-### [Isolated process](#tab/isolated-process)
+### [Isolated worker model](#tab/isolated-process)
> [!div class="nextstepaction"] > [.NET isolated worker process reference](dotnet-isolated-process-guide.md).
-### [In-process](#tab/in-process)
+### [In-process model](#tab/in-process)
> [!div class="nextstepaction"] > [In-process C# developer reference](functions-dotnet-class-library.md)
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md
After the application receives a SAS token, the Azure Maps SDK and/or applicatio
## Cross origin resource sharing (CORS)
+[CORS] is an HTTP protocol that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as [same-origin policy] that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain. Using the Azure Maps account resource, you can configure which origins are allowed to access the Azure Maps REST API from your applications.
-Cross Origin Resource Sharing (CORS) is in preview.
+> [!IMPORTANT]
+> CORS is not an authorization mechanism. Any request made to a map account using REST API, when CORS is enabled, also needs a valid map account authentication scheme such as Shared Key, Azure AD, or SAS token.
+>
+> CORS is supported for all map account pricing tiers, data-plane endpoints, and locations.
### Prerequisites
To prevent malicious code execution on the client, modern browsers block request
- If you're unfamiliar with CORS, see [Cross-origin resource sharing (CORS)], it lets an `Access-Control-Allow-Origin` header declare which origins are allowed to call endpoints of an Azure Maps account. CORS protocol isn't specific to Azure Maps.
-### Account CORS
-
-[CORS] is an HTTP protocol that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as [same-origin policy] that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain. Using the Azure Maps account resource, you can configure which origins are allowed to access the Azure Maps REST API from your applications.
-
-> [!IMPORTANT]
-> CORS is not an authorization mechanism. Any request made to a map account using REST API, when CORS is enabled, also needs a valid map account authentication scheme such as Shared Key, Azure AD, or SAS token.
->
-> CORS is supported for all map account pricing tiers, data-plane endpoints, and locations.
- ### CORS requests A CORS request from an origin domain may consist of two separate requests:
azure-maps How To Manage Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-pricing-tier.md
To change your pricing tier from Gen1 to Gen2 in the Azure Portal, navigate to t
To change your pricing tier from Gen1 to Gen2 in the ARM template, update `pricingTier` to **G2** and `kind` to **Gen2**. For more info on using ARM templates, see [Create account with ARM template].
+<!
+ :::image type="content" source="./media/how-to-manage-pricing-tier/arm-template.png" border="true" alt-text="Screenshot of an ARM template that demonstrates updating pricingTier to G2 and kind to Gen2.":::
-<!
```json "pricingTier": { "type": "string",
To change your pricing tier from Gen1 to Gen2 in the ARM template, update `prici
} } ```- :::code language="json" source="~/quickstart-templates/quickstarts/microsoft.maps/maps-create/azuredeploy.json" range="27-46"::: > + ## Next steps Learn how to see the API usage metrics for your Azure Maps account:
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md
# Render custom data on a raster map
-This article describes how to use the [static image service] with image composition functionality. Image composition functionality supports the retrieval of static raster tile that contains custom data.
+This article describes how to use the [Get Map Static Image] command with image composition functionality. Image composition functionality supports the retrieval of static raster tiles that contain custom data.
The following are examples of custom data:
The following are examples of custom data:
- Geometry overlays > [!TIP]
-> To show a simple map on a web page, it's often more cost effective to use the Azure Maps Web SDK, rather than to use the static image service. The web SDK uses map tiles; and unless the user pans and zooms the map, they will often generate only a fraction of a transaction per map load. The Azure Maps web SDK has options for disabling panning and zooming. Also, the Azure Maps web SDK provides a richer set of data visualization options than a static map web service does.
+> To show a simple map on a web page, it's often more cost effective to use the Azure Maps Web SDK, rather than to use the static image service. The web SDK uses map tiles; and unless the user pans and zooms the map, they will often generate only a fraction of a transaction per map load. The Azure Maps Web SDK has options for disabling panning and zooming. Also, the Azure Maps Web SDK provides a richer set of data visualization options than a static map web service does.
## Prerequisites
This article uses the [Postman] application, but you may use a different API dev
> [!NOTE] > The procedure in this section requires an Azure Maps account in the Gen1 or Gen2 pricing tier.
-The Azure Maps account Gen1 Standard S0 tier supports only a single instance of the `pins` parameter. It allows you to render up to five pushpins, specified in the URL request, with a custom image.
+The Azure Maps account Gen1 S0 pricing tier only supports a single instance of the [pins] parameter. It allows you to render up to five pushpins, specified in the URL request, with a custom image.
> > **Azure Maps Gen1 pricing tier retirement** >
To get a static image with custom pins and labels:
2. In the **Create New** window, select **HTTP Request**.
-3. Enter a **Request name** for the request, such as *GET Static Image*.
+3. Enter a **Request name** for the request, such as *Get Map Static Image*.
4. Select the **GET** HTTP method. 5. Enter the following URL: ```HTTP
- https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12&center=-73.98,%2040.77&pins=custom%7Cla15+50%7Cls12%7Clc003b61%7C%7C%27CentralPark%27-73.9657974+40.781971%7C%7Chttps%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FAzureMapsCodeSamples%2Fmaster%2FAzureMapsCodeSamples%2FCommon%2Fimages%2Ficons%2Fylw-pushpin.png
+ https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2022-08-01&layer=basic&style=main&zoom=12&center=-73.98,%2040.77&pins=custom%7Cla15+50%7Cls12%7Clc003b61%7C%7C%27CentralPark%27-73.9657974+40.781971%7C%7Chttps%3A%2F%2Fsamples.azuremaps.com%2Fimages%2Ficons%2Fylw-pushpin.png
``` 6. Select **Send**.
To get a static image with custom pins and labels:
> [!NOTE] > The procedure in this section requires an Azure Maps account Gen1 (S1) or Gen2 pricing tier.
-You can modify the appearance of a polygon by using style modifiers with the [path parameter].
+You can modify the appearance of a polygon by using style modifiers with the [path] parameter.
To render a polygon with color and opacity:
To render a polygon with color and opacity:
4. Select the **GET** HTTP method.
-5. Enter the following URL to the [Render service]:
+5. Enter the following URL to the [Render] service:
```HTTP https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&style=main&layer=basic&sku=S1&zoom=14&height=500&Width=500&center=-74.040701, 40.698666&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.50||-74.03995513916016 40.70090237454063|-74.04082417488098 40.70028420372218|-74.04113531112671 40.70049568385827|-74.04298067092896 40.69899904076542|-74.04271245002747 40.69879568992435|-74.04367804527283 40.6980961582905|-74.04364585876465 40.698055487620714|-74.04368877410889 40.698022951066996|-74.04168248176573 40.696444909137|-74.03901100158691 40.69837271818651|-74.03824925422668 40.69837271818651|-74.03809905052185 40.69903971085914|-74.03771281242369 40.699340668780984|-74.03940796852112 40.70058515602143|-74.03948307037354 40.70052821920425|-74.03995513916016 40.70090237454063
To render a polygon with color and opacity:
> [!NOTE] > The procedure in this section requires an Azure Maps account Gen1 (S1) or Gen2 pricing tier.
-You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 makes the pins larger, and values smaller than 1 makes them smaller. For more information about style modifiers, see [static image service path parameters].
+You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 makes the pins larger, and values smaller than 1 makes them smaller. For more information about style modifiers, see the [Path] parameter of the [Get Map Static Image] command.
To render a circle and pushpins with custom labels:
To render a circle and pushpins with custom labels:
4. Select the **GET** HTTP method.
-5. Enter the following URL to the [Render service]:
+5. Enter the following URL to the [Render] service:
```HTTP https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co002D62||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key}
Similarly, you can change, add, and remove other style modifiers.
## Next steps > [!div class="nextstepaction"]
-> [Render - Get Map Image]
+> [Render - Get Map Static Image]
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Render - Get Map Image]: /rest/api/maps/render/getmapimage
-[path parameter]: /rest/api/maps/render/getmapimage#uri-parameters
[Postman]: https://www.postman.com/
-[Render service]: /rest/api/maps/render/get-map-image
-[static image service path parameters]: /rest/api/maps/render/getmapimage#uri-parameters
-[static image service]: /rest/api/maps/render/getmapimage
[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account+
+[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image
+[Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md
+[path]: /rest/api/maps/render-v2/get-map-static-image#uri-parameters
+[pins]: /rest/api/maps/render-v2/get-map-static-image#uri-parameters
+[Render]: /rest/api/maps/render-v2/get-map-static-image
+[Render - Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image
azure-monitor Availability Test Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-test-migration.md
Title: Migrate from Azure Monitor Application Insights classic URL ping tests to
description: How to migrate from Azure Monitor Application Insights classic availability URL ping tests to standard tests. Previously updated : 07/19/2023 Last updated : 09/27/2023 # Migrate availability tests
-In this article, we guide you through the process of migrating from [classic URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) to the modern and efficient [standard tests](availability-standard-tests.md) .
+In this article, we guide you through the process of migrating from [classic URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) to the modern and efficient [standard tests](availability-standard-tests.md).
We simplify this process by providing clear step-by-step instructions to ensure a seamless transition and equip your applications with the most up-to-date monitoring capabilities.
We simplify this process by providing clear step-by-step instructions to ensure
The following steps walk you through the process of creating [standard tests](availability-standard-tests.md) that replicate the functionality of your [URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). It allows you to more easily start using the advanced features of [standard tests](availability-standard-tests.md) using your previously created [URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability).
-> [!NOTE]
-> A cost is associated with running [standard tests](availability-standard-tests.md). Once you create a [standard test](availability-standard-tests.md), you will be charged for test executions.
-> Refer to [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing) before starting this process.
+> [!IMPORTANT]
+>
+> On 30 September 2026, the **[URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability)** will be retired. Before that date, you'll need to transition to **[standard tests](availability-standard-tests.md)**.
+>
+> - A cost is associated with running **[standard tests](availability-standard-tests.md)**. Once you create a **[standard test](availability-standard-tests.md)**, you will be charged for test executions.
+> - Refer to **[Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing)** before starting this process.
### Prerequisites
The following steps walk you through the process of creating [standard tests](av
We recommend using these commands to migrate a URL ping test to a standard test and take advantage of the available capabilities. Remember, this migration is optional. - #### Do these steps work for both HTTP and HTTPS endpoints? Yes, these commands work for both HTTP and HTTPS endpoints, which are used in your URL ping Tests.
Yes, these commands work for both HTTP and HTTPS endpoints, which are used in yo
* [Availability alerts](availability-alerts.md) * [Troubleshooting](troubleshoot-availability.md) * [Web tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
-* [Web test REST API](/rest/api/application-insights/web-tests)
+* [Web test REST API](/rest/api/application-insights/web-tests)
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
You can't extend the Java Distro with community instrumentation libraries. To re
Other OpenTelemetry Instrumentations are available [here](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node) and could be added using TraceHandler in ApplicationInsightsClient. ```javascript
+ // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { metrics, trace, ProxyTracerProvider } = require("@opentelemetry/api");+
+ // Import the OpenTelemetry instrumentation registration function and Express instrumentation
const { registerInstrumentations } = require( "@opentelemetry/instrumentation"); const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');
- useAzureMonitor();
+ // Get the OpenTelemetry tracer provider and meter provider
const tracerProvider = (trace.getTracerProvider() as ProxyTracerProvider).getDelegate(); const meterProvider = metrics.getMeterProvider();+
+ // Enable Azure Monitor integration
+ useAzureMonitor();
+
+ // Register the Express instrumentation
registerInstrumentations({
- instrumentations: [
- new ExpressInstrumentation(),
- ],
- tracerProvider: tracerProvider,
- meterProvider: meterProvider
+ // List of instrumentations to register
+ instrumentations: [
+ new ExpressInstrumentation(), // Express instrumentation
+ ],
+ // OpenTelemetry tracer provider
+ tracerProvider: tracerProvider,
+ // OpenTelemetry meter provider
+ meterProvider: meterProvider
});
-```
+```
### [Python](#tab/python)
public class Program {
#### [Node.js](#tab/nodejs) ```javascript
+ // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { metrics } = require("@opentelemetry/api");
+ // Enable Azure Monitor integration
useAzureMonitor();+
+ // Get the meter for the "testMeter" namespace
const meter = metrics.getMeter("testMeter");+
+ // Create a histogram metric
let histogram = meter.createHistogram("histogram");+
+ // Record values to the histogram metric with different tags
histogram.record(1, { "testKey": "testValue" }); histogram.record(30, { "testKey": "testValue2" }); histogram.record(100, { "testKey2": "testValue" });
public class Program {
#### [Node.js](#tab/nodejs) ```javascript
+ // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { metrics } = require("@opentelemetry/api");
+ // Enable Azure Monitor integration
useAzureMonitor();+
+ // Get the meter for the "testMeter" namespace
const meter = metrics.getMeter("testMeter");+
+ // Create a counter metric
let counter = meter.createCounter("counter");+
+ // Add values to the counter metric with different tags
counter.add(1, { "testKey": "testValue" }); counter.add(5, { "testKey2": "testValue" }); counter.add(3, { "testKey": "testValue2" });
public class Program {
#### [Node.js](#tab/nodejs) ```typescript
+ // Import the useAzureMonitor function and the metrics module from the @azure/monitor-opentelemetry and @opentelemetry/api packages, respectively.
const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { metrics } = require("@opentelemetry/api");
+ // Enable Azure Monitor integration.
useAzureMonitor();
- const meter = metrics.getMeter("testMeter");
+
+ // Get the meter for the "testMeter" meter name.
+ const meter = metrics.getMeter("testMeter");
+
+ // Create an observable gauge metric with the name "gauge".
let gauge = meter.createObservableGauge("gauge");+
+ // Add a callback to the gauge metric. The callback will be invoked periodically to generate a new value for the gauge metric.
gauge.addCallback((observableResult: ObservableResult) => {
- let randomNumber = Math.floor(Math.random() * 100);
- observableResult.observe(randomNumber, {"testKey": "testValue"});
+ // Generate a random number between 0 and 99.
+ let randomNumber = Math.floor(Math.random() * 100);
+
+ // Set the value of the gauge metric to the random number.
+ observableResult.observe(randomNumber, {"testKey": "testValue"});
}); ```
You can use `opentelemetry-api` to update the status of a span and record except
#### [Node.js](#tab/nodejs) ```javascript
+ // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { trace } = require("@opentelemetry/api");
+ // Enable Azure Monitor integration
useAzureMonitor();+
+ // Get the tracer for the "testTracer" namespace
const tracer = trace.getTracer("testTracer");+
+ // Start a span with the name "hello"
let span = tracer.startSpan("hello");+
+ // Try to throw an error
try{
- throw new Error("Test Error");
+ throw new Error("Test Error");
}+
+ // Catch the error and record it to the span
catch(error){
- span.recordException(error);
+ span.recordException(error);
} ```
you can add your spans by using the OpenTelemetry API.
#### [Node.js](#tab/nodejs) ```javascript
+ // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { trace } = require("@opentelemetry/api");
+ // Enable Azure Monitor integration
useAzureMonitor();+
+ // Get the tracer for the "testTracer" namespace
const tracer = trace.getTracer("testTracer");+
+ // Start a span with the name "hello"
let span = tracer.startSpan("hello");+
+ // End the span
span.end(); ``` - #### [Python](#tab/python) The OpenTelemetry API can be used to add your own spans, which appear in the `requests` and `dependencies` tables in Application Insights.
If you want to add custom events or access the Application Insights API, replace
You need to use the `applicationinsights` v3 Beta package to send custom telemetry using the Application Insights classic API. (https://www.npmjs.com/package/applicationinsights/v/beta) ```javascript
+ // Import the TelemetryClient class from the Application Insights SDK for JavaScript.
const { TelemetryClient } = require("applicationinsights");
+ // Create a new TelemetryClient instance.
const telemetryClient = new TelemetryClient(); ```
Then use the `TelemetryClient` to send custom telemetry:
##### Events ```javascript
+ // Create an event telemetry object.
let eventTelemetry = {
- name: "testEvent"
+ name: "testEvent"
};+
+ // Send the event telemetry object to Azure Monitor Application Insights.
telemetryClient.trackEvent(eventTelemetry); ``` ##### Logs ```javascript
+ // Create a trace telemetry object.
let traceTelemetry = {
- message: "testMessage",
- severity: "Information"
+ message: "testMessage",
+ severity: "Information"
};+
+ // Send the trace telemetry object to Azure Monitor Application Insights.
telemetryClient.trackTrace(traceTelemetry); ``` ##### Exceptions ```javascript
+ // Try to execute a block of code.
try {
- ...
- } catch (error) {
- let exceptionTelemetry = {
- exception: error,
- severity: "Critical"
- };
- telemetryClient.trackException(exceptionTelemetry);
+ ...
}+
+ // If an error occurs, catch it and send it to Azure Monitor Application Insights as an exception telemetry item.
+ catch (error) {
+ let exceptionTelemetry = {
+ exception: error,
+ severity: "Critical"
+ };
+ telemetryClient.trackException(exceptionTelemetry);
+}
``` #### [Python](#tab/python)
Adding one or more span attributes populates the `customDimensions` field in the
##### [Node.js](#tab/nodejs) ```typescript
- const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
- const { trace, ProxyTracerProvider } = require("@opentelemetry/api");
- const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base");
- const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
- const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
+// Import the necessary packages.
+const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+const { trace, ProxyTracerProvider } = require("@opentelemetry/api");
+const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base");
+const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
+const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
+
+// Enable Azure Monitor integration.
+useAzureMonitor();
+
+// Get the NodeTracerProvider instance.
+const tracerProvider = ((trace.getTracerProvider() as ProxyTracerProvider).getDelegate() as NodeTracerProvider);
+
+// Create a new SpanEnrichingProcessor class.
+class SpanEnrichingProcessor implements SpanProcessor {
+ forceFlush(): Promise<void> {
+ return Promise.resolve();
+ }
- useAzureMonitor();
- const tracerProvider = ((trace.getTracerProvider() as ProxyTracerProvider).getDelegate() as NodeTracerProvider);
+ shutdown(): Promise<void> {
+ return Promise.resolve();
+ }
- class SpanEnrichingProcessor implements SpanProcessor{
- forceFlush(): Promise<void>{
- return Promise.resolve();
- }
- shutdown(): Promise<void>{
- return Promise.resolve();
- }
- onStart(_span: Span): void{}
- onEnd(span: ReadableSpan){
- span.attributes["CustomDimension1"] = "value1";
- span.attributes["CustomDimension2"] = "value2";
- }
- }
+ onStart(_span: Span): void {}
- tracerProvider.addSpanProcessor(new SpanEnrichingProcessor());
+ onEnd(span: ReadableSpan) {
+ // Add custom dimensions to the span.
+ span.attributes["CustomDimension1"] = "value1";
+ span.attributes["CustomDimension2"] = "value2";
+ }
+}
+
+// Add the SpanEnrichingProcessor instance to the NodeTracerProvider instance.
+tracerProvider.addSpanProcessor(new SpanEnrichingProcessor());
``` ##### [Python](#tab/python)
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
```typescript ...
+ // Import the SemanticAttributes class from the @opentelemetry/semantic-conventions package.
const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
- class SpanEnrichingProcessor implements SpanProcessor{
- ...
+ // Create a new SpanEnrichingProcessor class.
+ class SpanEnrichingProcessor implements SpanProcessor {
- onEnd(span){
- span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
- }
+ onEnd(span) {
+ // Set the HTTP_CLIENT_IP attribute on the span to the IP address of the client.
+ span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
+ }
} ```
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
```typescript ...
+ // Import the SemanticAttributes class from the @opentelemetry/semantic-conventions package.
import { SemanticAttributes } from "@opentelemetry/semantic-conventions";
- class SpanEnrichingProcessor implements SpanProcessor{
- ...
+ // Create a new SpanEnrichingProcessor class.
+ class SpanEnrichingProcessor implements SpanProcessor {
- onEnd(span: ReadableSpan){
- span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
- }
+ onEnd(span: ReadableSpan) {
+ // Set the ENDUSER_ID attribute on the span to the ID of the user.
+ span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
+ }
} ```
Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching c
#### [Node.js](#tab/nodejs) ```typescript
+ // Import the useAzureMonitor function and the logs module from the @azure/monitor-opentelemetry and @opentelemetry/api-logs packages, respectively.
const { useAzureMonitor } = require("@azure/monitor-opentelemetry"); const { logs } = require("@opentelemetry/api-logs"); import { Logger } from "@opentelemetry/sdk-logs";
+ // Enable Azure Monitor integration.
useAzureMonitor();+
+ // Get the logger for the "testLogger" logger name.
const logger = (logs.getLogger("testLogger") as Logger);+
+ // Create a new log record.
const logRecord = {
- body: "testEvent",
- attributes: {
- "testAttribute1": "testValue1",
- "testAttribute2": "testValue2",
- "testAttribute3": "testValue3"
- }
+ body: "testEvent",
+ attributes: {
+ "testAttribute1": "testValue1",
+ "testAttribute2": "testValue2",
+ "testAttribute3": "testValue3"
+ }
};+
+ // Emit the log record.
logger.emit(logRecord); ```
See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) a
The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http): ```typescript
+ // Import the useAzureMonitor function and the ApplicationInsightsOptions class from the @azure/monitor-opentelemetry package.
const { useAzureMonitor, ApplicationInsightsOptions } = require("@azure/monitor-opentelemetry");+
+ // Import the HttpInstrumentationConfig class from the @opentelemetry/instrumentation-http package.
const { HttpInstrumentationConfig }= require("@opentelemetry/instrumentation-http");+
+ // Import the IncomingMessage and RequestOptions classes from the http and https packages, respectively.
const { IncomingMessage } = require("http"); const { RequestOptions } = require("https");
+ // Create a new HttpInstrumentationConfig object.
const httpInstrumentationConfig: HttpInstrumentationConfig = {
- enabled: true,
- ignoreIncomingRequestHook: (request: IncomingMessage) => {
- // Ignore OPTIONS incoming requests
- if (request.method === 'OPTIONS') {
- return true;
- }
- return false;
- },
- ignoreOutgoingRequestHook: (options: RequestOptions) => {
- // Ignore outgoing requests with /test path
- if (options.path === '/test') {
- return true;
- }
- return false;
+ enabled: true,
+ ignoreIncomingRequestHook: (request: IncomingMessage) => {
+ // Ignore OPTIONS incoming requests.
+ if (request.method === 'OPTIONS') {
+ return true;
}
+ return false;
+ },
+ ignoreOutgoingRequestHook: (options: RequestOptions) => {
+ // Ignore outgoing requests with the /test path.
+ if (options.path === '/test') {
+ return true;
+ }
+ return false;
+ }
};+
+ // Create a new ApplicationInsightsOptions object.
const config: ApplicationInsightsOptions = {
- instrumentationOptions: {
- http: {
- httpInstrumentationConfig
- },
- },
+ instrumentationOptions: {
+ http: {
+ httpInstrumentationConfig
+ }
+ }
};+
+ // Enable Azure Monitor integration using the useAzureMonitor function and the ApplicationInsightsOptions object.
useAzureMonitor(config); ```
See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) a
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code: ```typescript
+ // Import the SpanKind and TraceFlags classes from the @opentelemetry/api package.
const { SpanKind, TraceFlags } = require("@opentelemetry/api");
+ // Create a new SpanEnrichingProcessor class.
class SpanEnrichingProcessor {
- ...
- onEnd(span) {
- if(span.kind == SpanKind.INTERNAL){
- span.spanContext().traceFlags = TraceFlags.NONE;
- }
+ onEnd(span) {
+ // If the span is an internal span, set the trace flags to NONE.
+ if(span.kind == SpanKind.INTERNAL){
+ span.spanContext().traceFlags = TraceFlags.NONE;
}
+ }
} ```
You can use `opentelemetry-api` to get the trace ID or span ID.
Get the request trace ID and the span ID in your code: ```javascript
- const { trace } = require("@opentelemetry/api");
+ // Import the trace module from the OpenTelemetry API.
+ const { trace } = require("@opentelemetry/api");
- let spanId = trace.getActiveSpan().spanContext().spanId;
- let traceId = trace.getActiveSpan().spanContext().traceId;
+ // Get the span ID and trace ID of the active span.
+ let spanId = trace.getActiveSpan().spanContext().spanId;
+ let traceId = trace.getActiveSpan().spanContext().traceId;
``` ### [Python](#tab/python)
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
Use one of the following two ways to configure the connection string:
- Use configuration object: ```typescript
- const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+ // Import the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions class from the @azure/monitor-opentelemetry package.
+ const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+
+ // Create a new AzureMonitorOpenTelemetryOptions object.
const options: AzureMonitorOpenTelemetryOptions = {
- azureMonitorExporterOptions: {
- connectionString: "<your connection string>"
- }
+ azureMonitorExporterOptions: {
+ connectionString: "<your connection string>"
+ }
};
- useAzureMonitor(options);
+ // Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object.
+ useAzureMonitor(options);
``` ### [Python](#tab/python)
To set the cloud role instance, see [cloud role instance](java-standalone-config
Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md). ```typescript
-...
+// Import the useAzureMonitor function, the AzureMonitorOpenTelemetryOptions class, the Resource class, and the SemanticResourceAttributes class from the @azure/monitor-opentelemetry, @opentelemetry/resources, and @opentelemetry/semantic-conventions packages, respectively.
const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry"); const { Resource } = require("@opentelemetry/resources"); const { SemanticResourceAttributes } = require("@opentelemetry/semantic-conventions");
-// -
-// Setting role name and role instance
-// -
+
+// Create a new Resource object with the following custom resource attributes:
+//
+// * service_name: my-service
+// * service_namespace: my-namespace
+// * service_instance_id: my-instance
const customResource = new Resource({
- [SemanticResourceAttributes.SERVICE_NAME]: "my-service",
- [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace",
- [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance",
+ [SemanticResourceAttributes.SERVICE_NAME]: "my-service",
+ [SemanticResourceAttributes.SERVICE_NAMESPACE]: "my-namespace",
+ [SemanticResourceAttributes.SERVICE_INSTANCE_ID]: "my-instance",
});+
+// Create a new AzureMonitorOpenTelemetryOptions object and set the resource property to the customResource object.
const options: AzureMonitorOpenTelemetryOptions = {
- resource: customResource
+ resource: customResource
};+
+// Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object.
useAzureMonitor(options); ```
Starting from 3.4.0, rate-limited sampling is available and is now the default.
The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent. ```typescript
+// Import the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions class from the @azure/monitor-opentelemetry package.
const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+// Create a new AzureMonitorOpenTelemetryOptions object and set the samplingRatio property to 0.1.
const options: AzureMonitorOpenTelemetryOptions = {
- samplingRatio: 0.1
+ samplingRatio: 0.1
};+
+// Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object.
useAzureMonitor(options); ```
For more information about Java, see the [Java supplemental documentation](java-
We support the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/identity/identity#credential-classes). ```typescript
+// Import the useAzureMonitor function, the AzureMonitorOpenTelemetryOptions class, and the ManagedIdentityCredential class from the @azure/monitor-opentelemetry and @azure/identity packages, respectively.
const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry"); const { ManagedIdentityCredential } = require("@azure/identity");
+// Create a new ManagedIdentityCredential object.
+const credential = new ManagedIdentityCredential();
+
+// Create a new AzureMonitorOpenTelemetryOptions object and set the credential property to the credential object.
const options: AzureMonitorOpenTelemetryOptions = {
- credential: new ManagedIdentityCredential()
+ credential: credential
};+
+// Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object.
useAzureMonitor(options); ```
For example:
```typescript
+// Import the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions class from the @azure/monitor-opentelemetry package.
const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+// Create a new AzureMonitorOpenTelemetryOptions object and set the azureMonitorExporterOptions property to an object with the following properties:
+//
+// * connectionString: The connection string for your Azure Monitor Application Insights resource.
+// * storageDirectory: The directory where the Azure Monitor OpenTelemetry exporter will store telemetry data when it is offline.
+// * disableOfflineStorage: A boolean value that specifies whether to disable offline storage.
const options: AzureMonitorOpenTelemetryOptions = {
- azureMonitorExporterOptions = {
- connectionString: "<Your Connection String>",
- storageDirectory: "C:\\SomeDirectory",
- disableOfflineStorage: false
- }
+ azureMonitorExporterOptions: {
+ connectionString: "<Your Connection String>",
+ storageDirectory: "C:\\SomeDirectory",
+ disableOfflineStorage: false
+ }
};+
+// Enable Azure Monitor integration using the useAzureMonitor function and the AzureMonitorOpenTelemetryOptions object.
useAzureMonitor(options); ```
For more information about Java, see the [Java supplemental documentation](java-
2. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-js/tree/main/examples/otlp-exporter-node). ```typescript
+ // Import the useAzureMonitor function, the AzureMonitorOpenTelemetryOptions class, the trace module, the ProxyTracerProvider class, the BatchSpanProcessor class, the NodeTracerProvider class, and the OTLPTraceExporter class from the @azure/monitor-opentelemetry, @opentelemetry/api, @opentelemetry/sdk-trace-base, @opentelemetry/sdk-trace-node, and @opentelemetry/exporter-trace-otlp-http packages, respectively.
const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry"); const { trace, ProxyTracerProvider } = require("@opentelemetry/api"); const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base'); const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node'); const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
+ // Enable Azure Monitor integration.
useAzureMonitor();+
+ // Create a new OTLPTraceExporter object.
const otlpExporter = new OTLPTraceExporter();+
+ // Get the NodeTracerProvider instance.
const tracerProvider = ((trace.getTracerProvider() as ProxyTracerProvider).getDelegate() as NodeTracerProvider);+
+ // Add a BatchSpanProcessor to the NodeTracerProvider instance.
tracerProvider.addSpanProcessor(new BatchSpanProcessor(otlpExporter)); ``` - #### [Python](#tab/python) 1. Install the [opentelemetry-exporter-otlp](https://pypi.org/project/opentelemetry-exporter-otlp/) package.
azure-monitor Container Insights V2 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-v2-migration.md
To transition to ContainerLogV2, we recommend the following approach.
The following table highlights the key differences between using ContainerLog and ContainerLogV2 schema.
-| Feature Differences | ContainerLog | ContainerLogV2 |
+| Feature differences | ContainerLog | ContainerLogV2 |
| - | -- | - | | Onboarding | Only configurable through the ConfigMap | Configurable through both the ConfigMap and DCR | | Pricing | Only compatible with full-priced analytics logs | Supports the low cost basic logs tier in addition to analytics logs |
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
The default pricing for Log Analytics is a pay-as-you-go model that's based on i
## Data size calculation
-Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record. It doesn't matter whether the data is sent from an agent or added during the ingestion process. This calculation includes any custom columns added by the [logs ingestion API](logs-ingestion-api-overview.md), [transformations](../essentials/data-collection-transformations.md) or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace.
+Data volume is measured as the size of the data sent to be stored, in units of GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record. It doesn't matter whether the data is sent from an agent or added during the ingestion process. This calculation includes any custom columns added by the [logs ingestion API](logs-ingestion-api-overview.md), [transformations](../essentials/data-collection-transformations.md), or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace.
>[!NOTE] >The billable data volume calculation is generally substantially smaller than the size of the entire incoming JSON-packaged event. On average, across all event types, the billed size is around 25 percent less than the incoming data size. It can be up to 50 percent for small events. The percentage includes the effect of the standard columns excluded from billing. It's essential to understand this calculation of billed data size when you estimate costs and compare other pricing models.
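As a rough, hedged sketch of the idea (not the exact billing algorithm; the record fields and values below are made up for illustration), the billable size of a record can be thought of as the combined length of the string values stored for its columns:
```typescript
// Illustrative only: approximate a record's billable size as the total UTF-8 length
// of the string representation of its stored column values. The real calculation
// also excludes certain standard columns and applies other adjustments.
const record: Record<string, string> = {
  Computer: "web-01",
  Category: "Application",
  Message: "Request completed in 52 ms",
};

const approximateBillableBytes = Object.values(record).reduce(
  (total, value) => total + Buffer.byteLength(value, "utf8"),
  0
);

console.log(`Approximate billable size: ${approximateBillableBytes} bytes`);
```
The incoming JSON envelope for the same event is typically larger than this stored representation, which is consistent with the note above about billed size averaging roughly 25 percent less than the incoming data size.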
Azure Commitment Discounts, such as discounts received from [Microsoft Enterpris
## Dedicated clusters
-An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features, such as [customer-managed keys](customer-managed-keys.md), and use the same commitment-tier pricing model as workspaces, although they must have a commitment level of at least 100 GB per day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There's no pay-as-you-go option for clusters.
+An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features, such as [customer-managed keys](customer-managed-keys.md), and use the same commitment-tier pricing model as workspaces, although they must have a commitment level of at least 500 GB per day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There's no pay-as-you-go option for clusters.
The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level by using the configured commitment tier level.
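To make the overage arithmetic concrete, here's a minimal sketch. The tier price used below is a placeholder, not a published rate; check the Azure Monitor pricing page for actual prices.
```typescript
// Hypothetical daily-charge estimate for a dedicated cluster on the 500 GB/day tier.
const commitmentLevelGB = 500;          // minimum commitment level for a cluster
const commitmentTierDailyPrice = 2500;  // placeholder price for the 500 GB/day tier
const effectivePricePerGB = commitmentTierDailyPrice / commitmentLevelGB;

function estimateDailyCharge(ingestedGB: number): number {
  // Usage above the commitment level (overage) is billed at the tier's effective per-GB price.
  const overageGB = Math.max(0, ingestedGB - commitmentLevelGB);
  return commitmentTierDailyPrice + overageGB * effectivePricePerGB;
}

// 650 GB ingested in a day = 500 GB commitment + 150 GB overage.
console.log(estimateDailyCharge(650));
```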
This query isn't an exact replication of how usage is calculated, but it provide
- See [Set daily cap on Log Analytics workspace](daily-cap.md) to control your costs by configuring a maximum volume that might be ingested in a workspace each day. - See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges. +
azure-monitor Vminsights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md
Every record in VMBoundPort is identified by the following fields:
|Ip | Port IP address (can be wildcard IP, *0.0.0.0*) | |Port |The Port number | |Protocol | The protocol. Example, *tcp* or *udp* (only *tcp* is currently supported).|
-
+ The identity of a port is derived from the above five fields and is stored in the PortId property. This property can be used to quickly find records for a specific port across time. #### Metrics
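Returning to the PortId property described above: as a hedged illustration (the workspace ID and PortId value are placeholders, and the query assumes the standard TimeGenerated column), you could filter on PortId from code with the @azure/monitor-query package:
```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient, Durations } from "@azure/monitor-query";

// Placeholder values; substitute your own Log Analytics workspace ID and PortId.
const workspaceId = "<log-analytics-workspace-id>";
const portId = "<port-id-value>";

const client = new LogsQueryClient(new DefaultAzureCredential());

async function main(): Promise<void> {
  // Count VMBoundPort records for a single port, bucketed by hour.
  const query = `VMBoundPort
| where PortId == "${portId}"
| summarize Records = count() by bin(TimeGenerated, 1h)`;

  const result = await client.queryWorkspace(workspaceId, query, {
    duration: Durations.sevenDays,
  });
  console.log(JSON.stringify(result, null, 2));
}

main().catch(console.error);
```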
let remoteMachines = remote | summarize by RemoteMachine;
``` ## Performance records
-Records with a type of *InsightsMetrics* have performance data from the guest operating system of the virtual machine. These records have the properties in the following table:
+Records with a type of *InsightsMetrics* have performance data from the guest operating system of the virtual machine. These records are collected at 60 second intervals and have the properties in the following table:
+ | Property | Description |
The performance counters currently collected into the *InsightsMetrics* table ar
| LogicalDisk | BytesPerSecond | Logical Disk Bytes Per Second | BytesPerSecond | mountId - Mount ID of the device | +++ ## Next steps * If you're new to writing log queries in Azure Monitor, review [how to use Log Analytics](../logs/log-analytics-tutorial.md) in the Azure portal to write log queries. * Learn about [writing search queries](../logs/get-started-queries.md).++
azure-netapp-files Access Smb Volume From Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/access-smb-volume-from-windows-client.md
Last updated 09/21/2023
-# Access SMB volumes from Azure Active Directory joined Windows virtual machines
+# Access SMB volumes from Azure Active Directory-joined Windows virtual machines
You can use Azure Active Directory (Azure AD) with the Hybrid Authentication Management module to authenticate credentials in your hybrid cloud. This solution enables Azure AD to become the trusted source for both cloud and on-premises authentication, circumventing the need for clients connecting to Azure NetApp Files to join the on-premises AD domain. >[!NOTE]
->This process does not eliminate the need for Active Directory Domain Services (AD DS) as Azure NetApp Files requires connectivity to AD DS. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning](understand-guidelines-active-directory-domain-service-site.md).
+>Using Azure AD for authenticating [hybrid user identities](../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure NetApp Files SMB shares. This means your end users can access Azure NetApp Files SMB shares without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. Cloud-only identities aren't currently supported. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning](understand-guidelines-active-directory-domain-service-site.md).
:::image type="content" source="../media/azure-netapp-files/diagram-windows-joined-active-directory.png" alt-text="Diagram of SMB volume joined to Azure Active Directory." lightbox="../media/azure-netapp-files/diagram-windows-joined-active-directory.png":::
The configuration process takes you through five processes:
* Add the CIFS SPN to the computer account * Register a new Azure AD application * Sync CIFS password from AD DS to the Azure AD application registration
-* Configure the Azure AD joined VM to use Kerberos authentication
+* Configure the Azure AD-joined VM to use Kerberos authentication
* Mount the Azure NetApp Files SMB volumes ### Add the CIFS SPN to the computer account
The configuration process takes you through five processes:
* `$servicePrincipalName`: The SPN details from mounting the Azure NetApp Files volume. Use the CIFS/FQDN format. For example: `CIFS/NETBIOS-1234.CONTOSO.COM` * `$targetApplicationID`: Application (client) ID of the Azure AD application. * `$domainCred`: use `Get-Credential` (should be an AD DS domain administrator)
- * `$cloudCred`: use `Get-Credential` (should be an AD DS domain administrator)
+ * `$cloudCred`: use `Get-Credential` (should be an Azure AD global administrator)
```powershell $servicePrincipalName = "CIFS/NETBIOS-1234.CONTOSO.COM"
The configuration process takes you through five processes:
Import-AzureADKerberosOnPremServicePrincipal -Domain $domain -DomainCredential $domainCred -CloudCredential $cloudCred -ServicePrincipalName $servicePrincipalName -ApplicationId $targetApplicationId ```
-### Configure the Azure AD joined VM to use Kerberos authentication
+### Configure the Azure AD-joined VM to use Kerberos authentication
-1. Log in to the Azure AD joined VM using hybrid credentials with administrative rights (for example: user@mydirectory.onmicrosoft.com).
+1. Log in to the Azure AD-joined VM using hybrid credentials with administrative rights (for example: user@mydirectory.onmicrosoft.com).
1. Configure the VM: 1. Navigate to **Edit group policy** > **Computer Configuration** > **Administrative Templates** > **System** > **Kerberos**. 1. Enable **Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon**.
The configuration process takes you through five processes:
### Mount the Azure NetApp Files SMB volumes
-1. Log into to the Azure AD joined VM using a hybrid identity account synced from AD DS.
+1. Log in to the Azure AD-joined VM using a hybrid identity account synced from AD DS.
2. Mount the Azure NetApp Files SMB volume using the info provided in the Azure portal. For more information, see [Mount SMB volumes for Windows VMs](mount-volumes-vms-smb.md). 3. Confirm the mounted volume is using Kerberos authentication and not NTLM authentication. Open a command prompt, issue the `klist` command; observe the output in the cloud TGT (krbtgt) and CIFS server ticket information.
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Azure NetApp Files customer-managed keys is supported for the following regions:
* UAE Central * UAE North * UK South
-* US Gov Virginia (public preview)
+* US Gov Virginia
* West Europe * West US * West US 2
azure-portal Azure Portal Add Remove Sort Favorites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-add-remove-sort-favorites.md
Title: Manage favorites in Azure portal
-description: Learn how to add or remove services from the favorites list.
Previously updated : 02/17/2022
+description: Learn how to add or remove services from the Favorites list.
Last updated : 09/27/2023 # Manage favorites
-Add or remove items from your **Favorites** list in the Azure portal so that you can quickly go to the services you use most often. We've already added some common services to your **Favorites** list, but you may want to customize it. You're the only one who sees the changes you make to **Favorites**.
+The **Favorites** list in the Azure portal lets you quickly go to the services you use most often. We've already added some common services to your **Favorites** list, but you may want to customize it by adding or removing items. You're the only one who sees the changes you make to **Favorites**.
+
+You can view your **Favorites** list in the Azure portal menu, or from the **Favorites** section within **All services**.
## Add a favorite service
-Items that are listed under **Favorites** are selected from **All services**. Hover over a service name to display information and resources related to the service. A filled star icon ![Filled star icon](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-graystar.png) next to the service name indicates that the item appears on the **Favorites** list. Select the star icon to add a service to the **Favorites** list.
+Items that are listed under **Favorites** are selected from **All services**. Within **All services**, you can hover over a service name to display information and resources related to the service. A filled star icon ![Filled star icon](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-graystar.png) next to the service name indicates that the item appears in the **Favorites** list. If the star icon isn't filled in for a service, select the star icon to add it to your **Favorites** list.
In this example, we'll add **Cost Management + Billing** to the **Favorites** list.
In this example, we'll add **Cost Management + Billing** to the **Favorites** li
:::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-new-all-services.png" alt-text="Screenshot showing All services in the Azure portal menu.":::
-1. Enter the word "cost" in the search field. Services that have "cost" in the title or that have "cost" as a keyword are shown.
+1. Enter the word "cost" in the **Filter services** field near the top of the **All services** page. Services that have "cost" in the title or that have "cost" as a keyword are shown.
:::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-find-service.png" alt-text="Screenshot showing a search in All services in the Azure portal.":::
In this example, we'll add **Cost Management + Billing** to the **Favorites** li
## Remove an item from Favorites
-You can now remove an item directly from the **Favorites** list.
+You can remove items directly from the **Favorites** list.
-1. In the **Favorites** section of the portal menu, hover over the name of the service you want to remove.
+1. In the **Favorites** section of the portal menu, or within the **Favorites** section of **All services**, hover over the name of the service you want to remove.
:::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-remove.png" alt-text="Screenshot showing how to remove a service from Favorites in the Azure portal.":::
azure-portal Azure Portal Dashboards Create Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards-create-programmatically.md
Title: Programmatically create Azure Dashboards description: Use a dashboard in the Azure portal as a template to programmatically create Azure Dashboards. Includes JSON reference. -+ Last updated 09/05/2023
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file
description: Describes the configuration file for your Bicep deployments Previously updated : 09/11/2023 Last updated : 09/27/2023 # Configure your Bicep environment
You can enable preview features by adding:
The preceding sample enables `userDefinedTypes` and `extensibility`. The available experimental features include: -- **assertions**: Should be enabled in tandem with `testFramework` experimental feature flag for expected functionality. Allows you to author boolean assertions using the `assert` keyword comparing the actual value of a parameter, variable, or resource name to an expected value. Assert statements can only be written directly within the Bicep file whose resources they reference.
+- **assertions**: Should be enabled in tandem with `testFramework` experimental feature flag for expected functionality. Allows you to author boolean assertions using the `assert` keyword comparing the actual value of a parameter, variable, or resource name to an expected value. Assert statements can only be written directly within the Bicep file whose resources they reference. For more information, see [Bicep Experimental Test Framework](https://github.com/Azure/bicep/issues/11967).
- **compileTimeImports**: Allows you to use symbols defined in another template. See [Import user-defined data types](./bicep-import.md#import-user-defined-data-types-preview). - **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider. See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md). - **sourceMapping**: Enables basic source mapping to map an error location returned in the ARM template layer back to the relevant location in the Bicep file. - **resourceTypedParamsAndOutputs**: Enables the type for a parameter or output to be of type resource to make it easier to pass resource references between modules. This feature is only partially implemented. See [Simplifying resource referencing](https://github.com/azure/bicep/issues/2245). - **symbolicNameCodegen**: Allows the ARM template layer to use a new schema to represent resources as an object dictionary rather than an array of objects. This feature improves the semantic equivalent of the Bicep and ARM templates, resulting in more reliable code generation. Enabling this feature has no effect on the Bicep layer's functionality.-- **testFramework**: Should be enabled in tandem with `assertions` experimental feature flag for expected functionality. Allows you to author client-side, offline unit-test test blocks that reference Bicep files and mock deployment parameters in a separate `test.bicep` file using the new `test` keyword. Test blocks can be run with the command *bicep test <filepath_to_file_with_test_blocks>* which runs all `assert` statements in the Bicep files referenced by the test blocks.
+- **testFramework**: Should be enabled in tandem with `assertions` experimental feature flag for expected functionality. Allows you to author client-side, offline unit-test test blocks that reference Bicep files and mock deployment parameters in a separate `test.bicep` file using the new `test` keyword. Test blocks can be run with the command *bicep test <filepath_to_file_with_test_blocks>* which runs all `assert` statements in the Bicep files referenced by the test blocks. For more information, see [Bicep Experimental Test Framework](https://github.com/Azure/bicep/issues/11967).
- **userDefinedFunctions**: Allows you to define your own custom functions. See [User-defined functions in Bicep](./user-defined-functions.md). ## Next steps
azure-resource-manager Delete Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md
Title: Delete resource group and resources description: Describes how to delete resource groups and resources. It describes how Azure Resource Manager orders the deletion of resources when a deleting a resource group. It describes the response codes and how Resource Manager handles them to determine if the deletion succeeded. Previously updated : 04/10/2023 Last updated : 09/27/2023 content_well_notification: - AI-contribution
To delete a resource group, you need access to the delete action for the **Micro
For a list of operations, see [Azure resource provider operations](../../role-based-access-control/resource-provider-operations.md). For a list of built-in roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
-If you have the required access, but the delete request fails, it may be because there's a [lock on the resources or resource group](lock-resources.md). Even if you didn't manually lock a resource group, it may have been [automatically locked by a related service](lock-resources.md#managed-applications-and-locks). Or, the deletion can fail if the resources are connected to resources in other resource groups that aren't being deleted. For example, you can't delete a virtual network with subnets that are still in use by a virtual machine.
+If you have the required access, but the delete request fails, it may be because there's a [lock on the resources or resource group](lock-resources.md). Even if you didn't manually lock a resource group, [a related service may have automatically locked it](lock-resources.md#managed-applications-and-locks). Or, the deletion can fail if the resources are connected to resources in other resource groups that aren't being deleted. For example, you can't delete a virtual network with subnets that are still in use by a virtual machine.
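For reference, a minimal sketch of issuing the delete from code, assuming the @azure/arm-resources and @azure/identity packages and placeholder subscription and resource group names:
```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { ResourceManagementClient } from "@azure/arm-resources";

// Placeholder values.
const subscriptionId = "<subscription-id>";
const resourceGroupName = "<resource-group-name>";

const client = new ResourceManagementClient(new DefaultAzureCredential(), subscriptionId);

async function main(): Promise<void> {
  // Deleting a resource group deletes every resource it contains.
  // The call fails if you lack the delete action or if a lock is in place.
  await client.resourceGroups.beginDeleteAndWait(resourceGroupName);
  console.log(`Deleted resource group ${resourceGroupName}`);
}

main().catch(console.error);
```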
-## Accidental deletion
+## Can I recover a deleted resource group?
-If you accidentally delete a resource group or resource, in some situations it might be possible to recover it.
+No, you can't recover a deleted resource group. However, you might be able to restore some recently deleted resources.
-Some resource types support *soft delete*. You might have to configure soft delete before you can use it. For more information about enabling soft delete, see the documentation for [Azure Key Vault](../../key-vault/general/soft-delete-overview.md), [Azure Backup](../../backup/backup-azure-delete-vault.md), and [Azure Storage](../../storage/blobs/soft-delete-container-overview.md).
+Some resource types support *soft delete*. You might have to configure soft delete before you can use it. For information about enabling soft delete, see:
-You can also [open an Azure support case](../../azure-portal/supportability/how-to-create-azure-support-request.md). Provide as much detail as you can about the deleted resources, including their resource IDs, types, and resource names, and request that the support engineer check if the resources can be restored.
+* [Azure Key Vault soft-delete overview](../../key-vault/general/soft-delete-overview.md)
+* [Azure Storage - Soft delete for containers](../../storage/blobs/soft-delete-container-overview.md)
+* [Azure Storage - Soft delete for blobs](../../storage/blobs/soft-delete-blob-overview.md)
+* [Soft delete for Azure Backup](../../backup/backup-azure-security-feature-cloud.md)
+* [Soft delete for SQL server in Azure VM and SAP HANA in Azure VM workloads](../../backup/soft-delete-sql-saphana-in-azure-vm.md)
+* [Soft delete for virtual machines](../../backup/soft-delete-virtual-machines.md)
+
+To restore deleted resources, see:
+
+* [Recover deleted Azure AI services resources](../../ai-services/manage-resources.md)
+* [Microsoft Entra - Recover from deletions](../../active-directory/architecture/recover-from-deletions.md)
+
+You can also [open an Azure support case](../../azure-portal/supportability/how-to-create-azure-support-request.md). Provide as much detail as you can about the deleted resources, including their resource IDs, types, and resource names. Request that the support engineer check if the resources can be restored.
> [!NOTE] > Recovery of deleted resources is not possible under all circumstances. A support engineer will investigate your scenario and advise you whether it's possible.
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md
Title: Azure Resource Manager overview description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure. Previously updated : 02/28/2023 Last updated : 09/27/2023 # What is Azure Resource Manager?
All capabilities that are available in the portal are also available through Pow
If you're new to Azure Resource Manager, there are some terms you might not be familiar with. * **resource** - A manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources. Resource groups, subscriptions, management groups, and tags are also examples of resources.
-* **resource group** - A container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. See [Resource groups](#resource-groups).
+* **resource group** - A container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. See [What is a resource group?](#resource-groups).
* **resource provider** - A service that supplies Azure resources. For example, a common resource provider is `Microsoft.Compute`, which supplies the virtual machine resource. `Microsoft.Storage` is another common resource provider. See [Resource providers and types](resource-providers-and-types.md). * **declarative syntax** - Syntax that lets you state "Here's what I intend to create" without having to write the sequence of programming commands to create it. ARM templates and Bicep files are examples of declarative syntax. In those files, you define the properties for the infrastructure to deploy to Azure. * **ARM template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, management group, or tenant. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../templates/overview.md).
For information about managing identities and access, see [Azure Active Director
You can deploy templates to tenants, management groups, subscriptions, or resource groups.
-## Resource groups
+## <a name="resource-groups"></a>What is a resource group?
+
+A resource group is a container that enables you to manage related resources for an Azure solution. By using the resource group, you can coordinate changes to the related resources. For example, you can deploy an update to the resource group and have confidence that the resources are updated in a coordinated operation. Or, when you're finished with the solution, you can delete the resource group and know that all of the resources are deleted.
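As a quick, hedged sketch (using the @azure/arm-resources package with placeholder names; the portal, Azure CLI, or templates work equally well), creating a resource group to act as that management container looks like this:
```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { ResourceManagementClient } from "@azure/arm-resources";

// Placeholder subscription ID.
const subscriptionId = "<subscription-id>";

const client = new ResourceManagementClient(new DefaultAzureCredential(), subscriptionId);

async function main(): Promise<void> {
  // Create (or update) a resource group that will hold the solution's related resources.
  const group = await client.resourceGroups.createOrUpdate("example-rg", {
    location: "eastus",
    tags: { environment: "demo" },
  });
  console.log(`Resource group ${group.name} in ${group.location}`);
}

main().catch(console.error);
```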
There are some important factors to consider when defining your resource group:
There are some important factors to consider when defining your resource group:
To ensure state consistency for the resource group, all [control plane operations](./control-plane-and-data-plane.md) are routed through the resource group's location. When selecting a resource group location, we recommend that you select a location close to where your control operations originate. Typically, this location is the one closest to your current location. This routing requirement only applies to control plane operations for the resource group. It doesn't affect requests that are sent to your applications.
- If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them.
+ If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions still function as expected, but you can't update them.
For more information about building reliable applications, see [Designing reliable Azure applications](/azure/architecture/checklist/resiliency-per-service).
There are some important factors to consider when defining your resource group:
The Azure Resource Manager service is designed for resiliency and continuous availability. Resource Manager and control plane operations (requests sent to `management.azure.com`) in the REST API are:
-* Distributed across regions. Azure Resource Manager has a separate instance in each region of Azure, meaning that a failure of the Azure Resource Manager instance in one region won't impact the availability of Azure Resource Manager or other Azure services in another region. Although Azure Resource Manager is distributed across regions, some services are regional. This distinction means that while the initial handling of the control plane operation is resilient, the request may be susceptible to regional outages when forwarded to the service.
+* Distributed across regions. Azure Resource Manager has a separate instance in each region of Azure, meaning that a failure of the Azure Resource Manager instance in one region doesn't affect the availability of Azure Resource Manager or other Azure services in another region. Although Azure Resource Manager is distributed across regions, some services are regional. This distinction means that while the initial handling of the control plane operation is resilient, the request may be susceptible to regional outages when forwarded to the service.
* Distributed across Availability Zones (and regions) in locations that have multiple Availability Zones. This distribution ensures that when a region loses one or more zones, Azure Resource Manager can either fail over to another zone or to another region to continue to provide control plane capability for the resources.
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
# Azure AI Video Indexer account types + This article gives an overview of Azure AI Video Indexer accounts types and provides links to other articles for more details. ## Trial account
azure-video-indexer Add Contributor Role On The Media Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/add-contributor-role-on-the-media-service.md
# Add contributor role to Media Services + This article describes how to assign contributor role on the Media Services account. > [!NOTE]
azure-video-indexer Audio Effects Detection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection-overview.md
-# Audio effects detection
+# Audio effects detection
+ Audio effects detection is an Azure AI Video Indexer feature that detects insights on various acoustic events and classifies them into acoustic categories. Audio effect detection can detect and classify different categories such as laughter, crowd reactions, alarms and/or sirens.
azure-video-indexer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection.md
# Enable audio effects detection (preview) + **Audio effects detection** is one of the Azure AI Video Indexer AI capabilities that detects various acoustic events and classifies them into different acoustic categories (such as dog barking, crowd reactions, laughter, and more). Some scenarios where this feature is useful:
azure-video-indexer Clapperboard Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/clapperboard-metadata.md
In the following example, the board contains the following fields:
#### View the insight + To see the instances on the website, select **Insights** and scroll to **Clapper boards**. You can hover over each clapper board, or unfold **Show/Hide clapper board info** and see the metadata: > [!div class="mx-imgBorder"]
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
-# Azure AI Video Indexer terminology & concepts
+# Azure AI Video Indexer terminology & concepts
+ This article gives a brief overview of Azure AI Video Indexer terminology and concepts. Also, review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
-# Connect an existing classic paid Azure AI Video Indexer account to ARM-based account
+# Connect an existing classic paid Azure AI Video Indexer account to ARM-based account
+ This article shows how to connect an existing classic paid Azure AI Video Indexer account to an Azure Resource Manager (ARM)-based (recommended) account. To create a new ARM-based account, see [create a new account](create-account-portal.md). To understand the Azure AI Video Indexer account types, review [account types](accounts-overview.md).
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
# Create a classic Azure AI Video Indexer account + [!INCLUDE [Gate notice](./includes/face-limited-access.md)] This topic shows how to create a new classic account connected to Azure using the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link). You can also create an Azure AI Video Indexer classic account through our [API](https://aka.ms/avam-dev-portal).
azure-video-indexer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/considerations-when-use-at-scale.md
# Things to consider when using Azure AI Video Indexer at scale + If you use Azure AI Video Indexer to index videos and your archive of videos is growing, consider scaling. This article answers questions like:
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
# Tutorial: create an ARM-based account with Azure portal + [!INCLUDE [Gate notice](./includes/face-limited-access.md)] To start using unlimited features and robust capabilities of Azure AI Video Indexer, you need to create an Azure AI Video Indexer unlimited account.
azure-video-indexer Customize Brands Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-overview.md
# Customize a Brands model in Azure AI Video Indexer + Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in a video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content. Brands are disambiguated from other terms using context. Brand detection is useful in a wide variety of business scenarios such as contents archive and discovery, contextual advertising, social media analysis, retail compete analysis, and many more. Azure AI Video Indexer brand detection enables you to index brand mentions in speech and visual text, using Bing's brands database as well as with customization by building a custom Brands model for each Azure AI Video Indexer account. The custom Brands model feature allows you to select whether or not Azure AI Video Indexer will detect brands from the Bing brands database, exclude certain brands from being detected (essentially creating a list of unapproved brands), and include brands that should be part of your model that might not be in Bing's brands database (essentially creating a list of approved brands). The custom Brands model that you create will only be available in the account in which you created the model.
azure-video-indexer Customize Brands Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-api.md
# Customize a Brands model with the Azure AI Video Indexer API + Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content. A custom Brands model allows you to exclude certain brands from being detected and include brands that should be part of your model that might not be in Bing's brands database. For more information, see [Overview](customize-brands-model-overview.md). > [!NOTE]
azure-video-indexer Customize Brands Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-brands-model-with-website.md
# Customize a Brands model with the Azure AI Video Indexer website + Azure AI Video Indexer supports brand detection from speech and visual text during indexing and reindexing of video and audio content. The brand detection feature identifies mentions of products, services, and companies suggested by Bing's brands database. For example, if Microsoft is mentioned in video or audio content or if it shows up in visual text in a video, Azure AI Video Indexer detects it as a brand in the content. A custom Brands model allows you to:
azure-video-indexer Customize Content Models Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-content-models-overview.md
# Customizing content models in Azure AI Video Indexer + [!INCLUDE [Gate notice](./includes/face-limited-access.md)] Azure AI Video Indexer allows you to customize some of its models to be adapted to your specific use case. These models include [brands](customize-brands-model-overview.md), [language](customize-language-model-overview.md), and [person](customize-person-model-overview.md). You can easily customize these models using the Azure AI Video Indexer website or API.
azure-video-indexer Customize Language Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-overview.md
-# Customize a Language model with Azure AI Video Indexer
+# Customize a Language model with Azure AI Video Indexer
+ Azure AI Video Indexer supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
azure-video-indexer Customize Language Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-api.md
# Customize a Language model with the Azure AI Video Indexer API + Azure AI Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized. For a detailed overview and best practices for custom Language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
azure-video-indexer Customize Language Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-language-model-with-website.md
# Customize a Language model with the Azure AI Video Indexer website + Azure AI Video Indexer lets you create custom Language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized. For a detailed overview and best practices for custom language models, see [Customize a Language model with Azure AI Video Indexer](customize-language-model-overview.md).
azure-video-indexer Customize Person Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-overview.md
# Customize a Person model in Azure AI Video Indexer + [!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-Azure AI Video Indexer supports celebrity recognition in your videos. The celebrity recognition feature covers approximately one million faces based on commonly requested data source such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that are not recognized by Azure AI Video Indexer are still detected but are left unnamed. Customers can build custom Person models and enable Azure AI Video Indexer to recognize faces that are not recognized by default. Customers can build these Person models by pairing a person's name with image files of the person's face.
+Azure AI Video Indexer supports celebrity recognition in your videos. The celebrity recognition feature covers approximately one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by Azure AI Video Indexer are still detected but are left unnamed. Customers can build custom Person models and enable Azure AI Video Indexer to recognize faces that aren't recognized by default. Customers can build these Person models by pairing a person's name with image files of the person's face.
If your account caters to different use-cases, you can benefit from being able to create multiple Person models per account. For example, if the content in your account is meant to be sorted into different channels, you might want to create a separate Person model for each channel. > [!NOTE] > Each Person model supports up to 1 million people and each account has a limit of 50 Person models.
-Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video, updates the specific custom model that the video was associated with.
+Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with.
-If you do not need the multiple Person model support, do not assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure AI Video Indexer will use the default Person model in your account.
+If you don't need the multiple Person model support, don't assign a Person model ID to your video when uploading/indexing or reindexing. In this case, Azure AI Video Indexer will use the default Person model in your account.
-You can use the Azure AI Video Indexer website to edit faces that were detected in a video and to manage multiple custom Person models in your account, as described in the [Customize a Person model using a website](customize-person-model-with-website.md) topic. You can also use the API, as described inΓÇ»[Customize a Person model using APIs](customize-person-model-with-api.md).
+You can use the Azure AI Video Indexer website to edit faces that were detected in a video and to manage multiple custom Person models in your account, as described in the [Customize a Person model using a website](customize-person-model-with-website.md) article. You can also use the API, as described in [Customize a Person model using APIs](customize-person-model-with-api.md).
azure-video-indexer Customize Person Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-api.md
# Customize a Person model with the Azure AI Video Indexer API + [!INCLUDE [Gate notice](./includes/face-limited-access.md)] Azure AI Video Indexer supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. After you upload your video to Azure AI Video Indexer and get results back, you can go back and name the faces that weren't recognized. Once you label a face with a name, the face and name get added to your account's Person model. Azure AI Video Indexer will then recognize this face in your future videos and past videos.
azure-video-indexer Customize Person Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-website.md
# Customize a Person model with the Azure AI Video Indexer website + [!INCLUDE [Gate notice](./includes/face-limited-access.md)] Azure AI Video Indexer supports celebrity recognition for video content. The celebrity recognition feature covers approximately one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. For a detailed overview, see [Customize a Person model in Azure AI Video Indexer](customize-person-model-overview.md).
azure-video-indexer Customize Speech Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-overview.md
# Customize a speech model + [!INCLUDE [speech model](./includes/speech-model.md)] Through Azure AI Video Indexer integration with [Azure AI Speech services](../ai-services/speech-service/captioning-concepts.md), a Universal Language Model is utilized as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pretrained with dialects and phonetics representing various common domains. The base model works well in most speech recognition scenarios.
azure-video-indexer Customize Speech Model With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-api.md
# Customize a speech model with the API + [!INCLUDE [speech model](./includes/speech-model.md)] Azure AI Video Indexer lets you create custom language models to customize speech recognition by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to or aligning word or name pronunciation with how it should be written.
azure-video-indexer Customize Speech Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-speech-model-with-website.md
# Customize a speech model in the website + [!INCLUDE [speech model](./includes/speech-model.md)] Azure AI Video Indexer lets you create custom speech models to customize speech recognition by uploading datasets that are used to create a speech model. This article goes through the steps to do so through the Video Indexer website. You can also use the API, as described in [Customize speech model using API](customize-speech-model-with-api.md).
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
# Tutorial: Deploy Azure AI Video Indexer by using an ARM template + [!INCLUDE [Gate notice](./includes/face-limited-access.md)] In this tutorial, you'll create an Azure AI Video Indexer account by using the Azure Resource Manager template (ARM template, which is in preview). The resource will be deployed to your subscription and will create the Azure AI Video Indexer resource based on parameters defined in the *avam.template* file.
azure-video-indexer Deploy With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-bicep.md
# Tutorial: deploy Azure AI Video Indexer by using Bicep + In this tutorial, you create an Azure AI Video Indexer account by using [Bicep](../azure-resource-manager/bicep/overview.md). > [!NOTE]
azure-video-indexer Detect Textual Logo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detect-textual-logo.md
-# How to detect textual logo (preview)
+# How to detect textual logo
++ > [!NOTE] > Textual logo detection (preview) creation process is currently available through API. The result can be viewed through the Azure AI Video Indexer [website](https://www.videoindexer.ai/).
azure-video-indexer Detected Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/detected-clothing.md
-# Enable detected clothing feature (preview)
+# Enable detected clothing feature
+ Azure AI Video Indexer detects clothing associated with the person wearing it in the video and provides information such as the type of clothing detected and the timestamp of the appearance (start, end). The API returns the detection confidence level. The clothing types that are detected are long pants, short pants, long sleeves, short sleeves, and skirt or dress.
azure-video-indexer Digital Patterns Color Bars https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/digital-patterns-color-bars.md
-# Enable and view digital patterns with color bars (preview)
+# Enable and view digital patterns with color bars
+ This article shows how to enable and view digital patterns with color bars (preview).
azure-video-indexer Edit Speakers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-speakers.md
# Edit speakers with the Azure AI Video Indexer website + Azure AI Video Indexer identifies each speaker in a video and attributes each transcribed line to a speaker. The speakers are given a unique identity such as `Speaker #1` and `Speaker #2`. To provide clarity and enrich the transcript quality, you may want to replace the assigned identity with each speaker's actual name. To edit speakers' names, use the edit actions as described in the article. The article demonstrates how to edit speakers with the [Azure AI Video Indexer website](https://www.videoindexer.ai/). The same editing operations are possible with an API. To use API, call [update video index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Video-Index).
azure-video-indexer Edit Transcript Lines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-transcript-lines-portal.md
# View and update transcriptions + This article explains how to insert or remove a transcript line in the Azure AI Video Indexer website. It also shows how to view word-level information. ## Insert or remove transcript lines in the Azure AI Video Indexer website
azure-video-indexer Emotions Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/emotions-detection.md
# Text-based emotion detection + Emotions detection is an Azure AI Video Indexer AI feature that automatically detects emotions in a video's transcript lines. Each sentence can either be detected as: - *Anger*,
azure-video-indexer Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-detection.md
-# Face detection
+# Face detection
+ Face detection, a feature of Azure AI Video Indexer, automatically detects faces in a media file, and then aggregates instances of similar faces into groups. The celebrities recognition model then runs to recognize celebrities.
azure-video-indexer Face Redaction With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-redaction-with-api.md
# Redact faces by using Azure AI Video Indexer API + You can use Azure AI Video Indexer to detect and identify faces in video. To modify your video to blur (redact) faces of specific individuals, you can use the API. A few minutes of footage that contains multiple faces can take hours to redact manually, but by using presets in the Video Indexer API, the face redaction process requires just a few simple steps.
azure-video-indexer Import Content From Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/import-content-from-trial.md
# Import content from your trial account to a regular account + If you would like to transition from the Video Indexer trial account experience to that of a regular paid account, Video Indexer allows you, at no cost, to import the content in your trial account to your new regular account. When might you want to switch from a trial to a regular account?
azure-video-indexer Indexing Configuration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md
# The indexing configuration guide + It's important to understand the configuration options to index efficiently while ensuring you meet your indexing objectives. When indexing videos, users can use the default settings or adjust many of the settings. Azure AI Video Indexer allows you to choose between a range of language, indexing, custom models, and streaming settings that have implications on the insights generated, cost, and performance. This article explains each of the options and the impact of each option to enable informed decisions when indexing. The article discusses the [Azure AI Video Indexer website](https://www.videoindexer.ai/) experience but the same options apply when submitting jobs through the API (see the [API guide](video-indexer-use-apis.md)). When indexing large volumes, follow the [at-scale guide](considerations-when-use-at-scale.md).
azure-video-indexer Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/insights-overview.md
# Azure AI Video Indexer insights + When a video is indexed, Azure AI Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Insights contain an aggregated view of the data: transcripts, optical character recognition elements (OCRs), face, topics, emotions, etc. Once the video is indexed and analyzed, Azure AI Video Indexer produces a JSON content that contains details of the video insights. For example, each insight type includes instances of time ranges that show when the insight appears in the video. Read details about the following insights here:
azure-video-indexer Keywords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/keywords.md
-# Keywords extraction
+# Keywords extraction
+ Keywords extraction is an Azure AI Video Indexer AI feature that automatically detects insights on the different keywords discussed in media files. Keywords extraction can extract insights in both single language and multi-language media files. The total number of extracted keywords and their categories are listed in the Insights tab, where clicking a Keyword and then clicking Play Previous or Play Next jumps to the keyword in the media file.
azure-video-indexer Labels Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/labels-identification.md
-# Labels identification
+# Labels identification
+ Labels identification is an Azure AI Video Indexer AI feature that identifies visual objects (like sunglasses) or actions (like swimming) appearing in the video footage of a media file. There are many labels identification categories and, once extracted, labels identification instances are displayed in the Insights tab and can be translated into over 50 languages. Clicking a Label opens the instance in the media file; select Play Previous or Play Next to see more instances.
azure-video-indexer Language Identification Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-identification-model.md
# Automatically identify the spoken language with language identification model + Azure AI Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language from audio content. The media file is transcribed in the dominant identified language. See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md).
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
# Language support in Azure AI Video Indexer + This article explains Video Indexer's language options and provides a list of language support for each one. It includes the language support for Video Indexer features, translation, language identification, customization, and the language settings of the Video Indexer website. ## Supported languages per scenario
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
-# Limited Access features of Azure AI Video Indexer
+# Limited Access features of Azure AI Video Indexer
+ [!INCLUDE [Gate notice](../ai-services/computer-vision/includes/identity-gate-notice.md)]
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
# Logic Apps connector with ARM-based AVI accounts + Azure AI Video Indexer (AVI) [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) supports both server-to-server and client-to-server communication. The API enables you to integrate video and audio insights into your application logic. > [!TIP]
azure-video-indexer Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md
# Use Azure AI Video Indexer with Logic App and Power Automate + Azure AI Video Indexer [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) supports both server-to-server and client-to-server communication and enables Azure AI Video Indexer users to integrate video and audio insights easily into their application logic, unlocking new experiences and monetization opportunities. To make the integration even easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://make.powerautomate.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with our API. You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for your integration gives you better visibility on the health of your workflow and an easy way to debug it. 
azure-video-indexer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-account-connected-to-azure.md
# Repair the connection to Azure, examine errors/warnings ++ This article demonstrates how to manage an Azure AI Video Indexer account that's connected to your Azure subscription and an Azure Media Services account. > [!NOTE]
azure-video-indexer Manage Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-multiple-tenants.md
# Manage multiple tenants + This article discusses different options for managing multiple tenants with Azure AI Video Indexer. Choose a method that is most suitable for your scenario: * Azure AI Video Indexer account per tenant
azure-video-indexer Matched Person https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/matched-person.md
-# Enable the matched person insight (preview)
+# Enable the matched person insight
+ [!INCLUDE [Gate notice](./includes/face-limited-access.md)]
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
Last updated 04/17/2023
# Monitor Azure AI Video Indexer data reference + See [Monitoring Azure AI Video Indexer](monitor-video-indexer.md) for details on collecting and analyzing monitoring data for Azure AI Video Indexer. ## Metrics
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
- # Monitoring Azure AI Video Indexer
When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure AI Video Indexer. Azure AI Video Indexer uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
- > [!NOTE] > The monitoring feature is not available for trial and classic accounts. To update to an ARM account, see [Connect a classic account to ARM](connect-classic-account-to-arm.md) or [Import content from a trial account](import-content-from-trial.md). ## Monitoring data
Azure AI Video Indexer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources). See [Monitoring *Azure AI Video Indexer* data reference](monitor-video-indexer-data-reference.md) for detailed information on the metrics and logs metrics created by Azure AI Video Indexer.
- ## Collection and routing
-<!-- Platform metrics and the -->Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+Activity logs are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
- See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure AI Video Indexer* are listed in [Azure AI Video Indexer monitoring data reference](monitor-video-indexer-data-reference.md#resource-logs). | Category | Description |
See [Create diagnostic setting to collect platform logs and metrics in Azure](/a
:::image type="content" source="./media/monitor/toc-diagnostics-save.png" alt-text="Screenshot of diagnostic settings." lightbox="./media/monitor/toc-diagnostics-save.png"::: :::image type="content" source="./media/monitor/diagnostics-settings-destination.png" alt-text="Screenshot of where to send logs." lightbox="./media/monitor/diagnostics-settings-destination.png":::
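If you prefer scripting over the portal, the same diagnostic setting can be created from the Azure CLI. The following is a minimal sketch: the resource IDs and names are placeholders, the `Microsoft.VideoIndexer/accounts` resource type applies to ARM-based accounts, and the exact log categories available for your account are listed in the data reference linked above (if your CLI version doesn't accept `categoryGroup`, list the individual categories instead).

```bash
# Route all Video Indexer resource logs to a Log Analytics workspace.
# Placeholders: <subscription-id>, <resource-group>, <account-name>, <workspace-name>.
az monitor diagnostic-settings create \
  --name "vi-diagnostics" \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.VideoIndexer/accounts/<account-name>" \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
```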
The metrics and logs you can collect are discussed in the following sections. ## Analyzing metrics Currently Azure AI Video Indexer does not support monitoring of metrics.
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
-
## Analyzing logs
- Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md) The schema for Azure AI Video Indexer resource logs is found in the [Azure AI Video Indexer Data Reference](monitor-video-indexer-data-reference.md#schemas)
The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of pla
For a list of the types of resource logs collected for Azure AI Video Indexer, see [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md#resource-logs)
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md#azure-monitor-logs-tables)
### Sample Kusto queries
+#### Audit related sample queries
> [!IMPORTANT] > When you select **Logs** from the Azure AI Video Indexer account menu, Log Analytics is opened with the query scope set to the current Azure AI Video Indexer account. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Azure AI Video Indexer account or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+The following queries can help you monitor your Azure AI Video Indexer account.
```kusto // Project failures summarized by operationName and Upn, aggregated in 30m windows.
VIIndexing
## Alerts
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
- The following table lists common and recommended alert rules for Azure AI Video Indexer.
| Alert type | Condition | Description |
|:--|:--|:--|
| Log alert | Failed operation | Send an alert when an upload fails |
VIAudit
## Next steps
- - See [Monitoring Azure AI Video Indexer data reference](monitor-video-indexer-data-reference.md) for a reference of the metrics, logs, and other important values created by Azure AI Video Indexer account. - See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Multi Language Identification Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/multi-language-identification-transcription.md
# Automatically identify and transcribe multi-language content + Azure AI Video Indexer supports automatic language identification and transcription in multi-language content. This process involves automatically identifying the spoken language in different segments of the audio, sending each segment of the media file to be transcribed, and combining the transcriptions back into one unified transcription. ## Choosing multilingual identification on indexing with portal
azure-video-indexer Named Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/named-entities.md
-# Named entities extraction
+# Named entities extraction
+ Named entities extraction is an Azure AI Video Indexer AI feature that uses Natural Language Processing (NLP) to extract insights on the locations, people, and brands appearing in audio and images in media files. Named entities extraction is automatically used with Transcription and OCR, and its insights are based on those extracted during these processes. The resulting insights are displayed in the **Insights** tab and are filtered into locations, people, and brand categories. Clicking a named entity displays its instances in the media file. It also displays a description of the entity and a Find on Bing link for recognizable entities.
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
# NSG service tags for Azure AI Video Indexer + Azure AI Video Indexer is a service hosted on Azure. In some cases, the service needs to interact with other services in order to index video files (for example, a Storage account), or you might orchestrate indexing jobs against the Azure AI Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions). > [!NOTE]
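As a rough illustration of the networking side, the following Azure CLI sketch adds an outbound NSG rule that uses the `VideoIndexer` service tag discussed in this article; the resource group, NSG name, priority, and port are placeholders to adjust for your environment.

```bash
# Allow outbound HTTPS traffic to the Video Indexer service tag.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name AllowVideoIndexerOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes VideoIndexer \
  --destination-port-ranges 443
```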
azure-video-indexer Observed Matched People https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-matched-people.md
# Observed people tracking and matched faces + > [!IMPORTANT] > Face identification, customization and celebrity recognition features access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face identification, customization and celebrity recognition features are only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access.
azure-video-indexer Observed People Featured Clothing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-featured-clothing.md
-# Enable featured clothing of an observed person (preview)
+# Enable featured clothing of an observed person
+ When indexing a video using Azure AI Video Indexer advanced video settings, you can view the featured clothing of an observed person. The insight provides moments within the video where key people are prominently featured and clearly visible, including the coordinates of the people, timestamp, and the frame of the shot. This insight allows high-quality in-video contextual advertising, where relevant clothing ads are matched with the specific time within the video in which they're viewed.
azure-video-indexer Observed People Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-people-tracking.md
-# Track observed people in a video (preview)
+# Track observed people in a video
+ Azure AI Video Indexer detects observed people in videos and provides information such as the location of the person in the video frame and the exact timestamp (start, end) when a person appears. The API returns the bounding box coordinates (in pixels) for each person instance detected, including detection confidence.
azure-video-indexer Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/ocr.md
-# Optical character recognition (OCR)
+# Optical character recognition (OCR)
+ Optical character recognition (OCR) is an Azure AI Video Indexer AI feature that extracts text from images like pictures, street signs and products in media files to create insights.
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
# Index your videos stored on OneDrive + This article shows how to index videos stored on OneDrive by using the Azure AI Video Indexer website. ## Supported file formats
azure-video-indexer Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/regions.md
# Azure regions in which Azure AI Video Indexer exists + Azure AI Video Indexer APIs contain a **location** parameter that you should set to the Azure region to which the call should be routed. This must be an [Azure region in which Azure AI Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all). ## Locations
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure AI Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure AI Video Indexer. Previously updated : 07/03/2023 Last updated : 09/27/2023
To stay up-to-date with the most recent Azure AI Video Indexer developments, thi
* Bug fixes * Deprecated functionality
+## September 2023
+
+### Changes related to AMS retirement
+As a result of the June 30th 2024 [retirement of Azure Media Services (AMS)](/azure/media-services/latest/azure-media-services-retirement), Video Indexer has announced a number of related retirements. They include the June 30th 2024 retirement of Video Indexer Classic accounts, API changes, and no longer supporting adaptive bitrate. For full details, see [Changes related to Azure Media Services (AMS) retirement](https://aka.ms/vi-ams-related-changes).
+ ## July 2023 ### Redact faces with Azure Video Indexer API
azure-video-indexer Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/resource-health.md
-# Diagnose Video Indexer resource issues with Azure Resource Health
+# Diagnose Video Indexer resource issues with Azure Resource Health
+ [Azure Resource Health](../service-health/resource-health-overview.md) can help you diagnose and get support for service problems that affect your Azure AI Video Indexer resources. Resource health is updated every 1-2 minutes and reports the current and past health of your resources. For additional details on how health is assessed, review the [full list of resource types and health checks](../service-health/resource-health-checks-resource-types.md#microsoftnetworkapplicationgateways) in Azure Resource Health.
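Besides the portal experience, the current availability status can also be read programmatically through the Resource Health REST API. The following `az rest` call is only a sketch: the resource ID is a placeholder and the `api-version` is an assumption, so confirm both against the Resource Health REST reference.

```bash
# Read the current availability status of a Video Indexer account.
# The api-version value here is an assumption; check the Resource Health REST reference.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.VideoIndexer/accounts/<account-name>/providers/Microsoft.ResourceHealth/availabilityStatuses/current?api-version=2020-05-01"
```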
azure-video-indexer Restricted Viewer Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/restricted-viewer-role.md
-# Manage access to an Azure AI Video Indexer account
+# Manage access to an Azure AI Video Indexer account
+ In this article, you'll learn how to manage access (authorization) to an Azure AI Video Indexer account. As Azure AI Video Indexer's role management differs depending on the Video Indexer account type, this document will first cover access management of regular accounts (ARM-based) and then of Classic and Trial accounts.
azure-video-indexer Scenes Shots Keyframes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/scenes-shots-keyframes.md
# Scenes, shots, and keyframes + Azure AI Video Indexer supports segmenting videos into temporal units based on structural and semantic properties. This capability enables customers to easily browse, manage, and edit their video content based on varying granularities. For example, based on scenes, shots, and keyframes, described in this topic. ![Scenes, shots, and keyframes](./media/scenes-shots-keyframes/scenes-shots-keyframes.png)
azure-video-indexer Slate Detection Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/slate-detection-insight.md
-# The slate detection insights (preview)
+# The slate detection insights
+ The following slate detection insights are automatically identified when indexing a video using the advanced indexing option. These insights are most useful to customers involved in the movie post-production process.
azure-video-indexer Storage Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/storage-behind-firewall.md
# Configure Video Indexer to work with storage accounts behind firewall + When you create a Video Indexer account, you must associate it with a Media Services and Storage account. Video Indexer can access Media Services and Storage using system authentication or Managed Identity authentication. Video Indexer validates that the user adding the association has access to the Media Services and Storage account with Azure Resource Manager Role Based Access Control (RBAC). If you want to use a firewall to secure your storage account and enable trusted storage, [Managed Identities](/azure/media-services/latest/concept-managed-identities) authentication that allows Video Indexer access through the firewall is the preferred option. It allows Video Indexer and Media Services to access the storage account that has been configured without needing public access for [trusted storage access.](../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-to-trusted-azure-services)
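For the storage firewall side of this setup, a minimal Azure CLI sketch follows (the resource group and account names are placeholders). It denies public network access while keeping the exception for trusted Azure services; granting the managed identity access is covered in the steps of this article.

```bash
# Deny public network access, but allow trusted Azure services to reach the storage account.
az storage account update \
  --resource-group <resource-group> \
  --name <storage-account-name> \
  --default-action Deny \
  --bypass AzureServices
```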
azure-video-indexer Switch Tenants Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/switch-tenants-portal.md
# Switch between multiple tenants + When working with multiple tenants/directories in the Azure environment, a user might need to switch between the different directories. When signing in to the Azure AI Video Indexer website, a default directory loads along with its relevant accounts, which are listed in the **Account list**.
azure-video-indexer Textless Slate Scene Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/textless-slate-scene-matching.md
-# Enable and view a textless slate with matching scene (preview)
+# Enable and view a textless slate with matching scene
+ This article shows how to enable and view a textless slate with matching scene (preview).
azure-video-indexer Topics Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/topics-inference.md
-# Topics inference
+# Topics inference
+ Topics inference is an Azure AI Video Indexer AI feature that automatically creates inferred insights derived from the transcribed audio, OCR content in visual text, and celebrities recognized in the video using the Video Indexer facial recognition model. The extracted Topics and categories (when available) are listed in the Insights tab. To jump to the topic in the media file, click a Topic and then select Play Previous or Play Next.
azure-video-indexer Transcription Translation Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/transcription-translation-lid.md
-# Media transcription, translation and language identification
+# Media transcription, translation and language identification
+ Azure AI Video Indexer transcription, translation and language identification automatically detects, transcribes, and translates the speech in media files into over 50 languages.
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
# Upload media files using the Video Indexer website + You can upload media files from your file system or from a URL. You can also configure basic or advanced settings for indexing, such as privacy, streaming quality, language, presets, people and brands models, custom logos and metadata. This article shows how to upload and index media files (audio or video) using the [Azure AI Video Indexer website](https://aka.ms/vi-portal-link).
azure-video-indexer Use Editor Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/use-editor-create-project.md
# Add video clips to your projects + The [Azure AI Video Indexer](https://www.videoindexer.ai/) website enables you to use your video's deep insights to: find the right media content, locate the parts that you're interested in, and use the results to create an entirely new project. Once created, the project can be rendered and downloaded from Azure AI Video Indexer and be used in your own editing applications or downstream workflows.
azure-video-indexer Video Indexer Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-disaster-recovery.md
Last updated 07/29/2019
+ # Azure AI Video Indexer failover and disaster recovery + Azure AI Video Indexer doesn't provide instant failover of the service if there's a regional datacenter outage or failure. This article explains how to configure your environment for a failover to ensure optimal availability for apps and minimized recovery time if a disaster occurs. We recommend that you configure business continuity disaster recovery (BCDR) across regional pairs to benefit from Azure's isolation and availability policies. For more information, see [Azure paired regions](../availability-zones/cross-region-replication-azure.md).
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
# Embed Azure AI Video Indexer widgets in your apps + This article shows how you can embed Azure AI Video Indexer widgets in your apps. Azure AI Video Indexer supports embedding three types of widgets into your apps: *Cognitive Insights*, *Player*, and *Editor*. Starting with version 2, the widget base URL includes the region of the specified account. For example, an account in the West US region generates: `https://www.videoindexer.ai/embed/insights/.../?location=westus2`.
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
# Quickstart: How to sign up and upload your first video + [!INCLUDE [Gate notice](./includes/face-limited-access.md)] You can access Azure AI Video Indexer capabilities in three ways:
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
# Examine the Azure AI Video Indexer output + When a video is indexed, Azure AI Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video. For information, see [Azure AI Video Indexer insights](insights-overview.md).
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
-# What is Azure AI Video Indexer?
+# Azure AI Video Indexer overview
-> [!IMPORTANT]
-> Following [Azure Media Services retirement announcement](https://aka.ms/ams-retirement), Azure Video Indexer makes the following announcements: [June release notes](release-notes.md#june-2023).
->
-> Also checkout related [AMS deprecation FAQ](ams-deprecation-faq.yml).
-- Azure AI Video Indexer is a cloud application, part of Azure AI services, built on Azure Media Services and Azure AI services (such as Face, Translator, Azure AI Vision, and Speech). It enables you to extract insights from your videos using Azure AI Video Indexer video and audio models.
Learn how to [get started with Azure AI Video Indexer](video-indexer-get-started
Once you set up, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
-## Compliance, Privacy and Security
+## Compliance, privacy and security
++ As an important reminder, you must comply with all applicable laws in your use of Azure AI Video Indexer, and you may not use Azure AI Video Indexer or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
azure-video-indexer Video Indexer Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-search.md
# Search for exact moments in videos with Azure AI Video Indexer + This topic shows you how to use the Azure AI Video Indexer website to search for exact moments in videos. 1. Go to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
# Tutorial: Use the Azure AI Video Indexer API + Azure AI Video Indexer consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft into one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platforms. You can use the API to upload your files, get detailed video insights, get URLs of embeddable insight and player widgets, and more. [!INCLUDE [accounts](./includes/create-accounts-intro.md)]
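As a rough sketch of the request flow (not an authoritative reference; confirm the routes and parameters in the API portal), uploading a video by URL and later fetching its index look roughly like the following, where the location, account ID, access token, video ID, and video URL are placeholders:

```bash
# Upload a video by URL (the video URL must be URL-encoded).
curl -X POST "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos?name=my-video&videoUrl=<encoded-video-url>&accessToken=<access-token>"

# Retrieve the insights (index) once processing completes.
curl "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos/<video-id>/Index?accessToken=<access-token>"
```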
azure-video-indexer Video Indexer View Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-view-edit.md
# View Azure AI Video Indexer insights + This article shows you how to view the Azure AI Video Indexer insights of a video. 1. Browse to the [Azure AI Video Indexer](https://www.videoindexer.ai/) website and sign in.
azure-video-indexer View Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/view-closed-captions.md
# View closed captions in the Azure AI Video Indexer website + This article shows how to view closed captions in the [Azure AI Video Indexer video player](https://www.videoindexer.ai). ## View closed captions
azure-vmware Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-customer-managed-keys.md
Before you begin to enable customer-managed key (CMK) functionality, ensure the
1. Navigate to **Key vaults** and locate the key vault you want to use. 1. From the left navigation, under **Settings**, select **Access policies**. 1. In **Access policies**, select **Add Access Policy**.
- 1. From the Key Permissions drop-down, check: **Select all**, **Get**, **List**, **Wrap Key**, and **Unwrap Key**.
+ 1. From the Key Permissions drop-down, check: **Select**, **Get**, **Wrap Key**, and **Unwrap Key**.
1. Under Select principal, select **None selected**. A new **Principal** window with a search box will open. 1. In the search box, paste the **Object ID** from the previous step, or search the private cloud name you want to use. Choose **Select** when you're done. 1. Select **ADD**.
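The same access policy can also be granted from the Azure CLI. This is a minimal sketch in which the vault name and object ID are placeholders; the object ID is the identity you copied in the previous step.

```bash
# Grant the identity the key permissions required for customer-managed keys.
az keyvault set-policy \
  --name <vault-name> \
  --object-id <object-id> \
  --key-permissions get wrapKey unwrapKey
```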
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Last updated 06/16/2022 -+ # Azure Chaos Studio Preview fault and action library
Currently, only virtual machine scale sets configured with the **Uniform** orche
} ] }
-```
+```
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
# Call Recording overview > [!NOTE]
-> Call Recording is not enabled for [Teams interoperability](../teams-interop.md).
+> Call Recording for [Teams interoperability](../call-automation/call-automation-teams-interop.md) is now in Public Preview.
Call Recording enables you to record multiple calling scenarios available in Azure Communication Services by providing you with a set of APIs to start, stop, pause and resume recording. Whether it's a PSTN, WebRTC, or SIP call, these APIs can be accessed from your server-side business logic. Also, recordings can be triggered by a user action that tells the server application to start recording.
Many countries/regions and states have laws and regulations that apply to call r
Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the Azure Communication Services User Identity in the `participants` array with your internal user identities to identify participants in a call. ## Next steps
-For more information, see the following articles:
-- Learn more about Call recording, check out the [Call Recording Quickstart](../../quickstarts/voice-video-calling/get-started-call-recording.md).
+> [!div class="nextstepaction"]
+> [Get started with Call Recording](../../quickstarts/voice-video-calling/get-started-call-recording.md).
+
+Here are some articles of interest to you:
+ - Learn more about call recording [Insights](../analytics/insights/call-recording-insights.md) and [Logs](../analytics/logs/recording-logs.md) - Learn more about [Call Automation](../../quickstarts/call-automation/callflows-for-customer-interactions.md). - Learn more about [Video Calling](../../quickstarts/voice-video-calling/get-started-with-video-calling.md).
communication-services Number Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/number-lookup.md
+ Last updated 08/10/2023
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/find-request-unit-charge.md
Headers returned by the Gremlin API are mapped to custom status attributes, whic
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. [Create a new Azure Cosmos account](quickstart-console.md#create-a-database-account) and feed it with data, or select an existing account that already contains data.
+1. [Create a new Azure Cosmos account](quickstart-console.md) and feed it with data, or select an existing account that already contains data.
1. Go to the **Data Explorer** pane, and then select the container you want to work on.
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/how-to-create-container.md
This article explains the different ways to create a container in Azure Cosmos D
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. [Create a new Azure Cosmos DB account](quickstart-dotnet.md#create-a-database-account), or select an existing account.
+1. [Create a new Azure Cosmos DB account](quickstart-dotnet.md), or select an existing account.
1. Open the **Data Explorer** pane, and select **New Graph**. Next, provide the following details:
cosmos-db Quickstart Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-console.md
Title: 'Query with Azure Cosmos DB for Gremlin using TinkerPop Gremlin Console: Tutorial'
-description: An Azure Cosmos DB quickstart to creates vertices, edges, and queries using the Azure Cosmos DB for Gremlin.
+ Title: 'Quickstart: Traverse vertices & edges with the console'
+
+description: In this quickstart, connect to an Azure Cosmos DB for Apache Gremlin account using the console. Then, create vertices, create edges, and traverse them.
+++ Previously updated : 07/10/2020--- Last updated : 09/27/2023
+# CustomerIntent: As a developer, I want to use the Gremlin console so that I can manually create and traverse vertices and edges.
-# Quickstart: Create, query, and traverse an Azure Cosmos DB graph database using the Gremlin console
-> [!div class="op_single_selector"]
-> * [Gremlin console](quickstart-console.md)
-> * [.NET](quickstart-dotnet.md)
-> * [Java](quickstart-java.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Python](quickstart-python.md)
-> * [PHP](quickstart-php.md)
->
+# Quickstart: Traverse vertices and edges with the Gremlin console and Azure Cosmos DB for Apache Gremlin
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
-This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](introduction.md) account, database, and graph (container) using the Azure portal and then use the [Gremlin Console](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) from [Apache TinkerPop](https://tinkerpop.apache.org) to work with Gremlin API data. In this tutorial, you create and query vertices and edges, updating a vertex property, query vertices, traverse the graph, and drop a vertex.
+Azure Cosmos DB for Apache Gremlin is a fully managed graph database service implementing the popular [`Apache Tinkerpop`](https://tinkerpop.apache.org/), a graph computing framework using the Gremlin query language. The API for Gremlin gives you a low-friction way to get started using Gremlin with a service that can grow and scale out as much as you need with minimal management.
-The Gremlin console is Groovy/Java based and runs on Linux, Mac, and Windows. You can download it from the [Apache TinkerPop site](https://tinkerpop.apache.org/download.html).
+In this quickstart, you use the Gremlin console to connect to a newly created Azure Cosmos DB for Gremlin account.
## Prerequisites
-You need to have an Azure subscription to create an Azure Cosmos DB account for this quickstart.
--
-You also need to install the [Gremlin Console](https://tinkerpop.apache.org/download.html). The **recommended version is v3.4.13**. (To use Gremlin Console on Windows, you need to install [Java Runtime](https://www.oracle.com/technetwork/java/javase/overview/https://docsupdatetracker.net/index.html), minimum requires Java 8 but it is preferable to use Java 11).
-
-## Create a database account
--
-## Add a graph
--
-## <a id="ConnectAppService"></a>Connect to your app service/Graph
-
-1. Before starting the Gremlin Console, create or modify the remote-secure.yaml configuration file in the `apache-tinkerpop-gremlin-console-3.2.5/conf` directory.
-2. Fill in your *host*, *port*, *username*, *password*, *connectionPool*, and *serializer* configurations as defined in the following table:
-
- Setting|Suggested value|Description
- ||
- hosts|[*account-name*.**gremlin**.cosmos.azure.com]|See the following screenshot. This is the **Gremlin URI** value on the Overview page of the Azure portal, in square brackets, with the trailing :443/ removed. Note: Be sure to use the Gremlin value, and **not** the URI that ends with [*account-name*.documents.azure.com] which would likely result in a "Host did not respond in a timely fashion" exception when attempting to execute Gremlin queries later.
- port|443|Set to 443.
- username|*Your username*|The resource of the form `/dbs/<db>/colls/<coll>` where `<db>` is your database name and `<coll>` is your collection name.
- password|*Your primary key*| See second screenshot below. This is your primary key, which you can retrieve from the Keys page of the Azure portal, in the Primary Key box. Use the copy button on the left side of the box to copy the value.
- connectionPool|{enableSsl: true}|Your connection pool setting for TLS.
- serializer|{ className: org.apache.tinkerpop.gremlin.<br>driver.ser.GraphSONMessageSerializerV2d0,<br> config: { serializeResultToString: true }}|Set to this value and delete any `\n` line breaks when pasting in the value.
-
- For the hosts value, copy the **Gremlin URI** value from the **Overview** page:
-
- :::image type="content" source="./media/quickstart-console/gremlin-uri.png" alt-text="View and copy the Gremlin URI value on the Overview page in the Azure portal":::
-
- For the password value, copy the **Primary key** from the **Keys** page:
-
- :::image type="content" source="./media/quickstart-console/keys.png" alt-text="View and copy your primary key in the Azure portal, Keys page":::
-
- Your remote-secure.yaml file should look like this:
-
- ```yaml
- hosts: [your_database_server.gremlin.cosmos.azure.com]
- port: 443
- username: /dbs/your_database/colls/your_collection
- password: your_primary_key
- connectionPool: {
- enableSsl: true
- }
- serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0, config: { serializeResultToString: true }}
- ```
-
- make sure to wrap the value of hosts parameter within brackets [].
-
-1. In your terminal, run `bin/gremlin.bat` or `bin/gremlin.sh` to start the [Gremlin Console](https://tinkerpop.apache.org/docs/3.2.5/tutorials/getting-started/).
-
-1. In your terminal, run `:remote connect tinkerpop.server conf/remote-secure.yaml` to connect to your app service.
-
- > [!TIP]
- > If you receive the error `No appenders could be found for logger` ensure that you updated the serializer value in the remote-secure.yaml file as described in step 2. If your configuration is correct, then this warning can be safely ignored as it should not impact the use of the console.
-
-1. Next run `:remote console` to redirect all console commands to the remote server.
-
- > [!NOTE]
- > If you don't run the `:remote console` command but would like to redirect all console commands to the remote server, you should prefix the command with `:>`, for example you should run the command as `:> g.V().count()`. This prefix is a part of the command and it is important when using the Gremlin console with Azure Cosmos DB. Omitting this prefix instructs the console to execute the command locally, often against an in-memory graph. Using this prefix `:>` tells the console to execute a remote command, in this case against Azure Cosmos DB (either the localhost emulator, or an Azure instance).
-
-Great! Now that we finished the setup, let's start running some console commands.
-
-Let's try a simple count() command. Type the following into the console at the prompt:
-
-```console
-g.V().count()
-```
-
-## Create vertices and edges
-
-Let's begin by adding five person vertices for *Thomas*, *Mary Kay*, *Robin*, *Ben*, and *Jack*.
-
-Input (Thomas):
-
-```console
-g.addV('person').property('firstName', 'Thomas').property('lastName', 'Andersen').property('age', 44).property('userid', 1).property('pk', 'pk')
-```
-
-Output:
-
-```bash
-==>[id:796cdccc-2acd-4e58-a324-91d6f6f5ed6d,label:person,type:vertex,properties:[firstName:[[id:f02a749f-b67c-4016-850e-910242d68953,value:Thomas]],lastName:[[id:f5fa3126-8818-4fda-88b0-9bb55145ce5c,value:Andersen]],age:[[id:f6390f9c-e563-433e-acbf-25627628016e,value:44]],userid:[[id:796cdccc-2acd-4e58-a324-91d6f6f5ed6d|userid,value:1]]]]
-```
+- An Azure account with an active subscription.
+ - No Azure subscription? [Sign up for a free Azure account](https://azure.microsoft.com/free/).
+ - Don't want an Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no subscription required.
+- [Docker host](https://www.docker.com/)
+ - Don't have Docker installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1).
+- [Azure Command-Line Interface (CLI)](/cli/azure/)
-Input (Mary Kay):
-```console
-g.addV('person').property('firstName', 'Mary Kay').property('lastName', 'Andersen').property('age', 39).property('userid', 2).property('pk', 'pk')
+## Create an API for Gremlin account and relevant resources
-```
+The API for Gremlin account should be created prior to using the Gremlin console. Additionally, it helps to have the database and graph in place.
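If you'd rather script this setup than use the portal, the following Azure CLI sketch creates the account, database, and graph that the rest of this quickstart assumes. The resource group and account name are placeholders; the `cosmicworks` database, `products` graph, and `/category` partition key match the configuration file and queries used later in this quickstart.

```bash
# Create an Azure Cosmos DB account with the Gremlin capability enabled.
az cosmosdb create \
  --resource-group <resource-group> \
  --name <account-name> \
  --capabilities EnableGremlin

# Create the database and graph used by this quickstart.
az cosmosdb gremlin database create \
  --resource-group <resource-group> \
  --account-name <account-name> \
  --name cosmicworks

az cosmosdb gremlin graph create \
  --resource-group <resource-group> \
  --account-name <account-name> \
  --database-name cosmicworks \
  --name products \
  --partition-key-path "/category"

# Retrieve the key referenced as <account-key> in remote-secure.yaml.
az cosmosdb keys list \
  --resource-group <resource-group> \
  --name <account-name> \
  --type keys
```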
-Output:
-```bash
-==>[id:0ac9be25-a476-4a30-8da8-e79f0119ea5e,label:person,type:vertex,properties:[firstName:[[id:ea0604f8-14ee-4513-a48a-1734a1f28dc0,value:Mary Kay]],lastName:[[id:86d3bba5-fd60-4856-9396-c195ef7d7f4b,value:Andersen]],age:[[id:bc81b78d-30c4-4e03-8f40-50f72eb5f6da,value:39]],userid:[[id:0ac9be25-a476-4a30-8da8-e79f0119ea5e|userid,value:2]]]]
+## Start and configure the Gremlin console using Docker
-```
+For the Gremlin console, this quickstart uses the `tinkerpop/gremlin-console` container image from Docker Hub. This image ensures that you're using the appropriate version of the console (`3.4`) for connection with the API for Gremlin. Once the console is running, connect from your local Docker host to the remote API for Gremlin account.
-Input (Robin):
+1. Pull the `3.4` version of the `tinkerpop/gremlin-console` container image.
-```console
-g.addV('person').property('firstName', 'Robin').property('lastName', 'Wakefield').property('userid', 3).property('pk', 'pk')
-```
+ ```bash
+ docker pull tinkerpop/gremlin-console:3.4
+ ```
-Output:
+1. Create an empty working folder. In the empty folder, create a **remote-secure.yaml** file. Add this YAML configuration to the file.
-```bash
-==>[id:8dc14d6a-8683-4a54-8d74-7eef1fb43a3e,label:person,type:vertex,properties:[firstName:[[id:ec65f078-7a43-4cbe-bc06-e50f2640dc4e,value:Robin]],lastName:[[id:a3937d07-0e88-45d3-a442-26fcdfb042ce,value:Wakefield]],userid:[[id:8dc14d6a-8683-4a54-8d74-7eef1fb43a3e|userid,value:3]]]]
-```
+ ```yml
+ hosts: [<account-name>.gremlin.cosmos.azure.com]
+ port: 443
+ username: /dbs/cosmicworks/colls/products
+ password: <account-key>
+ connectionPool: {
+ enableSsl: true,
+ sslEnabledProtocols: [TLSv1.2]
+ }
+ serializer: {
+ className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0,
+ config: {
+ serializeResultToString: true
+ }
+ }
+ ```
-Input (Ben):
+ > [!NOTE]
+ > Replace the `<account-name>` and `<account-key>` placeholders with the *NAME* and *KEY* values obtained earlier in this quickstart.
-```console
-g.addV('person').property('firstName', 'Ben').property('lastName', 'Miller').property('userid', 4).property('pk', 'pk')
+1. Open a new terminal in the context of your working folder that includes the **remote-secure.yaml** file.
-```
+1. Run the Docker container image in interactive (`--interactive --tty`) mode. Ensure that you mount the current working folder to the `/opt/gremlin-console/conf/` path within the container.
-Output:
+ ```bash
+ docker run -it --mount type=bind,source=.,target=/opt/gremlin-console/conf/ tinkerpop/gremlin-console:3.4
+ ```
-```bash
-==>[id:ee86b670-4d24-4966-9a39-30529284b66f,label:person,type:vertex,properties:[firstName:[[id:a632469b-30fc-4157-840c-b80260871e9a,value:Ben]],lastName:[[id:4a08d307-0719-47c6-84ae-1b0b06630928,value:Miller]],userid:[[id:ee86b670-4d24-4966-9a39-30529284b66f|userid,value:4]]]]
-```
+1. Within the Gremlin console container, connect to the remote (API for Gremlin) account using the **remote-secure.yaml** configuration file.
-Input (Jack):
+ ```gremlin
+ :remote connect tinkerpop.server conf/remote-secure.yaml
+ ```
-```console
-g.addV('person').property('firstName', 'Jack').property('lastName', 'Connor').property('userid', 5).property('pk', 'pk')
-```
+## Create and traverse vertices and edges
-Output:
+Now that the console is connected to the account, use the standard Gremlin syntax to create and traverse both vertices and edges.
-```bash
-==>[id:4c835f2a-ea5b-43bb-9b6b-215488ad8469,label:person,type:vertex,properties:[firstName:[[id:4250824e-4b72-417f-af98-8034aa15559f,value:Jack]],lastName:[[id:44c1d5e1-a831-480a-bf94-5167d133549e,value:Connor]],userid:[[id:4c835f2a-ea5b-43bb-9b6b-215488ad8469|userid,value:5]]]]
-```
+1. Add a vertex for a **product** with the following properties:
+ | | Value |
+ | | |
+ | **label** | `product` |
+ | **id** | `68719518371` |
+ | **`name`** | `Kiama classic surfboard` |
+ | **`price`** | `285.55` |
+ | **`category`** | `surfboards` |
-Next, let's add edges for relationships between our people.
+ ```gremlin
+ :> g.addV('product').property('id', '68719518371').property('name', 'Kiama classic surfboard').property('price', 285.55).property('category', 'surfboards')
+ ```
-Input (Thomas -> Mary Kay):
+ > [!IMPORTANT]
+ > Don't forget the `:>` prefix. This prefix is required to run the command remotely.
-```console
-g.V().hasLabel('person').has('firstName', 'Thomas').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Mary Kay'))
-```
+1. Add another **product** vertex with these properties:
-Output:
+ | | Value |
+ | | |
+ | **label** | `product` |
+ | **id** | `68719518403` |
+ | **`name`** | `Montau Turtle Surfboard` |
+ | **`price`** | `600` |
+ | **`category`** | `surfboards` |
-```bash
-==>[id:c12bf9fb-96a1-4cb7-a3f8-431e196e702f,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:0d1fa428-780c-49a5-bd3a-a68d96391d5c,outV:1ce821c6-aa3d-4170-a0b7-d14d2a4d18c3]
-```
+ ```gremlin
+ :> g.addV('product').property('id', '68719518403').property('name', 'Montau Turtle Surfboard').property('price', 600).property('category', 'surfboards')
+ ```
-Input (Thomas -> Robin):
+1. Create an **edge** named `replaces` to define a relationship between the two products.
-```console
-g.V().hasLabel('person').has('firstName', 'Thomas').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Robin'))
-```
+ ```gremlin
+ :> g.V(['surfboards', '68719518403']).addE('replaces').to(g.V(['surfboards', '68719518371']))
+ ```
-Output:
+1. Count all vertices within the graph.
-```bash
-==>[id:58319bdd-1d3e-4f17-a106-0ddf18719d15,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:3e324073-ccfc-4ae1-8675-d450858ca116,outV:1ce821c6-aa3d-4170-a0b7-d14d2a4d18c3]
-```
+ ```gremlin
+ :> g.V().count()
+ ```
-Input (Robin -> Ben):
+1. Traverse the graph to find all vertices that replace the `Kiama classic surfboard`.
-```console
-g.V().hasLabel('person').has('firstName', 'Robin').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Ben'))
-```
+ ```gremlin
+ :> g.V().hasLabel('product').has('category', 'surfboards').has('name', 'Kiama classic surfboard').inE('replaces').outV()
+ ```
-Output:
+1. Traverse the graph to find all vertices that `Montau Turtle Surfboard` replaces.
-```bash
-==>[id:889c4d3c-549e-4d35-bc21-a3d1bfa11e00,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:40fd641d-546e-412a-abcc-58fe53891aab,outV:3e324073-ccfc-4ae1-8675-d450858ca116]
-```
+ ```gremlin
+ :> g.V().hasLabel('product').has('category', 'surfboards').has('name', 'Montau Turtle Surfboard').outE('replaces').inV()
+ ```
-## Update a vertex
-
-Let's update the *Thomas* vertex with a new age of *45*.
-
-Input:
-```console
-g.V().hasLabel('person').has('firstName', 'Thomas').property('age', 45)
-```
-Output:
-
-```bash
-==>[id:ae36f938-210e-445a-92df-519f2b64c8ec,label:person,type:vertex,properties:[firstName:[[id:872090b6-6a77-456a-9a55-a59141d4ebc2,value:Thomas]],lastName:[[id:7ee7a39a-a414-4127-89b4-870bc4ef99f3,value:Andersen]],age:[[id:a2a75d5a-ae70-4095-806d-a35abcbfe71d,value:45]]]]
-```
-
-## Query your graph
-
-Now, let's run a variety of queries against your graph.
-
-First, let's try a query with a filter to return only people who are older than 40 years old.
-
-Input (filter query):
-
-```console
-g.V().hasLabel('person').has('age', gt(40))
-```
-
-Output:
-
-```bash
-==>[id:ae36f938-210e-445a-92df-519f2b64c8ec,label:person,type:vertex,properties:[firstName:[[id:872090b6-6a77-456a-9a55-a59141d4ebc2,value:Thomas]],lastName:[[id:7ee7a39a-a414-4127-89b4-870bc4ef99f3,value:Andersen]],age:[[id:a2a75d5a-ae70-4095-806d-a35abcbfe71d,value:45]]]]
-```
-
-Next, let's project the first name for the people who are older than 40 years old.
-
-Input (filter + projection query):
-
-```console
-g.V().hasLabel('person').has('age', gt(40)).values('firstName')
-```
-
-Output:
-
-```bash
-==>Thomas
-```
-
-## Traverse your graph
-
-Let's traverse the graph to return all of Thomas's friends.
-
-Input (friends of Thomas):
-
-```console
-g.V().hasLabel('person').has('firstName', 'Thomas').outE('knows').inV().hasLabel('person')
-```
-
-Output:
-
-```bash
-==>[id:f04bc00b-cb56-46c4-a3bb-a5870c42f7ff,label:person,type:vertex,properties:[firstName:[[id:14feedec-b070-444e-b544-62be15c7167c,value:Mary Kay]],lastName:[[id:107ab421-7208-45d4-b969-bbc54481992a,value:Andersen]],age:[[id:4b08d6e4-58f5-45df-8e69-6b790b692e0a,value:39]]]]
-==>[id:91605c63-4988-4b60-9a30-5144719ae326,label:person,type:vertex,properties:[firstName:[[id:f760e0e6-652a-481a-92b0-1767d9bf372e,value:Robin]],lastName:[[id:352a4caa-bad6-47e3-a7dc-90ff342cf870,value:Wakefield]]]]
-```
-
-Next, let's get the next layer of vertices. Traverse the graph to return all the friends of Thomas's friends.
-
-Input (friends of friends of Thomas):
-
-```console
-g.V().hasLabel('person').has('firstName', 'Thomas').outE('knows').inV().hasLabel('person').outE('knows').inV().hasLabel('person')
-```
-Output:
-
-```bash
-==>[id:a801a0cb-ee85-44ee-a502-271685ef212e,label:person,type:vertex,properties:[firstName:[[id:b9489902-d29a-4673-8c09-c2b3fe7f8b94,value:Ben]],lastName:[[id:e084f933-9a4b-4dbc-8273-f0171265cf1d,value:Miller]]]]
-```
-
-## Drop a vertex
-
-Let's now delete a vertex from the graph database.
-
-Input (drop Jack vertex):
-
-```console
-g.V().hasLabel('person').has('firstName', 'Jack').drop()
-```
-
-## Clear your graph
-
-Finally, let's clear the database of all vertices and edges.
-
-Input:
-
-```console
-g.E().drop()
-g.V().drop()
-```
-
-Congratulations! You've completed this Azure Cosmos DB: Gremlin API tutorial!
+## Clean up resources
-## Review SLAs in the Azure portal
+When you no longer need the API for Gremlin account, delete the corresponding resource group.
-## Clean up resources
+## How did we solve the problem?
+Azure Cosmos DB for Apache Gremlin solved our problem by offering Gremlin as a service. With this offering, you aren't required to stand up your own Gremlin server instances or manage your own infrastructure. Additionally, you can scale your solution as your needs grow over time.
-## Next steps
+To connect to the API for Gremlin account, you used the `tinkerpop/gremlin-console` container image to run the Gremlin console without requiring a local installation. Then, you used the configuration stored in the **remote-secure.yaml** file to connect from the running container to the API for Gremlin account. From there, you ran multiple common Gremlin commands.
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, create vertices and edges, and traverse your graph using the Gremlin console. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+## Next step
> [!div class="nextstepaction"]
-> [Query using Gremlin](tutorial-query.md)
+> [Create and query data using Azure Cosmos DB for Apache Gremlin](tutorial-query.md)
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-dotnet.md
Title: Build an Azure Cosmos DB .NET Framework, Core application using the Gremlin API
-description: Presents a .NET Framework/Core code sample you can use to connect to and query Azure Cosmos DB
+ Title: 'Quickstart: Gremlin library for .NET'
+
+description: In this quickstart, connect to Azure Cosmos DB for Apache Gremlin using .NET. Then, create and traverse vertices and edges.
+ - Previously updated : 05/02/2020-+ Last updated : 09/27/2023
+# CustomerIntent: As a .NET developer, I want to use a library for my programming language so that I can create and traverse vertices and edges in code.
-# Quickstart: Build a .NET Framework or Core application using the Azure Cosmos DB for Gremlin account
+
+# Quickstart: Azure Cosmos DB for Apache Gremlin library for .NET
+ [!INCLUDE[Gremlin](../includes/appliesto-gremlin.md)]
-> [!div class="op_single_selector"]
-> * [Gremlin console](quickstart-console.md)
-> * [.NET](quickstart-dotnet.md)
-> * [Java](quickstart-java.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Python](quickstart-python.md)
-> * [PHP](quickstart-php.md)
->
+
+Azure Cosmos DB for Apache Gremlin is a fully managed graph database service implementing the popular [`Apache TinkerPop`](https://tinkerpop.apache.org/) graph computing framework, which uses the Gremlin query language. The API for Gremlin gives you a low-friction way to get started using Gremlin with a service that can grow and scale out as much as you need with minimal management.
-Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases. All of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
+In this quickstart, you use the `Gremlin.Net` library to connect to a newly created Azure Cosmos DB for Gremlin account.
-This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](introduction.md) account, database, and graph (container) using the Azure portal. You then build and run a console app built using the open-source driver [Gremlin.Net](https://tinkerpop.apache.org/docs/3.2.7/reference/#gremlin-DotNet).
+[Library source code](https://github.com/apache/tinkerpop/tree/master/gremlin-dotnet) | [Package (NuGet)](https://www.nuget.org/packages/Gremlin.Net)
## Prerequisites
-Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
+- An Azure account with an active subscription.
+ - No Azure subscription? [Sign up for a free Azure account](https://azure.microsoft.com/free/).
+ - Don't want an Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no subscription required.
+- [.NET (LTS)](https://dotnet.microsoft.com/)
+ - Don't have .NET installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1).
+- [Azure Command-Line Interface (CLI)](/cli/azure/)
++
+## Setting up
+This section walks you through creating an API for Gremlin account and setting up a .NET project to use the library to connect to the account.
-## Create a database account
+### Create an API for Gremlin account
+Create the API for Gremlin account before you use the .NET library. It also helps to have the database and graph already in place.
-## Add a graph
+### Create a new .NET console application
-## Clone the sample application
+Create a .NET console application in an empty folder using your preferred terminal.
-Now let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it's to work with data programmatically.
+1. Open your terminal in an empty folder.
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+1. Use the `dotnet new` command specifying the **console** template.
```bash
- md "C:\git-samples"
+ dotnet new console
```
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+### Install the NuGet package
+
+Add the `Gremlin.NET` NuGet package to the .NET project.
+
+1. Use the `dotnet add package` command specifying the `Gremlin.Net` NuGet package.
```bash
- cd "C:\git-samples"
+ dotnet add package Gremlin.Net
```
-3. Run the following command to clone the sample repository. The ``git clone`` command creates a copy of the sample app on your computer.
+1. Build the .NET project using `dotnet build`.
```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-gremlindotnet-getting-started.git
+ dotnet build
```
-4. Then open Visual Studio and open the solution file.
+ Make sure that the build was successful with no errors. The expected output from the build should look something like this:
+
+ ```output
+ Determining projects to restore...
+ All projects are up-to-date for restore.
+ dslkajfjlksd -> \dslkajfjlksd\bin\Debug\net6.0\dslkajfjlksd.dll
+
+ Build succeeded.
+ 0 Warning(s)
+ 0 Error(s)
+ ```
-5. Restore the NuGet packages in the project. The restore operation should include the Gremlin.Net driver, and the Newtonsoft.Json package.
+### Configure environment variables
-6. You can also install the Gremlin.Net@v3.4.13 driver manually using the NuGet package manager, or the [NuGet command-line utility](/nuget/install-nuget-client-tools):
+To use the *NAME* and *KEY* values obtained earlier in this quickstart, persist them to new environment variables on the local machine running the application.
+
+1. To set the environment variables, use your terminal to persist the values as `COSMOS_GREMLIN_ENDPOINT` and `COSMOS_GREMLIN_KEY` respectively.
```bash
- nuget install Gremlin.NET -Version 3.4.13
+ export COSMOS_GREMLIN_ENDPOINT="<account-name>"
+ export COSMOS_GREMLIN_KEY="<account-key>"
```
-
-> [!NOTE]
-> The supported Gremlin.NET driver version for Gremlin API is available [here](support.md#compatible-client-libraries). Latest released versions of Gremlin.NET may see incompatibilities, so please check the linked table for compatibility updates.
-## Review the code
+1. Validate that the environment variables were set correctly.
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+ ```bash
+ printenv COSMOS_GREMLIN_ENDPOINT
+ printenv COSMOS_GREMLIN_KEY
+ ```
-The following snippets are all taken from the Program.cs file.
+## Code examples
-* Set your connection parameters based on the account created above:
+- [Authenticate the client](#authenticate-the-client)
+- [Create vertices](#create-vertices)
+- [Create edges](#create-edges)
+- [Query vertices &amp; edges](#query-vertices--edges)
- :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="configureConnectivity":::
+The code in this article connects to a database named `cosmicworks` and a graph named `products`. The code then adds vertices and edges to the graph before traversing the added items.
-* The Gremlin commands to be executed are listed in a Dictionary:
+### Authenticate the client
- :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="defineQueries":::
+Application requests to most Azure services must be authorized. For the API for Gremlin, use the *NAME* and *KEY* values obtained earlier in this quickstart.
-* Create a new `GremlinServer` and `GremlinClient` connection objects using the parameters provided above:
+1. Open the **Program.cs** file.
- :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="defineClientandServerObjects":::
+1. Delete any existing content within the file.
-* Execute each Gremlin query using the `GremlinClient` object with an async task. You can read the Gremlin queries from the dictionary defined in the previous step and execute them. Later get the result and read the values, which are formatted as a dictionary, using the `JsonSerializer` class from Newtonsoft.Json package:
+1. Add a using block for the `Gremlin.Net.Driver` namespace.
- :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="executeQueries":::
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="imports":::
-## Update your connection string
+1. Create `accountName` and `accountKey` string variables. Store the `COSMOS_GREMLIN_ENDPOINT` and `COSMOS_GREMLIN_KEY` environment variables as the values for each respective variable.
-Now go back to the Azure portal to get your connection string information and copy it into the app.
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="environment_variables":::
-1. From the [Azure portal](https://portal.azure.com/), navigate to your graph database account. In the **Overview** tab, you can see two endpoints-
-
- **.NET SDK URI** - This value is used when you connect to the graph account by using Microsoft.Azure.Graphs library.
+1. Create a new instance of `GremlinServer` using the account's credentials.
- **Gremlin Endpoint** - This value is used when you connect to the graph account by using Gremlin.Net library.
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="authenticate_client":::
- :::image type="content" source="./media/quickstart-dotnet/endpoint.png" alt-text="Copy the endpoint":::
+1. Create a new instance of `GremlinClient` using the remote server credentials and the **GraphSON 2.0** serializer.
- For this sample, record the *Host* value of the **Gremlin Endpoint**. For example, if the URI is ``https://graphtest.gremlin.cosmosdb.azure.com``, the *Host* value would be ``graphtest.gremlin.cosmosdb.azure.com``.
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="connect_client":::
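The `:::code` references above pull from an external sample project, so the snippet bodies aren't visible in this article view. As a rough sketch only (assuming the `cosmicworks` database and `products` graph used throughout this quickstart, the classic `*.gremlin.cosmosdb.azure.com` endpoint host, and the `GraphSON2MessageSerializer` type available in recent Gremlin.Net releases), the authentication and connection steps typically look something like this; the actual sample may differ:

```csharp
using Gremlin.Net.Driver;
using Gremlin.Net.Structure.IO.GraphSON;

// Read the account name and key from the environment variables set earlier.
string accountName = Environment.GetEnvironmentVariable("COSMOS_GREMLIN_ENDPOINT")!;
string accountKey = Environment.GetEnvironmentVariable("COSMOS_GREMLIN_KEY")!;

// The username follows the /dbs/<database>/colls/<graph> convention for the API for Gremlin.
// Check the account's Overview page for the exact Gremlin endpoint host name.
var server = new GremlinServer(
    hostname: $"{accountName}.gremlin.cosmosdb.azure.com",
    port: 443,
    enableSsl: true,
    username: "/dbs/cosmicworks/colls/products",
    password: accountKey);

// The API for Gremlin expects the GraphSON 2.0 wire format.
using var client = new GremlinClient(server, new GraphSON2MessageSerializer());
```

The same `client` instance can then be reused for the vertex, edge, and query submissions in the next sections.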
-1. Next, navigate to the **Keys** tab and record the *PRIMARY KEY* value from the Azure portal.
+### Create vertices
-1. After you've copied the URI and PRIMARY KEY of your account, save them to a new environment variable on the local machine running the application. To set the environment variable, open a command prompt window, and run the following command. Make sure to replace ``<cosmos-account-name>`` and ``<cosmos-account-primary-key>`` values.
+Now that the application is connected to the account, use the standard Gremlin syntax to create vertices.
- ### [Windows](#tab/windows)
-
- ```powershell
- setx Host "<cosmos-account-name>.gremlin.cosmosdb.azure.com"
- setx PrimaryKey "<cosmos-account-primary-key>"
- ```
-
- ### [Linux / macOS](#tab/linux+macos)
-
- ```bash
- export Host=<cosmos-account-name>.gremlin.cosmosdb.azure.com
- export PrimaryKey=<cosmos-account-primary-key>
- ```
-
-
+1. Use `SubmitAsync` to run a command server-side on the API for Gremlin account. Create a **product** vertex with the following properties:
+
+ | | Value |
+   | --- | --- |
+ | **label** | `product` |
+ | **id** | `68719518371` |
+ | **`name`** | `Kiama classic surfboard` |
+ | **`price`** | `285.55` |
+ | **`category`** | `surfboards` |
+
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="create_vertices_1":::
+
+1. Create a second **product** vertex with these properties:
+
+ | | Value |
+   | --- | --- |
+ | **label** | `product` |
+ | **id** | `68719518403` |
+ | **`name`** | `Montau Turtle Surfboard` |
+ | **`price`** | `600.00` |
+ | **`category`** | `surfboards` |
+
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="create_vertices_2":::
+
+1. Create a third **product** vertex with these properties:
+
+ | | Value |
+   | --- | --- |
+ | **label** | `product` |
+ | **id** | `68719518409` |
+ | **`name`** | `Bondi Twin Surfboard` |
+ | **`price`** | `585.50` |
+ | **`category`** | `surfboards` |
-1. Open the *Program.cs* file and update the "database and "container" variables with the database and container (which is also the graph name) names created above.
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="create_vertices_3":::
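For a concrete picture of one of these steps, the following is a minimal sketch, not the exact referenced sample, that submits the same `g.addV(...)` traversal shown in the Gremlin console quickstart using the values from the first table above:

```csharp
// Submit a Gremlin traversal that creates the 'Kiama classic surfboard' product vertex.
// The partition key property ('category') must be set on every vertex.
await client.SubmitAsync(
    "g.addV('product')" +
    ".property('id', '68719518371')" +
    ".property('name', 'Kiama classic surfboard')" +
    ".property('price', 285.55)" +
    ".property('category', 'surfboards')");
```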
- `private static string database = "your-database-name";`
- `private static string container = "your-container-or-graph-name";`
+### Create edges
-1. Save the Program.cs file.
+Create edges using the Gremlin syntax to define relationships between vertices.
-You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
+1. Create an edge from the `Montau Turtle Surfboard` product named **replaces** to the `Kiama classic surfboard` product.
-## Run the console app
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="create_edges_1":::
-Select CTRL + F5 to run the application. The application will print both the Gremlin query commands and results in the console.
+ > [!TIP]
+ > This edge definition uses the `g.V(['<partition-key>', '<id>'])` syntax. Alternatively, you can use `g.V('<id>').has('category', '<partition-key>')`.
- The console window displays the vertexes and edges being added to the graph. When the script completes, press ENTER to close the console window.
+1. Create another **replaces** edge from the same product to the `Bondi Twin Surfboard`.
-## Browse using the Data Explorer
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="create_edges_2":::
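As a hedged sketch of this edge-creation step (again, the referenced sample may differ), each vertex is addressed by its partition key and `id` pair when the edge is added:

```csharp
// Create a 'replaces' edge from 'Montau Turtle Surfboard' to 'Kiama classic surfboard'.
// Each vertex is addressed by its [partition key, id] pair.
await client.SubmitAsync(
    "g.V(['surfboards', '68719518403'])" +
    ".addE('replaces')" +
    ".to(g.V(['surfboards', '68719518371']))");
```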
-You can now go back to Data Explorer in the Azure portal and browse and query your new graph data.
+### Query vertices &amp; edges
-1. In Data Explorer, the new database appears in the Graphs pane. Expand the database and container nodes, and then select **Graph**.
+Use the Gremlin syntax to traverse the graph and discover relationships between vertices.
-2. Select the **Apply Filter** button to use the default query to view all the vertices in the graph. The data generated by the sample app is displayed in the Graphs pane.
+1. Traverse the graph and find all vertices that `Montau Turtle Surfboard` replaces.
- You can zoom in and out of the graph, you can expand the graph display space, add extra vertices, and move vertices on the display surface.
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="query_vertices_edges":::
- :::image type="content" source="./media/quickstart-dotnet/graph-explorer.png" alt-text="View the graph in Data Explorer in the Azure portal":::
+1. Write to the console the static string `[CREATED PRODUCT]\t68719518403`. Then, iterate over each matching vertex using a `foreach` loop and write to the console a message that starts with `[REPLACES PRODUCT]` and includes the matching product `id` field as a suffix.
-## Review SLAs in the Azure portal
+ :::code language="csharp" source="~/cosmos-db-apache-gremlin-dotnet-samples/001-quickstart/Program.cs" id="output_vertices_edges":::
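To tie the two preceding steps together, here's a minimal sketch, under the same caveat that the referenced sample may differ, that submits the traversal and writes the results, deserialized as dictionaries, to the console:

```csharp
// Find every vertex that 'Montau Turtle Surfboard' replaces.
var results = await client.SubmitAsync<Dictionary<string, object>>(
    "g.V().hasLabel('product')" +
    ".has('category', 'surfboards')" +
    ".has('name', 'Montau Turtle Surfboard')" +
    ".outE('replaces').inV()");

Console.WriteLine("[CREATED PRODUCT]\t68719518403");
foreach (var vertex in results)
{
    // Each GraphSON vertex deserializes to a dictionary with 'id', 'label', and 'properties' keys.
    Console.WriteLine($"[REPLACES PRODUCT]\t{vertex["id"]}");
}
```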
+## Run the code
+
+Validate that your application works as expected by running the application. The application should execute with no errors or warnings. The output of the application includes data about the created and queried items.
+
+1. Open the terminal in the .NET project folder.
+
+1. Use `dotnet run` to run the application.
+
+ ```bash
+ dotnet run
+ ```
+
+1. Observe the output from the application.
+
+ ```output
+ [CREATED PRODUCT] 68719518403
+ [REPLACES PRODUCT] 68719518371
+ [REPLACES PRODUCT] 68719518409
+ ```
## Clean up resources
+When you no longer need the API for Gremlin account, delete the corresponding resource group.
-## Next steps
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run an app. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+## Next step
> [!div class="nextstepaction"]
-> [Query using Gremlin](tutorial-query.md)
+> [Create and query data using Azure Cosmos DB for Apache Gremlin](tutorial-query.md)
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-java.md
- Title: Build a graph database with Java in Azure Cosmos DB
-description: Presents a Java code sample you can use to connect to and query graph data in Azure Cosmos DB using Gremlin.
--- Previously updated : 03/26/2019-----
-# Quickstart: Build a graph database with the Java SDK and the Azure Cosmos DB for Gremlin
-
-> [!div class="op_single_selector"]
-> * [Gremlin console](quickstart-console.md)
-> * [.NET](quickstart-dotnet.md)
-> * [Java](quickstart-java.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Python](quickstart-python.md)
-> * [PHP](quickstart-php.md)
->
-
-In this quickstart, you create and manage an Azure Cosmos DB for Gremlin (graph) API account from the Azure portal, and add data by using a Java app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -- [Java Development Kit (JDK) 8](/java/openjdk/download#openjdk-8). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.-- A [Maven binary archive](https://maven.apache.org/download.cgi). -- [Git](https://www.git-scm.com/downloads). -- [Gremlin-driver 3.4.13](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver/3.4.13), this dependency is mentioned in the quickstart sample's pom.xml-
-## Create a database account
-
-Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
--
-## Add a graph
--
-## Clone the sample application
-
-Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
-
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
-
- ```bash
- md "C:\git-samples"
- ```
-
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to a folder to install the sample app.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-java-getting-started.git
- ```
-
-## Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-information).
-
-The following snippets are all taken from the *C:\git-samples\azure-cosmos-db-graph-java-getting-started\src\GetStarted\Program.java* file.
-
-This Java console app uses a [Gremlin API](introduction.md) database with the OSS [Apache TinkerPop](https://tinkerpop.apache.org/) driver.
--- The Gremlin `Client` is initialized from the configuration in the *C:\git-samples\azure-cosmos-db-graph-java-getting-started\src\remote.yaml* file.-
- ```java
- cluster = Cluster.build(new File("src/remote.yaml")).create();
- ...
- client = cluster.connect();
- ```
--- Series of Gremlin steps are executed using the `client.submit` method.-
- ```java
- ResultSet results = client.submit(gremlin);
-
- CompletableFuture<List<Result>> completableFutureResults = results.all();
- List<Result> resultList = completableFutureResults.get();
-
- for (Result result : resultList) {
- System.out.println(result.toString());
- }
- ```
-
-## Update your connection information
-
-Now go back to the Azure portal to get your connection information and copy it into the app. These settings enable your app to communicate with your hosted database.
-
-1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys**.
-
- Copy the first portion of the URI value.
-
- :::image type="content" source="./media/quickstart-java/copy-access-key-azure-portal.png" alt-text="View and copy an access key in the Azure portal, Keys page":::
-
-2. Open the *src/remote.yaml* file and paste the unique ID value over `$name$` in `hosts: [$name$.graphs.azure.com]`.
-
- Line 1 of *remote.yaml* should now look similar to
-
- `hosts: [test-graph.graphs.azure.com]`
-
-3. Change `graphs` to `gremlin.cosmosdb` in the `endpoint` value. (If you created your graph database account before December 20, 2017, make no changes to the endpoint value and continue to the next step.)
-
- The endpoint value should now look like this:
-
- `"endpoint": "https://testgraphacct.gremlin.cosmosdb.azure.com:443/"`
-
-4. In the Azure portal, use the copy button to copy the PRIMARY KEY and paste it over `$masterKey$` in `password: $masterKey$`.
-
- Line 4 of *remote.yaml* should now look similar to
-
- `password: 2Ggkr662ifxz2Mg==`
-
-5. Change line 3 of *remote.yaml* from
-
- `username: /dbs/$database$/colls/$collection$`
-
- to
-
- `username: /dbs/sample-database/colls/sample-graph`
-
- If you used a unique name for your sample database or graph, update the values as appropriate.
-
-6. Save the *remote.yaml* file.
-
-## Run the console app
-
-1. In the git terminal window, `cd` to the azure-cosmos-db-graph-java-getting-started folder.
-
- ```git
- cd "C:\git-samples\azure-cosmos-db-graph-java-getting-started"
- ```
-
-2. In the git terminal window, use the following command to install the required Java packages.
-
- ```git
- mvn package
- ```
-
-3. In the git terminal window, use the following command to start the Java application.
-
- ```git
- mvn exec:java -D exec.mainClass=GetStarted.Program
- ```
-
- The terminal window displays the vertices being added to the graph.
-
- If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
-
- Once the program stops, select Enter, then switch back to the Azure portal in your internet browser.
-
-<a id="add-sample-data"></a>
-## Review and add sample data
-
-You can now go back to Data Explorer and see the vertices added to the graph, and add additional data points.
-
-1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-graph**, select **Graph**, and then select **Apply Filter**.
-
- :::image type="content" source="./media/quickstart-java/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Apply Filter.":::
-
-2. In the **Results** list, notice the new users added to the graph. Select **ben** and notice that the user is connected to robin. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
-
- :::image type="content" source="./media/quickstart-java/azure-cosmosdb-graph-explorer-new.png" alt-text="New vertices in the graph in Data Explorer in the Azure portal":::
-
-3. Let's add a few new users. Select **New Vertex** to add data to your graph.
-
- :::image type="content" source="./media/quickstart-java/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot shows the New Vertex pane where you can enter values.":::
-
-4. In the label box, enter *person*.
-
-5. Select **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the id key is required.
-
- key|value|Notes
- -|-|-
- id|ashley|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
- gender|female|
- tech | java |
-
- > [!NOTE]
- > In this quickstart you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
-
-6. Select **OK**. You may need to expand your screen to see **OK** on the bottom of the screen.
-
-7. Select **New Vertex** again and add an additional new user.
-
-8. Enter a label of *person*.
-
-9. Select **Add property** to add each of the following properties:
-
- key|value|Notes
- -|-|-
- id|rakesh|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
- gender|male|
- school|MIT|
-
-10. Select **OK**.
-
-11. Select the **Apply Filter** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
-
- As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Apply Filter** to display all the results again.
-
-12. Now you can connect rakesh, and ashley. Ensure **ashley** is selected in the **Results** list, then select :::image type="content" source="./media/quickstart-java/edit-pencil-button.png" alt-text="Change the target of a vertex in a graph"::: next to **Targets** on lower right side. You may need to widen your window to see the button.
-
- :::image type="content" source="./media/quickstart-java/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Change the target of a vertex in a graph - Azure CosmosDB":::
-
-13. In the **Target** box enter *rakesh*, and in the **Edge label** box enter *knows*, and then select the check box.
-
- :::image type="content" source="./media/quickstart-java/azure-cosmosdb-data-explorer-set-target.png" alt-text="Add a connection in Data Explorer - Azure CosmosDB":::
-
-14. Now select **rakesh** from the results list and see that ashley and rakesh are connected.
-
- :::image type="content" source="./media/quickstart-java/azure-cosmosdb-graph-explorer.png" alt-text="Two vertices connected in Data Explorer - Azure CosmosDB":::
-
-That completes the resource creation part of this tutorial. You can continue to add vertexes to your graph, modify the existing vertexes, or change the queries. Now let's review the metrics Azure Cosmos DB provides, and then clean up the resources.
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run a Java app that adds data to the graph. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
-
-> [!div class="nextstepaction"]
-> [Query using Gremlin](tutorial-query.md)
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-nodejs.md
Title: Build an Azure Cosmos DB Node.js application by using Gremlin API
-description: Presents a Node.js code sample you can use to connect to and query Azure Cosmos DB
--- Previously updated : 06/05/2019
+ Title: 'Quickstart: Gremlin library for Node.js'
+
+description: In this quickstart, connect to Azure Cosmos DB for Apache Gremlin using Node.js. Then, create and traverse vertices and edges.
-++++ Last updated : 09/27/2023
+# CustomerIntent: As a Node.js developer, I want to use a library for my programming language so that I can create and traverse vertices and edges in code.
-# Quickstart: Build a Node.js application by using Azure Cosmos DB for Gremlin account
+
+# Quickstart: Azure Cosmos DB for Apache Gremlin library for Node.js
+ [!INCLUDE[Gremlin](../includes/appliesto-gremlin.md)]
-> [!div class="op_single_selector"]
-> * [Gremlin console](quickstart-console.md)
-> * [.NET](quickstart-dotnet.md)
-> * [Java](quickstart-java.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Python](quickstart-python.md)
-> * [PHP](quickstart-php.md)
->
-In this quickstart, you create and manage an Azure Cosmos DB for Gremlin (graph) API account from the Azure portal, and add data by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+Azure Cosmos DB for Apache Gremlin is a fully managed graph database service implementing the popular [`Apache TinkerPop`](https://tinkerpop.apache.org/) graph computing framework, which uses the Gremlin query language. The API for Gremlin gives you a low-friction way to get started using Gremlin with a service that can grow and scale out as much as you need with minimal management.
+
+In this quickstart, you use the `gremlin` library to connect to a newly created Azure Cosmos DB for Gremlin account.
+
+[Library source code](https://github.com/apache/tinkerpop/tree/master/gremlin-javascript/src/main/javascript/gremlin-javascript) | [Package (npm)](https://www.npmjs.com/package/gremlin)
## Prerequisites
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- [Node.js 0.10.29+](https://nodejs.org/).
-- [Git](https://git-scm.com/downloads).
-## Create a database account
+- An Azure account with an active subscription.
+ - No Azure subscription? [Sign up for a free Azure account](https://azure.microsoft.com/free/).
+ - Don't want an Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no subscription required.
+- [Node.js (LTS)](https://nodejs.org/)
+  - Don't have Node.js installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1).
+- [Azure Command-Line Interface (CLI)](/cli/azure/)
++
+## Setting up
+
+This section walks you through creating an API for Gremlin account and setting up a Node.js project to use the library to connect to the account.
+### Create an API for Gremlin account
-## Add a graph
+Create the API for Gremlin account before you use the Node.js library. It also helps to have the database and graph already in place.
-## Clone the sample application
+### Create a new Node.js console application
-Now let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+Create a Node.js console application in an empty folder using your preferred terminal.
-1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+1. Open your terminal in an empty folder.
+
+1. Initialize a new module
```bash
- md "C:\git-samples"
+ npm init es6 --yes
```
-2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+1. Create the **app.js** file
```bash
- cd "C:\git-samples"
+ touch app.js
+ ```
+
+### Install the npm package
+
+Add the `gremlin` npm package to the Node.js project.
+
+1. Open the **package.json** file and replace the contents with this JSON configuration.
+
+ ```json
+ {
+ "main": "app.js",
+ "type": "module",
+ "scripts": {
+ "start": "node app.js"
+ },
+ "dependencies": {
+ "gremlin": "^3.*"
+ }
+ }
```
-3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+1. Use the `npm install` command to install all packages specified in the **package.json** file.
```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-nodejs-getting-started.git
+ npm install
```
-3. Open the solution file in Visual Studio.
+### Configure environment variables
-## Review the code
+To use the *NAME* and *KEY* values obtained earlier in this quickstart, persist them to new environment variables on the local machine running the application.
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+1. To set the environment variables, use your terminal to persist the values as `COSMOS_GREMLIN_ENDPOINT` and `COSMOS_GREMLIN_KEY` respectively.
-The following snippets are all taken from the *app.js* file.
+ ```bash
+ export COSMOS_GREMLIN_ENDPOINT="<account-name>"
+ export COSMOS_GREMLIN_KEY="<account-key>"
+ ```
-This console app uses the open-source [Gremlin Node.js](https://www.npmjs.com/package/gremlin) driver.
+1. Validate that the environment variables were set correctly.
-* The Gremlin client is created.
+ ```bash
+ printenv COSMOS_GREMLIN_ENDPOINT
+ printenv COSMOS_GREMLIN_KEY
+ ```
- ```javascript
- const authenticator = new Gremlin.driver.auth.PlainTextSaslAuthenticator(
- `/dbs/${config.database}/colls/${config.collection}`,
- config.primaryKey
- )
+## Code examples
+- [Authenticate the client](#authenticate-the-client)
+- [Create vertices](#create-vertices)
+- [Create edges](#create-edges)
+- [Query vertices &amp; edges](#query-vertices--edges)
- const client = new Gremlin.driver.Client(
- config.endpoint,
- {
- authenticator,
- traversalsource : "g",
- rejectUnauthorized : true,
- mimeType : "application/vnd.gremlin-v2.0+json"
- }
- );
+The code in this article connects to a database named `cosmicworks` and a graph named `products`. The code then adds vertices and edges to the graph before traversing the added items.
- ```
+### Authenticate the client
- The configurations are all in *config.js*, which we edit in the [following section](#update-your-connection-string).
+Application requests to most Azure services must be authorized. For the API for Gremlin, use the *NAME* and *KEY* values obtained earlier in this quickstart.
-* A series of functions are defined to execute different Gremlin operations. This is one of them:
+1. Open the **app.js** file.
- ```javascript
- function addVertex1()
- {
- console.log('Running Add Vertex1');
- return client.submit("g.addV(label).property('id', id).property('firstName', firstName).property('age', age).property('userid', userid).property('pk', 'pk')", {
- label:"person",
- id:"thomas",
- firstName:"Thomas",
- age:44, userid: 1
- }).then(function (result) {
- console.log("Result: %s\n", JSON.stringify(result));
- });
- }
- ```
+1. Import the `gremlin` module.
-* Each function executes a `client.execute` method with a Gremlin query string parameter. Here is an example of how `g.V().count()` is executed:
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="imports":::
- ```javascript
- function countVertices()
- {
- console.log('Running Count');
- return client.submit("g.V().count()", { }).then(function (result) {
- console.log("Result: %s\n", JSON.stringify(result));
- });
- }
- ```
+1. Create `accountName` and `accountKey` variables. Store the `COSMOS_GREMLIN_ENDPOINT` and `COSMOS_GREMLIN_KEY` environment variables as the values for each respective variable.
-* At the end of the file, all methods are then invoked. This will execute them one after the other:
-
- ```javascript
- client.open()
- .then(dropGraph)
- .then(addVertex1)
- .then(addVertex2)
- .then(addEdge)
- .then(countVertices)
- .catch((err) => {
- console.error("Error running query...");
- console.error(err)
- }).then((res) => {
- client.close();
- finish();
- }).catch((err) =>
- console.error("Fatal error:", err)
- );
- ```
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="environment_variables":::
+1. Use `PlainTextSaslAuthenticator` to create a new object for the account's credentials.
-## Update your connection string
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="authenticate_client":::
-1. Open the *config.js* file.
+1. Use `Client` to connect using the remote server credentials and the **GraphSON 2.0** serializer. Then, use `Open` to create a new connection to the server.
-2. In *config.js*, fill in the `config.endpoint` key with the **Gremlin Endpoint** value from the **Overview** page of your Cosmos DB account in the Azure portal.
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="connect_client":::
- `config.endpoint = "https://<your_Gremlin_account_name>.gremlin.cosmosdb.azure.com:443/";`
+### Create vertices
- :::image type="content" source="./media/quickstart-nodejs/gremlin-uri.png" alt-text="View and copy an access key in the Azure portal, Overview page":::
+Now that the application is connected to the account, use the standard Gremlin syntax to create vertices.
-3. In *config.js*, fill in the config.primaryKey value with the **Primary Key** value from the **Keys** page of your Cosmos DB account in the Azure portal.
+1. Use `submit` to run a command server-side on the API for Gremlin account. Create a **product** vertex with the following properties:
- `config.primaryKey = "PRIMARYKEY";`
+ | | Value |
+   | --- | --- |
+ | **label** | `product` |
+ | **id** | `68719518371` |
+ | **`name`** | `Kiama classic surfboard` |
+ | **`price`** | `285.55` |
+ | **`category`** | `surfboards` |
- :::image type="content" source="./media/quickstart-nodejs/keys.png" alt-text="Azure portal keys blade":::
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="create_vertices_1":::
-4. Enter the database name, and graph (container) name for the value of config.database and config.collection.
+1. Create a second **product** vertex with these properties:
-Here's an example of what your completed *config.js* file should look like:
+ | | Value |
+   | --- | --- |
+ | **label** | `product` |
+ | **id** | `68719518403` |
+ | **`name`** | `Montau Turtle Surfboard` |
+ | **`price`** | `600.00` |
+ | **`category`** | `surfboards` |
-```javascript
-var config = {}
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="create_vertices_2":::
-// Note that this must include the protocol (HTTPS:// for .NET SDK URI or wss:// for Gremlin Endpoint) and the port number
-config.endpoint = "https://testgraphacct.gremlin.cosmosdb.azure.com:443/";
-config.primaryKey = "Pams6e7LEUS7LJ2Qk0fjZf3eGo65JdMWHmyn65i52w8ozPX2oxY3iP0yu05t9v1WymAHNcMwPIqNAEv3XDFsEg==";
-config.database = "graphdb"
-config.collection = "Persons"
+1. Create a third **product** vertex with these properties:
-module.exports = config;
-```
+ | | Value |
+   | --- | --- |
+ | **label** | `product` |
+ | **id** | `68719518409` |
+ | **`name`** | `Bondi Twin Surfboard` |
+ | **`price`** | `585.50` |
+ | **`category`** | `surfboards` |
-## Run the console app
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="create_vertices_3":::
-1. Open a terminal window and change (via `cd` command) to the installation directory for the *package.json* file that's included in the project.
+### Create edges
-2. Run `npm install` to install the required npm modules, including `gremlin`.
+Create edges using the Gremlin syntax to define relationships between vertices.
-3. Run `node app.js` in a terminal to start your node application.
+1. Create an edge from the `Montau Turtle Surfboard` product named **replaces** to the `Kiama classic surfboard` product.
-## Browse with Data Explorer
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="create_edges_1":::
-You can now go back to Data Explorer in the Azure portal to view, query, modify, and work with your new graph data.
+ > [!TIP]
+ > This edge definition uses the `g.V(['<partition-key>', '<id>'])` syntax. Alternatively, you can use `g.V('<id>').has('category', '<partition-key>')`.
-In Data Explorer, the new database appears in the **Graphs** pane. Expand the database, followed by the container, and then select **Graph**.
+1. Create another **replaces** edge from the same product to the `Bondi Twin Surfboard`.
-The data generated by the sample app is displayed in the next pane within the **Graph** tab when you select **Apply Filter**.
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="create_edges_2":::
-Try completing `g.V()` with `.has('firstName', 'Thomas')` to test the filter. Note that the value is case sensitive.
+### Query vertices &amp; edges
-## Review SLAs in the Azure portal
+Use the Gremlin syntax to traverse the graph and discover relationships between vertices.
+1. Traverse the graph and find all vertices that `Montau Turtle Surfboard` replaces.
+
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="query_vertices_edges":::
+
+1. Write to the console the result of this traversal.
+
+ :::code language="javascript" source="~/cosmos-db-apache-gremlin-javascript-samples/001-quickstart/app.js" id="output_vertices_edges":::
+
+## Run the code
+
+Validate that your application works as expected by running the application. The application should execute with no errors or warnings. The output of the application includes data about the created and queried items.
+
+1. Open the terminal in the Node.js project folder.
+
+1. Use `npm start` to run the application. Observe the output from the application.
+
+ ```bash
+ npm start
+ ```
-## Clean up your resources
+## Clean up resources
+When you no longer need the API for Gremlin account, delete the corresponding resource group.
-## Next steps
-In this article, you learned how to create an Azure Cosmos DB account, create a graph by using Data Explorer, and run a Node.js app to add data to the graph. You can now build more complex queries and implement powerful graph traversal logic by using Gremlin.
+## Next step
> [!div class="nextstepaction"]
-> [Query by using Gremlin](tutorial-query.md)
+> [Create and query data using Azure Cosmos DB for Apache Gremlin](tutorial-query.md)
cosmos-db Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-php.md
- Title: 'Quickstart: Gremlin API with PHP - Azure Cosmos DB'
-description: Follow this quickstart to run a PHP console application that populates an Azure Cosmos DB for Gremlin database in the Azure portal.
--- Previously updated : 06/29/2022----
-# Quickstart: Create an Azure Cosmos DB graph database with PHP and the Azure portal
--
-> [!div class="op_single_selector"]
-> * [Gremlin console](quickstart-console.md)
-> * [.NET](quickstart-dotnet.md)
-> * [Java](quickstart-java.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Python](quickstart-python.md)
-> * [PHP](quickstart-php.md)
->
-
-In this quickstart, you create and use an Azure Cosmos DB [Gremlin (Graph) API](introduction.md) database by using PHP and the Azure portal.
-
-Azure Cosmos DB is Microsoft's multi-model database service that lets you quickly create and query document, table, key-value, and graph databases, with global distribution and horizontal scale capabilities. Azure Cosmos DB provides five APIs: Core (SQL), MongoDB, Gremlin, Azure Table, and Cassandra.
-
-You must create a separate account to use each API. In this article, you create an account for the Gremlin (Graph) API.
-
-This quickstart walks you through the following steps:
--- Use the Azure portal to create an Azure Cosmos DB for Gremlin (Graph) API account and database.-- Clone a sample Gremlin API PHP console app from GitHub, and run it to populate your database.-- Use Data Explorer in the Azure portal to query, add, and connect data in your database.-
-## Prerequisites
--- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] Alternatively, you can [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb) without an Azure subscription.-- [PHP](https://php.net/) 5.6 or newer installed.-- [Composer](https://getcomposer.org/download) open-source dependency management tool for PHP installed.-
-## Create a Gremlin (Graph) database account
-
-First, create a Gremlin (Graph) database account for Azure Cosmos DB.
-
-1. In the [Azure portal](https://portal.azure.com), select **Create a resource** from the left menu.
-
- :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/create-nosql-db-databases-json-tutorial-0.png" alt-text="Screenshot of Create a resource in the Azure portal.":::
-
-1. On the **New** page, select **Databases** > **Azure Cosmos DB**.
-
-1. On the **Select API Option** page, under **Gremlin (Graph)**, select **Create**.
-
-1. On the **Create Azure Cosmos DB Account - Gremlin (Graph)** page, enter the following required settings for the new account:
-
- - **Subscription**: Select the Azure subscription that you want to use for this account.
- - **Resource Group**: Select **Create new**, then enter a unique name for the new resource group.
- - **Account Name**: Enter a unique name between 3-44 characters, using only lowercase letters, numbers, and hyphens. Your account URI is *gremlin.azure.com* appended to your unique account name.
- - **Location**: Select the Azure region to host your Azure Cosmos DB account. Use the location that's closest to your users to give them the fastest access to the data.
-
- :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/azure-cosmos-db-create-new-account.png" alt-text="Screenshot showing the Create Account page for Azure Cosmos DB for a Gremlin (Graph) account.":::
-
-1. For this quickstart, you can leave the other fields and tabs at their default values. Optionally, you can configure more details for the account. See [Optional account settings](#optional-account-settings).
-
-1. Select **Review + create**, and then select **Create**. Deployment takes a few minutes.
-
-1. When the **Your deployment is complete** message appears, select **Go to resource**.
-
- You go to the **Overview** page for the new Azure Cosmos DB account.
-
- :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/azure-cosmos-db-graph-created.png" alt-text="Screenshot showing the Azure Cosmos DB Quick start page.":::
-
-### Optional account settings
-
-Optionally, you can also configure the following settings on the **Create Azure Cosmos DB Account - Gremlin (Graph)** page.
--- On the **Basics** tab:-
- |Setting|Value|Description |
- ||||
- |**Capacity mode**|**Provisioned throughput** or **Serverless**|Select **Provisioned throughput** to create an account in [provisioned throughput](../set-throughput.md) mode. Select **Serverless** to create an account in [serverless](../serverless.md) mode.|
- |**Apply Azure Cosmos DB free tier discount**|**Apply** or **Do not apply**|With Azure Cosmos DB free tier, you get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/).|
-
- > [!NOTE]
- > You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you don't see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
-
-- On the **Global Distribution** tab:-
- |Setting|Value|Description |
- ||||
- |**Geo-redundancy**|**Enable** or **Disable**|Enable or disable global distribution on your account by pairing your region with a pair region. You can add more regions to your account later.|
- |**Multi-region Writes**|**Enable** or **Disable**|Multi-region writes capability allows you to take advantage of the provisioned throughput for your databases and containers across the globe.|
-
- > [!NOTE]
- > The following options aren't available if you select **Serverless** as the **Capacity mode**:
- > - **Apply Free Tier Discount**
- > - **Geo-redundancy**
- > - **Multi-region Writes**
--- Other tabs:-
- - **Networking**: Configure [access from a virtual network](../how-to-configure-vnet-service-endpoint.md).
- - **Backup Policy**: Configure either [periodic](../periodic-backup-restore-introduction.md) or [continuous](../provision-account-continuous-backup.md) backup policy.
- - **Encryption**: Use either a service-managed key or a [customer-managed key](../how-to-setup-cmk.md#create-a-new-azure-cosmos-account).
- - **Tags**: Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups.
-
-## Add a graph
-
-1. On the Azure Cosmos DB account **Overview** page, select **Add Graph**.
-
- :::image type="content" source="../includes/media/cosmos-db-create-dbaccount-graph/azure-cosmos-db-add-graph.png" alt-text="Screenshot showing the Add Graph on the Azure Cosmos DB account page.":::
-
-1. Fill out the **New Graph** form. For this quickstart, use the following values:
-
- - **Database id**: Enter *sample-database*. Database names must be between 1 and 255 characters, and can't contain `/ \ # ?` or a trailing space.
- - **Database Throughput**: Select **Manual**, so you can set the throughput to a low value.
- - **Database Max RU/s**: Change the throughput to *400* request units per second (RU/s). If you want to reduce latency, you can scale up throughput later.
- - **Graph id**: Enter *sample-graph*. Graph names have the same character requirements as database IDs.
- - **Partition key**: Enter */pk*. All Cosmos DB accounts need a partition key to horizontally scale. To learn how to select an appropriate partition key, see [Use a partitioned graph in Azure Cosmos DB](partitioning.md).
-
- :::image type="content" source="../includes/media/cosmos-db-create-graph/azure-cosmosdb-data-explorer-graph.png" alt-text="Screenshot showing the Azure Cosmos DB Data Explorer, New Graph page.":::
-
-1. Select **OK**. The new graph database is created.
-
-### Get the connection keys
-
-Get the Azure Cosmos DB account connection keys to use later in this quickstart.
-
-1. On the Azure Cosmos DB account page, select **Keys** under **Settings** in the left navigation.
-
-1. Copy and save the following values to use later in the quickstart:
-
- - The first part (Azure Cosmos DB account name) of the **.NET SDK URI**.
- - The **PRIMARY KEY** value.
-
- :::image type="content" source="media/quickstart-php/keys.png" alt-text="Screenshot that shows the access keys for the Azure Cosmos DB account.":::
--
-## Clone the sample application
-
-Now, switch to working with code. Clone a Gremlin API app from GitHub, set the connection string, and run the app to see how easy it is to work with data programmatically.
-
-1. In git terminal window, such as git bash, create a new folder named *git-samples*.
-
- ```bash
- mkdir "C:\git-samples"
- ```
-
-1. Switch to the new folder.
-
- ```bash
- cd "C:\git-samples"
- ```
-
-1. Run the following command to clone the sample repository and create a copy of the sample app on your computer.
-
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-php-getting-started.git
- ```
-
-Optionally, you can now review the PHP code you cloned. Otherwise, go to [Update your connection information](#update-your-connection-information).
-
-### Review the code
-
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. The snippets are all taken from the *connect.php* file in the *C:\git-samples\azure-cosmos-db-graph-php-getting-started* folder.
--- The Gremlin `connection` is initialized in the beginning of the `connect.php` file, using the `$db` object.-
- ```php
- $db = new Connection([
- 'host' => '<your_server_address>.graphs.azure.com',
- 'username' => '/dbs/<db>/colls/<coll>',
- 'password' => 'your_primary_key'
- ,'port' => '443'
-
- // Required parameter
- ,'ssl' => TRUE
- ]);
- ```
--- A series of Gremlin steps execute, using the `$db->send($query);` method.-
- ```php
- $query = "g.V().drop()";
- ...
- $result = $db->send($query);
- $errors = array_filter($result);
- }
- ```
-
-## Update your connection information
-
-1. Open the *connect.php* file in the *C:\git-samples\azure-cosmos-db-graph-php-getting-started* folder.
-
-1. In the `host` parameter, replace `<your_server_address>` with the Azure Cosmos DB account name value you saved from the Azure portal.
-
-1. In the `username` parameter, replace `<db>` and `<coll>` with your database and graph name. If you used the recommended values of `sample-database` and `sample-graph`, it should look like the following code:
-
- `'username' => '/dbs/sample-database/colls/sample-graph'`
-
-1. In the `password` parameter, replace `your_primary_key` with the PRIMARY KEY value you saved from the Azure portal.
-
- The `Connection` object initialization should now look like the following code:
-
- ```php
- $db = new Connection([
- 'host' => 'testgraphacct.graphs.azure.com',
- 'username' => '/dbs/sample-database/colls/sample-graph',
- 'password' => '2Ggkr662ifxz2Mg==',
- 'port' => '443'
-
- // Required parameter
- ,'ssl' => TRUE
- ]);
- ```
-
-1. Save the *connect.php* file.
-
-## Run the console app
-
-1. In the git terminal window, `cd` to the *azure-cosmos-db-graph-php-getting-started* folder.
-
- ```git
- cd "C:\git-samples\azure-cosmos-db-graph-php-getting-started"
- ```
-
-1. Use the following command to install the required PHP dependencies.
-
- ```
- composer install
- ```
-
-1. Use the following command to start the PHP application.
-
- ```
- php connect.php
- ```
-
- The terminal window displays the vertices being added to the graph.
-
- If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
-
- Once the program stops, press Enter.
-
-<a id="add-sample-data"></a>
-## Review and add sample data
-
-You can now go back to Data Explorer in the Azure portal, see the vertices added to the graph, and add more data points.
-
-1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-database** and **sample-graph**, select **Graph**, and then select **Execute Gremlin Query**.
-
- :::image type="content" source="./media/quickstart-php/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot that shows Graph selected with the option to Execute Gremlin Query.":::
-
-1. In the **Results** list, notice the new users added to the graph. Select **ben**, and notice that they're connected to **robin**. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
-
- :::image type="content" source="./media/quickstart-php/azure-cosmosdb-graph-explorer-new.png" alt-text="Screenshot that shows new vertices in the graph in Data Explorer.":::
-
-1. Add a new user. Select the **New Vertex** button to add data to your graph.
-
- :::image type="content" source="./media/quickstart-php/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot that shows the New Vertex pane where you can enter values.":::
-
-1. Enter a label of *person*.
-
-1. Select **Add property** to add each of the following properties. You can create unique properties for each person in your graph. Only the **id** key is required.
-
- Key | Value | Notes
- -|-|-
- **id** | ashley | The unique identifier for the vertex. If you don't specify an id, one is generated for you.
- **gender** | female |
- **tech** | java |
-
- > [!NOTE]
- > In this quickstart you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
-
-1. Select **OK**.
-
-1. Select **New Vertex** again and add another new user.
-
-1. Enter a label of *person*.
-
-1. Select **Add property** to add each of the following properties:
-
- Key | Value | Notes
- -|-|-
- **id** | rakesh | The unique identifier for the vertex. If you don't specify an id, one is generated for you.
- **gender** | male |
- **school** | MIT |
-
-1. Select **OK**.
-
-1. Select **Execute Gremlin Query** with the default `g.V()` filter to display all the values in the graph. All the users now show in the **Results** list.
-
- As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change to a different [graph query](tutorial-query.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Execute Gremlin Query** to display all the results again.
-
-1. Now you can connect rakesh and ashley. Ensure **ashley** is selected in the **Results** list, then select the edit icon next to **Targets** at lower right.
-
- :::image type="content" source="./media/quickstart-php/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Screenshot that shows changing the target of a vertex in a graph.":::
-
-1. In the **Target** box, type *rakesh*, and in the **Edge label** box type *knows*, and then select the check mark.
-
- :::image type="content" source="./media/quickstart-php/azure-cosmosdb-data-explorer-set-target.png" alt-text="Screenshot that shows adding a connection between ashley and rakesh in Data Explorer.":::
-
-1. Now select **rakesh** from the results list, and see that ashley and rakesh are connected.
-
- :::image type="content" source="./media/quickstart-php/azure-cosmosdb-graph-explorer.png" alt-text="Screenshot that shows two vertices connected in Data Explorer.":::
-
-You've completed the resource creation part of this quickstart. You can continue to add vertexes to your graph, modify the existing vertexes, or change the queries.
-
-You can review the metrics that Azure Cosmos DB provides, and then clean up the resources you created.
-
-## Review SLAs in the Azure portal
--
-## Clean up resources
--
-This action deletes the resource group and all resources within it, including the Azure Cosmos DB for Gremlin (Graph) account and database.
-
-## Next steps
-
-In this quickstart, you learned how to create an Azure Cosmos DB for Gremlin (Graph) account and database, clone and run a PHP app, and work with your database using the Data Explorer. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
-
-> [!div class="nextstepaction"]
-> [Query using Gremlin](tutorial-query.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-python.md
Title: 'Quickstart: Gremlin API with Python - Azure Cosmos DB'
-description: This quickstart shows how to use the Azure Cosmos DB for Gremlin to create a console application with the Azure portal and Python
--- Previously updated : 02/14/2023
+ Title: 'Quickstart: Gremlin library for Python'
+
+description: In this quickstart, connect to Azure Cosmos DB for Apache Gremlin using Python. Then, create and traverse vertices and edges.
-++++ Last updated : 09/27/2023
+# CustomerIntent: As a Python developer, I want to use a library for my programming language so that I can create and traverse vertices and edges in code.
-# Quickstart: Create a graph database in Azure Cosmos DB using Python and the Azure portal
+
+# Quickstart: Azure Cosmos DB for Apache Gremlin library for Python
+ [!INCLUDE[Gremlin](../includes/appliesto-gremlin.md)]
-> [!div class="op_single_selector"]
-> * [Gremlin console](quickstart-console.md)
-> * [.NET](quickstart-dotnet.md)
-> * [Java](quickstart-java.md)
-> * [Node.js](quickstart-nodejs.md)
-> * [Python](quickstart-python.md)
-> * [PHP](quickstart-php.md)
->
-In this quickstart, you create and manage an Azure Cosmos DB for Gremlin (graph) API account from the Azure portal, and add data by using a Python app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+Azure Cosmos DB for Apache Gremlin is a fully managed graph database service that implements the popular [Apache TinkerPop](https://tinkerpop.apache.org/) graph computing framework, which uses the Gremlin query language. The API for Gremlin gives you a low-friction way to get started using Gremlin with a service that can grow and scale out as much as you need with minimal management.
-## Prerequisites
-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.-- [Python 3.7+](https://github.com/Azure/azure-sdk-for-python/wiki/Azure-SDKs-Python-version-support-policy) including [pip](https://pip.pypa.io/en/stable/installing/) package installer.-- [Python Driver for Gremlin](https://github.com/apache/tinkerpop/tree/master/gremlin-python).
+In this quickstart, you use the `gremlinpython` library to connect to a newly created Azure Cosmos DB for Gremlin account.
- You can also install the Python driver for Gremlin by using the `pip` command line:
+[Library source code](https://github.com/apache/tinkerpop/tree/master/gremlin-python/src/main/python) | [Package (PyPI)](https://pypi.org/project/gremlinpython/)
- ```bash
- pip install gremlinpython==3.7.*
- ```
+## Prerequisites
-- [Git](https://git-scm.com/downloads).
+- An Azure account with an active subscription.
+ - No Azure subscription? [Sign up for a free Azure account](https://azure.microsoft.com/free/).
+ - Don't want an Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no subscription required.
+- [Python (latest)](https://www.python.org/)
+ - Don't have Python installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1).
+- [Azure Command-Line Interface (CLI)](/cli/azure/)
-## Create a database account
-Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
+## Setting up
+This section walks you through creating an API for Gremlin account and setting up a Python project to use the library to connect to the account.
-## Add a graph
+### Create an API for Gremlin account
+The API for Gremlin account must be created before you use the Python library. It also helps to have the database and graph already in place.
-## Clone the sample application
-Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it's to work with data programmatically.
+### Create a new Python console application
-1. Run the following command to clone the sample repository to your local machine. This command creates a copy of the sample app on your computer. Start at in the root of the folder where you typically store GitHub repositories.
+Create a Python console application in an empty folder using your preferred terminal.
- ```bash
- git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-python-getting-started.git
- ```
+1. Open your terminal in an empty folder.
-1. Change to the directory where the sample app is located.
+1. Create the **app.py** file.
```bash
- cd azure-cosmos-db-graph-python-getting-started
+ touch app.py
```
-## Review the code
+### Install the PyPI package
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. The snippets are all taken from the *connect.py* file in the repo [git-samples\azure-cosmos-db-graph-python-getting-started](https://github.com/Azure-Samples/azure-cosmos-db-graph-python-getting-started). Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-information).
+Add the `gremlinpython` PyPI package to the Python project.
-* The Gremlin `client` is initialized in *connect.py* with `client.Client()`. Make sure to replace `<YOUR_DATABASE>` and `<YOUR_CONTAINER_OR_GRAPH>` with the values of your account's database name and graph name:
+1. Create the **requirements.txt** file.
- ```python
- ...
- client = client.Client('wss://<YOUR_ENDPOINT>.gremlin.cosmosdb.azure.com:443/','g',
- username="/dbs/<YOUR_DATABASE>/colls/<YOUR_CONTAINER_OR_GRAPH>",
- password="<YOUR_PASSWORD>")
- ...
+ ```bash
+ touch requirements.txt
```
-* A series of Gremlin steps (queries) are declared at the beginning of the *connect.py* file. They're then executed using the `client.submitAsync()` method. For example, to run the cleanup graph step, you'd use the following code:
+1. Add the `gremlinpython` package from the Python Package Index to the requirements file.
- ```python
- client.submitAsync(_gremlin_cleanup_graph)
+ ```requirements
+ gremlinpython==3.7.0
```
-## Update your connection information
-
-Now go back to the Azure portal to get your connection information and copy it into the app. These settings enable your app to communicate with your hosted database.
+1. Install all the requirements to your project.
-1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys**.
+ ```bash
+ pip install -r requirements.txt
+ ```
- Copy the first portion of the **URI** value.
+### Configure environment variables
- :::image type="content" source="./media/quickstart-python/keys.png" alt-text="View and copy an access key in the Azure portal, Keys page":::
+To use the *NAME* and *KEY* values obtained earlier in this quickstart, persist them to new environment variables on the local machine running the application.
-2. Open the *connect.py* file, find the `client.Client()` definition, and paste the URI value over `<YOUR_ENDPOINT>` in here:
+1. To set the environment variables, use your terminal to persist the values as `COSMOS_GREMLIN_ENDPOINT` and `COSMOS_GREMLIN_KEY` respectively.
- ```python
- client = client.Client('wss://<YOUR_ENDPOINT>.gremlin.cosmosdb.azure.com:443/','g',
- username="/dbs/<YOUR_DATABASE>/colls/<YOUR_COLLECTION_OR_GRAPH>",
- password="<YOUR_PASSWORD>")
+ ```bash
+ export COSMOS_GREMLIN_ENDPOINT="<account-name>"
+ export COSMOS_GREMLIN_KEY="<account-key>"
```
- The URI portion of the client object should now look similar to this code:
+1. Validate that the environment variables were set correctly.
- ```python
- client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
- username="/dbs/<YOUR_DATABASE>/colls/<YOUR_COLLECTION_OR_GRAPH>",
- password="<YOUR_PASSWORD>")
+ ```bash
+ printenv COSMOS_GREMLIN_ENDPOINT
+ printenv COSMOS_GREMLIN_KEY
```
-3. Change the second parameter of the `client` object to replace the `<YOUR_DATABASE>` and `<YOUR_COLLECTION_OR_GRAPH>` strings. If you used the suggested values, the parameter should look like this code:
+## Code examples
- `username="/dbs/sample-database/colls/sample-graph"`
+- [Authenticate the client](#authenticate-the-client)
+- [Create vertices](#create-vertices)
+- [Create edges](#create-edges)
+- [Query vertices &amp; edges](#query-vertices--edges)
- The entire `client` object should now look like this code:
+The code in this article connects to a database named `cosmicworks` and a graph named `products`. The code then adds vertices and edges to the graph before traversing the added items.
- ```python
- client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
- username="/dbs/sample-database/colls/sample-graph",
- password="<YOUR_PASSWORD>")
- ```
-
-4. On the **Keys** page, use the copy button to copy the **PRIMARY KEY** and paste it over `<YOUR_PASSWORD>` in the `password=<YOUR_PASSWORD>` parameter.
+### Authenticate the client
- The `client` object definition should now look similar to the following:
- ```python
- client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
- username="/dbs/sample-database/colls/sample-graph",
- password="asdb13Fadsf14FASc22Ggkr662ifxz2Mg==")
- ```
+Application requests to most Azure services must be authorized. For the API for Gremlin, use the *NAME* and *KEY* values obtained earlier in this quickstart.
-6. Save the *connect.py* file.
+1. Open the **app.py** file.
-## Run the console app
+1. Import `client` and `serializer` from the `gremlin_python.driver` module.
-1. Start in a terminal window in the root of the folder where you cloned the sample app. If you are using Visual Studio Code, you can open a terminal window by selecting **Terminal** > **New Terminal**. Typically, you'll create a virtual environment to run the code. For more information, see [Python virtual environments](https://docs.python.org/3/tutorial/venv.html).
+ :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="imports":::
- ```bash
- cd azure-cosmos-db-graph-python-getting-started
- ```
+ > [!WARNING]
+ > Depending on your version of Python, you may also need to import `asyncio` and override the event loop policy:
+ >
+ > :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="import_async_bug_fix":::
+ >
-1. Install the required Python packages.
+1. Create `ACCOUNT_NAME` and `ACCOUNT_KEY` variables. Store the `COSMOS_GREMLIN_ENDPOINT` and `COSMOS_GREMLIN_KEY` environment variables as the values for each respective variable.
- ```
- pip install -r requirements.txt
- ```
+ :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="environment_variables":::
-1. Start the Python application.
-
- ```
- python connect.py
- ```
+1. Use `Client` to connect using the account's credentials and the **GraphSON 2.0** serializer.
- The terminal window displays the vertices and edges being added to the graph.
-
- If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
-
- Once the program stops, press Enter, then switch back to the Azure portal in your internet browser.
+ :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="authenticate_connect_client":::
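The `:::code:::` directives above pull the snippet from a separate samples repository, so the exact code isn't reproduced in this digest. As a minimal sketch of the authentication step, assuming the `cosmicworks` database and `products` graph named earlier, a `gremlin_client` variable name, and the common `*.gremlin.cosmosdb.azure.com` host suffix (the suffix for your account may differ):

```python
import os

from gremlin_python.driver import client, serializer

# Values persisted earlier as environment variables.
ACCOUNT_NAME = os.environ["COSMOS_GREMLIN_ENDPOINT"]
ACCOUNT_KEY = os.environ["COSMOS_GREMLIN_KEY"]

# Connect with the account's credentials and the GraphSON 2.0 serializer.
gremlin_client = client.Client(
    f"wss://{ACCOUNT_NAME}.gremlin.cosmosdb.azure.com:443/",
    "g",
    username="/dbs/cosmicworks/colls/products",
    password=ACCOUNT_KEY,
    message_serializer=serializer.GraphSONSerializersV2d0(),
)
```

The `username` value follows the `/dbs/<database>/colls/<graph>` convention that the API for Gremlin expects.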
-<a id="add-sample-data"></a>
-## Review and add sample data
+### Create vertices
-After the vertices and edges are inserted, you can now go back to Data Explorer and see the vertices added to the graph, and add more data points.
+Now that the application is connected to the account, use the standard Gremlin syntax to create vertices.
-1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-database**, expand **sample-graph**, select **Graph**, and then select **Execute Gremlin Query**.
+1. Use `submit` to run a command server-side on the API for Gremlin account. Create a **product** vertex with the following properties:
- :::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Execute Gremlin Query.":::
+ | | Value |
+ | --- | --- |
+ | **label** | `product` |
+ | **id** | `68719518371` |
+ | **`name`** | `Kiama classic surfboard` |
+ | **`price`** | `285.55` |
+ | **`category`** | `surfboards` |
-2. In the **Results** list, notice three new users are added to the graph. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
+ :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="create_vertices_1":::
- :::image type="content" source="./media/quickstart-python/azure-cosmosdb-graph-explorer-new.png" alt-text="New vertices in the graph in Data Explorer in the Azure portal":::
+1. Create a second **product** vertex with these properties:
-3. Let's add a few new users. Select the **New Vertex** button to add data to your graph.
+ | | Value |
+ | --- | --- |
+ | **label** | `product` |
+ | **id** | `68719518403` |
+ | **`name`** | `Montau Turtle Surfboard` |
+ | **`price`** | `600.00` |
+ | **`category`** | `surfboards` |
- :::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot shows the New Vertex pane where you can enter values.":::
+ :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="create_vertices_2":::
-4. Enter a label of *person*.
+1. Create a third **product** vertex with these properties:
-5. Select **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the ID key is required.
+ | | Value |
+ | --- | --- |
+ | **label** | `product` |
+ | **id** | `68719518409` |
+ | **`name`** | `Bondi Twin Surfboard` |
+ | **`price`** | `585.50` |
+ | **`category`** | `surfboards` |
- key|value|Notes
- -|-|-
- pk|/pk|
- id|ashley|The unique identifier for the vertex. If you don't specify an ID, one is generated for you.
- gender|female|
- tech | java |
+ :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="create_vertices_3":::
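Taken together, a hedged sketch of those three `submit` calls, reusing the `gremlin_client` object from the earlier sketch and assuming `/category` is the graph's partition key path:

```python
# addV traversals for the three product vertices described above.
insert_vertex_queries = [
    "g.addV('product')"
    ".property('id', '68719518371').property('name', 'Kiama classic surfboard')"
    ".property('price', 285.55).property('category', 'surfboards')",
    "g.addV('product')"
    ".property('id', '68719518403').property('name', 'Montau Turtle Surfboard')"
    ".property('price', 600.00).property('category', 'surfboards')",
    "g.addV('product')"
    ".property('id', '68719518409').property('name', 'Bondi Twin Surfboard')"
    ".property('price', 585.50).property('category', 'surfboards')",
]

for query in insert_vertex_queries:
    # submit() runs the traversal server-side; all().result() blocks until it completes.
    gremlin_client.submit(query).all().result()
```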
- > [!NOTE]
- > In this quickstart create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
+### Create edges
-6. Select **OK**. You may need to expand your screen to see **OK** on the bottom of the screen.
+Create edges using the Gremlin syntax to define relationships between vertices.
-7. Select **New Vertex** again and add another new user.
+1. Create an edge named **replaces** from the `Montau Turtle Surfboard` product to the `Kiama classic surfboard` product.
-8. Enter a label of *person*.
+ :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="create_edges_1":::
-9. Select **Add property** to add each of the following properties:
+ > [!TIP]
+ > This edge definition uses the `g.V(['<partition-key>', '<id>'])` syntax. Alternatively, you can use `g.V('<id>').has('category', '<partition-key>')`.
- key|value|Notes
- -|-|-
- pk|/pk|
- id|rakesh|The unique identifier for the vertex. If you don't specify an ID, one is generated for you.
- gender|male|
- school|MIT|
+1. Create another **replaces** edge from the same product to the `Bondi Twin Surfboard`.
-10. Select **OK**.
+ :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="create_edges_2":::
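A similar sketch of the two edge-creation calls, using the `g.V(['<partition-key>', '<id>'])` addressing from the tip above and the same `gremlin_client` assumption:

```python
# addE traversals connecting the Montau Turtle Surfboard to the vertices it replaces.
insert_edge_queries = [
    "g.V(['surfboards', '68719518403'])"
    ".addE('replaces').to(g.V(['surfboards', '68719518371']))",
    "g.V(['surfboards', '68719518403'])"
    ".addE('replaces').to(g.V(['surfboards', '68719518409']))",
]

for query in insert_edge_queries:
    gremlin_client.submit(query).all().result()
```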
-11. Select the **Execute Gremlin Query** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
+### Query vertices &amp; edges
- As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Execute Gremlin Query** to display all the results again.
+Use the Gremlin syntax to traverse the graph and discover relationships between vertices.
-12. Now we can connect **rakesh** and **ashley**. Ensure **ashley** is selected in the **Results** list, then select the edit button next to **Targets** on lower right side. You may need to widen your window to see the **Properties** area.
+1. Traverse the graph and find all vertices that `Montau Turtle Surfboard` replaces.
- :::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Change the target of a vertex in a graph":::
+ :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="query_vertices_edges":::
-13. In the **Target** box type *rakesh*, and in the **Edge label** box type *knows*, and then select the check.
+1. Write the result of this traversal to the console.
- :::image type="content" source="./media/quickstart-python/azure-cosmosdb-data-explorer-set-target.png" alt-text="Add a connection between ashley and rakesh in Data Explorer":::
+ :::code language="python" source="~/cosmos-db-apache-gremlin-python-samples/001-quickstart/app.py" id="output_vertices_edges":::
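A sketch of the traversal and console output, under the same assumptions as the earlier sketches:

```python
# Find the names of all products that the Montau Turtle Surfboard replaces.
query = "g.V(['surfboards', '68719518403']).out('replaces').values('name')"

results = gremlin_client.submit(query).all().result()
for name in results:
    print(name)
```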
-14. Now select **rakesh** from the results list and see that ashley and rakesh are connected.
+## Run the code
- :::image type="content" source="./media/quickstart-python/azure-cosmosdb-graph-explorer.png" alt-text="Two vertices connected in Data Explorer":::
+Run the application to validate that it works as expected. It should execute without errors or warnings, and its output includes data about the created and queried items.
-That completes the resource creation part of this tutorial. You can continue to add vertexes to your graph, modify the existing vertexes, or change the queries. Now let's review the metrics Azure Cosmos DB provides, and then clean up the resources.
+1. Open the terminal in the Python project folder.
-## Review SLAs in the Azure portal
+1. Use `python <filename>` to run the application. Observe the output from the application.
+ ```bash
+ python app.py
+ ```
## Clean up resources
+When you no longer need the API for Gremlin account, delete the corresponding resource group.
-## Next steps
-In this quickstart, you learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run a Python app to add data to the graph. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+## Next step
> [!div class="nextstepaction"]
-> [Query using Gremlin](tutorial-query.md)
+> [Create and query data using Azure Cosmos DB for Apache Gremlin](tutorial-query.md)
cosmos-db Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/indexing.md
description: This article presents an overview of Azure Cosmos DB indexing capab
ms.devlang: javascript-+ Last updated 12/2/2022
In the preceding example, omitting the ```"university":1``` clause returns an er
Unique indexes need to be created while the collection is empty.
-Unique indexes on nested fields are not supported by default due to limiations with arrays. If your nested field does not contain an array, the index will work as intended. If your nested field contains an array (anywhere on the path), that value will be ignored in the unique index and uniqueness wil not be preserved for that value.
+Unique indexes on nested fields are not supported by default due to limitations with arrays. If your nested field does not contain an array, the index will work as intended. If your nested field contains an array (anywhere on the path), that value will be ignored in the unique index and uniqueness will not be preserved for that value.
For example a unique index on people.tom.age will work in this case since there's no array on the path: ```javascript { "people": { "tom": { "age": "25" }, "mark": { "age": "30" } } } ```
-but won't won't work in this case since there's an array in the path:
+but won't work in this case since there's an array in the path:
```javascript { "people": { "tom": [ { "age": "25" } ], "mark": [ { "age": "30" } ] } } ```
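For illustration only, a minimal PyMongo sketch of creating such a unique index; the connection string and database/collection names are placeholders, and the collection must still be empty when the index is created:

```python
from pymongo import ASCENDING, MongoClient

# Placeholder connection string for an Azure Cosmos DB for MongoDB account.
client = MongoClient("<your-cosmos-db-for-mongodb-connection-string>")
collection = client["sample-database"]["sample-collection"]

# Unique index on a nested, non-array path. If an array appears anywhere on the
# path, that value is skipped and uniqueness isn't enforced for it.
collection.create_index([("people.tom.age", ASCENDING)], unique=True)
```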
cosmos-db Best Practices Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practices-javascript.md
+ Last updated 09/11/2023
cosmos-db Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-backup.md
-+ Previously updated : 07/10/2023 Last updated : 09/17/2023 # Backup and restore in Azure Cosmos DB for PostgreSQL
Last updated 07/10/2023
[!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] Azure Cosmos DB for PostgreSQL automatically creates
-backups of each node and stores them in locally redundant storage. Backups can
+backups of each node in a cluster. Backups can
be used to restore your cluster to a specified time - point-in-time restore (PITR). Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion.
server to any point in time within the retention period. (The retention period
is currently 35 days for all clusters.) All backups are encrypted using AES 256-bit encryption.
-In Azure regions that support availability zones, backup snapshots and WAL files are stored
-in three availability zones. As long as at least one availability zone is
-online, the cluster is restorable.
- Backup files can't be exported. They may only be used for restore operations in Azure Cosmos DB for PostgreSQL.
+### Backup redundancy
+
+Azure Cosmos DB for PostgreSQL supports the following backup redundancy options.
+
+* Same region backup
+ * Zone-redundant backup storage: This option is automatically chosen for regions that support availability zones. When the backups are stored in zone-redundant backup storage, in addition to multiple copies of data stored within the availability zone where each cluster's node is hosted, the data is also replicated to other availability zones.
+
+ * Locally redundant backup storage: This option is automatically chosen for regions that don't support availability zones. When the backups are stored in locally redundant backup storage, multiple copies of backups are stored in the same region.
+
+* Cross-region backup (in preview)
+ * Geo-redundant backup storage: You can choose this option at the time of cluster creation. When the backups are stored in geo-redundant backup storage, in addition to three copies of data stored within the region where your cluster is hosted, the data is also replicated to another region.
+
+Geo-redundant backup is supported in the following Azure regions.
+
+| Cluster's region | Geo-backup stored in |
+|--|--|
+| Canada Central | Canada East |
+| Central US | East US 2 |
+| East Asia | Southeast Asia |
+| East US | West US |
+| East US 2 | Central US |
+| Japan East | Japan West |
+| Japan West | Japan East |
+| North Central US | South Central US |
+| North Europe | West Europe |
+| South Central US | North Central US |
+| Southeast Asia | East Asia |
+| Switzerland North | Switzerland West |
+| Switzerland West | Switzerland North |
+| West Central US | West US 2 |
+| West Europe | North Europe |
+| West US | East US |
+| West US 2 | West Central US |
+
+> [!IMPORTANT]
+> Geo-redundant backup and restore in Azure Cosmos DB for PostgreSQL is currently in preview.
+> This preview version is provided without a service level agreement, and it's not recommended
+> for production workloads. Certain features might not be supported or might have constrained
+> capabilities.
+ ### Backup storage cost For current backup storage pricing, see the Azure Cosmos DB for PostgreSQL
For current backup storage pricing, see the Azure Cosmos DB for PostgreSQL
## Restore You can restore a cluster to any point in time within
-the last 35 days. Point-in-time restore is useful in multiple scenarios. For
+the last 35 days. Point-in-time restore is useful in multiple scenarios. For
example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data.
database, or if an application accidentally overwrites good data with bad data.
> open a support request to restore the cluster to a point that is earlier > than the latest failover time.
-When all nodes are up and running, you can restore cluster without any data loss. In an extremely rare case of a node experiencing a catastrophic event (and [high availability](./concepts-high-availability.md) isn't enabled on the cluster), you may lose up to 5 minutes of data.
+For same-region restore, when all nodes are up and running, you can restore the cluster without any data loss. In an extremely rare case of a node experiencing a catastrophic event (and [high availability](./concepts-high-availability.md) isn't enabled on the cluster), you may lose up to 5 minutes of data.
+
+On clusters with geo-backup enabled, restore can be performed in the remote region or in the same region where the cluster is located.
> [!IMPORTANT] > Deleted clusters can't be restored. If you delete the
When all nodes are up and running, you can restore cluster without any data loss
> accidental deletion or unexpected changes, administrators can leverage > [management locks](../../azure-resource-manager/management/lock-resources.md).
-The restore process creates a new cluster in the same Azure region,
+The restore process creates a new cluster in the same or remote Azure region,
subscription, and resource group as the original. The cluster has the original's configuration: the same number of nodes, number of vCores, storage size, user roles, PostgreSQL version, and version of the Citus extension.
In most cases, cluster restore takes up to 1 hour.
* See the steps to [restore a cluster](howto-restore-portal.md) in the Azure portal.
+* See [backup and restore limits and limitations](./reference-limits.md#backup-and-restore).
* Learn about [Azure availability zones](../../availability-zones/az-overview.md).
cosmos-db Concepts Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-customer-managed-keys.md
Last updated 04/06/2023
-# Data Encryption with Customer Managed Keys Preview
+# Data Encryption with Customer Managed Keys
[!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)]
cosmos-db How To Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-customer-managed-keys.md
Last updated 05/16/2023
-# Enable data encryption with customer-managed keys (preview) in Azure Cosmos DB for PostgreSQL
+# Enable data encryption with customer-managed keys in Azure Cosmos DB for PostgreSQL
[!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)]
Make sure <b>Vault access policy</b> is selected under Permission model and then
1. Select the Key created in the previous step, and then select Review+create.
+ 1. Verify that CMK encryption is enabled by navigating to the Data Encryption blade of the Cosmos DB for PostgreSQL cluster in the Azure portal.
+ 1. Verify that CMK is encryption is enabled by Navigating to the Data Encryption blade of the Cosmos DB for PostgreSQL cluster in the Azure portal.
![Screenshot of data encryption tab.](media/how-to-customer-managed-keys/data-encryption-tab-note.png) > [!NOTE]
Encryption configuration can be changed from service managed encryption to CMK e
1. Navigate to the Data Encryption blade, and select Initiate restore operation. Alternatively, you can perform PITR by selecting the Restore option in the overview blade. [ ![Screenshot of PITR.](media/how-to-customer-managed-keys/point-in-time-restore.png)](media/how-to-customer-managed-keys/point-in-time-restore.png#lightbox)
+ 1. You can change or configure data encryption from the Encryption tab.
+ 1. You can change/configure the Data Encryption from the Encryption Tab.
# [ARM Template](#tab/arm)
cosmos-db Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-read-replicas-portal.md
To create a read replica, follow these steps:
4. Under **Cluster name**, enter a name for the read replica.
-5. Select a value from the **Location (preview)** drop-down.
+5. Select a value from the **Location** drop-down.
6. Select **OK**.
After the read replica is created, you can see it listed on the **Replicate data
> replica setting to an equal or greater value. This action helps the replica > keep up with any changes made to the master.
+## Promote a read replica
+
+To [promote a cluster read replica](./concepts-read-replicas.md#replica-promotion-to-independent-cluster) to an independent read-write cluster, follow these steps:
+
+1. Select the read replica you would like to promote in the portal.
+
+2. On the cluster sidebar, under **Cluster management**, select
+ **Replicate data globally**.
+
+3. On the **Replicate data globally** page, find the read replica in the list of clusters under the map and select the promote icon.
+
+4. On the **Promote \<cluster name>** screen, double-check the read replica's name, confirm that you understand that promotion is irreversible by selecting the checkbox, and then select **Promote**.
+
+After the read replica is promoted, it becomes an independent readable and writable cluster with the same connection string.
+ ## Delete a primary cluster To delete a primary cluster, you use the same steps as to delete a
cosmos-db Howto Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-restore-portal.md
Title: Restore - Azure Cosmos DB for PostgreSQL - Azure portal description: See how to perform restore operations in Azure Cosmos DB for PostgreSQL through the Azure portal.--++ -+ Previously updated : 06/12/2023 Last updated : 09/17/2023
-# Point-in-time restore of a cluster in Azure Cosmos DB for PostgreSQL
+# Backup and point-in-time restore of a cluster in Azure Cosmos DB for PostgreSQL
[!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)]
-This article provides step-by-step procedures to perform [point-in-time
+> [!IMPORTANT]
+> Geo-redundant backup and restore in Azure Cosmos DB for PostgreSQL is currently in preview.
+> This preview version is provided without a service level agreement, and it's not recommended
+> for production workloads. Certain features might not be supported or might have constrained
+> capabilities.
+
+This article provides step-by-step procedures to select a backup type, check the type of backup enabled on a cluster, and perform [point-in-time
recoveries](concepts-backup.md#restore) for a cluster using backups. You can restore either to the earliest backup or to a custom restore point within your retention period.
+> [!NOTE]
+> While cluster backups are always stored for 35 days, you may need to
+> open a support request to restore the cluster to a point that is earlier
+> than the latest failover time.
+
+## Select type of cluster backup
+Enabling geo-redundant backup is possible during cluster creation, on the **Scale** screen that you can access from the **Basics** tab. Select **Save** to apply your selection.
+
+> [!NOTE]
+> Geo-redundant backup can be enabled only during cluster creation.
+> You can't disable geo-redundant backup once the cluster is created.
+
+## Confirm type of backup
+To check what type of backup is enabled on a cluster, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com/), select an existing Azure Cosmos DB for PostgreSQL cluster.
+1. On the **Overview** page, check the **Backup** field in the **Essentials** section.
+
+The **Backup** field values can be **Locally redundant** or **Zone redundant** for same-region cluster backup, or **Geo-redundant** for backup stored in another Azure region.
+ ## Restore to the earliest restore point Follow these steps to restore your cluster to its earliest existing backup.
-1. In the [Azure portal](https://portal.azure.com/), from the **Overview** page of the cluster you want to restore, select **Restore**.
+1. In the [Azure portal](https://portal.azure.com/), from the **Overview** page of the cluster you want to restore, select **Restore**.
1. On the **Restore** page, select the **Earliest** restore point, which is shown.
-1. Provide a new cluster name in the **Restore to new cluster** field. The subscription, resource group, and location fields aren't editable.
+1. Provide a new cluster name in the **Restore to new cluster** field. The subscription and resource group fields aren't editable.
-1. Select **OK**. A notification shows that the restore operation is initiated.
+1. If the cluster has geo-redundant backup enabled, select either the remote region or the same region for restore in the **Location** field. On clusters with zone-redundant or locally redundant backup, the **Location** field isn't editable.
+
+1. Select **Next**.
+
+1. (Optional) Select data encryption settings for the restored cluster on the **Encryption (preview)** tab.
+
+1. Select **Create**. A notification shows that the restore operation is initiated.
1. When the restore completes, follow the [post-restore tasks](#post-restore-tasks).
earliest existing backup.
Follow these steps to restore your cluster to a date and time of your choosing.
-1. In the [Azure portal](https://portal.azure.com/), from the **Overview** page of the cluster you want to restore, select **Restore**.
+1. In the [Azure portal](https://portal.azure.com/), from the **Overview** page of the cluster you want to restore, select **Restore**.
1. On the **Restore** page, choose **Custom restore point**.
-1. Select a date and provide a time in the date and time fields, and enter a cluster name in the **Restore to new cluster** field. The other fields aren't editable.
-
-1. Select **OK**. A notification shows that the restore operation is initiated.
+1. Select a date and provide a time in the date and time fields, and enter a cluster name in the **Restore to new cluster** field. The subscription and resource group fields aren't editable.
+
+1. If the cluster has geo-redundant backup enabled, select either the remote region or the same region for restore in the **Location** field. On clusters with zone-redundant or locally redundant backup, the **Location** field isn't editable.
+
+1. Select **Next**.
+
+1. (Optional) Select data encryption settings for the restored cluster on the **Encryption (preview)** tab.
+
+1. Select **Create**. A notification shows that the restore operation is initiated.
1. When the restore completes, follow the [post-restore tasks](#post-restore-tasks).
and time of your choosing.
After a restore, you should do the following to get your users and applications back up and running:
-* If the new server is meant to replace the original server, redirect clients
- and client applications to the new server
+* If the new cluster is meant to replace the original cluster, redirect clients
+ and client applications to the new cluster.
* Ensure appropriate [networking settings for private or public access](./concepts-security-overview.md#network-security) are in place for users to connect. These settings aren't copied from the original cluster. * Ensure appropriate [logins](./howto-create-users.md) and database level permissions are in place.
back up and running:
* Learn more about [backup and restore](concepts-backup.md) in Azure Cosmos DB for PostgreSQL.
+* See [backup and restore limits and limitations](./reference-limits.md#backup-and-restore).
* Set [suggested alerts](./howto-alert-on-metric.md#suggested-alerts) on clusters.
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 09/18/2023 Last updated : 09/25/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that don't directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### September 2023+
+* General availability: Data Encryption at rest using [Customer Managed Keys](./concepts-customer-managed-keys.md) is now supported for all available regions.
+ * See [this guide](./how-to-customer-managed-keys.md) for the steps to enable data encryption using customer managed keys.
+* Preview: Geo-redundant backup and restore
+ * Learn more about [backup and restore in Azure Cosmos DB for PostgreSQL](./concepts-backup.md)
* Preview: [32 TiB storage per node for multi-node configurations](./resources-compute.md#multi-node-cluster) is now available in all supported regions. * See [how to maximize IOPS on your cluster](./resources-compute.md#maximum-iops-for-your-compute--storage-configuration). * General availability: Azure Cosmos DB for PostgreSQL is now available in Australia Central, Canada East, and Qatar Central.
might have constrained capabilities. For more information, see
[Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+* [Geo-redundant backup and restore](./concepts-backup.md#backup-redundancy)
* [32 TiB storage per node in multi-node clusters](./resources-compute.md#multi-node-cluster) * [Azure Active Directory (Azure AD) authentication](./concepts-authentication.md#azure-active-directory-authentication-preview) * [Azure CLI support for Azure Cosmos DB for PostgreSQL](/cli/azure/cosmosdb/postgres)
cosmos-db Quickstart Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-create-bicep.md
+ Last updated 09/07/2023
Get-AzResource -ResourceGroupName exampleRG
With your cluster created, it's time to connect with a PostgreSQL client. > [!div class="nextstepaction"]
-> [Connect to your cluster](quickstart-connect-psql.md)
+> [Connect to your cluster](quickstart-connect-psql.md)
cosmos-db Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-limits.md
be scaled down (decreased).
### Storage size
-Up to 16 TiB of storage is supported on coordinator and worker nodes in multi-node configuration. Up to 2 TiB of storage is supported for single node configurations. See [the available storage options and IOPS calculation](resources-compute.md)
+Up to 32 TiB of storage is supported on coordinator and worker nodes in multi-node configuration. Up to 2 TiB of storage is supported for single node configurations. See [the available storage options and IOPS calculation](resources-compute.md)
for various node and cluster sizes. ## Compute
currently **not supported**:
* PostgreSQL 11 support * Read replicas * High availability
+* Geo-redundant backup
* The [azure_storage](howto-ingest-azure-blob-storage.md) extension ## Authentication
with an error.
By default this database is called `citus`. Azure Cosmos DB for PostgreSQL supports custom database names at cluster provisioning time only.
+## Backup and restore
+
+### Geo-redundant backup and restore (preview)
+* Geo-redundant backup can be enabled only during cluster creation.
+ * You can enable geo-redundant backup when you perform a cluster restore.
+ * You can enable geo-redundant backup when you [promote a cluster read-replica to an independent cluster](./howto-read-replicas-portal.md#promote-a-read-replica).
+* Geo-redundant backup can't be enabled on single node clusters with [burstable compute](./concepts-burstable-compute.md).
+* Geo-redundant backup can't be disabled once the cluster is created.
+* [Customer managed key (CMK)](./concepts-customer-managed-keys.md) isn't supported for clusters with geo-redundant backup enabled.
+* An Azure Cosmos DB for PostgreSQL cluster with geo-redundant backup enabled can't have a [cluster read replica](./concepts-read-replicas.md) in the region where geo-redundant backup is stored.
+ ## Next steps * Learn how to [create a cluster in the
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
Launch the Quickstart in Data Explorer in Azure portal to start using Azure Cosm
* [API for PostgreSQL](postgresql/quickstart-create-portal.md) * [API for MongoDB](mongodb/quickstart-python.md#object-model) * [API for Apache Cassandra](cassandr)
-* [API for Apache Gremlin](gremlin/quickstart-console.md#add-a-graph)
+* [API for Apache Gremlin](gremlin/quickstart-console.md)
* [API for Table](table/quickstart-dotnet.md) You can also get started with one of the learning resources in the Data Explorer.
After you create a Try Azure Cosmos DB sandbox account, you can start building a
* [Get started with Azure Cosmos DB for PostgreSQL](postgresql/quickstart-create-portal.md) * [Get started with Azure Cosmos DB for MongoDB](mongodb/quickstart-python.md#object-model) * [Get started with Azure Cosmos DB for Cassandra](cassandr)
- * [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-console.md#add-a-graph)
+ * [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-console.md)
* [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for [capacity planning](sql/estimate-ru-with-capacity-planner.md). * If all you know is the number of vCores and servers in your existing database cluster, see [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md).
cost-management-billing Tutorial Acm Opt Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-opt-recommendations.md
Medium impact recommendations include:
Azure Advisor monitors your virtual machine usage for seven days and then identifies underutilized virtual machines. Virtual machines whose CPU utilization is five percent or less and network usage is seven MB or less for four or more days are considered low-utilization virtual machines.
-The 5% or less CPU utilization setting is the default, but you can adjust the settings. For more information about adjusting the setting, see the [Configure the average CPU utilization rule or the low usage virtual machine recommendation](../../advisor/advisor-get-started.md#configure-low-usage-vm-recommendation).
+The 5% or less CPU utilization setting is the default, but you can adjust the settings. For more information about adjusting the setting, see the [Configure the average CPU utilization rule or the low usage virtual machine recommendation](../../advisor/advisor-get-started.md#configure-recommendations).
Although some scenarios can result in low utilization by design, you can often save money by changing the size of your virtual machines to less expensive sizes. Your actual savings might vary if you choose a resize action. Let's walk through an example of resizing a virtual machine.
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| Previous Azure offer in CSP | Previous Azure offer in CSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. | | Previous Azure offer in CSP | MPA | For details, see [Transfer a customer's Azure subscriptions to a different CSP (under an Azure plan)](/partner-center/transfer-azure-subscriptions-under-azure-plan). | | MPA | EA | • Automatic transfer isn't supported. Any transfer requires resources to move from the existing MPA product manually to a newly created or an existing EA product.<br><br> • Use the information in the [Perform resource transfers](#perform-resource-transfers) section. <br><br> • Reservations and savings plan don't automatically transfer and transferring them isn't supported. |
-| MPA | MPA | • For details, see [Transfer a customer's Azure subscriptions and/or Reservations (under an Azure plan) to a different CSP](/partner-center/transfer-azure-subscriptions-under-azure-plan).<br><br> • Self-service reservation transfers are supported. |
+| MPA | MPA | • For details, see [Transfer a customer's Azure subscriptions and/or Reservations (under an Azure plan) to a different CSP](/partner-center/transfer-azure-subscriptions-under-azure-plan). |
| MOSP (PAYG) | MOSP (PAYG) | • If you're changing the billing owner of the subscription, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. | | MOSP (PAYG) | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. | | MOSP (PAYG) | EA | • If you're transferring the subscription to the EA enrollment, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • If you're changing billing ownership, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
cost-management-billing Limited Time Central Poland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-central-poland.md
+
+ Title: Save on select VMs in Poland Central for a limited time
+description: Learn about how to save up to 50% on select Linux VMs in Poland Central for a limited time.
+++++ Last updated : 09/15/2023++++
+# Save on select VMs in Poland Central for a limited time
+
+Save up to 67 percent compared to pay-as-you-go pricing when you purchase one- or three-year [Azure Reserved Virtual Machine (VM) Instances](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json) for select VMs in Poland Central for a limited time. This offer is available between October 1, 2023 – March 31, 2024.
+
+## Purchase the limited time offer
+
+To take advantage of this limited-time offer, [purchase](https://aka.ms/azure/pricing/PolandCentral/VM/Purchase) a one- or three-year term for Azure Reserved Virtual Machine Instances for qualified VM instances in the Poland Central region.
+
+## Charge back limited time offer costs
+
+Enterprise Agreement and Microsoft Customer Agreement billing readers can view amortized cost data for reservations. They can use the cost data to charge back the monetary value for a subscription, resource group, resource, or a tag to their partners. In amortized data, the effective price is the prorated hourly reservation cost. The cost is the total cost of reservation usage by the resource on that day. Users with an individual subscription can get the amortized cost data from their usage file. For more information, see [Charge back Azure Reservation costs](charge-back-usage.md).
+
+## Terms and conditions of the limited time offer
+
+These terms and conditions (hereinafter referred to as "terms") govern the limited time offer ("offer") provided by Microsoft to customers purchasing a one- or three-year Azure Reserved VM Instance in Poland Central between October 1, 2023 (12 AM Pacific Standard Time) – March 31, 2024 (11:59 PM Pacific Standard Time), for any of the following VM series:
+
+| VM series | VM series | VM series | VM series |
+| --- | --- | --- | --- |
+|`B12ms`|`B16ms`|`B1ls`|`B1ms`|
+|`B1s`|`B20ms`|`B2ms`|`B2s`|
+|`B4ms`|`B8ms`|`D1 v2`|`D11 v2`|
+|`D12 v2`|`D13 v2`|`D14 v2`|`D15 v2`|
+|`D15i v2`|`D16 v3`|`D16 v4`|`D16 v5`|
+|`D16a v4`|`D16ads v5`|`D16as v4`|`D16as v5`|
+|`D16d v4`|`D16d v5`|`D16ds v4`|`D16ds v5`|
+|`D16lds v5`|`D16ls v5`|`D16s v3`|`D16s v4`|
+|`D16s v5`|`D2 v2`|`D2 v3`|`D2 v4`|
+|`D2 v5`|`D2a v4`|`D2ads v5`|`D2as v4`|
+|`D2as v5`|`D2d v4`|`D2d v5`|`D2ds v4`|
+|`D2ds v5`|`D2lds v5`|`D2ls v5`|`D2s v3`|
+|`D2s v4`|`D2s v5`|`D3 v2`|`D32 v3`|
+|`D32 v4`|`D32 v5`|`D32a v4`|`D32ads v5`|
+|`D32as v4`|`D32as v5`|`D32d v4`|`D32d v5`|
+|`D32ds v4`|`D32ds v5`|`D32lds v5`|`D32ls v5`|
+|`D32s v3`|`D32s v4`|`D32s v5`|`D4 v2`|
+|`D4 v3`|`D4 v4`|`D4 v5`|`D48 v3`|
+|`D48 v4`|`D48 v5`|`D48a v4`|`D48ads v5`|
+|`D48as v4`|`D48as v5`|`D48d v4`|`D48d v5`|
+|`D48ds v4`|`D48ds v5`|`D48lds v5`|`D48ls v5`|
+|`D48s v3`|`D48s v4`|`D48s v5`|`D4a v4`|
+|`D4ads v5`|`D4as v4`|`D4as v5`|`D4d v4`|
+|`D4d v5`|`D4ds v4`|`D4ds v5`|`D4lds v5`|
+|`D4ls v5`|`D4s v3`|`D4s v4`|`D4s v5`|
+|`D5 v2`|`D64 v3`|`D64 v4`|`D64 v5`|
+|`D64a v4`|`D64ads v5`|`D64as v4`|`D64as v5`|
+|`D64d v4`|`D64d v5`|`D64ds v4`|`D64ds v5`|
+|`D64lds v5`|`D64ls v5`|`D64s v3`|`D64s v4`|
+|`D64s v5`|`D8 v3`|`D8 v4`|`D8 v5`|
+|`D8a v4`|`D8ads v5`|`D8as v4`|`D8as v5`|
+|`D8d v4`|`D8d v5`|`D8ds v4`|`D8ds v5`|
+|`D8lds v5`|`D8ls v5`|`D8s v3`|`D8s v4`|
+|`D8s v5`|`D96 v5`|`D96a v4`|`D96ads v5`|
+|`D96as v4`|`D96as v5`|`D96d v5`|`D96ds v5`|
+|`D96lds v5`|`D96ls v5`|`D96s v5`|`Dadsv5 Type1`|
+|`Dasv4 Type1`|`Dasv4 Type2`|`Dasv5 Type1`|`Ddsv4 Type 1`|
+|`Ddsv4 Type 2`|`Ddsv5 Type1`|`DS1 v2`|`DS11 v2`|
+|`DS11-1 v2`|`DS12 v2`|`DS12-1 v2`|`DS12-2 v2`|
+|`DS13 v2`|`DS13-2 v2`|`DS13-4 v2`|`DS14 v2`|
+|`DS14-4 v2`|`DS14-8 v2`|`DS15 v2`|`DS15i v2`|
+|`DS2 v2`|`DS3 v2`|`DS4 v2`|`DS5 v2`|
+|`Dsv3 Type1`|`Dsv3 Type2`|`Dsv3 Type3`|`Dsv3 Type4`|
+|`Dsv4 Type1`|`Dsv4 Type2`|`Dsv5 Type1`|`E104i v5`|
+|`E104id v5`|`E104ids v5`|`E104is v5`|`E112iads v5`|
+|`E112ias v5`|`E112ibds v5`|`E112ibs v5`|`E16 v3`|
+|`E16 v4`|`E16 v5`|`E16-4ads v5`|`E16-4as v5`|
+|`E16-4as_v4`|`E16-4ds v4`|`E16-4ds v5`|`E16-4s v3`|
+|`E16-4s v4`|`E16-4s v5`|`E16-8ads v5`|`E16-8as v5`|
+|`E16-8as_v4`|`E16-8ds v4`|`E16-8ds v5`|`E16-8s v3`|
+|`E16-8s v4`|`E16-8s v5`|`E16a v4`|`E16ads v5`|
+|`E16as v4`|`E16as v5`|`E16bds v5`|`E16bs v5`|
+|`E16d v4`|`E16d v5`|`E16ds v4`|`E16ds v5`|
+|`E16ds_v4_ ADHType1`|`E16s v3`|`E16s v4`|`E16s v5`|
+|`E16s_v4_ ADHType1`|`E2 v3`|`E2 v4`|`E2 v5`|
+|`E20 v3`|`E20 v4`|`E20 v5`|`E20a v4`|
+|`E20ads v5`|`E20as v4`|`E20as v5`|`E20d v4`|
+|`E20d v5`|`E20ds v4`|`E20ds v5`|`E20s v3`|
+|`E20s v4`|`E20s v5`|`E2a v4`|`E2ads v5`|
+|`E2as v4`|`E2as v5`|`E2bds v5`|`E2bs v5`|
+|`E2d v4`|`E2d v5`|`E2ds v4`|`E2ds v5`|
+|`E2s v3`|`E2s v4`|`E2s v5`|`E32 v3`|
+|`E32 v4`|`E32 v5`|`E32-16ads v5`|`E32-16as v5`|
+|`E32-16as_v4`|`E32-16ds v4`|`E32-16ds v5`|`E32-16s v3`|
+|`E32-16s v4`|`E32-16s v5`|`E32-8ads v5`|`E32-8as v5`|
+|`E32-8as_v4`|`E32-8ds v4`|`E32-8ds v5`|`E32-8s v3`|
+|`E32-8s v4`|`E32-8s v5`|`E32a v4`|`E32ads v5`|
+|`E32as v4`|`E32as v5`|`E32bds v5`|`E32bs v5`|
+|`E32d v4`|`E32d v5`|`E32ds v4`|`E32ds v5`|
+|`E32ds_v4_ ADHType1`|`E32s v3`|`E32s v4`|`E32s v5`|
+|`E32s_v4_ ADHType1`|`E4 v3`|`E4 v4`|`E4 v5`|
+|`E4-2ads v5`|`E4-2as v5`|`E4-2as_v4`|`E4-2ds v4`|
+|`E4-2ds v5`|`E4-2s v3`|`E4-2s v4`|`E4-2s v5`|
+|`E48 v3`|`E48 v4`|`E48 v5`|`E48a v4`|
+|`E48ads v5`|`E48as v4`|`E48as v5`|`E48bds v5`|
+|`E48bs v5`|`E48d v4`|`E48d v5`|`E48ds v4`|
+|`E48ds v5`|`E48s v3`|`E48s v4`|`E48s v5`|
+|`E4a v4`|`E4ads v5`|`E4as v4`|`E4as v5`|
+|`E4bds v5`|`E4bs v5`|`E4d v4`|`E4d v5`|
+|`E4ds v4`|`E4ds v5`|`E4ds_v4_ADHType1`|`E4s v3`|
+|`E4s v4`|`E4s v5`|`E4s_v4_ADHType1`|`E64 v3`|
+|`E64 v4`|`E64 v5`|`E64-16ads v5`|`E64-16as v5`|
+|`E64-16as_v4`|`E64-16ds v4`|`E64-16ds v5`|`E64-16s v3`|
+|`E64-16s v4`|`E64-16s v5`|`E64-32ads v5`|`E64-32as v5`|
+|`E64-32as_v4`|`E64-32ds v4`|`E64-32ds v5`|`E64-32s v3`|
+|`E64-32s v4`|`E64-32s v5`|`E64a v4`|`E64ads v5`|
+|`E64as v4`|`E64as v5`|`E64bds v5`|`E64bs v5`|
+|`E64d v4`|`E64d v5`|`E64ds v4`|`E64ds v5`|
+|`E64i v3`|`E64i_v4_SPECIAL`|`E64id_v4_SPECIAL`|`E64ids_v4_SPECIAL`|
+|`E64is v3`|`E64is_v4_SPECIAL`|`E64s v3`|`E64s v4`|
+|`E64s v5`|`E8 v3`|`E8 v4`|`E8 v5`|
+|`E80ids v4`|`E80is v4`|`E8-2ads v5`|`E8-2as v5`|
+|`E8-2as_v4`|`E8-2ds v4`|`E8-2ds v5`|`E8-2s v3`|
+|`E8-2s v4`|`E8-2s v5`|`E8-4ads v5`|`E8-4as v5`|
+|`E8-4as_v4`|`E8-4ds v4`|`E8-4ds v5`|`E8-4s v3`|
+|`E8-4s v4`|`E8-4s v5`|`E8a v4`|`E8ads v5`|
+|`E8as v4`|`E8as v5`|`E8bds v5`|`E8bs v5`|
+|`E8d v4`|`E8d v5`|`E8ds v4`|`E8ds v5`|
+|`E8ds_v4_ ADHType1`|`E8s v3`|`E8s v4`|`E8s v5`|
+|`E8s_v4_ADHType1`|`E96 v5`|`E96-24ads v5`|`E96-24as v5`|
+|`E96-24as_v4`|`E96-24ds v5`|`E96-24s v5`|`E96-48ads v5`|
+|`E96-48as v5`|`E96-48as_v4`|`E96-48ds v5`|`E96-48s v5`|
+|`E96a v4`|`E96ads v5`|`E96as v4`|`E96as v5`|
+|`E96bds v5`|`E96bs v5`|`E96d v5`|`E96ds v5`|
+|`E96iads v5`|`E96ias v4`|`E96ias v5`|`E96s v5`|
+|`Eadsv5 Type1`|`Easv4 Type1`|`Easv4 Type2`|`Easv5 Type1`|
+|`Ebdsv5-Type1`|`Ebsv5-Type1`|`Edsv4 Type 1`|`Edsv4 Type 2`|
+|`Edsv5 Type1`|`Esv3 Type1`|`Esv3 Type2`|`Esv3 Type3`|
+|`Esv3 Type4`|`Esv4 Type1`|`Esv4 Type2`|`Esv5 Type1`|
+|`F1`|`F16`|`F16s`|`F16s v2`|
+|`F1s`|`F2`|`F2s`|`F2s v2`|
+|`F32s v2`|`F4`|`F48s v2`|`F4s`|
+|`F4s v2`|`F64s v2`|`F72s v2`|`F8`|
+|`F8s`|`F8s v2`|`Fsv2 Type2`|`Fsv2 Type3`|
+|`Fsv2 Type4`|`SQLG7_AMD_IaaS`|`SQLG7_AMD_NVME`| |
+
+The 67 percent saving is based on one DS1 v2 Azure VM for Linux in the Poland Central region running for 36 months at a pay-as-you-go rate as of September 2023. Actual savings may vary based on location, term commitment, instance type, or usage. The savings don't include operating system costs.
+
+**Eligibility** - The Offer is open to individuals who meet the following criteria:
+
+- To buy a reservation, you must have the owner role or reservation purchaser role on an Azure subscription that's of one of the following types:
+ - Enterprise (MS-AZR-0017P or MS-AZR-0148P)
+ - Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P)
+ - Microsoft Customer Agreement
+- Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations) to purchase Azure Reservations. You can't purchase a reservation if you have a custom role that mimics the owner role or reservation purchaser role on an Azure subscription. You must use the built-in owner or built-in reservation purchaser role.
+- For more information about who can purchase a reservation, see [Buy an Azure reservation](prepare-buy-reservation.md).
+
+**Offer details** - Upon successful purchase and payment for the one or three-year Azure Reserved VM Instance in Poland Central for one or more of the qualified VMs during the specified period, the discount applies automatically to the number of running virtual machines in Poland Central that match the reservation scope and attributes. You don't need to assign a reservation to a virtual machine to get the discounts. A reserved instance purchase covers only the compute part of your VM usage. For more information about how to pay and save with an Azure Reserved VM Instance, see [Prepay for Azure virtual machines to save money](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json).
+
+- Additional taxes may apply.
+- Payment will be processed using the payment method on file for the selected subscriptions.
+- Estimated savings are calculated based on your current on-demand rate.
+
+**Qualifying purchase** - To be eligible for the limited time offer discount, customers must make a purchase of the one or three-year Azure Reserved Virtual Machine Instances for one of the following qualified VMs in Poland Central between October 1, 2023, and March 31, 2024.
+
+| VM series | VM series | VM series | VM series |
+|||||
+|`B12ms`|`B16ms`|`B1ls`|`B1ms`|
+|`B1s`|`B20ms`|`B2ms`|`B2s`|
+|`B4ms`|`B8ms`|`D1 v2`|`D11 v2`|
+|`D12 v2`|`D13 v2`|`D14 v2`|`D15 v2`|
+|`D15i v2`|`D16 v3`|`D16 v4`|`D16 v5`|
+|`D16a v4`|`D16ads v5`|`D16as v4`|`D16as v5`|
+|`D16d v4`|`D16d v5`|`D16ds v4`|`D16ds v5`|
+|`D16lds v5`|`D16ls v5`|`D16s v3`|`D16s v4`|
+|`D16s v5`|`D2 v2`|`D2 v3`|`D2 v4`|
+|`D2 v5`|`D2a v4`|`D2ads v5`|`D2as v4`|
+|`D2as v5`|`D2d v4`|`D2d v5`|`D2ds v4`|
+|`D2ds v5`|`D2lds v5`|`D2ls v5`|`D2s v3`|
+|`D2s v4`|`D2s v5`|`D3 v2`|`D32 v3`|
+|`D32 v4`|`D32 v5`|`D32a v4`|`D32ads v5`|
+|`D32as v4`|`D32as v5`|`D32d v4`|`D32d v5`|
+|`D32ds v4`|`D32ds v5`|`D32lds v5`|`D32ls v5`|
+|`D32s v3`|`D32s v4`|`D32s v5`|`D4 v2`|
+|`D4 v3`|`D4 v4`|`D4 v5`|`D48 v3`|
+|`D48 v4`|`D48 v5`|`D48a v4`|`D48ads v5`|
+|`D48as v4`|`D48as v5`|`D48d v4`|`D48d v5`|
+|`D48ds v4`|`D48ds v5`|`D48lds v5`|`D48ls v5`|
+|`D48s v3`|`D48s v4`|`D48s v5`|`D4a v4`|
+|`D4ads v5`|`D4as v4`|`D4as v5`|`D4d v4`|
+|`D4d v5`|`D4ds v4`|`D4ds v5`|`D4lds v5`|
+|`D4ls v5`|`D4s v3`|`D4s v4`|`D4s v5`|
+|`D5 v2`|`D64 v3`|`D64 v4`|`D64 v5`|
+|`D64a v4`|`D64ads v5`|`D64as v4`|`D64as v5`|
+|`D64d v4`|`D64d v5`|`D64ds v4`|`D64ds v5`|
+|`D64lds v5`|`D64ls v5`|`D64s v3`|`D64s v4`|
+|`D64s v5`|`D8 v3`|`D8 v4`|`D8 v5`|
+|`D8a v4`|`D8ads v5`|`D8as v4`|`D8as v5`|
+|`D8d v4`|`D8d v5`|`D8ds v4`|`D8ds v5`|
+|`D8lds v5`|`D8ls v5`|`D8s v3`|`D8s v4`|
+|`D8s v5`|`D96 v5`|`D96a v4`|`D96ads v5`|
+|`D96as v4`|`D96as v5`|`D96d v5`|`D96ds v5`|
+|`D96lds v5`|`D96ls v5`|`D96s v5`|`Dadsv5 Type1`|
+|`Dasv4 Type1`|`Dasv4 Type2`|`Dasv5 Type1`|`Ddsv4 Type 1`|
+|`Ddsv4 Type 2`|`Ddsv5 Type1`|`DS1 v2`|`DS11 v2`|
+|`DS11-1 v2`|`DS12 v2`|`DS12-1 v2`|`DS12-2 v2`|
+|`DS13 v2`|`DS13-2 v2`|`DS13-4 v2`|`DS14 v2`|
+|`DS14-4 v2`|`DS14-8 v2`|`DS15 v2`|`DS15i v2`|
+|`DS2 v2`|`DS3 v2`|`DS4 v2`|`DS5 v2`|
+|`Dsv3 Type1`|`Dsv3 Type2`|`Dsv3 Type3`|`Dsv3 Type4`|
+|`Dsv4 Type1`|`Dsv4 Type2`|`Dsv5 Type1`|`E104i v5`|
+|`E104id v5`|`E104ids v5`|`E104is v5`|`E112iads v5`|
+|`E112ias v5`|`E112ibds v5`|`E112ibs v5`|`E16 v3`|
+|`E16 v4`|`E16 v5`|`E16-4ads v5`|`E16-4as v5`|
+|`E16-4as_v4`|`E16-4ds v4`|`E16-4ds v5`|`E16-4s v3`|
+|`E16-4s v4`|`E16-4s v5`|`E16-8ads v5`|`E16-8as v5`|
+|`E16-8as_v4`|`E16-8ds v4`|`E16-8ds v5`|`E16-8s v3`|
+|`E16-8s v4`|`E16-8s v5`|`E16a v4`|`E16ads v5`|
+|`E16as v4`|`E16as v5`|`E16bds v5`|`E16bs v5`|
+|`E16d v4`|`E16d v5`|`E16ds v4`|`E16ds v5`|
+|`E16ds_v4_ ADHType1`|`E16s v3`|`E16s v4`|`E16s v5`|
+|`E16s_v4_ ADHType1`|`E2 v3`|`E2 v4`|`E2 v5`|
+|`E20 v3`|`E20 v4`|`E20 v5`|`E20a v4`|
+|`E20ads v5`|`E20as v4`|`E20as v5`|`E20d v4`|
+|`E20d v5`|`E20ds v4`|`E20ds v5`|`E20s v3`|
+|`E20s v4`|`E20s v5`|`E2a v4`|`E2ads v5`|
+|`E2as v4`|`E2as v5`|`E2bds v5`|`E2bs v5`|
+|`E2d v4`|`E2d v5`|`E2ds v4`|`E2ds v5`|
+|`E2s v3`|`E2s v4`|`E2s v5`|`E32 v3`|
+|`E32 v4`|`E32 v5`|`E32-16ads v5`|`E32-16as v5`|
+|`E32-16as_v4`|`E32-16ds v4`|`E32-16ds v5`|`E32-16s v3`|
+|`E32-16s v4`|`E32-16s v5`|`E32-8ads v5`|`E32-8as v5`|
+|`E32-8as_v4`|`E32-8ds v4`|`E32-8ds v5`|`E32-8s v3`|
+|`E32-8s v4`|`E32-8s v5`|`E32a v4`|`E32ads v5`|
+|`E32as v4`|`E32as v5`|`E32bds v5`|`E32bs v5`|
+|`E32d v4`|`E32d v5`|`E32ds v4`|`E32ds v5`|
+|`E32ds_v4_ ADHType1`|`E32s v3`|`E32s v4`|`E32s v5`|
+|`E32s_v4_ ADHType1`|`E4 v3`|`E4 v4`|`E4 v5`|
+|`E4-2ads v5`|`E4-2as v5`|`E4-2as_v4`|`E4-2ds v4`|
+|`E4-2ds v5`|`E4-2s v3`|`E4-2s v4`|`E4-2s v5`|
+|`E48 v3`|`E48 v4`|`E48 v5`|`E48a v4`|
+|`E48ads v5`|`E48as v4`|`E48as v5`|`E48bds v5`|
+|`E48bs v5`|`E48d v4`|`E48d v5`|`E48ds v4`|
+|`E48ds v5`|`E48s v3`|`E48s v4`|`E48s v5`|
+|`E4a v4`|`E4ads v5`|`E4as v4`|`E4as v5`|
+|`E4bds v5`|`E4bs v5`|`E4d v4`|`E4d v5`|
+|`E4ds v4`|`E4ds v5`|`E4ds_v4_ADHType1`|`E4s v3`|
+|`E4s v4`|`E4s v5`|`E4s_v4_ADHType1`|`E64 v3`|
+|`E64 v4`|`E64 v5`|`E64-16ads v5`|`E64-16as v5`|
+|`E64-16as_v4`|`E64-16ds v4`|`E64-16ds v5`|`E64-16s v3`|
+|`E64-16s v4`|`E64-16s v5`|`E64-32ads v5`|`E64-32as v5`|
+|`E64-32as_v4`|`E64-32ds v4`|`E64-32ds v5`|`E64-32s v3`|
+|`E64-32s v4`|`E64-32s v5`|`E64a v4`|`E64ads v5`|
+|`E64as v4`|`E64as v5`|`E64bds v5`|`E64bs v5`|
+|`E64d v4`|`E64d v5`|`E64ds v4`|`E64ds v5`|
+|`E64i v3`|`E64i_v4_SPECIAL`|`E64id_v4_SPECIAL`|`E64ids_v4_SPECIAL`|
+|`E64is v3`|`E64is_v4_SPECIAL`|`E64s v3`|`E64s v4`|
+|`E64s v5`|`E8 v3`|`E8 v4`|`E8 v5`|
+|`E80ids v4`|`E80is v4`|`E8-2ads v5`|`E8-2as v5`|
+|`E8-2as_v4`|`E8-2ds v4`|`E8-2ds v5`|`E8-2s v3`|
+|`E8-2s v4`|`E8-2s v5`|`E8-4ads v5`|`E8-4as v5`|
+|`E8-4as_v4`|`E8-4ds v4`|`E8-4ds v5`|`E8-4s v3`|
+|`E8-4s v4`|`E8-4s v5`|`E8a v4`|`E8ads v5`|
+|`E8as v4`|`E8as v5`|`E8bds v5`|`E8bs v5`|
+|`E8d v4`|`E8d v5`|`E8ds v4`|`E8ds v5`|
+|`E8ds_v4_ ADHType1`|`E8s v3`|`E8s v4`|`E8s v5`|
+|`E8s_v4_ADHType1`|`E96 v5`|`E96-24ads v5`|`E96-24as v5`|
+|`E96-24as_v4`|`E96-24ds v5`|`E96-24s v5`|`E96-48ads v5`|
+|`E96-48as v5`|`E96-48as_v4`|`E96-48ds v5`|`E96-48s v5`|
+|`E96a v4`|`E96ads v5`|`E96as v4`|`E96as v5`|
+|`E96bds v5`|`E96bs v5`|`E96d v5`|`E96ds v5`|
+|`E96iads v5`|`E96ias v4`|`E96ias v5`|`E96s v5`|
+|`Eadsv5 Type1`|`Easv4 Type1`|`Easv4 Type2`|`Easv5 Type1`|
+|`Ebdsv5-Type1`|`Ebsv5-Type1`|`Edsv4 Type 1`|`Edsv4 Type 2`|
+|`Edsv5 Type1`|`Esv3 Type1`|`Esv3 Type2`|`Esv3 Type3`|
+|`Esv3 Type4`|`Esv4 Type1`|`Esv4 Type2`|`Esv5 Type1`|
+|`F1`|`F16`|`F16s`|`F16s v2`|
+|`F1s`|`F2`|`F2s`|`F2s v2`|
+|`F32s v2`|`F4`|`F48s v2`|`F4s`|
+|`F4s v2`|`F64s v2`|`F72s v2`|`F8`|
+|`F8s`|`F8s v2`|`Fsv2 Type2`|`Fsv2 Type3`|
+|`Fsv2 Type4`|`SQLG7_AMD_IaaS`|`SQLG7_AMD_NVME`| |
+
+Instance size flexibility is available for these VMs. For more information about Instance Size Flexibility, see [Virtual machine size flexibility](../../virtual-machines/reserved-vm-instance-size-flexibility.md).
+
+**Discount limitations**
+
+- The discount automatically applies to the number of running virtual machines in Poland Central that match the reservation scope and attributes.
+- The discount applies for one year or three years after the date of purchase, depending on term length purchased.
+- The discount only applies to resources associated with subscriptions purchased through Enterprise Agreement, Cloud Solution Provider (CSP), Microsoft Customer Agreement and individual plans with pay-as-you-go rates.
+- A reservation discount is "use-it-or-lose-it." So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
+- When you deallocate, delete, or scale the number of VMs, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+- Stopped VMs are billed and continue to use reservation hours. Deallocate or delete VM resources or scale-in other VMs to use your available reservation hours with other workloads.
+- For more information about how Azure Reserved VM Instance discounts are applied, see [Understand Azure Reserved VM Instances discount](../manage/understand-vm-reservation-charges.md).
+
+**Exchanges and refunds** - The offer follows standard exchange and refund policies for reservations. For more information about exchanges and refunds, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md).
+
+**Renewals**
+
+- The renewal price **will not be** the limited time offer price, but the price available at time of renewal.
+- For more information about renewals, see [Automatically renew Azure reservations](reservation-renew.md).
+
+**Termination or modification** - Microsoft reserves the right to modify, suspend, or terminate the offer at any time without prior notice.
+
+If you purchased the one-year or three-year Azure Reserved Virtual Machine Instances for the qualified VMs in Poland Central between October 1, 2023, and March 31, 2024, you'll continue to get the discount throughout the purchased term length, even if the offer is canceled.
+
+By participating in the offer, customers agree to be bound by these terms and the decisions of Microsoft. Microsoft reserves the right to disqualify any customer who violates these terms or engages in any fraudulent or harmful activities related to the offer.
+
+## Next steps
+
+- [Understand Azure Reserved VM Instances discount](../manage/understand-vm-reservation-charges.md)
+- [Purchase Azure Reserved VM Instances in the Azure portal](https://aka.ms/azure/pricing/PolandCentral/VM/Purchase)
cost-management-billing Manage Reserved Vm Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/manage-reserved-vm-instance.md
If you're a billing administrator, use following steps to view and manage all re
We don't allow changing the billing subscription after a reservation is purchased. If you want to change the subscription, use the exchange process to set the right billing subscription for the reservation.
+## Change billing frequency for an Azure Reservation
+
+We don't allow changing the billing frequency after a reservation is purchased. If you want to change the billing frequency, use the exchange process to set the right billing frequency for the reservation, or select a different billing frequency when setting up a renewal for an already purchased reservation.
+ ## Split a single reservation into two reservations After you buy more than one resource instance within a reservation, you may want to assign instances within that reservation to different subscriptions. By default, all instances have one scope - either single subscription, resource group, or shared. Let's say you bought a reservation for 10 VM instances and specified the scope to be subscription A. You now want to change the scope for seven VM instances to subscription A and the remaining three to subscription B. Splitting a reservation allows you to do that; a hedged CLI sketch follows this paragraph. After you split a reservation, the original ReservationID is canceled and two new reservations are created. Split doesn't impact the reservation order - there's no new commercial transaction with split, and the new reservations have the same end date as the one that was split.
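+
+A hedged sketch of the same split through the Reservations REST API via `az rest` is shown below. The API version and property names are assumptions based on the public `Microsoft.Capacity` split operation; verify them against the current REST reference before running.
+
+```bash
+# Split a reservation of quantity 10 into two reservations of 7 and 3.
+# <orderId> and <reservationId> are placeholders for your own identifiers.
+az rest --method post \
+  --url "https://management.azure.com/providers/Microsoft.Capacity/reservationOrders/<orderId>/split?api-version=2022-11-01" \
+  --body '{
+    "properties": {
+      "reservationId": "/providers/Microsoft.Capacity/reservationOrders/<orderId>/reservations/<reservationId>",
+      "quantities": [7, 3]
+    }
+  }'
+```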
data-factory Continuous Integration Delivery Automate Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-automate-github-actions.md
Title: Automate continuous integration with GitHub Actions
description: Learn how to automate continuous integration in Azure Data Factory with GitHub Actions. +
The workflow is composed of two jobs:
:::image type="content" source="media/continuous-integration-delivery-github-actions/saving-package-json-file.png" lightbox="media/continuous-integration-delivery-github-actions/saving-package-json-file.png" alt-text="Screenshot of saving the package.json file in GitHub.":::
+> [!IMPORTANT]
+> Make sure to place the build folder under the root folder of your connected repository. In the above example and workflow, the root folder is ADFroot. If you're not sure what your root folder is, go to your Data Factory instance and check the **Manage** tab -> **Git configuration** -> **Root folder**.
+ 2. Navigate to the Actions tab -> New workflow :::image type="content" source="media/continuous-integration-delivery-github-actions/new-workflow.png" lightbox="media/continuous-integration-delivery-github-actions/new-workflow.png" alt-text="Screenshot of creating a new workflow in GitHub.":::
databox-online Azure Stack Edge Deploy Aks On Azure Stack Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-aks-on-azure-stack-edge.md
Previously updated : 08/30/2023 Last updated : 09/26/2023 # Customer intent: As an IT admin, I need to understand how to deploy and configure Azure Kubernetes service on Azure Stack Edge.
Follow these steps to deploy the AKS cluster.
1. Select **Add** to configure AKS.
-1. On the **Create Kubernetes service** dialog, select the Kubernetes **Node size** for the infrastructure VM. Select a VM node size that's appropriate for the workload size you're deploying. In this example, we've selected VM size **Standard_F16s_HPN ΓÇô 16 vCPUs, 32.77 GB memory**.
+1. On the **Create Kubernetes service** dialog, select the Kubernetes **Node size** for the infrastructure VM. Select a VM node size that's appropriate for the workload size you're deploying. In this example, we've selected VM size **Standard_F16s_HPN – 16 vCPUs, 32.77 GB memory**.
+
+ For SAP deployments, select VM node size **Standard_DS5_v2**.
    > [!NOTE] > If the node size dropdown menu isn't populated, wait a few minutes so that it's synchronized after VMs are enabled in the preceding step.
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 09/21/2023 Last updated : 09/22/2023 # Update your Azure Stack Edge Pro GPU
Use the following steps to update your Azure Stack Edge version and Kubernetes v
If you are running 2210 or 2301, you can update both your device version and Kubernetes version directly to 2303 and then to 2309.
-If you are running 2303, you can update both your device version and Kubernetes version directly to
-2309.
+If you are running 2303, you can update both your device version and Kubernetes version directly to 2309.
In Azure portal, the process will require two clicks, the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2309.
defender-for-cloud Data Aware Security Dashboard Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-aware-security-dashboard-overview.md
+
+ Title: The data-aware security dashboard
+description: Learn about the capabilities and functions of the data-aware security view in Microsoft Defender for Cloud
+++ Last updated : 09/27/2023++
+# Data security dashboard
+
+The data security dashboard addresses the need for an interactive, data-centric security dashboard that illuminates significant risks to customers' sensitive data. This tool effectively prioritizes alerts and potential attack paths for data across multicloud data resources, making data protection management less overwhelming and more effective.
+
+## Capabilities
+
+- You can view a centralized summary of your cloud data estate that identifies the location of sensitive data, so that you can discover the most critical data resources affected.
+- You can identify the data resources that are at risk and that require attention, so that you can prioritize actions to explore, prevent, and respond to sensitive data breaches.
+- Investigate active, high-severity threats that lead to sensitive data.
+- Explore potential threats to data by highlighting [attack paths](concept-attack-path.md) that lead to sensitive data.
+- Explore useful data insights by highlighting useful data queries in the [security explorer](how-to-manage-cloud-security-explorer.md).
+
+You can select any element on the page to get more detailed information.
+
+| Aspect | Details |
+|||
+|Release state: | Public Preview |
| Prerequisites: | Defender CSPM fully enabled, including sensitive data discovery <br/> Workload protection for databases and storage to explore active risks |
+| Required roles and permissions: | No other roles needed on top of what is required for the security explorer. |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds <br/> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government <br/> :::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet |
+
+## Support and prerequisites
+
+Sensitive data discovery is available in the Defender CSPM and Defender for Storage plans.
+
+When you enable one of the plans, the sensitive data discovery extension is turned on as part of the plan.
+
+The feature is turned on at the subscription level.
+
+## Data security overview section
+
+The data security overview section provides a general overview of your cloud data estate, per cloud, including all data resources, divided into storage assets, managed databases, and hosted databases (IaaS).
++
+**By coverage status** - displays the limited data coverage for resources without Defender CSPM workload protection:
+
+- **Covered** – resources that have Defender CSPM, Defender for Storage, or Defender for Databases enabled.
+- **Partially covered** – missing either the Defender CSPM, Defender for Storage, or Defender for Databases plan. Select the tooltip to see a detailed view of what's missing.
+- **Sensitive resources** – displays how many resources are sensitive.
+- **Sensitive resources requiring attention** - displays the number of sensitive resources that have either high severity security alerts or attack paths.
+
+## Top issues
+
+The **Top issues** section provides a highlighted view of top active and potential risks to sensitive data.
+
+- **Sensitive data resources with high severity alerts** - summarizes the active threats to sensitive data resources and which data types are at risk.
+- **Sensitive data resources in attack paths** - summarizes the potential threats to sensitive data resources by presenting attack paths leading to sensitive data resources and which data types are at potential risk.
+- **Data queries in security explorer** - presents the top data-related queries in security explorer that help you focus on multicloud risks to sensitive data.
+
+ :::image type="content" source="media/data-aware-security-dashboard/top-issues.png" alt-text="Screenshot that shows the top issues section of the data security view." lightbox="media/data-aware-security-dashboard/top-issues.png":::
+
+## Closer look
+
+The **Closer look** section provides a more detailed view into the sensitive data within the organization.
+
+- **Sensitive data discovery** - summarizes the results of the sensitive resources discovered, allowing customers to explore a specific sensitive information type and label.
+- **Internet-exposed data resources** - summarizes the discovery of sensitive data resources that are internet-exposed for storage and managed databases.
+
+ :::image type="content" source="media/data-aware-security-dashboard/closer-look.png" alt-text="Screenshot that shows the closer look section of the data security dashboard." lightbox="media/data-aware-security-dashboard/closer-look.png":::
+
+You can select **Manage data sensitivity settings** to get to the **Data sensitivity** page. The **Data sensitivity** page allows you to manage the data sensitivity settings of cloud resources at the tenant level, based on selective info types and labels originating from the Microsoft Purview compliance portal, and to [customize sensitivity settings](data-sensitivity-settings.md), such as creating your own customized info types and labels and setting sensitivity label thresholds.
++
+### Data resources security status
+
+**Sensitive resources status over time** - displays how data security evolves over time with a graph that shows the number of sensitive resources affected by alerts, attack paths, and recommendations within a defined period (last 30, 14, or 7 days).
++
+## Next steps
+
+- Learn more about [data-aware security posture](concept-data-security-posture.md).
+- Learn how to [enable Defender CSPM](tutorial-enable-cspm-plan.md).
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md
To view the event schemas of the exported data types, visit the [Event Hubs even
## Use the Microsoft Graph Security API to stream alerts to third-party applications
-As an alternative to Microsoft Sentinel and Azure Monitor, you can use Defender for Cloud's built-in integration with [Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api). No configuration is required.
+As an alternative to Microsoft Sentinel and Azure Monitor, you can use Defender for Cloud's built-in integration with [Microsoft Graph Security API](/graph/security-concept-overview/). No configuration is required.
You can use this API to stream alerts from your **entire tenant** (and data from many Microsoft Security products) into third-party SIEMs and other popular platforms:
You can use this API to stream alerts from your **entire tenant** (and data from
- **QRadar** - [Use IBM's Device Support Module for Microsoft Defender for Cloud via Microsoft Graph API](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_dsm_guide_ms_azure_security_center_overview.html). - **Palo Alto Networks**, **Anomali**, **Lookout**, **InSpark**, and more - [Use the Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api#office-MultiFeatureCarousel-09jr2ji).
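+For illustration, here's a minimal cURL sketch of pulling alerts through Microsoft Graph. It assumes you already hold an OAuth 2.0 token for Microsoft Graph (stored in `GRAPH_TOKEN`) that carries the `SecurityAlert.Read.All` permission.
+
+```bash
+# Retrieve the ten most recent alerts for the tenant from the Microsoft Graph
+# security API (alerts_v2). The token and its permissions are assumed to exist.
+curl -s -H "Authorization: Bearer $GRAPH_TOKEN" \
+  "https://graph.microsoft.com/v1.0/security/alerts_v2?\$top=10"
+```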
+> [!NOTE]
+> The preferred way to export alerts is through [Continuously export Microsoft Defender for Cloud data](continuous-export.md).
+ ## Next steps This page explained how to ensure your Microsoft Defender for Cloud alert data is available in your SIEM, SOAR, or ITSM tool of choice. For related material, see:
defender-for-cloud Recommendations Reference Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-devops.md
+
+ Title: Reference table for all DevOps recommendations
+description: This article lists Microsoft Defender for Cloud's DevOps security recommendations that help you harden and protect your resources.
+++ Last updated : 09/27/2023++++
+# Security recommendations for DevOps resources - a reference guide
+
+This article lists the recommendations you might see in Microsoft Defender for Cloud if you've [connected an Azure DevOps](quickstart-onboard-devops.md) or [GitHub](quickstart-onboard-github.md) environment from the **Environment settings** page. The recommendations shown in your environment depend on the resources you're protecting and your customized configuration.
+
+To learn about how to respond to these recommendations, see
+[Remediate recommendations in Defender for Cloud](implement-security-recommendations.md).
+
+Learn more about the benefits and features of [Defender for DevOps](defender-for-devops-introduction.md).
+
+DevOps recommendations don't currently affect the [secure score](secure-score-security-controls.md). To prioritize recommendations, consider the number of impacted resources, the total number of findings, and the level of severity.
++
+## Next steps
+
+To learn more about recommendations, see the following:
+
+- [What are security policies, initiatives, and recommendations?](security-policy-concept.md)
+- [Review your security recommendations](review-security-recommendations.md)
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
description: This article lists Microsoft Defender for Cloud's security recommen
Previously updated : 01/24/2023 Last updated : 09/27/2023
impact on your secure score.
[!INCLUDE [asc-recs-data](../../includes/asc-recs-data.md)] - ## <a name='recs-identityandaccess'></a>IdentityAndAccess recommendations [!INCLUDE [asc-recs-identityandaccess](../../includes/asc-recs-identityandaccess.md)]
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 09/21/2023 Last updated : 09/27/2023 # What's new in Microsoft Defender for Cloud?
If you're looking for items older than six months, you can find them in the [Arc
|Date |Update | |-|-|
+| September 27 | [Data security dashboard available in public preview](#data-security-dashboard-available-in-public-preview) |
| September 21 | [Preview release: New autoprovisioning process for SQL Server on machines plan](#preview-release-new-autoprovisioning-process-for-sql-server-on-machines-plan) | | September 20 | [GitHub Advanced Security for Azure DevOps alerts in Defender for Cloud](#github-advanced-security-for-azure-devops-alerts-in-defender-for-cloud) | | September 11 | [Exempt functionality now available for Defender for APIs recommendations](#exempt-functionality-now-available-for-defender-for-apis-recommendations) |
If you're looking for items older than six months, you can find them in the [Arc
| September 5 | [Sensitive data discovery for PaaS databases (Preview)](#sensitive-data-discovery-for-paas-databases-preview) | | September 1 | [General Availability (GA): malware scanning in Defender for Storage](#general-availability-ga-malware-scanning-in-defender-for-storage)|
+### Data security dashboard available in public preview
+
+September 27, 2023
+
+The data security dashboard is now available in public preview as part of the Defender CSPM plan.
+The data security dashboard is an interactive, data-centric dashboard that illuminates significant risks to sensitive data, prioritizing alerts and potential attack paths for data across hybrid cloud workloads. Learn more about the [data security dashboard](data-aware-security-dashboard-overview.md).
+ ### Preview release: New autoprovisioning process for SQL Server on machines plan September 21, 2023
For more information, see [Migrate to SQL server-targeted Azure Monitoring Agent
### GitHub Advanced Security for Azure DevOps alerts in Defender for Cloud
-September 21, 2023
+September 20, 2023
You can now view GitHub Advanced Security for Azure DevOps (GHAzDO) alerts related to CodeQL, secrets, and dependencies in Defender for Cloud. Results will be displayed in the DevOps blade and in Recommendations. To see these results, onboard your GHAzDO-enabled repositories to Defender for Cloud.
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
Defender for Containers relies on the **Defender agent** for several features. T
Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems, only get partial coverage.
+#### Defender agent limitations
+The Defender agent is currently not supported on ARM64 nodes.
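+
+To confirm whether any of your nodes are ARM64, a quick check such as the following lists each node with its CPU architecture (a minimal sketch, assuming `kubectl` is configured against the cluster):
+
+```bash
+# List every node with its reported CPU architecture; arm64 nodes aren't
+# covered by the Defender agent.
+kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
+```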
+ #### Network restrictions ##### Private link
dev-box Quickstart Configure Dev Box Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-arm-template.md
description: In this quickstart, you learn how to configure the Microsoft Dev Bo
-+ Last updated 09/20/2023
When you no longer need them, delete the resource group: Go to the Azure portal,
## Next steps - [Quickstart: Create a dev box](/azure/dev-box/quickstart-create-dev-box)-- [Configure Azure Compute Gallery for Microsoft Dev Box](how-to-configure-azure-compute-gallery.md)
+- [Configure Azure Compute Gallery for Microsoft Dev Box](how-to-configure-azure-compute-gallery.md)
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
Title: What is Azure Database Migration Service?
description: Overview of Azure Database Migration Service, which provides seamless migrations from many database sources to Azure Data platforms. -+ Last updated 02/08/2023
dms How To Migrate Ssis Packages Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages-managed-instance.md
description: Learn how to migrate SQL Server Integration Services (SSIS) packages and projects to an Azure SQL Managed Instance using the Azure Database Migration Service or the Data Migration Assistant. -+ Last updated 02/20/2020
dms How To Migrate Ssis Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages.md
description: Learn how to migrate or redeploy SQL Server Integration Services packages and projects to Azure SQL Database single database using the Azure Database Migration Service and Data Migration Assistant. -+ Last updated 02/20/2020
dms How To Monitor Migration Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-monitor-migration-activity.md
Title: Monitor migration activity - Azure Database Migration Service
description: Learn to use the Azure Database Migration Service to monitor migration activity. -+ Last updated 02/20/2020
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
description: Learn to offline migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service. -+ Last updated 12/16/2020
dms Howto Sql Server To Azure Sql Managed Instance Powershell Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md
description: Learn to online migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service. -+ Last updated 12/16/2020
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-powershell.md
description: Learn to migrate a database from SQL Server to Azure SQL Database by using Azure PowerShell with the Azure Database Migration Service. -+ Last updated 02/20/2020
dms Known Issues Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-postgresql-online.md
description: Learn about known issues and migration limitations with online migrations from PostgreSQL to Azure Database for PostgreSQL using the Azure Database Migration Service. -+ Last updated 02/20/2020
dms Known Issues Azure Sql Db Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-db-managed-instance-online.md
Title: Known issues and limitations with online migrations to Azure SQL Managed
description: Learn about known issues/migration limitations associated with online migrations to Azure SQL Managed Instance. -+ Last updated 02/20/2020
dms Known Issues Dms Hybrid Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-dms-hybrid-mode.md
Title: Known issues/migration limitations with using Hybrid mode
description: Learn about known issues/migration limitations with using Azure Database Migration Service in hybrid mode. -+ Last updated 02/20/2020
dms Known Issues Mongo Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-mongo-cosmos-db.md
description: Learn about known issues and migration limitations with migrations from MongoDB to Azure Cosmos DB using the Azure Database Migration Service. -+ Last updated 05/18/2022
dms Known Issues Troubleshooting Dms Source Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms-source-connectivity.md
description: Learn about how to troubleshoot known issues/errors associated with connecting Azure Database Migration Service to source databases. -+ Last updated 02/20/2020
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
Title: "Common issues - Azure Database Migration Service"
description: Learn about how to troubleshoot common known issues/errors associated with using Azure Database Migration Service. -+ Last updated 02/20/2020 -+
+ - seo-lt-2019
+ - ignite-2022
+ - has-azure-ad-ps-ref
# Troubleshoot common Azure Database Migration Service issues and errors
dms Pre Reqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/pre-reqs.md
Title: Prerequisites for Azure Database Migration Service
description: Learn about an overview of the prerequisites for using the Azure Database Migration Service to perform database migrations. -+ Last updated 02/25/2020
dms Quickstart Create Data Migration Service Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-hybrid-portal.md
description: Use the Azure portal to create an instance of Azure Database Migration Service in hybrid mode. -+ Last updated 03/13/2020
dms Quickstart Create Data Migration Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-portal.md
description: Use the Azure portal to create an instance of Azure Database Migration Service. -+ Last updated 01/29/2021
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance online migrations. -+ Last updated 02/08/2021
dms Resource Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-network-topologies.md
description: Learn the source and target configurations for Azure SQL Managed Instance migrations using the Azure Database Migration Service. -+ Last updated 01/08/2020
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
description: Learn which migration scenarios are currently supported for Azure Database Migration Service and their availability status. -+ Last updated 04/27/2022
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
description: Learn to perform an online migration from one Azure Database for PostgreSQL to another Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal. -+ Last updated 07/21/2020
dms Tutorial Mongodb Cosmos Db Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db-online.md
description: Learn to migrate from MongoDB on-premises to Azure Cosmos DB for MongoDB online by using Azure Database Migration Service. -+ Last updated 09/21/2021
dms Tutorial Mongodb Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db.md
description: Migrate from MongoDB on-premises to Azure Cosmos DB for MongoDB offline via Azure Database Migration Service. -+ Last updated 09/21/2021
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
description: Learn to perform an online migration from PostgreSQL on-premises to Azure Database for PostgreSQL by using Azure Database Migration Service via the Azure portal. -+ Last updated 03/31/2023
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online.md
description: Learn to perform an online migration from PostgreSQL on-premises to Azure Database for PostgreSQL by using Azure Database Migration Service via the CLI. -+ Last updated 04/11/2020
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
description: Learn to perform an online migration from RDS PostgreSQL to Azure Database for PostgreSQL by using the Azure Database Migration Service. -+ Last updated 04/11/2020
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic) -+ Last updated 06/07/2023
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service (classic). -+ Last updated 02/08/2023
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
description: Learn to migrate from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic). -+ Last updated 02/08/2023
energy-data-services How To Convert Segy To Ovds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-ovds.md
Title: Microsoft Azure Data Manager for Energy - How to convert a segy to ovds file
+ Title: Microsoft Azure Data Manager for Energy Preview - How to convert a segy to ovds file
description: This article explains how to convert a SGY file to oVDS file format--++ Previously updated : 08/18/2022 Last updated : 09/13/2023 # How to convert a SEG-Y file to oVDS
-In this article, you will learn how to convert SEG-Y formatted data to the Open VDS (oVDS) format. Seismic data stored in the industry standard SEG-Y format can be converted to oVDS format for use in applications via the Seismic DMS.
-
-[OSDU&trade; SEG-Y to oVDS conversation](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/tree/release/0.15)
+In this article, you learn how to convert SEG-Y formatted data to the Open VDS (oVDS) format. Seismic data stored in the industry standard SEG-Y format can be converted to oVDS format for use in applications via the Seismic DMS. See the OSDU&trade; community project: [SEG-Y to oVDS conversion](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/tree/master). This tutorial is a step-by-step guide to performing the conversion. The actual production workflow may differ; use this as a guide to the required set of steps to achieve the conversion.
## Prerequisites
+- An Azure subscription
+- An instance of [Azure Data Manager for Energy](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription.
+- A SEG-Y File
+ - You may use any of the following files from the Volve dataset as a test. The Volve data set itself is available from [Equinor](https://www.equinor.com/energy/volve-data-sharing).
+ - [Small < 100 MB](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/azure/m16-master/source/ddms-smoke-tests/ST0202R08_PSDM_DELTA_FIELD_DEPTH.MIG_FIN.POST_STACK.3D.JS-017534.segy)
+ - [Medium < 250 MB](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/azure/m16-master/source/ddms-smoke-tests/ST0202R08_PS_PSDM_RAW_DEPTH.MIG_RAW.POST_STACK.3D.JS-017534.segy)
+ - [Large ~ 1 GB](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/283ba58aff7c40e62c2ac649e48a33643571f449/source/ddms-smoke-tests/sample-ST10010ZC11_PZ_PSDM_KIRCH_FULL_T.MIG_FIN.POST_STACK.3D.JS-017536.segy)
-1. Download and install [Postman](https://www.postman.com/) desktop app.
-2. Import the [oVDS Conversions.postman_collection](https://community.opengroup.org/osdu/platform/pre-shipping/-/blob/main/R3-M9/Azure-M9/Services/DDMS/oVDS_Conversions.postman_collection.json) into Postman. All curl commands used below are added to this collection. Update your Environment file accordingly
-3. Ensure that an Azure Data Manager for Energy instance is created already
-4. Clone the **sdutil** repo as shown below:
-
- ```markdown
- git clone https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil.git
-
- git checkout azure/stable
- ```
-
-## Convert SEG-Y file to oVDS file
-
-1. Check if VDS is registered with the workflow service or not:
-
- ```markdown
- curl --location --request GET '<url>/api/workflow/v1/workflow/'
- --header 'Data-Partition-Id: <datapartition>'
- --header 'Content-Type: application/json'
- --header 'Authorization: Bearer {{TOKEN}}
- ```
-
- You should see VDS converter DAG in the list. IF NOT in the response list then REPORT the issue to Azure Team
-
-2. Open **sdutil** and edit the `config.yaml` at the root to include the following yaml and fill in the three templatized values (two instances of `<meds-instance-url>` and one `<put refresh token here...>`). See [Generate a refresh token](how-to-generate-refresh-token.md) on how to generate a refresh token. If you continue to follow other "how-to" documentation, you'll use this refresh token again. Once you've generated the token, store it in a place where you'll be able to access it in the future.
-
- ```yaml
- seistore:
- service: '{"azure": {"azureEnv":{"url": "<url>/seistore-svc/api/v3", "appkey": ""}}}'
- url: '<url>/seistore-svc/api/v3'
- cloud_provider: azure
- env: glab
- auth-mode: JWT Token
- ssl_verify: false
- auth_provider:
- azure: '{
- "provider": "azure",
- "authorize_url": "https://login.microsoftonline.com/", "oauth_token_host_end": "/oauth2/v2.0/token",
- "scope_end":"/.default openid profile offline_access",
- "redirect_uri":"http://localhost:8080",
- "login_grant_type": "refresh_token",
- "refresh_token": "<RefreshToken acquired earlier>"
- }'
- azure:
- empty: none
- ```
+## Get your Azure Data Manager for Energy instance details
-3. Run **sdutil** to see if it's working fine. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/home/-/tree/master). Understand that depending on your OS and Python version, you may have to run `python3` command as opposed to `python`.
+The first step is to get the following information from your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden):
- > [!NOTE]
- > when running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`.
+| Parameter | Value | Example |
+| | |-- |
+| client_id | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
+| client_secret | Client secrets | _fl****************** |
+| tenant_id | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx |
+| base_url | URL | `https://<instance>.energy.azure.com` |
+| data-partition-id | Data Partition(s) | `<data-partition-name>` |
-4. Upload the seismic file
+You use this information later in the tutorial.
- ```markdown
- python sdutil cp \source.segy sd://<datapartition>/<subproject>/destination.segy
- ```
+## Set up Postman
-5. Fetch the idtoken from sdutil for the uploaded file.
+Next, set up Postman:
- ```markdown
- python sdutil auth idtoken
- ```
+1. Download and install the [Postman](https://www.postman.com/downloads/) desktop app.
+
+2. Import the following files in Postman:
+
+ - [Converter Postman collection](https://github.com/microsoft/adme-samples/blob/main/postman/SEGYtoVDS.postman_collection.json)
+ - [Converter Postman environment](https://github.com/microsoft/adme-samples/blob/main/postman/SEGYtoVDS.postman_environment.json)
+
+ To import the files:
+
+ 1. Select **Import** in Postman.
+
+ [![Screenshot that shows the import button in Postman.](media/tutorial-ddms/postman-import-button.png)](media/tutorial-ddms/postman-import-button.png#lightbox)
+
+ 2. Paste the URL of each file into the search box.
+
+ [![Screenshot that shows importing collection and environment files in Postman via URL.](media/tutorial-ddms/postman-import-search.png)](media/tutorial-ddms/postman-import-search.png#lightbox)
+
+3. In the Postman environment, update **CURRENT VALUE** with the information from your Azure Data Manager for Energy instance details
+
+ 1. In Postman, in the left menu, select **Environments**, and then select **SEGYtoVDS Environment**.
+
+ 2. In the **CURRENT VALUE** column, enter the information that's described in the table in 'Get your Azure Data Manager for Energy instance details'.
+
+ [![Screenshot that shows where to enter current values in SEGYtoVDS environment.](media/how-to-convert-segy-to-vds/postman-environment-current-values.png)](media/how-to-convert-segy-to-vds/postman-environment-current-values.png#lightbox)
+
+## Step-by-step process to convert a SEG-Y file to oVDS
+
+The Postman collection provided has all of the sample calls to serve as a guide. You can also retrieve the equivalent cURL command for a Postman call by clicking the **Code** button.
+
+[![Screenshot that shows the Code button in Postman.](media/how-to-convert-segy-to-vds/postman-code-button.png)](media/how-to-convert-segy-to-vds/postman-code-button.png#lightbox)
+
+### Create a Legal Tag
+
+[![Screenshot of creating Legal Tag.](media/how-to-convert-segy-to-vds/postman-api-create-legal-tag.png)](media/how-to-convert-segy-to-vds/postman-api-create-legal-tag.png#lightbox)
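+
+For reference, a cURL sketch of the same call is shown below. The property values are placeholders to adjust to your own compliance requirements, and the shell variables (`base_url`, `access_token`, `data_partition_id`) are assumed to hold the instance details and token you gathered earlier.
+
+```bash
+# Create a legal tag with the Legal service; all property values are sample
+# placeholders for this tutorial.
+curl -X POST "${base_url}/api/legal/v1/legaltags" \
+  -H "Authorization: Bearer ${access_token}" \
+  -H "data-partition-id: ${data_partition_id}" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "name": "segy-to-vds-legal-tag",
+        "description": "Legal tag for the SEG-Y to oVDS tutorial",
+        "properties": {
+          "countryOfOrigin": ["US"],
+          "contractId": "A1234",
+          "expirationDate": "2030-12-31",
+          "originator": "MyCompany",
+          "dataType": "Public Domain Data",
+          "securityClassification": "Public",
+          "personalData": "No Personal Data",
+          "exportClassification": "EAR99"
+        }
+      }'
+```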
+
+### Prepare dataset files
+
+This file contains the sample [Vector Header Mapping](https://github.com/microsoft/adme-samples/blob/main/postman/CreateVectorHeaderMappingKeys_SEGYtoVDS.json) and this file contains the sample [Storage Records](https://github.com/microsoft/adme-samples/blob/main/postman/StorageRecord_SEGYtoVDS.json) for the VDS conversion.
+
+### User Access
+
+The user needs to be part of the `users.datalake.admins` group. Validate the current entitlements for the user using the following call:
+
+[![Screenshot that shows the API call to get user groups in Postman.](media/how-to-convert-segy-to-vds/postman-api-get-user-groups.png)](media/how-to-convert-segy-to-vds/postman-api-get-user-groups.png#lightbox)
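+
+The equivalent cURL call is sketched below; it assumes `base_url`, `access_token`, and `data_partition_id` hold your instance details and token, and it returns the entitlement groups that the calling user belongs to.
+
+```bash
+# List the entitlement groups for the calling user.
+curl -X GET "${base_url}/api/entitlements/v2/groups" \
+  -H "Authorization: Bearer ${access_token}" \
+  -H "data-partition-id: ${data_partition_id}"
+```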
+
+Later in this tutorial, you need at least one `owner` and at least one `viewer`. These user groups look like `data.default.owners` and `data.default.viewers`. Make sure to note one of each in your list.
+
+If the user isn't part of the required group, you can add the required entitlement using the following sample call:
+ The `email-id` is the `Id` value returned from the previous call.
+
+[![Screenshot that shows the API call to register a user as an admin in Postman.](media/how-to-convert-segy-to-vds/postman-api-add-user-to-admins.png)](media/how-to-convert-segy-to-vds/postman-api-add-user-to-admins.png#lightbox)
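+
+A cURL sketch of the add-member call follows. The group email domain shown (`<data-partition-id>.dataservices.energy`) is an assumption; copy the exact group email returned by the previous call.
+
+```bash
+# Add the user to the users.datalake.admins group as a member.
+curl -X POST "${base_url}/api/entitlements/v2/groups/users.datalake.admins@${data_partition_id}.dataservices.energy/members" \
+  -H "Authorization: Bearer ${access_token}" \
+  -H "data-partition-id: ${data_partition_id}" \
+  -H "Content-Type: application/json" \
+  -d '{"email": "<email-id-from-previous-call>", "role": "MEMBER"}'
+```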
+
+If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition.
+
+### Prepare Subproject
+
+#### 1. Register Data Partition to Seismic
+
+[![Screenshot that shows the API call to register a data partition as a seismic tenant in Postman.](media/how-to-convert-segy-to-vds/postman-api-register-tenant.png)](media/how-to-convert-segy-to-vds/postman-api-register-tenant.png#lightbox)
+
+#### 2. Create Subproject
+
+Use your previously created entitlement groups that you would like to add as ACL (access control list) admins and viewers. Data partition entitlements don't necessarily carry over to the subprojects within a partition, so it's important to be explicit about the ACLs for each subproject, regardless of which data partition it's in.
+
+[![Screenshot that shows the API call to create a seismic subproject in Postman.](media/how-to-convert-segy-to-vds/postman-api-create-subproject.png)](media/how-to-convert-segy-to-vds/postman-api-create-subproject.png#lightbox)
+
+#### 3. Create dataset
+
+> [!NOTE]
+> This step is only required if you are not using `sdutil` for uploading the seismic files.
+
+[![Screenshot that shows the API call to create a seismic dataset in Postman.](media/how-to-convert-segy-to-vds/postman-api-create-dataset.png)](media/how-to-convert-segy-to-vds/postman-api-create-dataset.png#lightbox)
+
+### Upload the File
+
+There are two ways to upload a SEG-Y file. One option is to use the SAS URL through a Postman or cURL call. You need to download Postman or set up cURL on your OS.
+The second method is to use [SDUTIL](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tags/azure-stable). To log in to your Azure Data Manager for Energy instance via the tool, you need to generate a refresh token for the instance. See [How to generate a refresh token](how-to-generate-refresh-token.md). Alternatively, you can modify the SDUTIL code to use client credentials to log in instead. If you haven't already, you need to set up SDUTIL. Download the codebase and edit the `config.yaml` file at the root. Replace the contents of this config file with the following YAML.
+
+```yaml
+seistore:
+ service: '{"azure": {"azureEnv":{"url": "<instance url>/seistore-svc/api/v3", "appkey": ""}}}'
+ url: '<instance url>/seistore-svc/api/v3'
+ cloud_provider: azure
+ env: glab
+ auth-mode: JWT Token
+ ssl_verify: false
+auth_provider:
+ azure: '{
+ "provider": "azure",
+ "authorize_url": "https://login.microsoftonline.com/", "oauth_token_host_end": "/oauth2/v2.0/token",
+ "scope_end":"/.default openid profile offline_access",
+ "redirect_uri":"http://localhost:8080",
+ "login_grant_type": "refresh_token",
+ "refresh_token": "<RefreshToken acquired earlier>"
+ }'
+azure:
+ empty: none
+```
+
+#### Method 1: Postman
+
+##### Get the SAS URL
+
+[![Screenshot that shows the API call to get a GCS upload URL in Postman.](media/how-to-convert-segy-to-vds/postman-api-get-gcs-upload-url.png)](media/how-to-convert-segy-to-vds/postman-api-get-gcs-upload-url.png#lightbox)
+
+##### Upload the file:
+
+You need to select the file to upload in the Body section of the API call.
+
+[![Screenshot that shows the API call to upload a file in Postman.](media/how-to-convert-segy-to-vds/postman-api-upload-file.png)](media/how-to-convert-segy-to-vds/postman-api-upload-file.png#lightbox)
++
+[![Screenshot that shows the API call to upload a file binary in Postman.](media/how-to-convert-segy-to-vds/postman-api-upload-file-binary.png)](media/how-to-convert-segy-to-vds/postman-api-upload-file-binary.png#lightbox)
+
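+As a cURL alternative to the Postman upload, the file can be pushed directly to the sasurl returned by the previous call. This is a minimal sketch that assumes the sasurl points at an Azure Blob Storage block blob; replace `<sasurl>` and `source.segy` with your own values.
+
+```bash
+# PUT the SEGY binary straight to the SAS URL; the x-ms-blob-type header is required for block blobs.
+curl --request PUT '<sasurl>' \
+  --header 'x-ms-blob-type: BlockBlob' \
+  --upload-file source.segy
+```
+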
+##### Verify upload
+
+[![Screenshot that shows the API call to verify a file binary is uploaded in Postman.](media/how-to-convert-segy-to-vds/postman-api-verify-file-upload.png)](media/how-to-convert-segy-to-vds/postman-api-verify-file-upload.png#lightbox)
+
+#### Method 2: SDUTIL
+
+**sdutil** is an OSDU desktop utility for accessing the seismic service. We use it to upload and download files. Use the azure-stable tag from [SDUTIL](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tags/azure-stable).
+
+> [!NOTE]
+> When running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`.
+
+```bash
+python sdutil config init
+python sdutil auth login
+python sdutil ls sd://<data-partition-id>/<subproject>/
+```
+
+Upload your seismic file to your Seismic Store. Here's an example with a SEGY-format file called `source.segy`:
+
+```bash
+python sdutil cp <local folder>/source.segy sd://<data-partition-id>/<subproject>/destination.segy
+```
+For example:
+
+```bash
+python sdutil cp ST10010ZC11_PZ_PSDM_KIRCH_FULL_T.MIG_FIN.POST_STACK.3D.JS-017536.segy sd://<data-partition-id>/<subproject>/destination.segy
+```
+
+### Create Header Vector Mapping
+
+Generate the header vector mapping:
+
+[![Screenshot that shows the API call to create header vector mapping in Postman.](media/how-to-convert-segy-to-vds/postman-api-create-headermapping.png)](media/how-to-convert-segy-to-vds/postman-api-create-headermapping.png#lightbox)
+
+### Create Storage Records
+
+[![Screenshot that shows the API call to create storage records in Postman.](media/how-to-convert-segy-to-vds/postman-api-create-records.png)](media/how-to-convert-segy-to-vds/postman-api-create-records.png#lightbox)
+
+### Run Converter
+
+1. Trigger the VDS Conversion DAG to convert your data using the execution context values you saved earlier.
+
+ Fetch the id token from sdutil for the uploaded file or use an access/bearer token from Postman.
+
+```bash
+python sdutil auth idtoken
+```
+
+[![Screenshot that shows the API call to start the conversion workflow in Postman.](media/how-to-convert-segy-to-vds/postman-api-start-workflow.png)](media/how-to-convert-segy-to-vds/postman-api-start-workflow.png#lightbox)
+
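+For reference, the workflow trigger shown in the screenshot corresponds roughly to the cURL call below, which mirrors the earlier cURL-based version of this step; `<url>`, `<dag-name>`, and the execution context values are placeholders you supply.
+
+```bash
+# Trigger the VDS conversion DAG with the execution context values saved earlier.
+curl --location --request POST '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun' \
+  --header 'data-partition-id: <data-partition-id>' \
+  --header 'Content-Type: application/json' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --data-raw '{
+    "executionContext": {
+      "vds_url": "sd://<data-partition-id>/<subproject>",
+      "persistent_id": "<filename>",
+      "id_token": "<token>",
+      "segy_url": "sd://<data-partition-id>/<subproject>/<filename>.segy"
+    }
+  }'
+```
+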
+2. Let the DAG run to the `succeeded` state. You can check the status using the workflow status call. The run ID is in the response of the previous call.
+
+[![Screenshot that shows the API call to check the conversion workflow's status in Postman.](media/how-to-convert-segy-to-vds/postman-api-check-workflow-status.png)](media/how-to-convert-segy-to-vds/postman-api-check-workflow-status.png#lightbox)
+
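+If you want to poll the status outside Postman, a cURL sketch of the status call looks like the following; the run ID comes from the trigger response above.
+
+```bash
+# Check the status of the workflow run; expect "succeeded" when the conversion completes.
+curl --location --request GET '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun/<run-id>' \
+  --header 'data-partition-id: <data-partition-id>' \
+  --header 'Content-Type: application/json' \
+  --header 'Authorization: Bearer {{TOKEN}}'
+```
+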
+3. You can see if the converted file is present using the following command in sdutil or in the Postman API call:
-6. Trigger the DAG through `POSTMAN` or using the call below:
-
- ```bash
- curl --location --request POST '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun' \
- --header 'data-partition-id: <datapartition>' \
- --header 'Content-Type: application/json' \
- --header 'Authorization: Bearer {{TOKEN}}' \
- --data-raw '{
- "executionContext": {
- "vds_url": "sd://<datapartition>/<subproject>",
- "persistent_id": "<filename>",
- "id_token": "<token>",
- "segy_url": "sd://<datapartition>/<subproject>/<filename>.segy"
-
- }
- }'
+ ```bash
+ python sdutil ls sd://<data-partition-id>/<subproject>
```
-7. Let the DAG run to complete state. You can check the status using the workflow status call
+[![Screenshot that shows the API call to check if the file has been converted.](media/how-to-convert-segy-to-vds/postman-api-verify-file-converted.png)](media/how-to-convert-segy-to-vds/postman-api-verify-file-converted.png#lightbox)
-8. Verify the converted files are present on the specified location in DAG Trigger or not
+4. Verify that the converted files are present in the location specified in the DAG trigger:
```markdown
- python sdutil ls sd://<datapartition>/<subproject>/
+ python sdutil ls sd://<data-partition-id>/<subproject>/
```
-9. If you would like to download and inspect your VDS files, don't use the `cp` command as it will not work. The VDS conversion results in multiple files, therefore the `cp` command won't be able to download all of them in one command. Use either the [SEGYExport](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/SEGYExport/README.html) or [VDSCopy](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/VDSCopy/README.html) tool instead. These tools use a series of REST calls accessing a [naming scheme](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/connection.html) to retrieve information about all the resulting VDS files.
+5. If you would like to download and inspect your VDS files, don't use the `cp` command, because it won't work. The VDS conversion produces multiple files, so the `cp` command can't download all of them in one call. Use either the [SEGYExport](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/SEGYExport/README.html) or [VDSCopy](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/VDSCopy/README.html) tool instead. These tools use a series of REST calls accessing a [naming scheme](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/connection.html) to retrieve information about all the resulting VDS files.
OSDU&trade; is a trademark of The Open Group. ## Next steps <!-- Add a context sentence for the following links --> > [!div class="nextstepaction"]
-> [How to convert a segy to zgy file](./how-to-convert-segy-to-zgy.md)
+> [How to convert a segy to zgy file](./how-to-convert-segy-to-zgy.md)
energy-data-services How To Convert Segy To Zgy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md
Title: Microsoft Azure Data Manager for Energy - How to convert segy to zgy file
+ Title: Microsoft Azure Data Manager for Energy Preview - How to convert segy to zgy file
description: This article describes how to convert a SEG-Y file to a ZGY file--++ Previously updated : 08/18/2022 Last updated : 09/13/2023 # How to convert a SEG-Y file to ZGY
-In this article, you will learn how to convert SEG-Y formatted data to the ZGY format. Seismic data stored in industry standard SEG-Y format can be converted to ZGY for use in applications such as Petrel via the Seismic DMS. See here for [ZGY Conversion FAQ's](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion#faq) and more background can be found in the OSDU&trade; community here: [SEG-Y to ZGY conversation](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion)
-
+In this article, you learn how to convert SEG-Y formatted data to the ZGY format. Seismic data stored in the industry-standard SEG-Y format can be converted to ZGY for use in applications such as Petrel via the Seismic DMS. See the [ZGY Conversion FAQ](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion#faq), and find more background in the OSDU&trade; community here: [SEG-Y to ZGY conversion](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion). This tutorial is a step-by-step guide to performing the conversion. The actual production workflow may differ; use this article as a guide to the required set of steps.
## Prerequisites
+- An Azure subscription
+- An instance of [Azure Data Manager for Energy](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription.
+- A SEG-Y File
+ - You may use any of the following files from the Volve dataset as a test. The Volve data set itself is available from [Equinor](https://www.equinor.com/energy/volve-data-sharing).
+ - [Small < 100 MB](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/azure/m16-master/source/ddms-smoke-tests/ST0202R08_PSDM_DELTA_FIELD_DEPTH.MIG_FIN.POST_STACK.3D.JS-017534.segy)
+ - [Medium < 250 MB](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/azure/m16-master/source/ddms-smoke-tests/ST0202R08_PS_PSDM_RAW_DEPTH.MIG_RAW.POST_STACK.3D.JS-017534.segy)
+ - [Large ~ 1 GB](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/283ba58aff7c40e62c2ac649e48a33643571f449/source/ddms-smoke-tests/sample-ST10010ZC11_PZ_PSDM_KIRCH_FULL_T.MIG_FIN.POST_STACK.3D.JS-017536.segy)
-1. Download and install [Postman](https://www.postman.com/) desktop app.
-2. Import the [oZGY Conversions.postman_collection](https://github.com/microsoft/meds-samples/blob/main/postman/SegyToZgyConversion%20Workflow%20using%20SeisStore%20R3%20CI-CD%20v1.0.postman_collection.json) into Postman. All curl commands used below are added to this collection. Update your Environment file accordingly
-3. Ensure that your Azure Data Manager for Energy instance is created already
-4. Clone the **sdutil** repo as shown below:
- ```markdown
- git clone https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil.git
+## Get your Azure Data Manager for Energy instance details
- git checkout azure/stable
- ```
-5. The [jq command](https://stedolan.github.io/jq/download/), using your favorite tool on your favorite OS.
-
-## Convert SEG-Y file to ZGY file
+The first step is to get the following information from your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden):
-1. The user needs to be part of the `users.datalake.admins` group and user needs to generate a valid refresh token. See [How to generate a refresh token](how-to-generate-refresh-token.md) for further instructions. If you continue to follow other "how-to" documentation, you'll use this refresh token again. Once you've generated the token, store it in a place where you'll be able to access it in the future. If it isn't present, add the group for the member ID. In this case, use the app ID you have been using for everything as the `user-email`. Additionally, the `data-partition-id` should be in the format `<instance-name>-<data-partition-name>` in both the header and the url, and will be for any following command that requires `data-partition-id`.
+| Parameter | Value | Example |
+| --------- | ----- | ------- |
+| client_id | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx |
+| client_secret | Client secrets | _fl****************** |
+| tenant_id | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx |
+| base_url | URL | `https://<instance>.energy.azure.com` |
+| data-partition-id | Data Partition(s) | `<data-partition-name>` |
- ```bash
- curl --location --request POST "<url>/api/entitlements/v2/groups/users.datalake.admins@<data-partition>.<domain>.com/members" \
- --header 'Content-Type: application/json' \
- --header 'data-partition-id: <data-partition>' \
- --header 'Authorization: Bearer {{TOKEN}}' \
- --data-raw '{
- "email" : "<user-email>",
- "role" : "MEMBER"
- }
- ```
+You use this information later in the tutorial.
- You can also add the user to this group by using the entitlements API and assigning the required group ID. In order to check the entitlements groups for a user, perform the command [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user). In order to get all the groups available, do the following command:
+## Set up Postman
- ```bash
- curl --location --request GET "<url>/api/entitlements/v2/groups/" \
- --header 'data-partition-id: <data-partition>' \
- --header 'Authorization: Bearer {{TOKEN}}'
- ```
+Next, set up Postman:
-2. Check if ZGY is registered with the workflow service or not:
+1. Download and install the [Postman](https://www.postman.com/downloads/) desktop app.
- ```bash
- curl --location --request GET '<url>/api/workflow/v1/workflow/' \
- --header 'Data-Partition-Id: <data-partition>' \
- --header 'Content-Type: application/json' \
- --header 'Authorization: Bearer {{TOKEN}}'
- ```
+2. Import the following files in Postman:
- You should see ZGY converter DAG in the list. IF NOT in the response list then REPORT the issue to Azure Team
+ - [Converter Postman collection](https://github.com/microsoft/adme-samples/blob/main/postman/SEGYtoZGY.postman_collection.json)
+ - [Converter Postman environment](https://github.com/microsoft/adme-samples/blob/main/postman/SEGYtoZGY.postman_environment.json)
-3. Register Data partition to Seismic:
+ To import the files:
- ```bash
- curl --location --request POST '<url>/seistore-svc/api/v3/tenant/<data-partition>' \
- --header 'Authorization: Bearer {{TOKEN}}' \
- --header 'Content-Type: application/json' \
- --data-raw '{
- "esd": "{{data-partition}}.{{domain}}.com",
- "gcpid": "{{data-partition}}",
- "default_acl": "users.datalake.admins@{{data-partition}}.{{domain}}.com"}'
- ```
+ 1. Select **Import** in Postman.
-4. Create Legal tag
+ [![Screenshot that shows the import button in Postman.](media/tutorial-ddms/postman-import-button.png)](media/tutorial-ddms/postman-import-button.png#lightbox)
- ```bash
- curl --location --request POST '<url>/api/legal/v1/legaltags' \
- --header 'Content-Type: application/json' \
- --header 'data-partition-id: <data-partition>' \
- --header 'Authorization: Bearer {{TOKEN}}' \
- --data-raw '{
- "name": "<tag-name>",
- "description": "Legal Tag added for Seismic",
- "properties": {
- "contractId": "123456",
- "countryOfOrigin": [
- "US",
- "CA"
- ],
- "dataType": "Public Domain Data",
- "exportClassification": "EAR99",
- "originator": "Schlumberger",
- "personalData": "No Personal Data",
- "securityClassification": "Private",
- "expirationDate": "2025-12-25"
- }
- }'
- ```
+ 2. Paste the URL of each file into the search box.
-5. Create Subproject. Use your previously created entitlements groups that you would like to add as ACLs (Access Control List) admins and viewers. If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition. You may have many subprojects within a data partition, so this command allows you to provide access to a specific subproject without providing access to an entire data partition. Data partition entitlements don't necessarily translate to the subprojects within it, so it's important to be explicit about the ACLs for each subproject, regardless of what data partition it is in.
+ [![Screenshot that shows importing collection and environment files in Postman via URL.](media/tutorial-ddms/postman-import-search.png)](media/tutorial-ddms/postman-import-search.png#lightbox)
+
+3. In the Postman environment, update **CURRENT VALUE** with the information from your Azure Data Manager for Energy instance details.
- Later in this tutorial, you'll need at least one `owner` and at least one `viewer`. These user groups will look like `data.default.owners` and `data.default.viewers`. Make sure to include one of each in your list of `acls` in the request below.
+ 1. In Postman, in the left menu, select **Environments**, and then select **SEGYtoZGY Environment**.
- ```bash
- curl --location --request POST '<url>/seistore-svc/api/v3/subproject/tenant/<data-partition>/subproject/<subproject>' \
- --header 'Authorization: Bearer {{TOKEN}}' \
- --header 'Content-Type: text/plain' \
- --data-raw '{
- "admin": "test@email",
- "storage_class": "MULTI_REGIONAL",
- "storage_location": "US",
- "acls": {
- "admins": [
- "<user-group>@<data-partition>.<domain>.com",
- "<user-group>@<data-partition>.<domain>.com"
- ],
- "owners": [
- "<user-group>@<data-partition>.<domain>.com"
- ],
- "viewers": [
- "<user-group>@<data-partition>.<domain>.com"
- ]
- }
- }'
- ```
+ 2. In the **CURRENT VALUE** column, enter the information that's described in the table in 'Get your Azure Data Manager for Energy instance details'.
- The following request is an example of the create subproject request:
+ [![Screenshot that shows where to enter current values in SEGYtoZGY environment.](media/how-to-convert-segy-to-zgy/postman-environment-current-values.png)](media/how-to-convert-segy-to-zgy/postman-environment-current-values.png#lightbox)
- ```bash
- curl --location --request POST 'https://<instance>.energy.azure.com/seistore-svc/api/v3/subproject/tenant/<instance>-<data-partition-name>/subproject/subproject1' \
- --header 'Authorization: Bearer eyJ...' \
- --header 'Content-Type: text/plain' \
- --data-raw '{
- "admin": "test@email",
- "storage_class": "MULTI_REGIONAL",
- "storage_location": "US",
- "acls": {
- "admins": [
- "service.seistore.p4d.tenant01.subproject01.admin@slb.p4d.cloud.slb-ds.com",
- "service.seistore.p4d.tenant01.subproject01.editor@slb.p4d.cloud.slb-ds.com"
- ],
- "owners": [
- "data.default.owners@slb.p4d.cloud.slb-ds.com"
- ],
- "viewers": [
- "service.seistore.p4d.tenant01.subproject01.viewer@slb.p4d.cloud.slb-ds.com"
- ]
- }
- }'
- ```
+## Step-by-step process to convert a SEG-Y file to a ZGY file
+The Postman collection provided has all of the sample calls to serve as a guide. You can also retrieve the equivalent cURL command for a Postman call by clicking the **Code** button.
-6. Patch Subproject with the legal tag you created above. Recall that the format of the legal tag will be prefixed with the Azure Data Manager for Energy instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`.
+[![Screenshot that shows the Code button in Postman.](media/how-to-convert-segy-to-zgy/postman-code-button.png)](media/how-to-convert-segy-to-zgy/postman-code-button.png#lightbox)
- ```bash
- curl --location --request PATCH '<url>/seistore-svc/api/v3/subproject/tenant/<data-partition>/subproject/<subproject-name>' \
- --header 'ltag: <Tag-name-above>' \
- --header 'recursive: true' \
- --header 'Authorization: Bearer {{TOKEN}}' \
- --header 'Content-Type: text/plain' \
- --data-raw '{
- "admin": "test@email",
- "storage_class": "MULTI_REGIONAL",
- "storage_location": "US",
- "acls": {
- "admins": [
- "<user-group>@<data-partition>.<domain>.com",
- "<user-group>@<data-partition>.<domain>.com"
- ],
- "viewers": [
- "<user-group>@<data-partition>.<domain>.com"
- ]
- }
- }'
- ```
+### Create a Legal Tag
-7. Open the [sdutil](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil) codebase and edit the `config.yaml` at the root. Replace the contents of this config file with the following yaml. See [How to generate a refresh token](how-to-generate-refresh-token.md) to generate the required refresh token. Once you've generated the token, store it in a place where you'll be able to access it in the future.
-
- ```yaml
- seistore:
- service: '{"azure": {"azureEnv":{"url": "<url>/seistore-svc/api/v3", "appkey": ""}}}'
- url: '<url>/seistore-svc/api/v3'
- cloud_provider: azure
- env: glab
- auth-mode: JWT Token
- ssl_verify: false
- auth_provider:
- azure: '{
- "provider": "azure",
- "authorize_url": "https://login.microsoftonline.com/", "oauth_token_host_end": "/oauth2/v2.0/token",
- "scope_end":"/.default openid profile offline_access",
- "redirect_uri":"http://localhost:8080",
- "login_grant_type": "refresh_token",
- "refresh_token": "<RefreshToken acquired earlier>"
- }'
- azure:
- empty: none
- ```
+[![Screenshot of creating Legal Tag.](media/how-to-convert-segy-to-zgy/postman-api-create-legal-tag.png)](media/how-to-convert-segy-to-zgy/postman-api-create-legal-tag.png#lightbox)
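+
+For reference, the legal tag creation behind this Postman call is roughly the following cURL sketch; the property values shown are illustrative and should be replaced with values appropriate for your data.
+
+```bash
+# Create a legal tag for the seismic data; adjust the properties to match your data governance requirements.
+curl --location --request POST '<url>/api/legal/v1/legaltags' \
+  --header 'Content-Type: application/json' \
+  --header 'data-partition-id: <data-partition-id>' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --data-raw '{
+    "name": "<tag-name>",
+    "description": "Legal Tag added for Seismic",
+    "properties": {
+      "contractId": "123456",
+      "countryOfOrigin": ["US"],
+      "dataType": "Public Domain Data",
+      "exportClassification": "EAR99",
+      "originator": "<originator>",
+      "personalData": "No Personal Data",
+      "securityClassification": "Private",
+      "expirationDate": "2025-12-25"
+    }
+  }'
+```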
-8. Run the following commands using **sdutil** to see its working fine. Follow the directions in [Setup and Usage for Azure env](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil#setup-and-usage-for-azure-env). Understand that depending on your OS and Python version, you may have to run `python3` command as opposed to `python`. If you run into errors with these commands, refer to the [SDUTIL tutorial](./tutorial-seismic-ddms-sdutil.md). See [How to generate a refresh token](how-to-generate-refresh-token.md). Once you've generated the token, store it in a place where you'll be able to access it in the future.
+### Prepare dataset files
+Prepare the metadata / manifest file / records file for the dataset. The manifest file includes:
+ - WorkProduct
+ - SeismicBinGrid
+ - FileCollection
+ - SeismicTraceData
- > [!NOTE]
- > when running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`.
+Conversion uses a manifest file that you upload to your storage account later in order to run the conversion. This manifest file is created by using multiple JSON files and running a script. The JSON files for this process are stored [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/doc/sample-records/volve) for the Volve Dataset. For more information on Volve, such as where the dataset definitions come from, visit [their website](https://www.equinor.com/energy/volve-data-sharing). Complete the following steps in order to create the manifest file:
- ```bash
- python sdutil config init
- python sdutil auth login
- python sdutil ls sd://<data-partition>/<subproject>/
- ```
+1. Clone the [repo](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/) and navigate to the folder `doc/sample-records/volve`
+2. Edit the values in the `prepare-records.sh` bash script. Recall that the format of the legal tag is prefixed with the Azure Data Manager for Energy instance name and data partition name, so it looks like `<instancename>-<datapartitionname>-<legaltagname>`.
+```bash
+DATA_PARTITION_ID=<your-partition-id>
+ACL_OWNER=data.default.owners@<your-partition-id>.<your-tenant>.com
+ACL_VIEWER=data.default.viewers@<your-partition-id>.<your-tenant>.com
+LEGAL_TAG=<legal-tag-created>
+```
+3. Run the `prepare-records.sh` script.
+4. The output is a JSON array with all objects and is saved in the `all_records.json` file.
+5. Save the `filecollection_segy_id` and the `work_product_id` values in that JSON file to use in the conversion step. That way the converter knows where to look for the contents of your `all_records.json`.
-9. Upload your seismic file to your Seismic Store. Here's an example with a SEGY-format file called `source.segy`:
+> [!NOTE]
+> The `all_records.json` file must also contain appropriate data for each element.
+>
+> **Example**: The following parameters are used when calculating the ZGY coordinates for `SeismicBinGrid`:
+> - `P6BinGridOriginEasting`
+> - `P6BinGridOriginI`
+> - `P6BinGridOriginJ`
+> - `P6BinGridOriginNorthing`
+> - `P6ScaleFactorOfBinGrid`
+> - `P6BinNodeIncrementOnIaxis`
+> - `P6BinNodeIncrementOnJaxis`
+> - `P6BinWidthOnIaxis`
+> - `P6BinWidthOnJaxis`
+> - `P6MapGridBearingOfBinGridJaxis`
+> - `P6TransformationMethod`
+> - `persistableReferenceCrs` from the `asIngestedCoordinates` block
+> If the `SeismicBinGrid` has the P6 parameters and the CRS specified under `AsIngestedCoordinates`, the conversion itself should be able to complete successfully, but Petrel will not understand the survey geometry of the file unless it also gets the 5 corner points under `SpatialArea`, `AsIngestedCoordinates`, `SpatialArea`, and `Wgs84Coordinates`.
- ```bash
- python sdutil cp source.segy sd://<data-partition>/<subproject>/destination.segy
- ```
+### User Access
- If you would like to use a test file we supply instead, download [this file](https://community.opengroup.org/osdu/platform/testing/-/tree/master/Postman%20Collection/40_CICD_OpenVDS) to your local machine then run the following command:
+The user needs to be part of the `users.datalake.admins` group. Validate the current entitlements for the user using the following call:
+[![Screenshot that shows the API call to get user groups in Postman.](media/how-to-convert-segy-to-zgy/postman-api-get-user-groups.png)](media/how-to-convert-segy-to-zgy/postman-api-get-user-groups.png#lightbox)
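+
+A cURL equivalent of the entitlements check is sketched below; it returns the groups the calling user belongs to.
+
+```bash
+# List the entitlement groups for the calling user.
+curl --location --request GET '<url>/api/entitlements/v2/groups/' \
+  --header 'data-partition-id: <data-partition-id>' \
+  --header 'Authorization: Bearer {{TOKEN}}'
+```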
- ```bash
- python sdutil cp ST10010ZC11_PZ_PSDM_KIRCH_FULL_T.MIG_FIN.POST_STACK.3D.JS-017536.segy sd://<data-partition>/<subproject>/destination.segy
- ```
+Later in this tutorial, you need at least one `owner` and at least one `viewer`. These user groups look like `data.default.owners` and `data.default.viewers`. Make sure to note one of each in your list.
- The sample records were meant to be similar to real-world data so a significant part of their content isn't directly related to conversion. This file is large and will take up about 1 GB of space.
+If the user isn't part of the required group, you can add the required entitlement using the following sample call:
+ The `email-id` is the `Id` value returned from the previous call.
-10. Create the manifest file (otherwise known as the records file)
+[![Screenshot that shows the API call to register a user as an admin in Postman.](media/how-to-convert-segy-to-zgy/postman-api-add-user-to-admins.png)](media/how-to-convert-segy-to-zgy/postman-api-add-user-to-admins.png#lightbox)
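+
+A cURL sketch of the same call is shown below; it mirrors the earlier cURL-based instructions for this step, with `<email-id>` being the `Id` value noted above.
+
+```bash
+# Add the user (or app ID) to the users.datalake.admins group as a MEMBER.
+curl --location --request POST '<url>/api/entitlements/v2/groups/users.datalake.admins@<data-partition-id>.<domain>.com/members' \
+  --header 'Content-Type: application/json' \
+  --header 'data-partition-id: <data-partition-id>' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --data-raw '{
+    "email": "<email-id>",
+    "role": "MEMBER"
+  }'
+```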
- ZGY conversion uses a manifest file that you'll upload to your storage account in order to run the conversion. This manifest file is created by using multiple JSON files and running a script. The JSON files for this process are stored [here](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/doc/sample-records/volve). For more information on Volve, such as where the dataset definitions come from, visit [their website](https://www.equinor.com/energy/volve-data-sharing). Complete the following steps in order to create the manifest file:
+If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition.
- * Clone the [repo](https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-zgy-conversion/-/tree/master/) and navigate to the folder doc/sample-records/volve
- * Edit the values in the `prepare-records.sh` bash script. Recall that the format of the legal tag will be prefixed with the Azure Data Manager for Energy instance name and data partition name, so it looks like `<instancename>`-`<datapartitionname>`-`<legaltagname>`.
+### Prepare Subproject
- * `DATA_PARTITION_ID=<your-partition-id>`
- * `ACL_OWNER=data.default.owners@<your-partition-id>.<your-tenant>.com`
- * `ACL_VIEWER=data.default.viewers@<your-partition-id>.<your-tenant>.com`
- * `LEGAL_TAG=<legal-tag-created-above>`
+#### 1. Register Data Partition to Seismic
- * Run the `prepare-records.sh` script.
- * The output will be a JSON array with all objects and will be saved in the `all_records.json` file.
- * Save the `filecollection_segy_id` and the `work_product_id` values in that JSON file to use in the conversion step. That way the converter knows where to look for this contents of your `all_records.json`.
+[![Screenshot that shows the API call to register a data partition as a seismic tenant in Postman.](media/how-to-convert-segy-to-zgy/postman-api-register-tenant.png)](media/how-to-convert-segy-to-zgy/postman-api-register-tenant.png#lightbox)
-11. Insert the contents of your `all_records.json` file in storage for work-product, seismic trace data, seismic grid, and file collection. In other words, copy and paste the contents of that file to the `--data-raw` field in the following command. If the above steps have produced two sets, you can run this command twice, using each set once.
+#### 2. Create Subproject
- ```bash
- curl --location --request PUT '<url>/api/storage/v2/records' \
- --header 'Content-Type: application/json' \
- --header 'data-partition-id: <data-partition>' \
- --header 'Authorization: Bearer {{TOKEN}}' \
- --data-raw '[
- {
- ...
- "kind": "osdu:wks:work-product--WorkProduct:1.0.0",
- ...
- },
- {
- ...
- "kind": "osdu:wks:work-product-component--SeismicTraceData:1.0.0"
- ...
- },
- {
- ...
- "kind": "osdu:wks:work-product-component--SeismicBinGrid:1.0.0",
- ...
- },
- {
- ...
- "kind": "osdu:wks:dataset--FileCollection.SEGY:1.0.0",
- ...
- }
- ]
- '
- ```
+Use your previously created entitlement groups that you would like to add as ACL (Access Control List) admins and viewers. Data partition entitlements don't necessarily carry over to the subprojects within that partition, so it's important to be explicit about the ACLs for each subproject, regardless of which data partition it is in.
-12. Trigger the ZGY Conversion DAG to convert your data using the values you had saved above. Your call will look like this:
+[![Screenshot that shows the API call to create a seismic subproject in Postman.](media/how-to-convert-segy-to-zgy/postman-api-create-subproject.png)](media/how-to-convert-segy-to-zgy/postman-api-create-subproject.png#lightbox)
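+
+The subproject creation behind this Postman call is roughly the following cURL sketch, mirroring the earlier cURL-based version of this step; supply your own admin address and ACL group names.
+
+```bash
+# Create the subproject and set its ACLs; include at least one owners and one viewers group.
+curl --location --request POST '<url>/seistore-svc/api/v3/subproject/tenant/<data-partition-id>/subproject/<subproject>' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --header 'Content-Type: text/plain' \
+  --data-raw '{
+    "admin": "<admin-email>",
+    "storage_class": "MULTI_REGIONAL",
+    "storage_location": "US",
+    "acls": {
+      "admins": [
+        "<admin-group>@<data-partition-id>.<domain>.com"
+      ],
+      "owners": [
+        "data.default.owners@<data-partition-id>.<domain>.com"
+      ],
+      "viewers": [
+        "data.default.viewers@<data-partition-id>.<domain>.com"
+      ]
+    }
+  }'
+```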
- ```bash
- curl --location --request POST '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun' \
- --header 'data-partition-id: <data-partition>' \
- --header 'Content-Type: application/json' \
- --header 'Authorization: Bearer {{TOKEN}}' \
- --data-raw '{
- "executionContext": {
- "data_partition_id": <data-partition>,
- "sd_svc_api_key": "test-sd-svc",
- "storage_svc_api_key": "test-storage-svc",
- "filecollection_segy_id": "<data-partition>:dataset--FileCollection.SEGY:<guid>",
- "work_product_id": "<data-partition>:work-product--WorkProduct:<guid>"
- }
+#### 3. Create dataset
+
+> [!NOTE]
+> This step is only required if you are not using `sdutil` for uploading the seismic files.
+
+[![Screenshot that shows the API call to create a seismic dataset in Postman.](media/how-to-convert-segy-to-zgy/postman-api-create-dataset.png)](media/how-to-convert-segy-to-zgy/postman-api-create-dataset.png#lightbox)
+
+### Upload the File
+
+There are two ways to upload a SEGY file. One option is to use the sasurl through a Postman or curl call. You need to download Postman or set up curl on your OS.
+The second method is to use [SDUTIL](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tags/azure-stable). To log in to your instance for ADME via the tool, you need to generate a refresh token for the instance. See [How to generate a refresh token](how-to-generate-refresh-token.md). Alternatively, you can modify the code of SDUTIL to use client credentials to log in. If you haven't already, you need to set up SDUTIL. Download the codebase and edit the `config.yaml` at the root. Replace the contents of this config file with the following yaml.
+
+```yaml
+seistore:
+ service: '{"azure": {"azureEnv":{"url": "<instance url>/seistore-svc/api/v3", "appkey": ""}}}'
+ url: '<instance url>/seistore-svc/api/v3'
+ cloud_provider: azure
+ env: glab
+ auth-mode: JWT Token
+ ssl_verify: false
+auth_provider:
+ azure: '{
+ "provider": "azure",
+ "authorize_url": "https://login.microsoftonline.com/", "oauth_token_host_end": "/oauth2/v2.0/token",
+ "scope_end":"/.default openid profile offline_access",
+ "redirect_uri":"http://localhost:8080",
+ "login_grant_type": "refresh_token",
+ "refresh_token": "<RefreshToken acquired earlier>"
}'
- ```
+azure:
+ empty: none
+```
-13. Let the DAG run to the `succeeded` state. You can check the status using the workflow status call. You'll get run ID in the response of the above call
+#### Method 1: Postman
- ```bash
- curl --location --request GET '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun/<run-id>' \
- --header 'Data-Partition-Id: <data-partition>' \
- --header 'Content-Type: application/json' \
- --header 'Authorization: Bearer {{TOKEN}}'
- ```
+##### Get the sasurl:
+
+[![Screenshot that shows the API call to get a GCS upload URL in Postman.](media/how-to-convert-segy-to-zgy/postman-api-get-gcs-upload-url.png)](media/how-to-convert-segy-to-zgy/postman-api-get-gcs-upload-url.png#lightbox)
+
+##### Upload the file:
+
+You need to select the file to upload in the Body section of the API call.
+
+[![Screenshot that shows the API call to upload a file in Postman.](media/how-to-convert-segy-to-zgy/postman-api-upload-file.png)](media/how-to-convert-segy-to-zgy/postman-api-upload-file.png#lightbox)
++
+[![Screenshot that shows the API call to upload a file binary in Postman.](media/how-to-convert-segy-to-zgy/postman-api-upload-file-binary.png)](media/how-to-convert-segy-to-zgy/postman-api-upload-file-binary.png#lightbox)
+
+##### Verify upload
+
+[![Screenshot that shows the API call to verify a file binary is uploaded in Postman.](media/how-to-convert-segy-to-zgy/postman-api-verify-file-upload.png)](media/how-to-convert-segy-to-zgy/postman-api-verify-file-upload.png#lightbox)
+
+#### Method 2: SDUTIL
+
+**sdutil** is an OSDU desktop utility for accessing the seismic service. We use it to upload and download files. Use the azure-stable tag from [SDUTIL](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tags/azure-stable).
+
+> [!NOTE]
+> When running `python sdutil config init`, you don't need to enter anything when prompted with `Insert the azure (azureGlabEnv) application key:`.
-14. You can see if the converted file is present using the following command:
+```bash
+python sdutil config init
+python sdutil auth login
+python sdutil ls sd://<data-partition-id>/<subproject>/
+```
+
+Upload your seismic file to your Seismic Store. Here's an example with a SEGY-format file called `source.segy`:
+
+```bash
+python sdutil cp <local folder>/source.segy sd://<data-partition-id>/<subproject>/destination.segy
+```
+For example:
+
+```bash
+python sdutil cp ST10010ZC11_PZ_PSDM_KIRCH_FULL_T.MIG_FIN.POST_STACK.3D.JS-017536.segy sd://<data-partition-id>/<subproject>/destination.segy
+```
+
+### Create Storage Records
+
+Insert the contents of your `all_records.json` file in storage for work-product, seismic trace data, seismic grid, and file collection. Copy and paste the contents of that file to the request body of the API call.
+
+[![Screenshot that shows the API call to create storage records in Postman.](media/how-to-convert-segy-to-zgy/postman-api-create-records.png)](media/how-to-convert-segy-to-zgy/postman-api-create-records.png#lightbox)
+
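+A cURL sketch of the storage call is shown below; it sends the JSON array produced by `prepare-records.sh` as the request body.
+
+```bash
+# PUT the records from all_records.json (WorkProduct, SeismicTraceData, SeismicBinGrid, FileCollection) into storage.
+curl --location --request PUT '<url>/api/storage/v2/records' \
+  --header 'Content-Type: application/json' \
+  --header 'data-partition-id: <data-partition-id>' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --data-binary '@all_records.json'
+```
+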
+### Run Converter
+
+1. Trigger the ZGY Conversion DAG to convert your data using the execution context values you saved earlier.
+
+ Fetch the id token from sdutil for the uploaded file or use an access/bearer token from Postman.
+
+```bash
+python sdutil auth idtoken
+```
+
+[![Screenshot that shows the API call to start the conversion workflow in Postman.](media/how-to-convert-segy-to-zgy/postman-api-start-workflow.png)](media/how-to-convert-segy-to-zgy/postman-api-start-workflow.png#lightbox)
+
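+For reference, the ZGY workflow trigger corresponds roughly to the cURL call below, mirroring the earlier cURL-based version of this step; use the `filecollection_segy_id` and `work_product_id` values you saved from `all_records.json`.
+
+```bash
+# Trigger the ZGY conversion DAG with the IDs saved from all_records.json.
+curl --location --request POST '<url>/api/workflow/v1/workflow/<dag-name>/workflowRun' \
+  --header 'data-partition-id: <data-partition-id>' \
+  --header 'Content-Type: application/json' \
+  --header 'Authorization: Bearer {{TOKEN}}' \
+  --data-raw '{
+    "executionContext": {
+      "data_partition_id": "<data-partition-id>",
+      "sd_svc_api_key": "test-sd-svc",
+      "storage_svc_api_key": "test-storage-svc",
+      "filecollection_segy_id": "<data-partition-id>:dataset--FileCollection.SEGY:<guid>",
+      "work_product_id": "<data-partition-id>:work-product--WorkProduct:<guid>"
+    }
+  }'
+```
+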
+2. Let the DAG run to the `succeeded` state. You can check the status using the workflow status call. The run ID is in the response of the previous call.
+
+[![Screenshot that shows the API call to check the conversion workflow's status in Postman.](media/how-to-convert-segy-to-zgy/postman-api-check-workflow-status.png)](media/how-to-convert-segy-to-zgy/postman-api-check-workflow-status.png#lightbox)
+
+3. You can see if the converted file is present using the following command in sdutil or in the Postman API call:
```bash
- python sdutil ls sd://<data-partition>/<subproject>
+ python sdutil ls sd://<data-partition-id>/<subproject>
```
-15. You can download and inspect the file using the [sdutil](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil) `cp` command:
+[![Screenshot that shows the API call to check if the file has been converted.](media/how-to-convert-segy-to-zgy/postman-api-verify-file-converted.png)](media/how-to-convert-segy-to-zgy/postman-api-verify-file-converted.png#lightbox)
+
+4. You can download and inspect the file using the [sdutil](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/tags/azure-stable) `cp` command:
```bash
- python sdutil cp sd://<data-partition>/<subproject>/<filename.zgy> <local/destination/path>
+ python sdutil cp sd://<data-partition-id>/<subproject>/<filename.zgy> <local/destination/path>
``` OSDU&trade; is a trademark of The Open Group. ## Next steps <!-- Add a context sentence for the following links --> > [!div class="nextstepaction"]
-> [How to convert segy to ovds](./how-to-convert-segy-to-ovds.md)
+> [How to convert SEGY to OVDS](./how-to-convert-segy-to-ovds.md)
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/delivery-properties.md
To set headers with a fixed value, provide the name of the header and its value
You might want to check **Is secret?** when you're providing sensitive data. The visibility of sensitive data on the Azure portal depends on the user's RBAC permission. ## Setting dynamic header values
-You can set the value of a header based on a property in an incoming event. Use JsonPath syntax to refer to an incoming event's property value to be used as the value for a header in outgoing requests. For example, to set the value of a header named **Channel** using the value of the incoming event property **system** in the event data, configure your event subscription in the following way:
+You can set the value of a header based on a property in an incoming event. Use JsonPath syntax to refer to an incoming event's property value to be used as the value for a header in outgoing requests. Only JSON values of string, number and boolean are supported. For example, to set the value of a header named **Channel** using the value of the incoming event property **system** in the event data, configure your event subscription in the following way:
:::image type="content" source="./media/delivery-properties/dynamic-header-property.png" alt-text="Delivery properties - dynamic":::
external-attack-surface-management Understanding Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-dashboards.md
Microsoft Defender External Attack Surface Management (Defender EASM) offers a s
Defender EASM provides five dashboards: - **Overview**: this dashboard is the default landing page when you access Defender EASM. It provides the key context that can help you familiarize yourself with your attack surface. -- **Attack Surface Summary**: this dashboard summarizes the key observations derived from your inventory. It provides a high-level overview of your Attack Surface and the asset types that comprise it, and surfaces potential vulnerabilities by severity (high, medium, low). This dashboard also provides key context on the infrastructure that comprises your Attack Surface, providing insight into cloud hosting, sensitive services, SSL certificate and domain expiry, and IP reputation.-- **Security Posture**: this dashboard helps organizations understand the maturity and complexity of their security program based on the metadata derived from assets in your Approved inventory. It is comprised of technical and non-technical policies, processes and controls that mitigate risk of external threats. This dashboard provides insight on CVE exposure, domain administration and configuration, hosting and networking, open ports, and SSL certificate configuration.-- **GDPR Compliance**: this dashboard surfaces key areas of compliance risk based on the General Data Protection Regulation (GDPR) requirements for online infrastructure thatΓÇÖs accessible to European nations. This dashboard provides insight on the status of your websites, SSL certificate issues, exposed personal identifiable information (PII), login protocols, and cookie compliance.
+- **Attack surface summary**: this dashboard summarizes the key observations derived from your inventory. It provides a high-level overview of your Attack Surface and the asset types that comprise it, and surfaces potential vulnerabilities by severity (high, medium, low). This dashboard also provides key context on the infrastructure that comprises your Attack Surface, providing insight into cloud hosting, sensitive services, SSL certificate and domain expiry, and IP reputation.
+- **Security posture**: this dashboard helps organizations understand the maturity and complexity of their security program based on the metadata derived from assets in your Approved inventory. It is comprised of technical and non-technical policies, processes and controls that mitigate risk of external threats. This dashboard provides insight on CVE exposure, domain administration and configuration, hosting and networking, open ports, and SSL certificate configuration.
+- **GDPR compliance**: this dashboard surfaces key areas of compliance risk based on the General Data Protection Regulation (GDPR) requirements for online infrastructure that's accessible to European nations. This dashboard provides insight on the status of your websites, SSL certificate issues, exposed personal identifiable information (PII), login protocols, and cookie compliance.
- **OWASP Top 10**: this dashboard surfaces any assets that are vulnerable according to OWASP's list of the most critical web application security risks. On this dashboard, organizations can quickly identify assets with broken access control, cryptographic failures, injections, insecure designs, security misconfigurations and other critical risks as defined by OWASP. ## Accessing dashboards
Insight Priorities are determined by Microsoft's assessment of the potential i
Some insights will be flagged with "Potential" in the title. A "Potential" insight occurs when Defender EASM is unable to confirm that an asset is impacted by a vulnerability. This is common when our scanning system detects the presence of a specific service but cannot detect the version number; for example, some services enable administrators to hide version information. Vulnerabilities are often associated with specific versions of the software, so manual investigation is required to determine whether the asset is impacted. Other vulnerabilities can be remediated by steps that Defender EASM is unable to detect. For instance, users can make recommended changes to service configurations or run backported patches. If an insight is prefaced with "Potential", the system has reason to believe that the asset is impacted by the vulnerability but is unable to confirm it for one of the above listed reasons. To manually investigate, please click the insight name to review remediation guidance that can help you determine whether your assets are impacted.
-![Screenshot of attack surface priorities with clickable options highlighted](media/Dashboards-2.png)
+![Screenshot of attack surface priorities with clickable options highlighted.](media/Dashboards-2.png)
A user will usually decide to first investigate any High Severity Observations. You can click the top-listed observation to be directly routed to a list of impacted assets, or instead select "View All __ Insights" to see a comprehensive, expandable list of all potential observations within that severity group. The Observations page features a list of all potential insights in the left-hand column. This list is sorted by the number of assets that are impacted by each security risk, displaying the issues that impact the greatest number of assets first. To view the details of any security risk, simply click on it from this list.
-![Screenshot of attack surface drilldown for medium severity priorities](media/Dashboards-3.png)
+![Screenshot of attack surface drilldown for medium severity priorities.](media/Dashboards-3.png)
This detailed view for any observation will include the title of the issue, a description, and remediation guidance from the Defender EASM team. In this example, the description explains how expired SSL certificates can lead to critical business functions becoming unavailable, preventing customers or employees from accessing web content and thus damaging your organization's brand. The Remediation section provides advice on how to swiftly fix the issue; in this example, Microsoft recommends that you review the certificates associated with the impacted host assets, update the coinciding SSL certificate(s), and update your internal procedures to ensure that SSL certificates are updated in a timely manner.
Finally, the Asset section lists any entities that have been impacted by this sp
From the Asset Details page, we'll then click on the "SSL certificates" tab to view more information about the expired certificate. In this example, the listed certificate shows an "Expires" date in the past, indicating that the certificate is currently expired and therefore likely inactive. This section also provides the name of the SSL certificate which you can then send to the appropriate team within your organization for swift remediation.
-![Screenshot of impacted asset list from drilldown view, must be expired SSL certificate](media/Dashboards-4.png)
+![Screenshot of impacted asset list from drilldown view, must be expired SSL certificate.](media/Dashboards-4.png)
### Attack surface composition The following section provides a high-level summary of the composition of your Attack Surface. This chart provides counts of each asset type, helping users understand how their infrastructure is spread across domains, hosts, pages, SSL certificates, ASNs, IP blocks, IP addresses and email contacts.
-![Screenshot of asset details view of same SSL certificate showing expiration highlighted](media/Dashboards-5.png)
+![Screenshot of asset details view of same SSL certificate showing expiration highlighted.](media/Dashboards-5.png)
Each value is clickable, routing users to their inventory list filtered to display only assets of the designated type. From this page, you can click on any asset to view more details, or you can add additional filters to narrow down the list according to your needs.
Each value is clickable, routing users to their inventory list filtered to displ
This section of the Attack Surface Summary dashboard provides insight on the cloud technologies used across your infrastructure. As most organizations adapt to the cloud gradually, the hybrid nature of your online infrastructure can be difficult to monitor and manage. Defender EASM helps organizations understand the usage of specific cloud technologies across your Attack Surface, mapping cloud host providers to your confirmed assets to inform your cloud adoption program and ensure compliance with your organization's process.
-![Screenshot of cloud chart](media/Dashboards-6.png)
+![Screenshot of cloud chart.](media/Dashboards-6.png)
For instance, your organization may have recently decided to migrate all cloud infrastructure to a single provider to simplify and consolidate their Attack Surface. This chart can help you identify assets that still need to be migrated. Each bar of the chart is clickable, routing users to a filtered list that displays the assets that comprise the chart value.
For instance, your organization may have recently decided to migrate all cloud i
This section displays sensitive services detected on your Attack Surface that should be assessed and potentially adjusted to ensure the security of your organization. This chart highlights any services that have historically been vulnerable to attack or are common vectors of information leakage to malicious actors. Any assets in this section should be investigated, and Microsoft recommends that organizations consider alternative services with a better security posture to mitigate risk.
-![Screenshot of sensitive services chart](media/Dashboards-7.png)
+![Screenshot of sensitive services chart.](media/Dashboards-7.png)
The chart is organized by the name of each service; clicking on any individual bar will return a list of assets that are running that particular service. The chart below is empty, indicating that the organization is not currently running any services that are especially susceptible to attack.
The chart is organized by the name of each service; clicking on any individual b
These two expiration charts display upcoming SSL Certificate and Domain expirations, ensuring that an organization has ample visibility into upcoming renewals of key infrastructure. An expired domain can suddenly make key content inaccessible, and the domain could even be swiftly purchased by a malicious actor who intends to target your organization. An expired SSL Certificate leaves corresponding assets susceptible to attack.
-![Screenshot of SSL charts](media/Dashboards-8.png)
+![Screenshot of SSL charts.](media/Dashboards-8.png)
Both charts are organized by the expiration timeframe, ranging from "greater than 90 days" to already expired. Microsoft recommends that organizations immediately renew any expired SSL certificates or domains, and proactively arrange the renewal of assets due to expire in 30-60 days.
Both charts are organized by the expiration timeframe, ranging from ΓÇ£greater t
IP reputation data helps users understand the trustworthiness of your attack surface and identify potentially compromised hosts. Microsoft develops IP reputation scores based on our proprietary data as well as IP information collected from external sources. We recommend further investigation of any IP addresses identified here, as a suspicious or malicious score associated with an owned asset indicates that the asset is susceptible to attack or has already been leveraged by malicious actors.
-![Screenshot of IP reputation chart](media/Dashboards-9.png)
+![Screenshot of IP reputation chart.](media/Dashboards-9.png)
This chart is organized by the detection policy that triggered a negative reputation score. For instance, the DDOS value indicates that the IP address has been involved in a Distributed Denial-Of-Service attack. Users can click on any bar value to access a list of assets that comprise it. In the example below, the chart is empty which indicates all IP addresses in your inventory have satisfactory reputation scores.
This chart is organized by the detection policy that triggered a negative reputa
The Security Posture dashboard helps organizations measure the maturity of their security program based on the status of assets in your Confirmed Inventory. It is comprised of technical and non-technical policies, processes and controls that mitigate the risk of external threats. This dashboard provides insight on CVE exposure, domain administration and configuration, hosting and networking, open ports, and SSL certificate configuration.
-![Screenshot of security posture chart](media/Dashboards-10.png)
+![Screenshot of security posture chart.](media/Dashboards-10.png)
### CVE exposure The first chart in the Security Posture dashboard relates to the management of an organizationΓÇÖs website portfolio. Microsoft analyzes website components such as frameworks, server software, and 3rd party plugins and then matches them to a current list of Common Vulnerability Exposures (CVEs) to identify vulnerability risks to your organization. The web components that comprise each website are inspected daily to ensure recency and accuracy.
-![Screenshot of CVE exposure chart](media/Dashboards-11.png)
+![Screenshot of CVE exposure chart.](media/Dashboards-11.png)
It is recommended that users immediately address any CVE-related vulnerabilities, mitigating risk by updating your web components or following the remediation guidance for each CVE. Each bar on the chart is clickable, displaying a list of any impacted assets.
It is recommended that users immediately address any CVE-related vulnerabilities
This chart provides insight on how an organization manages its domains. Companies with a decentralized domain portfolio management program are susceptible to unnecessary threats, including domain hijacking, domain shadowing, email spoofing, phishing, and illegal domain transfers. A cohesive domain registration process mitigates this risk. For instance, organizations should use the same registrars and registrant contact information for their domains to ensure that all domains are mappable to the same entities. This helps ensure that domains don't slip through the cracks as you update and maintain them.
-![Screenshot of domain administration chart](media/Dashboards-12.png)
+![Screenshot of domain administration chart.](media/Dashboards-12.png)
Each bar of the chart is clickable, routing to a list of all assets that comprise the value.
Each bar of the chart is clickable, routing to a list of all assets that compris
This chart provides insight on the security posture related to where an organization's hosts are located. Risk associated with ownership of Autonomous Systems depends on the size and maturity of an organization's IT department.
-![Screenshot of hosting and networking chart](media/Dashboards-13.png)
+![Screenshot of hosting and networking chart.](media/Dashboards-13.png)
Each bar of the chart is clickable, routing to a list of all assets that comprise the value.
Each bar of the chart is clickable, routing to a list of all assets that compris
This section helps organizations understand the configuration of their domain names, surfacing any domains that may be susceptible to unnecessary risk. Extensible Provisioning Protocol (EPP) domain status codes indicate the status of a domain name registration. All domains have at least one code, although multiple codes can apply to a single domain. This section is useful to understanding the policies in place to manage your domains, or missing policies that leave domains vulnerable.
-![Screenshot of domain config chart](media/Dashboards-14.png)
+![Screenshot of domain config chart.](media/Dashboards-14.png)
For instance, the "clientUpdateProhibited" status code prevents unauthorized updates to your domain name; an organization must contact their registrar to lift this code and make any updates. The chart below searches for domain assets that do not have this status code, indicating that the domain is currently open to updates which can potentially result in fraud. Users should click any bar on this chart to view a list of assets that do not have the appropriate status codes applied to them so they can update their domain configurations accordingly.
For instance, the ΓÇ£clientUpdateProhibitedΓÇ¥ status code prevents unauthorized
This section helps users understand how their IP space is managed, detecting services that are exposed on the open internet. Attackers commonly scan ports across the internet to look for known exploits related to service vulnerabilities or misconfigurations. Microsoft identifies these open ports to complement vulnerability assessment tools, flagging observations for review to ensure they are properly managed by your information technology team.
-![Screenshot of open ports chart](media/Dashboards-15.png)
+![Screenshot of open ports chart.](media/Dashboards-15.png)
By performing basic TCP SYN/ACK scans across all open ports on the addresses in an IP space, Microsoft detects ports that may need to be restricted from direct access to the open internet. Examples include databases, DNS servers, IoT devices, routers and switches. This data can also be used to detect shadow IT assets or insecure remote access services. All bars on this chart are clickable, opening a list of assets that comprise the value so your organization can investigate the open port in question and remediate any risk.
By performing basic TCP SYN/ACK scans across all open ports on the addresses in
The SSL configuration and organization charts display common SSL-related issues that may impact functions of your online infrastructure.
-![Screenshot of SSL configuration and organization charts](media/Dashboards-16.png)
+![Screenshot of SSL configuration and organization charts.](media/Dashboards-16.png)
For instance, the SSL configuration chart displays any detected configuration issues that can disrupt your online services. This includes expired SSL certificates and certificates using outdated signature algorithms like SHA1 and MD5, resulting in unnecessary security risk to your organization. The SSL organization chart provides insight into the registration of your SSL certificates, indicating the organization and business units associated with each certificate. This can help users understand the designated ownership of these certificates; it is recommended that companies consolidate their organization and unit list when possible to help ensure proper management moving forward. ++ ## GDPR compliance dashboard The GDPR compliance dashboard presents an analysis of assets in your Confirmed Inventory as they relate to the requirements outlined in General Data Protection Regulation (GDPR). GDPR is a regulation in European Union (EU) law that enforces data protection and privacy standards for any online entities accessible to the EU. These regulations have become a model for similar laws outside of the EU, so it serves as an excellent guide on how to handle data privacy worldwide. This dashboard analyzes an organization's public-facing web properties to surface any assets that are potentially non-compliant with GDPR.
+![Screenshot of GDPR compliance dashboard.](media/Dashboards-18.png)
### Websites by status
This chart organizes your website assets by HTTP response status code. These cod
This chart organizes your websites by status code. Options include Active, Inactive, Requires Authorization, Broken, and Browser Error; users can click any component on the bar graph to view a comprehensive list of assets that comprise the value.
-### SSL certificate posture
+![Screenshot of Websites by status chart.](media/Dashboards-19.png)
-An organizationΓÇÖs security posture for SSL/TLS Certificates is a critical component of security for web-based communication. SSL certificates are leveraged by websites to ensure secure communication between a website and its users. Decentralized or complex management of SSL certificates heightens the risk of SSL certificates expiring, use of weak ciphers, and potential exposure to fraudulent SSL Registration. The GDPR compliance dashboard provides charts on live sites with certificate issues, certificate expiration time frames, and sites by certificate posture.
### Live sites with cert issues This chart displays pages that are actively serving content and present users with a warning that the site is insecure. The user must manually accept the warning to view the content on these pages. This can occur for a variety of reasons; this chart organizes results by the specific reason for easy mitigation. Options include broken certificates, active certificate issues, requires authorization and browser certificate errors.
+![Screenshot of SSL certificate posture chart.](media/Dashboards-20.png)
++ ### SSL certificate expiration This chart displays upcoming SSL Certificate expirations, ensuring that an organization has ample visibility into any upcoming renewals. An expired SSL Certificate leaves corresponding assets susceptible to attack and can make the content of a page inaccessible to the internet. This chart is organized by the detected expiry window, ranging from already expired to expiring in over 90 days. Users can click any component in the bar graph to access a list of applicable assets, making it easy to send a list of certificate names to your IT Department for remediation.
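To confirm an individual certificate flagged here, one option is a quick `openssl` check from any shell. The following sketch assumes `openssl` is available and uses `contoso.com` as a placeholder host; `-checkend 7776000` asks whether the certificate expires within the next 90 days (expressed in seconds).

```bash
# Print the certificate's expiry date and test whether it expires within the next 90 days
echo | openssl s_client -servername contoso.com -connect contoso.com:443 2>/dev/null \
  | openssl x509 -noout -enddate -checkend 7776000
```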
-### SSL certificate posture
+
+![Screenshot of Live sites with cert issues chart.](media/Dashboards-21.png)
+++
+### Sites by certificate posture
This section analyzes the signature algorithms that power an SSL certificate. SSL certificates can be secured with a variety of cryptographic algorithms; certain newer algorithms are considered more reputable and secure than older algorithms, so companies are advised to retire older algorithms like SHA-1. Users can click any segment of the pie chart to view a list of assets that comprise the selected value. SHA256 is considered secure, whereas organizations should update any certificates using the SHA1 algorithm.
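As a sanity check on a certificate surfaced by this chart, you can inspect its signature algorithm directly. This is a sketch only; `contoso.com` is a placeholder host.

```bash
# Show the signature algorithm used by a site's certificate (expect sha256WithRSAEncryption or similar)
echo | openssl s_client -servername contoso.com -connect contoso.com:443 2>/dev/null \
  | openssl x509 -noout -text | grep "Signature Algorithm"
```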
-### Personal identifiable information (PII) posture
-The protection of personal identifiable information (PII) is a critical component to the General Data Protection Regulation. PII is defined as any data that can identify an individual, including names, addresses, birthdays, or email addresses. Any website that accepts this data through a form must be thoroughly secured according to GDPR guidelines. By analyzing the Document Object Model (DOM) of your pages, Microsoft identifies forms and login pages that may accept PII and should therefore be assessed according to European Union law.
+![Screenshot of Sites by certificate posture chart.](media/Dashboards-22.png)
++
+### Live PII sites by protocol
+
+The protection of personal identifiable information (PII) is a critical component of the General Data Protection Regulation. PII is defined as any data that can identify an individual, including names, addresses, birthdays, or email addresses. Any website that accepts this data through a form must be thoroughly secured according to GDPR guidelines. By analyzing the Document Object Model (DOM) of your pages, Microsoft identifies forms and login pages that may accept PII and should therefore be assessed according to European Union law. The first chart in this section displays live sites by protocol, identifying sites using HTTP versus HTTPS protocols.
+
+![Screenshot of Live PII sites by protocol chart.](media/Dashboards-23.png)
++
+### Live PII sites by certificate posture
+
+This chart displays live PII sites by their usage of SSL certificates. By referencing this chart, you can quickly understand the hashing algorithms used across your sites that contain personal identifiable information.
+
+![Screenshot of Live PII sites by certificate posture chart.](media/Dashboards-24.png)
++
+### Login websites by protocol
+
+A login page is a page on a website where a user has the option to enter a username and password to gain access to services hosted on that site. Login pages have specific requirements under GDPR, so Defender EASM references the DOM of all scanned pages to search for code that correlates to a login. For instance, login pages must be secure to be compliant. This first chart displays Login websites by protocol (HTTP or HTTPS) and the second by certificate posture.
+
+![Screenshot of Login websites by protocol chart.](media/Dashboards-25.png)
+
+![Screenshot of Login websites by certificate posture chart.](media/Dashboards-26.png)
-### Login posture
-A login page is a page on a website where a user has the option to enter a username and password to gain access to services hosted on that site. Login pages have specific requirements under GDPR, so Defender EASM references the DOM of all scanned pages to search for code that correlates to a login. For instance, login pages must be secure to be compliant.
### Cookie posture A cookie is information in the form of a very small text file that is placed on the hard drive of the computer running a web browser when browsing a site. Each time a website is visited, the browser sends the cookie back to the server to notify the website of your previous activity. GDPR has specific requirements for obtaining consent to issue a cookie, and different storage regulations for first- versus third-party cookies.
+![Screenshot of Cookie posture chart.](media/Dashboards-27.png)
++ ## OWASP top 10 dashboard
external-attack-surface-management Using And Managing Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/using-and-managing-discovery.md
We recommend that you search for your organization's attack surface before you c
When you first access your Defender EASM instance, select **Getting Started** in the **General** section to search for your organization in the list of automated attack surfaces. Then choose your organization from the list and select **Build my Attack Surface**.
+![Screenshot that shows a preconfigured attack surface selection screen.](media/Discovery_1.png)
+ At this point, the discovery runs in the background. If you selected a preconfigured attack surface from the list of available organizations, you're redirected to the dashboard overview screen where you can view insights into your organization's infrastructure in Preview mode.
Custom discoveries are organized into discovery groups. They're independent seed
1. On the leftmost pane, under **Manage**, select **Discovery**.
- :::image type="content" source="media/Discovery_2.png" alt-text="Screenshot that shows a Defender EASM instance on the overview page with the Manage section highlighted.":::
+ ![Screenshot that shows a Defender EASM instance on the overview page with the Manage section highlighted.](media/Discovery_2.png)
-1. The **Discovery** page shows your list of discovery groups by default. This list is empty when you first access the platform. To run your first discovery, select **Add Discovery Group**.
+2. The **Discovery** page shows your list of discovery groups by default. This list is empty when you first access the platform. To run your first discovery, select **Add Discovery Group**.
- :::image type="content" source="media/Discovery_3.png" alt-text="Screenshot that shows the Discovery screen with Add Discovery Group highlighted.":::
+ ![Screenshot that shows the Discovery screen with Add Discovery Group highlighted.](media/Discovery_3.png)
-1. Name your new discovery group and add a description. The **Recurring Frequency** field allows you to schedule discovery runs for this group by scanning for new assets related to the designated seeds on a continuous basis. The default recurrence selection is **Weekly**. We recommend this cadence to ensure that your organization's assets are routinely monitored and updated.
+3. Name your new discovery group and add a description. The **Recurring Frequency** field allows you to schedule discovery runs for this group by scanning for new assets related to the designated seeds on a continuous basis. The default recurrence selection is **Weekly**. We recommend this cadence to ensure that your organization's assets are routinely monitored and updated.
For a single, one-time discovery run, select **Never**. We recommend that you keep the **Weekly** default cadence and instead turn off historical monitoring within your discovery group settings if you later decide to discontinue recurrent discovery runs.
-1. Select **Next: Seeds**.
+4. Select **Next: Seeds**.
+
+ ![Screenshot that shows the first page of the discovery group setup.](media/Discovery_4.png)
- :::image type="content" source="media/Discovery_4.png" alt-text="Screenshot that shows the first page of the discovery group setup.":::
-1. Select the seeds that you want to use for this discovery group. Seeds are known assets that belong to your organization. The Defender EASM platform scans these entities and maps their connections to other online infrastructure to create your attack surface.
+5. Select the seeds that you want to use for this discovery group. Seeds are known assets that belong to your organization. The Defender EASM platform scans these entities and maps their connections to other online infrastructure to create your attack surface.
+
+ ![Screenshot that shows the seed selection page of the discovery group setup.](media/Discovery_5.png)
- :::image type="content" source="media/Discovery_5.png" alt-text="Screenshot that shows the seed selection page of the discovery group setup.":::
The **Quick Start** option lets you search for your organization in a list of prepopulated attack surfaces. You can quickly create a discovery group based on the known assets that belong to your organization.
+
+ ![Screenshot that shows the prebaked attack surface selection page output in a seed list.](media/Discovery_6.png)
- :::image type="content" source="media/Discovery_6.png" alt-text="Screenshot that shows the prebaked attack surface selection page output in a seed list.":::
-
- :::image type="content" source="media/Discovery_7.png" alt-text="Screenshot that shows the prebaked attack surface selection page.":::
+ ![Screenshot that shows the prebaked attack surface selection page.](media/Discovery_7.png)
+
Alternatively, you can manually input your seeds. Defender EASM accepts organization names, domains, IP blocks, hosts, email contacts, ASNs, and Whois organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they aren't added to your inventory if detected. For example, exclusions are useful for organizations that have subsidiaries that will likely be connected to their central infrastructure, but don't belong to their organization. After your seeds are selected, select **Review + Create**.
-1. Review your group information and seed list and select **Create & Run**.
+6. Review your group information and seed list and select **Create & Run**.
+
+ ![Screenshot that shows the Review + Create screen.](media/Discovery_8.png)
- :::image type="content" source="media/Discovery_8.png" alt-text="Screenshot that shows the Review + Create screen.":::
You're taken back to the main Discovery page that displays your discovery groups. After your discovery run is finished, you see new assets added to your approved inventory.
Custom discoveries are organized into discovery groups. They're independent seed
You can manage your discovery groups from the main **Discovery** page. The default view displays a list of all your discovery groups and some key data about each one. From the list view, you can see the number of seeds, recurrence schedule, last run date, and created date for each group.
+ ![Screenshot that shows the discovery groups screen.](media/Discovery_9.png)
Select any discovery group to view more information, edit the group, or kickstart a new discovery process.
The discovery group details page contains the run history for the group. This se
Run history is organized by the seed assets that were scanned during the discovery run. To see a list of the applicable seeds, select **Details**. A pane opens on the right of your screen that lists all the seeds and exclusions by kind and name.
+ ![Screenshot that shows the run history for the discovery group screen.](media/Discovery_10.png)
+ ### View seeds and exclusions
The source name is the value that was input in the appropriate type box when you
When you input seeds, remember to validate the appropriate format for each entry. When you save the discovery group, the platform runs a series of validation checks and alerts you of any misconfigured seeds. For example, IP blocks should be input by network address (for example, the start of the IP range).
+ ![Screenshot that shows the Seeds view of a discovery page.](media/Discovery_11.png)
### Exclusions
firewall-manager Secure Hybrid Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-hybrid-network.md
Previously updated : 06/15/2022 Last updated : 09/26/2023
For this tutorial, you create three virtual networks:
- **VNet-Hub** - the firewall is in this virtual network. - **VNet-Spoke** - the spoke virtual network represents the workload located on Azure.-- **VNet-Onprem** - The on-premises virtual network represents an on-premises network. In an actual deployment, it can be connected by either a VPN or ExpressRoute connection. For simplicity, this tutorial uses a VPN gateway connection, and an Azure-located virtual network is used to represent an on-premises network.
+- **VNet-Onprem** - The on-premises virtual network represents an on-premises network. In an actual deployment, it can be connected using either a VPN or ExpressRoute connection. For simplicity, this tutorial uses a VPN gateway connection, and an Azure-located virtual network is used to represent an on-premises network.
![Hybrid network](media/tutorial-hybrid-portal/hybrid-network-firewall.png)
A hybrid network uses the hub-and-spoke architecture model to route traffic betw
- Set **AllowGatewayTransit** when peering VNet-Hub to VNet-Spoke. In a hub-and-spoke network architecture, a gateway transit allows the spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network.
- Additionally, routes to the gateway-connected virtual networks or on-premises networks will automatically propagate to the routing tables for the peered virtual networks using the gateway transit. For more information, see [Configure VPN gateway transit for virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md).
+ Additionally, routes to the gateway-connected virtual networks or on-premises networks are automatically propagated to the routing tables for the peered virtual networks using the gateway transit. For more information, see [Configure VPN gateway transit for virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md).
- Set **UseRemoteGateways** when you peer VNet-Spoke to VNet-Hub. If **UseRemoteGateways** is set and **AllowGatewayTransit** on remote peering is also set, the spoke virtual network uses gateways of the remote virtual network for transit. - To route the spoke subnet traffic through the hub firewall, you need a User Defined route (UDR) that points to the firewall with the **Virtual network gateway route propagation** setting disabled. This option prevents route distribution to the spoke subnets. This prevents learned routes from conflicting with your UDR.
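If you prefer to script the peering described above rather than use the portal, a rough Azure CLI equivalent might look like the following sketch. It uses the names from this tutorial; the `SpoketoHub` link name is an assumption.

```bash
# Hub-to-spoke peering: allow gateway transit so the spoke can use the hub's VPN gateway
az network vnet peering create \
  --resource-group FW-Hybrid-Test \
  --name HubtoSpoke \
  --vnet-name VNet-hub \
  --remote-vnet VNet-Spoke \
  --allow-vnet-access \
  --allow-forwarded-traffic \
  --allow-gateway-transit

# Spoke-to-hub peering: use the remote (hub) gateway for transit
az network vnet peering create \
  --resource-group FW-Hybrid-Test \
  --name SpoketoHub \
  --vnet-name VNet-Spoke \
  --remote-vnet VNet-hub \
  --allow-vnet-access \
  --allow-forwarded-traffic \
  --use-remote-gateways
```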
If you don't have an Azure subscription, create a [free account](https://azure.m
1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the Azure portal search bar, type **Firewall Manager** and press **Enter**.
-3. On the Azure Firewall Manager page, select **View Azure firewall policies**.
+3. On the Azure Firewall Manager page, under **Security**, select **Azure firewall policies**.
- ![Firewall policy](media/tutorial-hybrid-portal/firewall-manager-policy.png)
+ :::image type="content" source="media/secure-hybrid-network/firewall-manager-policy.png" alt-text="Screenshot showing Firewall Manager main page." lightbox="media/secure-hybrid-network/firewall-manager-policy.png":::
1. Select **Create Azure Firewall Policy**. 1. Select your subscription, and for Resource group, select **Create new** and create a resource group named **FW-Hybrid-Test**.
If you don't have an Azure subscription, create a [free account](https://azure.m
1. For **Resource group**, select **FW-Hybrid-Test**. 1. For **Name**, type **VNet-hub**. 1. For **Region**, select **East US**.
-1. Select **Next : IP Addresses**.
+1. Select **Next**.
+1. On the **Security** page, select **Next**.
1. For **IPv4 address space**, type **10.5.0.0/16**.
-1. Under **Subnet name**, select **default**.
-1. Change the **Subnet name** to **AzureFirewallSubnet**. The firewall is in this subnet, and the subnet name **must** be AzureFirewallSubnet.
-1. For **Subnet address range**, type **10.5.0.0/26**.
+1. Under **Subnets**, select **default**.
+1. For Subnet template, select **Azure Firewall**.
+1. For **Starting address**, type **10.5.0.0/26**.
1. Accept the other default settings, and then select **Save**. 1. Select **Review + create**. 1. Select **Create**.
+Add another subnet named **GatewaySubnet** with an address space of 10.5.1.0/27. This subnet is used for the VPN gateway.
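If you're scripting the hub network, you can add this subnet with the Azure CLI as well. The following is a sketch using the names from this tutorial:

```bash
# Add the GatewaySubnet to the hub virtual network (the name must be exactly GatewaySubnet)
az network vnet subnet create \
  --resource-group FW-Hybrid-Test \
  --vnet-name VNet-hub \
  --name GatewaySubnet \
  --address-prefixes 10.5.1.0/27
```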
+ ## Create the spoke virtual network 1. From the Azure portal home page, select **Create a resource**.
If you don't have an Azure subscription, create a [free account](https://azure.m
1. For **Resource group**, select **FW-Hybrid-Test**. 1. For **Name**, type **VNet-Spoke**. 1. For **Region**, select **East US**.
+1. Select **Next**.
+1. On the **Security** page, select **Next**.
1. Select **Next : IP Addresses**.- 1. For **IPv4 address space**, type **10.6.0.0/16**.
-1. Under **Subnet name**, select **default**.
-1. Change the **Subnet name** to **SN-Workload**.
-1. For **Subnet address range**, type **10.6.0.0/24**.
+1. Under **Subnets**, select **default**.
+1. Change the **Name** to **SN-Workload**.
+1. For **Starting address**, type **10.6.0.0/24**.
1. Accept the other default settings, and then select **Save**. 1. Select **Review + create**. 1. Select **Create**.
If you don't have an Azure subscription, create a [free account](https://azure.m
1. Select **Create**. 1. For **Subscription**, select your subscription. 1. For **Resource group**, select **FW-Hybrid-Test**.
-1. For **Name**, type **VNet-OnPrem**.
+1. For **Virtual network name**, type **VNet-OnPrem**.
1. For **Region**, select **East US**.
-1. Select **Next : IP Addresses**.
+1. Select **Next**.
+1. On the **Security** page, select **Next**.
1. For **IPv4 address space**, type **192.168.0.0/16**.
-1. Under **Subnet name**, select **default**.
-1. Change the **Subnet name** to **SN-Corp**.
-1. For **Subnet address range**, type **192.168.1.0/24**.
+1. Under **Subnets**, select **default**.
+1. Change the **Name** to **SN-Corp**.
+1. For **Starting address**, type **192.168.1.0/24**.
1. Accept the other default settings, and then select **Save**.
-2. Select **Add Subnet**.
-3. For **Subnet name**, type **GatewaySubnet**.
-4. For **Subnet address range** type **192.168.2.0/24**.
-5. Select **Add**.
+2. Select **Add a subnet**.
+1. For **Subnet template**, select **Virtual Network Gateway**.
+1. For **Starting address** type **192.168.2.0/27**.
+1. Select **Add**.
1. Select **Review + create**. 1. Select **Create**.
When security policies are associated with a hub, it's referred to as a *hub vir
Convert the **VNet-Hub** virtual network into a *hub virtual network* and secure it with Azure Firewall. 1. In the Azure portal search bar, type **Firewall Manager** and press **Enter**.
-3. On the Azure Firewall Manager page, under **Add security to virtual networks**, select **View hub virtual networks**.
+1. In the right pane, select **Overview**.
+1. On the Azure Firewall Manager page, under **Add security to virtual networks**, select **View hub virtual networks**.
1. Under **Virtual Networks**, select the check box for **VNet-hub**. 1. Select **Manage Security**, and then select **Deploy a Firewall with Firewall Policy**.
-1. On the **Convert virtual networks** page, under **Firewall Policy**, select the check box for **Pol-Net01**.
+1. On the **Convert virtual networks** page, under **Azure Firewall tier**, select **Premium**. Under **Firewall Policy**, select the check box for **Pol-Net01**.
1. Select **Next : Review + confirm** 1. Review the details and then select **Confirm**. This takes a few minutes to deploy. 7. After deployment completes, go to the **FW-Hybrid-Test** resource group, and select the firewall.
-9. Note the **Firewall private IP** address on the **Overview** page. You'll use it later when you create the default route.
+9. Note the **Firewall private IP** address on the **Overview** page. You use it later when you create the default route.
## Create and connect the VPN gateways
Now create the VPN gateway for the hub virtual network. Network-to-network confi
5. For **Region**, select **(US) East US**. 6. For **Gateway type**, select **VPN**. 7. For **VPN type**, select **Route-based**.
-8. For **SKU**, select **Basic**.
-9. For **Virtual network**, select **VNet-hub**.
-10. For **Public IP address**, select **Create new**, and type **VNet-hub-GW-pip** for the name.
-11. Accept the remaining defaults and then select **Review + create**.
-12. Review the configuration, then select **Create**.
+8. For **SKU**, select **VpnGw2**.
+1. For **Generation**, select **Generation2**.
+1. For **Virtual network**, select **VNet-hub**.
+1. For **Public IP address**, select **Create new**, and type **VNet-hub-GW-pip** for the name.
+1. For **Enable active-active mode**, select **Disabled**.
+1. Accept the remaining defaults and then select **Review + create**.
+1. Review the configuration, then select **Create**.
### Create a VPN gateway for the on-premises virtual network
Now create the VPN gateway for the on-premises virtual network. Network-to-netwo
5. For **Region**, select **(US) East US**. 6. For **Gateway type**, select **VPN**. 7. For **VPN type**, select **Route-based**.
-8. For **SKU**, select **Basic**.
-9. For **Virtual network**, select **VNet-Onprem**.
-10. For **Public IP address**, select **Create new**, and type **VNet-Onprem-GW-pip** for the name.
-11. Accept the remaining defaults and then select **Review + create**.
-12. Review the configuration, then select **Create**.
+8. For **SKU**, select **VpnGw2**.
+1. For **Generation**, select **Generation2**.
+1. For **Virtual network**, select **VNet-Onprem**.
+1. For **Public IP address**, select **Create new**, and type **VNet-Onprem-GW-pip** for the name.
+1. For **Enable active-active mode**, select **Disabled**.
+1. Accept the remaining defaults and then select **Review + create**.
+1. Review the configuration, then select **Create**.
### Create the VPN connections Now you can create the VPN connections between the hub and on-premises gateways.
-In this step, you create the connection from the hub virtual network to the on-premises virtual network. You'll see a shared key referenced in the examples. You can use your own values for the shared key. The important thing is that the shared key must match for both connections. It takes some time to create the connection.
+In this step, you create the connection from the hub virtual network to the on-premises virtual network. A shared key is referenced in the examples. You can use your own values for the shared key. The important thing is that the shared key must match for both connections. It takes some time to create the connection.
1. Open the **FW-Hybrid-Test** resource group and select the **GW-hub** gateway. 2. Select **Connections** in the left column. 3. Select **Add**. 4. For the connection name, type **Hub-to-Onprem**. 5. Select **VNet-to-VNet** for **Connection type**.
-6. For the **Second virtual network gateway**, select **GW-Onprem**.
-7. For **Shared key (PSK)**, type **AzureA1b2C3**.
-8. Select **OK**.
+1. Select **Next : Settings**.
+1. For the **First virtual network gateway**, select **GW-hub**.
+1. For the **Second virtual network gateway**, select **GW-Onprem**.
+1. For **Shared key (PSK)**, type **AzureA1b2C3**.
+1. Select **Review + create**.
+1. Select **Create**.
Create the on-premises to hub virtual network connection. This step is similar to the previous one, except you create the connection from VNet-Onprem to VNet-hub. Make sure the shared keys match. The connection will be established after a few minutes.
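As a rough CLI alternative to these portal steps, both connections can be created with `az network vpn-connection create`. This is a sketch; the reverse connection name `Onprem-to-Hub` is an assumption, and the shared key must match on both sides.

```bash
# Hub-to-on-premises connection
az network vpn-connection create \
  --resource-group FW-Hybrid-Test \
  --name Hub-to-Onprem \
  --vnet-gateway1 GW-hub \
  --vnet-gateway2 GW-Onprem \
  --shared-key AzureA1b2C3

# On-premises-to-hub connection (same shared key)
az network vpn-connection create \
  --resource-group FW-Hybrid-Test \
  --name Onprem-to-Hub \
  --vnet-gateway1 GW-Onprem \
  --vnet-gateway2 GW-hub \
  --shared-key AzureA1b2C3

# Check the status; after a few minutes it should report Connected
az network vpn-connection show \
  --resource-group FW-Hybrid-Test \
  --name Hub-to-Onprem \
  --query connectionStatus
```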
Create the on-premises to hub virtual network connection. This step is similar t
After about five minutes or so, the status of both connections should be **Connected**.
-![Gateway connections](media/secure-hybrid-network/gateway-connections.png)
## Peer the hub and spoke virtual networks
Now peer the hub and spoke virtual networks.
|Setting name |Value | ||| |Peering link name| HubtoSpoke|
- |Traffic to remote virtual network| Allow (default) |
- |Traffic forwarded from remote virtual network | Allow (default) |
- |Virtual network gateway or route server | Use this virtual network's gateway |
+ |Allow traffic to remote virtual network| selected |
+ |Allow traffic forwarded from the remote virtual network (allow gateway transit) | selected |
+ |Use remote Virtual network gateway or route server | not selected |
5. Under **Remote virtual network**:
Now peer the hub and spoke virtual networks.
|Virtual network deployment model| Resource Manager| |Subscription|\<your subscription\>| |Virtual network| VNet-Spoke
- |Traffic to remote virtual network | Allow (default) |
- |Traffic forwarded from remote virtual network | Allow (default) |
- |Virtual network gateway | Use the remote virtual network's gateway |
+ |Allow traffic to current virtual network | selected |
+ |Allow traffic forwarded from current virtual network (allow gateway transit) | selected |
+ |Use current virtual network gateway or route server | selected |
5. Select **Add**.
- :::image type="content" source="media/secure-hybrid-network/firewall-peering.png" alt-text="Vnet peering":::
+ :::image type="content" source="media/secure-hybrid-network/firewall-peering.png" alt-text="Screenshot showing Vnet peering.":::
## Create the routes
Next, create a couple routes:
1. Select **Routes** in the left column. 1. Select **Add**. 1. For the route name, type **ToSpoke**.
-1. For the address prefix, type **10.6.0.0/16**.
+1. For **Destination type**, select **IP addresses**.
+1. For **Destination IP addresses/CIDR ranges**, type **10.6.0.0/16**.
1. For next hop type, select **Virtual appliance**. 1. For next hop address, type the firewall's private IP address that you noted earlier.
-1. Select **OK**.
+1. Select **Add**.
Now associate the route to the subnet.
Now create the default route from the spoke subnet.
1. Select **Routes** in the left column. 1. Select **Add**. 1. For the route name, type **ToHub**.
-1. For the address prefix, type **0.0.0.0/0**.
+1. For **Destination type**, select **IP addresses**
+1. For **Destination IP addresses/CIDR ranges**, type **0.0.0.0/0**.
1. For next hop type, select **Virtual appliance**. 1. For next hop address, type the firewall's private IP address that you noted earlier.
-1. Select **OK**.
+1. Select **Add**.
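If you'd rather script both routes created in the preceding steps, a rough Azure CLI equivalent is shown below. The route table names are placeholders (use the names you created earlier), and `<firewall-private-ip>` stands in for the firewall private IP address you noted.

```bash
# Route hub gateway traffic destined for the spoke through the firewall
az network route-table route create \
  --resource-group FW-Hybrid-Test \
  --route-table-name <hub-gateway-route-table> \
  --name ToSpoke \
  --address-prefix 10.6.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <firewall-private-ip>

# Default route from the spoke subnet back through the firewall
az network route-table route create \
  --resource-group FW-Hybrid-Test \
  --route-table-name <spoke-route-table> \
  --name ToHub \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <firewall-private-ip>
```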
Now associate the route to the subnet.
Now create the spoke workload and on-premises virtual machines, and place them i
Create a virtual machine in the spoke virtual network, running IIS, with no public IP address. 1. From the Azure portal home page, select **Create a resource**.
-2. Under **Popular**, select **Windows Server 2016 Datacenter**.
+2. Under **Popular Marketplace products**, select **Windows Server 2019 Datacenter**.
3. Enter these values for the virtual machine: - **Resource group** - Select **FW-Hybrid-Test** - **Virtual machine name**: *VM-Spoke-01*
Create a virtual machine in the spoke virtual network, running IIS, with no publ
- **User name**: type a user name - **Password**: type a password
-4. Select **Next:Disks**.
-5. Accept the defaults and select **Next: Networking**.
-6. Select **VNet-Spoke** for the virtual network and the subnet is **SN-Workload**.
-8. For **Public inbound ports**, select **Allow selected ports**, and then select **HTTP (80)**, and **RDP (3389)**
+4. For **Public inbound ports**, select **Allow selected ports**, and then select **HTTP (80)** and **RDP (3389)**.
+1. Select **Next:Disks**.
+1. Accept the defaults and select **Next: Networking**.
+1. Select **VNet-Spoke** for the virtual network and the subnet is **SN-Workload**.
1. Select **Next:Management**.
+1. Select **Next : Monitoring**.
1. For **Boot diagnostics**, Select **Disable**. 1. Select **Review + Create**, review the settings on the summary page, and then select **Create**.
Create a virtual machine in the spoke virtual network, running IIS, with no publ
This is a virtual machine that you use to connect using Remote Desktop to the public IP address. From there, you then connect to the on-premises server through the firewall. 1. From the Azure portal home page, select **Create a resource**.
-2. Under **Popular**, select **Windows Server 2016 Datacenter**.
+2. Under **Popular**, select **Windows Server 2019 Datacenter**.
3. Enter these values for the virtual machine: - **Resource group** - Select existing, and then select **FW-Hybrid-Test** - **Virtual machine name** - *VM-Onprem*
This is a virtual machine that you use to connect using Remote Desktop to the pu
- **User name**: type a user name - **Password**: type your password
+7. For **Public inbound ports**, select **Allow selected ports**, and then select **RDP (3389)**
4. Select **Next:Disks**. 5. Accept the defaults and select **Next:Networking**. 6. Select **VNet-Onprem** for virtual network and verify the subnet is **SN-Corp**.
-7. For **Public inbound ports**, select **Allow selected ports**, and then select **RDP (3389)**
+ 8. Select **Next:Management**.
-9. For **Boot diagnostics**, select **Disable**.
-10. Select **Review + Create**, review the settings on the summary page, and then select **Create**.
+1. Select **Next : Monitoring**.
+1. For **Boot diagnostics**, select **Disable**.
+1. Select **Review + Create**, review the settings on the summary page, and then select **Create**.
## Test the firewall
This is a virtual machine that you use to connect using Remote Desktop to the pu
3. Open a web browser on **VM-Onprem**, and browse to http://\<VM-spoke-01 private IP\>. You should see the **VM-spoke-01** web page:
- ![VM-Spoke-01 web page](media/secure-hybrid-network/vm-spoke-01-web.png)
+ :::image type="content" source="media/secure-hybrid-network/vm-spoke-01-web.png" alt-text="Screenshot showing vm-spoke-01 web page.":::
4. From the **VM-Onprem** virtual machine, open a remote desktop to **VM-spoke-01** at the private IP address.
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
To compare Azure Firewall features for all Firewall SKUs, see [Choose the right
The TLS (Transport Layer Security) protocol primarily provides cryptography for privacy, integrity, and authenticity using certificates between two or more communicating applications. It runs in the application layer and is widely used to encrypt the HTTP protocol.
-Encrypted traffic has a possible security risk and can hide illegal user activity and malicious traffic. Azure Firewall without TLS inspection (as shown in the following diagram) has no visibility into the data that flows in the encrypted TLS tunnel, and so can't provide a full protection coverage.
+Encrypted traffic has a possible security risk and can hide illegal user activity and malicious traffic. Azure Firewall without TLS inspection (as shown in the following diagram) has no visibility into the data that flows in the encrypted TLS tunnel, so it can't provide full protection coverage.
-The second diagram shows how Azure Firewall Premium terminates and inspects TLS connections to detect, alert, and mitigate malicious activity in HTTPS. The firewall actually creates two dedicated TLS connections: one with the Web Server (contoso.com) and another connection with the client. Using the customer provided CA certificate, it generates an on-the-fly certificate, which replaces the Web Server certificate and shares it with the client to establish the TLS connection between the firewall and the client.
+The second diagram shows how Azure Firewall Premium terminates and inspects TLS connections to detect, alert, and mitigate malicious activity in HTTPS. The firewall creates two dedicated TLS connections: one with the Web Server (contoso.com) and another connection with the client. Using the customer provided CA certificate, it generates an on-the-fly certificate, which replaces the Web Server certificate and shares it with the client to establish the TLS connection between the firewall and the client.
Azure Firewall without TLS inspection: :::image type="content" source="media/premium-features/end-to-end-transport-layer-security.png" alt-text="End-to-end TLS for Azure Firewall Standard":::
To learn more about TLS inspection, see [Building a POC for TLS inspection in Az
A network intrusion detection and prevention system (IDPS) allows you to monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.
-Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 3-7), they're fully managed, and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** preview feature. For more information, see [IDPS Private IP ranges](#idps-private-ip-ranges).
+Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network-level traffic (Layers 3-7). They're fully managed and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** preview feature. For more information, see [IDPS Private IP ranges](#idps-private-ip-ranges).
The Azure Firewall signatures/rulesets include: - An emphasis on fingerprinting actual malware, Command and Control, exploit kits, and in the wild malicious activity missed by traditional prevention methods.
The IDPS Bypass List is a configuration that allows you to not filter traffic to
### IDPS Private IP ranges
-In Azure Firewall Premium IDPS, private IP address ranges are used to identify if traffic is inbound, outbound, or internal (East-West). Each signature is applied on specific traffic direction, as indicated in the signature rules table. By default, only ranges defined by IANA RFC 1918 are considered private IP addresses. So traffic sent from a private IP address range to a private IP address range is considered internal. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.
+In Azure Firewall Premium IDPS, private IP address ranges are used to identify if traffic is inbound, outbound, or internal (East-West). Each signature is applied on specific traffic direction, as indicated in the signature rules table. By default, only ranges defined by IANA RFC 1918 are considered private IP addresses. So, traffic sent from a private IP address range to a private IP address range is considered internal. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.
:::image type="content" source="media/premium-features/idps-private-ip.png" alt-text="Screenshot showing IDPS private IP address ranges.":::
IDPS signature rules allow you to:
- Customize one or more signatures and change their mode to *Disabled*, *Alert* or *Alert and Deny*.
- For example, if you receive a false positive where a legitimate request is blocked by Azure Firewall due to a faulty signature, you can use the signature ID from the network rules logs, and set its IDPS mode to off. This causes the "faulty" signature to be ignored and resolves the false positive issue.
+ For example, if you receive a false positive where a legitimate request is blocked by Azure Firewall due to a faulty signature, you can use the signature ID from the network rules logs and set its IDPS mode to off. This causes the "faulty" signature to be ignored and resolves the false positive issue.
- You can apply the same fine-tuning procedure for signatures that are creating too many low-priority alerts, and therefore interfering with visibility for high-priority alerts. - Get a holistic view of the entire 55,000 signatures - Smart search
- Allows you to search through the entire signatures database by any type of attribute. For example, you can search for specific CVE-ID to discovered what signatures are taking care of this CVE by typing the ID in the search bar.
+ This action allows you to search through the entire signatures database by any type of attribute. For example, you can search for a specific CVE-ID to discover which signatures address that CVE by typing the ID in the search bar.
IDPS signature rules have the following properties:
IDPS signature rules have the following properties:
:::image type="content" source="media/idps-signature-categories/firewall-idps-signature.png" alt-text="Screenshot showing the IDPS signature rule columns." lightbox="media/idps-signature-categories/firewall-idps-signature.png":::
-For more informaton about IDPS, see [Taking Azure Firewall IDPS on a Test Drive](https://techcommunity.microsoft.com/t5/azure-network-security-blog/taking-azure-firewall-idps-on-a-test-drive/ba-p/3872706).
+For more information about IDPS, see [Taking Azure Firewall IDPS on a Test Drive](https://techcommunity.microsoft.com/t5/azure-network-security-blog/taking-azure-firewall-idps-on-a-test-drive/ba-p/3872706).
## URL filtering
URL Filtering can be applied both on HTTP and HTTPS traffic. When HTTPS traffic
## Web categories
-Web categories lets administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. Web categories are also included in Azure Firewall Standard, but it's more fine-tuned in Azure Firewall Premium. As opposed to the Web categories capability in the Standard SKU that matches the category based on an FQDN, the Premium SKU matches the category according to the entire URL for both HTTP and HTTPS traffic.
+Web categories let administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. Web categories are also included in Azure Firewall Standard, but the feature is more fine-tuned in Azure Firewall Premium. As opposed to the Web categories capability in the Standard SKU that matches the category based on an FQDN, the Premium SKU matches the category according to the entire URL for both HTTP and HTTPS traffic.
Azure Firewall Premium web categories are only available in firewall policies. Ensure that your policy SKU matches the SKU of your firewall instance. For example, if you have a Firewall Premium instance, you must use a Firewall Premium policy.
You can view traffic that has been filtered by **Web categories** in the Applica
### Category exceptions
-You can create exceptions to your web category rules. Create a separate allow or deny rule collection with a higher priority within the rule collection group. For example, you can configure a rule collection that allows `www.linkedin.com` with priority 100, with a rule collection that denies **Social networking** with priority 200. This creates the exception for the predefined **Social networking** web category.
+You can create exceptions to your web category rules. Create a separate allow or deny rule collection with a higher priority within the rule collection group. For example, you can configure a rule collection that allows `www.linkedin.com` with priority 100, with a rule collection that denies **Social networking** with priority 200. This creates the exception for the predefined **Social networking** web category.
### Web category search
You can identify what category a given FQDN or URL is by using the **Web Categor
:::image type="content" source="media/premium-features/firewall-category-search.png" alt-text="Firewall category search dialog"::: > [!IMPORTANT]
-> To use **Web Category Check** feature, user has to have an access of Microsoft.Network/azureWebCategories/getwebcategory/action for **subscription** level, not resource group level.
+> To use the **Web Category Check** feature, the user must have access to Microsoft.Network/azureWebCategories/getwebcategory/action at the **subscription** level, not the resource group level.
### Category change
For the supported regions for Azure Firewall, see [Azure products available by r
- [Learn about Azure Firewall Premium certificates](premium-certificates.md) - [Deploy and configure Azure Firewall Premium](premium-deploy.md) - [Migrate to Azure Firewall Premium](premium-migrate.md)-- [Learn more about Azure network security](../networking/security/index.yml)
+- [Learn more about Azure network security](../networking/security/index.yml)
hdinsight Ssh Domain Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/ssh-domain-accounts.md
Title: Manage SSH access for domain accounts in Azure HDInsight
description: Steps to manage SSH access for Azure AD accounts in HDInsight. Previously updated : 06/30/2022 Last updated : 09/19/2023 # Manage SSH access for domain accounts in Azure HDInsight
-On secure clusters, by default, all domain users in [Azure AD DS](../../active-directory-domain-services/overview.md) are allowed to [SSH](../hdinsight-hadoop-linux-use-ssh-unix.md) into the head and edge nodes. These users are not part of the sudoers group and do not get root access. The SSH user created during cluster creation will have root access.
+On secure clusters, by default, all domain users in [Azure AD DS](../../active-directory-domain-services/overview.md) are allowed to [SSH](../hdinsight-hadoop-linux-use-ssh-unix.md) into the head and edge nodes. These users are not part of the sudoers group and do not get root access. The SSH user created during cluster creation has root access.
## Manage access To modify SSH access to specific users or groups, update `/etc/ssh/sshd_config` on each of the nodes.
-1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your cluster. Edit the command below by replacing CLUSTERNAME with the name of your cluster, and then enter the command:
+1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your cluster. Edit the following command by replacing CLUSTERNAME with the name of your cluster, and then enter the command:
```cmd ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net
To modify SSH access to specific users or groups, update `/etc/ssh/sshd_config`
sudo nano /etc/ssh/sshd_config ```
-1. Modify the `sshd_config` file as desired. If you restrict users to certain groups, then the local accounts cannot SSH into that node. The following is only an example of syntax:
+1. Modify the `sshd_config` file as desired. If you restrict users to certain groups, then the local accounts cannot SSH into that node. The following entry is only an example of the syntax:
```bash AllowUsers useralias1 useralias2
To modify SSH access to specific users or groups, update `/etc/ssh/sshd_config`
## SSH authentication log
-SSH authentication log is written into `/var/log/auth.log`. If you see any login failures through SSH for local or domain accounts, you will need to go through the log to debug the errors. Often the issue might be related to specific user accounts and it's usually a good practice to try other user accounts or SSH using the default SSH user (local account) and then attempt a kinit.
+The SSH authentication log is written to `/var/log/auth.log`. If you see any login failures through SSH for local or domain accounts, you need to go through the log to debug the errors. Often the issue might be related to specific user accounts, and it's usually a good practice to try other user accounts or SSH using the default SSH user (local account) and then attempt a kinit.
## SSH debug log
-To enable verbose logging, you will need to restart `sshd` with the `-d` option. Like `/usr/sbin/sshd -d` You can also run `sshd` at a custom port (like 2222) so that you don't have to stop the main SSH daemon. You can also use `-v` option with the SSH client to get more logs (client side view of the failures).
+To enable verbose logging, you need to restart `sshd` with the `-d` option, for example `/usr/sbin/sshd -d`. You can also run `sshd` on a custom port (like 2222) so that you don't have to stop the main SSH daemon. You can also use the `-v` option with the SSH client to get more logs (the client-side view of the failures).
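For example, the following sketch runs a second `sshd` in debug mode on port 2222 (assumed to be free) so the main daemon keeps serving connections, and uses `ssh -v` for the client-side view:

```bash
# Start a second sshd in debug mode on an alternate port; the main SSH daemon keeps running
sudo /usr/sbin/sshd -d -p 2222

# From the client, connect to that port with verbose output
ssh -v -p 2222 sshuser@CLUSTERNAME-ssh.azurehdinsight.net
```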
## Next steps
hdinsight Apache Hadoop Use Mapreduce Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-mapreduce-ssh.md
description: Learn how to use SSH to run MapReduce jobs using Apache Hadoop on H
Previously updated : 08/30/2022 Last updated : 09/27/2023 # Use MapReduce with Apache Hadoop on HDInsight with SSH
hdinsight Apache Hive Warehouse Connector Zeppelin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-zeppelin.md
Previously updated : 07/18/2022 Last updated : 09/27/2023 # Integrate Apache Zeppelin with Hive Warehouse Connector in Azure HDInsight
-HDInsight Spark clusters include Apache Zeppelin notebooks with different interpreters. In this article, we'll focus only on the Livy interpreter to access Hive tables from Spark using Hive Warehouse Connector.
+HDInsight Spark clusters include Apache Zeppelin notebooks with different interpreters. In this article, we focus only on the Livy interpreter to access Hive tables from Spark using Hive Warehouse Connector.
> [!NOTE] > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
Complete the [Hive Warehouse Connector setup](apache-hive-warehouse-connector.md
## Getting started
-1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your Apache Spark cluster. Edit the command below by replacing CLUSTERNAME with the name of your cluster, and then enter the command:
+1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your Apache Spark cluster. Edit the following command by replacing CLUSTERNAME with the name of your cluster, and then enter the command:
```cmd ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net
Following configurations are required to access hive tables from Zeppelin with t
| livy.spark.security.credentials.hiveserver2.enabled | true | | livy.spark.sql.hive.llap | true | | livy.spark.yarn.security.credentials.hiveserver2.enabled | true |
- | livy.superusers | livy,zeppelin |
+ | livy.superusers | livy, zeppelin |
| livy.spark.jars | `file:///usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-VERSION.jar`.<br>Replace VERSION with value you obtained from [Getting started](#getting-started), earlier. | | livy.spark.submit.pyFiles | `file:///usr/hdp/current/hive_warehouse_connector/pyspark_hwc-VERSION.zip`.<br>Replace VERSION with value you obtained from [Getting started](#getting-started), earlier. | | livy.spark.sql.hive.hiveserver2.jdbc.url | Set it to the HiveServer2 Interactive JDBC URL of the Interactive Query cluster. |
Following configurations are required to access hive tables from Zeppelin with t
||| | livy.spark.sql.hive.hiveserver2.jdbc.url.principal | `hive/_HOST@<AAD-Domain>` |
- * Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your Interactive Query cluster. Look for `default_realm` parameter in the `/etc/krb5.conf` file. Replace `<AAD-DOMAIN>` with this value as an uppercase string, otherwise the credential won't be found.
+ * Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your Interactive Query cluster. Look for the `default_realm` parameter in the `/etc/krb5.conf` file. Replace `<AAD-DOMAIN>` with this value as an uppercase string; otherwise, the credential cannot be found.
:::image type="content" source="./media/apache-hive-warehouse-connector/aad-domain.png" alt-text="hive warehouse connector AAD Domain" border="true":::
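For example, you can read the realm directly on the cluster; the value in the comment below is only an illustration:

```bash
# Print the Kerberos realm to use (uppercase) as <AAD-DOMAIN>
grep default_realm /etc/krb5.conf
# Example output: default_realm = CONTOSO.ONMICROSOFT.COM
```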
hdinsight Apache Kafka Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-performance-tuning.md
Title: Performance optimization for Apache Kafka HDInsight clusters
description: Provides an overview of techniques for optimizing Apache Kafka workloads on Azure HDInsight. Previously updated : 08/21/2022 Last updated : 09/15/2023 # Performance optimization for Apache Kafka HDInsight clusters
-This article gives some suggestions for optimizing the performance of your Apache Kafka workloads in HDInsight. The focus is on adjusting producer, broker and consumer configuration. Sometimes, you also need to adjust OS settings to tune the performance with heavy workload. There are different ways of measuring performance, and the optimizations that you apply will depend on your business needs.
+This article gives some suggestions for optimizing the performance of your Apache Kafka workloads in HDInsight. The focus is on adjusting producer, broker, and consumer configuration. Sometimes, you also need to adjust OS settings to tune the performance with heavy workloads. There are different ways of measuring performance, and the optimizations that you apply depend on your business needs.
## Architecture overview
-Kafka topics are used to organize records. Records are produced by producers, and consumed by consumers. Producers send records to Kafka brokers, which then store the data. Each worker node in your HDInsight cluster is a Kafka broker.
+Kafka topics are used to organize records. Producers produce records, and consumers consume them. Producers send records to Kafka brokers, which then store the data. Each worker node in your HDInsight cluster is a Kafka broker.
Topics partition records across brokers. When consuming records, you can use up to one consumer per partition to achieve parallel processing of the data.
-Replication is used to duplicate partitions across nodes. This protects against node (broker) outages. A single partition among the group of replicas is designated as the partition leader. Producer traffic is routed to the leader of each node, using the state managed by ZooKeeper.
+Replication is used to duplicate partitions across nodes. This duplication protects against node (broker) outages. A single partition among the group of replicas is designated as the partition leader. Producer traffic is routed to the leader of each node, using the state managed by ZooKeeper.
## Identify your scenario
-Apache Kafka performance has two main aspects ΓÇô throughput and latency. Throughput is the maximum rate at which data can be processed. Higher throughput is usually better. Latency is the time it takes for data to be stored or retrieved. Lower latency is usually better. Finding the right balance between throughput, latency and the cost of the application's infrastructure can be challenging. Your performance requirements will likely match one of the following three common situations, based on whether you require high throughput, low latency, or both:
+Apache Kafka performance has two main aspects – throughput and latency. Throughput is the maximum rate at which data can be processed. Higher throughput is better. Latency is the time it takes for data to be stored or retrieved. Lower latency is better. Finding the right balance between throughput, latency, and the cost of the application's infrastructure can be challenging. Your performance requirements should match one of the following three common situations, based on whether you require high throughput, low latency, or both:
* High throughput, low latency. This scenario requires both high throughput and low latency (~100 milliseconds). An example of this type of application is service availability monitoring. * High throughput, high latency. This scenario requires high throughput (~1.5 GBps) but can tolerate higher latency (< 250 ms). An example of this type of application is telemetry data ingestion for near real-time processes like security and intrusion detection applications.
Apache Kafka performance has two main aspects ΓÇô throughput and latency. Throug
## Producer configurations
-The following sections will highlight some of the most important generic configuration properties to optimize performance of your Kafka producers. For a detailed explanation of all configuration properties, see [Apache Kafka documentation on producer configurations](https://kafka.apache.org/documentation/#producerconfigs).
+The following sections highlight some of the most important generic configuration properties to optimize performance of your Kafka producers. For a detailed explanation of all configuration properties, see [Apache Kafka documentation on producer configurations](https://kafka.apache.org/documentation/#producerconfigs).
### Batch size
A Kafka producer can be configured to compress messages before sending them to b
Among the two commonly used compression codecs, `gzip` and `snappy`, `gzip` has a higher compression ratio, which results in lower disk usage at the cost of higher CPU load. The `snappy` codec provides less compression with less CPU overhead. You can decide which codec to use based on broker disk or producer CPU limitations. `gzip` can compress data at a rate five times higher than `snappy`.
-Using data compression will increase the number of records that can be stored on a disk. It may also increase CPU overhead in cases where there's a mismatch between the compression formats being used by the producer and the broker. as the data must be compressed before sending and then decompressed before processing.
+Data compression increases the number of records that can be stored on a disk. It may also increase CPU overhead in cases where there's a mismatch between the compression formats being used by the producer and the broker, because the data must be compressed before sending and then decompressed before processing.
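To make the batch size and compression settings above concrete, here is a minimal sketch using the kafka-python client. The broker address, topic name, and numeric values are illustrative placeholders, not recommendations; tune them against your own workload and measure the effect.

```python
from kafka import KafkaProducer

# Placeholder broker address and tuning values; adjust batch_size and
# compression_type against your own throughput, disk, and CPU constraints.
producer = KafkaProducer(
    bootstrap_servers="wn0-kafka.example.com:9092",
    batch_size=32 * 1024,      # bytes buffered per partition before a batch is sent
    compression_type="gzip",   # higher compression ratio; "snappy" trades ratio for lower CPU
)

producer.send("example-topic", b"example record")
producer.flush()
```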
## Broker settings
-The following sections will highlight some of the most important settings to optimize performance of your Kafka brokers. For a detailed explanation of all broker settings, see [Apache Kafka documentation on broker configurations](https://kafka.apache.org/documentation/#brokerconfigs).
+The following sections highlight some of the most important settings to optimize performance of your Kafka brokers. For a detailed explanation of all broker settings, see [Apache Kafka documentation on broker configurations](https://kafka.apache.org/documentation/#brokerconfigs).
### Number of disks
Each Kafka partition is a log file on the system, and producer threads can write
Increasing the partition density (the number of partitions per broker) adds an overhead related to metadata operations and per partition request/response between the partition leader and its followers. Even in the absence of data flowing through, partition replicas still fetch data from leaders, which results in extra processing for send and receive requests over the network.
-For Apache Kafka clusters 2.1 and 2.4 and above in HDInsight, we recommend you to have a maximum of 2000 partitions per broker, including replicas. Increasing the number of partitions per broker decreases throughput and may also cause topic unavailability. For more information on Kafka partition support, see [the official Apache Kafka blog post on the increase in the number of supported partitions in version 1.1.0](https://blogs.apache.org/kafka/entry/apache-kafka-supports-more-partitions). For details on modifying topics, see [Apache Kafka: modifying topics](https://kafka.apache.org/documentation/#basic_ops_modify_topic).
+For Apache Kafka clusters 2.1 and 2.4 and above in HDInsight, we recommend a maximum of 2000 partitions per broker, including replicas. Increasing the number of partitions per broker decreases throughput and may also cause topic unavailability. For more information on Kafka partition support, see [the official Apache Kafka blog post on the increase in the number of supported partitions in version 1.1.0](https://blogs.apache.org/kafka/entry/apache-kafka-supports-more-partitions). For details on modifying topics, see [Apache Kafka: modifying topics](https://kafka.apache.org/documentation/#basic_ops_modify_topic).
### Number of replicas
For more information on replication, see [Apache Kafka: replication](https://kaf
## Consumer configurations
-The following section will highlight some important generic configurations to optimize the performance of your Kafka consumers. For a detailed explanation of all configurations, see [Apache Kafka documentation on consumer configurations](https://kafka.apache.org/documentation/#consumerconfigs).
+The following section highlights some important generic configurations to optimize the performance of your Kafka consumers. For a detailed explanation of all configurations, see [Apache Kafka documentation on consumer configurations](https://kafka.apache.org/documentation/#consumerconfigs).
### Number of consumers
-It is a good practice to have the number of partitions equal to the number of consumers. If the number of consumers is less than the number of partitions then a few of the consumers will read from multiple partitions, increasing consumer latency.
+It is a good practice to have the number of partitions equal to the number of consumers. If the number of consumers is less than the number of partitions, then a few of the consumers read from multiple partitions, increasing consumer latency.
-If the number of consumers is greater than the number of partitions, then you will be wasting your consumer resources since those consumers will be idle.
+If the number of consumers is greater than the number of partitions, the surplus consumers sit idle, wasting consumer resources.
### Avoid frequent consumer rebalance
Consumer rebalance is triggered by a partition ownership change (that is, consumers scale out or scale down), a broker crash (because brokers act as the group coordinator for consumer groups), a consumer crash, or the addition of a new topic or new partitions. During rebalancing, consumers can't consume, which increases latency.
-Consumers are considered alive if it can send a heartbeat to a broker within `session.timeout.ms`. Otherwise, the consumer will be considered dead or failed. This will lead to a consumer rebalance. The lower the consumer `session.timeout.ms` the faster we will be able to detect those failures.
+A consumer is considered alive if it can send a heartbeat to a broker within `session.timeout.ms`. Otherwise, the consumer is considered dead or failed, which triggers a consumer rebalance. The lower the consumer `session.timeout.ms`, the faster those failures can be detected.
If the `session.timeout.ms` is too low, a consumer could experience repeated unnecessary rebalances, due to scenarios such as when a batch of messages takes longer to process or when a JVM GC pause takes too long. If you have a consumer that spends too much time processing messages, you can address this either by increasing the upper bound on the amount of time that a consumer can be idle before fetching more records with `max.poll.interval.ms` or by reducing the maximum size of batches returned with the configuration parameter `max.poll.records`.
### Batching
-Like producers, we can add batching for consumers. The amount of data consumers can get in each fetch request can be configured by changing the configuration `fetch.min.bytes`. This parameter defines the minimum bytes expected from a fetch response of a consumer. Increasing this value will reduce the number of fetch requests made to the broker, therefore reducing extra overhead. By default, this value is 1. Similarly, there is another configuration `fetch.max.wait.ms`. If a fetch request doesn't have enough messages as per the size of `fetch.min.bytes`, it will wait until the expiration of the wait time based on this config `fetch.max.wait.ms`.
+As with producers, you can add batching for consumers. The amount of data consumers get in each fetch request can be configured by changing the `fetch.min.bytes` configuration. This parameter defines the minimum bytes expected from a fetch response of a consumer. Increasing this value reduces the number of fetch requests made to the broker, reducing extra overhead. By default, this value is 1. Similarly, there's another configuration, `fetch.max.wait.ms`. If a fetch request doesn't have enough messages to satisfy `fetch.min.bytes`, it waits up to `fetch.max.wait.ms` before returning.
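Pulling the rebalance and batching settings above into one place, a minimal kafka-python sketch might look like the following. The broker address, topic, group ID, and numeric values are placeholders; tune `session.timeout.ms`, the poll settings, and the fetch settings against your own rebalance and latency behavior.

```python
from kafka import KafkaConsumer

# Placeholder connection details and tuning values.
consumer = KafkaConsumer(
    "example-topic",
    bootstrap_servers="wn0-kafka.example.com:9092",
    group_id="example-consumer-group",
    session_timeout_ms=10000,      # missed heartbeats within this window mark the consumer dead
    max_poll_interval_ms=300000,   # upper bound on processing time between polls
    max_poll_records=500,          # cap on the number of records returned per poll
    fetch_min_bytes=1024,          # broker waits for at least this much data per fetch...
    fetch_max_wait_ms=500,         # ...or until this much time has passed
)

for record in consumer:
    print(record.topic, record.partition, record.offset)
```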
> [!NOTE] > In some scenarios, a consumer may seem slow when it fails to process a message. If you don't commit the offset after an exception, the consumer is stuck at that offset in an infinite loop and doesn't move forward, which increases the lag on the consumer side.
Like producers, we can add batching for consumers. The amount of data consumers
`vm.max_map_count` defines the maximum number of memory map areas (mmap) a process can have. By default, on an HDInsight Apache Kafka cluster Linux VM, the value is 65535.
-In Apache Kafka, each log segment requires a pair of index/timeindex files, and each of these files consumes 1 mmap. In other words, each log segment uses 2 mmap. Thus, if each partition hosts a single log segment, it requires minimum 2 mmap. The number of log segments per partition varies depending on the **segment size, load intensity, retention policy, rolling period** and, generally tends to be more than one. `Mmap value = 2*((partition size)/(segment size))*(partitions)`
+In Apache Kafka, each log segment requires a pair of index/timeindex files, and each of these files consumes one mmap. In other words, each log segment uses two mmaps. Thus, if each partition hosts a single log segment, it requires a minimum of two mmaps. The number of log segments per partition varies depending on the **segment size, load intensity, retention policy, and rolling period**, and generally tends to be more than one. `Mmap value = 2*((partition size)/(segment size))*(partitions)`
If the required mmap value exceeds `vm.max_map_count`, the broker raises a **"Map failed"** exception.
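As a quick illustration, the formula above can be evaluated with a few lines of arithmetic to check an estimated mmap count against the default `vm.max_map_count` of 65535. The partition size, segment size, and partition count below are made-up example values, not recommendations.

```python
# Made-up example values; substitute your own broker's figures.
partition_size_bytes = 100 * 1024**3    # ~100 GiB of data per partition
segment_size_bytes = 1024**3            # 1 GiB log segments
partitions_per_broker = 1000            # including replicas hosted on this broker

segments_per_partition = partition_size_bytes / segment_size_bytes
mmap_needed = 2 * segments_per_partition * partitions_per_broker

print(f"Estimated mmap entries: {int(mmap_needed)}")
print("Exceeds default vm.max_map_count (65535):", mmap_needed > 65535)
```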
hdinsight Apache Spark Troubleshoot Illegalargumentexception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-illegalargumentexception.md
Title: IllegalArgumentException error for Apache Spark - Azure HDInsight
description: IllegalArgumentException for Apache Spark activity in Azure HDInsight for Azure Data Factory Previously updated : 06/29/2022 Last updated : 09/19/2023 # Scenario: IllegalArgumentException for Apache Spark activity in Azure HDInsight
Wrong FS: wasbs://additional@xxx.blob.core.windows.net/spark-examples_2.11-2.1.0
## Cause
-A Spark job will fail if the application jar file is not located in the Spark cluster's default/primary storage.
+A Spark job fails if the application jar file is not located in the Spark cluster's default/primary storage.
This is a known issue with the Spark open-source framework tracked in this bug: [Spark job fails if fs.defaultFS and application jar are different url](https://issues.apache.org/jira/browse/SPARK-22587).
This issue has been resolved in Spark 2.3.0.
## Resolution
-Make sure the application jar is stored on the default/primary storage for the HDInsight cluster. In case of Azure Data Factory, make sure the ADF linked service is pointed to the HDInsight default container rather than a secondary container.
+Make sure the application jar is stored on the default/primary storage for the HDInsight cluster. In Azure Data Factory, make sure the ADF linked service is pointed to the HDInsight default container rather than a secondary container.
## Next steps
healthcare-apis Autoscale Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/autoscale-azure-api-fhir.md
Previously updated : 06/02/2022 Last updated : 9/27/2023 # Autoscale for Azure API for FHIR + Azure API for FHIR, as a managed service, allows customers to persist Fast Healthcare Interoperability Resources (FHIR&#174;) compliant healthcare data and exchange it securely through the service API. To accommodate different transaction workloads, customers can use manual scale or autoscale. Azure API for FHIR provides scaling capabilities at database and compute level.
To ensure the best possible outcome, we recommend customers to gradually increas
The data size is one of several factors used in calculating the total throughput RU/s required for manual scale and autoscale. You can find the data size using the Metrics menu option under **Monitoring**. Start a new chart and select **Cosmos DB Collection Size** in the Metric dropdown box and **Max** in the "Aggregation" box.
-[ ![Screenshot of metrics_new_chart](media/cosmosdb/metrics-new-chart.png) ](media/cosmosdb/metrics-new-chart.png#lightbox)
+[Screenshot of metrics_new_chart](media/cosmosdb/metrics-new-chart.png#lightbox)
You should be able to see the Max data collection size over the time period selected. Change the "Time Range" if necessary, for example from "Last 30 minutes" to "Last 48 Hours".
-[ ![Screenshot of cosmosdb_collection_size](media/cosmosdb/cosmosdb-collection-size.png) ](media/cosmosdb/cosmosdb-collection-size.png#lightbox)
+[Screenshot of cosmosdb_collection_size](media/cosmosdb/cosmosdb-collection-size.png#lightbox)
Use the formula to calculate required RU/s.
healthcare-apis Azure Active Directory Identity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-active-directory-identity-configuration.md
Previously updated : 06/02/2022 Last updated : 9/27/2023 # Azure Active Directory identity configuration for Azure API for FHIR + When you're working with healthcare data, it's important to ensure that the data is secure, and it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. [Azure API for FHIR](https://azure.microsoft.com/services/azure-api-for-fhir/) is secured using [Azure Active Directory](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. This article provides an overview of FHIR server authorization and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, we'll walk through Azure API for FHIR as the FHIR server and Azure Active Directory (Azure AD) as our identity provider in this article. ## Access control overview
healthcare-apis Azure Api Fhir Access Token Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-access-token-validation.md
Previously updated : 06/02/2022 Last updated : 09/27/2023 # Azure API for FHIR access token validation + How Azure API for FHIR validates the access token will depend on implementation and configuration. In this article, we'll walk through the validation steps, which can be helpful when troubleshooting access issues. ## Validate token has no issues with identity provider
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Quickstart: Use an ARM template to deploy Azure API for FHIR + In this quickstart, you'll learn how to use an Azure Resource Manager template (ARM template) to deploy Azure API for Fast Healthcare Interoperability Resources (FHIR®). You can deploy Azure API for FHIR through the Azure portal, PowerShell, or CLI. [!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
healthcare-apis Azure Api For Fhir Additional Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-for-fhir-additional-settings.md
Previously updated : 06/02/2022 Last updated : 09/27/2023 # Additional settings for Azure API for FHIR + In this how-to guide, we'll review the additional settings you may want to set in your Azure API for FHIR. There are additional pages that drill into even more details. ## Configure Database settings
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/carin-implementation-guide-blue-button-tutorial.md
Previously updated : 06/02/2022 Last updated : 09/27/2023 # CARIN Implementation Guide for Blue Button&#174; for Azure API for FHIR + In this tutorial, we'll walk through setting up Azure API for FHIR to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [CARIN Implementation Guide for Blue Button](https://build.fhir.org/ig/HL7/carin-bb/index.html) (C4BB IG). ## Touchstone capability statement
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/centers-for-medicare-tutorial-introduction.md
Previously updated : 06/02/2022 Last updated : 09/27/2023 # Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule introduction + In this series of tutorials, we'll cover a high-level summary of the Center for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule, and the technical requirements outlined in this rule. We'll walk through the various implementation guides referenced for this rule. We'll also provide details on how to configure the Azure API for FHIR to support these implementation guides.
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md
Previously updated : 06/02/2022 Last updated : 09/27/2023 # Configure Azure RBAC for FHIR + In this article, you'll learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred method for assigning data plane access when data plane users are managed in the Azure Active Directory tenant associated with your Azure subscription. If you're using an external Azure Active Directory tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md). ## Confirm Azure RBAC mode
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in Azure API for FHIR
description: This article describes how to configure cross-origin resource sharing in Azure API for FHIR. Previously updated : 06/03/2022 Last updated : 09/27/2023 # Configure cross-origin resource sharing in Azure API for FHIR + Azure API for FHIR supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request. CORS is often used in a single-page app that must call a RESTful API to a different domain.
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-database.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Configure database settings + Azure API for FHIR uses a database to store its data. Performance of the underlying database depends on the number of Request Units (RU) selected during service provisioning or in database settings after the service has been provisioned. Azure API for FHIR borrows the concept of [Request Units (RUs) in Azure Cosmos DB](../../cosmos-db/request-units.md) when setting the performance of the underlying database.
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-export-data.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Configure export settings in Azure API for FHIR and set up a storage account + Azure API for FHIR supports the $export command, which allows you to export data out of an Azure API for FHIR account to a storage account. There are three steps involved in configuring export in Azure API for FHIR:
healthcare-apis Configure Local Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-local-rbac.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 ms.devlang: azurecli # Configure local RBAC for FHIR + This article explains how to configure the Azure API for FHIR to use a secondary Azure Active Directory (Azure AD) tenant for data access. Use this mode only if it isn't possible for you to use the Azure AD tenant associated with your subscription. > [!NOTE]
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-private-link.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Configure private link + Private link enables you to access Azure API for FHIR over a private endpoint, which is a network interface that connects you privately and securely using a private IP address from your virtual network. With private link, you can access our services securely from your VNet as a first party service without having to go through a public Domain Name System (DNS). This article describes how to create, test, and manage your private endpoint for Azure API for FHIR. >[!Note]
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
Previously updated : 03/09/2023 Last updated : 09/27/2023 # Converting your data to FHIR for Azure API for FHIR + The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently the `$convert-data` custom endpoint supports `four` types of data conversion:
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/copy-to-synapse.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Copy data from Azure API for FHIR to Azure Synapse Analytics + In this article, you'll learn three ways to copy data from Azure API for FHIR to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics. * Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md) OSS tool
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/customer-managed-key.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 ms.devlang: azurecli
ms.devlang: azurecli
# Configure customer-managed keys at rest + When you create a new Azure API for FHIR account, your data is encrypted using Microsoft-managed keys by default. Now, you can add a second layer of encryption for the data using your own key that you choose and manage yourself. In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Azure Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Azure Cosmos DB. When you create an account, you'll have the option to specify an Azure Key Vault key URI. This key will be passed on to Azure Cosmos DB when the DB account is provisioned. When a Fast Healthcare Interoperability Resources (FHIR&#174;) request is made, Azure Cosmos DB fetches your key and uses it to encrypt/decrypt the data.
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-drug-formulary-tutorial.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Tutorial for Da Vinci Drug Formulary for Azure API for FHIR + In this tutorial, we'll walk through setting up Azure API for FHIR to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange US Drug Formulary Implementation Guide](http://hl7.org/fhir/us/Davinci-drug-formulary/). ## Touchstone capability statement
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-pdex-tutorial.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Da Vinci PDex for Azure API for FHIR
healthcare-apis Davinci Plan Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-plan-net.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Da Vinci Plan Net for Azure API for FHIR + In this tutorial, we'll walk through setting up the FHIR service in Azure API for FHIR to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the Da Vinci PDEX Payer Network (Plan-Net) Implementation Guide. ## Touchstone capability statement
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/de-identified-export.md
Previously updated : 08/24/2022 Last updated : 9/27/2023 # Exporting de-identified data for Azure API for FHIR + > [!Note] > Results when using the de-identified export will vary based on factors such as data inputted, and functions selected by the customer. Microsoft is unable to evaluate the de-identified export outputs or determine the acceptability for customer's use cases and compliance needs. The de-identified export is not guaranteed to meet any specific legal, regulatory, or compliance requirements.
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md
Previously updated : 06/03/2022 Last updated : 9/27/2023 # Disaster recovery for Azure API for FHIR + Azure API for FHIR is a fully managed service, based on Fast Healthcare Interoperability Resources (FHIR®). To meet business and compliance requirements you can use the disaster recovery (DR) feature for Azure API for FHIR. The DR feature provides a Recovery Point Objective (RPO) of 15 minutes and a Recovery Time Objective (RTO) of 60 minutes.
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md
Last updated 06/03/2022
# Enable Diagnostic Logging in Azure API for FHIR + In this article, you'll learn how to enable diagnostic logging in Azure API for FHIR and be able to review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements (such as HIPAA) is a must. The feature in Azure API for FHIR that enables diagnostic logs is the [**Diagnostic settings**](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal. ## View and Download FHIR Metrics Data
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
Previously updated : 06/03/2022 Last updated : 9/27/2023 # Export FHIR data in Azure API for FHIR + The Bulk Export feature allows data to be exported from the FHIR Server per the [FHIR specification](https://hl7.org/fhir/uv/bulkdata/export/index.html). Before using $export, you'll want to make sure that the Azure API for FHIR is configured to use it. For configuring export settings and creating an Azure storage account, refer to [the configure export data page](configure-export-data.md).
healthcare-apis Fhir App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-app-registration.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Register the Azure Active Directory apps for Azure API for FHIR + You have several configuration options to choose from when you're setting up the Azure API for FHIR or the FHIR Server for Azure (OSS). For open source, you'll need to create your own resource application registration. For Azure API for FHIR, this resource application is created automatically. ## Application registrations
healthcare-apis Fhir Github Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-github-projects.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Related GitHub Projects + We have many open-source projects on GitHub that provide you the source code and instructions to deploy services for various uses. You're always welcome to visit our GitHub repositories to learn and experiment with our features and products. ## FHIR Server
healthcare-apis Fhir Paas Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-cli-quickstart.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Quickstart: Deploy Azure API for FHIR using Azure CLI + In this quickstart, you'll learn how to deploy Azure API for FHIR in Azure using the Azure CLI. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
healthcare-apis Fhir Paas Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-portal-quickstart.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Quickstart: Deploy Azure API for FHIR using Azure portal + In this quickstart, you'll learn how to deploy Azure API for FHIR using the Azure portal. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
healthcare-apis Fhir Paas Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-powershell-quickstart.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Quickstart: Deploy Azure API for FHIR using PowerShell + In this quickstart, you'll learn how to deploy Azure API for FHIR using PowerShell. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
Previously updated : 06/03/2022 Last updated : 9/27/2023 # FHIR REST API capabilities for Azure API for FHIR
-In this article, we'll cover some of the nuances of the RESTful interactions of Azure API for FHIR.
+In this article, we'll cover some of the nuances of the RESTful interactions of Azure API for FHIR.
## Conditional create/update
healthcare-apis Find Identity Object Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md
Previously updated : 06/03/2022 Last updated : 9/27/2023 # Find identity object IDs for authentication configuration for Azure API for FHIR + In this article, you'll learn how to find identity object IDs needed when configuring the Azure API for FHIR to [use an external or secondary Active Directory tenant](configure-local-rbac.md) for data plane. ## Find user object ID
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-healthcare-apis-access-token-cli.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Get access token for Azure API for FHIR using Azure CLI + In this article, you'll learn how to obtain an access token for the Azure API for FHIR using the Azure CLI. When you [provision the Azure API for FHIR](fhir-paas-portal-quickstart.md), you configure a set of users or service principals that have access to the service. If your user object ID is in the list of allowed object IDs, you can access the service using a token obtained using the Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
healthcare-apis Get Started With Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-started-with-azure-api-fhir.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Get started with Azure API for FHIR
-> [!Note]
-> Azure Health Data services is the evolved version of Azure API for FHIR enabling customers to manage FHIR, DICOM, and MedTech services with integrations into other Azure Services. To learn about Azure Health Data Services [click here](https://azure.microsoft.com/products/health-data-services/).
This article outlines the basic steps to get started with Azure API for FHIR. Azure API for FHIR is a managed, standards-based, compliant API for clinical health data that enables solutions for actionable analytics and machine learning.
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-do-custom-search.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Defining custom search parameters for Azure API for FHIR + The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines a set of search parameters for all resources and search parameters that are specific to a resource(s). However, there are scenarios where you might want to search against an element in a resource that isn't defined by the FHIR specification as a standard search parameter. This article describes how you can define your own [search parameters](https://www.hl7.org/fhir/searchparameter.html) to be used in the Azure API for FHIR. > [!NOTE]
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-run-a-reindex.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Running a reindex job in Azure API for FHIR + There are scenarios where you may have search or sort parameters in the Azure API for FHIR that haven't yet been indexed. This scenario is relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers how to run a reindex job to index search parameters that haven't yet been indexed in your FHIR service database. > [!Warning]
healthcare-apis Move Fhir Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/move-fhir-service.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Move Azure API for FHIR to a different subscription or resource group
-In this article, you'll learn how to move an Azure API for FHIR instance to a different subscription or another resource group.
+In this article, you'll learn how to move an Azure API for FHIR instance to a different subscription or another resource group.
Moving to a different region isn't supported, though the option may be available from the list. For more information, see [Move operation support for resources](../../azure-resource-manager/management/move-support-resources.md).
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview-of-search.md
Previously updated : 06/03/2022 Last updated : 9/27/2023 # Overview of search in Azure API for FHIR + The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<FHIRSERVERNAME>.azurewebsites.net`. In the examples, we'll use the placeholder {{FHIR_URL}} for this URL. FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all patients in the database, you could use the following request:
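That request is a plain `GET` against the Patient resource type. As a hedged illustration, here's how that call might look from Python with the `requests` library; the server URL and bearer token are placeholders you'd replace with your own values.

```python
import requests

# Placeholder values; replace with your FHIR server URL and a valid access token.
fhir_url = "https://<FHIRSERVERNAME>.azurewebsites.net"
headers = {"Authorization": "Bearer <ACCESS_TOKEN>"}

# GET {{FHIR_URL}}/Patient -- search across all Patient resources.
response = requests.get(f"{fhir_url}/Patient", headers=headers)
response.raise_for_status()

bundle = response.json()                      # a FHIR searchset Bundle
print(bundle.get("resourceType"))             # expected: "Bundle"
print(len(bundle.get("entry", [])), "patients in this page of results")
```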
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview.md
Previously updated : 09/01/2023 Last updated : 09/27/2023 # What is Azure API for FHIR?
-> [!Note]
-> Azure Health Data services is the evolved version of Azure API for FHIR enabling customers to manage FHIR, DICOM, and MedTech services with integrations into other Azure Services.
Azure API for FHIR enables rapid exchange of data through Fast Healthcare Interoperability Resources (FHIR®) APIs, backed by a managed Platform-as-a-Service (PaaS) offering in the cloud. It makes it easier for anyone working with health data to ingest, manage, and persist Protected Health Information [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html) in the cloud:
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/patient-everything.md
Previously updated : 06/03/2022 Last updated : 09/23/2023 # Patient-everything in FHIR + The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a view of all resources related to a patient. This operation can be useful to give patients access to their entire record or for a provider or other user to perform a bulk data download related to a patient. According to the FHIR specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the Azure API for FHIR, Patient-everything is available to pull data related to a specific patient. ## Use Patient-everything
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 09/19/2023 Last updated : 09/27/2023
# Azure Policy built-in definitions for Azure API for FHIR + This page is an index of [Azure Policy](../../governance/policy/overview.md) built-in policy definitions for Azure API for FHIR. For additional Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../../governance/policy/samples/built-in-policies.md).
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/purge-history.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Purge history operation for Azure API for FHIR + `$purge-history` is an operation that allows you to delete the history of a single FHIR resource. This operation isn't defined in the FHIR specification. ## Overview of purge history
healthcare-apis Register Confidential Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-confidential-azure-ad-client-app.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Register a confidential client application in Azure Active Directory for Azure API for FHIR + In this tutorial, you'll learn how to register a confidential client application in Azure Active Directory (Azure AD). A client application registration is an Azure AD representation of an application that can be used to authenticate on behalf of a user and request access to [resource applications](register-resource-azure-ad-client-app.md). A confidential client application is an application that can be trusted to hold a secret and present that secret when requesting access tokens. Examples of confidential applications are server-side applications.
healthcare-apis Register Public Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Register a public client application in Azure Active Directory for Azure API for FHIR + In this article, you'll learn how to register a public application in Azure Active Directory (Azure AD). Client application registrations are Azure AD representations of applications that can authenticate and ask for API permissions on behalf of a user. Public clients are applications such as mobile applications and single page JavaScript applications that can't keep secrets confidential. The procedure is similar to [registering a confidential client](register-confidential-azure-ad-client-app.md), but since public clients can't be trusted to hold an application secret, there's no need to add one.
healthcare-apis Register Resource Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-resource-azure-ad-client-app.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Register a resource application in Azure Active Directory for Azure API for FHIR + In this article, you'll learn how to register a resource (or API) application in Azure Active Directory (Azure AD). A resource application is an Azure AD representation of the FHIR server API itself and client applications can request access to the resource when authenticating. The resource application is also known as the *audience* in OAuth parlance. ## Azure API for FHIR
healthcare-apis Register Service Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-service-azure-ad-client-app.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Register a service client application in Azure Active Directory for Azure API for FHIR + In this article, you'll learn how to register a service client application in Azure Active Directory (Azure AD). Client application registrations are Azure AD representations of applications that can be used to authenticate and obtain tokens. A service client is intended to be used by an application to obtain an access token without interactive authentication of a user. It will have certain application permissions and use an application secret (password) when obtaining access tokens. Follow these steps to create a new service client.
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Previously updated : 06/16/2022 Last updated : 09/27/2023 # Release notes: Azure API for FHIR
-Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
-> [!Note]
-> Azure Health Data services is the evolved version of Azure API for FHIR enabling customers to manage FHIR, DICOM, and MedTech services with integrations into other Azure Services. To learn about Azure Health Data Services [click here](https://azure.microsoft.com/products/health-data-services/).
+Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
## **August 2023** **Decimal value precision in FHIR service is updated per FHIR specification**
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/search-samples.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # FHIR search examples for Azure API for FHIR + Below are some examples of using Fast Healthcare Interoperability Resources (FHIR&#174;) search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a `POST` request. For more information about search, see [Overview of FHIR Search](overview-of-search.md). ## Search result parameters
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 09/19/2023 Last updated : 09/27/2023
# Azure Policy Regulatory Compliance controls for Azure API for FHIR + [Regulatory Compliance in Azure Policy](../../governance/policy/concepts/regulatory-compliance.md) provides Microsoft created and managed initiative definitions, known as _built-ins_, for the **compliance domains** and **security controls** related to different compliance standards. This
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/smart-on-fhir.md
Previously updated : 12/06/2022 Last updated : 09/27/2023 # SMART on FHIR overview + Substitutable Medical Applications and Reusable Technologies ([SMART on FHIR](https://docs.smarthealthit.org/)) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer based on open standards including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits: - Applications have a known method for obtaining authentication/authorization to a FHIR repository. - Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository.
Below tutorials describe steps to enable SMART on FHIR applications with FHIR Se
- After registering the application, make note of the applicationId for client application. - Ensure you have access to Azure Subscription of FHIR service, to create resources and add role assignments.
-## SMART on FHIR using AHDS Samples OSS (SMART on FHIR(Enhanced))
+## SMART on FHIR using Samples OSS (SMART on FHIR (Enhanced))
### Step 1: Set up FHIR SMART user role
-Follow the steps listed under section [Manage Users: Assign Users to Role](/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to role - "FHIR SMART User" will be able to access the FHIR Service if their requests comply with the SMART on FHIR implementation Guide, such as request having access token, which includes a fhirUser claim and a clinical scopes claim. The access granted to the users in this role will then be limited by the resources associated to their fhirUser compartment and the restrictions in the clinical scopes.
+Follow the steps listed under the section [Manage Users: Assign Users to Role](../../role-based-access-control/role-assignments-portal.md). Any user added to the "FHIR SMART User" role can access the FHIR service if their requests comply with the SMART on FHIR implementation guide, such as the request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
### Step 2: FHIR server integration with samples
-[Follow the steps](https://aka.ms/azure-health-data-services-smart-on-fhir-sample) under Azure Health Data Service Samples OSS. This will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
+[Follow the steps](https://aka.ms/azure-health-data-services-smart-on-fhir-sample) under Azure Health Data and AI Samples OSS. This enables integration of the FHIR server with other Azure services (such as APIM, Azure Functions, and more).
> [!NOTE] > Samples are open-source code, and you should review the information and licensing terms on GitHub before using them. They are not part of the Azure Health Data Services and are not supported by Microsoft Support. These samples can be used to show how Azure Health Data Services and other open-source tools can be used together to demonstrate ONC (g)(10) compliance, using Azure Active Directory as the identity provider workflow.
Follow the steps listed under section [Manage Users: Assign Users to Role](/azur
<summary> Click to expand! </summary> > [!NOTE]
-> This is another option to SMART on FHIR(Enhanced) mentioned above. SMART on FHIR Proxy option only enables EHR launch sequence.
+> This is another option to SMART on FHIR (Enhanced) mentioned above. SMART on FHIR Proxy option only enables EHR launch sequence.
### Step 1: Set admin consent for your client application To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
These fields are meant to provide guidance to the app, but they don't convey any
Notice that the SMART on FHIR app launcher updates the **Launch URL** information at the bottom of the page. Select **Launch** to start the sample app. </details>
+## Migrate from SMART on FHIR Proxy to SMART on FHIR (Enhanced)
+ ## Next steps Now that you've learned about enabling SMART on FHIR functionality, see the search samples page for details about how to search using search parameters, modifiers, and other FHIR search methods.
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Store profiles in Azure API for FHIR + HL7 Fast Healthcare Interoperability Resources (FHIR&#174;) defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context that FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications. [FHIR profile](https://www.hl7.org/fhir/profiling.html) allows you to narrow down and customize resource definitions using constraints and extensions. Azure API for FHIR allows validating resources against profiles to see if the resources conform to the profiles. This article guides you through the basics of FHIR profiles and how to store them. For more information about FHIR profiles outside of this article, visit [HL7.org](https://www.hl7.org/fhir/profiling.html).
healthcare-apis Tutorial Member Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-member-match.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # $member-match operation for Azure API for FHIR + [$member-match](http://hl7.org/fhir/us/davinci-hrex/2020Sep/OperationDefinition-member-match.html) is an operation that is defined as part of the Da Vinci Health Record Exchange (HRex). In this guide, we'll walk through what $member-match is and how to use it. ## Overview of $member-match
healthcare-apis Tutorial Web App Fhir Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-fhir-server.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Deploy JavaScript app to read data from Azure API for FHIR + In this tutorial, you'll deploy a small JavaScript app, which reads data from a FHIR service. The steps in this tutorial are: 1. Deploy a FHIR server
healthcare-apis Tutorial Web App Public App Reg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-public-app-reg.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Client application registration for Azure API for FHIR + In the previous tutorial, you deployed and set up your Azure API for FHIR. Now that you have your Azure API for FHIR setup, we'll register a public client application. You can read through the full [register a public client app](register-public-azure-ad-client-app.md) how-to guide for more details or troubleshooting, but we've called out the major steps for this tutorial in this article. 1. Navigate to Azure Active Directory
healthcare-apis Tutorial Web App Test Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-test-postman.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Testing the FHIR API on Azure API for FHIR + In the previous tutorial, you deployed the Azure API for FHIR and registered your client application. You're now ready to test your Azure API for FHIR. ## Retrieve capability statement
healthcare-apis Tutorial Web App Write Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-write-web-app.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Write Azure web application to read FHIR data in Azure API for FHIR + Now that you're able to connect to your FHIR server and POST data, you're ready to write a web application that will read FHIR data. In this final step of the tutorial, we'll walk through writing and accessing the web application. ## Create web application
healthcare-apis Use Custom Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-custom-headers.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Add custom HTTP headers to audit logs in FHIR service + [!INCLUDE [Specific IP ranges for storage account](../includes/custom-header-auditlog.md)] ## Next steps
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/validation-against-profiles.md
Previously updated : 06/03/2022 Last updated : 09/27/2023 # Validate Operation : Overview + In the [store profiles in Azure API for FHIR](store-profiles-in-fhir.md) article, you walked through the basics of FHIR profiles and storing them. This article will guide you through how to use `$validate` for validating resources against profiles. Validating a resource against a profile means checking if the resource conforms to the profile, including the specifications listed in `Resource.meta.profile` or in an Implementation Guide. `$validate` is an operation in Fast Healthcare Interoperability Resources (FHIR&#174;) that allows you to ensure that a FHIR resource conforms to the base resource requirements or a specified profile. This operation ensures that the data in Azure API for FHIR has the expected attributes and values. For information on validate operation, visit [HL7 FHIR Specification](https://www.hl7.org/fhir/resource-operation-validate.html). Per specification, Mode can be specified with `$validate`, such as create and update:
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
Title: FAQs about FHIR service in Azure Health Data Services
+ Title: FAQ about FHIR service in Azure Health Data Services
description: Get answers to frequently asked questions about FHIR service, such as the storage location of data behind FHIR APIs and version support. Previously updated : 06/06/2022 Last updated : 09/27/2023 # Frequently asked questions about FHIR service + This section covers some of the frequently asked questions about the Azure Health Data Services FHIR service (hereafter called FHIR service).
-## FHIR service: The Basics
+## FHIR service: The basics
### What is FHIR?
The Fast Healthcare Interoperability Resources (FHIR - Pronounced "fire") is an
Yes, the data is stored in managed databases in Azure. The FHIR service in Azure Health Data Services doesn't provide direct access to the underlying data store.
-## How can I get access to the underlying data?
+### How can I get access to the underlying data?
In the managed service, you can't access the underlying data. This is to ensure that the FHIR service offers the privacy and compliance certifications needed for healthcare data. If you need access to the underlying data, you can use the [open-source FHIR server](https://github.com/microsoft/fhir-server).
In the managed service, you can't access the underlying data. This is to ensure
We support Microsoft Azure Active Directory as the identity provider.
-## Can I use Azure AD B2C with the FHIR service?
+### Can I use Azure AD B2C with the FHIR service?
No, we don't support B2C in the FHIR service. If you need more granular access controls, we recommend looking at the [open-source FHIR proxy](https://github.com/microsoft/fhir-proxy).
For more information, see [Supported FHIR features](fhir-features-supported.md).
### What is the difference between Azure API for FHIR and the FHIR service in the Azure Health Data Services?
-FHIR service is our implementation of the FHIR specification that sits in the Azure Health Data Services, which allows you to have a FHIR service and a DICOM service within a single workspace. Azure API for FHIR was our initial GA product and is still available as a stand-alone product. The main feature differences are:
+Azure API for FHIR was our initial generally available product and is being retired as of September 30, 2026. The Azure Health Data Services FHIR service supports additional capabilities such as:
+
+- [Transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
+- [Incremental Import](configure-import-data.md)
+- [Autoscaling](fhir-service-autoscale.md) enabled by default
-* FHIR service has a limit of 4 TB, and Azure API for FHIR supports more than 4 TB.
-* FHIR service support additional capabilities as
-** [Transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
-** [Incremental Import](configure-import-data.md).
-** [Autoscaling](fhir-service-autoscale.md) is enabled by default.
-* Azure API for FHIR has more platform features (such as customer managed keys, and cross region DR) that aren't yet available in FHIR service in Azure Health Data Services.
### What's the difference between the FHIR service in Azure Health Data Services and the open-source FHIR server?
There are two basic Delete types supported within the FHIR service. These are [D
### Can I perform health checks on FHIR service?
-To perform health check on FHIR service , enter `{{fhirurl}}/health/check` in the GET request. You should be able to see Status of FHIR service. HTTP Status code response with 200 and OverallStatus as "Healthy" in response, means your health check is successful.
-In case of errors, you will receive error response with HTTP status code 404 (Not Found) or status code 500 (Internal Server Error), and detailed information in response body in some scenarios.
+To perform a health check on a FHIR service, send a GET request to `{{fhirurl}}/health/check`. The response shows the status of the FHIR service: an HTTP status code of 200 with an OverallStatus of **Healthy** in the response body means the health check succeeded.
+
+In case of errors, you may receive an error response with HTTP status code 404 (Not Found) or status code 500 (Internal Server Error), and detailed information in the response body.
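
As a minimal sketch, the check can be scripted as follows. The service URL is a placeholder; add an `Authorization` header if your configuration requires one.

```python
import requests

# Placeholder value: replace with your FHIR service URL.
fhir_url = "https://<your-fhir-server>.azurehealthcareapis.com"

response = requests.get(f"{fhir_url}/health/check")

# 200 with OverallStatus "Healthy" in the body means the check succeeded;
# 404 or 500 indicates a problem with the service.
print(response.status_code)
print(response.text)
```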
## Next steps
healthcare-apis Migration Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/migration-faq.md
+
+ Title: FAQ about migrations from Azure API for FHIR
+description: Find answers to your questions about migrating FHIR data from Azure API for FHIR to the Azure Health Data Services FHIR service.
++++++ Last updated : 9/27/2023++
+# FAQ about migration from Azure API for FHIR
+
+## When will Azure API for FHIR be retired?
+
+Azure API for FHIR will be retired on September 30, 2026.
+
+## Are new deployments of Azure API for FHIR allowed?
+
+Due to the transition from Azure API for FHIR to Azure Health Data Services, after April 1, 2025, customers won't be able to create new deployments of Azure API for FHIR. Until April 1, 2025, new deployments are allowed.
+
+## Why is Microsoft retiring Azure API for FHIR?
+
+Azure API for FHIR is a service that was purpose-built for protected health information (PHI), meeting regional compliance requirements. In March 2022, we announced the general availability of Azure Health Data Services, which enables quick deployment of managed, enterprise-grade FHIR, DICOM, and MedTech services for diverse health data integration. With this new experience, we're retiring Azure API for FHIR.
+
+## What are the benefits of migrating to Azure Health Data Services FHIR service?
+
+The Azure Health Data Services FHIR service offers a rich set of capabilities, such as:
+
+- Consumption-based pricing model where customers pay only for used storage and throughput
+- Support for transaction bundles
+- Chained search improvements
+- Improved ingress and egress of data with \$import, \$export including new features such as incremental import (preview)
+- Events to trigger new workflows when FHIR resources are created, updated or deleted
+- Connectors to Azure Synapse Analytics, Power BI and Azure Machine Learning for enhanced analytics
+
+## What are the steps to enable SMART on FHIR in Azure Health Data Service FHIR service?
+
+SMART on FHIR proxy is retiring. Organizations need to transition to SMART on FHIR (Enhanced), which uses the Azure Health Data and AI OSS samples, by **September 21, 2026**. After September 21, 2026, applications relying on SMART on FHIR proxy will report errors when accessing the FHIR service.
+
+SMART on FHIR (Enhanced) provides more capabilities than SMART on FHIR proxy, and meets requirements in the SMART on FHIR Implementation Guide (v 1.0.0) and §170.315(g)(10) Standardized API for patient and population services criterion.
+
+## What will happen after the service is retired on September 30, 2026?
+
+After September 30, 2026 customers won't be able to:
+
+- Create or manage Azure API for FHIR accounts
+- Access the data through the Azure portal or APIs/SDKs/client tools
+- Receive service updates to Azure API for FHIR or APIs/SDKs/client tools
+- Access customer support (phone, email, web)
+
+## Where can customers go to learn more about migrating to Azure Health Data Services FHIR service?
+
+Start with [migration strategies](migration-strategies.md) to learn more about Azure API for FHIR to Azure Health Data Services FHIR service migration. The migration from Azure API for FHIR to Azure Health Data Services FHIR service involves data migration and updating the applications to use Azure Health Data Services FHIR service. Find more documentation on the step-by-step approach to migrating your data and applications in the [migration tool](https://go.microsoft.com/fwlink/?linkid=2247964).
+
+## Where can customers go to get answers to their questions?
+
+Check out these resources if you need further assistance:
+
+- Get answers from community experts in [Microsoft Q&A](https://go.microsoft.com/fwlink/?linkid=2248420).
+- If you have a support plan and require technical support, [contact us](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
healthcare-apis Migration Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/migration-strategies.md
+
+ Title: Migration strategies for moving from Azure API for FHIR
+description: Learn how to migrate FHIR data from Azure API for FHIR to the Azure Health Data Services FHIR service. This article provides steps and tools for a smooth transition.
++++++ Last updated : 9/27/2023++
+# Migration strategies for moving from Azure API for FHIR
++
+Azure Health Data Services FHIR service is the next-generation platform for health data integration. It offers managed, enterprise-grade FHIR, DICOM, and MedTech services for diverse health data exchange.
+
+When you migrate your FHIR data from Azure API for FHIR to Azure Health Data Services FHIR service, your organization can benefit from improved performance, scalability, security, and compliance. Organizations can also access new features and capabilities that aren't available in Azure API for FHIR.
+
+Azure API for FHIR will be retired on September 30, 2026, so you need to migrate your FHIR data to Azure Health Data Services FHIR service as soon as feasible. To make the process easier, we created some tools and tips to help you assess your readiness, prepare your data, migrate your applications, and cut over to the new service.
+
+## Recommended approach
+
+To migrate your data, follow these steps:
+
+- Step 1: Assess readiness
+- Step 2: Prepare to migrate
+- Step 3: Migrate data and application workloads
+- Step 4: Cut over from Azure API for FHIR to Azure Health Data Services
+
+## Step 1: Assess readiness
+
+Compare the differences between Azure API for FHIR and Azure Health Data Services. Also review your architecture and assess if any changes need to be made.
+
+|**Capabilities** |**Azure API for FHIR** |**Azure Health Data Services** |
+||||
+| **Settings** | Supported: <br>• Local RBAC <br>• SMART on FHIR Proxy | Planned deprecation: <br>• Local RBAC (9/6/23) <br>• SMART on FHIR Proxy (9/21/26) |
+| **Data storage volume** | More than 4 TB | Current support is 4 TB. Reach out to the CSS team if you need more than 4 TB. |
+| **Data ingress** | Tools available in OSS | $import operation |
+| **Autoscaling** | Supported on request and incurs a charge | Enabled by default at no extra charge |
+| **Search parameters** | • Bundle type supported: Batch <br> • Include and revinclude, iterate modifier not supported <br> • Sorting supported by first name, last name, birthdate, and clinical date | • Bundle types supported: Batch and transaction <br> • Selectable search parameters <br> • Include, revinclude, and iterate modifier supported <br> • Sorting supported by string and dateTime fields |
+| **Events** | Not supported | Supported |
+| **Infrastructure** | Supported: <br> • Customer managed keys <br> • AZ support and PITR <br> • Cross-region DR | Supported: Data recovery <br> Upcoming: AZ support for customer managed keys |
+
+### Things to consider that may affect your architecture
+
+- **Sync agent is being deprecated**. If you're using sync agent to connect to Dataverse, see [Overview of data integration toolkit](/dynamics365/industry/healthcare/data-integration-toolkit-overview?toc=%2Findustry%2Fhealthcare%2Ftoc.json&bc=%2Findustry%2Fbreadcrumb%2Ftoc.json)
+
+- **FHIR Proxy is being deprecated**. If you're using FHIR Proxy for events, refer to the built-in [eventing](../events/events-overview.md) feature. Alternatives can be customized and built using the [Azure Health Data Services toolkit](https://github.com/microsoft/azure-health-data-services-toolkit).
+
+- **SMART on FHIR proxy is being deprecated**. You need to use the new SMART on FHIR capability. More information: [SMART on FHIR](smart-on-fhir.md)
+
+- **Azure Health Data Services FHIR Service does not support local RBAC and custom authority**. The token issuer authority needs to be the authentication endpoint for the tenant that the FHIR Service is running in.
+
+- **The IoT connector is only supported using an Azure API for FHIR service**. The IoT connector is succeeded by the MedTech service. You need to deploy a MedTech service and corresponding FHIR service within an existing or new Azure Health Data Services workspace and point your devices to the new Azure Event Hubs device event hub. Use the existing IoT connector device and destination mapping files with the MedTech service deployment.
+
+If you want to migrate existing IoT connector device FHIR data from your Azure API for FHIR service to the Azure Health Data Services FHIR service, use the bulk export and import functionality in the migration tool. Another migration path would be to deploy a new MedTech service and replay the IoT device messages through the MedTech service.
+
+## Step 2: Prepare to migrate
+
+First, create a migration plan. We recommend the migration patterns described in the following table. Depending on your organization's tolerance for downtime, you may decide to use certain patterns and tools to help facilitate your migration.
+
+| Migration Pattern | Details | How? |
+||||
+| Lift and shift | The simplest pattern. Ideal if your data pipelines can afford longer downtime. | Choose the option that works best for your organization: <br> • Configure a workflow to [\$export](../azure-api-for-fhir/export-data.md) your data on Azure API for FHIR, and then [\$import](configure-import-data.md) into Azure Health Data Services FHIR service (see the \$export sketch after this table). <br> • The [GitHub repo](https://go.microsoft.com/fwlink/?linkid=2247964) provides tips on running these commands, and a script to help automate creating the \$import payload. <br> • Or create your own tool to migrate the data using \$export and \$import. |
+| Incremental copy | Continuous version of lift and shift, with less downtime. Ideal for large amounts of data that take longer to copy, or if you want to continue running Azure API for FHIR during the migration. | Choose the option that works best for your organization. <br> • We created an [OSS migration tool](https://go.microsoft.com/fwlink/?linkid=2248131) to help with this migration pattern. <br> • Or create your own tool to migrate the data incrementally.|
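
As a rough sketch of the \$export side of the lift and shift pattern, the following example kicks off a system-level export and polls for completion. The URLs and token are placeholders, the source server must already be configured with a destination storage account, and the linked GitHub repo covers building the matching \$import payload.

```python
import time
import requests

# Placeholder values: source Azure API for FHIR URL and a token with export permissions.
source_fhir_url = "https://<your-azure-api-for-fhir>.azurehealthcareapis.com"
access_token = "<access-token>"
headers = {
    "Authorization": f"Bearer {access_token}",
    "Accept": "application/fhir+json",
    "Prefer": "respond-async",
}

# Kick off a system-level bulk export; the server answers 202 Accepted
# with a Content-Location header that points to the job status endpoint.
kickoff = requests.get(f"{source_fhir_url}/$export", headers=headers)
kickoff.raise_for_status()
status_url = kickoff.headers["Content-Location"]

# Poll until the export completes; a 200 response returns a manifest of the
# NDJSON files written to the configured storage account.
while True:
    status = requests.get(status_url, headers={"Authorization": f"Bearer {access_token}"})
    if status.status_code == 200:
        print(status.json())
        break
    time.sleep(30)
```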
+
+### OSS migration tool considerations
+
+If you decide to use the OSS migration tool, review and understand the migration tool's [capabilities and limitations](https://go.microsoft.com/fwlink/?linkid=2248324).
+
+#### Prepare Azure API for FHIR server
+
+Identify data to migrate.
+- Take this opportunity to clean up data or FHIR servers that you no longer use.
+
+- Decide if you want to migrate historical versions or not.
+
+Deploy a new Azure Health Data Services FHIR Service server.
+- First, deploy an Azure Health Data Services workspace.
+
+- Then deploy an Azure Health Data Services FHIR Service server. More information: [Deploy a FHIR service within Azure Health Data Services](fhir-portal-quickstart.md)
+
+- Configure your new Azure Health Data Services FHIR Service server. If you need to use the same configurations as you have in Azure API for FHIR for your new server, see the recommended list of what to check for in the [migration tool documentation](https://go.microsoft.com/fwlink/?linkid=2248324). Configure the settings before you migrate.
+
+## Step 3: Migrate data
+
+Choose the migration pattern that works best for your organization. If you're using OSS migration tools, follow the instructions on [GitHub](https://go.microsoft.com/fwlink/?linkid=2248130).
+
+## Step 4: Migrate applications and reconfigure settings
+
+Migrate applications that were pointing to the old FHIR server.
+
+- Change the endpoints on your applications so that they point to the new FHIR server's URL.
+
+- Set up permissions again for [these apps](/azure/storage/blobs/assign-azure-role-data-access).
+
+- Reconfigure any remaining settings in the new Azure Health Data Services FHIR Service server after migration.
+
+- If you'd like to double-check that the Azure Health Data Services FHIR Service and Azure API for FHIR servers have the same configurations, you can check both [metadata endpoints](use-postman.md#get-capability-statement) to compare and contrast the two servers (see the sketch after this list).
+
+- Set up any jobs that were previously running in your old Azure API for FHIR server (for example, \$export jobs).
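
For example, a quick, hypothetical way to compare the two capability statements from the metadata endpoints mentioned above (both server URLs are placeholders):

```python
import requests

# Placeholder values: replace with your old and new server URLs.
old_fhir_url = "https://<your-azure-api-for-fhir>.azurehealthcareapis.com"
new_fhir_url = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"

def capability_statement(base_url: str) -> dict:
    # /metadata returns the server's CapabilityStatement.
    response = requests.get(f"{base_url}/metadata", headers={"Accept": "application/fhir+json"})
    response.raise_for_status()
    return response.json()

def resource_types(caps: dict) -> set:
    # Resource types (and their interactions) are listed under rest[0].resource.
    return {entry["type"] for entry in caps["rest"][0].get("resource", [])}

old_types = resource_types(capability_statement(old_fhir_url))
new_types = resource_types(capability_statement(new_fhir_url))

print("Only on the old server:", old_types - new_types)
print("Only on the new server:", new_types - old_types)
```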
+
+## Step 5: Cut over to Azure Health Data Services FHIR services
+
+After you're confident that your Azure Health Data Services FHIR Service server is stable, you can begin using Azure Health Data Services FHIR Service to satisfy your business scenarios. Turn off any remaining pipelines that are running on Azure API for FHIR, delete data from the intermediate storage account that was used in the migration tool if necessary, delete data from your Azure API for FHIR server, and decommission your Azure API for FHIR account.
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Last updated 11/10/2022
# SMART on FHIR
-Substitutable Medical Applications and Reusable Technologies([SMART on FHIR](https://docs.smarthealthit.org/)) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer based on open standards including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits:
+Substitutable Medical Applications and Reusable Technologies ([SMART on FHIR](https://docs.smarthealthit.org/)) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer, based on open standards including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits:
- Applications have a known method for obtaining authentication/authorization to a FHIR repository. - Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository. - Users have the ability to grant applications access to a limited set of their data by using SMART clinical scopes.
Below tutorials provide steps to enable SMART on FHIR applications with FHIR Ser
## SMART on FHIR using Azure Health Data Services Samples (SMART on FHIR (Enhanced)) ### Step 1: Set up FHIR SMART user role
-Follow the steps listed under section [Manage Users: Assign Users to Role](/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role will be able to access the FHIR Service if their requests comply with the SMART on FHIR implementation Guide, such as request having access token which includes a fhirUser claim and a clinical scopes claim. The access granted to the users in this role will then be limited by the resources associated to their fhirUser compartment and the restrictions in the clinical scopes.
+Follow the steps listed under the section [Manage Users: Assign Users to Role](../../role-based-access-control/role-assignments-portal.md). Any user added to this role can access the FHIR Service, provided their requests comply with the SMART on FHIR Implementation Guide. The access granted to the users in this role will then be limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
### Step 2: FHIR server integration with samples
-For integration with Azure Health Data Services samples, you would need to follow the steps in samples open source solution.
-
-**[Click on the link](https://aka.ms/azure-health-data-services-smart-on-fhir-sample)** to navigate to Azure Health Data Service Samples OSS. This step listed in the document will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
+**[Follow this link](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/smartonfhir)** to go to the Azure Health Data and AI Samples open-source solution. The steps listed in that document enable integration of the FHIR server with other Azure services (such as API Management, Azure Functions, and more).
> [!NOTE] > Samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg) compliance, using Azure Active Directory as the identity provider workflow.
Add the reply URL to the public client application that you created earlier for
### Step 3: Get a test patient
-To test the FHIR service and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
+To test the FHIR service and the SMART on FHIR proxy, you need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
### Step 4: Download the SMART on FHIR app launcher
After you start the SMART on FHIR app launcher, you can point your browser to `h
![Screenshot showing SMART on FHIR app launcher.](media/smart-on-fhir/smart-on-fhir-app-launcher.png)
-When you enter **Patient**, **Encounter**, or **Practitioner** information, you'll notice that the **Launch context** is updated. When you're using the FHIR service, the launch context is simply a JSON document that contains information about patient, practitioner, and more. This launch context is base64 encoded and passed to the SMART on FHIR app as the `launch` query parameter. According to the SMART on FHIR specification, this variable is opaque to the SMART on FHIR app and passed on to the identity provider.
+When you enter **Patient**, **Encounter**, or **Practitioner** information, you notice that the **Launch context** is updated. When you're using the FHIR service, the launch context is simply a JSON document that contains information about patient, practitioner, and more. This launch context is base64 encoded and passed to the SMART on FHIR app as the `launch` query parameter. According to the SMART on FHIR specification, this variable is opaque to the SMART on FHIR app and passed on to the identity provider.
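
To illustrate the encoding only, here's a minimal sketch of how such a launch context could be base64 encoded into the `launch` parameter. The field names are illustrative placeholders, not a definitive schema.

```python
import base64
import json

# Hypothetical launch context built from the values entered in the app launcher.
launch_context = {
    "patient": "<patient-id>",
    "encounter": "<encounter-id>",
    "practitioner": "<practitioner-id>",
}

# The launcher base64-encodes the JSON document and passes it as the `launch` query parameter.
launch_param = base64.b64encode(json.dumps(launch_context).encode("utf-8")).decode("utf-8")
print(f"launch={launch_param}")
```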
The SMART on FHIR proxy uses this information to populate fields in the token response. The SMART on FHIR app *can* use these fields to control which patient it requests data for and how it renders the application on the screen. The SMART on FHIR proxy supports the following fields:
Notice that the SMART on FHIR app launcher updates the **Launch URL** informatio
Inspect the token response to see how the launch context fields are passed on to the app. </details>
+## Migrate from SMART on FHIR Proxy to SMART on FHIR (Enhanced)
+ ## Next steps Now that you've learned about enabling SMART on FHIR functionality, see the search samples page for details about how to search using search parameters, modifiers, and other FHIR search methods.
healthcare-apis Healthcare Apis Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-faqs.md
Azure Health Data Services enables you to:
### Can I migrate my existing production workload from Azure API for FHIR to Azure Health Data Services?
-No, unfortunately we don't offer migration capabilities at this time.
+Yes. Azure API for FHIR is retiring on September 30, 2026. See [migration strategies](./fhir/migration-strategies.md).
### What is the pricing of Azure Health Data Services?
load-balancer Gateway Deploy Dual Stack Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-deploy-dual-stack-load-balancer.md
Last updated 09/25/2023 -+ # Deploy a dual-stack Azure Gateway Load Balancer
logic-apps Ise Manage Integration Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/ise-manage-integration-service-environment.md
ms.suite: integration Previously updated : 08/29/2023 Last updated : 09/27/2023 # Manage your integration service environment (ISE) in Azure Logic Apps
Last updated 08/29/2023
> before this date are supported through August 31, 2024. For more information, see the following resources: > > - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220)
-> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
+> - [Single-tenant versus multitenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) > - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) > - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/)
Last updated 08/29/2023
This article shows how to perform management tasks for your [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), for example:
-* Manage the resources such as multi-tenant based logic apps, connections, integration accounts, and connectors in your ISE.
+* Find and view your ISE.
+
+* Enable access for your ISE.
* Check your ISE's network health.
+* Manage the resources such as multitenant based logic apps, connections, integration accounts, and connectors in your ISE.
+ * To add capacity, restart your ISE, or delete your ISE, follow the steps in this topic. To add these artifacts to your ISE, see [Add artifacts to your integration service environment](../logic-apps/add-artifacts-integration-service-environment-ise.md). ## View your ISE
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the portal's search box, enter "integration service environments", and then select **Integration Service Environments**.
+1. In the [Azure portal](https://portal.azure.com) search box, enter **integration service environments**, and select **Integration Service Environments**.
![Find integration service environments](./media/ise-manage-integration-service-environment/find-integration-service-environment.png)
This article shows how to perform management tasks for your [integration service
1. Continue to the next sections to find logic apps, connections, connectors, or integration accounts in your ISE.
+## Enable access for your ISE
+
+When you use an ISE with an Azure virtual network, a common setup problem is having one or more blocked ports. The connectors that you use for creating connections between your ISE and destination systems might also have their own port requirements. For example, if you communicate with an FTP system by using the FTP connector, the port that you use on your FTP system needs to be available, for example, port 21 for sending commands.
+
+To make sure that your ISE is accessible and that the logic apps in that ISE can communicate across each subnet in your virtual network, [open the ports described in this table for each subnet](#network-ports-for-ise). If any required ports are unavailable, your ISE won't work correctly.
+
+* If you have multiple ISE instances that need access to other endpoints that have IP restrictions, deploy an [Azure Firewall](../firewall/overview.md) or a [network virtual appliance](../virtual-network/virtual-networks-overview.md#filter-network-traffic) into your virtual network and route outbound traffic through that firewall or network virtual appliance. You can then [set up a single, outbound, public, static, and predictable IP address](connect-virtual-network-vnet-set-up-single-ip-address.md) that all the ISE instances in your virtual network can use to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE.
+
+ > [!NOTE]
+ > You can use this approach for a single ISE when your scenario requires limiting the
+ > number of IP addresses that need access. Consider whether the extra costs for
+ > the firewall or virtual network appliance make sense for your scenario. Learn more about
+ > [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/).
+
+* If you created a new Azure virtual network and subnets without any constraints, you don't need to set up [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md#network-security-groups) in your virtual network to control traffic across subnets.
+
+* For an existing virtual network, you can *optionally* set up [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md#network-security-groups) to [filter network traffic across subnets](../virtual-network/tutorial-filter-network-traffic.md). If you want to go this route, or if you're already using NSGs, make sure that you [open the ports described in this table](#network-ports-for-ise) for those NSGs.
+
+ When you set up [NSG security rules](../virtual-network/network-security-groups-overview.md#security-rules), you need to use *both* the **TCP** and **UDP** protocols, or you can select **Any** instead so you don't have to create separate rules for each protocol. NSG security rules describe the ports that you must open for the IP addresses that need access to those ports. Make sure that any firewalls, routers, or other items that exist between these endpoints also keep those ports accessible to those IP addresses.
+
+* For an ISE that has *external* endpoint access, you must create a network security group (NSG), if you don't have one already. You need to add an inbound security rule to the NSG to allow traffic from managed connector outbound IP addresses. To set up this rule, follow these steps:
+
+ 1. On your ISE menu, under **Settings**, select **Properties**.
+
+ 1. Under **Connector outgoing IP addresses**, copy the public IP address ranges, which also appear in this article, [Limits and configuration - Outbound IP addresses](logic-apps-limits-and-config.md#outbound).
+
+ 1. Create a network security group, if you don't have one already.
+
+ 1. Based on the following information, add an inbound security rule for the public outbound IP addresses that you copied. For more information, review [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md#create-a-network-security-group).
+
+ | Purpose | Source service tag or IP addresses | Source ports | Destination service tag or IP addresses | Destination ports | Notes |
+ |||--|--|-|-|
+ | Permit traffic from connector outbound IP addresses | <*connector-public-outbound-IP-addresses*> | * | Address space for the virtual network with ISE subnets | * | |
+
+* If you set up forced tunneling through your firewall to redirect Internet-bound traffic, review the [forced tunneling requirements](#forced-tunneling).
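
The inbound rule for the connector outbound IP addresses described in the steps above can also be scripted. The following is only a sketch using the Azure SDK for Python (`azure-identity` and `azure-mgmt-network` packages); all names, ranges, and the priority are placeholders, and the portal tutorial linked in the steps remains the documented path.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values: subscription, resource group, NSG name, the connector outbound
# IP ranges copied from the ISE Properties pane, and the virtual network address space.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
nsg_name = "<nsg-name>"
connector_outbound_ips = ["<ip-range-1>", "<ip-range-2>"]
ise_vnet_address_space = "<vnet-address-space>"

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Inbound rule that permits traffic from the managed connector outbound IP addresses
# to the address space of the virtual network that contains the ISE subnets.
client.security_rules.begin_create_or_update(
    resource_group,
    nsg_name,
    "Allow-ISE-Connector-Outbound-IPs",
    {
        "protocol": "*",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 200,
        "source_address_prefixes": connector_outbound_ips,
        "source_port_range": "*",
        "destination_address_prefix": ise_vnet_address_space,
        "destination_port_range": "*",
    },
).result()
```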
+
+<a name="network-ports-for-ise"></a>
+
+### Network ports used by your ISE
+
+This table describes the ports that your ISE requires to be accessible and the purpose for those ports. To help reduce complexity when you set up security rules, the table uses [service tags](../virtual-network/service-tags-overview.md) that represent groups of IP address prefixes for a specific Azure service. Where noted, *internal ISE* and *external ISE* refer to the [access endpoint that's selected during ISE creation](connect-virtual-network-vnet-isolated-environment.md#create-environment). For more information, review [Endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access).
+
+> [!IMPORTANT]
+>
+> For all rules, make sure that you set source ports to `*` because source ports are ephemeral.
+
+#### Inbound security rules
+
+| Source ports | Destination ports | Source service tag or IP addresses | Destination service tag or IP addresses | Purpose | Notes |
+|--|-||--||-|
+| * | * | Address space for the virtual network with ISE subnets | Address space for the virtual network with ISE subnets | Intersubnet communication within virtual network. | Required for traffic to flow *between* the subnets in your virtual network. <br><br>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. |
+| * | 443 | Internal ISE: <br>**VirtualNetwork** <br><br>External ISE: **Internet** or see **Notes** | **VirtualNetwork** | - Communication to your logic app <br><br>- Runs history for your logic app | Rather than use the **Internet** service tag, you can specify the source IP address for these items: <br><br>- The computer or service that calls any request triggers or webhooks in your logic app <br><br>- The computer or service from where you want to access logic app runs history <br><br>**Important**: Closing or blocking this port prevents calls to logic apps that have request triggers or webhooks. You're also prevented from accessing inputs and outputs for each step in runs history. However, you're not prevented from accessing logic app runs history. |
+| * | 454 | **LogicAppsManagement** |**VirtualNetwork** | Azure Logic Apps designer - dynamic properties| Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) for that region. <br><br>**Important**: If you're working with Azure Government cloud, the **LogicAppsManagement** service tag won't work. Instead, you have to provide the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) for Azure Government. |
+| * | 454 | **LogicApps** | **VirtualNetwork** | Network health check | Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#outbound) for that region. <br><br>**Important**: If you're working with Azure Government cloud, the **LogicApps** service tag won't work. Instead, you have to provide both the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
+| * | 454 | **AzureConnectors** | **VirtualNetwork** | Connector deployment | Required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. <br><br>**Important**: If you're working with Azure Government cloud, the **AzureConnectors** service tag won't work. Instead, you have to provide the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. |
+| * | 454, 455 | **AppServiceManagement** | **VirtualNetwork** | App Service Management dependency ||
+| * | Internal ISE: 454 <br><br>External ISE: 443 | **AzureTrafficManager** | **VirtualNetwork** | Communication from Azure Traffic Manager ||
+| * | 3443 | **APIManagement** | **VirtualNetwork** | Connector policy deployment <br><br>API Management - management endpoint | For connector policy deployment, port access is required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. |
+| * | 6379 - 6383, plus see **Notes** | **VirtualNetwork** | **VirtualNetwork** | Access Azure Cache for Redis Instances between Role Instances | For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). |
+
+#### Outbound security rules
+
+| Source ports | Destination ports | Source service tag or IP addresses | Destination service tag or IP addresses | Purpose | Notes |
+|--|-||--||-|
+| * | * | Address space for the virtual network with ISE subnets | Address space for the virtual network with ISE subnets | Intersubnet communication within virtual network | Required for traffic to flow *between* the subnets in your virtual network. <br><br>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. |
+| * | 443, 80 | **VirtualNetwork** | Internet | Communication from your logic app | This rule is required for Secure Socket Layer (SSL) certificate verification. This check is for various internal and external sites, which is the reason that the Internet is required as the destination. |
+| * | Varies based on destination | **VirtualNetwork** | Varies based on destination | Communication from your logic app | Destination ports vary based on the endpoints for the external services with which your logic app needs to communicate. <br><br>For example, the destination port is port 25 for an SMTP service, port 22 for an SFTP service, and so on. |
+| * | 80, 443 | **VirtualNetwork** | **AzureActiveDirectory** | Azure Active Directory ||
+| * | 80, 443, 445 | **VirtualNetwork** | **Storage** | Azure Storage dependency ||
+| * | 443 | **VirtualNetwork** | **AppService** | Connection management ||
+| * | 443 | **VirtualNetwork** | **AzureMonitor** | Publish diagnostic logs & metrics ||
+| * | 1433 | **VirtualNetwork** | **SQL** | Azure SQL dependency ||
+| * | 1886 | **VirtualNetwork** | **AzureMonitor** | Azure Resource Health | Required for publishing health status to Resource Health. |
+| * | 5672 | **VirtualNetwork** | **EventHub** | Dependency from Log to Event Hubs policy and monitoring agent ||
+| * | 6379 - 6383, plus see **Notes** | **VirtualNetwork** | **VirtualNetwork** | Access Azure Cache for Redis Instances between Role Instances | For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). |
+| * | 53 | **VirtualNetwork** | IP addresses for any custom Domain Name System (DNS) servers on your virtual network | DNS name resolution | Required only when you use custom DNS servers on your virtual network |
+
+In addition, you need to add outbound rules for [App Service Environment (ASE)](../app-service/environment/intro.md):
+
+* If you use Azure Firewall, you need to set up your firewall with the App Service Environment (ASE) [fully qualified domain name (FQDN) tag](../firewall/fqdn-tags.md#current-fqdn-tags), which permits outbound access to ASE platform traffic.
+
+* If you use a firewall appliance other than Azure Firewall, you need to set up your firewall with *all* the rules listed in the [firewall integration dependencies](../app-service/environment/firewall-integration.md#dependencies) that are required for App Service Environment.
+
+<a name="forced-tunneling"></a>
+
+#### Forced tunneling requirements
+
+If you set up or use [forced tunneling](../firewall/forced-tunneling.md) through your firewall, you have to permit extra external dependencies for your ISE. Forced tunneling lets you redirect Internet-bound traffic to a designated next hop, such as your virtual private network (VPN) or to a virtual appliance, rather than to the Internet so that you can inspect and audit outbound network traffic.
+
+If you don't permit access for these dependencies, your ISE deployment fails and your deployed ISE stops working.
+
+* User-defined routes
+
+ To prevent asymmetric routing, you must define a route for each and every IP address that's listed below with **Internet** as the next hop.
+
+ * [Azure Logic Apps inbound and outbound addresses for the ISE region](logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags)
+ * [Azure IP addresses for connectors in the ISE region, available in this download file](https://www.microsoft.com/download/details.aspx?id=56519)
+ * [App Service Environment management addresses](../app-service/environment/management-addresses.md)
+ * [Azure Traffic Manager management addresses](https://azuretrafficmanagerdata.blob.core.windows.net/probes/azure/probe-ip-ranges.json)
+ * [Azure API Management Control Plane IP addresses](../api-management/virtual-network-reference.md#control-plane-ip-addresses)
+
+* Service endpoints
+
+ You need to enable service endpoints for Azure SQL, Storage, Service Bus, KeyVault, and Event Hubs because you can't send traffic through a firewall to these services.
+
+* Other inbound and outbound dependencies
+
+ Your firewall *must* allow the following inbound and outbound dependencies:
+
+ * [Azure App Service Dependencies](../app-service/environment/firewall-integration.md#deploying-your-ase-behind-a-firewall)
+ * [Azure Cache Service Dependencies](../azure-cache-for-redis/cache-how-to-premium-vnet.md#what-are-some-common-misconfiguration-issues-with-azure-cache-for-redis-and-virtual-networks)
+ * [Azure API Management Dependencies](../api-management/virtual-network-reference.md)
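
As referenced in the user-defined routes bullet above, one possible way to script those routes is with the Azure SDK for Python. This is only a sketch under assumed names; the route table must already be associated with the ISE subnets, and you need one route per required IP address.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values: subscription, resource group, route table, and the IP addresses
# gathered from the lists above.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
route_table_name = "<route-table-name>"
required_ips = ["<required-ip-1>", "<required-ip-2>"]

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Each route sends traffic for one required address directly to the Internet,
# bypassing the forced-tunnel next hop to avoid asymmetric routing.
for index, ip_address in enumerate(required_ips):
    client.routes.begin_create_or_update(
        resource_group,
        route_table_name,
        f"ise-direct-internet-{index}",
        {
            "address_prefix": f"{ip_address}/32",
            "next_hop_type": "Internet",
        },
    ).result()
```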
+ <a name="check-network-health"></a> ## Check network health
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
A compute instance is a fully managed cloud-based workstation optimized for your
Azure Machine Learning compute instance enables you to author, train, and deploy models in a fully integrated notebook experience in your workspace.
-You can run Jupyter notebooks in [VS Code](https://techcommunity.microsoft.com/t5/azure-ai/power-your-vs-code-notebooks-with-azml-compute-instances/ba-p/1629630) using compute instance as the remote server with no SSH needed. You can also enable VS Code integration through [remote SSH extension](https://devblogs.microsoft.com/python/enhance-your-azure-machine-learning-experience-with-the-vs-code-extension/).
+You can run notebooks from [your Azure Machine Learning workspace](./how-to-run-jupyter-notebooks.md), [Jupyter](https://jupyter.org/), [JupyterLab](https://jupyterlab.readthedocs.io), or [Visual Studio Code](./how-to-launch-vs-code-remote.md). VS Code Desktop can be configured to access your compute instance. Or use VS Code for the Web, directly from the browser, and without any required installations or dependencies.
+
+We recommend you try VS Code for the Web to take advantage of the easy integration and rich development environment it provides. VS Code for the Web gives you many of the features of VS Code Desktop that you love, including search and syntax highlighting while browsing and editing. For more information about using VS Code Desktop and VS Code for the Web, see [Launch Visual Studio Code integrated with Azure Machine Learning (preview)](how-to-launch-vs-code-remote.md) and [Work in VS Code remotely connected to a compute instance (preview)](how-to-work-in-vs-code-remote.md).
You can [install packages](how-to-access-terminal.md#install-packages) and [add kernels](how-to-access-terminal.md#add-new-kernels) to your compute instance.
-Following tools and environments are already installed on the compute instance:
+The following tools and environments are already installed on the compute instance:
|General tools & environments|Details| |-|:-:|
Following tools and environments are already installed on the compute instance:
You can [Add RStudio or Posit Workbench (formerly RStudio Workbench)](how-to-create-compute-instance.md#add-custom-applications-such-as-rstudio-or-posit-workbench) when you create the instance.
-|**PYTHON** tools & environments|Details|
+|**PYTHON** tools & environments |Details|
|-|-| |Anaconda Python|| |Jupyter and extensions|| |Jupyterlab and extensions||
-[Azure Machine Learning SDK for Python](https://aka.ms/sdk-v2-install)</br>from PyPI|Includes azure-ai-ml and many common azure extra packages. To see the full list, [open a terminal window on your compute instance](how-to-access-terminal.md) and run <br/> `conda list -n azureml_py310_sdkv2 ^azure` |
+[Azure Machine Learning SDK <br/> for Python](https://aka.ms/sdk-v2-install) from PyPI|Includes azure-ai-ml and many common azure extra packages. To see the full list, <br/> [open a terminal window on your compute instance](how-to-access-terminal.md) and run <br/> `conda list -n azureml_py310_sdkv2 ^azure` |
|Other PyPI packages|`jupytext`</br>`tensorboard`</br>`nbconvert`</br>`notebook`</br>`Pillow`| |Conda packages|`cython`</br>`numpy`</br>`ipykernel`</br>`scikit-learn`</br>`matplotlib`</br>`tqdm`</br>`joblib`</br>`nodejs`| |Deep learning packages|`PyTorch`</br>`TensorFlow`</br>`Keras`</br>`Horovod`</br>`MLFlow`</br>`pandas-ml`</br>`scrapbook`| |ONNX packages|`keras2onnx`</br>`onnx`</br>`onnxconverter-common`</br>`skl2onnx`</br>`onnxmltools`| |Azure Machine Learning Python samples||
-Python packages are all installed in the **Python 3.8 - AzureML** environment. Compute instance has Ubuntu 20.04 as the base OS.
+The compute instance has Ubuntu as the base OS.
## Accessing files Notebooks and Python scripts are stored in the default storage account of your workspace in Azure file share. These files are located under your "User files" directory. This storage makes it easy to share notebooks between compute instances. The storage account also keeps your notebooks safely preserved when you stop or delete a compute instance.
-The Azure file share account of your workspace is mounted as a drive on the compute instance. This drive is the default working directory for Jupyter, Jupyter Labs, RStudio, and Posit Workbench. This means that the notebooks and other files you create in Jupyter, JupyterLab, RStudio, or Posit are automatically stored on the file share and available to use in other compute instances as well.
+The Azure file share account of your workspace is mounted as a drive on the compute instance. This drive is the default working directory for Jupyter, Jupyter Labs, RStudio, and Posit Workbench. This means that the notebooks and other files you create in Jupyter, JupyterLab, VS Code for Web, RStudio, or Posit are automatically stored on the file share and available to use in other compute instances as well.
The files in the file share are accessible from all compute instances in the same workspace. Any changes to these files on the compute instance will be reliably persisted back to the file share.
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
You can configure your cloud deployment using YAML. Take a look at the sample YA
__tfserving-endpoint.yml__
+```yml
+$schema: https://azuremlsdk2.blob.core.windows.net/latest/managedOnlineEndpoint.schema.json
+name: tfserving-endpoint
+auth_mode: aml_token
+```
__tfserving-deployment.yml__
+```yml
+$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
+name: tfserving-deployment
+endpoint_name: tfserving-endpoint
+model:
+ name: tfserving-mounted
+ version: {{MODEL_VERSION}}
+ path: ./half_plus_two
+environment_variables:
+ MODEL_BASE_PATH: /var/azureml-app/azureml-models/tfserving-mounted/{{MODEL_VERSION}}
+ MODEL_NAME: half_plus_two
+environment:
+ #name: tfserving
+ #version: 1
+ image: docker.io/tensorflow/serving:latest
+ inference_config:
+ liveness_route:
+ port: 8501
+ path: /v1/models/half_plus_two
+ readiness_route:
+ port: 8501
+ path: /v1/models/half_plus_two
+ scoring_route:
+ port: 8501
+ path: /v1/models/half_plus_two:predict
+instance_type: Standard_DS3_v2
+instance_count: 1
+```
+ # [Python SDK](#tab/python)
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
Title: Run Jupyter notebooks in your workspace
description: Learn how run a Jupyter notebook without leaving your workspace in Azure Machine Learning studio. --++ Previously updated : 02/28/2022 Last updated : 09/26/2023 #Customer intent: As a data scientist, I want to run Jupyter notebooks in my workspace in Azure Machine Learning studio. # Run Jupyter notebooks in your workspace
-Learn how to run your Jupyter notebooks directly in your workspace in Azure Machine Learning studio. While you can launch [Jupyter](https://jupyter.org/) or [JupyterLab](https://jupyterlab.readthedocs.io), you can also edit and run your notebooks without leaving the workspace.
+This article shows how to run your Jupyter notebooks inside your workspace of Azure Machine Learning studio. There are other ways to run the notebook as well: [Jupyter](https://jupyter.org/), [JupyterLab](https://jupyterlab.readthedocs.io), and [Visual Studio Code](./how-to-launch-vs-code-remote.md). VS Code Desktop can be configured to access your compute instance. Or use VS Code for the Web, directly from the browser, and without any required installations or dependencies.
-For information on how to create and manage files, including notebooks, see [Create and manage files in your workspace](how-to-manage-files.md).
+We recommend you try VS Code for the Web to take advantage of the easy integration and rich development environment it provides. VS Code for the Web gives you many of the features of VS Code Desktop that you love, including search and syntax highlighting while browsing and editing. For more information about using VS Code Desktop and VS Code for the Web, see [Launch Visual Studio Code integrated with Azure Machine Learning (preview)](how-to-launch-vs-code-remote.md) and [Work in VS Code remotely connected to a compute instance (preview)](how-to-work-in-vs-code-remote.md).
+
+No matter which solution you use to run the notebook, you'll have access to all the files from your workspace. For information on how to create and manage files, including notebooks, see [Create and manage files in your workspace](how-to-manage-files.md).
+
+The rest of this article shows the experience for running the notebook directly in studio.
> [!IMPORTANT] > Features marked as (preview) are provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
machine-learning Compute Idleshutdown Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/compute-idleshutdown-bicep.md
Last updated 08/04/2023-+ # Known issue - Idleshutdown property in Bicep template causes error
machine-learning How To Deploy To Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-to-code.md
description: Learn how to deploy your flow to a managed online endpoint or Kuber
+
az ml online-endpoint invoke --name basic-chat-endpoint --request-file endpoints
- Learn more about [managed online endpoint schema](../reference-yaml-endpoint-online.md) and [managed online deployment schema](../reference-yaml-deployment-managed-online.md). - Learn more about how to [troubleshoot managed online endpoints](../how-to-troubleshoot-online-endpoints.md).-- Once you improve your flow, and would like to deploy the improved version with safe rollout strategy, see [Safe rollout for online endpoints](../how-to-safely-rollout-online-endpoints.md).
+- Once you improve your flow, and would like to deploy the improved version with safe rollout strategy, see [Safe rollout for online endpoints](../how-to-safely-rollout-online-endpoints.md).
machine-learning How To Enable Streaming Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-enable-streaming-mode.md
description: Learn how use streaming when you consume the endpoints in Azure Mac
+
data: {"answer": ""}
## Next steps - Learn more about how to [troubleshoot managed online endpoints](../how-to-troubleshoot-online-endpoints.md).-- Once you improve your flow, and would like to deploy the improved version with safe rollout strategy, you can refer to [Safe rollout for online endpoints](../how-to-safely-rollout-online-endpoints.md).
+- Once you improve your flow, and would like to deploy the improved version with safe rollout strategy, you can refer to [Safe rollout for online endpoints](../how-to-safely-rollout-online-endpoints.md).
machine-learning Tutorial Cloud Workstation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-cloud-workstation.md
Previously updated : 09/26/2023 Last updated : 09/27/2023 #Customer intent: As a data scientist, I want to know how to prototype and develop machine learning models on a cloud workstation.
You now have a new kernel. Next you'll open a notebook and use this kernel.
:::image type="content" source="media/tutorial-azure-ml-in-a-day/start-compute.png" alt-text="Screenshot shows how to start compute if it's stopped." lightbox="media/tutorial-azure-ml-in-a-day/start-compute.png":::
-1. You'll see the notebook is connected to the default kernel in the top right. Switch to use the **Tutorial Workstation Env** kernel.
+1. You'll see the notebook is connected to the default kernel in the top right. Switch to use the **Tutorial Workstation Env** kernel if you created the kernel.
## Develop a training script
For now, you're running this code on your compute instance, which is your Azure
conda env list ```
-1. Activate your kernel:
+1. If you created a new kernel, activate it now:
```bash conda activate workstation_env
migrate How To Discover Sql Existing Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-sql-existing-project.md
ms. Previously updated : 04/13/2023 Last updated : 09/27/2023
This discovery process is agentless that is, nothing is installed on the target
3. Once the desired credentials are added, select Start Discovery, to begin the scan. > [!Note]
-> Allow web apps and SQL discovery to run for sometime before creating assessments for Azure App Service or Azure SQL. If the discovery of web apps and SQL Server instances and databases is not allowed to complete, the respective instances are marked as **Unknown** in the assessment report.
+> - Allow web apps and SQL discovery to run for some time before creating assessments for Azure App Service or Azure SQL. If the discovery of web apps and SQL Server instances and databases is not allowed to complete, the respective instances are marked as **Unknown** in the assessment report.
+> - In a project containing multiple appliances, it's possible the Web app discovery and assessment agent of one appliance ends up discovering a web app running on a server discovered by another appliance. This doesn't impede the discovery or assessment experience of the web app.
## Next steps
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-read-replicas-portal.md
In this article, you learn how to create and manage read replicas in the Azure D
## Create a read replica
-> [!IMPORTANT]
-> When you create a replica for a source that has no existing replicas, the source first restarts to prepare itself for replication. Take this into consideration and perform these operations during an off-peak period.
A read replica server can be created using the following steps:
To delete a source server from the Azure portal, use the following steps:
- Learn more about [read replicas](concepts-read-replicas.md) - You can also monitor the replication latency by following the steps mentioned [here](../how-to-troubleshoot-replication-latency.md). - To troubleshoot high replication latency observed in Metrics, visit the [link](../how-to-troubleshoot-replication-latency.md#common-scenarios-for-high-replication-latency).+
mysql Quickstart Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-bicep.md
param firewallRules array = [
@description('The tier of the particular SKU. High Availability is available only for GeneralPurpose and MemoryOptimized sku.') @allowed([ 'Burstable'
- 'Generalpurpose'
+ 'GeneralPurpose'
'MemoryOptimized' ]) param serverEdition string = 'Burstable'
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
## September 2023
+- **Flexible Maintenance for Azure Database for MySQL - Flexible Server (Public Preview)**
+Flexible Maintenance for Azure Database for MySQL - Flexible Server enables a tailored maintenance schedule to suit your operational rhythm. This feature allows you to reschedule maintenance tasks within a maximum 14-day window and initiate on-demand maintenance, granting you unprecedented control over server upkeep timing. Stay tuned for more customizable experiences in the future. [Learn more](concepts-maintenance.md).
+ - **Universal Cross Region Read Replica on Azure Database for MySQL- Flexible Server (General Availability)** Azure Database for MySQL - Flexible server now supports Universal Read Replicas in Public regions. The feature allows you to replicate your data from an instance of Azure Database for MySQL Flexible Server to a read-only server in Universal region which could be any region from the list of Azure supported region where flexible server is available. [Learn more](concepts-read-replicas.md)
mysql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-reserved-pricing.md
Last updated 06/20/2022
Azure Database for MySQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MySQL reserved instances, you make an upfront commitment on MySQL server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for MySQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. </br> ## How does the instance reservation work?
-You do not need to assign the reservation to specific Azure Database for MySQL servers. An already running Azure Database for MySQL or ones that are newly deployed, will automatically get the benefit of reserved pricing. By purchasing a reservation, you are pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure database for MySQL compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation does not cover software, networking, or storage charges associated with the MySQL Database server. At the end of the reservation term, the billing benefit expires, and the Azure Database for MySQL are billed at the pay-as-you go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for MySQL reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/). </br>
+You don't need to assign the reservation to specific Azure Database for MySQL servers. An already running Azure Database for MySQL server, or one that's newly deployed, automatically gets the benefit of reserved pricing. By purchasing a reservation, you're pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for MySQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation doesn't cover software, networking, or storage charges associated with the MySQL Database server. At the end of the reservation term, the billing benefit expires, and Azure Database for MySQL servers are billed at the pay-as-you-go price. Reservations don't auto-renew. For pricing information, see the [Azure Database for MySQL reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/). </br>
You can buy Azure Database for MySQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
You may save up to 67% on compute costs with reserved instances. In order to fin
The size of reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed server within a specific region and using the same performance tier and hardware generation.</br>
-For example, let's suppose that you are running one general purpose, Gen5 ΓÇô 32 vCore MySQL database, and two memory optimized, Gen5 ΓÇô 16 vCore MySQL databases. Further, let's supposed that you plan to deploy within the next month an additional general purpose, Gen5 ΓÇô 32 vCore database server, and one memory optimized, Gen5 ΓÇô 16 vCore database server. Let's suppose that you know that you will need these resources for at least 1 year. In this case, you should purchase a 64 (2x32) vCores, 1 year reservation for single database general purpose - Gen5 and a 48 (2x16 + 16) vCore 1 year reservation for single database memory optimized - Gen5
+For example, let's suppose that you're running one general purpose, Gen5 – 32 vCore MySQL database, and two memory optimized, Gen5 – 16 vCore MySQL databases. Further, let's suppose that you plan to deploy within the next month an additional general purpose, Gen5 – 32 vCore database server, and one memory optimized, Gen5 – 16 vCore database server. Let's suppose that you know that you'll need these resources for at least 1 year. In this case, you should purchase a 64 (2x32) vCore, 1-year reservation for single database general purpose - Gen5 and a 48 (2x16 + 16) vCore, 1-year reservation for single database memory optimized - Gen5.
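For readers who want to sanity-check the sizing, here's a short sketch of the arithmetic behind the example above; the vCore counts are taken directly from that scenario.

```bash
# Reservation sizing arithmetic from the example above.
gp_existing=32            # one existing general purpose Gen5 - 32 vCore server
gp_planned=32             # one planned general purpose Gen5 - 32 vCore server
mo_existing=$((16 + 16))  # two existing memory optimized Gen5 - 16 vCore servers
mo_planned=16             # one planned memory optimized Gen5 - 16 vCore server

echo "General Purpose Gen5 reservation: $((gp_existing + gp_planned)) vCores"   # 64
echo "Memory Optimized Gen5 reservation: $((mo_existing + mo_planned)) vCores"  # 48
```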
## Buy Azure Database for MySQL reserved capacity
The following table describes required fields.
| Region | The Azure region that's covered by the Azure Database for MySQL reserved capacity reservation. | Deployment Type | The Azure Database for MySQL resource type that you want to buy the reservation for. | Performance Tier | The service tier for the Azure Database for MySQL servers.
-| Term | One year
-| Quantity | The amount of compute resources being purchased within the Azure Database for MySQL reserved capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount. For example, if you are running or planning to run an Azure Database for MySQL servers with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify quantity as 16 to maximize the benefit for all servers.
+| Term | One year or three years
+| Quantity | The amount of compute resources being purchased within the Azure Database for MySQL reserved capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount. For example, if you're running or planning to run Azure Database for MySQL servers with a total compute capacity of Gen5 16 vCores in the East US region, then you would specify the quantity as 16 to maximize the benefit for all servers.
## Reserved instances API support
To learn more about Azure Reservations, see the following articles:
* [Understand Azure Reservations discount](../../cost-management-billing/reservations/understand-reservation-charges.md) * [Understand reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reservation-charges-mysql.md) * [Understand reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
-* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
+* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
nat-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-overview.md
A NAT gateway doesn't affect the network bandwidth of your compute resources. Le
### Traffic routes
-* NAT gateway replaces a subnetΓÇÖs default route to the internet when configured. All traffic within the 0.0.0.0/0 prefix has a next hop type to NAT gateway before connecting outbound to the internet.
+* NAT gateway replaces a subnet's [system default route](/azure/virtual-network/virtual-networks-udr-overview#default) to the internet when configured. When NAT gateway is attached to the subnet, all traffic within the 0.0.0.0/0 prefix will route to NAT gateway before connecting outbound to the internet.
-* You can override NAT gateway as a subnetΓÇÖs next hop to the internet with the creation of a custom user-defined route (UDR).
+* You can override NAT gateway as a subnet's system default route to the internet with the creation of a custom user-defined route (UDR) for 0.0.0.0/0 traffic, as shown in the sketch after this list.
-* Presence of custom UDRs for virtual appliances and ExpressRoute override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix).
+* The presence of UDRs for virtual appliances, VPN Gateway, or ExpressRoute that apply to a subnet's 0.0.0.0/0 traffic causes that traffic to route to these services instead of NAT gateway.
* Outbound connectivity follows this order of precedence among different routing and outbound connectivity methods:
-Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP address on a virtual machine >> Load balancer outbound rules >> default system route to the internet
+Virtual appliance UDR / VPN Gateway / ExpressRoute >> NAT gateway >> Instance-level public IP address on a virtual machine >> Load balancer outbound rules >> default system route to the internet
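The following is a minimal sketch of that override: a UDR for 0.0.0.0/0 pointing at a virtual appliance, associated with the subnet so it takes precedence over NAT gateway. The resource group, virtual network, subnet names, and next-hop IP address are placeholders.

```bash
# Create a route table with a 0.0.0.0/0 route that points at a virtual appliance,
# then associate it with the subnet so it overrides NAT gateway for internet-bound traffic.
az network route-table create \
  --resource-group rg-network-demo \
  --name rt-override-natgw

az network route-table route create \
  --resource-group rg-network-demo \
  --route-table-name rt-override-natgw \
  --name default-to-nva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

az network vnet subnet update \
  --resource-group rg-network-demo \
  --vnet-name vnet-demo \
  --name subnet-demo \
  --route-table rt-override-natgw
```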
### NAT gateway configurations
peering-service About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/about.md
Title: Azure Peering Service overview
-description: Learn about Azure Peering Service.
+description: Learn about Azure Peering Service concepts and features to optimize network connectivity with Microsoft's global network.
Previously updated : 07/23/2023 Last updated : 09/27/2023+
+#CustomerIntent: As an administrator, I want learn about Azure Peering Service so I can optimize the connectivity to Microsoft.
# Azure Peering Service overview
Service monitoring is offered to analyze user traffic and routing. The following
To onboard a Peering Service connection: -- Work with Internet Service provider (ISP) or Internet Exchange (IX) Partner to obtain a Peering Service to connect your network with the Microsoft network.
+- Work with an internet service provider (ISP) or Internet Exchange (IX) partner to obtain a Peering Service to connect your network with the Microsoft network.
- Ensure the [connectivity provider](location-partners.md) is partnered with Microsoft for Peering Service.
-## Next steps
+## FAQ
+
+For frequently asked questions about Peering Service, see [Azure Peering Service frequently asked questions (FAQ)](faq.yml).
+
+## Related content
-- To learn about Peering Service connections, see [Peering Service connections](connection.md).-- To learn about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md).-- To find a service provider partner, see [Peering Service partners and locations](location-partners.md). - To register Peering Service, see [Create, change, or delete a Peering Service connection using the Azure portal](azure-portal.md).-- To establish a Direct interconnect for Microsoft Azure Peering Service, see [Internet peering for Microsoft Azure Peering Services walkthrough](../../articles/internet-peering/walkthrough-direct-all.md)-- To establish a Direct interconnect for Communications Services, see [Internet peering for Communications Services walkthrough](../../articles/internet-peering/walkthrough-communications-services-partner.md)-- To establish a Direct interconnect for Exchange Router Server, see [Internet peering for Exchange Route Server walkthrough](../../articles/internet-peering/walkthrough-exchange-route-server-partner.md)
+- To learn about Peering Service connections, see [Peering Service connections](connection.md).
+- To find a service provider partner, see [Peering Service partners](location-partners.md).
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md
The following table provides information on the Peering Service connectivity par
| Mumbai | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) | | New York | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) | | San Jose | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) |
-| Santiago | [PIT Chile] (https://www.pitchile.cl/wp/maps/) |
+| Santiago | [PIT Chile](https://www.pitchile.cl/wp/maps/) |
| Seattle | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) | | Singapore | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) |
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
We recommend that you actively monitor the disk space that's in use and increase
Storage auto-grow can help ensure that your server always has enough storage capacity and doesn't become read-only. When you turn on storage auto-grow, the storage will automatically expand without affecting the workload. This feature is currently in preview.
-For servers that have less than 1 TiB of provisioned storage, the auto-grow feature activates when storage consumption reaches 80 percent. For servers that have 1 TB or more of storage, auto-grow activates at 90 percent consumption.
+For servers with more than 1 TiB of provisioned storage, the storage auto-grow mechanism activates when the available space falls below 10% of the total capacity or 64 GiB of free space, whichever of the two values is smaller. Conversely, for servers with storage under 1 TiB, this threshold is adjusted to 20% of the total capacity or 64 GiB, whichever of these values is smaller.
-For example, assume that you allocate 256 GiB of storage and turn on storage auto-grow. When the utilization reaches 80 percent (205 GB), the server's storage size automatically increases to the next available premium disk tier, which is 512 GiB. But if the disk size is 1 TiB or larger, the scaling threshold is set at 90 percent. In such cases, the scaling process begins when the utilization reaches 922 GiB, and the disk is resized to 2 TiB.
+As an illustration, take a server with a storage capacity of 2 TiB (greater than 1 TiB). In this case, the auto-grow threshold is 64 GiB, because 64 GiB is the smaller value when compared to 10% of 2 TiB, which is roughly 204.8 GiB. In contrast, for a server with a storage size of 128 GiB (less than 1 TiB), the auto-grow feature activates when there's only 25.6 GiB of space left. This activation is based on the 20% threshold of the total allocated storage (128 GiB), which is smaller than 64 GiB.
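To make the thresholds above concrete, here's a small sketch that computes the approximate free-space trigger point for a given provisioned size, using the rules described in the preceding paragraphs; the sample size is an assumption.

```bash
# Approximate auto-grow trigger point for a given provisioned size (GiB):
# min(10% of capacity, 64 GiB) for 1 TiB and above, min(20% of capacity, 64 GiB) otherwise.
provisioned_gib=2048   # example: a 2 TiB server

awk -v size="$provisioned_gib" 'BEGIN {
  pct = (size >= 1024) ? 0.10 : 0.20
  threshold = size * pct
  if (threshold > 64) threshold = 64
  printf "Auto-grow triggers when free space drops below about %.1f GiB\n", threshold
}'
```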
-Azure Database for PostgreSQL - Flexible Server uses [Azure managed disks](/azure/virtual-machines/disks-types). The default behavior is to increase the disk size to the next premium tier. This increase is always double in both size and cost, regardless of whether you start the storage scaling operation manually or through storage auto-grow. Enabling storage auto-grow is valuable when you're managing unpredictable workloads, because it automatically detects low-storage conditions and scales up the storage accordingly.
-The process of scaling storage is performed online without causing any downtime, except when the disk is provisioned at 4,096 GiB. This exception is a limitation of Azure managed disks. If a disk is already 4,096 GiB, the storage scaling activity won't be triggered, even if storage auto-grow is turned on. In such cases, you need to manually scale your storage. Manual scaling is an offline operation that you should plan according to your business requirements.
+
+Azure Database for PostgreSQL - Flexible Server uses [Azure managed disks](/azure/virtual-machines/disks-types). The default behavior is to increase the disk size to the next premium tier. This increase is always double in both size and cost, regardless of whether you start the storage scaling operation manually or through storage auto-grow. Enabling storage auto-grow is valuable when you're managing unpredictable workloads because it automatically detects low-storage conditions and scales up the storage accordingly.
+
+The process of scaling storage is performed online without causing any downtime, except when the disk is provisioned at 4,096 GiB. This exception is a limitation of Azure managed disks. If a disk is already 4,096 GiB, the storage scaling activity won't be triggered, even if storage auto-grow is turned on. In such cases, you need to manually scale your storage. Manual scaling is an offline operation that you should plan according to your business requirements.
Remember that storage can only be scaled up, not down.
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md
After you create a private DNS zone in Azure, you'll need to [link](../../dns/pr
> [!IMPORTANT] > We no longer validate virtual network link presence on server creation for Azure Database for PostgreSQL - Flexible Server with private networking. When creating server through the Portal we provide customer choice to create link on server creation via checkbox *"Link Private DNS Zone your virtual network"* in the Azure Portal.
+[DNS private zones are resilient](../../dns/private-dns-overview.md) to regional outages because zone data is globally available. Resource records in a private zone are automatically replicated across regions. Azure Private DNS is a foundational, zone-redundant service. For more information, see [Azure services with availability zone support](../../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support).
+ ### Integration with a custom DNS server If you're using a custom DNS server, you must use a DNS forwarder to resolve the FQDN of Azure Database for PostgreSQL - Flexible Server. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
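After pointing your custom DNS server at the forwarder, you can confirm that the Azure-provided resolver at 168.63.129.16 answers for the server's FQDN. The server name below is a placeholder.

```bash
# Query the Azure-provided DNS IP directly for the flexible server FQDN.
# Replace myflexserver with your own server name.
nslookup myflexserver.postgres.database.azure.com 168.63.129.16
```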
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
Previously updated : 10/21/2022 Last updated : 9/26/2023 # Read replicas in Azure Database for PostgreSQL - Flexible Server
It is essential to monitor storage usage and replication lag closely, and take n
### Server parameters
-You are free to change server parameters on your read replica server and set different values than on the primary server. The only exception are parameters that might affect recovery of the replica, mentioned also in the "Scaling" section below: max_connections, max_prepared_transactions, max_locks_per_transaction, max_wal_senders, max_worker_processes. Please ensure these parameters are always [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN) to ensure that the replica does not run out of shared memory during recovery.
+When a read replica is created, it inherits the server parameters from the primary server. This ensures a consistent and reliable starting point. However, any changes to the server parameters on the primary server made after the read replica is created aren't automatically replicated. This behavior offers the advantage of individually tuning the read replica, such as enhancing its performance for read-intensive operations, without modifying the primary server's parameters. While this provides flexibility and customization options, it also necessitates careful and manual management to maintain consistency between the primary and its replica when uniformity of server parameters is required.
+
+Administrators can change server parameters on the read replica server and set different values than on the primary server. The only exceptions are parameters that might affect recovery of the replica, also mentioned in the "Scaling" section below: max_connections, max_prepared_transactions, max_locks_per_transaction, max_wal_senders, max_worker_processes. To ensure the read replica's recovery is seamless and it doesn't encounter shared memory limitations, these particular parameters should always be set to values that are either equivalent to or [greater than those configured on the primary server](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN).
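As a quick way to compare these recovery-critical parameters between primary and replica, the following sketch queries both servers with `psql`. The host names and user are placeholders; it assumes the password is supplied through `PGPASSWORD` or an interactive prompt.

```bash
# Compare recovery-critical parameters on the primary and the replica.
# Replace the host names and user with your own values.
for PARAM in max_connections max_prepared_transactions max_locks_per_transaction max_wal_senders max_worker_processes; do
  PRIMARY=$(psql -At -c "SHOW $PARAM;" "host=primary.postgres.database.azure.com dbname=postgres user=azureuser sslmode=require")
  REPLICA=$(psql -At -c "SHOW $PARAM;" "host=replica.postgres.database.azure.com dbname=postgres user=azureuser sslmode=require")
  echo "$PARAM: primary=$PRIMARY replica=$REPLICA"
done
```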
### Scaling
reliability Reliability Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-image-builder.md
Last updated 08/22/2023
# Reliability in Azure Image Builder (AIB)
-This article describes reliability support in Azure Image Builder. Azure Image Builder doesn't currently support availability zones at this time, however it does support [cross-regional resiliency with disaster recovery](#disaster-recovery-cross-region-failover).
+This article contains [specific reliability recommendations for Image Builder](#reliability-recommendations) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity).
Azure Image Builder (AIB) is a regional service with a cluster that serves single regions. The AIB regional setup keeps data and resources within the regional boundary. The AIB service doesn't fail over its cluster and SQL database in region-down scenarios.
Azure Image Builder (AIB) is a regional service with a cluster that serves singl
For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
-## Disaster recovery: cross-region failover
+>[!NOTE]
+> Azure Image Builder doesn't support [availability zones](./availability-zones-overview.md).
-If a region-wide disaster occurs, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
+## Reliability recommendations
+
+
+### Reliability recommendations summary
++
+| Category | Priority | Recommendation |
+|--|--|--|
+| [**High Availability**](#high-availability) |:::image type="icon" source="media/icon-recommendation-low.svg":::| [Use generation 2 virtual machine source images](#-use-generation-2-virtual-machine-vm-source-images) |
+|[**Disaster Recovery**](#disaster-recovery)|:::image type="icon" source="media/icon-recommendation-low.svg"::: |[Replicate image templates to a secondary region](#-replicate-image-templates-to-a-secondary-region) |
++
+### High availability
+
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **Use generation 2 virtual machine (VM) source images**
+
+When building your image templates, use source images that support generation 2 VMs. Generation 2 VMs support key features that aren't supported in generation 1 VMs, such as:
+
+- Increased memory
+- Support for disks greater than 2 TB
+- New UEFI-based boot architecture, which can improve boot and installation times
+- Intel Software Guard Extensions (Intel SGX)
+- Virtualized persistent memory (vPMEM)
++
+For more information on generation 2 VM features and capabilities, see [Generation 2 VMs: Features and capabilities](/azure/virtual-machines/generation-2#features-and-capabilities).
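One way to check whether a candidate source image supports generation 2 VMs is to inspect its `hyperVGeneration` property with the Azure CLI; a value of `V2` indicates a generation 2 image. The location and URN below are examples only.

```bash
# Show the Hyper-V generation of a marketplace image (expect "V2" for generation 2).
# Replace the location and URN with the source image you intend to use.
az vm image show \
  --location westeurope \
  --urn Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest \
  --query hyperVGeneration \
  --output tsv
```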
+
+### Disaster recovery
+
+#### :::image type="icon" source="media/icon-recommendation-low.svg"::: **Replicate image templates to a secondary region**
+
+The Azure Image Builder service that's used to deploy Image Templates doesn't currently support availability zones. Therefore, when building your image templates, you should replicate them to a secondary region, preferably to your primary region's [paired region](./availability-zones-overview.md#paired-and-unpaired-regions). With a secondary region, you can quickly recover from a region failure and continue to deploy virtual machines from your image templates. For more information, see [Cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity).
++
+# [Azure Resource Graph](#tab/graph)
++
+-
+
+## Cross-region disaster recovery and business continuity
+ To ensure fast and easy recovery for Azure Image Builder (AIB), it's recommended that you run an image template in region pairs or multiple regions when designing your AIB solution. You should also replicate resources from the start when you're setting up your image templates.
-### Cross-region disaster recovery in multi-region geography
+### Multi-region geography disaster recovery
When a regional disaster occurs, Microsoft is responsible for outage detection, notifications, and support for AIB. However, you're responsible for setting up disaster recovery for the control (service side) and data planes.
In regards to your data processing information, refer to the Azure Image Builder
## Next steps -- [Reliability in Azure](../reliability/overview.md)
+- [Reliability in Azure](overview.md)
- [Enable Azure VM disaster recovery between availability zones](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md) - [Azure Image Builder overview](../virtual-machines//image-builder-overview.md)
role-based-access-control Delegate Role Assignments Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-examples.md
+ Last updated 09/20/2023 -- #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
na
Last updated 09/20/2023 -+ # Troubleshoot Azure RBAC
route-server Hub Routing Preference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/hub-routing-preference-powershell.md
Last updated 07/31/2023
# Configure routing preference to influence route selection using PowerShell
-Learn how to use [routing preference (preview)](routing-preference.md) setting in Azure Route Server to influence its route selection.
+Learn how to use [routing preference (preview)](hub-routing-preference.md) setting in Azure Route Server to influence its route selection.
> [!IMPORTANT] > Routing preference is currently in PREVIEW.
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
Title: What is Azure Route Server? description: Learn how Azure Route Server can simplify routing between your network virtual appliance (NVA) and your virtual network.- + Previously updated : 01/09/2023--
-#Customer intent: As an IT administrator, I want to learn about Azure Route Server and what I can use it for.
Last updated : 09/27/2023+
+#CustomerIntent: As an IT administrator, I want to learn about Azure Route Server and what I can use it for.
# What is Azure Route Server?
Azure Route Server simplifies dynamic routing between your network virtual appli
## How does it work?
-The following diagram illustrates how Azure Route Server works with an SDWAN NVA and a security NVA in a virtual network. Once youΓÇÖve established the BGP peering, Azure Route Server will receive an on-premises route (10.250.0.0/16) from the SDWAN appliance and a default route (0.0.0.0/0) from the firewall. These routes are then automatically configured on the VMs in the virtual network. As a result, all traffic destined to the on-premises network will be sent to the SDWAN appliance, while all Internet-bound traffic will be sent to the firewall. In the opposite direction, Azure Route Server will send the virtual network address (10.1.0.0/16) to both NVAs. The SDWAN appliance can propagate it further to the on-premises network.
+The following diagram illustrates how Azure Route Server works with an SDWAN NVA and a security NVA in a virtual network. Once you've established the BGP peering, Azure Route Server will receive an on-premises route (10.250.0.0/16) from the SDWAN appliance and a default route (0.0.0.0/0) from the firewall. These routes are then automatically configured on the VMs in the virtual network. As a result, all traffic destined to the on-premises network will be sent to the SDWAN appliance, while all Internet-bound traffic will be sent to the firewall. In the opposite direction, Azure Route Server will send the virtual network address (10.1.0.0/16) to both NVAs. The SDWAN appliance can propagate it further to the on-premises network.
:::image type="content" source="./media/overview/route-server-overview.png" alt-text="Diagram showing Azure Route Server configured in a virtual network.":::
For service level agreement details, see [SLA for Azure Route Server](https://az
For frequently asked questions about Azure Route Server, see [Azure Route Server FAQ](route-server-faq.md).
-## Next steps
+## Related content
-- [Learn how to configure Azure Route Server](quickstart-configure-route-server-powershell.md)-- [Learn how Azure Route Server works with Azure ExpressRoute and Azure VPN](expressroute-vpn-support.md)-- [Learn module: Introduction to Azure Route Server](/training/modules/intro-to-azure-route-server)
+- To learn how to create and configure Azure Route Server, see [Quickstart: Create and configure Route Server using the Azure portal](quickstart-configure-route-server-powershell.md).
+- To learn how Azure Route Server works with Azure ExpressRoute and Azure VPN, see [Azure Route Server support for ExpressRoute and Azure VPN](expressroute-vpn-support.md).
+- Training module: [Introduction to Azure Route Server](/training/modules/intro-to-azure-route-server).
+- Azure Architecture Center: [Update route tables by using Azure Route Server](/azure/architecture/example-scenario/networking/manage-routing-azure-route-server).
route-server Vmware Solution Default Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/vmware-solution-default-route.md
- Title: Injecting routes to Azure VMware Solution
-description: Learn about how to advertise routes to Azure VMware Solution with Azure Route Server.
---- Previously updated : 12/22/2022----
-# Injecting routes to Azure VMware Solution with Azure Route Server
-
-[Azure VMware Solution](../azure-vmware/introduction.md) is an Azure service where native VMware vSphere workloads run and communicate with other Azure services. This communication happens over ExpressRoute, and Azure Route Server can be used to modify the default behavior of Azure VMware Solution networking. The most frequent patterns for injecting routing information in Azure VMware Solution are either advertising a default route to attract Internet traffic to Azure, or advertising routes to achieve communications to on-premises networks when Global Reach is not available.
-
-Please refer to [Azure VMware Solution network design considerations](../azure-vmware/concepts-network-design-considerations.md) for additional information.
-
-## Next steps
-
-* [Learn how Azure Route Server works with ExpressRoute](expressroute-vpn-support.md)
-* [Learn how Azure Route Server works with a network virtual appliance](resource-manager-template-samples.md)
-
-[caf_avs_nw]: /azure/cloud-adoption-framework/scenarios/azure-vmware/eslz-network-topology-connectivity
sap Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-control-plane.md
Run the following command to create the deployer and the SAP library. The comman
# [Linux](#tab/linux) -
-Run the following command to deploy the control plane:
+Set the environment variables for the service principal:
```bash
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>" export ARM_CLIENT_SECRET="<password>" export ARM_TENANT_ID="<tenantId>"+
+```
+
+Run the following command to deploy the control plane:
+
+```bash
+ export env_code="MGMT" export region_code="WEEU" export vnet_code="DEP00"
library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-S
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ --deployer_parameter_file "${deployer_parameter_file}" \
- --library_parameter_file "{library_parameter_file}" \
+ --library_parameter_file "${library_parameter_file}" \
--subscription "${ARM_SUBSCRIPTION_ID}" \ --spn_id "${ARM_CLIENT_ID}" \ --spn_secret "${ARM_CLIENT_SECRET}" \
Rerun the control plane deployment to enable private endpoints for the storage a
```bash
-export ARM_SUBSCRIPTION_ID="<subscriptionId>"
-export ARM_CLIENT_ID="<appId>"
-export ARM_CLIENT_SECRET="<password>"
-export ARM_TENANT_ID="<tenantId>"
+ export env_code="MGMT" export region_code="WEEU" export vnet_code="DEP00"
library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-S
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ --deployer_parameter_file "${deployer_parameter_file}" \
- --library_parameter_file "{library_parameter_file}" \
+ --library_parameter_file "${library_parameter_file}" \
--subscription "${ARM_SUBSCRIPTION_ID}" \ --spn_id "${ARM_CLIENT_ID}" \ --spn_secret "${ARM_CLIENT_SECRET}" \
sap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md
If you don't assign the User Access Administrator role to the service principal,
# enable_firewall_for_keyvaults_and_storage defines that the storage accounts and key vaults have firewall enabled enable_firewall_for_keyvaults_and_storage = false
+ # public_network_access_enabled controls if storage account and key vaults have public network access enabled
+ public_network_access_enabled = true
++ ``` Note the Terraform variable file locations for future edits during deployment.
For example, choose **North Europe** as the deployment location, with the four-c
The sample SAP library configuration file `MGMT-NOEU-SAP_LIBRARY.tfvars` is in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/LIBRARY/MGMT-NOEU-SAP_LIBRARY` folder.
+Set the environment variables for the service principal:
+
+```bash
+
+export ARM_SUBSCRIPTION_ID="<subscriptionId>"
+export ARM_CLIENT_ID="<appId>"
+export ARM_CLIENT_SECRET="<password>"
+export ARM_TENANT_ID="<tenantId>"
+
+```
+ 1. Create the deployer and the SAP library. Add the service principal details to the deployment key vault. ```bash
- export ARM_SUBSCRIPTION_ID="<subscriptionId>"
- export ARM_CLIENT_ID="<appID>"
- export ARM_CLIENT_SECRET="<password>"
- export ARM_TENANT_ID="<tenant>"
export env_code="MGMT" export vnet_code="DEP00" export region_code="<region_code>"
The sample SAP library configuration file `MGMT-NOEU-SAP_LIBRARY.tfvars` is in t
cd $CONFIG_REPO_PATH
- ${DEPLOYMENT_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
- --deployer_parameter_file DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars \
- --library_parameter_file LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars \
- --subscription "${ARM_SUBSCRIPTION_ID}" \
- --spn_id "${ARM_CLIENT_ID}" \
- --spn_secret "${ARM_CLIENT_SECRET}" \
- --tenant_id "${ARM_TENANT_ID}" \
- --auto-approve
+ deployer_parameter_file="${CONFIG_REPO_PATH}/DEPLOYER/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
+ library_parameter_file="${CONFIG_REPO_PATH}/LIBRARY/${env_code}-${region_code}-SAP_LIBRARY/${env_code}-${region_code}-SAP_LIBRARY.tfvars"
+
+ ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
+ --deployer_parameter_file "${deployer_parameter_file}" \
+ --library_parameter_file "${library_parameter_file}" \
+ --subscription "${ARM_SUBSCRIPTION_ID}" \
+ --spn_id "${ARM_CLIENT_ID}" \
+ --spn_secret "${ARM_CLIENT_SECRET}" \
+ --tenant_id "${ARM_TENANT_ID}"
+ ``` If you run into authentication issues, run `az logout` to sign out and clear the `token-cache`. Then run `az login` to reauthenticate.
sap Start Stop Sap Systems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/start-stop-sap-systems.md
Through the Azure portal, you can start and stop:
- Entire SAP Application tier in one go, which include ABAP SAP Central Services (ASCS) and Application Server instances. - Individual SAP instances, which include Central Services and Application server instances. - HANA Database-- You can start and stop instances in the following types of deployments:
+- You can start and stop instances and the HANA database in the following types of deployments:
- Single-Server - High Availability (HA) - Distributed Non-HA
The following scenarios are supported when Starting and Stopping SAP systems:
- Stopping and starting an SAP system or individual instances from the VIS resource only stops or starts the SAP application. The underlying VMs are **not** stopped or started. - Stopping a highly available SAP system from the VIS resource gracefully stops the SAP instances in the right order and doesn't result in a failover of the Central Services instance. - Stopping the HANA database from the VIS resource results in the entire HANA instance being stopped. In case of HANA MDC with multiple tenant DBs, the entire instance is stopped and not a specific tenant DB.
+- For highly available (HA) HANA databases, start and stop operations through the Virtual Instance for SAP solutions resource are supported only when a cluster management solution is in place. HANA database high availability configurations without a cluster aren't currently supported for start and stop operations through the Virtual Instance for SAP solutions resource.
## Stop SAP system
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
Previously updated : 08/24/2023 Last updated : 09/26/2023
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- September 26, 2023: Change in [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to add instructions for deploying /hana/shared (only) on NFS on Azure Files
- September 12, 2023: Adding support to handle Azure scheduled events for [Pacemaker clusters running on RHEL](./high-availability-guide-rhel-pacemaker.md). - August 24, 2023: Support of priority-fencing-delay cluster property on two-node pacemaker cluster to address split-brain situation in RHEL is updated on [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md), [High availability of SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md), [High availability of SAP HANA Scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md), and [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md) documents. - August 03, 2023: Change of recommendation to use a /25 IP range for delegated subnet for ANF for SAP workload [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)
sap Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
vm-windows Previously updated : 07/11/2023 Last updated : 09/26/2023
[sap-hana-ha]:sap-hana-high-availability.md [nfs-ha]:high-availability-guide-suse-nfs.md
-This article describes how to deploy a highly available SAP HANA system in a scale-out configuration. Specifically, the configuration uses HANA system replication (HSR) and Pacemaker on Azure Red Hat Enterprise Linux virtual machines (VMs). [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) provides the shared file systems in the presented architecture, and these file systems are mounted over Network File System (NFS).
+This article describes how to deploy a highly available SAP HANA system in a scale-out configuration. Specifically, the configuration uses HANA system replication (HSR) and Pacemaker on Azure Red Hat Enterprise Linux virtual machines (VMs). The shared file systems in the presented architecture are NFS mounted and are provided by [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) or [NFS share on Azure Files](../../storage/files/files-nfs-protocol.md).
-In the example configurations and installation commands, the HANA instance is `03` and the HANA system ID is `HN1`. The examples are based on HANA 2.0 SP4 and Red Hat Enterprise Linux (RHEL) for SAP 7.6.
+In the example configurations and installation commands, the HANA instance is `03` and the HANA system ID is `HN1`.
## Prerequisites
Some readers will benefit from consulting a variety of SAP notes and resources b
* [Red Hat Enterprise Linux Solution for SAP HANA scale-out and system replication](https://access.redhat.com/solutions/4386601). * [Azure NetApp Files documentation][anf-azure-doc]. * [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md).
+* [Azure Files documentation](../../storage/files/storage-files-introduction.md)
## Overview
To achieve HANA high availability for HANA scale-out installations, you can conf
In the following diagram, there are three HANA nodes on each site, and a majority maker node to prevent a "split-brain" scenario. The instructions can be adapted to include more VMs as HANA DB nodes.
-[Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) provides the HANA shared file system, `/hana/shared`. It's mounted via NFS v4.1 on each HANA node in the same HANA system replication site. File systems `/hana/data` and `/hana/log` are local file systems, and aren't shared among the HANA DB nodes. SAP HANA will be installed in non-shared mode.
+The HANA shared file system `/hana/shared` is NFS mounted on each HANA node in the same HANA system replication site. File systems `/hana/data` and `/hana/log` are local file systems and aren't shared between the HANA DB nodes. SAP HANA will be installed in non-shared mode.
For recommended SAP HANA storage configurations, see [SAP HANA Azure VMs storage configurations](./hana-vm-operations-storage.md).
The preceding diagram shows three subnets represented within one Azure virtual n
Because `/hana/data` and `/hana/log` are deployed on local disks, it isn't necessary to deploy separate subnet and separate virtual network cards for communication to the storage.
-The Azure NetApp volumes are deployed in a separate subnet, [delegated to Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-delegate-subnet.md): `anf` 10.23.1.0/26.
+If you're using Azure NetApp Files, the NFS volumes for `/hana/shared` are deployed in a separate subnet, [delegated to Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-delegate-subnet.md): `anf` 10.23.1.0/26.
## Set up the infrastructure
When you're using the standard load balancer, you should be aware of the followi
> [!IMPORTANT] > Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the parameter `net.ipv4.tcp_timestamps` to `0`. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) and SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-### Deploy the Azure NetApp Files infrastructure
+### Deploy NFS
+
+There are two options for deploying Azure native NFS for `/hana/shared`: NFS on Azure NetApp Files and NFS shares on Azure Files. Azure Files supports the NFSv4.1 protocol; NFS on Azure NetApp Files supports both NFSv4.1 and NFSv3.
+
+The next sections describe the steps to deploy NFS - you'll need to select only *one* of the options.
+
+> [!TIP]
+> You chose to deploy `/hana/shared` on either NFS on Azure NetApp Files or NFS on Azure Files; follow the option that matches your choice in the rest of this article.
+
+#### Deploy the Azure NetApp Files infrastructure
Deploy the Azure NetApp Files volumes for the `/hana/shared` file system (see [Set up the Azure NetApp Files infrastructure](#set-up-the-azure-netapp-files-infrastructure)).
In this example, you use the following Azure NetApp Files volumes:
* volume **HN1**-shared-s1 (nfs://10.23.1.7/**HN1**-shared-s1) * volume **HN1**-shared-s2 (nfs://10.23.1.7/**HN1**-shared-s2)
+#### Deploy the NFS on Azure Files infrastructure
+
+Deploy Azure Files NFS shares for the `/hana/shared` file system. For more information about NFS shares on Azure Files, see the [Azure Files documentation](../../storage/files/storage-files-introduction.md).
+
+In this example, the following Azure Files NFS shares were used:
+
+* share **hn1**-shared-s1 (sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1)
+* share **hn1**-shared-s2 (sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2)
+ ## Operating system configuration and preparation The instructions in the next sections are prefixed with one of the following abbreviations:
Configure and prepare your operating system by doing the following:
10.23.1.207 hana-s2-db3-hsr ```
-1. **[A]** Prepare the operating system for running SAP HANA. For more information, see SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the Azure NetApp Files configuration settings.
- <pre><code>
- vi /etc/sysctl.d/91-NetApp-HANA.conf
- # Add the following entries in the configuration file
- net.core.rmem_max = 16777216
- net.core.wmem_max = 16777216
- net.ipv4.tcp_rmem = 4096 131072 16777216
- net.ipv4.tcp_wmem = 4096 16384 16777216
- net.core.netdev_max_backlog = 300000
- net.ipv4.tcp_slow_start_after_idle=0
- net.ipv4.tcp_no_metrics_save = 1
- net.ipv4.tcp_moderate_rcvbuf = 1
- net.ipv4.tcp_window_scaling = 1
- net.ipv4.tcp_sack = 1
- </code></pre>
-
-1. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with additional optimization settings.
+1. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with Microsoft for Azure configuration settings.
<pre><code> vi /etc/sysctl.d/ms-az.conf
Configure and prepare your operating system by doing the following:
> [!TIP] > Avoid setting `net.ipv4.ip_local_port_range` and `net.ipv4.ip_local_reserved_ports` explicitly in the `sysctl` configuration files, to allow the SAP host agent to manage the port ranges. For more details, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-1. **[A]** Adjust the `sunrpc` settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
-
- <pre><code>
- vi /etc/modprobe.d/sunrpc.conf
- # Insert the following line
- options sunrpc tcp_max_slot_table_entries=128
- </code></pre>
1. **[A]** Install the NFS client package.
Configure and prepare your operating system by doing the following:
## Prepare the file systems
-The following sections provide steps for the preparation of your file systems.
+The following sections provide steps for the preparation of your file systems. You chose to deploy `/hana/shared` on either NFS on Azure NetApp Files or NFS on Azure Files; follow the mounting steps that match your choice.
+
+### Mount the shared file systems (Azure NetApp Files NFS)
-### Mount the shared file systems
+In this example, the shared HANA file systems are deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section, only if you're using NFS on Azure NetApp Files.
-In this example, the shared HANA file systems are deployed on Azure NetApp Files and mounted over NFS v4.
+1. **[AH]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
+
+ ```bash
+ vi /etc/sysctl.d/91-NetApp-HANA.conf
+
+ # Add the following entries in the configuration file
+ net.core.rmem_max = 16777216
+ net.core.wmem_max = 16777216
+ net.ipv4.tcp_rmem = 4096 131072 16777216
+ net.ipv4.tcp_wmem = 4096 16384 16777216
+ net.core.netdev_max_backlog = 300000
+ net.ipv4.tcp_slow_start_after_idle=0
+ net.ipv4.tcp_no_metrics_save = 1
+ net.ipv4.tcp_moderate_rcvbuf = 1
+ net.ipv4.tcp_window_scaling = 1
+ net.ipv4.tcp_sack = 1
+ ```
+
+2. **[AH]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
+
+ ```bash
+ vi /etc/modprobe.d/sunrpc.conf
+
+ # Insert the following line
+ options sunrpc tcp_max_slot_table_entries=128
+ ```
1. **[AH]** Create mount points for the HANA database volumes.
In this example, the shared HANA file systems are deployed on Azure NetApp Files
Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.14,local_lock=none,addr=10.23.1.7 ```
+### Mount the shared file systems (Azure Files NFS)
+
+In this example, the shared HANA file systems are deployed on NFS on Azure Files. Follow the steps in this section, only if you're using NFS on Azure Files.
+
+1. **[AH]** Create mount points for the HANA database volumes.
+
+ ```bash
+ mkdir -p /hana/shared
+ ```
+
+2. **[AH1]** Mount the shared Azure Files NFS shares on the SITE1 HANA DB VMs.
+
+ ```bash
+ sudo vi /etc/fstab
+ # Add the following entry
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 /hana/shared nfs nfsvers=4.1,sec=sys 0 0
+ # Mount all volumes
+ sudo mount -a
+ ```
+
+3. **[AH2]** Mount the shared Azure Files NFS shares on the SITE2 HANA DB VMs.
+
+ ```bash
+ sudo vi /etc/fstab
+ # Add the following entries
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 /hana/shared nfs nfsvers=4.1,sec=sys 0 0
+ # Mount the volume
+ sudo mount -a
+ ```
+
+4. **[AH]** Verify that the corresponding `/hana/shared/` file systems are mounted on all HANA DB VMs with NFS protocol version **NFSv4.1**.
+
+ ```bash
+ sudo nfsstat -m
+ # Example from SITE 1, hana-s1-db1
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1
+ Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,addr=10.23.0.35
+ # Example from SITE 2, hana-s2-db1
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2
+ Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,addr=10.23.0.35
+ ```
+ ### Prepare the data and log local file systems In the presented configuration, you deploy file systems `/hana/data` and `/hana/log` on a managed disk, and you attach these file systems locally to each HANA DB VM. Run the following steps to create the local data and log volumes on each HANA DB virtual machine.
The following steps get you set up for system replication:
## Create a Pacemaker cluster
-To create a basic Pacemaker cluster for this HANA server, follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux in Azure](high-availability-guide-rhel-pacemaker.md). Include all virtual machines, including the majority maker in the cluster.
+To create a basic Pacemaker cluster, follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux in Azure](high-availability-guide-rhel-pacemaker.md). Include all virtual machines, including the majority maker in the cluster.
> [!IMPORTANT] > Don't set `quorum expected-votes` to 2. This isn't a two-node cluster. Make sure that the cluster property `concurrent-fencing` is enabled, so that node fencing is deserialized.
For the next part of this process, you need to create file system resources. Her
``` 1. **[1]** Create the file system cluster resources for `/hana/shared` in the disabled state. You use `--disabled` because you have to define the location constraints before the mounts are enabled.
+You chose to deploy `/hana/shared` on either NFS on Azure NetApp Files or NFS on Azure Files; create the file system resources that match your choice.
- ```bash
- # /hana/shared file system for site 1
- pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared \
- fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
- op start interval=0 timeout=120 op stop interval=0 timeout=120
-
- # /hana/shared file system for site 2
- pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared \
- fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
- op start interval=0 timeout=120 op stop interval=0 timeout=120
-
- # clone the /hana/shared file system resources for both site1 and site2
- pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
- pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true
- ```
+ - In this example, the '/hana/shared' file system is deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section, only if you're using NFS on Azure NetApp Files.
+
+ ```bash
+ # /hana/shared file system for site 1
+ pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s1 directory=/hana/shared \
+ fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
+ op start interval=0 timeout=120 op stop interval=0 timeout=120
+
+ # /hana/shared file system for site 2
+ pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem device=10.23.1.7:/HN1-shared-s2 directory=/hana/shared \
+ fstype=nfs options='defaults,rw,hard,timeo=600,rsize=262144,wsize=262144,proto=tcp,noatime,sec=sys,nfsvers=4.1,lock,_netdev' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
+ op start interval=0 timeout=120 op stop interval=0 timeout=120
+
+ # clone the /hana/shared file system resources for both site1 and site2
+ pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
+ pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true
+ ```
+ The suggested timeout values allow the cluster resources to withstand protocol-specific pauses related to NFSv4.1 lease renewals on Azure NetApp Files. For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf).
+
+ - In this example, the '/hana/shared' file system is deployed on NFS on Azure Files. Follow the steps in this section, only if you're using NFS on Azure Files.
+
+ ```bash
+ # /hana/shared file system for site 1
+ pcs resource create fs_hana_shared_s1 --disabled ocf:heartbeat:Filesystem device=sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 directory=/hana/shared \
+ fstype=nfs options='defaults,rw,hard,proto=tcp,noatime,nfsvers=4.1,lock' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
+ op start interval=0 timeout=120 op stop interval=0 timeout=120
+
+ # /hana/shared file system for site 2
+ pcs resource create fs_hana_shared_s2 --disabled ocf:heartbeat:Filesystem device=sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 directory=/hana/shared \
+ fstype=nfs options='defaults,rw,hard,proto=tcp,noatime,nfsvers=4.1,lock' op monitor interval=20s on-fail=fence timeout=120s OCF_CHECK_LEVEL=20 \
+ op start interval=0 timeout=120 op stop interval=0 timeout=120
+
+ # clone the /hana/shared file system resources for both site1 and site2
+ pcs resource clone fs_hana_shared_s1 meta clone-node-max=1 interleave=true
+ pcs resource clone fs_hana_shared_s2 meta clone-node-max=1 interleave=true
+ ```
+ The `OCF_CHECK_LEVEL=20` attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because when connectivity is lost, the file system might remain mounted, despite being inaccessible. The `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, then restart the failed resource, and then start all the resources that depend on the failed resource. Not only can this behavior take a long time when an SAP HANA resource depends on the failed resource, but it also can fail altogether. The SAP HANA resource can't stop successfully, if the NFS share holding the HANA binaries is inaccessible.
- The suggested timeouts values allow the cluster resources to withstand protocol-specific pause, related to NFSv4.1 lease renewals. For more information see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf). The timeouts in the above configuration may need to be adapted to the specific SAP setup.
+ The timeouts in the above configurations may need to be adapted to the specific SAP setup.
1. **[1]** Configure and verify the node attributes. All SAP HANA DB nodes on replication site 1 are assigned attribute `S1`, and all SAP HANA DB nodes on replication site 2 are assigned attribute `S2`.
For the next part of this process, you need to create file system resources. Her
pcs resource enable fs_hana_shared_s2 ```
- When you enable the file system resources, the cluster will mount the `/hana/shared` file systems.
+ When you enable the file system resources, the cluster will mount the `/hana/shared` file systems.
1. **[AH]** Verify that the `/hana/shared` file systems are mounted on all HANA DB VMs on both sites.
- ```bash
- sudo nfsstat -m
- # Verify that flag vers is set to 4.1
- # Example from SITE 1, hana-s1-db1
- /hana/shared from 10.23.1.7:/HN1-shared-s1
- Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.11,local_lock=none,addr=10.23.1.7
- # Example from SITE 2, hana-s2-db1
- /hana/shared from 10.23.1.7:/HN1-shared-s2
- Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.14,local_lock=none,addr=10.23.1.7
- ```
+ - Example output if you're using Azure NetApp Files:
+ ```bash
+ sudo nfsstat -m
+ # Verify that flag vers is set to 4.1
+ # Example from SITE 1, hana-s1-db1
+ /hana/shared from 10.23.1.7:/HN1-shared-s1
+ Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.11,local_lock=none,addr=10.23.1.7
+ # Example from SITE 2, hana-s2-db1
+ /hana/shared from 10.23.1.7:/HN1-shared-s2
+ Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.14,local_lock=none,addr=10.23.1.7
+ ```
+ - Example output if you're using NFS on Azure Files:
+
+ ```bash
+ sudo nfsstat -m
+ # Example from SITE 1, hana-s1-db1
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1
+ Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,addr=10.23.0.35
+ # Example from SITE 2, hana-s2-db1
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2
+ Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,addr=10.23.0.35
+ ```
1. **[1]** Configure and clone the attribute resources, and configure the constraints, as follows:
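
   The following is a hedged sketch of the attribute resources. The names `hana_nfs_s1_active` and `hana_nfs_s2_active` match the location constraint shown later in this section, but treat the exact parameters as assumptions:

   ```bash
   # Attribute resources that flag, per node, whether the site's /hana/shared file system is available
   pcs resource create hana_nfs_s1_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs_s1_active
   pcs resource create hana_nfs_s2_active ocf:pacemaker:attribute active_value=true inactive_value=false name=hana_nfs_s2_active

   # Clone the attribute resources (location constraints, not shown here, restrict each clone to its own site's nodes)
   pcs resource clone hana_nfs_s1_active meta clone-node-max=1 interleave=true
   pcs resource clone hana_nfs_s2_active meta clone-node-max=1 interleave=true
   ```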
Now you're ready to create the cluster resources:
pcs resource clone SAPHanaTopology_HN1_HDB03 meta clone-node-max=1 interleave=true ```
- If you're building a RHEL **8.x** cluster, use the following commands:
+ If you're building a RHEL **8.x** or later cluster, use the following commands:
```bash pcs resource create SAPHanaTopology_HN1_HDB03 SAPHanaTopology \ SID=HN1 InstanceNumber=03 meta clone-node-max=1 interleave=true \
Now you're ready to create the cluster resources:
meta master-max="1" clone-node-max=1 interleave=true ```
- If you're building a RHEL **8.x** cluster, use the following commands:
+ If you're building a RHEL **8.x** or later cluster, use the following commands:
```bash pcs resource create SAPHana_HN1_HDB03 SAPHanaController \ SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
Now you're ready to create the cluster resources:
sudo pcs resource group add g_ip_HN1_03 nc_HN1_03 vip_HN1_03 ```
- 1. Create the cluster constraints.
+ 1. Create the cluster constraints.
If you're building a RHEL **7.x** cluster, use the following commands: ```bash #Start HANA topology, before the HANA instance
Now you're ready to create the cluster resources:
pcs constraint location SAPHanaTopology_HN1_HDB03-clone rule resource-discovery=never score=-INFINITY hana_nfs_s1_active ne true and hana_nfs_s2_active ne true ```
- If you're building a RHEL **8.x** cluster, use the following commands:
+ If you're building a RHEL **8.x** or later cluster, use the following commands:
```bash #Start HANA topology, before the HANA instance pcs constraint order SAPHanaTopology_HN1_HDB03-clone then SAPHana_HN1_HDB03-clone
When you're testing a HANA cluster configured with a read-enabled secondary, be
1. Verify the cluster configuration for a failure scenario, when a node loses access to the NFS share (`/hana/shared`).
- The SAP HANA resource agents depend on binaries, stored on `/hana/shared`, to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented configuration. One test that you can perform is to remount the `/hana/shared` file system as *Read only*. This approach validates that the cluster will fail over, if access to `/hana/shared` is lost on the active system replication site.
+ The SAP HANA resource agents depend on binaries stored on `/hana/shared` to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented configuration. One test you can perform is to create a temporary firewall rule that blocks access to the NFS-mounted `/hana/shared` file system on one of the primary site VMs. This approach validates that the cluster fails over if access to `/hana/shared` is lost on the active system replication site.
- **Expected result**: When you remount `/hana/shared` as *Read only*, the monitoring operation that performs a read/write operation on the file system will fail. This is because it isn't able to write to the file system, and will trigger HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
+ **Expected result**: When you block access to the NFS-mounted `/hana/shared` file system on one of the primary site VMs, the monitoring operation that performs a read/write test on the file system fails, because it can't access the file system, and triggers a HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
You can check the state of the cluster resources by running `crm_mon` or `pcs status`. Resource state before starting the test: ```bash
When you're testing a HANA cluster configured with a read-enabled secondary, be
# vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hana-s1-db1 ```
- To simulate failure for `/hana/shared` on one of the primary replication site VMs, run the following command:
- ```bash
- # Execute as root
- mount -o ro /hana/shared
- # Or if the preceding command returns an error
- sudo mount -o ro 10.23.1.7/HN1-shared-s1 /hana/shared
- ```
-
+ To simulate failure for `/hana/shared`:
+
+ * If you're using NFS on Azure NetApp Files, first confirm the IP address of the `/hana/shared` volume on the primary site. You can do that by running `df -kh | grep /hana/shared`.
+ * If you're using NFS on Azure Files, first determine the IP address of the private endpoint for your storage account.
+
+ Then, set up a temporary firewall rule to block access to the IP address of the `/hana/shared` NFS file system by executing the following command on one of the primary HANA system replication site VMs.
+
+ In this example, the command was executed on hana-s1-db1 for ANF volume `/hana/shared`.
+
+ ```bash
+ iptables -A INPUT -s 10.23.1.7 -j DROP; iptables -A OUTPUT -d 10.23.1.7 -j DROP
+ ```
+ The HANA VM that lost access to `/hana/shared` should restart or stop, depending on the cluster configuration. The cluster resources are migrated to the other HANA system replication site. If the cluster hasn't started on the VM that was restarted, start the cluster by running the following:
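+
+ As a hedged sketch, removing the temporary firewall rules again and starting the cluster framework on the affected node typically looks like this:
+
+ ```bash
+ # Remove the temporary firewall rules that blocked the NFS traffic (mirrors the rules added above)
+ iptables -D INPUT -s 10.23.1.7 -j DROP; iptables -D OUTPUT -d 10.23.1.7 -j DROP
+
+ # Start the Pacemaker cluster services on the node that was restarted
+ pcs cluster start
+ ```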
sentinel Connect Cef Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md
This example collects events for:
1. To capture messages sent from a logger or a connected device, run this command in the background: ```
- tcpdump -i any port 514 -A vv &
+ tcpdump -i any port 514 -A -vv &
``` 1. After you complete the validation, we recommend that you stop the `tcpdump`: Type `fg` and then select <kbd>Ctrl</kbd>+<kbd>C</kbd>. 1. To send demo messages, do one of the following:
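
As a hedged example, a demo CEF message can be sent with the `logger` utility; the message content, port, and target address below are illustrative assumptions:

```bash
# Send a sample CEF message to the local syslog listener on port 514
logger -p local4.warn -t CEF "0|Mock-sender|MOCK|1.0|100|Demo CEF event|5|src=10.0.0.1 dst=10.0.0.2" -P 514 -n 127.0.0.1
```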
sentinel Watchlists Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-queries.md
To use watchlists in analytics rules, create a rule using the _GetWatchlist('wat
1. Complete the rest of the tabs in the **Analytics rule wizard**.
-For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md).
+Watchlists are refreshed in your workspace every 12 days, updating the `TimeGenerated` field. For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md#query-scheduling-and-alert-threshold).
## View list of watchlist aliases
sentinel Watchlists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists.md
Before you create a watchlist, be aware of the following limitations:
- The use of watchlists should be limited to reference data, as they aren't designed for large data volumes. - The **total number of active watchlist items** across all watchlists in a single workspace is currently limited to **10 million**. Deleted watchlist items don't count against this total. If you require the ability to reference large data volumes, consider ingesting them using [custom logs](../azure-monitor/agents/data-sources-custom-logs.md) instead.
+- Watchlists are refreshed in your workspace every 12 days, updating the `TimeGenerated` field.
- Watchlists can only be referenced from within the same workspace. Cross-workspace and/or Lighthouse scenarios are currently not supported. - Local file uploads are currently limited to files of up to 3.8 MB in size. - File uploads from an Azure Storage account (in preview) are currently limited to files up to 500 MB in size.
service-connector Concept Service Connector Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-service-connector-internals.md
Service Connector runs multiple tasks while creating or updating service connect
- Configuring the network and firewall settings - Configuring connection information - Configuring authentication information-- Creating or updating connection rollback in case of failure
+- Rolling back the connection creation or update if a failure occurs
If a step fails during this process, Service Connector rolls back all previous steps to keep the initial settings in the source and target instances.
az containerapp connection list-configuration --resource-group <source-service-r
## Configuration naming convention
-Service Connector sets the connection configuration when creating a connection. The environment variable key-value pairs are determined by your client type and authentication type. For example, using the Azure SDK with a managed identity requires a client ID, client secret, etc. Using a JDBC driver requires a database connection string. Follow the conventions below to name the configurations:
+Service Connector sets the connection configuration when creating a connection. The environment variable key-value pairs are determined by your client type and authentication type. For example, using the Azure SDK with a managed identity requires a client ID, client secret, etc. Using a JDBC driver requires a database connection string. Follow these conventions to name the configurations:
- Spring Boot client: the Spring Boot library for each target service has its own naming convention. For example, MySQL connection settings would be `spring.datasource.url`, `spring.datasource.username`, `spring.datasource.password`. Kafka connection settings would be `spring.kafka.properties.bootstrap.servers`.
Service Connector sets the connection configuration when creating a connection.
- The key name of the first connection configuration uses the format `<Cloud>_<Type>_<Name>`. For example, `AZURE_STORAGEBLOB_RESOURCEENDPOINT`, `CONFLUENTCLOUD_KAFKA_BOOTSTRAPSERVER`. - For the same type of target resource, the key name of the second connection configuration uses the format `<Cloud>_<Type>_<Connection Name>_<Name>`. For example, `AZURE_STORAGEBLOB_CONN2_RESOURCEENDPOINT`, `CONFLUENTCLOUD_KAFKA_CONN2_BOOTSTRAPSERVER`.
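
As an illustrative sketch of these conventions, two Blob Storage connections on the same compute resource could surface configuration keys like the following (the values are placeholders):

```bash
# First Blob Storage connection
AZURE_STORAGEBLOB_RESOURCEENDPOINT=https://account1.blob.core.windows.net/
# Second Blob Storage connection, named CONN2
AZURE_STORAGEBLOB_CONN2_RESOURCEENDPOINT=https://account2.blob.core.windows.net/
```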
+## Service network solution
+
+Service Connector offers three network solutions for users to choose from when creating a connection. These solutions are designed to facilitate secure and efficient communication between resources.
+
+1. **Firewall**: This solution allows connections over the public network, and the compute resource accesses the target resource by its public IP address. When you select this option, Service Connector verifies the target resource's firewall settings and adds a rule to allow connections from the source resource's public IP address. If the resource's firewall offers an option to allow access from all Azure resources, Service Connector enables that setting. However, if the target resource denies all public network traffic by default, Service Connector doesn't modify this setting. In this case, choose another option or update the network settings manually before trying again.
+
+2. **Service Endpoint**: This solution enables the compute resource to connect to target resources through a virtual network, ensuring that connection traffic doesn't pass through the public network. It's only available if certain preconditions are met:
+ - The compute resource must have virtual network integration enabled. For Azure App Service, this can be configured in its networking settings; for Azure Spring Apps, users must set VNet injection during the resource creation stage.
+ - The target service must support Service Endpoint. For a list of supported services, refer to [Virtual Network service endpoints](/azure/virtual-network/virtual-network-service-endpoints-overview).
+
+ When selecting this option, Service Connector adds the private IP address of the compute resource in the virtual network to the target resource's Virtual Network rules and enables the service endpoint in the source resource's subnet configuration. If the user lacks sufficient permissions or the resource's SKU or region doesn't support service endpoints, connection creation fails.
+
+3. **Private Endpoint**: This solution is a recommended way to connect resources via a virtual network and is only available if certain preconditions are met:
+ - The compute resource must have virtual network integration enabled. For Azure App Service, this can be configured in its networking settings; for Azure Spring Apps, users must set VNet injection during the resource creation stage.
+ - The target service must support private endpoints. For a list of supported services, refer to [Private-link resource](/azure/private-link/private-endpoint-overview#private-link-resource).
+
+ When selecting this option, Service Connector doesn't perform any further configuration in the compute or target resources. Instead, it verifies that a valid private endpoint exists and fails the connection creation if none is found. For convenience, users can select the "New Private Endpoint" checkbox in the Azure portal when creating a connection. With this option, Service Connector automatically creates all related resources for the private endpoint in the proper sequence, simplifying the connection creation process.
+++ ## Service connection validation When validating a connection, Service connector checks the following elements:
When a service connection is deleted, the connection information is also deleted
## Next steps
-Go to the concept article below to learn more about Service Connector.
+See the following concept article to learn more about Service Connector.
> [!div class="nextstepaction"] > [High availability](./concept-availability.md)
service-connector Quickstart Portal App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-app-service-connection.md
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
1. Select **Next: Review + Create** to review the provided information. Then select **Create** to create the service connection. This operation may take a minute to complete.
+> [!NOTE]
+> You need sufficient permissions to create a connection successfully. For more information, see [Permission requirements](./concept-permission.md).
+ ## View service connections in App Service 1. The **Service Connector** tab displays existing App Service connections.
spring-apps Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/cost-management.md
Autoscale reduces operating costs by terminating redundant resources when they'r
You can also set up autoscale rules for your applications in the Azure Spring Apps Standard consumption and dedicated plan. For more information, see [Quickstart: Set up autoscale for applications in the Azure Spring Apps Standard consumption and dedicated plan](quickstart-apps-autoscale-standard-consumption.md).
+## Stop maintaining unused environments
+
+If you set up several environments while developing a product, it's important to remove the environments that are no longer in use once the product is live.
+
+## Remove unnecessary deployments
+
+If you use strategies like blue-green deployment to reduce downtime, it can result in many idle deployments on staging slots, especially multiple app instances that aren't needed once newer versions are deployed to production.
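+
+The following is a hedged sketch of finding and removing an idle deployment with the Azure CLI; the resource names are placeholders:
+
+```bash
+# List the deployments of an app to spot idle staging deployments
+az spring app deployment list --app demo-app --service demo-spring --resource-group demo-rg --output table
+
+# Delete a deployment that's no longer needed
+az spring app deployment delete --name green-v1 --app demo-app --service demo-spring --resource-group demo-rg
+```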
+
+## Avoid overallocating resources
+
+Java users often reserve more processing power and memory than they really need. While it's fine to use large app instances during the initial months in production, you should adjust resource allocation based on usage data.
+
+## Avoid unnecessary scaling
+
+If you use more app instances than you need, you should adjust the number of instances based on real usage data.
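+
+For example, the following is a hedged sketch of right-sizing an app's CPU, memory, and instance count with the Azure CLI; the names and sizes are placeholders:
+
+```bash
+# Scale an app to match observed usage: 1 vCPU, 2 Gi of memory, 2 instances
+az spring app scale --name demo-app --service demo-spring --resource-group demo-rg --cpu 1 --memory 2Gi --instance-count 2
+```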
+
+## Streamline monitoring data collection
+
+If you collect more logs, metrics, and traces than you can use or afford, you must determine what's necessary for troubleshooting, capacity planning, and monitoring production. For example, you can reduce the frequency of application performance monitoring or be more selective about which logs, metrics, and traces you send to data aggregation tools.
+
+## Deactivate debug mode
+
+If you forget to switch off debug mode for apps, a large amount of data is collected and sent to monitoring platforms. Leaving debug mode on is often unnecessary and can be costly.
+ ## Next steps [Quickstart: Provision an Azure Spring Apps Standard consumption and dedicated plan service instance](quickstart-provision-standard-consumption-service-instance.md)
static-web-apps Snippets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/snippets.md
+
+ Title: Snippets in Azure Static Web Apps (preview)
+description: Inject custom code in the HEAD or BODY elements at runtime in Azure Static Web Apps
++++ Last updated : 06/22/2023+++
+# Snippets in Azure Static Web Apps (preview)
+
+Azure Static Web Apps allows you to inject custom code into the `head` or `body` elements at runtime. These pieces of code are known as *snippets*.
+
+Snippets give you the flexibility to add code to every page in your site in a single place, all without modifying the core codebase.
+
+Common use cases of snippets include:
+
+- Analytics scripts
+- Common scripts
+- Global UI elements
+
+> [!NOTE]
+> Some front-end frameworks may overwrite your snippet code. Test your snippets before applying them to a production environment.
+
+## Add a snippet
+
+1. Go to your static web app in the Azure portal.
+
+1. From the *Settings* menu, select **Configuration**.
+
+1. Select the **Snippets** tab.
+
+1. Select the **Add** button.
+
+1. Enter the following settings in the Snippets window:
+
+ | Setting | Value | Comments |
+ ||||
+ | Location | Select which HTML page element you want your code injected into. | |
+ | Name | Enter a snippet name. | |
+ | Insertion location | Select whether you want to **Prepend** or **Append** your code to the selected element. | *Prepend* means your code appears directly after the open tag of the element. *Append* means your code appears directly before the close tag of the element. |
+ | Environment | Select the environment(s) you want to target. | If you pick **Select environment**, then you can choose from different environments to target. |
+
+1. Enter your code in the text box.
+
+1. Select **OK** to close the window.
+
+1. Select **Save** to commit your changes.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Split traffic](./traffic-splitting.md)
storage Storage Blob Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-python.md
The Azure SDK for Python contains libraries that build on top of the Azure REST
### Code samples -- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob-devguide-blobs.py)
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/python/blob-devguide-py/blob-devguide-upload.py)
[!INCLUDE [storage-dev-guide-resources-python](../../../includes/storage-dev-guides/storage-dev-guide-resources-python.md)]
storage Storage Blobs Tune Upload Download Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-java.md
Last updated 09/22/2023 ms.devlang: java-+ # Performance tuning for uploads and downloads with Java
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
The following iSCSI features aren't currently supported:
## Next steps
-For a video that goes over the general planning and deployment with a few example scenarios, see [Getting started with Azure Elastic SAN](/shows/inside-azure-for-it/getting-started-with-azure-elastic-san).
+- [Networking options for Elastic SAN Preview](elastic-san-networking-concepts.md)
+- [Deploy an Elastic SAN Preview](elastic-san-create.md)
-[Networking options for Elastic SAN Preview](elastic-san-networking-concepts.md)
-[Deploy an Elastic SAN Preview](elastic-san-create.md)
+For a video that goes over the general planning and deployment with a few example scenarios, see [Getting started with Azure Elastic SAN](/shows/inside-azure-for-it/getting-started-with-azure-elastic-san).
storage Analyze Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/analyze-files-metrics.md
Last updated 09/06/2023 -+ # Analyze Azure Files metrics using Azure Monitor
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
description: Learn how to enable Active Directory Domain Services authentication
Previously updated : 08/11/2023 Last updated : 09/27/2023 recommendations: false
Connect-AzAccount
# Define parameters # $StorageAccountName is the name of an existing storage account that you want to join to AD # $SamAccountName is the name of the to-be-created AD object, which is used by AD as the logon name
-# for the object. It must be 20 characters or less and has certain character restrictions.
+# for the object. It must be 20 characters or less and has certain character restrictions.
+# Make sure that you provide the SamAccountName without the trailing '$' sign.
# See https://learn.microsoft.com/windows/win32/adschema/a-samaccountname for more information. $SubscriptionId = "<your-subscription-id-here>" $ResourceGroupName = "<resource-group-name-here>"
synapse-analytics Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/known-issues.md
Deleting a Synapse workspace fails with the error message:
**Workaround**: The problem can be mitigated by retrying the delete operation. The engineering team is aware of this behavior and working on a fix.
+### REST API PUT operations or ARM/Bicep templates to update network settings fail
+
+When using an ARM template, Bicep template, or direct REST API PUT operation to change the public network access settings and/or firewall rules for a Synapse workspace, the operation can fail.
+
+**Workaround**: The problem can be mitigated by using a REST API PATCH operation or the Azure portal to reverse and retry the desired configuration changes. The engineering team is aware of this behavior and is working on a fix.
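+
+The following is a hedged sketch of the PATCH-based workaround with the Azure CLI; the subscription, resource names, API version, and property value are placeholders or assumptions:
+
+```bash
+# Patch only the public network access setting instead of issuing a full PUT
+az rest --method patch \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Synapse/workspaces/<workspace-name>?api-version=2021-06-01" \
+  --body '{"properties": {"publicNetworkAccess": "Disabled"}}'
+```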
+ ## Recently Closed Known issues |Synapse Component|Issue|Status|Date Resolved
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
Title: Business Intelligence partners
description: Lists of third-party business intelligence partners with solutions that support Azure Synapse Analytics. - Last updated 06/14/2023
synapse-analytics Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-integration.md
Title: Data integration partners
description: Lists of third-party partners with data integration solutions that support Azure Synapse Analytics. - Last updated 06/14/2023
synapse-analytics Data Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-management.md
Title: Data management partners
description: Lists of third-party data management partners with solutions that support Azure Synapse Analytics. - Last updated 07/05/2023
synapse-analytics Analyze Your Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/analyze-your-workload.md
Title: Analyze your workload for dedicated SQL pool description: Techniques for analyzing query prioritization for dedicated SQL pool in Azure Synapse Analytics. -+ Last updated : 11/03/2021 + - Previously updated : 11/03/2021-
synapse-analytics Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/cheat-sheet.md
Title: Cheat sheet for dedicated SQL pool (formerly SQL DW) description: Find links and best practices to quickly build your dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.--++ Last updated : 11/04/2019 + - Previously updated : 11/04/2019- # Cheat sheet for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Column Level Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/column-level-security.md
Title: Column-level security for dedicated SQL pool
description: Column-Level Security allows customers to control access to database table columns based on the user's execution context or group membership, simplifying the design and coding of security in your application, and allowing you to implement restrictions on column access. - Last updated 09/19/2023
synapse-analytics Create Data Warehouse Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-portal.md
Title: "Quickstart: Create and query a dedicated SQL pool (formerly SQL DW) (Azu
description: Create and query a dedicated SQL pool (formerly SQL DW) using the Azure portal - Last updated 02/21/2023
synapse-analytics Create Data Warehouse Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-powershell.md
Title: 'Quickstart: Create a dedicated SQL pool (formerly SQL DW) with Azure PowerShell'
+ Title: "Quickstart: Create a dedicated SQL pool (formerly SQL DW) with Azure PowerShell"
description: Quickly create a dedicated SQL pool (formerly SQL DW) with a server-level firewall rule using Azure PowerShell. - Last updated 4/11/2019- -++
+ - devx-track-azurepowershell
+ - seo-lt-2019
+ - azure-synapse
+ - mode-api
# Quickstart: Create a dedicated SQL pool (formerly SQL DW) with Azure PowerShell
synapse-analytics Design Elt Data Loading https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/design-elt-data-loading.md
Title: Instead of ETL, design ELT
+ Title: Instead of ETL, design ELT
description: Implement flexible data loading strategies for dedicated SQL pools within Azure Synapse Analytics. ---- Previously updated : 11/20/2020 Last updated : 11/20/2020+++
synapse-analytics Design Guidance For Replicated Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables.md
Title: Design guidance for replicated tables
-description: Recommendations for designing replicated tables in Synapse SQL pool
---- Previously updated : 09/27/2022
+description: Recommendations for designing replicated tables in Synapse SQL pool
-- Last updated : 09/27/2022++++
+ - seo-lt-2019
+ - azure-synapse
# Design guidance for using replicated tables in Synapse SQL pool
synapse-analytics Fivetran Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/fivetran-quickstart.md
Title: "Quickstart: Fivetran and dedicated SQL pool (formerly SQL DW)"
-description: Get started with Fivetran and dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.
--
+ Title: "Quickstart: Fivetran and dedicated SQL pool (formerly SQL DW)"
+description: Get started with Fivetran and dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.
++ Last updated : 10/12/2018 + - Previously updated : 10/12/2018--+
+ - seo-lt-2019
+ - azure-synapse
# Quickstart: Fivetran with dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Gen2 Migration Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/gen2-migration-schedule.md
Title: Migrate your dedicated SQL pool (formerly SQL DW) to Gen2
+ Title: Migrate your dedicated SQL pool (formerly SQL DW) to Gen2
description: Instructions for migrating an existing dedicated SQL pool (formerly SQL DW) to Gen2 and the migration schedule by region. - Last updated : 01/21/2020 - Previously updated : 01/21/2020-++
+ - seo-lt-2019
+ - azure-synapse
# Upgrade your dedicated SQL pool (formerly SQL DW) to Gen2
synapse-analytics Load Data From Azure Blob Storage Using Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/load-data-from-azure-blob-storage-using-copy.md
Title: 'Tutorial: Load New York Taxicab data'
+ Title: "Tutorial: Load New York Taxicab data"
description: Tutorial uses Azure portal and SQL Server Management Studio to load New York Taxicab data from an Azure blob for Synapse SQL. ---- Previously updated : 11/23/2020 Last updated : 11/23/2020+++
synapse-analytics Load Data Wideworldimportersdw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/load-data-wideworldimportersdw.md
Title: 'Tutorial: Load data using Azure portal & SSMS'
+ Title: "Tutorial: Load data using Azure portal & SSMS"
description: Tutorial uses Azure portal and SQL Server Management Studio to load the WideWorldImportersDW data warehouse from a global Azure blob to an Azure Synapse Analytics SQL pool.----- Previously updated : 01/12/2021+ - Last updated : 01/12/2021++++
+ - seo-lt-2019
+ - synapse-analytics
# Tutorial: Load data to Azure Synapse Analytics SQL pool
synapse-analytics Maintenance Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/maintenance-scheduling.md
Title: Maintenance schedules for Synapse SQL pool
-description: Maintenance scheduling enables customers to plan around the necessary scheduled maintenance events that Azure Synapse Analytics uses to roll out new features, upgrades, and patches.
+description: Maintenance scheduling enables customers to plan around the necessary scheduled maintenance events that Azure Synapse Analytics uses to roll out new features, upgrades, and patches.
-+ Last updated : 11/28/2022 + - Previously updated : 11/28/2022- # Use maintenance schedules to manage service updates and maintenance
synapse-analytics Manage Compute With Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/manage-compute-with-azure-functions.md
Title: 'Tutorial: Manage compute with Azure Functions'
+ Title: "Tutorial: Manage compute with Azure Functions"
description: How to use Azure Functions to manage the compute of your dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -+ Last updated : 04/27/2018 + - Previously updated : 04/27/2018--+
+ - seo-lt-2019
+ - azure-synapse
+ - devx-track-arm-template
# Use Azure Functions to manage compute resources for your dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Massively Parallel Processing Mpp Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/massively-parallel-processing-mpp-architecture.md
Title: Dedicated SQL pool (formerly SQL DW) architecture
-description: Learn how Dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics combines distributed query processing capabilities with Azure Storage to achieve high performance and scalability.
--
+ Title: Dedicated SQL pool (formerly SQL DW) architecture
+description: Learn how Dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics combines distributed query processing capabilities with Azure Storage to achieve high performance and scalability.
++ Last updated : 07/20/2022 + - Previously updated : 07/20/2022- # Dedicated SQL pool (formerly SQL DW) architecture in Azure Synapse Analytics
synapse-analytics Memory Concurrency Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/memory-concurrency-limits.md
Title: Memory and concurrency limits description: View the memory and concurrency limits allocated to the various performance levels and resource classes for dedicated SQL pool in Azure Synapse Analytics. ---- Previously updated : 04/04/2021 Last updated : 04/04/2021+++
To learn more about how to leverage resource classes to optimize your workload f
* [Workload management workload groups](sql-data-warehouse-workload-isolation.md) * [CREATE WORKLOAD GROUP](/sql/t-sql/statements/create-workload-group-transact-sql) * [Resource classes for workload management](resource-classes-for-workload-management.md)
-* [Analyzing your workload](analyze-your-workload.md)
+* [Analyzing your workload](analyze-your-workload.md)
synapse-analytics Performance Tuning Ordered Cci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/performance-tuning-ordered-cci.md
Title: Performance tuning with ordered clustered columnstore index
description: Recommendations and considerations you should know as you use ordered clustered columnstore index to improve your query performance in dedicated SQL pools. - Last updated 02/13/2023
synapse-analytics Performance Tuning Result Set Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/performance-tuning-result-set-caching.md
Title: Performance tuning with result set caching
-description: Result set caching feature overview for dedicated SQL pool in Azure Synapse Analytics
+ Title: Performance tuning with result set caching
+description: Result set caching feature overview for dedicated SQL pool in Azure Synapse Analytics
---- Previously updated : 10/10/2019 Last updated : 10/10/2019+++
synapse-analytics Quickstart Bulk Load Copy Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md
Title: 'Quickstart: Bulk load data using a single T-SQL statement'
+ Title: "Quickstart: Bulk load data using a single T-SQL statement"
description: Bulk load data using the COPY statement ---- Previously updated : 11/20/2020 - Last updated : 11/20/2020++++
+ - azure-synapse
+ - mode-other
# Quickstart: Bulk load data using the COPY statement
synapse-analytics Quickstart Configure Workload Isolation Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-configure-workload-isolation-tsql.md
Title: 'Quickstart: Configure workload isolation - T-SQL'
+ Title: "Quickstart: Configure workload isolation - T-SQL"
description: Use T-SQL to configure workload isolation. ---- Previously updated : 04/27/2020 - Last updated : 04/27/2020++++
+ - azure-synapse
+ - mode-other
# Quickstart: Configure workload isolation in a dedicated SQL pool using T-SQL
synapse-analytics Quickstart Create A Workload Classifier Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-create-a-workload-classifier-portal.md
Title: 'Quickstart: Create a workload classifier - Portal'
+ Title: "Quickstart: Create a workload classifier - Portal"
description: Use Azure portal to create a workload classifier with high importance. - Last updated 05/04/2020- -++
+ - azure-synapse
+ - mode-ui
# Quickstart: Create a dedicated SQL pool workload classifier using the Azure portal
synapse-analytics Quickstart Create A Workload Classifier Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-create-a-workload-classifier-tsql.md
Title: 'Quickstart: Create a workload classifier - T-SQL'
+ Title: "Quickstart: Create a workload classifier - T-SQL"
description: Use T-SQL to create a workload classifier with high importance. ---- Previously updated : 02/04/2020 - Last updated : 02/04/2020++++
+ - azure-synapse
+ - mode-other
# Quickstart: Create a workload classifier using T-SQL
synapse-analytics Quickstart Scale Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md
Title: "Quickstart: Scale compute for an Azure Synapse dedicated SQL pool (formerly SQL DW) with the Azure portal" description: You can scale compute for an Azure Synapse dedicated SQL pool (formerly SQL DW) with the Azure portal.---++ Last updated 02/22/2023
To change data warehouse units:
## Next steps -- To learn more about SQL pool, continue to the [Load data into SQL pool](./load-data-from-azure-blob-storage-using-copy.md) tutorial.
+- To learn more about SQL pool, continue to the [Load data into SQL pool](./load-data-from-azure-blob-storage-using-copy.md) tutorial.
synapse-analytics Quickstart Scale Compute Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-tsql.md
Title: "Quickstart: Scale compute in dedicated SQL pool (formerly SQL DW) - T-SQL" description: Scale compute in dedicated SQL pool (formerly SQL DW) using T-SQL and SQL Server Management Studio (SSMS). Scale out compute for better performance, or scale back compute to save costs.---++ Last updated 02/22/2023
synapse-analytics Quickstart Scale Compute Workspace Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-workspace-portal.md
Title: "Quickstart: Scale compute for an Azure Synapse dedicated SQL pool in a Synapse workspace with the Azure portal" description: Learn how to scale compute for an Azure Synapse dedicated SQL pool in a Synapse workspace with the Azure portal.---++ Last updated 02/22/2023
synapse-analytics Release Notes 10 0 10106 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/release-notes-10-0-10106-0.md
Title: Release notes for dedicated SQL pool (formerly SQL DW) description: Release notes for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.--- Previously updated : 3/24/2022 - Last updated : 3/24/2022+++ tags: azure-synapse
synapse-analytics Resource Classes For Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/resource-classes-for-workload-management.md
Title: Resource classes for workload management
+ Title: Resource classes for workload management
description: Guidance for using resource classes to manage concurrency and compute resources for queries in Azure Synapse Analytics. ---- Previously updated : 02/04/2020 Last updated : 02/04/2020+++
synapse-analytics Sql Data Warehouse Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-authentication.md
Title: Authentication for dedicated SQL pool (formerly SQL DW) description: Learn how to authenticate to dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics by using Azure Active Directory (Azure AD) or SQL Server authentication. -+ Last updated : 04/02/2019 + - Previously updated : 04/02/2019- tag: azure-synapse
synapse-analytics Sql Data Warehouse Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-recommendations.md
Title: Dedicated SQL pool Azure Advisor recommendations description: Learn about Synapse SQL recommendations and how they are generated -+ Last updated : 06/26/2020 + - Previously updated : 06/26/2020-
synapse-analytics Sql Data Warehouse Concept Resource Utilization Query Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity.md
Title: Manageability and monitoring - query activity, resource utilization
description: Learn what capabilities are available to manage and monitor Azure Synapse Analytics. Use the Azure portal and Dynamic Management Views (DMVs) to understand query activity and resource utilization of your data warehouse. - Last updated 11/02/2022
synapse-analytics Sql Data Warehouse Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-connect-overview.md
Title: Connect to a SQL pool in Azure Synapse
+ Title: Connect to a SQL pool in Azure Synapse
description: Learn how to connect to an SQL pool in Azure Synapse.----- Previously updated : 06/13/2022+ - Last updated : 06/13/2022++++
+ - azure-synapse
+ - seo-lt-2019
+ - devx-track-csharp
+ - kr2b-contr-experiment
# Connect to a SQL pool in Azure Synapse
synapse-analytics Sql Data Warehouse Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-connection-strings.md
Title: Connection strings description: Connection strings for Synapse SQL pool----- Previously updated : 04/17/2018+ - Last updated : 04/17/2018++++
+ - azure-synapse
+ - seo-lt-2019
+ - devx-track-csharp
# Connection strings for SQL pools in Azure Synapse
synapse-analytics Sql Data Warehouse Continuous Integration And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-continuous-integration-and-deployment.md
Title: Continuous integration and deployment for dedicated SQL pool
+ Title: Continuous integration and deployment for dedicated SQL pool
description: Enterprise-class Database DevOps experience for dedicated SQL pool in Azure Synapse Analytics with built-in support for continuous integration and deployment using Azure Pipelines. -+ Last updated : 02/04/2020 + - Previously updated : 02/04/2020- # Continuous integration and deployment for dedicated SQL pool in Azure Synapse Analytics
At this point, you have a simple environment where any check-in to your source c
- Explore [Dedicated SQL pool (formerly SQL DW) architecture](massively-parallel-processing-mpp-architecture.md) - Quickly [create a dedicated SQL pool (formerly SQL DW)](create-data-warehouse-portal.md) - [Load sample data](./load-data-from-azure-blob-storage-using-copy.md)-- Explore [Videos](sql-data-warehouse-videos.md)
+- Explore [Videos](sql-data-warehouse-videos.md)
synapse-analytics Sql Data Warehouse Develop Ctas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-ctas.md
Title: CREATE TABLE AS SELECT (CTAS)
+ Title: CREATE TABLE AS SELECT (CTAS)
description: Explanation and examples of the CREATE TABLE AS SELECT (CTAS) statement in dedicated SQL pool (formerly SQL DW) for developing solutions. ---- Previously updated : 06/09/2022 - Last updated : 06/09/2022++++
+ - seoapril2019
+ - azure-synapse
# CREATE TABLE AS SELECT (CTAS)
synapse-analytics Sql Data Warehouse Develop Dynamic Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-dynamic-sql.md
Title: Using dynamic SQL
+ Title: Using dynamic SQL
description: Tips for development solutions using dynamic SQL for dedicated SQL pools in Azure Synapse Analytics. ---- Previously updated : 04/17/2018 - Last updated : 04/17/2018++++
+ - seo-lt-2019
+ - azure-synapse
# Dynamic SQL for dedicated SQL pools in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Develop Group By Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-group-by-options.md
Title: Using group by options
+ Title: Using group by options
description: Tips for implementing group by options for dedicated SQL pools in Azure Synapse Analytics. ---- Previously updated : 04/17/2018 - Last updated : 04/17/2018++++
+ - seo-lt-2019
+ - azure-synapse
# Group by options for dedicated SQL pools in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Develop Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-label.md
Title: Using labels to instrument queries description: Tips for using labels to instrument queries for dedicated SQL pools in Azure Synapse Analytics. ---- Previously updated : 04/17/2018 - Last updated : 04/17/2018++++
+ - seo-lt-2019
+ - azure-synapse
# Using labels to instrument queries for dedicated SQL pools in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Develop Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-loops.md
Title: Using T-SQL loops description: Tips for solution development using T-SQL loops and replacing cursors for dedicated SQL pools in Azure Synapse Analytics. ---- Previously updated : 04/17/2018 - Last updated : 04/17/2018++++
+ - seo-lt-2019
+ - azure-synapse
# Using T-SQL loops for dedicated SQL pools in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Develop Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-stored-procedures.md
Title: Using stored procedures description: Tips for developing solutions by implementing stored procedures for dedicated SQL pools in Azure Synapse Analytics.----- Previously updated : 04/02/2019+ Last updated : 04/02/2019+++
synapse-analytics Sql Data Warehouse Develop User Defined Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-user-defined-schemas.md
Title: Using user-defined schemas
+ Title: Using user-defined schemas
description: Tips for using T-SQL user-defined schemas to develop solutions for dedicated SQL pools in Azure Synapse Analytics.--++ Last updated : 04/17/2018 + - Previously updated : 04/17/2018--+
+ - seo-lt-2019
+ - azure-synapse
# User-defined schemas for dedicated SQL pools in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Develop Variable Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-variable-assignment.md
Title: Assign variables description: In this article, you'll find essential tips for assigning T-SQL variables for dedicated SQL pools in Azure Synapse Analytics.----- Previously updated : 04/17/2018+ - Last updated : 04/17/2018++++
+ - seo-lt-2019
+ - azure-synapse
# Assign variables for dedicated SQL pools in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Encryption Tde Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-encryption-tde-tsql.md
Title: Transparent data encryption (T-SQL) description: Transparent data encryption (TDE) in Azure Synapse Analytics (T-SQL) -+ Last updated : 04/30/2019 + - Previously updated : 04/30/2019--
synapse-analytics Sql Data Warehouse Encryption Tde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-encryption-tde.md
Title: Transparent Data Encryption (Portal) for dedicated SQL pool (formerly SQL DW) description: Transparent Data Encryption (TDE) for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics -+ Last updated : 06/23/2021 + - Previously updated : 06/23/2021--
synapse-analytics Sql Data Warehouse Get Started Analyze With Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-analyze-with-azure-machine-learning.md
Title: Analyze data with Azure Machine Learning
+ Title: Analyze data with Azure Machine Learning
description: Use Azure Machine Learning to build a predictive machine learning model based on data stored in Azure Synapse.--++ Last updated : 07/15/2020 + - Previously updated : 07/15/2020- tag: azure-Synapse
Compare the column BikeBuyer (actual) with the Scored Labels (prediction), to se
To learn more about Azure Machine Learning, refer to [Introduction to Machine Learning on Azure](../../machine-learning/overview-what-is-azure-machine-learning.md).
-Learn about built-in scoring in the data warehouse, [here](/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest&preserve-view=true).
+Learn about built-in scoring in the data warehouse, [here](/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest&preserve-view=true).
synapse-analytics Sql Data Warehouse Get Started Connect Sqlcmd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-connect-sqlcmd.md
Title: Connect with sqlcmd
+ Title: Connect with sqlcmd
description: Use sqlcmd command-line utility to connect to and query a dedicated SQL pool in Azure Synapse Analytics.----- Previously updated : 04/17/2018+ - Last updated : 04/17/2018++++
+ - seo-lt-2019
+ - azure-synapse
# Connect to a dedicated SQL pool in Azure Synapse Analytics with sqlcmd
sqlcmd -S MySqlDw.database.windows.net -d Adventure_Works -U myuser -P myP@sswor
## Next steps
-For more about details about the options available in sqlcmd, see [sqlcmd documentation](/sql/tools/sqlcmd-utility?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
+For more about details about the options available in sqlcmd, see [sqlcmd documentation](/sql/tools/sqlcmd-utility?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
synapse-analytics Sql Data Warehouse How To Configure Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-configure-workload-importance.md
Title: Configure workload importance for dedicated SQL pool description: Learn how to set request level importance in Azure Synapse Analytics. ---- Previously updated : 05/15/2020 Last updated : 05/15/2020+++
synapse-analytics Sql Data Warehouse How To Convert Resource Classes Workload Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md
Title: Convert resource class to a workload group
+ Title: Convert resource class to a workload group
description: Learn how to create a workload group that is similar to a resource class in a dedicated SQL pool. -+ Last updated : 08/13/2020 -+ Previously updated : 08/13/2020-
synapse-analytics Sql Data Warehouse How To Find Queries Running Beyond Wlm Elapsed Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-find-queries-running-beyond-wlm-elapsed-timeout.md
Title: Identify queries running beyond workload group query execution timeout
-description: Identify queries that are running beyond the workload groups query execution timeout value.
------ Previously updated : 06/13/2022-
+description: Identify queries that are running beyond the workload groups query execution timeout value.
++ Last updated : 06/13/2022++++ # Identify queries running beyond workload group query execution timeout
synapse-analytics Sql Data Warehouse How To Manage And Monitor Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md
Title: Manage and monitor workload importance in dedicated SQL pool description: Learn how to manage and monitor request level importance dedicated SQL pool for Azure Synapse Analytics. ---- Previously updated : 02/04/2020 Last updated : 02/04/2020+++
synapse-analytics Sql Data Warehouse How To Monitor Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache.md
Title: Optimize your Gen2 cache
+ Title: Optimize your Gen2 cache
description: Learn how to monitor your Gen2 cache using the Azure portal. -+ Last updated : 11/20/2020 -+ Previously updated : 11/20/2020-
synapse-analytics Sql Data Warehouse How To Troubleshoot Missed Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-troubleshoot-missed-classification.md
Title: Troubleshoot misclassified workload in a dedicated SQL pool
-description: Identify and troubleshoot scenarios where workloads are misclassified to unintended workload groups in a dedicated SQL pool in Azure Synapse Analytics.
------ Previously updated : 03/09/2022-
+description: Identify and troubleshoot scenarios where workloads are misclassified to unintended workload groups in a dedicated SQL pool in Azure Synapse Analytics.
++ Last updated : 03/09/2022++++ # Troubleshooting a misclassified workload in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Install Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-install-visual-studio.md
Title: Install Visual Studio 2019
+ Title: Install Visual Studio 2019
description: Install Visual Studio and SQL Server Development Tools (SSDT) for Synapse SQL-- -+ Last updated : 05/11/2020 + - Previously updated : 05/11/2020-+
+ - vs-azure
+ - azure-synapse
# Getting started with Visual Studio 2019
There are times when feature releases for Synapse SQL may not include support fo
## Next steps
-Now that you have the latest version of SSDT, you're ready to [connect](sql-data-warehouse-query-visual-studio.md) to your SQL pool.
+Now that you have the latest version of SSDT, you're ready to [connect](sql-data-warehouse-query-visual-studio.md) to your SQL pool.
synapse-analytics Sql Data Warehouse Integrate Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-integrate-azure-stream-analytics.md
Title: Use Azure Stream Analytics in dedicated SQL pool description: Tips for using Azure Stream Analytics with dedicated SQL pool in Azure Synapse for developing real-time solutions. -+ Last updated : 10/07/2022 + - Previously updated : 10/07/2022-
synapse-analytics Sql Data Warehouse Load From Azure Blob Storage With Polybase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md
Title: Load Contoso retail data to dedicated SQL pools description: Use PolyBase and T-SQL commands to load two tables from the Contoso retail data into dedicated SQL pools. -+ Last updated : 11/20/2020 + - Previously updated : 11/20/2020-
synapse-analytics Sql Data Warehouse Manage Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-overview.md
Title: Manage compute resource for for dedicated SQL pool (formerly SQL DW) description: Learn about performance scale out capabilities for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. Scale out by adjusting DWUs, or lower costs by pausing the dedicated SQL pool (formerly SQL DW). -+ Last updated : 11/12/2019 + - Previously updated : 11/12/2019--+
+ - seo-lt-2019
+ - azure-synapse
# Manage compute for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Manage Compute Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md
Title: Pause, resume, scale with REST APIs for dedicated SQL pool (formerly SQL DW)
description: Manage compute power for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics through REST APIs.
Last updated : 03/09/2022
Previously updated : 03/09/2022
+ - seo-lt-2019
+ - azure-synapse
# REST APIs for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Memory Optimizations For Columnstore Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-memory-optimizations-for-columnstore-compression.md
Title: Improve columnstore index performance for dedicated SQL pool
description: Reduce memory requirements or increase the available memory to maximize the number of rows within each rowgroup in dedicated SQL pool.
Last updated : 10/18/2021
Previously updated : 10/18/2021
synapse-analytics Sql Data Warehouse Overview Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop.md
Title: Resources for developing a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
description: Development concepts, design decisions, recommendations, and coding techniques for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.
Last updated : 08/29/2018
Previously updated : 08/29/2018
# Design decisions and coding techniques for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Overview Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-integrate.md
Title: Build integrated solutions
description: Solution tools and partners that integrate with a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.
Last updated : 04/17/2018
Previously updated : 04/17/2018
Azure Stream Analytics is a complex, fully managed infrastructure for processing
* **Job Output:** Send output from Stream Analytics jobs directly to a dedicated SQL pool (formerly SQL DW).
For more information, see [Integrate with Azure Stream Analytics](sql-data-warehouse-integrate-azure-stream-analytics.md).
synapse-analytics Sql Data Warehouse Overview Manage Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manage-security.md
Title: Secure a dedicated SQL pool (formerly SQL DW)
description: Tips for securing a dedicated SQL pool (formerly SQL DW) and developing solutions in Azure Synapse Analytics.
Last updated : 04/17/2018
Previously updated : 04/17/2018
tags: azure-synapse
synapse-analytics Sql Data Warehouse Overview Manageability Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manageability-monitoring.md
Title: Manageability and monitoring - overview
description: Monitoring and manageability overview for resource utilization, log and query activity, recommendations, and data protection (backup and restore) with dedicated SQL pool in Azure Synapse Analytics.
Last updated : 08/27/2018
Previously updated : 08/27/2018
Synapse SQL allows you to provision a data warehouse via dedicated SQL pool. The
## Next steps
For How-to guides, see [Monitor and tune your dedicated SQL pool](sql-data-warehouse-manage-monitor.md).
synapse-analytics Sql Data Warehouse Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md
Title: What is dedicated SQL pool (formerly SQL DW)?
description: Dedicated SQL pool (formerly SQL DW) is the enterprise data warehousing functionality in Azure Synapse Analytics.
Last updated : 02/21/2023
Previously updated : 02/21/2023
# What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics?
synapse-analytics Sql Data Warehouse Predict https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-predict.md
Title: Score machine learning models with PREDICT
description: Learn how to score machine learning models using the T-SQL PREDICT function in dedicated SQL pool.
Last updated : 07/21/2020
Previously updated : 07/21/2020
DATA = dbo.mytable AS d, RUNTIME = ONNX) WITH (Score float) AS p;
## Next steps
To learn more about the PREDICT function, see [PREDICT (Transact-SQL)](/sql/t-sql/queries/predict-transact-sql?preserve-view=true&view=azure-sqldw-latest).
synapse-analytics Sql Data Warehouse Query Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-query-visual-studio.md
Title: Connect to dedicated SQL pool (formerly SQL DW) with VSTS
description: Query dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics with Visual Studio.
Last updated : 08/15/2019
Previously updated : 08/15/2019
Now that a connection has been established to your database, let's write a query
## Next steps Now that you can connect and query, try [visualizing the data with Power BI](/power-bi/connect-data/service-azure-sql-data-warehouse-with-direct-connect).
To configure your environment for Azure Active Directory authentication, see [Authenticate to dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-authentication.md).
synapse-analytics Sql Data Warehouse Reference Powershell Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-powershell-cmdlets.md
Title: PowerShell & REST APIs for dedicated SQL pool (formerly SQL DW)
description: Top PowerShell cmdlets for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics including how to pause and resume a database.
Last updated : 04/17/2018
Previously updated : 04/17/2018
+ - seo-lt-2019
+ - devx-track-azurepowershell
# PowerShell for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Reference Tsql Language Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-tsql-language-elements.md
Title: T-SQL language elements for dedicated SQL pool
description: Links to the documentation for T-SQL language elements supported for dedicated SQL pool in Azure Synapse Analytics.
Last updated : 06/13/2018
Previously updated : 06/13/2018
+ - seo-lt-2019
+ - azure-synapse
# T-SQL language elements for dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Reference Tsql Statements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-tsql-statements.md
Title: T-SQL statements in dedicated SQL pool
description: Links to the documentation for T-SQL statements supported for dedicated SQL pool in Azure Synapse Analytics.
Last updated : 05/01/2019
Previously updated : 05/01/2019
+ - seo-lt-2019
+ - azure-synapse
# T-SQL statements supported for dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Reference Tsql System Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-tsql-system-views.md
Title: System views for dedicated SQL pool (formerly SQL DW)
description: Links to the documentation for system views for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.
Last updated : 01/06/2020
Previously updated : 01/06/2020
+ - seo-lt-2019
+ - azure-synapse
# System views for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Restore Deleted Dw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-deleted-dw.md
Title: Restore a deleted dedicated SQL pool (formerly SQL DW)
description: How-to guide for restoring a deleted dedicated SQL pool in Azure Synapse Analytics.
Last updated : 08/29/2018
Previously updated : 08/29/2018
+ - seo-lt-2019
+ - devx-track-azurepowershell
# Restore a deleted dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Restore From Geo Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-geo-backup.md
Title: Restore a dedicated SQL pool from a geo-backup
description: How-to guide for geo-restoring a dedicated SQL pool in Azure Synapse Analytics
Last updated : 11/13/2020
Previously updated : 11/13/2020
+ - seo-lt-2019
+ - devx-track-azurepowershell
# Geo-restore a dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-points.md
Title: User-defined restore points
description: How to create a restore point for dedicated SQL pool (formerly SQL DW).
Last updated : 07/03/2019
Previously updated : 07/03/2019
+ - seo-lt-2019
+ - devx-track-azurepowershell
# User-defined restore points for a dedicated SQL pool (formerly SQL DW)
synapse-analytics Sql Data Warehouse Service Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-service-capacity-limits.md
Title: Capacity limits for dedicated SQL pool
description: Maximum values allowed for various components of dedicated SQL pool in Azure Synapse Analytics.
Last updated : 6/20/2023
Previously updated : 6/20/2023
synapse-analytics Sql Data Warehouse Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-source-control-integration.md
Title: Source Control Integration
description: Enterprise-class Database DevOps experience for dedicated SQL pool with native source control integration using Azure Repos (Git and GitHub).
Last updated : 08/23/2019
Previously updated : 08/23/2019
# Source Control Integration for dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Table Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-table-constraints.md
Title: Primary, foreign, and unique keys
description: Table constraints support using dedicated SQL pool in Azure Synapse Analytics
Last updated : 09/05/2019
Previously updated : 09/05/2019
+ - seo-lt-2019
+ - azure-synapse
# Primary key, foreign key, and unique key using dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-data-types.md
Title: Table data types in dedicated SQL pool (formerly SQL DW)
description: Recommendations for defining table data types for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.
Last updated : 01/06/2020
Previously updated : 01/06/2020
# Table data types for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Tables Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity.md
Title: Using IDENTITY to create surrogate keys
description: Recommendations and examples for using the IDENTITY property to create surrogate keys on tables in dedicated SQL pool.
Last updated : 07/20/2020
Previously updated : 07/20/2020
synapse-analytics Sql Data Warehouse Tables Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-index.md
Title: Indexing tables
description: Recommendations and examples for indexing tables in dedicated SQL pool.
Last updated : 11/02/2021
Previously updated : 11/02/2021
+ - seo-lt-2019
+ - azure-synapse
# Indexes on dedicated SQL pool tables in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Tables Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-overview.md
Title: Designing tables
description: Introduction to designing tables using dedicated SQL pool.
Last updated : 07/05/2023
Previously updated : 07/05/2023
synapse-analytics Sql Data Warehouse Tables Partition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-partition.md
Title: Partitioning tables in dedicated SQL pool
description: Recommendations and examples for using table partitions in dedicated SQL pool.
Last updated : 11/02/2021
Previously updated : 11/02/2021
+ - seo-lt-2019
+ - azure-synapse
# Partitioning tables in dedicated SQL pool
synapse-analytics Sql Data Warehouse Tables Temporary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-temporary.md
Title: Temporary tables
description: Essential guidance for using temporary tables in dedicated SQL pool, highlighting the principles of session level temporary tables.
Last updated : 11/02/2021
Previously updated : 11/02/2021
# Temporary tables in dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot-connectivity.md
Title: Troubleshooting connectivity
description: Troubleshooting connectivity in dedicated SQL pool (formerly SQL DW).
Last updated : 03/27/2019
Previously updated : 03/27/2019
+ - seo-lt-2019
+ - azure-synapse
+ - devx-track-csharp
# Troubleshooting connectivity issues in dedicated SQL pool (formerly SQL DW)
For more information on errors 40914 and 40615, refer to [vNet service endpoint
## Still having connectivity issues?
Create a [support ticket](sql-data-warehouse-get-started-create-support-ticket.md) so the engineering team can support you.
synapse-analytics Sql Data Warehouse Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-videos.md
Title: Videos for Azure Synapse Analytics
description: Links to various video playlists for Azure Synapse Analytics.
Last updated : 07/18/2023
Previously updated : 07/18/2023
# Azure Synapse Analytics - dedicated SQL pool (formerly SQL DW) Videos
synapse-analytics Sql Data Warehouse Workload Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-classification.md
Title: Workload classification for dedicated SQL pool
description: Guidance for using classification to manage query concurrency, importance, and compute resources for dedicated SQL pool in Azure Synapse Analytics.
Last updated : 01/24/2022
Previously updated : 01/24/2022
synapse-analytics Sql Data Warehouse Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-importance.md
Title: Workload importance
description: Guidance for setting importance for dedicated SQL pool queries in Azure Synapse Analytics.
Last updated : 02/04/2020
Previously updated : 02/04/2020
synapse-analytics Sql Data Warehouse Workload Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-isolation.md
Title: Workload isolation
description: Guidance for setting workload isolation with workload groups in Azure Synapse Analytics.
Last updated : 11/16/2021
Previously updated : 11/16/2021
synapse-analytics Sql Data Warehouse Workload Management Portal Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management-portal-monitor.md
Title: Workload management portal monitoring
description: Guidance for workload management portal monitoring in Azure Synapse Analytics.
Last updated : 03/01/2021
Previously updated : 03/01/2021
synapse-analytics Sql Data Warehouse Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management.md
Title: Workload management
description: Guidance for implementing workload management in Azure Synapse Analytics.
Last updated : 02/04/2020
Previously updated : 02/04/2020
synapse-analytics Striim Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/striim-quickstart.md
Title: Striim quick start
description: Get started quickly with Striim and Azure Synapse Analytics.
Last updated : 10/12/2018
Previously updated : 10/12/2018
synapse-analytics Upgrade To Latest Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/upgrade-to-latest-generation.md
Title: Upgrade to the latest generation of dedicated SQL pool (formerly SQL DW)
description: Upgrade Azure Synapse Analytics dedicated SQL pool (formerly SQL DW) to the latest generation of Azure hardware and storage architecture.
Last updated : 02/19/2019
Previously updated : 02/19/2019
+ - seo-lt-2019
+ - devx-track-azurepowershell
# Optimize performance by upgrading dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics What Is A Data Warehouse Unit Dwu Cdwu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu.md
Title: Data Warehouse Units (DWUs) for dedicated SQL pool (formerly SQL DW)
description: Recommendations on choosing the ideal number of data warehouse units (DWUs) to optimize price and performance, and how to change the number of units.
Last updated : 11/22/2019
Previously updated : 11/22/2019
synapse-analytics Best Practices Dedicated Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-dedicated-sql-pool.md
Title: Best practices for dedicated SQL pools
description: Recommendations and best practices you should know as you work with dedicated SQL pools.
Last updated : 09/22/2022
Previously updated : 09/22/2022
# Best practices for dedicated SQL pools in Azure Synapse Analytics
synapse-analytics Best Practices Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-serverless-sql-pool.md
Title: Best practices for serverless SQL pool
description: Recommendations and best practices for working with serverless SQL pool.
Last updated 02/15/2023
synapse-analytics Data Load Columnstore Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/data-load-columnstore-compression.md
Title: Improve columnstore index performance
description: Reduce memory requirements or increase the available memory to maximize the number of rows a columnstore index compresses into each rowgroup.
Last updated : 10/18/2021
Previously updated : 10/18/2021
synapse-analytics Data Loading Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/data-loading-best-practices.md
Title: Data loading best practices for dedicated SQL pools
description: Recommendations and performance optimizations for loading data into a dedicated SQL pool in Azure Synapse Analytics.
Last updated : 08/26/2021
Previously updated : 08/26/2021
No other changes to underlying external data sources are needed.
- To learn more about PolyBase and designing an Extract, Load, and Transform (ELT) process, see [Design ELT for Azure Synapse Analytics](../sql-data-warehouse/design-elt-data-loading.md?context=/azure/synapse-analytics/context/context).
- For a loading tutorial, [Use PolyBase to load data from Azure blob storage to Azure Synapse Analytics](../sql-data-warehouse/load-data-from-azure-blob-storage-using-copy.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json).
- To monitor data loads, see [Monitor your workload using DMVs](../sql-data-warehouse/sql-data-warehouse-manage-monitor.md?context=/azure/synapse-analytics/context/context).
synapse-analytics Develop Dynamic Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-dynamic-sql.md
Title: Use dynamic SQL in Synapse SQL
description: Tips for using dynamic SQL in Synapse SQL.
Last updated : 04/15/2020
Previously updated : 04/15/2020
# Dynamic SQL in Synapse SQL
synapse-analytics Develop Group By Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-group-by-options.md
Title: Use GROUP BY options in Synapse SQL
description: Synapse SQL allows for developing solutions by implementing different GROUP BY options.
Last updated : 04/15/2020
Previously updated : 04/15/2020
synapse-analytics Develop Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-label.md
Title: Use query labels in Synapse SQL
description: Included in this article are essential tips for using query labels in Synapse SQL.
Last updated : 04/15/2020
Previously updated : 04/15/2020
# Use query labels in Synapse SQL
synapse-analytics Develop Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-loops.md
Title: Use T-SQL loops
description: Tips for using T-SQL loops, replacing cursors, and developing related solutions with Synapse SQL in Azure Synapse Analytics.
Last updated : 04/15/2020
Previously updated : 04/15/2020
# Use T-SQL loops with Synapse SQL in Azure Synapse Analytics
DROP TABLE #tbl;
## Next steps
For more development tips, see [development overview](develop-overview.md).
synapse-analytics Develop Materialized View Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-materialized-view-performance-tuning.md
Title: Performance tuning with materialized views
description: Recommendations and considerations for materialized views to improve your query performance.
Last updated : 03/01/2023
Previously updated : 03/01/2023
# Performance tuning with materialized views using dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Develop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-overview.md
Title: Resources for developing Synapse SQL features
description: Development concepts, design decisions, recommendations, and coding techniques for Synapse SQL.
Last updated : 03/23/2022
Previously updated : 03/23/2022
# Design decisions and coding techniques for Synapse SQL features in Azure Synapse Analytics
synapse-analytics Develop Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-stored-procedures.md
Title: Use stored procedures
description: Tips for implementing stored procedures using Synapse SQL in Azure Synapse Analytics for solution development.
Last updated : 11/03/2020
Previously updated : 11/03/2020
# Stored procedures using Synapse SQL in Azure Synapse Analytics
synapse-analytics Develop Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-data-types.md
Title: Table data types in Synapse SQL
description: Recommendations for defining table data types in Synapse SQL.
Last updated : 04/15/2020
Previously updated : 04/15/2020
# Table data types in Synapse SQL
synapse-analytics Develop Tables Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-overview.md
Title: Design tables using Synapse SQL
description: Introduction to designing tables in Synapse SQL.
Last updated : 04/15/2020
Previously updated : 04/15/2020
# Design tables using Synapse SQL in Azure Synapse Analytics
ORDER BY distribution_id
## Next steps
After creating the tables for your data warehouse, the next step is to load data into the table. For a loading tutorial, see [Loading data into dedicated SQL pool](../sql-data-warehouse/load-data-wideworldimportersdw.md?context=/azure/synapse-analytics/context/context#load-the-data-into-sql-pool).
synapse-analytics Develop Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-statistics.md
Title: Create and update statistics using Azure Synapse SQL resources
description: Recommendations and examples for creating and updating query-optimization statistics in Azure Synapse SQL.
Last updated : 10/11/2022
Previously updated : 10/11/2022
# Statistics in Synapse SQL
synapse-analytics Develop Tables Temporary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-temporary.md
Title: Use temporary tables in Synapse SQL
description: Essential guidance for using temporary tables in Synapse SQL.
Last updated : 11/02/2021
Previously updated : 11/02/2021
# Temporary tables in Synapse SQL
synapse-analytics Develop Transaction Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-transaction-best-practices.md
Title: Optimize transactions for dedicated SQL pool
description: Learn how to optimize the performance of your transactional code in dedicated SQL pool.
Last updated : 04/15/2020
Previously updated : 04/15/2020
# Optimize transactions with dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Load Data Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/load-data-overview.md
Title: Design a PolyBase data loading strategy for dedicated SQL pool
description: Instead of ETL, design an Extract, Load, and Transform (ELT) process for loading data with dedicated SQL.
Last updated : 09/20/2022
Previously updated : 09/20/2022
# Design a PolyBase data loading strategy for dedicated SQL pool in Azure Synapse Analytics
traffic-manager Traffic Manager Configure Subnet Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-subnet-routing-method.md
Add the two VMs running the IIS servers - *myIISVMEastUS* & *myIISVMWEurope* to
| Name | myTestWebSiteEndpoint |
| Target resource type | Public IP Address |
| Target resource | **Choose a Public IP address** to show the listing of resources with Public IP addresses under the same subscription. In **Resource**, select the public IP address named *myIISVMEastUS-ip*. This is the public IP address of the IIS server VM in East US.|
- | Subnet routing settings | Add the IP address of *myVMEastUS* test VM. Any user query originating from this VM will be directed to the *myTestWebSiteEndpoint*. |
+ | Subnet routing settings | Add the IP address of the recursive DNS resolver used by *myVMEastUS* test VM. Any user query originating from this VM will be directed to the *myTestWebSiteEndpoint*. |
-4. Repeat steps 2 and 3 to add another endpoint named *myProductionEndpoint* for the public IP address *myIISVMWEurope-ip* that is associated with the IIS server VM named *myIISVMWEurope*. For **Subnet routing settings**, add the IP address of the test VM - *myVMWestEurope*. Any user query from this test VM will be routed to the endpoint - *myProductionWebsiteEndpoint*.
+4. Repeat steps 2 and 3 to add another endpoint named *myProductionEndpoint* for the public IP address *myIISVMWEurope-ip* that is associated with the IIS server VM named *myIISVMWEurope*. For **Subnet routing settings**, add the IP address of the recursive DNS resolver used by test VM - *myVMWestEurope*. Any user query from this test VM via its DNS resolver will be routed to the endpoint - *myProductionWebsiteEndpoint*.
5. When the addition of both endpoints is complete, they are displayed in **Traffic Manager profile** along with their monitoring status as **Online**. ![Add a Traffic Manager endpoint](./media/traffic-manager-subnet-routing-method/customize-endpoint-with-subnet-routing-eastus.png)
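To sanity-check the subnet routing once the endpoints are in place, you can resolve the profile's DNS name from each test VM and note which endpoint's IP address is returned. A minimal sketch, assuming a hypothetical profile DNS name of `myTrafficManagerProfile.trafficmanager.net`:

```bash
# Run from the test VM (for example, myVMEastUS). The answer should be the public
# IP of the endpoint mapped to that VM's recursive DNS resolver in the subnet map.
nslookup myTrafficManagerProfile.trafficmanager.net
```

Keep in mind that Traffic Manager evaluates the subnet map against the source IP it sees, which is normally the recursive DNS resolver's address rather than the VM's own IP.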
update-center Guidance Patching Sql Server Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/guidance-patching-sql-server-azure-vm.md
Title: Guidance on patching for SQL Server on Azure VMs using Azure Update Manager.
-description: An overview on patching guidance for SQL Server on Azure VMs using Azure Update Manager
+ Title: Guidance on patching for SQL Server on Azure VMs (preview) using Azure Update Manager.
+description: An overview on patching guidance for SQL Server on Azure VMs (preview) using Azure Update Manager
Previously updated : 09/18/2023
Last updated : 09/27/2023
-# Guidance on patching for SQL Server on Azure VMs using Azure Update Manager
+# Guidance on patching for SQL Server on Azure VMs (preview) using Azure Update Manager
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
update-center Manage Updates Customized Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-updates-customized-images.md
description: This article describes customized image support, how to register an
Previously updated : 09/18/2023
Last updated : 09/27/2023
With marketplace images, support is validated even before Update Manager operati
For instance, an assessment call attempts to fetch the latest patch that's available from the image's OS family to check support. It stores this support-related data in an Azure Resource Graph table, which you can query to see the support status for your Azure Compute Gallery image.
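As a rough illustration of querying that table with the Azure CLI, the sketch below uses the resource-graph extension; the table name comes from this article, but the exact resource type filter and property path are assumptions and may need adjusting:

```bash
# Requires the resource-graph extension: az extension add --name resource-graph
az graph query -q "patchassessmentresources
| where type =~ 'microsoft.compute/virtualmachines/patchassessmentresults'
| project id, supportState = properties.configurationStatus.vmGuestPatchReadiness.detectedVMGuestPatchSupportState"
```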
-## Check the preview
-
-Start the asynchronous support check by using either one of the following APIs:
--- API Action Invocation:
- 1. [Assess patches](/rest/api/compute/virtual-machines/assess-patches?tabs=HTTP).
- 1. [Install patches](/rest/api/compute/virtual-machines/install-patches?tabs=HTTP).
--- Portal operations. Try the preview:
- 1. [On-demand check for updates](view-updates.md)
- 1. [One-time update](deploy-updates.md)
-
-Validate the VM support state for Azure Resource Graph:
- Table: `patchassessmentresources`
- Resource: `Microsoft.compute/virtualmachines/patchassessmentresults/configurationStatus.vmGuestPatchReadiness.detectedVMGuestPatchSupportState. [Possible values: Unknown, Supported, Unsupported, UnableToDetermine]`
-
- :::image type="content" source="./media/manage-updates-customized-images/resource-graph-view.png" alt-text="Screenshot that shows the resource in Azure Resource Graph Explorer.":::
-
-We recommend that you run the Assess Patches API after the VM is provisioned and the prerequisites are set for public preview. This action validates the support state of the VM. If the VM is supported, you can run the Install Patches API to begin the patching.
## Limitations
update-center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md
Last updated 09/18/2023

# Support matrix for Azure Update Manager
Update Manager supports operating system updates for both Windows and Linux.
Update Manager doesn't support driver updates.
+### Extended Security Updates (ESU) for Windows Server
+
+Using Azure Update Manager, you can deploy Extended Security Updates for your Azure Arc-enabled Windows Server 2012 / R2 machines. To enroll in Windows Server 2012 Extended Security Updates, follow the guidance on [How to get Extended Security Updates (ESU) for Windows Server 2012 and 2012 R2](/windows-server/get-started/extended-security-updates-deploy#extended-security-updates-enabled-by-azure-arc)
### First-party updates on Windows

By default, the Windows Update client is configured to provide updates only for the Windows operating system. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products. Updates include security patches for Microsoft SQL Server and other Microsoft software.
update-center Whats Upcoming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-upcoming.md
Previously updated : 09/20/2023
Last updated : 09/27/2023
# What are the upcoming features in Azure Update Manager
The article [What's new in Azure Update Manager](whats-new.md) contains updates
## Expanded support for operating system and VM images
-Expanded support for [specialized images](../virtual-machines/linux/imaging.md#specialized-images), virtual machines created by Azure Migrate, Azure Backup, and Azure Site Recovery, and Azure Marketplace images are upcoming in the third quarter of 2023. Until then, we recommend that you continue using [Automation Update Management](../automation/update-management/overview.md) for these images. For more information, see [Support matrix for Update Manager](support-matrix.md#supported-operating-systems).
+Expanded support for [specialized images](../virtual-machines/linux/imaging.md#specialized-images), virtual machines created by Azure Migrate, Azure Backup, and Azure Site Recovery, and Azure Marketplace images are upcoming in the fourth quarter of 2023. Until then, we recommend that you continue using [Automation Update Management](../automation/update-management/overview.md) for these images. For more information, see [Support matrix for Update Manager](support-matrix.md#supported-operating-systems).
-Update Manager will be declared generally available soon.
## Prescript and postscript
virtual-desktop Azure Ad Joined Session Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-ad-joined-session-hosts.md
The following known limitations may affect access to your on-premises or Active
- Azure Virtual Desktop (classic) doesn't support Azure AD-joined VMs.
- Azure AD-joined VMs don't currently support external identities, such as Azure AD Business-to-Business (B2B) and Azure AD Business-to-Consumer (B2C).
-- Azure AD-joined VMs can only access [Azure Files shares](create-profile-container-azure-ad.md) for hybrid users using Azure AD Kerberos for FSLogix user profiles.
+- Azure AD-joined VMs can only access [Azure Files shares](create-profile-container-azure-ad.md) or [Azure NetApp Files shares](create-fslogix-profile-container.md) for hybrid users using Azure AD Kerberos for FSLogix user profiles.
- The [Remote Desktop app for Windows](users/connect-microsoft-store.md) doesn't support Azure AD-joined VMs.

## Deploy Azure AD-joined VMs
You can enable a single sign-on experience using Azure AD authentication when ac
## User profiles
-You can use FSLogix profile containers with Azure AD-joined VMs when you store them on Azure Files while using hybrid user accounts. For more information, see [Create a profile container with Azure Files and Azure AD](create-profile-container-azure-ad.md).
+You can use FSLogix profile containers with Azure AD-joined VMs when you store them on Azure Files or Azure NetApp Files while using hybrid user accounts. For more information, see [Create a profile container with Azure Files and Azure AD](create-profile-container-azure-ad.md).
## Accessing on-premises resources
Now that you've deployed some Azure AD joined VMs, we recommend enabling single
- [Connect with the Windows Desktop client](users/connect-windows.md)
- [Connect with the web client](users/connect-web.md)
- [Troubleshoot connections to Azure AD-joined VMs](troubleshoot-azure-ad-connections.md)
+- [Create a profile container with Azure NetApp Files](create-fslogix-profile-container.md)
virtual-desktop Create Fslogix Profile Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-fslogix-profile-container.md
Last updated 07/01/2020
-# Create a profile container with Azure NetApp Files and AD DS
+# Create a profile container with Azure NetApp Files
We recommend using FSLogix profile containers as a user profile solution for the [Azure Virtual Desktop service](overview.md). FSLogix profile containers store a complete user profile in a single container and are designed to roam profiles in non-persistent remote computing environments like Azure Virtual Desktop. When you sign in, the container dynamically attaches to the computing environment using a locally supported virtual hard disk (VHD) and Hyper-V virtual hard disk (VHDX). These advanced filter-driver technologies allow the user profile to be immediately available and appear in the system exactly like a local user profile. To learn more about FSLogix profile containers, see [FSLogix profile containers and Azure Files](fslogix-containers-azure-files.md).
The instructions in this guide are specifically for Azure Virtual Desktop users.
>[!NOTE]
>If you're looking for comparison material about the different FSLogix Profile Container storage options on Azure, see [Storage options for FSLogix profile containers](store-fslogix-profile.md).
+## Considerations
+
+FSLogix profile containers on Azure NetApp Files can be accessed by users authenticating from Active Directory Domain Services (AD DS) and from [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md), allowing Azure AD users to access profile containers without requiring line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined virtual machines (VMs). For more information, see [Access SMB volumes from Azure AD joined Windows VMs](../azure-netapp-files/access-smb-volume-from-windows-client.md).
## Prerequisites

Before you can create an FSLogix profile container for a host pool, you must:
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
Application Health Extensions has two options available: **Binary Health States*
| -- | -- | -- |
| Available Health States | Two available states: *Healthy*, *Unhealthy* | Four available states: *Healthy*, *Unhealthy*, *Initializing*, *Unknown*<sup>1</sup> |
| Sending Health Signals | Health signals are sent through HTTP/HTTPS response codes or TCP connections. | Health signals on HTTP/HTTPS protocol are sent through the probe response code and response body. Health signals through TCP protocol remain unchanged from Binary Health States. |
-| Identifying *Unhealthy* Instances | Instances will automatically fall into *Unhealthy* state if a *Healthy* signal isn't received from the application. An *Unhealthy* instance can indicate either an issue with the extension configuration (for example, unreachable endpoint) or an issue with the application (for example, non-2xx status code). | Instances will only go into an *Unhealthy* state if the application emits an *Unhealthy* probe response. Users are responsible for implementing custom logic to identify and flag instances with *Unhealthy* applications<sup>2</sup>. Instances with incorrect extension settings (for example, unreachable endpoint) or invalid health probe responses will fall under the *Unknown* state<sup>2</sup>. |
+| Identifying *Unhealthy* Instances | Instances will automatically fall into *Unhealthy* state if a *Healthy* signal isn't received from the application. An *Unhealthy* instance can indicate either an issue with the extension configuration (for example, unreachable endpoint) or an issue with the application (for example, non-200 status code). | Instances will only go into an *Unhealthy* state if the application emits an *Unhealthy* probe response. Users are responsible for implementing custom logic to identify and flag instances with *Unhealthy* applications<sup>2</sup>. Instances with incorrect extension settings (for example, unreachable endpoint) or invalid health probe responses will fall under the *Unknown* state<sup>2</sup>. |
| *Initializing* state for newly created instances | *Initializing* state isn't available. Newly created instances may take some time before settling into a steady state. | *Initializing* state allows newly created instances to settle into a steady Health State before making the instance eligible for rolling upgrades or instance repair operations. |
| HTTP/HTTPS protocol | Supported | Supported |
| TCP protocol | Supported | Limited Support – *Unknown* state is unavailable on TCP protocol. See [Rich Health States protocol table](#rich-health-states) for Health State behaviors on TCP. |
Binary Health State reporting contains two Health States, *Healthy* and *Unhealt
| Protocol | Health State | Description |
| -- | -- | -- |
-| http/https | Healthy | To send a *Healthy* signal, the application is expected to return a 2xx response code. |
-| http/https | Unhealthy | The instance will be marked as *Unhealthy* if a 2xx response code isn't received from the application. |
+| http/https | Healthy | To send a *Healthy* signal, the application is expected to return a 200 response code. |
+| http/https | Unhealthy | The instance will be marked as *Unhealthy* if a 200 response code isn't received from the application. |
**TCP Protocol**
Binary Health State reporting contains two Health States, *Healthy* and *Unhealt
| TCP | Unhealthy | The instance will be marked as *Unhealthy* if a failed or incomplete handshake occurred with the provided application endpoint. |

Some scenarios that may result in an *Unhealthy* state include:

-- When the application endpoint returns a non-2xx status code
+- When the application endpoint returns a non-200 status code
- When there's no application endpoint configured inside the virtual machine instances to provide application health status
- When the application endpoint is incorrectly configured
- When the application endpoint isn't reachable
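For context on how the probe settings behind these states are supplied, here is a minimal sketch of configuring the Application Health extension on a Linux scale set with the Azure CLI. The resource names and the `/health` request path are placeholders, and the application behind that path is assumed to return HTTP 200 when healthy:

```bash
# Configure Binary Health States probing against http://localhost:8080/health.
az vmss extension set \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
  --name ApplicationHealthLinux \
  --publisher Microsoft.ManagedServices \
  --settings '{"protocol": "http", "port": 8080, "requestPath": "/health"}'

# Apply the updated scale set model to the existing instances.
az vmss update-instances --resource-group myResourceGroup --name myScaleSet --instance-ids "*"
```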
virtual-machines Infrastructure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/infrastructure-automation.md
Last updated 09/21/2023
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
Azure dedicated host instances and SQL hybrid benefits aren't eligible for Azure
You can invoke AHB at the time of virtual machine creation. Benefits of doing so are threefold:

- You can provision both PAYG and BYOS virtual machines by using the same image and process.
-- It enables future licensing mode changes. These changes aren't available with a BYOS-only image or if you bring your own virtual machine.
+- It enables future licensing mode changes.
- The virtual machine is connected to Red Hat Update Infrastructure (RHUI) by default, to help keep it up to date and secure. You can change the update mechanism after deployment at any time.

#### [Azure portal](#tab/ahbNewPortal)
You can use the `az vm extension` and `az vm update` commands to update new virt
- SLES License Types: SLES_STANDARD, SLES_SAP, SLES_HPC

### Enabling AHB on Existing VM

#### [Azure portal](#tab/ahbExistingPortal)
You can use the `az vm extension` and `az vm update` commands to update existing
- SLES License Types: SLES_STANDARD, SLES_SAP, SLES_HPC

## Check the current licensing model of an AHB enabled VM

You can view the Azure Hybrid Benefit status of a virtual machine by using the Azure CLI or by using Azure Instance Metadata Service.
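A quick sketch of both checks (resource names are placeholders; the IMDS API version shown is one that exposes `licenseType`):

```bash
# Azure CLI: read the licenseType property from the VM model.
az vm get-instance-view \
  --resource-group myResourceGroup \
  --name myVmName \
  --query licenseType --output tsv

# From inside the VM: query Azure Instance Metadata Service.
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/instance/compute/licenseType?api-version=2021-02-01&format=text"
```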
To start using Azure Hybrid Benefit for SUSE:
2. Activate the subscription in the SUSE Customer Center.
3. Register your virtual machines that are receiving Azure Hybrid Benefit with the SUSE Customer Center to get the updates from the SUSE Customer Center.

### Convert to BYOS using the Azure CLI

#### [Red Hat (RHEL)](#tab/rhelAzcliByosConv)

* For RHEL virtual machines, run the command with a `--license-type` parameter of `RHEL_BYOS`.

```azurecli
# This will enable BYOS on a RHEL virtual machine using Azure Hybrid Benefit
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BYOS
```
#### [SUSE (SLES)](#tab/slesAzcliByosConv)

* For SLES virtual machines, run the command with a `--license-type` parameter of `SLES_BYOS`.

```azurecli
# This will enable BYOS on a SLES virtual machine
az vm update -g myResourceGroup -n myVmName --license-type SLES_BYOS
```
`sudo zypper repos`

## BYOS to PAYG conversions

Converting from a Bring-your-own-subscription to a Pay-as-you-go model.

#### [Single VM](#tab/paygclisingle)
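As with the BYOS conversions above, the PAYG direction is just another `--license-type` value. A minimal sketch for a single SLES virtual machine, using one of the PAYG license types listed earlier (resource names are placeholders):

```bash
# Convert a single BYOS SLES virtual machine back to PAYG billing.
az vm update -g myResourceGroup -n myVmName --license-type SLES_STANDARD
```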
If you use Azure Hybrid Benefit BYOS to PAYG capability for SLES and want more i
* [Learn how to create and update virtual machines and add license types (RHEL_BYOS, SLES_BYOS) for Azure Hybrid Benefit by using the Azure CLI](/cli/azure/vm)
* [Learn about Azure Hybrid Benefit on Virtual Machine Scale Sets for RHEL and SLES and how to use it](../../virtual-machine-scale-sets/azure-hybrid-benefit-linux.md)
virtual-machines Create Upload Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-ubuntu.md
Ubuntu now publishes official Azure VHDs for download at [https://cloud-images.u
* Ubuntu 18.04/Bionic: [bionic-server-cloudimg-amd64-azure.vhd.zip](https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64-azure.vhd.tar.gz)
* Ubuntu 20.04/Focal: [focal-server-cloudimg-amd64-azure.vhd.zip](https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-azure.vhd.tar.gz)
+* Ubuntu 22.04/Jammy: [jammy-server-cloudimg-amd64-azure.vhd.zip](https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64-azure.vhd.tar.gz)
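If you prefer to start from one of these published VHDs instead of building your own, a minimal sketch of fetching and unpacking the Jammy image linked above:

```bash
# Download the Ubuntu 22.04 Azure-tailored VHD archive and extract the .vhd it contains.
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64-azure.vhd.tar.gz
tar -xzf jammy-server-cloudimg-amd64-azure.vhd.tar.gz
```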
## Prerequisites

This article assumes that you've already installed an Ubuntu Linux operating system to a virtual hard disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
This article assumes that you've already installed an Ubuntu Linux operating sys
Before editing `/etc/apt/sources.list`, it's recommended to make a backup:
- ```bash
- sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
- ```
+```bash
+sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
+```
- Ubuntu 18.04 and Ubuntu 20.04:
-
- ```bash
- sudo sed -i 's/http:\/\/archive\.ubuntu\.com\/ubuntu\//http:\/\/azure\.archive\.ubuntu\.com\/ubuntu\//g' /etc/apt/sources.list
- sudo sed -i 's/http:\/\/[a-z][a-z]\.archive\.ubuntu\.com\/ubuntu\//http:\/\/azure\.archive\.ubuntu\.com\/ubuntu\//g' /etc/apt/sources.list
- sudo sed -i 's/http:\/\/security\.ubuntu\.com\/ubuntu\//http:\/\/azure\.archive\.ubuntu\.com\/ubuntu\//g' /etc/apt/sources.list
- sudo sed -i 's/http:\/\/[a-z][a-z]\.security\.ubuntu\.com\/ubuntu\//http:\/\/azure\.archive\.ubuntu\.com\/ubuntu\//g' /etc/apt/sources.list
- sudo apt-get update
- ```
+```bash
+sudo sed -i 's#http://archive\.ubuntu\.com/ubuntu#http://azure\.archive\.ubuntu\.com/ubuntu#g' /etc/apt/sources.list
+sudo sed -i 's#http://[a-z][a-z]\.archive\.ubuntu\.com/ubuntu#http://azure\.archive\.ubuntu\.com/ubuntu#g' /etc/apt/sources.list
+sudo sed -i 's#http://security\.ubuntu\.com/ubuntu#http://azure\.archive\.ubuntu\.com/ubuntu#g' /etc/apt/sources.list
+sudo sed -i 's#http://[a-z][a-z]\.security\.ubuntu\.com/ubuntu#http://azure\.archive\.ubuntu\.com/ubuntu#g' /etc/apt/sources.list
+sudo apt-get update
+```
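As a quick sanity check after the substitutions (not part of the upstream steps), you can confirm that the repository entries now point at the Azure mirror:

```bash
# Every deb line should now reference azure.archive.ubuntu.com.
grep -n 'ubuntu.com' /etc/apt/sources.list
```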
4. The Ubuntu Azure images are now using the [Azure-tailored kernel](https://ubuntu.com/blog/microsoft-and-canonical-increase-velocity-with-azure-tailored-kernel). Update the operating system to the latest Azure-tailored kernel and install Azure Linux tools (including Hyper-V dependencies) by running the following commands:
- - Ubuntu 18.04 and Ubuntu 20.04:
-
- ```bash
- sudo apt update
- sudo apt install linux-azure linux-image-azure linux-headers-azure linux-tools-common linux-cloud-tools-common linux-tools-azure linux-cloud-tools-azure
- ```
- - Recommended:
- ```bash
- sudo apt full-upgrade
- sudo reboot
- ```
+```bash
+sudo apt update
+sudo apt install linux-azure linux-image-azure linux-headers-azure linux-tools-common linux-cloud-tools-common linux-tools-azure linux-cloud-tools-azure
+sudo apt full-upgrade
+sudo reboot
+```
5. Modify the kernel boot line for Grub to include additional kernel parameters for Azure. To do this open `/etc/default/grub` in a text editor, find the variable called `GRUB_CMDLINE_LINUX_DEFAULT` (or add it if needed) and edit it to include the following parameters:
- ```config
- GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 rootdelay=300 quiet splash"
- ```
+```config
+GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 rootdelay=300 quiet splash"
+```
- Save and close this file, and then run `sudo update-grub`. This will ensure all console messages are sent to the first serial port, which can assist Azure technical support with debugging issues.
+Save and close this file, and then run `sudo update-grub`. This will ensure all console messages are sent to the first serial port, which can assist Azure technical support with debugging issues.
6. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.

7. Install cloud-init (the provisioning agent) and the Azure Linux Agent (the guest extensions handler). Cloud-init uses `netplan` to configure the system network configuration (during provisioning and each subsequent boot) and `gdisk` to partition resource disks.
- ```bash
- sudo apt update
- sudo apt install cloud-init gdisk netplan.io walinuxagent && systemctl stop walinuxagent
- ```
+```bash
+sudo apt update
+sudo apt install cloud-init gdisk netplan.io walinuxagent && systemctl stop walinuxagent
+```
- > [!Note]
- > The `walinuxagent` package may remove the `NetworkManager` and `NetworkManager-gnome` packages, if they are installed.
+> [!Note]
+> The `walinuxagent` package may remove the `NetworkManager` and `NetworkManager-gnome` packages, if they are installed.
8. Remove cloud-init default configs and leftover `netplan` artifacts that may conflict with cloud-init provisioning on Azure:
- ```bash
- sudo rm -f /etc/cloud/cloud.cfg.d/50-curtin-networking.cfg /etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg /etc/cloud/cloud.cfg.d/99-installer.cfg /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg
- sudo rm -f /etc/cloud/ds-identify.cfg
- sudo rm -f /etc/netplan/*.yaml
- ```
+```bash
+sudo rm -f /etc/cloud/cloud.cfg.d/50-curtin-networking.cfg /etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg /etc/cloud/cloud.cfg.d/99-installer.cfg /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg
+sudo rm -f /etc/cloud/ds-identify.cfg
+sudo rm -f /etc/netplan/*.yaml
+```
9. Configure cloud-init to provision the system using the Azure datasource:
- ```bash
- sudo cat > /etc/cloud/cloud.cfg.d/90_dpkg.cfg << EOF
- datasource_list: [ Azure ]
- EOF
-
- cat > /etc/cloud/cloud.cfg.d/90-azure.cfg << EOF
- system_info:
- package_mirrors:
- - arches: [i386, amd64]
- failsafe:
- primary: http://archive.ubuntu.com/ubuntu
- security: http://security.ubuntu.com/ubuntu
- search:
- primary:
- - http://azure.archive.ubuntu.com/ubuntu/
- security: []
- - arches: [armhf, armel, default]
- failsafe:
- primary: http://ports.ubuntu.com/ubuntu-ports
- security: http://ports.ubuntu.com/ubuntu-ports
- EOF
-
- cat > /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg << EOF
- reporting:
- logging:
- type: log
- telemetry:
- type: hyperv
- EOF
- ```
+```bash
+cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/90_dpkg.cfg
+datasource_list: [ Azure ]
+EOF
+
+cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/90-azure.cfg
+system_info:
+ package_mirrors:
+ - arches: [i386, amd64]
+ failsafe:
+ primary: http://archive.ubuntu.com/ubuntu
+ security: http://security.ubuntu.com/ubuntu
+ search:
+ primary:
+ - http://azure.archive.ubuntu.com/ubuntu/
+ security: []
+ - arches: [armhf, armel, default]
+ failsafe:
+ primary: http://ports.ubuntu.com/ubuntu-ports
+ security: http://ports.ubuntu.com/ubuntu-ports
+EOF
+
+cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg
+reporting:
+ logging:
+ type: log
+ telemetry:
+ type: hyperv
+EOF
+```
10. Configure the Azure Linux agent to rely on cloud-init to perform provisioning. Have a look at the [WALinuxAgent project](https://github.com/Azure/WALinuxAgent) for more information on these options.
- ```bash
- sudo sed -i 's/Provisioning.Enabled=y/Provisioning.Enabled=n/g' /etc/waagent.conf
- sudo sed -i 's/Provisioning.UseCloudInit=n/Provisioning.UseCloudInit=y/g' /etc/waagent.conf
- sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
- sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
- ```
-
- ```bash
- sudo cat >> /etc/waagent.conf << EOF
- # For Azure Linux agent version >= 2.2.45, this is the option to configure,
- # enable, or disable the provisioning behavior of the Linux agent.
- # Accepted values are auto (default), waagent, cloud-init, or disabled.
- # A value of auto means that the agent will rely on cloud-init to handle
- # provisioning if it is installed and enabled, which in this case it will.
- Provisioning.Agent=auto
- EOF
- ```
+```bash
+sudo sed -i 's/Provisioning.Enabled=y/Provisioning.Enabled=n/g' /etc/waagent.conf
+sudo sed -i 's/Provisioning.UseCloudInit=n/Provisioning.UseCloudInit=y/g' /etc/waagent.conf
+sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
+sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
+```
+
+```bash
+cat <<EOF | sudo tee -a /etc/waagent.conf
+# For Azure Linux agent version >= 2.2.45, this is the option to configure,
+# enable, or disable the provisioning behavior of the Linux agent.
+# Accepted values are auto (default), waagent, cloud-init, or disabled.
+# A value of auto means that the agent will rely on cloud-init to handle
+# provisioning if it is installed and enabled, which in this case it will.
+Provisioning.Agent=auto
+EOF
+```
11. Clean cloud-init and Azure Linux agent runtime artifacts and logs:
- ```bash
- sudo cloud-init clean --logs --seed
- sudo rm -rf /var/lib/cloud/
- sudo systemctl stop walinuxagent.service
- sudo rm -rf /var/lib/waagent/
- sudo rm -f /var/log/waagent.log
- ```
+```bash
+sudo cloud-init clean --logs --seed
+sudo rm -rf /var/lib/cloud/
+sudo systemctl stop walinuxagent.service
+sudo rm -rf /var/lib/waagent/
+sudo rm -f /var/log/waagent.log
+```
12. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
- > [!NOTE]
- > The `sudo waagent -force -deprovision+user` command generalizes the image by attempting to clean the system and make it suitable for re-provisioning. The `+user` option deletes the last provisioned user account and associated data.
+> [!NOTE]
+> The `sudo waagent -force -deprovision+user` command generalizes the image by attempting to clean the system and make it suitable for re-provisioning. The `+user` option deletes the last provisioned user account and associated data.
- > [!WARNING]
- > Deprovisioning using the command above doesn't guarantee the image is cleared of all sensitive information and is suitable for redistribution.
+> [!WARNING]
+> Deprovisioning using the command above doesn't guarantee the image is cleared of all sensitive information and is suitable for redistribution.
- ```bash
- sudo waagent -force -deprovision+user
- sudo rm -f ~/.bash_history
- sudo export HISTSIZE=0
- ```
+```bash
+sudo waagent -force -deprovision+user
+sudo rm -f ~/.bash_history
+```
13. Click **Action -> Shut Down** in Hyper-V Manager.
This article assumes that you've already installed an Ubuntu Linux operating sys
15. To bring a Generation 2 VM to Azure, follow these steps:
- 1. Change directory to the boot EFI directory:
-
- ```bash
- cd /boot/efi/EFI
- ```
+16. Change directory to the boot EFI directory:
+```bash
+cd /boot/efi/EFI
+```
- 2. Copy the ubuntu directory to a new directory named boot:
-
- ```bash
- sudo cp -r ubuntu/ boot
- ```
+17. Copy the ubuntu directory to a new directory named boot:
+```bash
+sudo cp -r ubuntu/ boot
+```
- 3. Change directory to the newly created boot directory:
-
- ```bash
- cd boot
- ```
+18. Change directory to the newly created boot directory:
+```bash
+cd boot
+```
- 4. Rename the shimx64.efi file:
-
- ```bash
- sudo mv shimx64.efi bootx64.efi
- ```
-
- 5. Rename the grub.cfg file to bootx64.cfg:
-
- ```bash
- sudo mv grub.cfg bootx64.cfg
- ```
+19. Rename the shimx64.efi file:
+```bash
+sudo mv shimx64.efi bootx64.efi
+```
+
+20. Rename the grub.cfg file to bootx64.cfg:
+```bash
+sudo mv grub.cfg bootx64.cfg
+```
## Next steps
You're now ready to use your Ubuntu Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
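
If you prefer the command line, here's a minimal sketch of one possible upload flow using the Azure CLI. The resource group, storage account, container, and image names are placeholders, and the linked article remains the authoritative procedure:

```bash
# Placeholder names; substitute your own resource group, storage account, and container.
az storage blob upload \
    --account-name mystorageaccount \
    --container-name vhds \
    --name ubuntu2204.vhd \
    --file ./ubuntu2204.vhd \
    --type page

# Create a managed image from the uploaded fixed-size VHD, then a VM from that image.
az image create \
    --resource-group myResourceGroup \
    --name myUbuntuImage \
    --os-type Linux \
    --source https://mystorageaccount.blob.core.windows.net/vhds/ubuntu2204.vhd

az vm create \
    --resource-group myResourceGroup \
    --name myUbuntuVM \
    --image myUbuntuImage \
    --admin-username azureuser \
    --generate-ssh-keys
```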
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
Linux server distributions that are not endorsed by Azure do not support Azure D
| Publisher | Offer | SKU | URN | Volume type supported for encryption |
| --- | --- | --- | --- | --- |
-| Canonical | Ubuntu | 22.04-LTS | Canonical:0001-com-ubuntu-server-focal:22_04-lts:latest | OS and data disk |
-| Canonical | Ubuntu | 22.04-LTS Gen2 | Canonical:0001-com-ubuntu-server-focal:22_04-lts-gen2:latest | OS and data disk |
+| Canonical | Ubuntu | 22.04-LTS | Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest | OS and data disk |
+| Canonical | Ubuntu | 22.04-LTS Gen2 | Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest | OS and data disk |
| Canonical | Ubuntu | 20.04-LTS | Canonical:0001-com-ubuntu-server-focal:20_04-lts:latest | OS and data disk |
| Canonical | Ubuntu | 20.04-DAILY-LTS | Canonical:0001-com-ubuntu-server-focal-daily:20_04-daily-lts:latest | OS and data disk |
| Canonical | Ubuntu | 20.04-LTS Gen2 | Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest | OS and data disk |
virtual-machines Suse Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/suse-create-upload-vhd.md
This article assumes that you have already installed a SUSE or openSUSE Leap Lin
* The VHDX format isn't supported in Azure, only **fixed VHD**. You can convert the disk to VHD format using Hyper-V Manager or the convert-vhd cmdlet (a command-line conversion sketch follows these notes).
* Azure supports Gen1 (BIOS boot) and Gen2 (UEFI boot) virtual machines.
* The `vfat` kernel module must be enabled in the kernel
-* When installing the Linux operating system, use standard partitions rather than logical volume manager (LVM) managed partitions, which is often the default for many installations. Using standard partitions will avoid LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks if preferred.
* Don't configure a swap partition on the OS disk. The Linux agent can be configured to create a swap file on the temporary resource disk. More information about configuring swap space can be found in the steps below.
* All VHDs on Azure must have a virtual size aligned to 1 MB. When converting from a raw disk to VHD, you must ensure that the raw disk size is a multiple of 1 MB before conversion. See [Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more information.

> [!NOTE]
-> **_Cloud-init >= 21.2 removes the udf requirement_**. However, without the udf module enabled, the cdrom won't mount during provisioning, preventing custom data from being applied. A workaround for this is to apply custom data using user data. However, unlike custom data, user data isn't encrypted. https://cloudinit.readthedocs.io/en/latest/topics/format.html
+> **_Cloud-init >= 21.2 removes the udf requirement._** However, without the udf module enabled, the cdrom won't mount during provisioning, preventing custom data from being applied. A workaround is to apply custom data by using user data. However, unlike custom data, user data isn't encrypted. https://cloudinit.readthedocs.io/en/latest/topics/format.html
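+
+If your build environment doesn't have Hyper-V Manager or the convert-vhd cmdlet available, one possible alternative (not covered by this article) is to convert the disk with `qemu-img`. This is only a sketch; the file names are placeholders:
+
+```bash
+# Convert a VHDX to a fixed-size VHD ('vpc' format); force_size avoids CHS-based size rounding.
+qemu-img convert -f vhdx -O vpc -o subformat=fixed,force_size source.vhdx disk.vhd
+```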
## Use SUSE Studio
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
## Prepare SUSE Linux Enterprise Server for Azure
-1. In the center pane of Hyper-V Manager, select the virtual machine.
-2. Click **Connect** to open the window for the virtual machine.
+1. Configure the Azure/Hyper-V modules if required.
+
+   If the hypervisor you use to build the image isn't Hyper-V, the Hyper-V drivers need to be added to the initramfs so the image can boot in Azure.
+
+   Edit the "/etc/dracut.conf" file, add the following line, and then run the `dracut` command to rebuild the initramfs file:
+
+```config
+add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
+```
+
+```bash
+sudo dracut --verbose --force
+```
+
+2. Set up the serial console.
+
+   To work with the serial console, set the following variables in the "/etc/default/grub" file, and then regenerate the GRUB configuration on the server:
+
+```config
+# Add console=ttyS0 and earlyprintk=ttyS0 to the variable
+# remove "splash=silent" and "quiet" options.
+GRUB_CMDLINE_LINUX_DEFAULT="audit=1 no-scroll fbcon=scrollback:0 mitigations=auto security=apparmor crashkernel=228M,high crashkernel=72M,low console=ttyS0 earlyprintk=ttyS0"
+
+# Add "console serial" to GRUB_TERMINAL
+GRUB_TERMINAL="console serial"
+
+# Set the GRUB_SERIAL_COMMAND variable
+
+GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
+```
+
+```bash
+sudo /usr/sbin/grub2-mkconfig -o /boot/grub2/grub.cfg
+```
+
3. Register your SUSE Linux Enterprise system to allow it to download updates and install packages.
4. Update the system with the latest patches:
- ```bash
- sudo zypper update
- ```
+```bash
+sudo zypper update
+```
5. Install Azure Linux Agent and cloud-init
- ```bash
- sudo SUSEConnect -p sle-module-public-cloud/15.2/x86_64 (SLES 15 SP2)
- sudo zypper refresh
- sudo zypper install python-azure-agent
- sudo zypper install cloud-init
- ```
+```bash
+sudo SUSEConnect -p sle-module-public-cloud/15.2/x86_64   # For SLES 15 SP2; adjust the module version to your release
+sudo zypper refresh
+sudo zypper install python-azure-agent
+sudo zypper install cloud-init
+```
6. Enable waagent & cloud-init to start on boot
- ```bash
- sudo chkconfig waagent on
- sudo systemctl enable cloud-init-local.service
- sudo systemctl enable cloud-init.service
- sudo systemctl enable cloud-config.service
- sudo systemctl enable cloud-final.service
- sudo systemctl daemon-reload
- sudo cloud-init clean
- ```
-
-7. Update waagent and cloud-init configuration
-
- ```bash
- sudo sed -i 's/Provisioning.UseCloudInit=n/Provisioning.UseCloudInit=auto/g' /etc/waagent.conf
- sudo sed -i 's/Provisioning.Enabled=y/Provisioning.Enabled=n/g' /etc/waagent.conf
- sudo sh -c 'printf "datasource:\n Azure:" > /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg'
- sudo sh -c 'printf "reporting:\n logging:\n type: log\n telemetry:\n type: hyperv" > /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg'
- ```
-
-8. Edit the "/etc/default/grub" file to ensure console logs are sent to the serial port by adding the following line:
+```bash
+sudo systemctl enable waagent
+sudo systemctl enable cloud-init-local.service
+sudo systemctl enable cloud-init.service
+sudo systemctl enable cloud-config.service
+sudo systemctl enable cloud-final.service
+sudo systemctl daemon-reload
+sudo cloud-init clean
+```
- ```config-grub
- console=ttyS0 earlyprintk=ttyS0
- ```
+7. Update the cloud-init configuration
- Next, apply this change by running the following command:
+```bash
+cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg
+datasource_list: [ Azure ]
+datasource:
+ Azure:
+ apply_network_config: False
- ```bash
- sudo grub2-mkconfig -o /boot/grub2/grub.cfg
- ```
+EOF
+```
- This configuration will ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
+```bash
+cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/05_logging.cfg
+# This tells cloud-init to redirect its stdout and stderr to
+# 'tee -a /var/log/cloud-init-output.log' so the user can see output
+# there without needing to look on the console.
+output: {all: '| tee -a /var/log/cloud-init-output.log'}
+EOF
+
+# Make sure mounts and disk_setup are in the init stage:
+echo "Adding mounts and disk_setup to init stage"
+sudo sed -i '/ - mounts/d' /etc/cloud/cloud.cfg
+sudo sed -i '/ - disk_setup/d' /etc/cloud/cloud.cfg
+sudo sed -i '/cloud_init_modules/a\\ - mounts' /etc/cloud/cloud.cfg
+sudo sed -i '/cloud_init_modules/a\\ - disk_setup' /etc/cloud/cloud.cfg
+```
-9. Ensure the "/etc/fstab" file references the disk using its UUID (by-uuid)
+8. If you want to mount, format, and create a swap partition, you can either:
+ * Pass this configuration in as a cloud-init config every time you create a VM.
+ * Use a cloud-init directive baked into the image that configures swap space every time the VM is created:
+
-10. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
+```bash
+cat <<EOF | sudo tee -a /etc/systemd/system.conf
+DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"
+EOF
+
+cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/00-azure-swap.cfg
+#cloud-config
+# Generated by Azure cloud image build
+disk_setup:
+ ephemeral0:
+ table_type: mbr
+ layout: [66, [33, 82]]
+ overwrite: True
+fs_setup:
+ - device: ephemeral0.1
+ filesystem: ext4
+ - device: ephemeral0.2
+ filesystem: swap
+mounts:
+ - ["ephemeral0.1", "/mnt"]
+ - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
+EOF
+```
+
+9. Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this step is now handled by cloud-init, so you **must not** use the Linux Agent to format the resource disk or create the swap file. Use these commands to modify `/etc/waagent.conf` appropriately:
- ```bash
- sudo ln -s /etc/udev/rules.d/75-persistent-net-generator.rules
- sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
- ```
-11. It's recommended to edit the "/etc/sysconfig/network/dhcp" file and change the `DHCLIENT_SET_HOSTNAME` parameter to the following:
+```bash
+sudo sed -i 's/Provisioning.UseCloudInit=n/Provisioning.UseCloudInit=auto/g' /etc/waagent.conf
+sudo sed -i 's/Provisioning.Enabled=y/Provisioning.Enabled=n/g' /etc/waagent.conf
+sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
+sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
+```
- ```config
- DHCLIENT_SET_HOSTNAME="no"
- ```
+> [!NOTE]
+> Make sure the **udf** module is enabled. Removing or disabling it causes a provisioning or boot failure. (_Cloud-init >= 21.2 removes the udf requirement; see the note at the top of this article for more detail._)
-12. In the "/etc/sudoers" file, comment out or remove the following lines if they exist:
+10. Ensure the "/etc/fstab" file references the disk using its UUID (by-uuid)
- ```output
- Defaults targetpw # ask for the password of the target user i.e. root
- ALL ALL=(ALL) ALL # WARNING! Only use this setting together with 'Defaults targetpw'!
- ```
+11. Remove udev rules and network adapter configuration files to avoid generating static rules for the Ethernet interface(s). These rules can cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
-13. Ensure that the SSH server is installed and configured to start at boot time.
+```bash
+sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
+sudo rm -f /etc/udev/rules.d/85-persistent-net-cloud-init.rules
+sudo rm -f /etc/sysconfig/network/ifcfg-eth*
+```
-14. Swap configuration
+12. It's recommended to edit the "/etc/sysconfig/network/dhcp" file and change the `DHCLIENT_SET_HOSTNAME` parameter to the following:
- Don't create swap space on the operating system disk.
+```config
+DHCLIENT_SET_HOSTNAME="no"
+```
+13. In the "/etc/sudoers" file, comment out or remove the following lines if they exist:
- Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this step is now handled by cloud-init, you **must not** use the Linux Agent to format the resource disk or create the swap file. Use these commands to modify `/etc/waagent.conf` appropriately:
+```output
+Defaults targetpw # ask for the password of the target user i.e. root
+ALL ALL=(ALL) ALL # WARNING! Only use this setting together with 'Defaults targetpw'!
+```
- ```bash
- sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
- sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
- ```
- For more information on the waagent.conf configuration options, see the [Linux agent configuration](../extensions/agent-linux.md#configuration) documentation.
+14. Ensure that the SSH server is installed and configured to start at boot time.
- If you want to mount, format, and create a swap partition you can either:
- * Pass this configuration in as a cloud-init config every time you create a VM.
- * Use a cloud-init directive baked into the image that configures swap space every time the VM is created:
+```bash
+sudo systemctl enable sshd
+```
- ```bash
- sudo echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
- cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
- #cloud-config
- # Generated by Azure cloud image build
- disk_setup:
- ephemeral0:
- table_type: mbr
- layout: [66, [33, 82]]
- overwrite: True
- fs_setup:
- - device: ephemeral0.1
- filesystem: ext4
- - device: ephemeral0.2
- filesystem: swap
- mounts:
- - ["ephemeral0.1", "/mnt"]
- - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
- EOF
- ```
-> [!NOTE]
-> Make sure the **'udf'** module is enabled. Removing/disabling them will cause a provisioning/boot failure. **(_Cloud-init >= 21.2 removes the udf requirement. Please read top of document for more detail)**
+15. Make sure to clean the cloud-init state:
+```bash
+sudo cloud-init clean --seed --logs
+```
-15. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
+16. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
-> [!NOTE]
-> If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
+> [!NOTE]
+> If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
```bash
- sudo rm -f /var/log/waagent.log
- sudo cloud-init clean
- sudo waagent -force -deprovision+user
- sudo rm -f ~/.bash_history
- sudo export HISTSIZE=0
- ```
+sudo rm -f /var/log/waagent.log
+sudo waagent -force -deprovision+user
+export HISTSIZE=0
+sudo rm -f ~/.bash_history
+```
-16. Click **Action -> Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be [**uploaded to Azure**](./upload-vhd.md#option-1-upload-a-vhd).
- ## Prepare openSUSE 15.2+
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
sudo zypper install WALinuxAgent ```
-6. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open "/boot/grub/menu.lst" in a text editor and ensure that the default kernel includes the following parameters:
+6. Modify the kernel boot line in your grub configuration to include other kernel parameters for Azure. To do this, open "/boot/grub/menu.lst" in a text editor and ensure that the default kernel includes the following parameters:
```config-grub console=ttyS0 earlyprintk=ttyS0 ```
- This option will ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues. In addition, remove the following parameters from the kernel boot line if they exist:
+ This option ensures all console messages are sent to the first serial port, which can assist Azure support with debugging issues. In addition, remove the following parameters from the kernel boot line if they exist:
```config-grub libata.atapi_enabled=0 reserve=0x1f0,0x8
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
9. Ensure that the SSH server is installed and configured to start at boot time.
10. Don't create swap space on the OS disk.
- The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. Note that the local resource disk is a *temporary* disk and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in the "/etc/waagent.conf" as follows:
+ The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk and will be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in the "/etc/waagent.conf" as follows:
```config-conf
- ResourceDisk.Format=y
+ ResourceDisk.Format=n
ResourceDisk.Filesystem=ext4 ResourceDisk.MountPoint=/mnt/resource
- ResourceDisk.EnableSwap=y
+ ResourceDisk.EnableSwap=n
ResourceDisk.SwapSizeMB=2048 ## NOTE: set the size to whatever you need it to be. ```
As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
```bash sudo rm -f ~/.bash_history # Remove current user history
+ sudo -i
sudo rm -rf /var/lib/waagent/ sudo rm -f /var/log/waagent.log sudo waagent -force -deprovision+user
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resize-vm.md
Last updated 09/15/2023 --+ # Change the size of a virtual machine
virtual-machines Trusted Launch Existing Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-existing-vm.md
Last updated 08/13/2023-+ # Enable Trusted Launch on existing Azure VMs
New-AzResourceGroupDeployment `
**(Recommended)** Post-Upgrades enable [Boot Integrity Monitoring](trusted-launch.md#microsoft-defender-for-cloud-integration) to monitor the health of the VM using Microsoft Defender for Cloud.
-Learn more about [trusted launch](trusted-launch.md) and review [frequently asked questions](trusted-launch-faq.md)
+Learn more about [trusted launch](trusted-launch.md) and review [frequently asked questions](trusted-launch-faq.md)
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
description: Learn how to configure Virtual WAN routing policies + Last updated 09/21/2023 - # How to configure Virtual WAN Hub routing intent and routing policies
vpn-gateway Ikev2 Openvpn From Sstp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ikev2-openvpn-from-sstp.md
description: Learn how to transition to OpenVPN protocol or IKEv2 from SSTP to o
Previously updated : 09/15/2023 Last updated : 09/26/2023
The zip file also provides the values of some of the important settings on the A
### <a name="gwsku"></a>Which gateway SKUs support P2S VPN?
+The following table shows gateway SKUs by tunnel, connection, and throughput. For additional tables and more information regarding this table, see the Gateway SKUs section of the [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku) article.
-* For gateway SKU recommendations, see [About VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku).
->[!NOTE]
->The Basic SKU does not support IKEv2 or RADIUS authentication.
+> [!NOTE]
+> The Basic SKU has limitations and does not support IKEv2 or RADIUS authentication. See the [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku) article for more information.
> ### <a name="IKE/IPsec policies"></a>What IKE/IPsec policies are configured on VPN gateways for P2S?
vpn-gateway Point To Site About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-about.md
description: Learn about Point-to-Site VPN.
Previously updated : 08/11/2023 Last updated : 09/26/2023
Point-to-site VPN can use one of the following protocols:
* **IKEv2 VPN**, a standards-based IPsec VPN solution. IKEv2 VPN can be used to connect from Mac devices (macOS versions 10.11 and above). -
->[!NOTE]
->IKEv2 and OpenVPN for P2S are available for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) only. They aren't available for the classic deployment model.
+> [!NOTE]
+> IKEv2 and OpenVPN for P2S are available for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) only. They aren't available for the classic deployment model.
> ## <a name="authentication"></a>How are P2S VPN clients authenticated?
The validation of the client certificate is performed by the VPN gateway and hap
Azure AD authentication allows users to connect to Azure using their Azure Active Directory credentials. Native Azure AD authentication is only supported for OpenVPN protocol and also requires the use of the [Azure VPN Client](https://go.microsoft.com/fwlink/?linkid=2117554). The supported client operation systems are Windows 10 or later and macOS.
-With native Azure AD authentication, you can use Azure AD's conditional access and Multi-Factor Authentication (MFA) features for VPN.
+With native Azure AD authentication, you can use Azure AD's conditional access and multifactor authentication (MFA) features for VPN.
At a high level, you need to perform the following steps to configure Azure AD authentication:
The client configuration requirements vary, based on the VPN client that you use
## <a name="gwsku"></a>Which gateway SKUs support P2S VPN?
+The following table shows gateway SKUs by tunnel, connection, and throughput. For additional tables and more information regarding this table, see the Gateway SKUs section of the [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku) article.
-* For Gateway SKU recommendations, see [About VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku).
->[!NOTE]
->The Basic SKU does not support IKEv2 or RADIUS authentication.
+> [!NOTE]
+> The Basic SKU has limitations and does not support IKEv2, IPv6, or RADIUS authentication. See the [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku) article for more information.
> ## <a name="IKE/IPsec policies"></a>What IKE/IPsec policies are configured on VPN gateways for P2S?
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
description: Learn about VPN Gateway resources and configuration settings.
Previously updated : 08/10/2023 Last updated : 09/26/2023 ms.devlang: azurecli
ms.devlang: azurecli
A VPN gateway is a type of virtual network gateway that sends encrypted traffic between your virtual network and your on-premises location across a public connection. You can also use a VPN gateway to send traffic between virtual networks across the Azure backbone.
-A VPN gateway connection relies on the configuration of multiple resources, each of which contains configurable settings. The sections in this article discuss the resources and settings that relate to a VPN gateway for a virtual network created in [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can find descriptions and topology diagrams for each connection solution in the [About VPN Gateway](vpn-gateway-about-vpngateways.md) article.
+VPN gateway connections rely on the configuration of multiple resources, each of which contains configurable settings. The sections in this article discuss the resources and settings that relate to a VPN gateway for a virtual network created in [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can find descriptions and topology diagrams for each connection solution in the [VPN Gateway design](design.md) article.
The values in this article apply VPN gateways (virtual network gateways that use the -GatewayType Vpn). Additionally, this article covers many, but not all, gateway types and SKUs. See the following articles for information regarding gateways that use these specified settings:
az network vnet-gateway create --name VNet1GW --public-ip-address VNet1GWPIP --r
If you have a VPN gateway and you want to use a different gateway SKU, your options are to either resize your gateway SKU or to change to another SKU. When you change to another gateway SKU, you delete the existing gateway entirely and build a new one. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. In comparison, when you resize a gateway SKU, there isn't much downtime because you don't have to delete and rebuild the gateway. While it's faster to resize your gateway SKU, there are rules regarding resizing (a CLI sketch follows this list):
1. Except for the Basic SKU, you can resize a VPN gateway SKU to another VPN gateway SKU within the same generation (Generation1 or Generation2) and SKU family (VpnGwx or VpnGwxAZ).
- * Example: VpnGw1 of Generation1 can be resized to VpnGw2 of Generation1, but can't be resized to VpnGw2 of Generation2. The gateway must instead be changed (deleted and rebuilt).
- * Example: VpnGw2 of Generation2 can't be resized to VpnGw2AZ of either Generation1 or Generation2 because the "AZ" gateways are [zone redundant](about-zone-redundant-vnet-gateways.md). To change to an AZ SKU, delete the gateway and rebuild it using the desired AZ SKU.
+ * Example: VpnGw1 of Generation1 can be resized to VpnGw2 of Generation1, but can't be resized to VpnGw2 of Generation2. The gateway must instead be changed (deleted and rebuilt).
+ * Example: VpnGw2 of Generation2 can't be resized to VpnGw2AZ of either Generation1 or Generation2 because the "AZ" gateways are [zone redundant](about-zone-redundant-vnet-gateways.md). To change to an AZ SKU, delete the gateway and rebuild it using the desired AZ SKU.
1. When working with older legacy SKUs:
   * You can resize between Standard and HighPerformance SKUs.
   * You **cannot** resize from Basic/Standard/HighPerformance SKUs to VpnGw SKUs. You must instead [change](#change) to the new SKUs.
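
As a sketch only, resizing within the same generation and family can be done with the Azure CLI; the resource group name is a placeholder, and `VNet1GW` follows the example gateway name used earlier in this article:

```bash
# Resize an existing gateway, for example VpnGw1 -> VpnGw2 (same generation and SKU family).
az network vnet-gateway update \
    --resource-group TestRG1 \
    --name VNet1GW \
    --sku VpnGw2
```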
New-AzVirtualNetworkGateway -Name vnetgw1 -ResourceGroupName testrg `
Before you create a VPN gateway, you must create a gateway subnet. The gateway subnet contains the IP addresses that the virtual network gateway VMs and services use. When you create your virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the required VPN gateway settings. Never deploy anything else (for example, additional VMs) to the gateway subnet. The gateway subnet must be named 'GatewaySubnet' to work properly. Naming the gateway subnet 'GatewaySubnet' lets Azure know that this is the subnet to which it should deploy the virtual network gateway VMs and services.
->[!NOTE]
->[!INCLUDE [vpn-gateway-gwudr-warning.md](../../includes/vpn-gateway-gwudr-warning.md)]
->
- When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others. When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. While it's possible to create a gateway subnet as small as /29 (applicable to the Basic SKU only), all other SKUs require a gateway subnet of size /27 or larger (/27, /26, /25 etc.). You may want to create a gateway subnet larger than /27 so that the subnet has enough IP addresses to accommodate possible future configurations.
The following Resource Manager PowerShell example shows a gateway subnet named G
Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.0.3.0/27 ```
+Considerations:
++
+* When working with gateway subnets, avoid associating a network security group (NSG) with the gateway subnet. Associating an NSG with this subnet may cause your virtual network gateway (VPN and ExpressRoute gateways) to stop functioning as expected. For more information about network security groups, see [What is a network security group?](../virtual-network/network-security-groups-overview.md).
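+
+To quickly check whether an NSG is already associated with the gateway subnet, a hedged Azure CLI example (resource names are placeholders):
+
+```bash
+# Returns null when no NSG is associated with the GatewaySubnet.
+az network vnet subnet show \
+    --resource-group TestRG1 \
+    --vnet-name VNet1 \
+    --name GatewaySubnet \
+    --query networkSecurityGroup
+```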
## <a name="lng"></a>Local network gateways
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
# Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure VPN Gateway so that I can securely connect to my Azure virtual networks. Previously updated : 09/15/2023 Last updated : 09/26/2023
You can start out creating and configuring resources using one configuration too
## <a name="gwsku"></a>Gateway SKUs
-When you create a virtual network gateway, you specify the gateway SKU that you want to use. Select the SKU that satisfies your requirements based on the types of workloads, throughputs, features, and SLAs.
+When you create a virtual network gateway, you specify the gateway SKU that you want to use. Select the SKU that satisfies your requirements based on the types of workloads, throughputs, features, and SLAs. For more information about gateway SKUs, including supported features, performance, production and dev-test, and configuration steps, see the [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku) article.
-* For more information about gateway SKUs, including supported features, production and dev-test, and configuration steps, see the [VPN Gateway Settings - Gateway SKUs](vpn-gateway-about-vpn-gateway-settings.md#gwsku) article.
-* For Legacy SKU information, see [Working with Legacy SKUs](vpn-gateway-about-skus-legacy.md).
-* The Basic SKU doesn't support IPv6 and can only be configured using PowerShell or Azure CLI.
-
-### <a name="benchmark"></a>Gateway SKUs by tunnel, connection, and throughput
+The following table shows gateway SKUs by tunnel, connection, and throughput. For additional tables and more information regarding this table, see the Gateway SKUs section of the [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku) article.
[!INCLUDE [Aggregated throughput by SKU](../../includes/vpn-gateway-table-gwtype-aggtput-include.md)]
vpn-gateway Vpn Gateway Highlyavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-highlyavailable.md
This configuration provides multiple active tunnels from the same Azure VPN gate
1. BGP is required for this configuration. Each local network gateway representing a VPN device must have a unique BGP peer IP address specified in the "BgpPeerIpAddress" property. 1. You should use BGP to advertise the same prefixes of the same on-premises network prefixes to your Azure VPN gateway, and the traffic will be forwarded through these tunnels simultaneously. 1. You must use Equal-cost multi-path routing (ECMP).
-1. Each connection is counted against the maximum number of tunnels for your Azure VPN gateway. See the [Overview](vpn-gateway-about-vpngateways.md#benchmark) page for the latest information about tunnels, connections, and throughput.
+1. Each connection is counted against the maximum number of tunnels for your Azure VPN gateway. See the [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku) page for the latest information about tunnels, connections, and throughput.
In this configuration, the Azure VPN gateway is still in active-standby mode, so the same failover behavior and brief interruption will still happen as described [above](#activestandby). But this setup guards against failures or interruptions on your on-premises network and VPN devices.
vpn-gateway Vpn Gateway Highlyavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vs-azure-tools-storage-explorer-blobs.md
The following steps illustrate how to manage the blobs (and virtual directories)
1. Select the blob you wish to delete. 2. On the main pane's toolbar, select **Delete**. 3. Select **Yes** to the confirmation dialog.
+
+ * **Delete a blob along with snapshots**
+
+ 1. Select the blob you wish to delete.
+ 2. On the main pane's toolbar, select **Delete**.
+ 3. Select **Yes** to the confirmation dialog.
+ 4. Under **Activities**, the deletion of the blob is skipped because the blob has snapshots. Select **Retry**.
+ 5. The **Retry AzCopy** window opens. Under **Snapshot**, select the **Delete blobs with snapshots** option from the dropdown, and then select **Retry selected**.
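+
+ A command-line alternative for the same cleanup (a sketch only; the account, container, blob, and SAS values are placeholders) is `azcopy remove` with its `--delete-snapshots` option:
+
+ ```bash
+ # Deletes the blob and all of its snapshots; use --delete-snapshots=only to remove only the snapshots.
+ azcopy remove "https://<account>.blob.core.windows.net/<container>/<blob>?<SAS>" --delete-snapshots=include
+ ```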
## Next steps
The following steps illustrate how to manage the blobs (and virtual directories)
[16]: ./media/vs-azure-tools-storage-explorer-blobs/blob-upload-files-options.png [17]: ./media/vs-azure-tools-storage-explorer-blobs/blob-upload-folder-menu.png [18]: ./media/vs-azure-tools-storage-explorer-blobs/blob-upload-folder-options.png
-[19]: ./media/vs-azure-tools-storage-explorer-blobs/blob-container-open-editor-context-menu.png
+[19]: ./media/vs-azure-tools-storage-explorer-blobs/blob-container-open-editor-context-menu.png
web-application-firewall Automated Detection Response With Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/automated-detection-response-with-sentinel.md
+
+ Title: Automated detection and response for Azure WAF with Microsoft Sentinel
+description: Use WAF detection templates in Sentinel, deploy a playbook, and configure the detection and response in Sentinel.
++++ Last updated : 09/27/2023++
+# Automated detection and response for Azure WAF with Microsoft Sentinel
+
+Malicious attackers increasingly target web applications by exploiting commonly known vulnerabilities such as SQL injection and Cross-site scripting. Preventing these attacks in application code poses a challenge, requiring rigorous maintenance, patching, and monitoring at multiple layers of the application topology. A Web Application Firewall (WAF) solution can react to a security threat faster by centrally patching a known vulnerability, instead of securing each individual web application. Azure Web Application Firewall (WAF) is a cloud-native service that protects web apps from common web-hacking techniques. You can deploy this service in a matter of minutes to gain complete visibility into the web application traffic and block malicious web attacks.
+
+Integrating Azure WAF with Microsoft Sentinel, a cloud-native SIEM/SOAR solution, adds automated detection and response to threats, incidents, and alerts, and reduces the manual intervention needed to update the WAF policy.
+
+In this article, you learn about WAF detection templates in Sentinel, deploy a playbook, and configure the detection and response in Sentinel using these templates and the playbook.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure Front Door deployment with an associated WAF policy. For more information, see [Quickstart: Create a Front Door Standard/Premium using an ARM template](../../frontdoor/create-front-door-template.md), and [Tutorial: Create a WAF policy on Azure Front Door by using the Azure portal](waf-front-door-create-portal.md).
+- An Azure Front Door configured to capture logs in a Log Analytics workspace. For more information, see [Configure Azure Front Door logs](../../frontdoor/standard-premium/how-to-logs.md).
+
+## Deploy the playbook
+You install a Sentinel playbook named *Block-IPAzureWAF* from a template on GitHub. The playbook runs in response to WAF incidents and uses the Azure REST API to create or modify a custom rule in the WAF policy that blocks requests from the offending IP address.
+
+To install the playbook from the template (a CLI sketch follows these steps):
+1. Go to the [GitHub repository](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Playbook%20-%20WAF%20Sentinel%20Playbook%20Block%20IP%20-%20New) and select **Deploy to Azure** to launch the template.
+1. Fill in the required parameters. You can get your Front Door ID from the Azure portal. The Front Door ID is the resource ID of the Front Door resource.
+ :::image type="content" source="../media/automated-detection-response-with-sentinel/playbook-template.png" alt-text="Screenshot showing the playbook template.":::
+1. Select **Review + create** and then **Create**.
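+
+If you prefer to deploy the template from the command line instead of the **Deploy to Azure** button, the following is only a sketch: the local template file name and the parameter name are assumptions, so check the repository for the actual values.
+
+```bash
+# Assumes the playbook's ARM template has been downloaded locally as azuredeploy.json (hypothetical name).
+az deployment group create \
+    --resource-group myResourceGroup \
+    --template-file ./azuredeploy.json \
+    --parameters PlaybookName=Block-IPAzureWAF
+```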
+
+## Authorize the API connection
+
+An API connection named *azuresentinel-Block-IPAzureWAF* is created as part of this deployment. You must authorize it with your Azure ID to allow the playbook to make changes to your WAF policy.
+
+1. In the Azure portal, select the *azuresentinel-Block-IPAzureWAF* API connection.
+1. Select **Edit API connection**.
+1. Under **Display Name**, type your Azure ID.
+1. Select **Authorize**.
+1. Select **Save**.
++
+## Configure the Contributor role assignment
+
+The playbook must have the necessary permissions to query and modify the existing WAF policy via the REST API. You can assign the playbook a system-assigned managed identity with Contributor permissions on the Front Door resource and its associated WAF policies. You can assign permissions only if your account has been assigned the Owner or User Access Administrator role on the underlying resource.
+
+You do this by adding a new role assignment for the playbook in the **Access control (IAM)** section of each resource, as shown in the following steps (a CLI sketch follows them).
+
+1. In the Azure portal, select the Front Door resource.
+1. In the left pane, select **Access control (IAM)**.
+1. Select **Role assignments**.
+1. Select **Add** then **Add role assignment**.
+1. Select **Privileged administrator roles**.
+1. Select **Contributor** and then select **Next**.
+1. Select **Select members**.
+1. Search for **Block-IPAzureWAF** and select it. There may be multiple entries for this playbook. The one you recently added is usually the last one in the list.
+1. Select **Block-IPAzureWAF** and select **Select**.
+1. Select **Review + assign**.
+
+Repeat this procedure for the WAF policy resource.
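+
+If you'd rather script the assignment, a minimal Azure CLI sketch follows; the resource IDs are placeholders, and it assumes the playbook's system-assigned managed identity is already enabled:
+
+```bash
+# Look up the principal ID of the playbook's system-assigned managed identity.
+principalId=$(az resource show \
+    --ids <playbook-logic-app-resource-id> \
+    --query identity.principalId -o tsv)
+
+# Grant Contributor on the Front Door resource; repeat with the WAF policy's resource ID.
+az role assignment create \
+    --assignee-object-id "$principalId" \
+    --assignee-principal-type ServicePrincipal \
+    --role Contributor \
+    --scope <front-door-resource-id>
+```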
+
+## Add Microsoft Sentinel to your workspace
+
+1. In the Azure portal, search for and then open Microsoft Sentinel.
+1. Select **Create**.
+1. Select your workspace, and then select **Add**.
+
+## Configure the Logic App Contributor role assignment
+
+Your account must have owner permissions on any resource group to which you want to grant Microsoft Sentinel permissions, and you must have the **Logic App Contributor** role on any resource group containing playbooks you want to run.
+
+1. In the Azure portal, select the resource group that contains the playbook.
+1. In the left pane, select **Access control (IAM)**.
+1. Select **Role assignments**.
+1. Select **Add** then **Add role assignment**.
+1. Search for **Logic App Contributor**, select it, and then select **Next**.
+1. Select **Select members**.
+1. Search for your account and select it.
+1. Select **Select**.
+1. Select **Next**.
+1. Select **Review + assign**.
+
+## Configure detection and response
+
+Sentinel provides detection query templates for SQLi and XSS attacks against Azure WAF. You can download these templates from the Content hub. By using these templates, you can create analytic rules that detect specific types of attack patterns in the WAF logs and notify the security analyst by creating an incident. The automation section of these rules can help you respond to the incident by blocking the attacker's source IP on the WAF policy, which stops subsequent attacks from those source IP addresses upfront. Microsoft is continuously working to add more detection templates for more detection and response scenarios.
+
+### Install the templates
+
+1. From Microsoft Sentinel, under **Configuration** in the left pane, select **Analytics**.
+1. At the top of the page, select **More content at Content hub**.
+1. Search for **Azure Web Application Firewall**, select it and then select **Install**.
+
+### Create an analytic rule
+
+1. From Microsoft Sentinel, under **Configuration** in the left pane, select **Analytics**.
+1. Select **Rule templates**. It may take a few minutes for the templates to appear.
+1. Select the **Front Door Premium WAF - SQLi Detection** template.
+1. On the right pane, select **Create rule**.
+1. Accept all the defaults and continue through to **Automated response**. You can edit these settings later to customize the rule.
+ > [!TIP]
+   > If you see an error in the rule query, it might be because you don't have any WAF logs in your workspace. You can generate some logs by sending test traffic to your web app. For example, you can simulate a SQLi attack by sending a request like this: `http://x.x.x.x/?text1=%27OR%27%27=%27`. Replace `x.x.x.x` with your Front Door URL. A curl sketch for generating such a request follows these steps.
+
+1. On the **Automated response** page, select **Add new**.
+1. On the **Create new automation rule** page, type a name for the rule.
+1. Under **Trigger**, select **When alert is created**.
+1. Under **Actions**, select **Manage playbook permissions**.
+1. On the **Manage permissions** page, select your resource group and select **Apply**.
+1. Back on the **Create new automation rule** page, under **Actions** select the **Block-IPAzureWAF** playbook from the drop down list.
+1. Select **Apply**.
+1. Select **Next: Review + create**.
+1. Select **Save**.
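+
+To generate the test traffic mentioned in the earlier tip, one option is a simple curl request; this is only a sketch, and the hostname is a placeholder. Only send test traffic to endpoints you own:
+
+```bash
+# Simulated SQLi probe against your own Front Door endpoint.
+curl -i "http://<your-front-door-hostname>/?text1=%27OR%27%27=%27"
+```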
+
+Once the analytic rule is created with its automation rule settings, you're ready for detection and response. The following flow of events happens during an attack:
+
+- Azure WAF logs traffic when an attacker attempts to target one of the web apps behind it. Sentinel then ingests these logs.
+- The Analytic/Detection rule that you configured detects the pattern for this attack and generates an incident to notify an analyst.
+- The automation rule that is part of the analytic rule triggers the respective playbook that you configured previously.
+- The playbook creates a custom rule called *SentinelBlockIP* in the respective WAF policy, which includes the source IP of the attacker.
+- WAF blocks subsequent attack attempts, and if the attacker tries to use another source IP, it appends the respective source IP to the block rule.
+
+By default, Azure WAF already blocks malicious web attacks with the help of the core rule set of the Azure WAF engine. However, this automated detection and response configuration further enhances security by modifying or adding new custom block rules on the Azure WAF policy for the offending source IP addresses. This ensures that traffic from these source IP addresses gets blocked before it even reaches the Azure WAF engine rule set.
+
+## Related content
+
+- [Using Microsoft Sentinel with Azure Web Application Firewall](../waf-sentinel.md)
web-application-firewall Application Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/shared/application-ddos-protection.md
Azure WAF has many features that can be used to mitigate many different types of
* Use bot protection managed rule set to protect against known bad bots. For more information, see [Configuring bot protection](../afds/waf-front-door-policy-configure-bot-protection.md).
-* Apply rate limit to prevent IP addresses from calling your service too frequently. For more information, see [Rate limiting](../afds/waf-front-door-rate-limit.md).
+* Apply rate limits to prevent IP addresses from calling your service too frequently. For more information, see [Rate limiting](../afds/waf-front-door-rate-limit.md).
* Block IP addresses, and ranges that you identify as malicious. For more information, see [IP restrictions](../afds/waf-front-door-configure-ip-restriction.md).
Application Gateway WAF SKUs can be used to mitigate many L7 DDoS attacks:
* Use the bot protection managed rule set to protect against known bad bots. For more information, see [Configuring bot protection](../ag/bot-protection.md).
+* Apply rate limits to prevent IP addresses from calling your service too frequently. For more information, see [Configuring Rate limiting custom rules](../ag/rate-limiting-configure.md).
+
* Block IP addresses and ranges that you identify as malicious. For more information, see examples at [Create and use v2 custom rules](../ag/create-custom-waf-rules.md).
* Block or redirect to a static web page any traffic from outside a defined geographic region, or within a defined region that doesn't fit the application traffic pattern. For more information, see examples at [Create and use v2 custom rules](../ag/create-custom-waf-rules.md).
Application Gateway WAF SKUs can be used to mitigate many L7 DDoS attacks:
* You can bypass the WAF for known legitimate traffic by creating Match Custom Rules with the action of Allow to reduce false positive. These rules should be configured with a high priority (lower numeric value) than other block and rate limit rules.
-* Depending on your traffic pattern, create a preventive rate limit rule (only applies to Azure Front Door). For example, you can configure a rate limit rule to not allow any single *Client IP address* to send more than XXX traffic per window to your site. Azure Front Door supports two fixed windows for tracking requests, 1 and 5 minutes. It's recommended to use the 5-minute window for better mitigation of HTTP Flood attacks. For example, **Configure a Rate Limit Rule**, which blocks any *Source IP* that exceeds 100 requests in a 5-minute window. This rule should be the lowest priority rule (priority is ordered with 1 being the highest priority), so that more specific Rate Limit rules or Match rules can be created to match before this rule.
+At a minimum, you should have a rate limit rule that blocks a high rate of requests from any single IP address. For example, you can configure a rate limit rule to not allow any single *Client IP address* to send more than XXX traffic per window to your site. Azure WAF supports two windows for tracking requests, 1 and 5 minutes. It's recommended to use the 5-minute window for better mitigation of HTTP Flood attacks. This rule should be the lowest priority rule (priority is ordered with 1 being the highest priority), so that more specific Rate Limit rules or Match rules can be created to match before this rule. If you're using Application Gateway WAF v2, you can make use of additional rate limiting configurations to track and block clients by methods other than Client IP. For more information on rate limiting on Application Gateway WAF, see [Rate limiting overview](../ag/rate-limiting-overview.md).
- The following Log Analytics query can be helpful in determining the threshold you should use for the above rule.
+ The following Log Analytics query can be helpful in determining the threshold you should use for the above rule. For a similar query but with Application Gateway, replace "FrontdoorAccessLog" with "ApplicationGatewayAccessLog".
``` AzureDiagnostics