Updates from: 08/01/2023 01:27:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md
Previously updated : 03/14/2023 Last updated : 07/31/2023
A managed domain connects to a subnet in an Azure virtual network. Design this s
* A managed domain requires 3-5 IP addresses. Make sure that your subnet IP address range can provide this number of addresses.
* Restricting the available IP addresses can prevent the managed domain from maintaining two domain controllers.
+ >[!NOTE]
+ >You shouldn't use public IP addresses for virtual networks and their subnets, for the following reasons:
+ >
+ >- **IP address scarcity**: IPv4 public IP addresses are limited, and their demand often exceeds the available supply. There are also potentially overlapping IPs with public endpoints.
+ >- **Security risks**: Using public IPs for virtual networks exposes your devices directly to the internet, increasing the risk of unauthorized access and potential attacks. Without proper security measures, your devices may become vulnerable to various threats.
+ >- **Complexity**: Managing a virtual network with public IPs can be more complex than using private IPs, because it requires dealing with external IP ranges and ensuring proper network segmentation and security.
+ >
+ >We strongly recommend using private IP addresses. If you use a public IP address, make sure you're the owner or dedicated user of the chosen addresses in that public range.
+ The following example diagram outlines a valid design where the managed domain has its own subnet, there's a gateway subnet for external connectivity, and application workloads are in a connected subnet within the virtual network:

![Recommended subnet design](./media/active-directory-domain-services-design-guide/vnet-subnet-design.png)
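To sketch what a matching network might look like, here's a minimal Az PowerShell example. It's illustrative only: the resource names, location, and address ranges are placeholder assumptions, not values from the article.

```powershell
# Create a dedicated /24 subnet for the managed domain inside a /16 virtual network.
# The /24 range leaves ample room for the 3-5 IP addresses the managed domain needs.
$subnet = New-AzVirtualNetworkSubnetConfig -Name "aadds-subnet" -AddressPrefix "10.0.1.0/24"
New-AzVirtualNetwork -Name "aadds-vnet" -ResourceGroupName "myResourceGroup" `
    -Location "westus2" -AddressPrefix "10.0.0.0/16" -Subnet $subnet
```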
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
Previously updated : 01/29/2023 Last updated : 07/31/2023 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To quickly create a managed domain, you can select **Review + create** to accept
* Creates a subnet named *aadds-subnet* using the IP address range of *10.0.2.0/24*.
* Synchronizes *All* users from Azure AD into the managed domain.
+>[!NOTE]
+>You shouldn't use public IP addresses for virtual networks and their subnets, for the following reasons:
+>
+>- **IP address scarcity**: IPv4 public IP addresses are limited, and their demand often exceeds the available supply. There are also potentially overlapping IPs with public endpoints.
+>- **Security risks**: Using public IPs for virtual networks exposes your devices directly to the internet, increasing the risk of unauthorized access and potential attacks. Without proper security measures, your devices may become vulnerable to various threats.
+>- **Complexity**: Managing a virtual network with public IPs can be more complex than using private IPs, because it requires dealing with external IP ranges and ensuring proper network segmentation and security.
+>
+>We strongly recommend using private IP addresses. If you use a public IP address, make sure you're the owner or dedicated user of the chosen addresses in that public range.
+ Select **Review + create** to accept these default configuration options.

## Deploy the managed domain
active-directory Application Provisioning Configuration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-configuration-api.md
Content-type: application/json
{ "value": [ {
- "id": "8b1025e4-1dd2-430b-a150-2ef79cd700f5",
+ "id": "8b1025e4-1dd2-430b-a150-2ef79cd700f5",
"displayName": "AWS Single-Account Access", "homePageUrl": "http://aws.amazon.com/", "supportedSingleSignOnModes": [
active-directory Application Provisioning When Will Provisioning Finish Specific User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md
Summary of factors that influence the time it takes to complete an **initial cyc
- Whether users in scope for provisioning are matched to existing users in the target application, or need to be created for the first time. Sync jobs for which all users are created for the first time take about *twice as long* as sync jobs for which all users are matched to existing users.
+- Number of errors in the [provisioning logs](check-status-user-account-provisioning.md). Performance is slower if there are many errors and the provisioning service has gone into a quarantine state.
- Request rate limits and throttling implemented by the target system. Some target systems implement request rate limits and throttling, which can impact performance during large sync operations. Under these conditions, an app that receives too many requests too fast might slow its response rate or close the connection. To improve performance, the connector needs to adjust by not sending the app requests faster than the app can process them. Provisioning connectors built by Microsoft make this adjustment.
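The back-off behavior described above can be illustrated with a short sketch. This is a hypothetical helper, not part of the provisioning service: the function name and retry policy are placeholder assumptions.

```powershell
function Invoke-WithBackoff {
    param([string]$Uri, [int]$MaxRetries = 5)
    for ($attempt = 0; $attempt -lt $MaxRetries; $attempt++) {
        try {
            return Invoke-RestMethod -Uri $Uri -Method Get
        }
        catch {
            # Retry only when the target system signals throttling (HTTP 429).
            if ($_.Exception.Response.StatusCode.value__ -ne 429) { throw }
            Start-Sleep -Seconds ([math]::Pow(2, $attempt))  # exponential back-off
        }
    }
    throw "Still throttled after $MaxRetries attempts: $Uri"
}
```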
active-directory Inbound Provisioning Api Grant Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-grant-access.md
This section describes how you can assign the necessary permissions to a managed
[![Screenshot of managed identity name.](media/inbound-provisioning-api-grant-access/managed-identity-name.png)](media/inbound-provisioning-api-grant-access/managed-identity-name.png#lightbox)
+1. Run the following PowerShell script to assign permissions to your managed identity.
+ ```powershell
Install-Module Microsoft.Graph -Scope CurrentUser
Connect-MgGraph -Scopes "Application.Read.All","AppRoleAssignment.ReadWrite.All","RoleManagement.ReadWrite.Directory"
Select-MgProfile Beta
$graphApp = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
This section describes how you can assign the necessary permissions to a managed
$managedID = Get-MgServicePrincipal -Filter "DisplayName eq 'CSV2SCIMBulkUpload'"
New-MgServicePrincipalAppRoleAssignment -PrincipalId $managedID.Id -ServicePrincipalId $managedID.Id -ResourceId $graphApp.Id -AppRoleId $AppRole.Id
```
+1. To confirm that the permission was applied, find the managed identity service principal under **Enterprise Applications** in Azure AD. Remove the **Application type** filter to see all service principals.
[![Screenshot of managed identity principal.](media/inbound-provisioning-api-grant-access/managed-identity-principal.png)](media/inbound-provisioning-api-grant-access/managed-identity-principal.png#lightbox)
1. Click on the **Permissions** blade under **Security**. Ensure the permission is set.
[![Screenshot of managed identity permissions.](media/inbound-provisioning-api-grant-access/managed-identity-permissions.png)](media/inbound-provisioning-api-grant-access/managed-identity-permissions.png#lightbox)
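If you prefer to confirm the grant from the same PowerShell session instead of the portal, a short check like this might work; it assumes the $managedID variable from the script above is still in scope.

```powershell
# List the app role assignments held by the managed identity.
Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $managedID.Id |
    Format-List PrincipalDisplayName, ResourceDisplayName, AppRoleId
```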
active-directory Inbound Provisioning Api Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-powershell.md
The PowerShell sample script published in the [Microsoft Entra ID inbound provis
- Test-ScriptCommands.ps1 (sample usage commands)
- UseClientCertificate.ps1 (script to generate self-signed certificate and upload it as service principal credential for use in OAuth flow)
- `Sample1` (folder with more examples of how CSV file columns can be mapped to SCIM standard attributes. If you get different CSV files for employees, contractors, and interns, you can create a separate AttributeMapping.psd1 file for each entity.)
+1. Download and install the latest version of PowerShell.
+1. Run the command to enable execution of remote signed scripts:
   ```powershell
   set-executionpolicy remotesigned
   ```
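To give a feel for the `Sample1` mapping files mentioned above, here's a hypothetical AttributeMapping.psd1 sketch. The CSV column names on the right are placeholders; check the published samples for the exact shape.

```powershell
@{
    # SCIM attribute = CSV column that feeds it
    externalId = 'WorkerID'
    userName   = 'UserID'
    active     = 'WorkerStatus'
    name       = @{
        familyName = 'LastName'
        givenName  = 'FirstName'
    }
}
```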
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
In this example, the users and/or groups are created in a cloud HR application l
![Picture 2](./media/plan-auto-user-provisioning/workdayprovisioning.png)
+1. **HR team** performs the transactions in the cloud HR app tenant.
+2. **Azure AD provisioning service** runs the scheduled cycles from the cloud HR app tenant and identifies changes that need to be processed for sync with AD.
+3. **Azure AD provisioning service** invokes the Azure AD Connect provisioning agent with a request payload containing AD account create/update/enable/disable operations.
+4. **Azure AD Connect provisioning agent** uses a service account to manage AD account data.
+5. **Azure AD Connect** runs delta sync to pull updates in AD.
+6. **AD** updates are synced with Azure AD.
+7. **Azure AD provisioning service** writes back the email attribute and username from Azure AD to the cloud HR app tenant.
## Plan the deployment project
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Once schema extensions are created, these extension attributes are automatically
When you have more than 1,000 service principals, you may find extensions missing in the source attribute list. If an attribute you've created doesn't automatically appear, verify that the attribute was created and add it manually to your schema. To verify it was created, use Microsoft Graph and [Graph Explorer](/graph/graph-explorer/graph-explorer-overview). To add it manually to your schema, see [Editing the list of supported attributes](customize-application-attributes.md#editing-the-list-of-supported-attributes).

### Create an extension attribute for cloud-only users using Microsoft Graph
+You can extend the schema of Azure AD users using [Microsoft Graph](/graph/overview).
First, list the apps in your tenant to get the ID of the app you're working on. To learn more, see [List extensionProperties](/graph/api/application-list-extensionproperty).
Content-type: application/json
"name": "extensionName", "dataType": "string", "targetObjects": [
- "User"
+ "User"
  ]
}
```
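As a sketch of the same operation through Microsoft Graph PowerShell, the following might be used; it assumes $appObjectId holds the object ID of the application you identified earlier.

```powershell
# Register the extension property on the application object.
New-MgApplicationExtensionProperty -ApplicationId $appObjectId `
    -Name "extensionName" -DataType "String" -TargetObjects @("User")
```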
GET https://graph.microsoft.com/v1.0/users/{id}?$select=displayName,extension_in
### Create an extension attribute on a cloud-only user using PowerShell
+Create a custom extension using PowerShell and assign a value to a user.
```powershell
+#Connect to your Azure AD tenant
Connect-AzureAD
#Create an application (you can instead use an existing application if you would like)
Cloud sync will automatically discover your extensions in on-premises Active Dir
4. Select the configuration to which you wish to add the extension attribute and mapping.
5. Under **Manage attributes** select **click to edit mappings**.
6. Click **Add attribute mapping**. The attributes will automatically be discovered.
+7. The new attributes will be available in the drop-down under **source attribute**.
8. Fill in the type of mapping you want and click **Apply**.
   [![Custom attribute mapping](media/user-provisioning-sync-attributes-for-mapping/schema-1.png)](media/user-provisioning-sync-attributes-for-mapping/schema-1.png#lightbox)
If users who will access the applications originate in on-premises Active Direct
1. Open the Azure AD Connect wizard, choose Tasks, and then choose **Customize synchronization options**.

   ![Azure Active Directory Connect wizard Additional tasks page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-customize.png)
+
+2. Sign in as an Azure AD Global Administrator.
3. On the **Optional Features** page, select **Directory extension attribute sync**.
+ ![Azure Active Directory Connect wizard Optional features page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-directory-extension-attribute-sync.png)
4. Select the attribute(s) you want to extend to Azure AD.
If users who will access the applications originate in on-premises Active Direct
![Screenshot that shows the "Directory extensions" selection page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-directory-extensions.png)
5. Finish the Azure AD Connect wizard and allow a full synchronization cycle to run. When the cycle is complete, the schema is extended and the new values are synchronized between your on-premises AD and Azure AD.
+ 6. In the Azure portal, while you're [editing user attribute mappings](customize-application-attributes.md), the **Source attribute** list will now contain the added attribute in the format `<attributename> (extension_<appID>_<attributename>)`, where appID is the identifier of a placeholder application in your tenant. Select the attribute and map it to the target application for provisioning.

![Azure Active Directory Connect wizard Directory extensions selection page](./media/user-provisioning-sync-attributes-for-mapping/attribute-mapping-extensions.png)

> [!NOTE]
+> The ability to provision reference attributes from on-premises AD, such as **managedby** or **DN/DistinguishedName**, is not supported today. You can request this feature on [User Voice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
## Next steps
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
# What is app provisioning in Azure Active Directory?

In Azure Active Directory (Azure AD), the term *app provisioning* refers to automatically creating user identities and roles for applications.
+ ![Diagram that shows provisioning scenarios.](../governance/media/what-is-provisioning/provisioning.png)

Azure AD application provisioning refers to automatically creating user identities and roles in the applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into SaaS applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and many more.
active-directory Application Proxy Azure Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-azure-front-door.md
This article guides you through the steps to securely expose a web application o
### Application Proxy Configuration

Follow these steps to configure Application Proxy for Front Door:
+1. Install a connector for the location that your app instances will be in (for example, US West). For the connector group, assign the connector to the right region (for example, North America).
+2. Set up your app instance with Application Proxy as follows:
   - Set the Internal URL to the address users access the app from the internal network, for example, contoso.org.
   - Set the External URL to the domain address you want the users to access the app from. For this, you must configure a custom domain for your application, for example, contoso.org. Reference: [Custom domains in Azure Active Directory Application Proxy][appproxy-custom-domain]
   - Assign the application to the appropriate connector group (for example, North America).
   - Note down the URL generated by Application Proxy to access the application, for example, contoso.msappproxy.net.
   - For the application, configure a CNAME entry in your DNS provider that points the external URL to the Front Door endpoint, for example, 'contoso.org' to contoso.msappproxy.net.
+3. In the Front Door service, utilize the URL generated for the app by Application Proxy as a backend for the backend pool. For example, contoso.msappproxy.net
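If your DNS zone happens to be hosted in Azure DNS, the CNAME step might look like the following sketch; the zone, resource group, and record names are placeholder assumptions for your own values.

```powershell
# Point a record at the URL that Application Proxy generated for the app.
New-AzDnsRecordSet -ZoneName "contoso.org" -ResourceGroupName "myResourceGroup" `
    -Name "www" -RecordType CNAME -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Cname "contoso.msappproxy.net")
```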
#### Sample Application Proxy Configuration

The following table shows a sample Application Proxy configuration. The sample scenario uses the sample application domain www.contoso.org as the External URL.
The configuration steps that follow refer to the following definitions:
- Origin host header: This represents the host header value sent to the backend for each request, for example, contoso.org. For more information, see [Origins and origin groups - Azure Front Door][front-door-origin].

Follow these steps to configure the Front Door Service (Standard):
+1. Create a Front Door (Standard) with the configuration below:
   - Add an Endpoint name for generating the Front Door default domain, that is, azurefd.net. For example, contoso-nam generates the Endpoint hostname contoso-nam.azurefd.net.
   - Add an Origin Type for the type of backend resource, for example, Custom for the Application Proxy resource.
   - Add an Origin host name to represent the backend host name, for example, contoso.msappproxy.net.
   - Optional: Enable Caching for the routing rule for Front Door to cache your static content.
+2. Verify that the deployment is complete and the Front Door Service is ready.
+3. To give your Front Door service a user-friendly domain host name URL, create a CNAME record with your DNS provider for your Application Proxy External URL that points to the Front Door domain host name (generated by the Front Door service). For example, contoso.org points to contoso.azurefd.net. Reference: [How to add a custom domain - Azure Front Door][front-door-custom-domain]
+4. Per the reference, on the Front Door Service Dashboard, navigate to Front Door Manager and add a Domain with the Custom Hostname, for example, contoso.org.
+5. Navigate to the Origin groups in the Front Door Service Dashboard, select the origin name, and validate that the Origin host header matches the domain of the backend. For example, here the Origin host header should be contoso.org.
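As a quick sanity check for the CNAME in step 3, you can query DNS directly. Resolve-DnsName ships with Windows PowerShell; the host name here is the example value from above.

```powershell
# Confirm the custom domain resolves to the Front Door endpoint.
Resolve-DnsName -Name "contoso.org" -Type CNAME
```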
| | Configuration | Additional Information |
|- | -- | - |
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
# Understanding Azure Active Directory Application Proxy Complex application scenario (Preview)

When applications are made up of multiple individual web applications using different domain suffixes or different ports or paths in the URL, the individual web application instances must be published in separate Azure AD Application Proxy apps, and the following problems might arise:
+1. Pre-authentication: The client must separately acquire an access token or cookie for each Azure AD Application Proxy app. This might lead to additional redirects to login.microsoftonline.com and CORS issues.
+2. CORS issues: Cross-origin resource sharing calls (OPTIONS requests) might be triggered to validate whether the caller web app is allowed to access the URL of the targeted web app. These will be blocked by the Azure AD Application Proxy cloud service, because these requests can't contain authentication information.
+3. Poor app management: Multiple enterprise apps are created to enable access to a private app, adding friction to the app management experience.
The following figure shows an example for complex application domain structure.
active-directory Application Proxy Configure Connectors With Proxy Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-connectors-with-proxy-servers.md
To enable this, follow these steps:
`UseDefaultProxyForBackendRequests = 1` to the Connector configuration registry key located in "HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft AAD App Proxy Connector".

### Step 2: Configure the proxy server manually using netsh command
+1. Enable the group policy **Make proxy settings per-machine**, found in Computer Configuration\Policies\Administrative Templates\Windows Components\Internet Explorer. Set this policy per-machine rather than per-user.
+2. Run `gpupdate /force` on the server, or reboot the server, to ensure it uses the updated group policy settings.
+3. Launch an elevated command prompt with admin rights and enter `control inetcpl.cpl`.
+4. Configure the required proxy settings.
These settings make the connector use the same forward proxy for the communication to Azure and to the backend application. If the connector-to-Azure communication requires no forward proxy or a different forward proxy, you can set this up by modifying the file ApplicationProxyConnectorService.exe.config, as described in the sections Bypass outbound proxies or Use the outbound proxy server.
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
To enable the certificate-based authentication in the Azure portal, complete the
1. Sign in to the [Azure portal](https://portal.azure.com) as an Authentication Policy Administrator.
1. Select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
1. Under **Manage**, select **Authentication methods** > **Certificate-based Authentication**.
+1. Under **Enable and Target**, click **Enable**.
1. Click **All users**, or click **Add groups** to select specific groups.

   :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/enable.png" alt-text="Screenshot of how to enable CBA.":::
As a first configuration test, you should try to sign in to the [MyApps portal](
1. Select **Sign in with a certificate**.
+1. Pick the correct user certificate in the client certificate picker UI and click **OK**.
:::image type="content" border="true" source="./media/how-to-certificate-based-authentication/picker.png" alt-text="Screenshot of the certificate picker UI.":::
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
If the upgrade had issues, follow these steps to roll back:
>[!NOTE]
>Any changes since the backup was made will be lost, but they should be minimal if the backup was made right before the upgrade and the upgrade was unsuccessful.
+1. Run the installer for your previous version (for example, 8.0.x.x).
1. Configure Azure AD to accept MFA requests to your on-premises federation server. Use Graph PowerShell to set [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) to `enforceMfaByFederatedIdp`, as shown in the following example.

   **Request**
active-directory Concept Continuous Access Evaluation Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-workload.md
When a client's access to a resource is blocked due to CAE being triggered, th
The following steps detail how an admin can verify sign in activity in the sign-in logs:
+1. Sign in to the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Sign-in logs** > **Service Principal Sign-ins**. You can use filters to ease the debugging process.
+1. Select an entry to see activity details. The **Continuous access evaluation** field indicates whether a CAE token was issued in a particular sign-in attempt.
## Next steps
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Customers who have configured CAE settings under Security before have to migrate
:::image type="content" source="media/concept-continuous-access-evaluation/migrate-continuous-access-evaluation.png" alt-text="Portal view showing the option to migrate continuous access evaluation to a Conditional Access policy." lightbox="media/concept-continuous-access-evaluation/migrate-continuous-access-evaluation.png"::: 1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Continuous access evaluation**.
+1. You have the option to **Migrate** your policy. This action is the only one that you have access to at this point.
1. Browse to **Conditional Access**, where you'll find a new policy named **Conditional Access policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it.

The following table describes the migration experience of each customer group based on previously configured CAE settings.
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
Administrators can monitor and troubleshoot sign in events where [continuous acc
Administrators can monitor user sign-ins where continuous access evaluation (CAE) is applied. This information is found in the Azure AD sign-in logs:
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Sign-in logs**.
+1. Apply the **Is CAE Token** filter.
[ ![Screenshot showing how to add a filter to the Sign-ins log to see where CAE is being applied or not.](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png) ](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png#lightbox)
The continuous access evaluation insights workbook allows administrators to view
Log Analytics integration must be completed before workbooks are displayed. For more information about how to stream Azure AD sign-in logs to a Log Analytics workspace, see the article [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Workbooks**.
+1. Under **Public Templates**, search for **Continuous access evaluation insights**.
The **Continuous access evaluation insights** workbook contains the following table:
Admins can view records filtered by time range and application. Admins can compa
To unblock users, administrators can add specific IP addresses to a trusted named location.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. Here you can create or update trusted IP locations.
> [!NOTE] > Before adding an IP address as a trusted named location, confirm that the IP address does in fact belong to the intended organization.
active-directory Reference Office 365 Application Contents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/reference-office-365-application-contents.md
+ # Apps included in Conditional Access Office 365 app suite

The following list is provided as a reference and includes a detailed list of services and applications that are included in the Conditional Access [Office 365](concept-conditional-access-cloud-apps.md#office-365) app.
+- Augmentation Loop
+- Call Recorder
- Connectors
+- Device Management Service
- EnrichmentSvc
+- IC3 Gateway
+- Media Analysis and Transformation Service
+- Message Recall app
+- Messaging Async Media
- MessagingAsyncMediaProd
+- Microsoft 365 Reporting Service
+- Microsoft Discovery Service
+- Microsoft Exchange Online Protection
+- Microsoft Flow
+- Microsoft Flow GCC
+- Microsoft Forms
+- Microsoft Forms Web
+- Microsoft Forms Web in Azure Government
+- Microsoft Legacy To-Do WebApp
+- Microsoft Office 365 Portal
+- Microsoft Office client application
+- Microsoft People Cards Service
+- Microsoft SharePoint Online - SharePoint Home
+- Microsoft Stream Portal
+- Microsoft Stream Service
+- Microsoft Teams
+- Microsoft Teams - T4L Web Client
+- Microsoft Teams - Teams And Channels Service
+- Microsoft Teams Chat Aggregator
+- Microsoft Teams Graph Service
+- Microsoft Teams Retail Service
+- Microsoft Teams Services
+- Microsoft Teams UIS
+- Microsoft Teams Web Client
+- Microsoft To-Do WebApp
+- Microsoft Whiteboard Services
+- O365 Suite UX
+- OCPS Checkin Service
+- Office 365 app, corresponding to a migrated siteId.
+- Office 365 Exchange Microservices
+- Office 365 Exchange Online
+- Office 365 Search Service
+- Office 365 SharePoint Online
+- Office 365 Yammer
+- Office Delve
+- Office Hive
+- Office Hive Azure Government
+- Office Online
+- Office Services Manager
+- Office Services Manager in USGov
+- Office Shredding Service
+- Office365 Shell WCSS-Client
+- Office365 Shell WCSS-Client in Azure Government
- OfficeClientService
- OfficeHome
- OneDrive
+- OneDrive SyncEngine
- OneNote
+- Outlook Browser Extension
+- Outlook Service for Exchange
+- PowerApps Service
+- PowerApps Web
+- PowerApps Web GCC
- ProjectWorkManagement
- ProjectWorkManagement_USGov
+- Reply at mention
+- Security & Compliance Center
+- SharePoint Online Web Client Extensibility
+- SharePoint Online Web Client Extensibility Isolated
+- Skype and Teams Tenant Admin API
+- Skype for Business Online
+- Skype meeting broadcast
+- Skype Presence Service
- SmartCompose
- Sway
+- Targeted Messaging Service
+- The GCC DoD app for office.com
+- The Office365 Shell DoD WCSS-Client
active-directory Resilience Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md
If there was an outage of the primary authentication service, the Azure Active D
For authentications protected by Conditional Access, policies are reevaluated before access tokens are issued to determine:
+1. Which Conditional Access policies apply?
+1. For policies that do apply, were the required controls satisfied?
During an outage, not all conditions can be evaluated in real time by the Backup Authentication Service to determine whether a Conditional Access policy should apply. Conditional Access resilience defaults are a new session control that lets admins decide between:
You can configure Conditional Access resilience defaults from the Azure portal,
### Azure portal
+1. Navigate to the **Azure portal** > **Security** > **Conditional Access**.
+1. Create a new policy or select an existing policy.
+1. Open the Session control settings.
+1. Select **Disable resilience defaults** to disable the setting for this policy. Sign-ins in scope of the policy will be blocked during an Azure AD outage.
+1. Save changes to the policy.
### MS Graph APIs
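The shape of the equivalent Graph call might look like the following sketch; it assumes the Microsoft Graph PowerShell SDK, an existing Conditional Access policy ID in $policyId, and the sessionControls.disableResilienceDefaults property that the portal steps above toggle.

```powershell
# Turn off resilience defaults on one policy (sign-ins it covers are blocked during an outage).
$body = @{ sessionControls = @{ disableResilienceDefaults = $true } }
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/$policyId" `
    -Body ($body | ConvertTo-Json -Depth 5)
```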
active-directory Howto Restrict Your App To A Set Of Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md
Once you've configured your app to enable user assignment, you can go ahead and
Follow the steps in this section to secure app-to-app authentication access for your tenant.
+1. Navigate to Service Principal sign-in logs in your tenant to find services authenticating to access resources in your tenant.
+1. Using the app ID, check whether a Service Principal exists in your tenant for both the resource and client apps you wish to manage access for.
   ```powershell
   Get-MgServicePrincipal `
      -Filter "AppId eq '$appId'"
   ```
+1. Create a Service Principal using app ID, if it doesn't exist:
   ```powershell
   New-MgServicePrincipal `
      -AppId $appId
   ```
+1. Explicitly assign client apps to resource apps (this functionality is available only in API and not in the Azure AD Portal):
   ```powershell
   $clientAppId = "[guid]"
   $clientId = (Get-MgServicePrincipal -Filter "AppId eq '$clientAppId'").Id
Follow the steps in this section to secure app-to-app authentication access for
      -ResourceId (Get-MgServicePrincipal -Filter "AppId eq '$appId'").Id `
      -AppRoleId "00000000-0000-0000-0000-000000000000"
   ```
+1. Require assignment for the resource application to restrict access only to the explicitly assigned users or services.
   ```powershell
   Update-MgServicePrincipal -ServicePrincipalId (Get-MgServicePrincipal -Filter "AppId eq '$appId'").Id -AppRoleAssignmentRequired:$true
   ```
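As an optional check before relying on the assignment requirement, you might list which clients are now assigned to the resource app; this sketch assumes the same session and variables as the steps above.

```powershell
# List app role assignments granted to clients of the resource app.
Get-MgServicePrincipalAppRoleAssignedTo `
    -ServicePrincipalId (Get-MgServicePrincipal -Filter "AppId eq '$appId'").Id
```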
active-directory Scenario Daemon Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-acquire-token.md
Here's an example of defining the scopes for the web API as part of the configur
```json
{
- "AzureAd": {
- // Same AzureAd section as before.
- },
-
- "MyWebApi": {
- "BaseUrl": "https://localhost:44372/",
- "RelativePath": "api/TodoList",
- "RequestAppToken": true,
- "Scopes": [ "[Enter here the scopes for your web API]" ]
- }
+ "AzureAd": {
+ // Same AzureAd section as before.
+ },
+
+ "MyWebApi": {
+ "BaseUrl": "https://localhost:44372/",
+ "RelativePath": "api/TodoList",
+ "RequestAppToken": true,
+ "Scopes": [ "[Enter here the scopes for your web API]" ]
+ }
}
```
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-configuration.md
ConfidentialClientApplication cca =
```JavaScript
const msalConfig = {
+ auth: {
+ clientId: process.env.CLIENT_ID,
+ authority: process.env.AAD_ENDPOINT + process.env.TENANT_ID,
+ clientSecret: process.env.CLIENT_SECRET,
+ }
};

const apiConfig = {
+ uri: process.env.GRAPH_ENDPOINT + 'v1.0/users',
};

const tokenRequest = {
+ scopes: [process.env.GRAPH_ENDPOINT + '.default'],
};

const cca = new msal.ConfidentialClientApplication(msalConfig);
active-directory Scenario Mobile Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-acquire-token.md
UIViewController *viewController = ...; // Pass a reference to the view controll
MSALWebviewParameters *webParameters = [[MSALWebviewParameters alloc] initWithAuthPresentationViewController:viewController];
MSALInteractiveTokenParameters *interactiveParams = [[MSALInteractiveTokenParameters alloc] initWithScopes:scopes webviewParameters:webParameters];
[application acquireTokenWithParameters:interactiveParams completionBlock:^(MSALResult *result, NSError *error) {
+ if (!error)
+ {
+ // You'll want to get the account identifier to retrieve and reuse the account
+ // for later acquireToken calls
+ NSString *accountIdentifier = result.account.identifier;
+
+ NSString *accessToken = result.accessToken;
+ }
}];
```
let webviewParameters = MSALWebviewParameters(authPresentationViewController: vi
let interactiveParameters = MSALInteractiveTokenParameters(scopes: scopes, webviewParameters: webviewParameters)
application.acquireToken(with: interactiveParameters, completionBlock: { (result, error) in
+ guard let authResult = result, error == nil else {
+ print(error!.localizedDescription)
+ return
+ }
+ // Get access token from result
+ let accessToken = authResult.accessToken
})
```
active-directory Scenario Spa Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-acquire-token.md
import { filter, Subject, takeUntil } from 'rxjs';
// In app.component.ts
export class AppComponent implements OnInit {
+ private readonly _destroying$ = new Subject<void>();
+
+ constructor(private broadcastService: MsalBroadcastService) { }
+
+ ngOnInit() {
+ this.broadcastService.msalSubject$
+ .pipe(
+ filter((msg: EventMessage) => msg.eventType === EventType.ACQUIRE_TOKEN_SUCCESS),
+ takeUntil(this._destroying$)
+ )
+ .subscribe((result: EventMessage) => {
+ // Do something with event payload here
+ });
+ }
+
+ ngOnDestroy(): void {
+ this._destroying$.next(undefined);
+ this._destroying$.complete();
+ }
}
```
active-directory Tutorial V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-console.md
This should result in some JSON response from Microsoft Graph API and you should
You have selected: getUsers
request made to web API at: Fri Jan 22 2021 09:31:52 GMT-0800 (Pacific Standard Time)
{
+ '@odata.context': 'https://graph.microsoft.com/v1.0/$metadata#users',
+ value: [
+ {
+      displayName: 'Adele Vance',
+ givenName: 'Adele',
+ jobTitle: 'Retail Manager',
+ mail: 'AdeleV@msaltestingjs.onmicrosoft.com',
+ mobilePhone: null,
+ officeLocation: '18/2111',
+ preferredLanguage: 'en-US',
+ surname: 'Vance',
+ userPrincipalName: 'AdeleV@msaltestingjs.onmicrosoft.com',
+ id: 'a6a218a5-f5ae-462a-acd3-581af4bcca00'
+ }
+ ]
}
```

:::image type="content" source="media/tutorial-v2-nodejs-console/screenshot.png" alt-text="Command-line interface displaying Graph response":::
active-directory Tutorial V2 Windows Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-uwp.md
In the current sample, the `WithRedirectUri("https://login.microsoftonline.com/c
.Build();
```
+2. Find the callback URI for your app by adding the `redirectURI` field in *MainPage.xaml.cs* and setting a breakpoint on it:
```csharp
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
To uninstall old packages:
1. If the command fails, try the low-level tools with scripts disabled:
   1. For Ubuntu/Debian, run `sudo dpkg --purge aadlogin`. If it's still failing because of the script, delete the `/var/lib/dpkg/info/aadlogin.prerm` file and try again.
   1. For everything else, run `rpm -e --noscripts aadlogin`.
+1. Repeat steps 3-4 for package `aadlogin-selinux`.
### Extension installation errors
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
# Take over an unmanaged directory as administrator in Azure Active Directory
-This article describes two ways to take over a DNS domain name in an unmanaged directory in Azure Active Directory (Azure AD), part of Microsoft Entra. When a self-service user signs up for a cloud service that uses Azure AD, they are added to an unmanaged Azure AD directory based on their email domain. For more about self-service or "viral" sign-up for a service, see [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md)
+This article describes two ways to take over a DNS domain name in an unmanaged directory in Azure Active Directory (Azure AD), part of Microsoft Entra. When a self-service user signs up for a cloud service that uses Azure AD, they're added to an unmanaged Azure AD directory based on their email domain. For more about self-service or "viral" sign-up for a service, see [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md)
> [!VIDEO https://www.youtube.com/embed/GOSpjHtrRsg]
This article describes two ways to take over a DNS domain name in an unmanaged d
## Decide how you want to take over an unmanaged directory

During the process of admin takeover, you can prove ownership as described in [Add a custom domain name to Azure AD](../fundamentals/add-custom-domain.md). The next sections explain the admin experience in more detail, but here's a summary:
-* When you perform an ["internal" admin takeover](#internal-admin-takeover) of an unmanaged Azure directory, you are added as the global administrator of the unmanaged directory. No users, domains, or service plans are migrated to any other directory you administer.
+* When you perform an ["internal" admin takeover](#internal-admin-takeover) of an unmanaged Azure directory, you're added as the global administrator of the unmanaged directory. No users, domains, or service plans are migrated to any other directory you administer.
* When you perform an ["external" admin takeover](#external-admin-takeover) of an unmanaged Azure directory, you add the DNS domain name of the unmanaged directory to your managed Azure directory. When you add the domain name, a mapping of users to resources is created in your managed Azure directory so that users can continue to access services without interruption. ## Internal admin takeover
-Some products that include SharePoint and OneDrive, such as Microsoft 365, do not support external takeover. If that is your scenario, or if you are an admin and want to take over an unmanaged or "shadow" Azure AD organization create by users who used self-service sign-up, you can do this with an internal admin takeover.
+Some products that include SharePoint and OneDrive, such as Microsoft 365, don't support external takeover. If that is your scenario, or if you're an admin and want to take over an unmanaged or "shadow" Azure AD organization created by users who used self-service sign-up, you can do this with an internal admin takeover.
1. Create a user context in the unmanaged organization by signing up for Power BI. For convenience, these steps assume that path.
Some products that include SharePoint and OneDrive, such as Microsoft 365, do no
![first screenshot for Become the Admin](./media/domains-admin-takeover/become-admin-first.png)
-5. Add the TXT record to prove that you own the domain name **fourthcoffee.xyz** at your domain name registrar. In this example, it is GoDaddy.com.
+5. Add the TXT record to prove that you own the domain name **fourthcoffee.xyz** at your domain name registrar. In this example, it's GoDaddy.com.
![Add a txt record for the domain name](./media/domains-admin-takeover/become-admin-txt-record.png)

When the DNS TXT records are verified at your domain name registrar, you can manage the Azure AD organization.
-When you complete the preceding steps, you are now the global administrator of the Fourth Coffee organization in Microsoft 365. To integrate the domain name with your other Azure services, you can remove it from Microsoft 365 and add it to a different managed organization in Azure.
+When you complete the preceding steps, you're now the global administrator of the Fourth Coffee organization in Microsoft 365. To integrate the domain name with your other Azure services, you can remove it from Microsoft 365 and add it to a different managed organization in Azure.
### Adding the domain name to a managed organization in Azure AD

[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]

1. Open the [Microsoft 365 admin center](https://admin.microsoft.com).
-2. Select **Users** tab, and create a new user account with a name like *user\@fourthcoffeexyz.onmicrosoft.com* that does not use the custom domain name.
+2. Select **Users** tab, and create a new user account with a name like *user\@fourthcoffeexyz.onmicrosoft.com* that doesn't use the custom domain name.
3. Ensure that the new user account has Global Administrator privileges for the Azure AD organization.
4. Open the **Domains** tab in the Microsoft 365 admin center, select the domain name, and select **Remove**.
When you complete the preceding steps, you are now the global administrator of t
## External admin takeover
-If you already manage an organization with Azure services or Microsoft 365, you cannot add a custom domain name if it is already verified in another Azure AD organization. However, from your managed organization in Azure AD you can take over an unmanaged organization as an external admin takeover. The general procedure follows the article [Add a custom domain to Azure AD](../fundamentals/add-custom-domain.md).
+If you already manage an organization with Azure services or Microsoft 365, you can't add a custom domain name if it's already verified in another Azure AD organization. However, from your managed organization in Azure AD you can take over an unmanaged organization as an external admin takeover. The general procedure follows the article [Add a custom domain to Azure AD](../fundamentals/add-custom-domain.md).
When you verify ownership of the domain name, Azure AD removes the domain name from the unmanaged organization and moves it to your existing organization. External admin takeover of an unmanaged directory requires the same DNS TXT validation process as internal admin takeover. The difference is that the following are also moved over with the domain name:
The supported service plans include:
- Microsoft Stream - Dynamics 365 free trial
-External admin takeover is not supported for any service that has service plans that include SharePoint, OneDrive, or Skype For Business; for example, through an Office free subscription.
+External admin takeover isn't supported for any service that has service plans that include SharePoint, OneDrive, or Skype For Business; for example, through an Office free subscription.
> [!NOTE]
> External admin takeover is not supported across cloud boundaries (for example, Azure Commercial to Azure Government). In these scenarios, we recommend performing external admin takeover into another Azure Commercial tenant, and then deleting the domain from this tenant so you can verify successfully into the destination Azure Government tenant.
You can optionally use the [**ForceTakeover** option](#azure-ad-powershell-cmdle
For [RMS for individuals](/azure/information-protection/rms-for-individuals), when the unmanaged organization is in the same region as the organization that you own, the automatically created [Azure Information Protection organization key](/azure/information-protection/plan-implement-tenant-key) and [default protection templates](/azure/information-protection/configure-usage-rights#rights-included-in-the-default-templates) are additionally moved over with the domain name.
-The key and templates are not moved over when the unmanaged organization is in a different region. For example, if the unmanaged organization is in Europe and the organization that you own is in North America.
+The key and templates aren't moved over when the unmanaged organization is in a different region. For example, if the unmanaged organization is in Europe and the organization that you own is in North America.
-Although RMS for individuals is designed to support Azure AD authentication to open protected content, it doesn't prevent users from also protecting content. If users did protect content with the RMS for individuals subscription, and the key and templates were not moved over, that content is not accessible after the domain takeover.
+Although RMS for individuals is designed to support Azure AD authentication to open protected content, it doesn't prevent users from also protecting content. If users did protect content with the RMS for individuals subscription, and the key and templates weren't moved over, that content isn't accessible after the domain takeover.
### Azure AD PowerShell cmdlets for the ForceTakeover option

You can see these cmdlets used in the [PowerShell example](#powershell-example).

cmdlet | Usage
------ | -----
-`connect-msolservice` | When prompted, sign in to your managed organization.
-`get-msoldomain` | Shows your domain names associated with the current organization.
-`new-msoldomain -name <domainname>` | Adds the domain name to organization as Unverified (no DNS verification has been performed yet).
-`get-msoldomain` | The domain name is now included in the list of domain names associated with your managed organization, but is listed as **Unverified**.
-`get-msoldomainverificationdns -Domainname <domainname> -Mode DnsTxtRecord` | Provides the information to put into new DNS TXT record for the domain (MS=xxxxx). Verification might not happen immediately because it takes some time for the TXT record to propagate, so wait a few minutes before considering the **-ForceTakeover** option.
-`confirm-msoldomain -Domainname <domainname> -ForceTakeover Force` | <li>If your domain name is still not verified, you can proceed with the **-ForceTakeover** option. It verifies that the TXT record was created and kicks off the takeover process.<li>The **-ForceTakeover** option should be added to the cmdlet only when forcing an external admin takeover, such as when the unmanaged organization has Microsoft 365 services blocking the takeover.
-`get-msoldomain` | The domain list now shows the domain name as **Verified**.
+`connect-mggraph` | When prompted, sign in to your managed organization.
+`get-mgdomain` | Shows your domain names associated with the current organization.
+`new-mgdomain -BodyParameter @{Id="<your domain name>"; IsDefault=$false}` | Adds the domain name to the organization as Unverified (no DNS verification has been performed yet).
+`get-mgdomain` | The domain name is now included in the list of domain names associated with your managed organization, but is listed as **Unverified**.
+`Get-MgDomainVerificationDnsRecord` | Provides the information to put into the new DNS TXT record for the domain (MS=xxxxx). Verification might not happen immediately because it takes some time for the TXT record to propagate, so wait a few minutes before considering the **-ForceTakeover** option.
+`confirm-mgdomain -DomainId <domainname>` | - If your domain name is still not verified, you can proceed with the **-ForceTakeover** option. It verifies that the TXT record was created and kicks off the takeover process.<br>- The **-ForceTakeover** option should be added to the cmdlet only when forcing an external admin takeover, such as when the unmanaged organization has Microsoft 365 services blocking the takeover.
+`get-mgdomain` | The domain list now shows the domain name as **Verified**.
> [!NOTE]
> The unmanaged Azure AD organization is deleted 10 days after you exercise the external takeover force option.

### PowerShell example
-1. Connect to Azure AD using the credentials that were used to respond to the self-service offering:
+1. Connect to Microsoft Graph using the credentials that were used to respond to the self-service offering:
```powershell
- Install-Module -Name MSOnline
- $msolcred = get-credential
-
- connect-msolservice -credential $msolcred
+ Install-Module -Name Microsoft.Graph
+
+ Connect-MgGraph -Scopes "User.ReadWrite.All","Domain.ReadWrite.All"
   ```
2. Get a list of domains:
   ```powershell
- Get-MsolDomain
+ Get-MgDomain
```
-3. Run the Get-MsolDomainVerificationDns cmdlet to create a challenge:
+3. Run the New-MgDomain cmdlet to add a new domain in Azure:
```powershell
-   Get-MsolDomainVerificationDns -DomainName *your_domain_name* -Mode DnsTxtRecord
+   New-MgDomain -BodyParameter @{Id="<your domain name>"; IsDefault=$false}
```
- For example:
+4. Run the Get-MgDomainVerificationDnsRecord cmdlet to view the DNS challenge:
+ ```powershell
+ (Get-MgDomainVerificationDnsRecord -DomainId "<your domain name>" | ?{$_.recordtype -eq "Txt"}).AdditionalProperties.text
```
-   Get-MsolDomainVerificationDns -DomainName contoso.com -Mode DnsTxtRecord
+ For example:
+ ```powershell
+ (Get-MgDomainVerificationDnsRecord -DomainId "contoso.com" | ?{$_.recordtype -eq "Txt"}).AdditionalProperties.text
   ```
5. Copy the value (the challenge) that is returned from this command. For example:
   ```powershell
- MS=32DD01B82C05D27151EA9AE93C5890787F0E65D9
+ MS=ms18939161
   ```
6. In your public DNS namespace, create a DNS TXT record that contains the value that you copied in the previous step. The name for this record is the name of the parent domain, so if you create this resource record by using the DNS role from Windows Server, leave the Record name blank and just paste the value into the Text box.
-6. Run the Confirm-MsolDomain cmdlet to verify the challenge:
+7. Run the Confirm-MgDomain cmdlet to verify the challenge:
```powershell
-   Confirm-MsolDomain -DomainName *your_domain_name* -ForceTakeover Force
+ Confirm-MgDomain -DomainId "<your domain name>"
``` For example: ```powershell
-   Confirm-MsolDomain -DomainName contoso.com -ForceTakeover Force
+ Confirm-MgDomain -DomainId "contoso.com"
   ```

A successful challenge returns you to the prompt without an error.
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
This feature can be used in the Azure portal, Microsoft Graph, and in PowerShell
1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has Global Administrator, Intune Administrator, or User Administrator role permissions.
1. Select **Azure Active Directory** > **Groups**, and then select **New group**.
1. Fill in group details. The group type can be Security or Microsoft 365, and the membership type can be set to **Dynamic User** or **Dynamic Device**.
-1. Select **Add dynamic query**.
+1. Select **Add dynamic query**.
1. MemberOf isn't yet supported in the rule builder. Select **Edit** to write the rule in the **Rule syntax** box.
1. Example user rule: `user.memberof -any (group.objectId -in ['groupId', 'groupId'])`
1. Example device rule: `device.memberof -any (group.objectId -in ['groupId', 'groupId'])`
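For reference, the same kind of group can be created programmatically. The following is a minimal Microsoft Graph PowerShell sketch, where the display name, mail nickname, and group IDs are illustrative placeholders:

```powershell
# Minimal sketch: create a dynamic security group with a memberOf rule.
# Display name, mail nickname, and group IDs are illustrative placeholders.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

New-MgGroup -DisplayName "Nested membership group" `
    -MailEnabled:$false -MailNickname "nestedmembership" -SecurityEnabled `
    -GroupTypes "DynamicMembership" `
    -MembershipRule "user.memberof -any (group.objectId -in ['<groupId1>', '<groupId2>'])" `
    -MembershipRuleProcessingState "On"
```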
active-directory Groups Settings V2 Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md
To disable group creation for non-admin users:
2. If it returns `UsersPermissionToCreateGroupsEnabled : True`, then non-admin users can create groups. To disable this feature:
- ```powershell
+ ```powershell
   Set-MsolCompanySettings -UsersPermissionToCreateGroupsEnabled $False
   ```
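To confirm the change took effect, you can read the setting back. A quick check, assuming the same MSOnline session:

```powershell
# Read the setting back; False means non-admin users can no longer create groups.
(Get-MsolCompanyInformation).UsersPermissionToCreateGroupsEnabled
```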
active-directory Allow Deny List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/allow-deny-list.md
If the module is not installed, or you don't have a required version, do one of
- If no results are returned, run the following command to install the latest version of the AzureADPreview module:
- ```powershell
+ ```powershell
   Install-Module AzureADPreview
   ```

- If only the AzureAD module is shown in the results, run the following commands to install the AzureADPreview module:
- ```powershell
+ ```powershell
   Uninstall-Module AzureAD
   Install-Module AzureADPreview
   ```
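After installing the module, sign in before making any policy changes. A minimal sketch; the B2BManagementPolicy type shown is an assumption about where this article's allow/deny list settings are stored:

```powershell
# Sign in, then list any existing B2B management policy (assumed policy type).
Connect-AzureAD
Get-AzureADPolicy | Where-Object { $_.Type -eq "B2BManagementPolicy" }
```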
active-directory B2b Quickstart Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-invite-powershell.md
description: In this quickstart, you learn how to use PowerShell to send an invi
- Previously updated : 03/21/2023+ Last updated : 07/31/2023
Remove-MgUser -UserId '3f80a75e-750b-49aa-a6b0-d9bf6df7b4c6'
## Next steps
-In this quickstart, you invited and added a single guest user to your directory using PowerShell. Next, learn how to [invite guest users in bulk using PowerShell](tutorial-bulk-invite.md).
+In this quickstart, you invited and added a single guest user to your directory using PowerShell. You can also invite a guest user using the [Azure portal](b2b-quickstart-add-guest-users-portal.md). Additionally, you can [invite guest users in bulk using PowerShell](tutorial-bulk-invite.md).
active-directory Bulk Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md
Previously updated : 11/18/2022 Last updated : 07/31/2023 -+ # Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
# Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users
-If you use [Azure Active Directory (Azure AD) B2B collaboration](what-is-b2b.md) to work with external partners, you can invite multiple guest users to your organization at the same time [via the portal](tutorial-bulk-invite.md) or via PowerShell. In this tutorial, you learn how to use PowerShell to send bulk invitations to external users. Specifically, you do the following:
+If you use Azure Active Directory (Azure AD) B2B collaboration to work with external partners, you can invite multiple guest users to your organization at the same time via the portal or via PowerShell. In this tutorial, you learn how to use PowerShell to send bulk invitations to external users. Specifically, you do the following:
> [!div class="checklist"]
> * Prepare a comma-separated value (.csv) file with the user information
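As a preview of the pattern this tutorial builds toward, here's a minimal sketch of the bulk-send loop. The .csv path and column names (Name, InvitedUserEmailAddress) are illustrative assumptions:

```powershell
# Minimal sketch: read invitees from a .csv file and send each an invitation.
# The file path and column names are illustrative assumptions.
$invitations = Import-Csv -Path "C:\BulkInvite\Invitations.csv"

foreach ($invite in $invitations) {
    New-AzureADMSInvitation `
        -InvitedUserDisplayName $invite.Name `
        -InvitedUserEmailAddress $invite.InvitedUserEmailAddress `
        -InviteRedirectUrl "https://myapps.microsoft.com" `
        -SendInvitationMessage $true
}
```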
To verify that the invited users were added to Azure AD, run the following comma
Get-AzureADUser -Filter "UserType eq 'Guest'"
```
-You should see the users that you invited listed, with a [user principal name (UPN)](../hybrid/plan-connect-userprincipalname.md#what-is-userprincipalname) in the format *emailaddress*#EXT#\@*domain*. For example, *lstokes_fabrikam.com#EXT#\@contoso.onmicrosoft.com*, where contoso.onmicrosoft.com is the organization from which you sent the invitations.
+You should see the users that you invited listed, with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example, *msullivan_fabrikam.com#EXT#\@contoso.onmicrosoft.com*, where contoso.onmicrosoft.com is the organization from which you sent the invitations.
## Clean up resources
When no longer needed, you can delete the test user accounts in the directory. R
Remove-AzureADUser -ObjectId "<UPN>"
```
-For example: `Remove-AzureADUser -ObjectId "lstokes_fabrikam.com#EXT#@contoso.onmicrosoft.com"`
+For example: `Remove-AzureADUser -ObjectId "msullivan_fabrikam.com#EXT#@contoso.onmicrosoft.com"`
## Next steps
-In this tutorial, you sent bulk invitations to guest users outside of your organization. Next, learn how the invitation redemption process works and how to enforce MFA for guest users.
+In this tutorial, you sent bulk invitations to guest users outside of your organization. Next, learn how to bulk invite guest users via the portal and how to enforce MFA for them.
-- [Learn about the Azure AD B2B collaboration invitation redemption process](redemption-experience.md)
+- [Bulk invite guest users via the portal](tutorial-bulk-invite.md)
- [Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)
active-directory Concept Branding Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/concept-branding-customers.md
The customer tenant is unique in that it doesn't have any default branding, but
The following list and image outline the elements of the default Microsoft sign-in experience in an Azure AD tenant:
-1. Microsoft background image and color.
-2. Microsoft favicon.
-3. Microsoft banner logo.
-4. Footer as a page layout element.
-5. Microsoft footer hyperlinks, for example, Privacy & cookies, Terms of use and troubleshooting details also known as ellipsis in the right bottom corner of the screen.
-6. Microsoft overlay.
+1. Microsoft background image and color.
+2. Microsoft favicon.
+3. Microsoft banner logo.
+4. Footer as a page layout element.
+5. Microsoft footer hyperlinks, for example, Privacy & cookies, Terms of use and troubleshooting details also known as ellipsis in the right bottom corner of the screen.
+6. Microsoft overlay.
:::image type="content" source="media/how-to-customize-branding-customers/microsoft-branding.png" alt-text="Screenshot of the Azure AD default Microsoft branding." lightbox="media/how-to-customize-branding-customers/microsoft-branding.png":::
active-directory How To Customize Branding Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-branding-customers.md
Microsoft provides a neutral branding as the default for the customer tenant, wh
The following list and image outline the elements of the default Microsoft sign-in experience in an Azure AD tenant:
-1. Microsoft background image and color.
-2. Microsoft favicon.
-3. Microsoft banner logo.
-4. Footer as a page layout element.
-5. Microsoft footer hyperlinks, for example, Privacy & cookies, Terms of use and troubleshooting details also known as ellipsis in the right bottom corner of the screen.
-6. Microsoft overlay.
+1. Microsoft background image and color.
+2. Microsoft favicon.
+3. Microsoft banner logo.
+4. Footer as a page layout element.
+5. Microsoft footer hyperlinks, for example, Privacy & cookies, Terms of use and troubleshooting details also known as ellipsis in the right bottom corner of the screen.
+6. Microsoft overlay.
:::image type="content" source="media/how-to-customize-branding-customers/microsoft-branding.png" alt-text="Screenshot of the Azure AD default Microsoft branding." lightbox="media/how-to-customize-branding-customers/microsoft-branding.png":::
Before you customize any settings, the neutral default branding will appear in y
For your customer tenant, you might have different requirements for the information you want to collect during sign-up and sign-in. The customer tenant comes with a built-in set of information stored in attributes, such as Given Name, Surname, City, and Postal Code. You can create custom attributes in your customer tenant using the Microsoft Graph API or in the portal under the **Text** tab in **Company Branding**.
-1. On the **Text** tab select **Add Custom Text**.
-1. Select any of the options:
+1. On the **Text** tab select **Add Custom Text**.
+1. Select any of the options:
   - Select **Attributes** to override the default values.
   - Select **Attribute collection** to add a new attribute option that you would like to collect during the sign-up process.
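If you prefer the Microsoft Graph API route mentioned above, a hedged PowerShell sketch follows; the attribute name and description are placeholders, and this assumes the Microsoft.Graph SDK's user flow attribute cmdlet applies to your customer tenant:

```powershell
# Hedged sketch: create a custom string attribute with Microsoft Graph PowerShell.
# Display name and description are illustrative placeholders.
Connect-MgGraph -Scopes "IdentityUserFlow.ReadWrite.All"

New-MgIdentityUserFlowAttribute `
    -DisplayName "LoyaltyNumber" `
    -Description "Customer loyalty number collected during sign-up" `
    -DataType "string"
```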
When no longer needed, you can remove the sign-in customization from your custom
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.
1. In the search bar, type and select **Company branding**.
1. Under **Default sign-in experience**, select **Edit**.
-1. Remove the elements you no longer need.
-1. Once finished select **Review + save**.
+1. Remove the elements you no longer need.
+1. Once finished select **Review + save**.
1. Wait a few minutes for the changes to take effect. ## Clean up resources via the Microsoft Graph API
active-directory How To Enable Password Reset Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-enable-password-reset-customers.md
Title: Enable self-service password reset
description: Learn how to enable self-service password reset so your customers can reset their own passwords without admin assistance. -+ Previously updated : 07/12/2023 Last updated : 07/28/2023
To enable self-service password reset, you need to enable the email one-time pas
1. Select **Save**.
-## Customize the password reset flow
+### Enable the password reset link
-You can configure options for showing, hiding, or customizing the self-service password reset link on the sign-in page. For details, see [To customize self-service password reset](how-to-customize-branding-customers.md#to-customize-self-service-password-reset) in the article [Customize the neutral branding in your customer tenant](how-to-customize-branding-customers.md).
+You can hide, show, or customize the self-service password reset link on the sign-in page.
+
+1. In the search bar, type and select **Company Branding**.
+1. Under **Default sign-in**, select **Edit**.
+1. On the **Sign-in form** tab, scroll to the **Self-service password reset** section and select **Show self-service password reset**.
+
+ :::image type="content" source="media/how-to-customize-branding-customers/company-branding-self-service-password-reset.png" alt-text="Screenshot of the company branding Self-service password reset.":::
+
+1. Select **Review + save** and **Save** on the **Review** tab.
+
+For more details, check out the [Customize the neutral branding in your customer tenant](how-to-customize-branding-customers.md#to-customize-self-service-password-reset) article.
## Test self-service password reset
active-directory How To Web App Dotnet Sign In Sign Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-dotnet-sign-in-sign-out.md
After installing the NuGet packages and adding necessary code for authentication
1. Next, add a reference to `_LoginPartial` in the *Layout.cshtml* file, which is located in the same folder. It's recommended to place this after the `navbar-collapse` class as shown in the following snippet:
- ```html
+ ```html
<div class="navbar-collapse collapse d-sm-inline-flex flex-sm-row-reverse">
    <partial name="_LoginPartial" />
</div>
active-directory Tutorial Single Page App React Sign In Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-react-sign-in-prepare-app.md
All parts of the app that require authentication must be wrapped in the [`MsalPr
root.render( <App instance={msalInstance}/> );
- ```
+ ```
## Next steps
active-directory Hybrid Cloud To On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md
You must do the following:
- Assign Azure AD B2B Users to the SAML Application.

When you've completed the steps above, your app should be up and running. To test Azure AD B2B access:
-1. Open a browser and navigate to the external URL that you created when you published the app.
-2. Sign in with the Azure AD B2B account that you assigned to the app. You should be able to open the app and access it with single sign-on.
+1. Open a browser and navigate to the external URL that you created when you published the app.
+2. Sign in with the Azure AD B2B account that you assigned to the app. You should be able to open the app and access it with single sign-on.
## Access to IWA and KCD apps
The following diagram provides a high-level overview of how Azure AD Application
![Diagram of MIM and B2B script solutions.](media/hybrid-cloud-to-on-premises/MIMScriptSolution.PNG)
-1. A user from a partner organization (the Fabrikam tenant) is invited to the Contoso tenant.
-2. A guest user object is created in the Contoso tenant (for example, a user object with a UPN of guest_fabrikam.com#EXT#@contoso.onmicrosoft.com).
-3. The Fabrikam guest is imported from Contoso through MIM or through the B2B PowerShell script.
-4. A representation or "footprint" of the Fabrikam guest user object (Guest#EXT#) is created in the on-premises directory, Contoso.com, through MIM or through the B2B PowerShell script.
-5. The guest user accesses the on-premises application, app.contoso.com.
-6. The authentication request is authorized through Application Proxy, using Kerberos constrained delegation.
-7. Because the guest user object exists locally, the authentication is successful.
+1. A user from a partner organization (the Fabrikam tenant) is invited to the Contoso tenant.
+2. A guest user object is created in the Contoso tenant (for example, a user object with a UPN of guest_fabrikam.com#EXT#@contoso.onmicrosoft.com).
+3. The Fabrikam guest is imported from Contoso through MIM or through the B2B PowerShell script.
+4. A representation or "footprint" of the Fabrikam guest user object (Guest#EXT#) is created in the on-premises directory, Contoso.com, through MIM or through the B2B PowerShell script.
+5. The guest user accesses the on-premises application, app.contoso.com.
+6. The authentication request is authorized through Application Proxy, using Kerberos constrained delegation.
+7. Because the guest user object exists locally, the authentication is successful.
### Lifecycle management policies
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
However, the following scenarios should continue to work:
- Signing back into an application after the redemption process using [SAML/WS-Fed IdP](./direct-federation.md) and [Google Federation](./google-federation.md) accounts.

To unblock users who can't redeem an invitation due to a conflicting [Contact object](/graph/api/resources/contact), follow these steps:
-1. Delete the conflicting Contact object.
-2. Delete the guest user in the Azure portal (the user's "Invitation accepted" property should be in a pending state).
-3. Reinvite the guest user.
-4. Wait for the user to redeem invitation.
-5. Add the user's Contact email back into Exchange and any DLs they should be a part of.
+1. Delete the conflicting Contact object.
+2. Delete the guest user in the Azure portal (the user's "Invitation accepted" property should be in a pending state).
+3. Reinvite the guest user.
+4. Wait for the user to redeem invitation.
+5. Add the user's Contact email back into Exchange and any DLs they should be a part of.
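Steps 2 and 3 can also be scripted. A hedged Microsoft Graph PowerShell sketch, where the object ID and email address are illustrative placeholders:

```powershell
# Hedged sketch of steps 2-3: delete the pending guest, then reinvite.
# The object ID and email address are illustrative placeholders.
Connect-MgGraph -Scopes "User.ReadWrite.All"

Remove-MgUser -UserId "11111111-2222-3333-4444-555555555555"

New-MgInvitation `
    -InvitedUserEmailAddress "guest@fabrikam.com" `
    -InviteRedirectUrl "https://myapps.microsoft.com" `
    -SendInvitationMessage:$true
```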
## Invitation redemption flow
active-directory Tutorial Bulk Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md
Previously updated : 07/04/2023 Last updated : 07/31/2023 -+ # Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
active-directory Groups View Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/groups-view-azure-portal.md
The group you just created is used in other articles in the Azure AD Fundamental
1. On the **Groups - All groups** page, search for the **MDM policy - West** group.
-1. Select the **MDM policy - West** group.
+1. Select the **MDM policy - West** group.
The **MDM policy - West Overview** page appears.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## July 2023
+
+### General Availability - Azure Active Directory (Azure AD) is being renamed
+
+**Type:** Changed feature
+**Service category:** N/A
+**Product capability:** End User Experiences
+
+**No action is required from you, but you may need to update some of your own documentation.**
+
+Azure AD is being renamed to Microsoft Entra ID. The name change rolls out across all Microsoft products and experiences throughout the second half of 2023.
+
+Capabilities, licensing, and usage of the product aren't changing. To make the transition seamless for you, the pricing, terms, service level agreements, URLs, APIs, PowerShell cmdlets, Microsoft Authentication Library (MSAL), and developer tooling remain the same.
+
+Learn more and get renaming details: [New name for Azure Active Directory](../fundamentals/new-name.md).
+++
+### General Availability - Include/exclude My Apps in Conditional Access policies
+
+**Type:** Fixed
+**Service category:** Conditional Access
+**Product capability:** End User Experiences
+
+My Apps can now be targeted in Conditional Access policies. This resolves a top customer blocker. The functionality is available in all clouds. GA also brings a new app launcher, which improves app launch performance for SAML and other app types.
+
+Learn more about setting up Conditional Access policies here: [Azure AD Conditional Access documentation](../conditional-access/index.yml).
+++
+### General Availability - Conditional Access for Protected Actions
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+Protected actions are high-risk operations, such as altering access policies or changing trust settings, that can significantly impact an organization's security. To add an extra layer of protection, Conditional Access for Protected Actions lets organizations define specific conditions for users to perform these sensitive tasks. For more information, see: [What are protected actions in Azure AD?](../roles/protected-actions-overview.md).
+++
+### General Availability - Access Reviews for Inactive Users
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+This new feature, part of the Microsoft Entra ID Governance SKU, allows admins to review and address stale accounts that haven't been active for a specified period. Admins can set a specific duration to determine inactive accounts that weren't used for either interactive or non-interactive sign-in activities. As part of the review process, stale accounts can automatically be removed. For more information, see: [Microsoft Entra ID Governance Introduces Two New Features in Access Reviews](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-id-governance-introduces-two-new-features-in/ba-p/2466930).
+++
+### General Availability - Automatic assignments to access packages in Microsoft Entra ID Governance
+
+**Type:** Changed feature
+**Service category:** Entitlement Management
+**Product capability:** Entitlement Management
+
+Microsoft Entra ID Governance includes the ability for a customer to configure an assignment policy in an entitlement management access package that includes an attribute-based rule, similar to dynamic groups, of the users who should be assigned access. For more information, see: [Configure an automatic assignment policy for an access package in entitlement management](../governance/entitlement-management-access-package-auto-assignment-policy.md).
+++
+### General Availability - Custom Extensions in Entitlement Management
+
+**Type:** New feature
+**Service category:** Entitlement Management
+**Product capability:** Entitlement Management
+
+Custom extensions in Entitlement Management are now generally available, and allow you to extend the access lifecycle with your organization-specific processes and business logic when access is requested or about to expire. With custom extensions you can create tickets for manual access provisioning in disconnected systems, send custom notifications to additional stakeholders, or automate additional access-related configuration in your business applications such as assigning the correct sales region in Salesforce. You can also leverage custom extensions to embed external governance, risk, and compliance (GRC) checks in the access request.
+
+For more information, see:
+
+- [Microsoft Entra ID Governance Entitlement Management New Generally Available Capabilities](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-id-governance-entitlement-management-new/ba-p/2466929)
+- [Trigger Logic Apps with custom extensions in entitlement management](../governance/entitlement-management-logic-apps-integration.md)
+++
+### General Availability - Conditional Access templates
+
+**Type:** Plan for change
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+Conditional Access templates are a predefined set of conditions and controls that provide a convenient method to deploy new policies aligned with Microsoft recommendations. Customers are assured that their policies reflect modern best practices for securing corporate assets, promoting secure, optimal access for their hybrid workforce. For more information, see: [Conditional Access templates](../conditional-access/concept-conditional-access-policy-common.md).
+++
+### General Availability - Lifecycle Workflows
+
+**Type:** New feature
+**Service category:** Lifecycle Workflows
+**Product capability:** Identity Governance
+
+User identity lifecycle is a critical part of an organization's security posture, and when managed correctly, can have a positive impact on their users' productivity for Joiners, Movers, and Leavers. The ongoing digital transformation is accelerating the need for good identity lifecycle management. However, IT and security teams face enormous challenges managing the complex, time-consuming, and error-prone manual processes necessary to execute the required onboarding and offboarding tasks for hundreds of employees at once. This is an ever-present and complex issue IT admins continue to face with digital transformation across security, governance, and compliance.
+
+Lifecycle Workflows, one of our newest Microsoft Entra ID Governance capabilities, is now generally available to help organizations further optimize their user identity lifecycle. For more information, see: [Lifecycle Workflows is now generally available!](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/lifecycle-workflows-is-now-generally-available/ba-p/2466931)
+++
+### General Availability - Enabling extended customization capabilities for sign-in and sign-up pages in Company Branding capabilities.
+
+**Type:** New feature
+**Service category:** User Experience and Management
+**Product capability:** User Authentication
+
+Update the Microsoft Entra ID and Microsoft 365 sign-in experience with new Company Branding capabilities. You can apply your company's brand guidance to authentication experiences with predefined templates. For more information, see: [Company Branding](../fundamentals/how-to-customize-branding.md)
+++
+### General Availability - Enabling customization capabilities for the Self-Service Password Reset (SSPR) hyperlinks, footer hyperlinks and browser icons in Company Branding.
+
+**Type:** Changed feature
+**Service category:** User Experience and Management
+**Product capability:** End User Experiences
+
+Update the Company Branding functionality on the Microsoft Entra ID/Microsoft 365 sign-in experience to allow customizing Self-Service Password Reset (SSPR) hyperlinks, footer hyperlinks, and a browser icon. For more information, see: [Company Branding](../fundamentals/how-to-customize-branding.md)
+++
+### General Availability - User-to-Group Affiliation recommendation for group Access Reviews
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+This feature provides machine learning-based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation uses a machine learning-based scoring mechanism that compares users' relative affiliation with other users in the group, based on the organization's reporting structure. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md) and [Introducing Machine Learning based recommendations in Azure AD Access reviews](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/introducing-machine-learning-based-recommendations-in-azure-ad/ba-p/2466923)
+++
+### Public Preview - Inactive guest insights
+
+**Type:** New feature
+**Service category:** Reporting
+**Product capability:** Identity Governance
+
+Monitor guest accounts at scale with intelligent insights into inactive guest users in your organization. Customize the inactivity threshold depending on your organization's needs, narrow down the scope of guest users you want to monitor, and identify the guest users that may be inactive. For more information, see: [Monitor and clean up stale guest accounts using access reviews](../enterprise-users/clean-up-stale-guest-accounts.md).
+++
+### Public Preview - Just-in-time application access with PIM for Groups
+
+**Type:** New feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+You can minimize the number of persistent administrators in applications such as [AWS](../saas-apps/aws-single-sign-on-provisioning-tutorial.md#just-in-time-jit-application-access-with-pim-for-groups-preview)/[GCP](../saas-apps/g-suite-provisioning-tutorial.md#just-in-time-jit-application-access-with-pim-for-groups-preview) and get JIT access to groups in AWS and GCP. While PIM for Groups is publicly available, we've released a public preview that integrates PIM with provisioning and reduces the activation delay from 40+ minutes to 1 to 2 minutes.
+++
+### Public Preview - Graph beta API for PIM security alerts on Azure AD roles
+
+**Type:** New feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+Announcing API support (beta) for managing PIM security alerts for Azure AD roles. [Azure Privileged Identity Management (PIM)](../privileged-identity-management/index.yml) generates alerts when there's suspicious or unsafe activity in your organization in Azure Active Directory (Azure AD), part of Microsoft Entra. You can now manage these alerts using REST APIs. These alerts can also be [managed through the Azure portal](../privileged-identity-management/pim-resource-roles-configure-alerts.md). For more information, see: [unifiedRoleManagementAlert resource type](/graph/api/resources/unifiedrolemanagementalert).
+++
+### General Availability - Reset Password on Azure Mobile App
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** End User Experiences
+
+The Azure mobile app has been enhanced to empower admins with specific permissions to conveniently reset their users' passwords. Self Service Password Reset won't be supported at this time. However, users can still more efficiently control and streamline their authentication methods. For more information, see: [What authentication and verification methods are available in Azure Active Directory?](../authentication/concept-authentication-methods.md).
+++
+### Public Preview - API-driven inbound user provisioning
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Inbound to Azure AD
+
+With API-driven inbound provisioning, the Microsoft Entra ID provisioning service now supports integration with any system of record. Customers and partners can use any automation tool of their choice to retrieve workforce data from any system of record for provisioning into Entra ID and connected on-premises Active Directory domains. The IT admin has full control over how the data is processed and transformed with attribute mappings. Once the workforce data is available in Entra ID, the IT admin can configure appropriate joiner-mover-leaver business processes using Entra ID Governance Lifecycle Workflows. For more information, see: [API-driven inbound provisioning concepts (Public preview)](../app-provisioning/inbound-provisioning-api-concepts.md).
+++
+### Public Preview - Dynamic Groups based on EmployeeHireDate User attribute
+
+**Type:** New feature
+**Service category:** Group Management
+**Product capability:** Directory
+
+This feature enables admins to create dynamic group rules based on the user objects' employeeHireDate attribute. For more information, see: [Properties of type string](../enterprise-users/groups-dynamic-membership.md#properties-of-type-string).
+++
+### General Availability - Enhanced Create User and Invite User Experiences
+
+**Type:** Changed feature
+**Service category:** User Management
+**Product capability:** User Management
+
+We have increased the number of properties admins are able to define when creating and inviting a user in the Entra admin portal, bringing our UX to parity with our Create User APIs. Additionally, admins can now add users to a group or administrative unit, and assign roles. For more information, see: [Add or delete users using Azure Active Directory](../fundamentals/add-users-azure-active-directory.md).
+++
+### General Availability - All Users and User Profile
+
+**Type:** Changed feature
+**Service category:** User Management
+**Product capability:** User Management
+
+The All Users list now features an infinite scroll, and admins can now modify more properties in the User Profile. For more information, see: [How to create, invite, and delete users](../fundamentals/how-to-create-delete-users.md).
+++
+### Public Preview - Windows MAM
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+“*When will you have MAM for Windows?*” is one of our most frequently asked customer questions. We’re happy to report that the answer is: “Now!” We’re excited to offer this new and long-awaited MAM Conditional Access capability in Public Preview for Microsoft Edge for Business on Windows.
+
+Using MAM Conditional Access, Microsoft Edge for Business provides users with secure access to organizational data on personal Windows devices with a customizable user experience. We've combined the familiar security features of app protection policies (APP), Windows Defender client threat defense, and Conditional Access, all anchored to Azure AD identity, to ensure unmanaged devices are healthy and protected before granting data access. This can help businesses improve their security posture and protect sensitive data from unauthorized access, without requiring full mobile device enrollment.
+
+The new capability extends the benefits of app layer management to the Windows platform via Microsoft Edge for Business. Admins are empowered to configure the user experience and protect organizational data within Microsoft Edge for Business on unmanaged Windows devices.
+
+For more information, see: [Require an app protection policy on Windows devices (preview)](../conditional-access/how-to-app-protection-policy-windows.md).
+++
+### General Availability - New Federated Apps available in Azure AD Application gallery - July 2023
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In July 2023 we've added the following new applications in our App gallery with Federation support:
+
+[Gainsight SAML](../saas-apps/gainsight-saml-tutorial.md), [Dataddo](https://www.dataddo.com/), [Puzzel](https://www.puzzel.com/), [Worthix App](../saas-apps/worthix-app-tutorial.md), [iOps360 IdConnect](https://iops360.com/iops360-id-connect-azuread-single-sign-on/), [Airbase](../saas-apps/airbase-tutorial.md), [Couchbase Capella - SSO](../saas-apps/couchbase-capella-sso-tutorial.md), [SSO for Jama Connect®](../saas-apps/sso-for-jama-connect-tutorial.md), [mediment (メディメント)](https://mediment.jp/), [Netskope Cloud Exchange Administration Console](../saas-apps/netskope-cloud-exchange-administration-console-tutorial.md), [Uber](../saas-apps/uber-tutorial.md), [Plenda](https://app.plenda.nl/), [Deem Mobile](../saas-apps/deem-mobile-tutorial.md), [40SEAS](https://www.40seas.com/), [Vivantio](https://www.vivantio.com/), [AppTweak](https://www.apptweak.com/), [ioTORQ EMIS](https://www.iotorq.com/), [Vbrick Rev Cloud](../saas-apps/vbrick-rev-cloud-tutorial.md), [OptiTurn](../saas-apps/optiturn-tutorial.md), [Application Experience with Mist](https://www.mist.com/), [クラウド勤怠管理システムKING OF TIME](../saas-apps/cloud-attendance-management-system-king-of-time-tutorial.md), [Connect1](../saas-apps/connect1-tutorial.md), [DB Education Portal for Schools](../saas-apps/db-education-portal-for-schools-tutorial.md), [SURFconext](../saas-apps/surfconext-tutorial.md), [Chengliye Smart SMS Platform](../saas-apps/chengliye-smart-sms-platform-tutorial.md), [CivicEye SSO](../saas-apps/civic-eye-sso-tutorial.md), [Colloquial](../saas-apps/colloquial-tutorial.md), [BigPanda](../saas-apps/bigpanda-tutorial.md), [Foreman](https://foreman.mn/)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest.
+++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - July 2023
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Albert](../saas-apps/albert-provisioning-tutorial.md)
+- [Rhombus Systems](../saas-apps/rhombus-systems-provisioning-tutorial.md)
+- [Axiad Cloud](../saas-apps/axiad-cloud-provisioning-tutorial.md)
+- [Dagster Cloud](../saas-apps/dagster-cloud-provisioning-tutorial.md)
+- [WATS](../saas-apps/wats-provisioning-tutorial.md)
+- [Funnel Leasing](../saas-apps/funnel-leasing-provisioning-tutorial.md)
++
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
++++
+### General Availability - Microsoft Authentication Library for .NET 4.55.0
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** User Authentication
+
+Earlier this month we announced the release of [MSAL.NET 4.55.0](https://www.nuget.org/packages/Microsoft.Identity.Client/4.55.0), the latest version of the [Microsoft Authentication Library for the .NET platform](/entra/msal/dotnet/). The new version introduces support for user-assigned [managed identity](/entra/msal/dotnet/advanced/managed-identity) being specified through object IDs, CIAM authorities in the `WithTenantId` API, better error messages when dealing with cache serialization, and improved logging when using the [Windows authentication broker](/entra/msal/dotnet/acquiring-tokens/desktop-mobile/wam).
+++
+### General Availability - Microsoft Authentication Library for Python 1.23.0
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** User Authentication
+
+Earlier this month, the Microsoft Authentication Library team announced the release of [MSAL for Python version 1.23.0](https://pypi.org/project/msal/1.23.0/). The new version of the library adds support for better caching when using client credentials, eliminating the need to request new tokens repeatedly when cached tokens exist.
+
+To learn more about MSAL for Python, see: [Microsoft Authentication Library (MSAL) for Python](/entra/msal/python/).
+++

## June 2023

### Public Preview - New provisioning connectors in the Azure AD Application Gallery - June 2023
Starting today the modernized experience for viewing previously accepted terms o
**Service category:** Privileged Identity Management **Product capability:** Privileged Identity Management
-Privileged Identity Management for Groups is now generally available. With this feature, you have the ability to grant users just-in-time membership in a group, which in turn provides access to Azure Active Directory roles, Azure roles, Azure SQL, Azure Key Vault, Intune, other application roles, as well as third-party applications. Through one activation, you can conveniently assign a combination of permissions across different applications and RBAC systems.
+Privileged Identity Management for Groups is now generally available. With this feature, you have the ability to grant users just-in-time membership in a group, which in turn provides access to Azure Active Directory roles, Azure roles, Azure SQL, Azure Key Vault, Intune, other application roles, and third-party applications. Through one activation, you can conveniently assign a combination of permissions across different applications and RBAC systems.
PIM for Groups offers can also be used for just-in-time ownership. As the owner of the group, you can manage group properties, including membership. For more information, see: [Privileged Identity Management (PIM) for Groups](../privileged-identity-management/concept-pim-for-groups.md).
PIM for Groups offers can also be used for just-in-time ownership. As the owner
**Service category:** Privileged Identity Management **Product capability:** Privileged Identity Management
-The Privileged Identity Management (PIM) integration with Conditional Access authentication context is generally available. You can require users to meet a variety of requirements during role activation such as:
+The Privileged Identity Management (PIM) integration with Conditional Access authentication context is generally available. You can require users to meet various requirements during role activation such as:
- Have specific authentication method through [Authentication Strengths](../authentication/concept-authentication-strengths.md) - Activate from a compliant device
The Converged Authentication Methods Policy enables you to manage all authentica
**Service category:** Provisioning **Product capability:** Azure Active Directory Connect Cloud Sync
-Hybrid IT Admins can now sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure Active Directory, thereby, allowing customers to simply map the needed attributes using Cloud Sync's attribute mapping experience. For more information, see: [Cloud Sync directory extensions and custom attribute mapping](../hybrid/cloud-sync/custom-attribute-mapping.md).
+Hybrid IT Admins can now sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure Active Directory, thereby allowing customers to map the needed attributes using Cloud Sync's attribute mapping experience. For more information, see: [Cloud Sync directory extensions and custom attribute mapping](../hybrid/cloud-sync/custom-attribute-mapping.md).
To address this challenge, we're introducing a new system-preferred authenticati
**Service category:** User Management **Product capability:** User Management
-Admins can now define more properties when creating and inviting a user in the Entra admin portal. These improvements bring our UX to parity with our [Create User APIS](/graph/api/user-post-users). Additionally, admins can now add users to a group or administrative unit, and assign roles. For more information, see: [Add or delete users using Azure Active Directory](../fundamentals/add-users-azure-active-directory.md).
+We have increased the number of properties that admins are able to define when creating and inviting a user in the Entra admin portal. This brings our UX to parity with our Create User APIs. Additionally, admins can now add users to a group or administrative unit, and assign roles. For more information, see: [How to create, invite, and delete users](../fundamentals/how-to-create-delete-users.md).
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
This article describes how to create one or more access reviews for group member
## Prerequisites - Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance licenses. -- Creating a review on [inactive user](review-recommendations-access-reviews.md#inactive-user-recommendations) and with [use-to-group affiliation](review-recommendations-access-reviews.md#user-to-group-affiliation) recommendations requires a Microsoft Entra ID Governance license.
+- Creating a review on inactive users with [user-to-group affiliation](review-recommendations-access-reviews.md#user-to-group-affiliation) recommendations requires a Microsoft Entra ID Governance license.
- Global administrator, User administrator, or Identity Governance administrator to create reviews on groups or applications. - Global administrators and Privileged Role administrators can create reviews on role-assignable groups. For more information, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md). - Microsoft 365 and Security group owner.
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
In some cases, you might want to directly assign specific users to an access pac
![Assignments - Add user to access package](./media/entitlement-management-access-package-assignments/assignments-add-user.png)
-1. In the **Select policy** list, select a policy that the users' future requests and lifecycle will be governed and tracked by. If you want the selected users to have different policy settings, you can select **Create new policy** to add a new policy.
+1. In the **Select policy** list, select the policy that will govern and track the users' future requests and lifecycle. If you want the selected users to have different policy settings, you can select **Create new policy** to add a new policy.
-1. Once you select a policy, you'll be able to Add users to select the users you want to assign this access package to, under the chosen policy.
+1. Once you select a policy, you can select **Add users** to choose the users you want to assign this access package to, under the chosen policy.
> [!NOTE] > If you select a policy with questions, you can only assign one user at a time. 1. Set the date and time you want the selected users' assignment to start and end. If an end date isn't provided, the policy's lifecycle settings will be used.
-1. Optionally provide a justification for your direct assignment for record keeping.
+1. Optionally provide a justification for your direct assignment for record keeping.
-1. If the selected policy includes additional requestor information, select **View questions** to answer them on behalf of the users, then select **Save**.
+1. If the selected policy includes additional requestor information, select **View questions** to answer them on behalf of the users, then select **Save**.
![Assignments - click view questions](./media/entitlement-management-access-package-assignments/assignments-view-questions.png)
Entitlement management also allows you to directly assign external users to an a
**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package in which you want to add a user.
+1. In the left menu, select **Access packages** and then open the access package in which you want to add a user.
-1. In the left menu, select **Assignments**.
+1. In the left menu, select **Assignments**.
-1. Select **New assignment** to open **Add user to access package**.
+1. Select **New assignment** to open **Add user to access package**.
-1. In the **Select policy** list, select a policy that allows that is set to **For users not in your directory**
+1. In the **Select policy** list, select a policy that is set to **For users not in your directory**.
1. Select **Any user**. You'll be able to specify which users you want to assign to this access package.

   ![Assignments - Add any user to access package](./media/entitlement-management-access-package-assignments/assignments-add-any-user.png)
Entitlement management also allows you to directly assign external users to an a
> - Similarly, if you set your policy to include **All configured connected organizations**, the user's email address must be from one of your configured connected organizations. Otherwise, the user won't be added to the access package. > - If you wish to add any user to the access package, you'll need to ensure that you select **All users (All connected organizations + any external user)** when configuring your policy.
-1. Set the date and time you want the selected users' assignment to start and end. If an end date isn't provided, the policy's lifecycle settings will be used.
-1. Select **Add** to directly assign the selected users to the access package.
-1. After a few moments, select **Refresh** to see the users in the Assignments list.
+1. Set the date and time you want the selected users' assignment to start and end. If an end date isn't provided, the policy's lifecycle settings will be used.
+1. Select **Add** to directly assign the selected users to the access package.
+1. After a few moments, select **Refresh** to see the users in the Assignments list.
## Directly assigning users programmatically

### Assign a user to an access package with Microsoft Graph
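Under the hood, a direct assignment is an access package assignment request of type AdminAdd. A hedged sketch using Invoke-MgGraphRequest; all three IDs are placeholders, and the beta endpoint and payload shape shown should be verified against the current Graph reference:

```powershell
# Hedged sketch: request an admin-direct assignment via Microsoft Graph.
# All three IDs are illustrative placeholders; verify the payload against the Graph docs.
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

$body = @{
    requestType = "AdminAdd"
    accessPackageAssignment = @{
        targetId           = "<user object ID>"
        assignmentPolicyId = "<policy ID>"
        accessPackageId    = "<access package ID>"
    }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests" `
    -Body $body
```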
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md
Then, create the access package:
```powershell
$params = @{
- CatalogId = $catalog.id
- DisplayName = "sales reps"
- Description = "outside sales representatives"
+ CatalogId = $catalog.id
+ DisplayName = "sales reps"
+ Description = "outside sales representatives"
}
$ap = New-MgEntitlementManagementAccessPackage -BodyParameter $params
After you create the access package, assign the resource roles to it. For examp
```powershell
$rparams = @{
- AccessPackageResourceRole = @{
- OriginId = $rr[2].OriginId
- DisplayName = $rr[2].DisplayName
- OriginSystem = $rr[2].OriginSystem
- AccessPackageResource = @{
- Id = $rsc[0].Id
- ResourceType = $rsc[0].ResourceType
- OriginId = $rsc[0].OriginId
- OriginSystem = $rsc[0].OriginSystem
- }
- }
- AccessPackageResourceScope = @{
- OriginId = $rsc[0].OriginId
- OriginSystem = $rsc[0].OriginSystem
- }
+ AccessPackageResourceRole = @{
+ OriginId = $rr[2].OriginId
+ DisplayName = $rr[2].DisplayName
+ OriginSystem = $rr[2].OriginSystem
+ AccessPackageResource = @{
+ Id = $rsc[0].Id
+ ResourceType = $rsc[0].ResourceType
+ OriginId = $rsc[0].OriginId
+ OriginSystem = $rsc[0].OriginSystem
+ }
+ }
+ AccessPackageResourceScope = @{
+ OriginId = $rsc[0].OriginId
+ OriginSystem = $rsc[0].OriginSystem
+ }
} New-MgEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $ap.Id -BodyParameter $rparams ```
Finally, create the policies. In this policy, only the administrator can assign
```powershell $pparams = @{
- AccessPackageId = $ap.Id
- DisplayName = "direct"
- Description = "direct assignments by administrator"
- AccessReviewSettings = $null
- RequestorSettings = @{
- ScopeType = "NoSubjects"
- AcceptRequests = $true
- AllowedRequestors = @(
- )
- }
- RequestApprovalSettings = @{
- IsApprovalRequired = $false
- IsApprovalRequiredForExtension = $false
- IsRequestorJustificationRequired = $false
- ApprovalMode = "NoApproval"
- ApprovalStages = @(
- )
- }
+ AccessPackageId = $ap.Id
+ DisplayName = "direct"
+ Description = "direct assignments by administrator"
+ AccessReviewSettings = $null
+ RequestorSettings = @{
+ ScopeType = "NoSubjects"
+ AcceptRequests = $true
+ AllowedRequestors = @(
+ )
+ }
+ RequestApprovalSettings = @{
+ IsApprovalRequired = $false
+ IsApprovalRequiredForExtension = $false
+ IsRequestorJustificationRequired = $false
+ ApprovalMode = "NoApproval"
+ ApprovalStages = @(
+ )
+ }
} New-MgEntitlementManagementAccessPackageAssignmentPolicy -BodyParameter $pparams
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
To use entitlement management and assign users to access packages, you must have
Follow these steps to change the list of incompatible groups or other access packages for an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package which users will request.
+1. In the left menu, select **Access packages** and then open the access package that users will request.
-1. In the left menu, select **Separation of duties**.
+1. In the left menu, select **Separation of duties**.
1. If you wish to prevent users who already have another access package assignment from requesting this access package, select **Add access package** and choose the access package that those users would already be assigned.
New-MgEntitlementManagementAccessPackageIncompatibleAccessPackageByRef -AccessPa
Follow these steps to view the list of other access packages that have indicated that they're incompatible with an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. In the left menu, select **Separation of duties**.
+1. In the left menu, select **Separation of duties**.
1. Select **Incompatible With**.
If you've configured incompatible access settings on an access package that alre
Follow these steps to view the list of users who have assignments to two access packages.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package where you've configured another access package as incompatible.
+1. In the left menu, select **Access packages** and then open the access package where you've configured another access package as incompatible.
-1. In the left menu, select **Separation of duties**.
+1. In the left menu, select **Separation of duties**.
1. In the table, a non-zero value in the **Additional access** column for the second access package indicates that one or more users have assignments to both access packages.
If you're configuring incompatible access settings on an access package that alr
Follow these steps to view the list of users who have assignments to two access packages.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package where you'll be configuring incompatible assignments.
+1. In the left menu, select **Access packages** and then open the access package where you'll be configuring incompatible assignments.
-1. In the left menu, select **Assignments**.
+1. In the left menu, select **Assignments**.
1. In the **Status** field, ensure that the **Delivered** status is selected.
Follow these steps to view the list of users who have assignments to two access
1. In the navigation bar, select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package that you plan to indicate as incompatible.
+1. In the left menu, select **Access packages** and then open the access package that you plan to indicate as incompatible.
-1. In the left menu, select **Assignments**.
+1. In the left menu, select **Assignments**.
1. In the **Status** field, ensure that the **Delivered** status is selected.
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
Select-MgProfile -Name "beta"
$apid = "cdd5f06b-752a-4c9f-97a6-82f4eda6c76d" $pparams = @{
- AccessPackageId = $apid
- DisplayName = "direct"
- Description = "direct assignments by administrator"
- AccessReviewSettings = $null
- RequestorSettings = @{
- ScopeType = "NoSubjects"
- AcceptRequests = $true
- AllowedRequestors = @(
- )
- }
- RequestApprovalSettings = @{
- IsApprovalRequired = $false
- IsApprovalRequiredForExtension = $false
- IsRequestorJustificationRequired = $false
- ApprovalMode = "NoApproval"
- ApprovalStages = @(
- )
- }
+ AccessPackageId = $apid
+ DisplayName = "direct"
+ Description = "direct assignments by administrator"
+ AccessReviewSettings = $null
+ RequestorSettings = @{
+ ScopeType = "NoSubjects"
+ AcceptRequests = $true
+ AllowedRequestors = @(
+ )
+ }
+ RequestApprovalSettings = @{
+ IsApprovalRequired = $false
+ IsApprovalRequiredForExtension = $false
+ IsRequestorJustificationRequired = $false
+ ApprovalMode = "NoApproval"
+ ApprovalStages = @(
+ )
+ }
} New-MgEntitlementManagementAccessPackageAssignmentPolicy -BodyParameter $pparams ```
active-directory Entitlement Management Access Reviews Review Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-review-access.md
If there are multiple reviewers, the last submitted response is recorded. Consid
To review access for multiple users more quickly, you can use the system-generated recommendations, accepting them with a single selection. The recommendations are generated based on each user's sign-in activity.
-1. In the bar at the top of the page, select **Accept recommendations**.
+1. In the bar at the top of the page, select **Accept recommendations**.
![Select Accept recommendations](./media/entitlement-management-access-reviews-review-access/review-access-use-recommendations.png) You see a summary of the recommended actions.
-1. Select **Submit** to accept the recommendations.
+1. Select **Submit** to accept the recommendations.
## Next steps
active-directory Entitlement Management Access Reviews Self Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-self-review.md
To do an access review, you must first open the access review. Use the following
1. Select **Access reviews** on the left navigation bar to see a list of pending access reviews assigned to you.
-1. Select the review that you'd like to begin.
+1. Select the review that you'd like to begin.
## Perform the access review Once you open the access review, you can see your access. Use the following procedure to do the access review:
-1. Decide whether you still need access to the access package. For example, the project you're working on isn't complete, so you still need access to continue working on the project.
+1. Decide whether you still need access to the access package. For example, the project you're working on isn't complete, so you still need access to continue working on the project.
-1. Select **Yes** to keep your access or select **No** to remove your access.
+1. Select **Yes** to keep your access or select **No** to remove your access.
>[!NOTE] >If you stated that you no longer need access, you aren't removed from the access package immediately. You will be removed from the access package when the review ends or if an administrator stops the review.
-1. If you chose **Yes**, you may need to include a justification statement in the **Reason** box.
+1. If you chose **Yes**, you may need to include a justification statement in the **Reason** box.
-1. Select **Submit**.
+1. Select **Submit**.
You can return to the review if you change your mind and decide to change your response before the end of the review.
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
To require attributes for access requests:
![Screenshot that shows selecting Require attributes](./media/entitlement-management-catalog-create/resources-require-attributes.png)
-1. Select the attribute type:
+1. Select the attribute type:
1. **Built-in** includes Azure AD user profile attributes. 1. **Directory schema extension** provides a way to store more data in Azure AD on user objects and other directory objects. This includes groups, tenant details, and service principals. Only extension attributes on user objects can be used to send out claims to applications.
To require attributes for access requests:
> [!NOTE] > The User.mobilePhone attribute is a sensitive property that can be updated only by some administrators. Learn more at [Who can update sensitive user attributes?](/graph/api/resources/users#who-can-update-sensitive-attributes).
-1. Select the answer format you want requestors to use for their answer. Answer formats include **short text**, **multiple choice**, and **long text**.
+1. Select the answer format you want requestors to use for their answer. Answer formats include **short text**, **multiple choice**, and **long text**.
-1. If you select multiple choice, select **Edit and localize** to configure the answer options.
+1. If you select multiple choice, select **Edit and localize** to configure the answer options.
1. In the **View/edit question** pane that appears, enter the response options you want to give the requestor when they answer the question in the **Answer values** boxes. 1. Select the language for the response option. You can localize response options if you choose more languages. 1. Enter as many responses as you need, and then select **Save**.
To require attributes for access requests:
![Screenshot that shows adding localizations.](./media/entitlement-management-catalog-create/add-attributes-questions.png)
-1. If you want to add localization, select **Add localization**.
+1. If you want to add localization, select **Add localization**.
1. In the **Add localizations for question** pane, select the language code for the language in which you want to localize the question related to the selected attribute. 1. In the language you configured, enter the question in the **Localized Text** box.
To require attributes for access requests:
![Screenshot that shows saving the localizations.](./media/entitlement-management-catalog-create/attributes-add-localization.png)
-1. After all attribute information is completed on the **Require attributes** page, select **Save**.
+1. After all attribute information is completed on the **Require attributes** page, select **Save**.
### Add a Multi-Geo SharePoint site
active-directory Entitlement Management Reprocess Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md
To use entitlement management and assign users to access packages, you must have
If you have users who are in the "Delivered" state but don't have access to resources that are a part of the access package, you'll likely need to reprocess the assignments to reassign those users to the access package's resources. Follow these steps to reprocess assignments for an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package with the user assignment you want to reprocess.
+1. In the left menu, select **Access packages** and then open the access package with the user assignment you want to reprocess.
-1. Underneath **Manage** on the left side, select **Assignments**.
+1. Underneath **Manage** on the left side, select **Assignments**.
![Entitlement management in the Azure portal](./media/entitlement-management-reprocess-access-package-assignments/reprocess-access-package-assignment.png)
-1. Select all users whose assignments you wish to reprocess.
+1. Select all users whose assignments you wish to reprocess.
-1. Select **Reprocess**.
+1. Select **Reprocess**.
## Next steps
active-directory Entitlement Management Reprocess Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-requests.md
To use entitlement management and assign users to access packages, you must have
If you have a set of users whose requests are in the "Partially Delivered" or "Failed" state, you might need to reprocess some of those requests. Follow these steps to reprocess requests for an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Click **Azure Active Directory**, and then click **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, click **Access packages** and then open the access package.
-1. Underneath **Manage** on the left side, click **Requests**.
+1. Underneath **Manage** on the left side, click **Requests**.
-1. Select all users whose requests you wish to reprocess.
+1. Select all users whose requests you wish to reprocess.
-1. Click **Reprocess**.
+1. Click **Reprocess**.
## Next steps
active-directory Entitlement Management Ticketed Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-ticketed-provisioning.md
After registering your application, you must add a client secret by following th
To authorize the created application to call the [MS Graph resume API](/graph/api/accesspackageassignmentrequest-resume), follow these steps (a sample call is sketched after the steps):
-1. Navigate to the Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
+1. Navigate to the Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
1. In the left menu, select **Catalogs**.
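Once the application has been authorized on the catalog, the resume call can be sketched as follows. This assumes the Microsoft Graph PowerShell SDK; the request ID is a placeholder, and the `source` and `type` values depend on the callback contract of your custom extension (see the linked API reference for the exact body shape):

```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"

# Placeholder: the assignment request that is paused waiting on the ticketing system
$requestId = "<assignment-request-id>"

$body = @{
    source = "Contoso.Ticketing"   # placeholder source label
    type   = "TicketClosed"        # placeholder event type
} | ConvertTo-Json

Invoke-MgGraphRequest -Method POST -Body $body `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignmentRequests/$requestId/resume"
```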
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
For Microsoft Graph, the parameters for the **Send welcome email to new hire** t
|arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
+Example of usage within the workflow:
-```Example for usage within the workflow
+```json
{
- "category": "joiner",
- "continueOnError": false,
- "description": "Send welcome email to new hire",
- "displayName": "Send Welcome Email",
- "isEnabled": true,
- "taskDefinitionId": "70b29d51-b59a-4773-9280-8841dfd3f2ea",
- "arguments": [
- {
- "name": "cc",
- "value": "e94ad2cd-d590-4b39-8e46-bb4f8e293f85,ac17d108-60cd-4eb2-a4b4-084cacda33f2"
- },
- {
- "name": "customSubject",
- "value": "Welcome to the organization {{userDisplayName}}!"
- },
- {
- "name": "customBody",
- "value": "Welcome to our organization {{userGivenName}} {{userSurname}}.\n\nFor more information, reach out to your manager {{managerDisplayName}} at {{managerEmail}}."
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "joiner",
+ "continueOnError": false,
+ "description": "Send welcome email to new hire",
+ "displayName": "Send Welcome Email",
+ "isEnabled": true,
+ "taskDefinitionId": "70b29d51-b59a-4773-9280-8841dfd3f2ea",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "e94ad2cd-d590-4b39-8e46-bb4f8e293f85,ac17d108-60cd-4eb2-a4b4-084cacda33f2"
+ },
+ {
+ "name": "customSubject",
+ "value": "Welcome to the organization {{userDisplayName}}!"
+ },
+ {
+ "name": "customBody",
+ "value": "Welcome to our organization {{userGivenName}} {{userSurname}}.\n\nFor more information, reach out to your manager {{managerDisplayName}} at {{managerEmail}}."
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Send onboarding reminder email** t
|taskDefinitionId | 3C860712-2D37-42A4-928F-5C93935D26A1 | |arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
+Example of usage within the workflow:
-```Example for usage within the workflow
+```json
{
- "category": "joiner",
- "continueOnError": false,
- "description": "Send onboarding reminder email to user\u2019s manager",
- "displayName": "Send onboarding reminder email",
- "isEnabled": true,
- "taskDefinitionId": "3C860712-2D37-42A4-928F-5C93935D26A1",
- "arguments": [
- {
- "name": "cc",
- "value": "e94ad2cd-d590-4b39-8e46-bb4f8e293f85,068fa0c1-fa00-4f4f-8411-e968d921c3e7"
- },
- {
- "name": "customSubject",
- "value": "Reminder: {{userDisplayName}} is starting soon"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}} is starting soon.\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "joiner",
+ "continueOnError": false,
+ "description": "Send onboarding reminder email to user\u2019s manager",
+ "displayName": "Send onboarding reminder email",
+ "isEnabled": true,
+ "taskDefinitionId": "3C860712-2D37-42A4-928F-5C93935D26A1",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "e94ad2cd-d590-4b39-8e46-bb4f8e293f85,068fa0c1-fa00-4f4f-8411-e968d921c3e7"
+ },
+ {
+ "name": "customSubject",
+ "value": "Reminder: {{userDisplayName}} is starting soon"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}} is starting soon.\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Generate Temporary Access Pass and
|taskDefinitionId | 1b555e50-7f65-41d5-b514-5894a026d10d | |arguments | The arguments contain the "tapLifetimeMinutes" parameter (as used in the example below), which is the lifetime of the temporaryAccessPass in minutes starting at startDateTime: minimum 10, maximum 43200 (equivalent to 30 days). They also contain the "tapIsUsableOnce" parameter, which determines whether the passcode is limited to one-time use: if true, the pass can be used once; if false, the pass can be used multiple times within the temporaryAccessPass lifetime. Additionally, the optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
+Example of usage within the workflow:
-```Example for usage within the workflow
+```json
{
- "category": "joiner",
- "continueOnError": false,
- "description": "Generate Temporary Access Pass and send via email to user's manager",
- "displayName": "Generate TAP and Send Email",
- "isEnabled": true,
- "taskDefinitionId": "1b555e50-7f65-41d5-b514-5894a026d10d",
- "arguments": [
- {
- "name": "tapLifetimeMinutes",
- "value": "480"
- },
- {
- "name": "tapIsUsableOnce",
- "value": "false"
- },
- {
- "name": "cc",
- "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,9d208c40-7eb6-46ff-bebd-f30148c39b47"
- },
- {
- "name": "customSubject",
- "value": "Temporary access pass for your new employee {{userDisplayName}}"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nPlease find the temporary access pass for your new employee {{userDisplayName}} below:\n\n{{temporaryAccessPass}}\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "joiner",
+ "continueOnError": false,
+ "description": "Generate Temporary Access Pass and send via email to user's manager",
+ "displayName": "Generate TAP and Send Email",
+ "isEnabled": true,
+ "taskDefinitionId": "1b555e50-7f65-41d5-b514-5894a026d10d",
+ "arguments": [
+ {
+ "name": "tapLifetimeMinutes",
+ "value": "480"
+ },
+ {
+ "name": "tapIsUsableOnce",
+ "value": "false"
+ },
+ {
+ "name": "cc",
+ "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,9d208c40-7eb6-46ff-bebd-f30148c39b47"
+ },
+ {
+ "name": "customSubject",
+ "value": "Temporary access pass for your new employee {{userDisplayName}}"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nPlease find the temporary access pass for your new employee {{userDisplayName}} below:\n\n{{temporaryAccessPass}}\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph the parameters for the **Send email to notify manager of use
|taskDefinitionId | aab41899-9972-422a-9d97-f626014578b7 | |arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
-```Example for usage within the workflow
+Example of usage within the workflow:
+
+```json
{
- "category": "mover",
- "continueOnError": false,
- "description": "Send email to notify user\u2019s manager of user move",
- "displayName": "Send email to notify manager of user move",
- "isEnabled": true,
- "taskDefinitionId": "aab41899-9972-422a-9d97-f626014578b7",
- "arguments": [
- {
- "name": "cc",
- "value": "ac17d108-60cd-4eb2-a4b4-084cacda33f2,7d3ee937-edcc-46b0-9e2c-f832e01231ea"
- },
- {
- "name": "customSubject",
- "value": "{{userDisplayName}} has moved"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nwe are reaching out to let you know {{userDisplayName}} has moved in the organization.\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "mover",
+ "continueOnError": false,
+ "description": "Send email to notify user\u2019s manager of user move",
+ "displayName": "Send email to notify manager of user move",
+ "isEnabled": true,
+ "taskDefinitionId": "aab41899-9972-422a-9d97-f626014578b7",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "ac17d108-60cd-4eb2-a4b4-084cacda33f2,7d3ee937-edcc-46b0-9e2c-f832e01231ea"
+ },
+ {
+ "name": "customSubject",
+ "value": "{{userDisplayName}} has moved"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nwe are reaching out to let you know {{userDisplayName}} has moved in the organization.\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Request user access package assign
|taskDefinitionId | c1ec1e76-f374-4375-aaa6-0bb6bd4c60be | |arguments | The arguments contain two name parameters: "assignmentPolicyId" and "accessPackageId". |
+Example of usage within the workflow:
-```Example for usage within the workflow
+```json
{
- "category": "joiner,mover",
- "continueOnError": false,
- "description": "Request user assignment to selected access package",
- "displayName": "Request user access package assignment",
- "isEnabled": true,
- "taskDefinitionId": "c1ec1e76-f374-4375-aaa6-0bb6bd4c60be",
- "arguments": [
- {
- "name": "assignmentPolicyId",
- "value": "00d6fd25-6695-4f4a-8186-e4c6f901d2c1"
- },
- {
- "name": "accessPackageId",
- "value": "2ae5d6e5-6cbe-4710-82f2-09ef6ffff0d0"
- }
- ]
+ "category": "joiner,mover",
+ "continueOnError": false,
+ "description": "Request user assignment to selected access package",
+ "displayName": "Request user access package assignment",
+ "isEnabled": true,
+ "taskDefinitionId": "c1ec1e76-f374-4375-aaa6-0bb6bd4c60be",
+ "arguments": [
+ {
+ "name": "assignmentPolicyId",
+ "value": "00d6fd25-6695-4f4a-8186-e4c6f901d2c1"
+ },
+ {
+ "name": "accessPackageId",
+ "value": "2ae5d6e5-6cbe-4710-82f2-09ef6ffff0d0"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Remove access package assignment f
Example of usage within the workflow:

```json
{
- "category": "leaver,mover",
- "continueOnError": false,
- "description": "Remove user assignment of selected access package",
- "displayName": "Remove access package assignment for user",
- "isEnabled": true,
- "taskDefinitionId": "4a0b64f2-c7ec-46ba-b117-18f262946c50",
- "arguments": [
- {
- "name": "accessPackageId",
- "value": "2ae5d6e5-6cbe-4710-82f2-09ef6ffff0d0"
- }
- ]
+ "category": "leaver,mover",
+ "continueOnError": false,
+ "description": "Remove user assignment of selected access package",
+ "displayName": "Remove access package assignment for user",
+ "isEnabled": true,
+ "taskDefinitionId": "4a0b64f2-c7ec-46ba-b117-18f262946c50",
+ "arguments": [
+ {
+ "name": "accessPackageId",
+ "value": "2ae5d6e5-6cbe-4710-82f2-09ef6ffff0d0"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Remove all access package assignme
|description | Remove all access packages assigned to the user (Customizable by user) | |taskDefinitionId | 42ae2956-193d-4f39-be06-691b8ac4fa1d |
+Example of usage within the workflow:
-```Example for usage within the workflow
+```json
{
- "category": "leaver",
- "continueOnError": false,
- "description": "Remove all access packages assigned to the user",
- "displayName": "Remove all access package assignments for user",
- "isEnabled": true,
- "taskDefinitionId": "42ae2956-193d-4f39-be06-691b8ac4fa1d",
- "arguments": []
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Remove all access packages assigned to the user",
+ "displayName": "Remove all access package assignments for user",
+ "isEnabled": true,
+ "taskDefinitionId": "42ae2956-193d-4f39-be06-691b8ac4fa1d",
+ "arguments": []
} ```
For Microsoft Graph, the parameters for the **Cancel all pending access package
|taskDefinitionId | 498770d9-bab7-4e4c-b73d-5ded82a1d0b3 |
-```Example for usage within the workflow
+Example of usage within the workflow:
+
+```json
{
- "category": "leaver",
- "continueOnError": false,
- "description": "Cancel all access package assignment requests pending for the user",
- "displayName": "Cancel all pending access package assignment requests for user",
- "isEnabled": true,
- "taskDefinitionId": "498770d9-bab7-4e4c-b73d-5ded82a1d0b3",
- "arguments": []
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Cancel all access package assignment requests pending for the user",
+ "displayName": "Cancel all pending access package assignment requests for user",
+ "isEnabled": true,
+ "taskDefinitionId": "498770d9-bab7-4e4c-b73d-5ded82a1d0b3",
+ "arguments": []
} ```
For Microsoft Graph the parameters for the **Send email before user's last day**
|taskDefinitionId | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 | |arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
-```Example for usage within the workflow
+Example of usage within the workflow:
+
+```json
{
- "category": "leaver",
- "continueOnError": false,
- "description": "Send offboarding email to userΓÇÖs manager before the last day of work",
- "displayName": "Send email before userΓÇÖs last day",
- "isEnabled": true,
- "taskDefinitionId": "52853a3e-f4e5-4eb8-bb24-1ac09a1da935",
- "arguments": [
- {
- "name": "cc",
- "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,e94ad2cd-d590-4b39-8e46-bb4f8e293f85"
- },
- {
- "name": "customSubject",
- "value": "Reminder that {{userDisplayName}}'s last day is coming up"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}}'s last day is coming up.\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Send offboarding email to userΓÇÖs manager before the last day of work",
+ "displayName": "Send email before userΓÇÖs last day",
+ "isEnabled": true,
+ "taskDefinitionId": "52853a3e-f4e5-4eb8-bb24-1ac09a1da935",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,e94ad2cd-d590-4b39-8e46-bb4f8e293f85"
+ },
+ {
+ "name": "customSubject",
+ "value": "Reminder that {{userDisplayName}}'s last day is coming up"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}}'s last day is coming up.\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Send email on user last day** task
|taskDefinitionId | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 | |arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
-```Example for usage within the workflow
+Example of usage within the workflow:
+
+```json
{
- "category": "leaver",
- "continueOnError": false,
- "description": "Send offboarding email to userΓÇÖs manager on the last day of work",
- "displayName": "Send email on userΓÇÖs last day",
- "isEnabled": true,
- "taskDefinitionId": "9c0a1eaf-5bda-4392-9d9e-6e155bb57411",
- "arguments": [
- {
- "name": "cc",
- "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,e94ad2cd-d590-4b39-8e46-bb4f8e293f85"
- },
- {
- "name": "customSubject",
- "value": "{{userDisplayName}}'s last day"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}}'s last day is today and their access will be revoked.\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Send offboarding email to userΓÇÖs manager on the last day of work",
+ "displayName": "Send email on userΓÇÖs last day",
+ "isEnabled": true,
+ "taskDefinitionId": "9c0a1eaf-5bda-4392-9d9e-6e155bb57411",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,e94ad2cd-d590-4b39-8e46-bb4f8e293f85"
+ },
+ {
+ "name": "customSubject",
+ "value": "{{userDisplayName}}'s last day"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}}'s last day is today and their access will be revoked.\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Send email to users manager after
|taskDefinitionId | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce | |arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
-```Example for usage within the workflow
+Example of usage within the workflow:
+
+```json
{
- "category": "leaver",
- "continueOnError": false,
- "description": "Send offboarding email to userΓÇÖs manager after the last day of work",
- "displayName": "Send email after userΓÇÖs last day",
- "isEnabled": true,
- "taskDefinitionId": "6f22ddd4-b3a5-47a4-a846-0d7c201a49ce",
- "arguments": [
- {
- "name": "cc",
- "value": "ac17d108-60cd-4eb2-a4b4-084cacda33f2,7d3ee937-edcc-46b0-9e2c-f832e01231ea"
- },
- {
- "name": "customSubject",
- "value": "{{userDisplayName}}'s accounts will be deleted today"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}} left the organization a while ago and today their disabled accounts will be deleted.\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Send offboarding email to userΓÇÖs manager after the last day of work",
+ "displayName": "Send email after userΓÇÖs last day",
+ "isEnabled": true,
+ "taskDefinitionId": "6f22ddd4-b3a5-47a4-a846-0d7c201a49ce",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "ac17d108-60cd-4eb2-a4b4-084cacda33f2,7d3ee937-edcc-46b0-9e2c-f832e01231ea"
+ },
+ {
+ "name": "customSubject",
+ "value": "{{userDisplayName}}'s accounts will be deleted today"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}} left the organization a while ago and today their disabled accounts will be deleted.\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-prerequisites.md
You need the following to use Azure AD Connect cloud sync:
A group Managed Service Account (gMSA) is a managed domain account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate management to other administrators, and it extends this functionality over multiple servers. Azure AD Connect Cloud Sync supports and uses a gMSA for running the agent. You're prompted for administrative credentials during setup in order to create this account, which appears as domain\provAgentgMSA$. For more information on gMSAs, see [group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview). A sketch of pre-creating a custom gMSA follows the prerequisites below. ### Prerequisites for gMSA:
-1. The Active Directory schema in the gMSA domain's forest needs to be updated to Windows Server 2012 or later.
-2. [PowerShell RSAT modules](/windows-server/remote/remote-server-administration-tools) on a domain controller
-3. At least one domain controller in the domain must be running Windows Server 2012 or later.
-4. A domain joined server where the agent is being installed needs to be either Windows Server 2016 or later.
+1. The Active Directory schema in the gMSA domain's forest needs to be updated to Windows Server 2012 or later.
+2. [PowerShell RSAT modules](/windows-server/remote/remote-server-administration-tools) installed on a domain controller.
+3. At least one domain controller in the domain must be running Windows Server 2012 or later.
+4. The domain-joined server where the agent is installed needs to be Windows Server 2016 or later.
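As referenced above, a minimal sketch of pre-creating a custom gMSA yourself, rather than letting setup create domain\provAgentgMSA$, using the RSAT Active Directory module. The account name, DNS host name, and server group are hypothetical placeholders; run `Add-KdsRootKey` only if the forest doesn't already have a KDS root key (backdating the key makes it usable immediately, which is suitable for lab environments only):

```powershell
# One-time per forest: create a KDS root key (lab shortcut; in production, wait the default 10 hours)
Add-KdsRootKey -EffectiveTime (Get-Date).AddHours(-10)

# Create the gMSA; only members of the named group can retrieve its managed password
New-ADServiceAccount -Name "CloudSyncgMSA" `
    -DNSHostName "CloudSyncgMSA.contoso.com" `
    -PrincipalsAllowedToRetrieveManagedPassword "CloudSyncServers"
```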
### Custom gMSA account If you are creating a custom gMSA account, you need to ensure that the account has the following permissions.
active-directory Deprecated Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/deprecated-azure-ad-connect.md
We regularly update Azure AD Connect with [newer versions](reference-connect-ver
If you're still using a deprecated and unsupported version of Azure AD Connect, here's what you should do:
 - 1. Verify which version you should install. Most customers no longer need Azure AD Connect and can now use [Azure AD Cloud Sync](../cloud-sync/what-is-cloud-sync.md). Cloud sync is the next generation of sync tools to provision users and groups from AD into Azure AD. It features a lightweight agent and is fully managed from the cloud – and it upgrades to newer versions automatically, so you never have to worry about upgrading again!
 + 1. Verify which version you should install. Most customers no longer need Azure AD Connect and can now use [Azure AD Cloud Sync](../cloud-sync/what-is-cloud-sync.md). Cloud sync is the next generation of sync tools to provision users and groups from AD into Azure AD. It features a lightweight agent and is fully managed from the cloud – and it upgrades to newer versions automatically, so you never have to worry about upgrading again!
- 2. If you're not yet eligible for Azure AD Cloud Sync, please follow this [link to download](https://www.microsoft.com/download/details.aspx?id=47594) and install the latest version of Azure AD Connect. In most cases, upgrading to the latest version will only take a few moments. For more information, see [Upgrading Azure AD Connect from a previous version.](how-to-upgrade-previous-version.md).
 + 2. If you're not yet eligible for Azure AD Cloud Sync, please follow this [link to download](https://www.microsoft.com/download/details.aspx?id=47594) and install the latest version of Azure AD Connect. In most cases, upgrading to the latest version takes only a few moments. For more information, see [Upgrading Azure AD Connect from a previous version](how-to-upgrade-previous-version.md).
## Next steps
active-directory How To Connect Device Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-device-options.md
The following documentation provides information about the various device option
## Configure device options in Azure AD Connect
-1. Run Azure AD Connect. In the **Additional tasks** page, select **Configure device options**. Click **Next**.
+1. Run Azure AD Connect. On the **Additional tasks** page, select **Configure device options**. Click **Next**.
![Configure device options](./media/how-to-connect-device-options/deviceoptions.png) The **Overview** page displays the details.
The following documentation provides information about the various device option
>[!NOTE] > The new Configure device options is available only in version 1.1.819.0 and newer.
-2. After providing the credentials for Azure AD, you can chose the operation to be performed on the Device options page.
+2. After providing the credentials for Azure AD, you can choose the operation to be performed on the **Device options** page.
![Device operations](./media/how-to-connect-device-options/deviceoptionsselection.png) ## Next steps
active-directory How To Connect Health Adfs Risky Ip Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-adfs-risky-ip-workbook.md
Additionally, it is possible for a single IP address to attempt multiple logins
- Expanded functionality from the previous Risky IP report, which will be deprecated after January 24, 2022. ## Requirements
-1. Connect Health for AD FS installed and updated to the latest agent.
-2. A Log Analytics Workspace with the "ADFSSignInLogs" stream enabled.
-3. Permissions to use the Azure AD Monitor Workbooks. To use Workbooks, you need:
+1. Connect Health for AD FS installed and updated to the latest agent.
+2. A Log Analytics Workspace with the "ADFSSignInLogs" stream enabled.
+3. Permissions to use the Azure AD Monitor Workbooks. To use Workbooks, you need:
- An Azure Active Directory tenant with a premium (P1 or P2) license. - Access to a Log Analytics Workspace and the following roles in Azure AD (if accessing Log Analytics through Azure portal): Security administrator, Security reader, Reports reader, Global administrator
Alerting threshold can be updated through Threshold Settings. To start with, sys
## Configure notification alerts using Azure Monitor Alerts through the Azure portal: [![Azure Alerts Rule](./media/how-to-connect-health-adfs-risky-ip-workbook/azure-alerts-rule-1.png)](./media/how-to-connect-health-adfs-risky-ip-workbook/azure-alerts-rule-1.png#lightbox)
-1. In the Azure portal, search for "Monitor" in the search bar to navigate to the Azure "Monitor" service. Select "Alerts" from the left menu, then "+ New alert rule".
-2. On the "Create alert rule" blade:
+1. In the Azure portal, search for "Monitor" in the search bar to navigate to the Azure "Monitor" service. Select "Alerts" from the left menu, then "+ New alert rule".
+2. On the "Create alert rule" blade:
* Scope: Click "Select resource" and select your Log Analytics workspace that contains the ADFSSignInLogs you wish to monitor. * Condition: Click "Add condition". Select "Log" for Signal type and "Log analytics" for Monitor service. Choose "Custom log search".
active-directory How To Connect Health Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-data-retrieval.md
This document describes how to use Azure AD Connect to retrieve data from Azure
To retrieve the email addresses for all of your users that are configured in Azure AD Connect Health to receive alerts, use the following steps.
-1. Start at the Azure Active Directory Connect health blade and select **Sync Services** from the left-hand navigation bar.
+1. Start at the Azure Active Directory Connect health blade and select **Sync Services** from the left-hand navigation bar.
![Sync Services](./media/how-to-connect-health-data-retrieval/retrieve1.png)
-2. Click on the **Alerts** tile.</br>
+2. Click on the **Alerts** tile.<br/>
![Alert](./media/how-to-connect-health-data-retrieval/retrieve3.png)
-3. Click on **Notification Settings**.
+3. Click on **Notification Settings**.
![Notification](./media/how-to-connect-health-data-retrieval/retrieve4.png)
-4. On the **Notification Setting** blade, you will find the list of email addresses that have been enabled as recipients for health Alert notifications.
+4. On the **Notification Setting** blade, you will find the list of email addresses that have been enabled as recipients for health Alert notifications.
![Emails](./media/how-to-connect-health-data-retrieval/retrieve5a.png) ## Retrieve all sync errors To retrieve a list of all sync errors, use the following steps.
-1. Starting on the Azure Active Directory Health blade, select **Sync Errors**.
+1. Starting on the Azure Active Directory Health blade, select **Sync Errors**.
![Sync errors](./media/how-to-connect-health-data-retrieval/retrieve6.png)
-2. In the **Sync Errors** blade, click on **Export**. This will export a list of the recorded sync errors.
+2. In the **Sync Errors** blade, click on **Export**. This will export a list of the recorded sync errors.
![Export](./media/how-to-connect-health-data-retrieval/retrieve7.png) ## Next Steps
active-directory How To Connect Health Diagnose Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-diagnose-sync-errors.md
Follow the steps from the Azure portal to narrow down the sync error details and
![Sync error diagnosis steps](./media/how-to-connect-health-diagnose-sync-errors/IIdFixSteps.png) From the Azure portal, take a few steps to identify specific fixable scenarios:
-1. Check the **Diagnose status** column. The status shows if there's a possible way to fix a sync error directly from Azure Active Directory. In other words, a troubleshooting flow exists that can narrow down the error case and potentially fix it.
+1. Check the **Diagnose status** column. The status shows if there's a possible way to fix a sync error directly from Azure Active Directory. In other words, a troubleshooting flow exists that can narrow down the error case and potentially fix it.
| Status | What does it mean? | | | --|
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-custom.md
For more information, see [Directory extensions](how-to-connect-sync-feature-dir
### Enabling single sign-on On the **Single sign-on** page, you configure single sign-on for use with password synchronization or pass-through authentication. You do this step once for each forest that's being synchronized to Azure AD. Configuration involves two steps:
-1. Create the necessary computer account in your on-premises instance of Active Directory.
-2. Configure the intranet zone of the client machines to support single sign-on.
+1. Create the necessary computer account in your on-premises instance of Active Directory.
+2. Configure the intranet zone of the client machines to support single sign-on.
#### Create the computer account in Active Directory For each forest that has been added in Azure AD Connect, you need to supply domain administrator credentials so that the computer account can be created in each forest. The credentials are used only to create the account. They aren't stored or used for any other operation. Add the credentials on the **Enable single sign-on** page, as the following image shows.
To ensure that the client signs in automatically in the intranet zone, make sure
On a computer that has Group Policy management tools (a per-user registry equivalent is sketched after these steps):
-1. Open the Group Policy management tools.
-2. Edit the group policy that will be applied to all users. For example, the Default Domain policy.
-3. Go to **User Configuration** > **Administrative Templates** > **Windows Components** > **Internet Explorer** > **Internet Control Panel** > **Security Page**. Then select **Site to Zone Assignment List**.
-4. Enable the policy. Then, in the dialog box, enter a value name of `https://autologon.microsoftazuread-sso.com` and value of `1`. Your setup should look like the following image.
+1. Open the Group Policy management tools.
+2. Edit the group policy that will be applied to all users. For example, the Default Domain policy.
+3. Go to **User Configuration** > **Administrative Templates** > **Windows Components** > **Internet Explorer** > **Internet Control Panel** > **Security Page**. Then select **Site to Zone Assignment List**.
+4. Enable the policy. Then, in the dialog box, enter a value name of `https://autologon.microsoftazuread-sso.com` and value of `1`. Your setup should look like the following image.
![Screenshot showing intranet zones.](./media/how-to-connect-install-custom/sitezone.png)
-6. Select **OK** twice.
+6. Select **OK** twice.
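For a single test machine, the same zone mapping can be applied per user directly in the registry rather than through Group Policy. A minimal sketch, assuming the standard Internet Explorer ZoneMap layout (zone value 1 is the Local intranet zone):

```powershell
# Map https://autologon.microsoftazuread-sso.com into the Local intranet zone for the current user
$path = "HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\microsoftazuread-sso.com\autologon"
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name "https" -Value 1 -PropertyType DWord -Force | Out-Null
```

Group Policy remains the approach to use at scale; the registry sketch is only a convenience for validating a single client.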
## Configuring federation with AD FS You can configure AD FS with Azure AD Connect in just a few clicks. Before you start, you need:
active-directory How To Connect Install Existing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-existing-database.md
Important notes to review before you proceed:
- You cannot have multiple Azure AD Connect servers share the same ADSync database. The "use existing database" method allows you to reuse an existing ADSync database with a new Azure AD Connect server. It does not support sharing. ## Steps to install Azure AD Connect with "use existing database" mode
-1. Download Azure AD Connect installer (AzureADConnect.MSI) to the Windows server. Double-click the Azure AD Connect installer to start installing Azure AD Connect.
-2. Once the MSI installation completes, the Azure AD Connect wizard starts with the Express mode setup. Close the screen by clicking the Exit icon.
+1. Download Azure AD Connect installer (AzureADConnect.MSI) to the Windows server. Double-click the Azure AD Connect installer to start installing Azure AD Connect.
+2. Once the MSI installation completes, the Azure AD Connect wizard starts with the Express mode setup. Close the screen by clicking the Exit icon.
![Screenshot that shows the "Welcome to Azure A D Connect" page, with "Express Settings" in the left-side menu highlighted.](./media/how-to-connect-install-existing-database/db1.png)
-3. Start a new command prompt or PowerShell session. Navigate to folder "C:\Program Files\Microsoft Azure Active Directory Connect". Run command .\AzureADConnect.exe /useexistingdatabase to start the Azure AD Connect wizard in "Use existing database" setup mode.
+3. Start a new command prompt or PowerShell session. Navigate to the folder "C:\Program Files\Microsoft Azure Active Directory Connect". Run the command `.\AzureADConnect.exe /useexistingdatabase` to start the Azure AD Connect wizard in "Use existing database" setup mode.
> [!NOTE] > Use the switch **/UseExistingDatabase** only when the database already contains data from an earlier Azure AD Connect installation. For instance, when you are moving from a local database to a full SQL Server database or when the Azure AD Connect server was rebuilt and you restored a SQL backup of the ADSync database from an earlier installation of Azure AD Connect. If the database is empty, that is, it doesn't contain any data from a previous Azure AD Connect installation, skip this step.
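The navigation and command from step 3, as a minimal sketch:

```powershell
# After exiting the express wizard, relaunch Azure AD Connect in "use existing database" mode
cd "C:\Program Files\Microsoft Azure Active Directory Connect"
.\AzureADConnect.exe /useexistingdatabase
```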
active-directory How To Connect Pta Upgrade Preview Authentication Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-upgrade-preview-authentication-agents.md
To check the versions of your Authentication Agents, on each server identified i
Before upgrading, ensure that you have the following items in place: 1. **Create cloud-only Global Administrator account**: Don't upgrade without having a cloud-only Global Administrator account to use in emergency situations where your Pass-through Authentication Agents are not working properly. Learn about [adding a cloud-only Global Administrator account](../../fundamentals/add-users-azure-active-directory.md). Doing this step is critical and ensures that you don't get locked out of your tenant.
-2. **Ensure high availability**: If not completed previously, install a second standalone Authentication Agent to provide high availability for sign-in requests, using these [instructions](how-to-connect-pta-quick-start.md#step-4-ensure-high-availability).
+2. **Ensure high availability**: If not completed previously, install a second standalone Authentication Agent to provide high availability for sign-in requests, using these [instructions](how-to-connect-pta-quick-start.md#step-4-ensure-high-availability).
## Upgrading the Authentication Agent on your Azure AD Connect server
active-directory How To Connect Pta User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-user-privacy.md
Azure AD Pass-through Authentication creates the following log type, which can c
Improve user privacy for Pass-through Authentication in two ways:
-1. Upon request, extract data for a person and remove data from that person from the installations.
-2. Ensure no data is retained beyond 48 hours.
+1. Upon request, extract data for a person and remove data from that person from the installations.
+2. Ensure no data is retained beyond 48 hours.
We strongly recommend the second option as it is easier to implement and maintain. Following are the instructions for each log type:
Foreach ($file in $files) {
To schedule this script to run every 48 hours, follow these steps (a PowerShell equivalent is sketched after the list):
-1. Save the script in a file with the ".PS1" extension.
-2. Open **Control Panel** and click on **System and Security**.
-3. Under the **Administrative Tools** heading, click on "**Schedule Tasks**".
-4. In **Task Scheduler**, right-click on “**Task Schedule Library**” and click on “**Create Basic task…**”.
-5. Enter the name for the new task and click **Next**.
-6. Select "**Daily**" for the **Task Trigger** and click **Next**.
-7. Set the recurrence to two days and click **Next**.
-8. Select "**Start a program**" as the action and click **Next**.
-9. Type "**PowerShell**" in the box for the Program/script, and in box labeled "**Add arguments (optional)**", enter the full path to the script that you created earlier, then click **Next**.
-10. The next screen shows a summary of the task you are about to create. Verify the values and click **Finish** to create the task:
+1. Save the script in a file with the ".PS1" extension.
+2. Open **Control Panel** and click on **System and Security**.
+3. Under the **Administrative Tools** heading, click on "**Schedule Tasks**".
+4. In **Task Scheduler**, right-click on “**Task Schedule Library**” and click on “**Create Basic task…**”.
+5. Enter the name for the new task and click **Next**.
+6. Select "**Daily**" for the **Task Trigger** and click **Next**.
+7. Set the recurrence to two days and click **Next**.
+8. Select "**Start a program**" as the action and click **Next**.
+9. Type "**PowerShell**" in the box for the Program/script, and in the box labeled "**Add arguments (optional)**", enter the full path to the script that you created earlier, then click **Next**.
+10. The next screen shows a summary of the task you are about to create. Verify the values and click **Finish** to create the task.
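Alternatively, the same schedule can be created from PowerShell. A minimal sketch, assuming the built-in ScheduledTasks module and a hypothetical script path and task name:

```powershell
# Run the cleanup script every two days; the script path and task name are placeholders
$action  = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Remove-PtaLogs.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -DaysInterval 2 -At 3am
Register-ScheduledTask -TaskName "Purge PTA agent logs" -Action $action -Trigger $trigger
```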
### Note about Domain controller logs
active-directory How To Connect Sso User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sso-user-privacy.md
Azure AD Seamless SSO creates the following log type, which can contain Personal
Improve user privacy for Seamless SSO in two ways:
-1. Upon request, extract data for a person and remove data from that person from the installations.
-2. Ensure no data is retained beyond 48 hours.
+1. Upon request, extract data for a person and remove data from that person from the installations.
+2. Ensure no data is retained beyond 48 hours.
We strongly recommend the second option as it is easier to implement and maintain. See following instructions for each log type:
active-directory How To Connect Sync Change The Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-change-the-configuration.md
Before enabling synchronization of the UserType attribute, you must first decide
The steps to enable synchronization of the UserType attribute can be summarized as follows (the scheduler and sync-cycle steps are sketched in PowerShell after the note below):
-1. Disable the sync scheduler and verify there is no synchronization in progress.
-2. Add the source attribute to the on-premises AD Connector schema.
-3. Add the UserType to the Azure AD Connector schema.
-4. Create an inbound synchronization rule to flow the attribute value from on-premises Active Directory.
-5. Create an outbound synchronization rule to flow the attribute value to Azure AD.
-6. Run a full synchronization cycle.
-7. Enable the sync scheduler.
+1. Disable the sync scheduler and verify there is no synchronization in progress.
+2. Add the source attribute to the on-premises AD Connector schema.
+3. Add the UserType to the Azure AD Connector schema.
+4. Create an inbound synchronization rule to flow the attribute value from on-premises Active Directory.
+5. Create an outbound synchronization rule to flow the attribute value to Azure AD.
+6. Run a full synchronization cycle.
+7. Enable the sync scheduler.
>[!NOTE] > The rest of this section covers these steps. They are described in the context of an Azure AD deployment with single-forest topology and without custom synchronization rules. If you have multi-forest topology, custom synchronization rules configured, or have a staging server, you need to adjust the steps accordingly.
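As a quick reference for the scheduler-related steps (1, 6, and 7), a sketch using the ADSync module cmdlets on the Azure AD Connect server:

```powershell
# Step 1: disable the scheduler and verify that no synchronization is in progress
Set-ADSyncScheduler -SyncCycleEnabled $false
Get-ADSyncConnectorRunStatus    # empty output means no sync is currently running

# Steps 2-5: make the schema and synchronization rule changes described in this section

# Step 6: run a full synchronization cycle
Start-ADSyncSyncCycle -PolicyType Initial

# Step 7: once the cycle completes, re-enable the scheduler
Set-ADSyncScheduler -SyncCycleEnabled $true
```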
active-directory How To Connect Sync Staging Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-staging-server.md
See the section [verify](#verify) on how to use this script.
```powershell Param(
- [Parameter(Mandatory=$true, HelpMessage="Must be a file generated using csexport 'Name of Connector' export.xml /f:x)")]
- [string]$xmltoimport="%temp%\exportedStage1a.xml",
- [Parameter(Mandatory=$false, HelpMessage="Maximum number of users per output file")][int]$batchsize=1000,
- [Parameter(Mandatory=$false, HelpMessage="Show console output")][bool]$showOutput=$false
+ [Parameter(Mandatory=$true, HelpMessage="Must be a file generated using csexport 'Name of Connector' export.xml /f:x)")]
+ [string]$xmltoimport="%temp%\exportedStage1a.xml",
+ [Parameter(Mandatory=$false, HelpMessage="Maximum number of users per output file")][int]$batchsize=1000,
+ [Parameter(Mandatory=$false, HelpMessage="Show console output")][bool]$showOutput=$false
)

#LINQ isn't loaded automatically, so force it
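# (Editor's sketch - the excerpt elides the original load statement; one standard way to load LINQ to XML:)
Add-Type -AssemblyName System.Xml.Linq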
$result=$reader = [System.Xml.XmlReader]::Create($resolvedXMLtoimport) 
$result=$reader.ReadToDescendant('cs-object')
if($result) {
+ do 
+ {
+ #create the object placeholder
+ #adding them up here means we can enforce consistency
+ $objOutputUser=New-Object psobject
+ Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name ID -Value ""
+ Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name Type -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name DN -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name operation -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name UPN -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name displayName -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name sourceAnchor -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name alias -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name primarySMTP -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name onPremisesSamAccountName -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name mail -Value ""
+
+ $user = [System.Xml.Linq.XElement]::ReadFrom($reader)
+ if ($showOutput) {Write-Host Found an exported object... -ForegroundColor Green}
+
+ #object id
+ $outID=$user.Attribute('id').Value
+ if ($showOutput) {Write-Host ID: $outID}
+ $objOutputUser.ID=$outID
+
+ #object type
+ $outType=$user.Attribute('object-type').Value
+ if ($showOutput) {Write-Host Type: $outType}
+ $objOutputUser.Type=$outType
+
+ #dn
+ $outDN= $user.Element('unapplied-export').Element('delta').Attribute('dn').Value
+ if ($showOutput) {Write-Host DN: $outDN}
+ $objOutputUser.DN=$outDN
+
+ #operation
+ $outOperation= $user.Element('unapplied-export').Element('delta').Attribute('operation').Value
+ if ($showOutput) {Write-Host Operation: $outOperation}
+ $objOutputUser.operation=$outOperation
+
+ #now that we have the basics, go get the details
+
+ foreach ($attr in $user.Element('unapplied-export-hologram').Element('entry').Elements("attr"))
+ {
+ $attrvalue=$attr.Attribute('name').Value
+ $internalvalue= $attr.Element('value').Value
+
+ switch ($attrvalue)
+ {
+ "userPrincipalName"
+ {
+ if ($showOutput) {Write-Host UPN: $internalvalue}
+ $objOutputUser.UPN=$internalvalue
+ }
+ "displayName"
+ {
+ if ($showOutput) {Write-Host displayName: $internalvalue}
+ $objOutputUser.displayName=$internalvalue
+ }
+ "sourceAnchor"
+ {
+ if ($showOutput) {Write-Host sourceAnchor: $internalvalue}
+ $objOutputUser.sourceAnchor=$internalvalue
+ }
+ "alias"
+ {
+ if ($showOutput) {Write-Host alias: $internalvalue}
+ $objOutputUser.alias=$internalvalue
+ }
+ "proxyAddresses"
+ {
+ if ($showOutput) {Write-Host primarySMTP: ($internalvalue -replace "SMTP:","")}
+ $objOutputUser.primarySMTP=$internalvalue -replace "SMTP:",""
+ }
+ }
+ }
+
+ $objOutputUsers += $objOutputUser
+
+ Write-Progress -activity "Processing ${xmltoimport} in batches of ${batchsize}" -status "Batch ${outputfilecount}: " -percentComplete (($objOutputUsers.Count / $batchsize) * 100)
+
+ #every so often, dump the processed users in case we blow up somewhere
+ if ($count % $batchsize -eq 0)
+ {
+ Write-Host Hit the maximum users processed without completion... -ForegroundColor Yellow
+
+ #export the collection of users as a CSV
+ Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
+ $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation
+
+ #increment the output file counter
+ $outputfilecount+=1
+
+ #reset the collection and the user counter
+ $objOutputUsers = $null
+ $count=0
+ }
+
+ $count+=1
+
+ #need to bail out of the loop if no more users to process
+ if ($reader.NodeType -eq [System.Xml.XmlNodeType]::EndElement)
+ {
+ break
+ }
+
+ } while ($reader.Read)
+
+ #need to write out any users that didn't get picked up in a batch of 1000
+ #export the collection of users as CSV
+ Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
+ $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation
} else {
active-directory How To Upgrade Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-upgrade-previous-version.md
These steps also work to move from Azure AD Sync or a solution with FIM + Azure
### Use a swing migration to upgrade
1. If you only have one Azure AD Connect server, if you are upgrading from Azure AD Sync, or if you are upgrading from an old version, it's a good idea to install the new version on a new Windows Server. If you already have two Azure AD Connect servers, upgrade the staging server first, and then promote the staging server to active. It's recommended to always keep a pair of active/staging servers running the same version, but it's not required.
2. If you have made a custom configuration and your staging server doesn't have it, follow the steps under [Move a custom configuration from the active server to the staging server](#move-a-custom-configuration-from-the-active-server-to-the-staging-server).
+3. Let the sync engine run full import and full synchronization on your staging server.
4. Verify that the new configuration did not cause any unexpected changes by using the steps under "Verify" in [Verify the configuration of a server](how-to-connect-sync-staging-server.md#verify-the-configuration-of-a-server). If something is not as expected, correct it, run a sync cycle, and verify the data until it looks good.
5. Before upgrading the other server, switch it to staging mode and promote the staging server to be the active server. This is the last step, "Switch active server", in the process to [Verify the configuration of a server](how-to-connect-sync-staging-server.md#verify-the-configuration-of-a-server).
6. Upgrade the server that is now in staging mode to the latest release. Follow the same steps as before to get the data and configuration upgraded. If you upgraded from Azure AD Sync, you can now turn off and decommission your old server.
active-directory Plan Connect Userprincipalname https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-connect-userprincipalname.md
The following are example scenarios of how the UPN is calculated based on the gi
![Scenario1](./media/plan-connect-userprincipalname/example1.png)
On-Premises user object:
+- mailNickName: &lt;not set&gt;
+- proxyAddresses: {SMTP:us1@contoso.com}
+- mail: us2@contoso.com
+- userPrincipalName: us3@contoso.com
Synchronized the user object to Azure AD Tenant for the first time:
- Set Azure AD MailNickName attribute to primary SMTP address prefix.
Synchronized the user object to Azure AD Tenant for the first time
- Set Azure AD UserPrincipalName attribute to MOERA.
Azure AD Tenant user object:
+- MailNickName: us1
+- UserPrincipalName: us1@contoso.onmicrosoft.com
### Scenario 2: Non-verified UPN suffix – set on-premises mailNickName attribute
![Scenario2](./media/plan-connect-userprincipalname/example2.png)
On-Premises user object:
+- mailNickName: us4
+- proxyAddresses: {SMTP:us1@contoso.com}
+- mail: us2@contoso.com
+- userPrincipalName: us3@contoso.com
Synchronize update on on-premises mailNickName attribute to Azure AD Tenant:
- Update Azure AD MailNickName attribute with on-premises mailNickName attribute.
- Because there is no update to the on-premises userPrincipalName attribute, there is no change to the Azure AD UserPrincipalName attribute.
Azure AD Tenant user object:
+- MailNickName: us4
+- UserPrincipalName: us1@contoso.onmicrosoft.com
### Scenario 3: Non-verified UPN suffix – update on-premises userPrincipalName attribute
![Scenario3](./media/plan-connect-userprincipalname/example3.png)
On-Premises user object:
+- mailNickName: us4
+- proxyAddresses: {SMTP:us1@contoso.com}
+- mail: us2@contoso.com
+- userPrincipalName: us5@contoso.com
Synchronize update on on-premises userPrincipalName attribute to Azure AD Tenant:
- An update to the on-premises userPrincipalName attribute triggers recalculation of the MOERA and the Azure AD UserPrincipalName attribute.
Synchronize update on on-premises userPrincipalName attribute to Azure AD Tenant
- Set Azure AD UserPrincipalName attribute to MOERA.
Azure AD Tenant user object:
+- MailNickName: us4
+- UserPrincipalName: us4@contoso.onmicrosoft.com
### Scenario 4: Non-verified UPN suffix – update primary SMTP address and on-premises mail attribute
![Scenario4](./media/plan-connect-userprincipalname/example4.png)
On-Premises user object:
+- mailNickName: us4
+- proxyAddresses: {SMTP:us6@contoso.com}
+- mail: us7@contoso.com
+- userPrincipalName: us5@contoso.com
Synchronize update on on-premises mail attribute and primary SMTP address to Azure AD Tenant:
- After the initial synchronization of the user object, updates to the on-premises mail attribute and the primary SMTP address will not affect the Azure AD MailNickName or the UserPrincipalName attribute.
Azure AD Tenant user object:
+- MailNickName: us4
+- UserPrincipalName: us4@contoso.onmicrosoft.com
### Scenario 5: Verified UPN suffix – update on-premises userPrincipalName attribute suffix
![Scenario5](./media/plan-connect-userprincipalname/example5.png)
On-Premises user object:
+- mailNickName: us4
+- proxyAddresses: {SMTP:us6@contoso.com}
+- mail: us7@contoso.com
+- userPrincipalName: us5@verified.contoso.com
Synchronize update on on-premises userPrincipalName attribute to the Azure AD Tenant:
- An update to the on-premises userPrincipalName attribute triggers recalculation of the Azure AD UserPrincipalName attribute.
- Set Azure AD UserPrincipalName attribute to the on-premises userPrincipalName attribute, as the UPN suffix is verified with the Azure AD Tenant.
Azure AD Tenant user object:
+- MailNickName: us4
+- UserPrincipalName: us5@verified.contoso.com
## Next Steps
- [Integrate your on-premises directories with Azure Active Directory](../whatis-hybrid-identity.md)
active-directory Reference Connect User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-user-privacy.md
Improve user privacy for Azure AD Connect installations in two ways:
+1. Upon request, extract data for a person and remove data from that person from the installations
+2. Ensure no data is retained beyond 48 hours.
The Azure AD Connect team recommends the second option since it is much easier to implement and maintain. An Azure AD Connect sync server stores the following user privacy data:
+1. Data about a person in the **Azure AD Connect database**
+2. Data in the **Windows Event log** files that may contain information about a person
+3. Data in the **Azure AD Connect installation log files** that may contain information about a person
Azure AD Connect customers should use the following guidelines when removing user data:
+1. Delete the contents of the folder that contains the Azure AD Connect installation log files on a regular basis – at least every 48 hours
+2. This product may also create Event Logs. To learn more about Event Logs, please see the [documentation here](/windows/win32/wes/windows-event-log).
Data about a person is automatically removed from the Azure AD Connect database when that person's data is removed from the source system where it originated. No specific action from administrators is required to be GDPR compliant. However, it does require that the Azure AD Connect data is synced with your data source at least every two days.
If ($File.ToUpper() -ne "$env:programdata\aadconnect\PERSISTEDSTATE.XML".toupper
### Schedule this script to run every 48 hours
Use the following steps to schedule the script to run every 48 hours.
+1. Save the script in a file with the extension **&#46;PS1**, then open the Control Panel and click on **System and Security**.
![System](./media/reference-connect-user-privacy/gdpr2.png)
+2. Under the Administrative Tools heading, click on **Schedule Tasks**.
![Task](./media/reference-connect-user-privacy/gdpr3.png)
+3. In Task Scheduler, right click on **Task Schedule Library** and click on **Create Basic task…**
+4. Enter the name for the new task and click **Next**.
+5. Select **Daily** for the task trigger and click on **Next**.
+6. Set the recurrence to **2 days** and click **Next**.
+7. Select **Start a program** as the action and click on **Next**.
+8. Type **PowerShell** in the box for the Program/script, and in the box labeled **Add arguments (optional)**, enter the full path to the script that you created earlier, then click **Next**.
+9. The next screen shows a summary of the task you are about to create. Verify the values and click **Finish** to create the task.
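If you prefer scripting the schedule over the Task Scheduler UI, an equivalent task can be registered from PowerShell. A minimal sketch, assuming the cleanup script was saved to `C:\Scripts\Cleanup-AadcLogs.ps1` (a hypothetical path):

```powershell
# Register a task that runs the cleanup script every 2 days.
$action  = New-ScheduledTaskAction -Execute 'PowerShell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Cleanup-AadcLogs.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -DaysInterval 2 -At 2am
Register-ScheduledTask -TaskName 'Clean Azure AD Connect logs' -Action $action -Trigger $trigger
```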
active-directory Tshoot Connect Largeobjecterror Usercertificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-largeobjecterror-usercertificate.md
The steps can be summarized as:
8. Export the changes to Azure AD.
9. Re-enable sync scheduler.
+### Step 1. Disable sync scheduler and verify there is no synchronization in progress
Ensure no synchronization takes place while you are in the middle of implementing a new sync rule, to avoid unintended changes being exported to Azure AD. To disable the built-in sync scheduler:
1. Start a PowerShell session on the Azure AD Connect server (a cmdlet sketch follows these steps).
Ensure no synchronization takes place while you are in the middle of implementin
1. Go to the **Operations** tab and confirm there is no operation whose status is *"in progress."*
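The scheduler cmdlets come from the ADSync module on the Azure AD Connect server; the disable side mirrors the `Set-ADSyncScheduler -SyncCycleEnabled $true` call used later in Step 8:

```powershell
# Check the current scheduler state, then suspend scheduled sync cycles.
Get-ADSyncScheduler
Set-ADSyncScheduler -SyncCycleEnabled $false
```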
+### Step 2. Find the existing outbound sync rule for userCertificate attribute
There should be an existing sync rule that is enabled and configured to export the userCertificate attribute for User objects to Azure AD. Locate this sync rule to find out its **precedence** and **scoping filter** configuration:
1. Start the **Synchronization Rules Editor** by going to START → Synchronization Rules Editor.
The new sync rule must have the same **scoping filter** and **higher precedence*
6. Click the **Add** button to create the sync rule.
+### Step 4. Verify the new sync rule on an existing object with LargeObject error
This is to verify that the sync rule created is working correctly on an existing AD object with a LargeObject error before you apply it to other objects:
1. Go to the **Operations** tab in the Synchronization Service Manager.
2. Select the most recent Export to Azure AD operation and click on one of the objects with LargeObject errors.
+3. In the Connector Space Object Properties pop-up screen, click on the **Preview** button.
4. In the Preview pop-up screen, select **Full synchronization** and click **Commit Preview**.
5. Close the Preview screen and the Connector Space Object Properties screen.
6. Go to the **Connectors** tab in the Synchronization Service Manager.
This is to verify that the sync rule created is working correctly on an existing
8. In the Run Connector pop-up, select the **Export** step and click **OK**.
9. Wait for Export to Azure AD to complete and confirm there are no more LargeObject errors on this specific object.
+### Step 5. Apply the new sync rule to remaining objects with LargeObject error
Once the sync rule has been added, you need to run a full synchronization step on the AD Connector:
1. Go to the **Connectors** tab in the Synchronization Service Manager.
2. Right-click on the **AD** Connector and select **Run...**
Once the sync rule has been added, you need to run a full synchronization step o
4. Wait for the Full Synchronization step to complete.
5. Repeat the above steps for the remaining AD Connectors if you have more than one AD Connector. Usually, multiple connectors are required if you have multiple on-premises directories.
+### Step 6. Verify there are no unexpected changes waiting to be exported to Azure AD
1. Go to the **Connectors** tab in the Synchronization Service Manager.
2. Right-click on the **Azure AD** Connector and select **Search Connector Space**.
3. In the Search Connector Space pop-up:
Once the sync rule has been added, you need to run a full synchronization step o
3. Click the **Search** button to return all objects with changes waiting to be exported to Azure AD.
4. Verify there are no unexpected changes. To examine the changes for a given object, double-click on the object.
+### Step 7. Export the changes to Azure AD
To export the changes to Azure AD:
1. Go to the **Connectors** tab in the Synchronization Service Manager.
2. Right-click on the **Azure AD** Connector and select **Run...**
3. In the Run Connector pop-up, select the **Export** step and click **OK**.
4. Wait for Export to Azure AD to complete and confirm there are no more LargeObject errors.
+### Step 8. Re-enable sync scheduler
Now that the issue is resolved, re-enable the built-in sync scheduler:
1. Start a PowerShell session.
2. Re-enable scheduled synchronization by running the cmdlet: `Set-ADSyncScheduler -SyncCycleEnabled $true`
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
Once you determine if the workload identity was compromised, dismiss the account
## Remediate risky workload identities
+1. Inventory credentials assigned to the risky workload identity, whether for the service principal or application objects.
1. Add a new credential. Microsoft recommends using x509 certificates. 1. Remove the compromised credentials. If you believe the account is at risk, we recommend removing all existing credentials.
+1. Remediate any Azure KeyVault secrets that the Service Principal has access to by rotating them.
The [Azure AD Toolkit](https://github.com/microsoft/AzureADToolkit) is a PowerShell module that can help you perform some of these actions.
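As an illustration of the add-then-remove rotation order described above (the article recommends x509 certificates; this sketch rotates a client secret instead, and `$appObjectId` and `$compromisedKeyId` are hypothetical placeholders):

```powershell
Import-Module Microsoft.Graph.Applications
Connect-MgGraph -Scopes 'Application.ReadWrite.All'

# Add a replacement secret first so the workload keeps a valid credential.
$newSecret = Add-MgApplicationPassword -ApplicationId $appObjectId `
    -PasswordCredential @{ DisplayName = 'rotated-after-compromise' }

# Then remove the compromised secret by its keyId.
Remove-MgApplicationPassword -ApplicationId $appObjectId -KeyId $compromisedKeyId
```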
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
If you already have risk policies enabled in Identity Protection, we highly reco
### Migrating to Conditional Access
+1. **Create an equivalent** [user risk-based](#user-risk-policy-in-conditional-access) and [sign-in risk-based](#sign-in-risk-policy-in-conditional-access) policy in Conditional Access in report-only mode. You can create a policy with the steps above or by using [Conditional Access templates](../conditional-access/concept-conditional-access-policy-common.md) based on Microsoft's recommendations and your organizational requirements.
1. Ensure that the new Conditional Access risk policy works as expected by testing it in [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md).
+1. **Enable** the new Conditional Access risk policy. You can choose to have both policies running side-by-side to confirm the new policies are working as expected before turning off the Identity Protection risk policies.
1. Browse back to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select this new policy to edit it.
1. Set **Enable policy** to **On** to enable the policy.
+1. **Disable** the old risk policies in Identity Protection.
1. Browse to **Azure Active Directory** > **Identity Protection** > Select the **User risk** or **Sign-in risk** policy.
1. Set **Enforce policy** to **Off**.
+1. Create other risk policies if needed in [Conditional Access](../conditional-access/concept-conditional-access-policy-common.md).
## Next steps
active-directory Id Protection Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/id-protection-dashboard.md
Customers with P2 licenses can view a comprehensive list of recommendations that
Recent Activity provides a summary of recent risk-related activities in your tenant. Possible activity types are:
+1. Attack Activity
+1. Admin Remediation Activity
+1. Self-Remediation Activity
+1. New High-Risk Users
[![Screenshot showing recent activities in the dashboard.](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-recent-activities.png)](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-recent-activities.png)
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
You'll need to consent to the `Application.ReadWrite.All` permission.
Import-Module Microsoft.Graph.Applications

$params = @{
+ Tags = @(
+ "HR"
+ "Payroll"
+ "HideApp"
+ )
+ Info = @{
+ LogoUrl = "https://cdn.pixabay.com/photo/2016/03/21/23/25/link-1271843_1280.png"
+ MarketingUrl = "https://www.contoso.com/app/marketing"
+ PrivacyStatementUrl = "https://www.contoso.com/app/privacy"
+ SupportUrl = "https://www.contoso.com/app/support"
+ TermsOfServiceUrl = "https://www.contoso.com/app/termsofservice"
+ }
+ Web = @{
+ HomePageUrl = "https://www.contoso.com/"
+ LogoutUrl = "https://www.contoso.com/frontchannel_logout"
+ RedirectUris = @(
+ "https://localhost"
+ )
+ }
+ ServiceManagementReference = "Owners aliases: Finance @ contosofinance@contoso.com; The Phone Company HR consulting @ hronsite@thephone-company.com;"
}

Update-MgApplication -ApplicationId $applicationId -BodyParameter $params
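To sanity-check the update, reading the application back is one option (a sketch using `Get-MgApplication` with the same `$applicationId`):

```powershell
# Read the application back and inspect the tags that were just written.
Get-MgApplication -ApplicationId $applicationId | Select-Object -ExpandProperty Tags
```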
active-directory Configure Linked Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-linked-sign-on.md
To configure linked-based SSO in your Azure AD tenant, you need:
## Configure linked-based single sign-on
+1. Sign in to the [Azure portal](https://portal.azure.com) with the appropriate role.
+2. Select **Azure Active Directory** in Azure Services, and then select **Enterprise applications**.
+3. Search for and select the application that you want to add linked SSO.
+4. Select **Single sign-on** and then select **Linked**.
+5. Enter the URL for the sign-in page of the application.
+6. Select **Save**.
## Next steps
active-directory Configure Password Single Sign On Non Gallery Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
To configure password-based SSO in your Azure AD tenant, you need:
## Configure password-based single sign-on
+1. Sign in to the [Azure portal](https://portal.azure.com) with the appropriate role.
+1. Select **Azure Active Directory** in Azure Services, and then select **Enterprise applications**.
+1. Search for and select the application that you want to add password-based SSO.
+1. Select **Single sign-on** and then select **Password-based**.
+1. Enter the URL for the sign-in page of the application.
+1. Select **Save**.
Azure AD parses the HTML of the sign-in page for username and password input fields. If the attempt succeeds, you're done. Your next step is to [Assign users or groups](add-application-portal-assign-users.md) to the application.
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
In Service Provider settings, define SAML SP instance settings for the SHA-prote
3. (Optional) In Security Settings, select **Enable Encryption Assertion** to enable Azure AD to encrypt issued SAML assertions. Encrypting assertions between Azure AD and BIG-IP APM provides added assurance that the content tokens can't be intercepted, and that personal or corporate data can't be compromised.
+4. In **Security Settings**, from the **Assertion Decryption Private Key** list, select **Create New**.
![Screenshot of the Create New option in the Assertion Decryption Private Key list.](./media/f5-big-ip-oracle/configure-security-create-new.png)
+5. Select **OK**.
+6. The **Import SSL Certificate and Keys** dialog appears.
+7. For **Import Type**, select **PKCS 12 (IIS)**. This action imports the certificate and private key.
+8. For **Certificate and Key Name**, select **New** and enter the input.
+9. Enter the **Password**.
+10. Select **Import**.
+11. Close the browser tab to return to the main tab.
![Screenshot of selections and entries for SSL Certificate Key Source.](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)
+12. Check the box for **Enable Encrypted Assertion**.
+13. If you enabled encryption, from the **Assertion Decryption Private Key** list, select the certificate. BIG-IP APM uses this certificate private key to decrypt Azure AD assertions.
+14. If you enabled encryption, from the **Assertion Decryption Certificate** list, select the certificate. BIG-IP uploads this certificate to Azure AD to encrypt the issued SAML assertions.
![Screenshot of two entries and one option for Security Settings.](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
Conditional Access policies control access based on device, application, locatio
To select a policy to be applied to the application being published:
+1. On the **Conditional Access Policy** tab, in the **Available Policies** list, select a policy.
+2. Select the **right arrow** and move it to the **Selected Policies** list.
> [!NOTE]
> You can select the **Include** or **Exclude** option for a policy. If both options are selected, the policy is unenforced.
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
Use the Service Provider settings to define SAML SP instance properties of the a
![Screenshot of the Create New option from the Assertion Decryption Private Key list.](./media/f5-big-ip-oracle/configure-security-create-new.png)
+5. Select **OK**.
+6. The **Import SSL Certificate and Keys** dialog appears in a new tab.
+7. To import the certificate and private key, select **PKCS 12 (IIS)**.
+8. Close the browser tab to return to the main tab.
![Screenshot of options and selections for Import SSL Certificates and Keys.](./media/f5-big-ip-easy-button-sap-erp/import-ssl-certificates-and-keys.png)
+9. For **Enable Encrypted Assertion**, check the box.
10. If you enabled encryption, from the **Assertion Decryption Private Key** list, select the private key for the certificate BIG-IP APM uses to decrypt Azure AD assertions. 11. If you enabled encryption, from the **Assertion Decryption Certificate** list, select the certificate BIG-IP uploads to Azure AD to encrypt the issued SAML assertions.
The **Selected Policies** view lists policies targeting cloud apps. You can't de
To select a policy for the application being published:
+1. From the **Available Policies** list, select the policy.
+2. Select the right arrow.
+3. Move the policy to the **Selected Policies** list.
Selected policies have an **Include** or **Exclude** option checked. If both options are checked, the selected policy isn't enforced.
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
For BIG-IP to be pre-configured and ready for SHA scenarios, provision Client an
![Screenshot of certificate, key, and chain selections.](./media/f5ve-deployment-plan/contoso-wildcard.png)
+13. Repeat steps to create an **SSL server certificate profile**.
+14. From the top ribbon, select **SSL** > **Server** > **Create**.
+15. In the **New Server SSL Profile** page, enter a unique, friendly **Name**.
+16. Ensure the Parent profile is set to **serverssl**.
+17. Select the far-right check box for the **Certificate** and **Key** rows.
+18. From the **Certificate** and **Key** drop-down lists, select your imported certificate.
+19. Select **Finished**.
![Screenshot of general properties and configuration selections.](./media/f5ve-deployment-plan/server-ssl-profile.png)
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
In the example, the resource enterprise application is Microsoft Graph of object
1. Grant the delegated permissions to the client enterprise application by running the following request.
+ ```http
POST https://graph.microsoft.com/v1.0/oauth2PermissionGrants

Request body
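Sketch only - this excerpt elides the original body; every ID below is a placeholder:
{
  "clientId": "<object ID of the client service principal>",
  "consentType": "AllPrincipals",
  "resourceId": "<object ID of the Microsoft Graph service principal>",
  "scope": "User.Read.All Group.Read.All"
}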
active-directory Grant Consent Single User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-consent-single-user.md
In the example, the resource enterprise application is Microsoft Graph of object
1. Grant the delegated permissions to the client enterprise application on behalf of the user by running the following request.
+ ```http
POST https://graph.microsoft.com/v1.0/oauth2PermissionGrants

Request body
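Sketch only - this excerpt elides the original body; every ID below is a placeholder:
{
  "clientId": "<object ID of the client service principal>",
  "consentType": "Principal",
  "principalId": "<object ID of the user consent is granted for>",
  "resourceId": "<object ID of the Microsoft Graph service principal>",
  "scope": "User.Read"
}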
active-directory Tutorial Govern Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-govern-monitor.md
To send logs to your Log Analytics workspace:
1. Select **Diagnostic settings**, and then select **Add diagnostic setting**. You can also select Export Settings from the Audit Logs or Sign-ins page to get to the diagnostic settings configuration page.
1. In the Diagnostic settings menu, select **Send to Log Analytics workspace**, and then select Configure.
1. Select the Log Analytics workspace you want to send the logs to, or create a new workspace in the provided dialog box.
+1. Select the logs that you would like to send to the workspace.
+1. Select **Save** to save the setting.
After about 15 minutes, verify that events are streamed to your Log Analytics workspace.
active-directory How To Assign Managed Identity Via Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md
The policy is designed to implement this recommendation.
When executed, the policy takes the following actions:
+1. Create, if one doesn't exist, a new built-in user-assigned managed identity in the subscription and each Azure region, based on the VMs that are in scope of the policy.
+2. Once created, put a lock on the user-assigned managed identity so that it will not be accidentally deleted.
+3. Assign the built-in user-assigned managed identity to Virtual Machines from the subscription and region based on the VMs that are in scope of the policy.
> [!NOTE]
> If the Virtual Machine has exactly one user-assigned managed identity already assigned, then the policy skips this VM to assign the built-in identity. This is to make sure assignment of the policy does not break applications that take a dependency on [the default behavior of the token endpoint on IMDS.](managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request)
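For reference, the per-VM assignment that the policy automates is roughly equivalent to the following Azure PowerShell call (a sketch; resource names and the identity resource ID are placeholders):

```powershell
# Attach an existing user-assigned managed identity to one VM.
$vm = Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'
Update-AzVM -ResourceGroupName 'myResourceGroup' -VM $vm -IdentityType UserAssigned `
    -IdentityId '/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identityName>'
```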
active-directory How To View Associated Resources For An Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-associated-resources-for-an-identity.md
Here's a sample response from the REST API:
```json
{
- "totalCount": 2,
- "value": [{
- "id": "/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.CognitiveServices/accounts/test1",
- "name": "test1",
- "type": "microsoft.cognitiveservices/accounts",
- "resourceGroup": "testrg",
- "subscriptionId": "{subId}",
- "subscriptionDisplayName": "TestSubscription"
- },
- {
- "id": "/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.CognitiveServices/accounts/test2",
- "name": "test2",
- "type": "microsoft.cognitiveservices/accounts",
- "resourceGroup": "testrg",
- "subscriptionId": "{subId}",
- "subscriptionDisplayName": "TestSubscription"
- }
- ],
- "nextLink": "https://management.azure.com/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testid?skiptoken=ew0KICAiJGlkIjogIjEiLA0KICAiTWF4Um93cyI6IDIsDQogICJSb3dzVG9Ta2lwIjogMiwNCiAgIkt1c3RvQ2x1c3RlclVybCI6ICJodHRwczovL2FybXRvcG9sb2d5Lmt1c3RvLndpbmRvd3MubmV0Ig0KfQ%253d%253d&api-version=2021"
+ "totalCount": 2,
+ "value": [
+ {
+ "id": "/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.CognitiveServices/accounts/test1",
+ "name": "test1",
+ "type": "microsoft.cognitiveservices/accounts",
+ "resourceGroup": "testrg",
+ "subscriptionId": "{subId}",
+ "subscriptionDisplayName": "TestSubscription"
+ },
+ {
+ "id": "/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.CognitiveServices/accounts/test2",
+ "name": "test2",
+ "type": "microsoft.cognitiveservices/accounts",
+ "resourceGroup": "testrg",
+ "subscriptionId": "{subId}",
+ "subscriptionDisplayName": "TestSubscription"
+ }
+ ],
+ "nextLink": "https://management.azure.com/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testid?skiptoken=ew0KICAiJGlkIjogIjEiLA0KICAiTWF4Um93cyI6IDIsDQogICJSb3dzVG9Ta2lwIjogMiwNCiAgIkt1c3RvQ2x1c3RlclVybCI6ICJodHRwczovL2FybXRvcG9sb2d5Lmt1c3RvLndpbmRvd3MubmV0Ig0KfQ%253d%253d&api-version=2021"
}
```
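When `nextLink` is non-empty, more pages remain. A minimal paging sketch, assuming `$firstPageUri` holds the initial request URI and `$token` a valid ARM access token (both placeholders), and assuming the action is invoked with POST per ARM list-action conventions:

```powershell
$results = @()
$uri = $firstPageUri
do {
    # Fetch one page and accumulate its items.
    $page = Invoke-RestMethod -Uri $uri -Method Post -Headers @{ Authorization = "Bearer $token" }
    $results += $page.value
    # Follow nextLink until the service stops returning one.
    $uri = $page.nextLink
} while ($uri)
```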
active-directory How To View Managed Identity Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity.md
System-assigned identity:
![Browse to active directory](./media/how-to-view-managed-identity-activity/browse-to-active-directory.png)
+2. Select **Sign-in logs** from the **Monitoring** section.
![Select sign-in logs](./media/how-to-view-managed-identity-activity/sign-in-logs-menu-item.png)
System-assigned identity:
![managed identity sign-in events](./media/how-to-view-managed-identity-activity/msi-sign-in-events.png)
+5. To view the identity's Enterprise application in Azure Active Directory, select the "Managed Identity ID" column.
+6. To view the Azure resource or user-assigned managed identity, search by name in the search bar of the Azure portal.
## Next steps
active-directory Tutorial Windows Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md
Using managed identities for Azure resources, your application can get access to
You'll need to use **PowerShell** in this portion. If you don't have **PowerShell** installed, download it [here](/powershell/azure/).
+1. In the portal, navigate to **Virtual Machines** and go to your Windows virtual machine and in the **Overview**, select **Connect**.
+2. Enter the **Username** and **Password** that you added when you created the Windows VM.
+3. Now that you've created a **Remote Desktop Connection** with the virtual machine, open **PowerShell** in the remote session.
+4. Using the Invoke-WebRequest cmdlet, make a request to the local managed identity for Azure resources endpoint to get an access token for Azure Resource Manager.
```powershell
$response = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"}
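# (Sketch of the typical next steps; the subscription ID and resource group below are placeholders.)
$content  = $response.Content | ConvertFrom-Json
$armToken = $content.access_token

# Call Azure Resource Manager with the token.
Invoke-WebRequest -Uri "https://management.azure.com/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>?api-version=2016-06-01" -Method GET -Headers @{ Authorization = "Bearer $armToken" }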
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
Up until January 2023, it was required that every Privileged Access Group (forme
## Making a group of users eligible for an Azure AD role
There are two ways to make a group of users eligible for an Azure AD role:
+1. Make active assignments of users to the group, and then assign the group to a role as eligible for activation.
+2. Make an active assignment of a role to a group, and assign users to be eligible for group membership.
To provide a group of users with just-in-time access to Azure AD directory roles with permissions in SharePoint, Exchange, or Security & Microsoft Purview compliance portal (for example, Exchange Administrator role), be sure to make active assignments of users to the group, and then assign the group to a role as eligible for activation (Option #1 above). If you choose to make active assignment of a group to a role and assign users to be eligible to group membership instead, it may take significant time to have all permissions of the role activated and ready to use.
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md
When you need to take on a group membership or ownership, you can request activa
:::image type="content" source="media/pim-for-groups/pim-group-7.png" alt-text="Screenshot of where to provide a justification in the Reason box." lightbox="media/pim-for-groups/pim-group-7.png":::
+1. Select **Activate**.
If the [role requires approval](pim-resource-roles-approval-workflow.md) to activate, an Azure notification appears in the upper right corner of your browser informing you the request is pending approval.
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
Follow these steps to make a user eligible member or owner of a group. You will
> For groups used for elevating into Azure AD roles, Microsoft recommends that you require an approval process for eligible member assignments. Assignments that can be activated without approval can leave you vulnerable to a security risk from another administrator with permission to reset an eligible user's passwords.
- Active assignments don't require the member to perform any activations to use the role. Members or owners assigned as active have the privileges assigned to the role at all times.
+1. If the assignment should be permanent (permanently eligible or permanently assigned), select the **Permanently** checkbox. Depending on the group's settings, the check box might not appear or might not be editable. For more information, check out the [Configure PIM for Groups settings in Privileged Identity Management](groups-role-settings.md#assignment-duration) article.
:::image type="content" source="media/pim-for-groups/pim-group-5.png" alt-text="Screenshot of where to configure the setting for add assignments." lightbox="media/pim-for-groups/pim-group-5.png":::
+1. Select **Assign**.
## Update or remove an existing role assignment
Follow these steps to update or remove an existing role assignment. You will nee
:::image type="content" source="media/pim-for-groups/pim-group-3.png" alt-text="Screenshot of where to review existing membership or ownership assignments for selected group." lightbox="media/pim-for-groups/pim-group-3.png":::
+1. Select **Update** or **Remove** to update or remove the membership or ownership assignment.
## Next steps
active-directory Groups Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-audit.md
Follow these steps to view the audit history for groups in Privileged Identity M
:::image type="content" source="media/pim-for-groups/pim-group-19.png" alt-text="Screenshot of where to select Resource audit." lightbox="media/pim-for-groups/pim-group-19.png":::
+1. Filter the history using a predefined date or custom range.
## View my audit
Follow these steps to view the audit history for groups in Privileged Identity M
:::image type="content" source="media/pim-for-groups/pim-group-20.png" alt-text="Screenshot of where to select My audit." lightbox="media/pim-for-groups/pim-group-20.png":::
+1. Filter the history using a predefined date or custom range.
## Next steps
active-directory Groups Discover Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md
You need appropriate permissions to bring groups in Azure AD PIM. For role-assig
:::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png":::
+1. Select **Discover groups** and select a group that you want to bring under management with PIM.
:::image type="content" source="media/pim-for-groups/pim-group-2.png" alt-text="Screenshot of where to select a group that you want to bring under management with PIM." lightbox="media/pim-for-groups/pim-group-2.png":::
active-directory Overview Flagged Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-flagged-sign-ins.md
Flagged sign-ins gives you the ability to enable flagging when signing in using
3. In **Troubleshooting details**, select **Enable Flagging**. The text changes to **Disable Flagging**. Flagging is now enabled.
4. Close the browser window.
5. Open a new browser window (in the same browser application) and attempt the same sign-in that failed.
+6. Reproduce the sign-in error that was seen before.
With flagging enabled, the same browser application and client must be used or the events won't be flagged.

### Admin: Find flagged events in reports
+1. In the Azure portal, go to **Sign-in logs** > **Add Filters**.
+1. From the **Pick a field** menu, select **Flagged for review** and **Apply**.
+1. All events that were flagged by users are shown.
+1. If needed, apply more filters to refine the event view.
+1. Select the event to review what happened.
### Admin or Developer: Find flagged events using MS Graph
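A sketch of what such a query can look like, assuming the beta version of the `signIns` resource, which exposes the `flaggedForReview` property:

```http
GET https://graph.microsoft.com/beta/auditLogs/signIns?$filter=flaggedForReview eq true
```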
active-directory Recommendation Migrate To Authenticator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-to-authenticator.md
The Microsoft Authenticator app is available for Android and iOS. Microsoft Auth
## Action plan
+1. Ensure that notification through mobile app and/or verification code from mobile app are available to users as authentication methods. For details, see How to Configure Verification Options.
-2. Educate users on how to add a work or school account.
+2. Educate users on how to add a work or school account.
## Next steps
active-directory Recommendation Turn Off Per User Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-turn-off-per-user-mfa.md
This recommendation improves your user's productivity and minimizes the sign-in
1. Confirm that there's an existing CA policy with an MFA requirement. Ensure that you're covering all resources and users you would like to secure with MFA (a Microsoft Graph sketch for this check follows the list).
   - Review your [Conditional Access policies](https://portal.azure.com/?Microsoft_AAD_IAM_enableAadvisorFeaturePreview=true&Microsoft_AAD_IAM_enableAadvisorFeature=true#blade/Microsoft_AAD_IAM/PoliciesTemplateBlade).
-2. Require MFA using a Conditional Access policy.
+2. Require MFA using a Conditional Access policy.
   - [Secure user sign-in events with Azure AD Multi-Factor Authentication](../authentication/tutorial-enable-azure-mfa.md).
3. Ensure that the per-user MFA configuration is turned off.
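As a quick programmatic check for step 1, you can list your Conditional Access policies through Microsoft Graph and look for an `mfa` grant control. A minimal sketch, assuming a Graph access token with the `Policy.Read.All` permission:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

class CaPolicyMfaCheck
{
    static async Task Main()
    {
        // Assumes a Graph token with Policy.Read.All is supplied via an environment variable.
        var accessToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN");

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        var json = await client.GetStringAsync(
            "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies");

        using var doc = JsonDocument.Parse(json);
        foreach (var policy in doc.RootElement.GetProperty("value").EnumerateArray())
        {
            // A policy that enforces MFA lists "mfa" among its built-in grant controls.
            var grant = policy.GetProperty("grantControls");
            bool requiresMfa = grant.ValueKind == JsonValueKind.Object &&
                grant.TryGetProperty("builtInControls", out var controls) &&
                controls.ToString().Contains("mfa");
            Console.WriteLine($"{policy.GetProperty("displayName")}: requires MFA = {requiresMfa}");
        }
    }
}
```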
active-directory Reference Audit Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-audit-activities.md
If you're using Entitlement Management to streamline how you assign members of A
|Audit Category|Activity|
|---|---|
|EntitlementManagement|Add Entitlement Management role assignment|
-|EntitlementManagement|Administrator directly assigns user to access package|
+|EntitlementManagement|Administrator directly assigns user to access package|
|EntitlementManagement|Administrator directly removes user access package assignment|
|EntitlementManagement|Approval stage completed for access package assignment request|
|EntitlementManagement|Approve access package assignment request|
If you're using Entitlement Management to streamline how you assign members of A
|EntitlementManagement|Cancel access package assignment request|
|EntitlementManagement|Create access package|
|EntitlementManagement|Create access package assignment policy|
-|EntitlementManagement|Create access package assignment user update request|
+|EntitlementManagement|Create access package assignment user update request|
|EntitlementManagement|Create access package catalog|
-|EntitlementManagement|Create connected organization|
+|EntitlementManagement|Create connected organization|
|EntitlementManagement|Create custom extension|
|EntitlementManagement|Create incompatible access package|
|EntitlementManagement|Create incompatible group|
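These category and activity names also show up when you pull the audit log from Microsoft Graph. A hedged sketch that filters the directory audits to entitlement management activity; the exact `loggedByService` string is an assumption, so verify it against one of your own audit records first:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class EntitlementAudits
{
    static async Task Main()
    {
        // Assumes a Graph token with AuditLog.Read.All is supplied via an environment variable.
        var accessToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN");

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Pull the latest entitlement management audit events; page through @odata.nextLink for more.
        var json = await client.GetStringAsync(
            "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits" +
            "?$filter=loggedByService eq 'Entitlement Management'&$top=50");
        Console.WriteLine(json);
    }
}
```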
active-directory Acoustic Connect Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/acoustic-connect-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Acoustic Connect
+description: Learn how to configure single sign-on between Azure Active Directory and Acoustic Connect.
+ Last updated : 07/20/2023
+# Azure Active Directory SSO integration with Acoustic Connect
+
+In this article, you'll learn how to integrate Acoustic Connect with Azure Active Directory (Azure AD). Acoustic Connect is a platform that helps you create marketing campaigns that resonate with people, build a loyal following, and drive revenue. When you integrate Acoustic Connect with Azure AD, you can:
+
+* Control in Azure AD who has access to Acoustic Connect.
+* Enable your users to be automatically signed-in to Acoustic Connect with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Acoustic Connect in a test environment. Acoustic Connect supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Acoustic Connect, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Acoustic Connect single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Acoustic Connect application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Acoustic Connect from the Azure AD gallery
+
+Add Acoustic Connect from the Azure AD application gallery to configure single sign-on with Acoustic Connect. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Acoustic Connect** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://www.okta.com/saml2/service-provider/<Acoustic_ID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://login.goacoustic.com/sso/saml2/<ID>`
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://login.goacoustic.com/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Acoustic Connect support team](mailto:support@acoustic.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Acoustic Connect** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+## Configure Acoustic Connect SSO
+
+To configure single sign-on on **Acoustic Connect** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Acoustic Connect support team](mailto:support@acoustic.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create Acoustic Connect test user
+
+In this section, a user called B.Simon is created in Acoustic Connect. Acoustic Connect supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Acoustic Connect, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Acoustic Connect Sign-on URL where you can initiate the login flow.
+
+* Go to Acoustic Connect Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Acoustic Connect for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Acoustic Connect tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Acoustic Connect for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Acoustic Connect you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Cloudbees Ci Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudbees-ci-tutorial.md
+
+ Title: Azure Active Directory SSO integration with CloudBees CI
+description: Learn how to configure single sign-on between Azure Active Directory and CloudBees CI.
+ Last updated : 07/21/2023
+# Azure Active Directory SSO integration with CloudBees CI
+
+In this article, you'll learn how to integrate CloudBees CI with Azure Active Directory (Azure AD). Centralize management, ensure compliance, and automate at scale with CloudBees CI - the secure, scalable, and flexible CI solution based on Jenkins. When you integrate CloudBees CI with Azure AD, you can:
+
+* Control in Azure AD who has access to CloudBees CI.
+* Enable your users to be automatically signed-in to CloudBees CI with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for CloudBees CI in a test environment. CloudBees CI supports only **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with CloudBees CI, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* CloudBees CI single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the CloudBees CI application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add CloudBees CI from the Azure AD gallery
+
+Add CloudBees CI from the Azure AD application gallery to configure single sign-on with CloudBees CI. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **CloudBees CI** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `<Customer_EntityID>`
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://<CustomerDomain>/cjoc/securityRealm/finishLogin` |
+ | `https://<CustomerDomain>/<Environment>/securityRealm/finishLogin` |
+ | `https://cjoc.<CustomerDomain>/securityRealm/finishLogin` |
+ | `https://<Environment>.<CustomerDomain>/securityRealm/finishLogin` |
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type the URL using one of the following patterns:
+
+ | **Sign on URL** |
+ ||
+ | `https://<CustomerDomain>/cjoc` |
+ | `https://<CustomerDomain>/<Environment>` |
+ | `https://cjoc.<CustomerDomain>` |
+ | `https://<Environment>.<CustomerDomain>` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [CloudBees CI support team](mailto:support@cloudbees.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. CloudBees CI application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the CloudBees CI application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | username | user.userprincipalname |
+ | displayname | user.givenname |
+ | groups | user.groups |
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up CloudBees CI** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+## Configure CloudBees CI SSO
+
+To configure single sign-on on **CloudBees CI** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CloudBees CI support team](mailto:support@cloudbees.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create CloudBees CI test user
+
+In this section, you create a user called Britta Simon at CloudBees CI SSO. Work with [CloudBees CI support team](mailto:support@cloudbees.com) to add the users in the CloudBees CI SSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to CloudBees CI Sign-on URL where you can initiate the login flow.
+
+* Go to CloudBees CI Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the CloudBees CI tile in the My Apps, this will redirect to CloudBees CI Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure CloudBees CI you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Kanbanbox Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kanbanbox-tutorial.md
+
+ Title: Azure Active Directory SSO integration with KanbanBOX
+description: Learn how to configure single sign-on between Azure Active Directory and KanbanBOX.
+ Last updated : 07/17/2023
+# Azure Active Directory SSO integration with KanbanBOX
+
+In this article, you'll learn how to integrate KanbanBOX with Azure Active Directory (Azure AD). KanbanBOX digitizes kanban material flows along the supply chain. KanbanBOX supports internal production and logistic flows, as well as collaboration with external suppliers and customers. When you integrate KanbanBOX with Azure AD, you can:
+
+* Control in Azure AD who has access to KanbanBOX.
+* Enable your users to be automatically signed-in to KanbanBOX with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for KanbanBOX in a test environment. KanbanBOX supports both **SP** and **IDP** initiated single sign-on.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with KanbanBOX, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* KanbanBOX single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the KanbanBOX application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add KanbanBOX from the Azure AD gallery
+
+Add KanbanBOX from the Azure AD application gallery to configure single sign-on with KanbanBOX. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **KanbanBOX** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+ In the **Relay State** textbox, type the URL:
+ `https://app.kanbanbox.com/auth/idp_initiated_sso_login`
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://app.kanbanbox.com/auth/login`
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up KanbanBOX** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+## Configure KanbanBOX SSO
+
+To configure single sign-on on **KanbanBOX** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [KanbanBOX support team](mailto:help@kanbanbox.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create KanbanBOX test user
+
+In this section, you create a user called Britta Simon at KanbanBOX SSO. Work with [KanbanBOX support team](mailto:help@kanbanbox.com) to add the users in the KanbanBOX SSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to KanbanBOX Sign-on URL where you can initiate the login flow.
+
+* Go to KanbanBOX Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the KanbanBOX for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the KanbanBOX tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the KanbanBOX for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure KanbanBOX you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Whosoff Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whosoff-tutorial.md
Previously updated : 07/14/2023 Last updated : 07/31/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+1. On the **Basic SAML Configuration** section, the user doesn't have to perform any step as the app is already preintegrated with Azure.
1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
Complete the following steps to enable Azure AD single sign-on in the Azure port
`https://app.whosoff.com/int/<Integration_ID>/sso/azure/` > [!NOTE]
- > This value is not real. Update this value with the actual Sign on URL. Contact [WhosOff support team](mailto:support@whosoff.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > This value is not real. Update this value with the actual Sign on URL. You can collect the `Integration_ID` from your WhosOff account when activating Azure SSO, which is explained later in this tutorial. For any queries, contact [WhosOff support team](mailto:support@whosoff.com). You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure WhosOff SSO
-To configure single sign-on on **WhosOff** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [WhosOff support team](mailto:support@whosoff.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. Log in to your WhosOff company site as an administrator.
+
+1. Go to **ADMINISTRATION** on the left-hand menu and click **COMPANY SETTINGS** > **Single Sign On**.
+
+1. In the **Setup Single Sign On** section, perform the following steps:
+
+ ![Screenshot shows settings of metadata and configuration.](./media/whosoff-tutorial/metadata.png "Account")
+
+ 1. Select **Azure** SSO provider from the drop-down and click **Active SSO**.
+
+ 1. Once activated, copy the **Integration GUID** and save it on your computer.
+
+    1. Upload the **Federation Metadata XML** file that you downloaded from the Azure portal by clicking the **Choose File** option.
+
+ 1. Click **Save changes**.
### Create WhosOff test user
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
You are able to [search](how-to-issuer-revoke.md) for verifiable credentials wit
string claimvalue = "Bowen"; string contractid = "ZjViZjJmYzYtNzEzNS00ZDk0LWE2ZmUtYzI2ZTQ1NDNiYzVh dGVzdDM".Replace(" ", ""); string hashedsearchclaimvalue;
-
+ using (var sha256 = SHA256.Create()) {
- var input = contractid + claimvalue;
- byte[] inputasbytes = Encoding.UTF8.GetBytes(input);
- hashedsearchclaimvalue = Convert.ToBase64String(sha256.ComputeHash(inputasbytes));
+ var input = contractid + claimvalue;
+ byte[] inputasbytes = Encoding.UTF8.GetBytes(input);
+ hashedsearchclaimvalue = Convert.ToBase64String(sha256.ComputeHash(inputasbytes));
} ```
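For reference, here's the same computation as a self-contained program you can run as-is; the contract ID and claim value are the sample values from this article, so substitute your own:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class HashedSearchClaim
{
    static void Main()
    {
        // Sample values from this article; replace them with your own contract ID and claim value.
        string claimvalue = "Bowen";
        string contractid = "ZjViZjJmYzYtNzEzNS00ZDk0LWE2ZmUtYzI2ZTQ1NDNiYzVhdGVzdDM";

        using var sha256 = SHA256.Create();
        byte[] inputasbytes = Encoding.UTF8.GetBytes(contractid + claimvalue);
        string hashedsearchclaimvalue = Convert.ToBase64String(sha256.ComputeHash(inputasbytes));

        // Base64 output can contain '+', '/' and '='; URL-encode it before using it in a query string.
        Console.WriteLine(hashedsearchclaimvalue);
        Console.WriteLine(Uri.EscapeDataString(hashedsearchclaimvalue));
    }
}
```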
active-directory Using Wallet Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/using-wallet-library.md
In order to test the demo app, you need a webapp that issues credentials and mak
## Building the Android sample

On your developer machine with Android Studio, do the following:
-1. Download or clone the Android Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-android/archive/refs/heads/dev.zip).
+1. Download or clone the Android Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-android/archive/refs/heads/dev.zip).
You don't need the walletlibrary folder and you can delete it if you like.
-1. Start Android Studio and open the parent folder of walletlibrarydemo
+1. Start Android Studio and open the parent folder of walletlibrarydemo
![Screenshot of Android Studio.](media/using-wallet-library/androidstudio-screenshot.png)
-1. Select **Build** menu and then **Make Project**. This step takes some time.
-1. Connect your Android test device via USB cable to your laptop
-1. Select your test device in Android Studio and click **run** button (green triangle)
+1. Select **Build** menu and then **Make Project**. This step takes some time.
+1. Connect your Android test device via USB cable to your laptop
+1. Select your test device in Android Studio and click **run** button (green triangle)
## Issuing credentials using the Android sample
-1. Start the WalletLibraryDemo app
+1. Start the WalletLibraryDemo app
![Screenshot of Create Request on Android.](media/using-wallet-library/android-create-request.png)
-1. On your laptop, launch the public demo website [https://aka.ms/vcdemo](https://aka.ms/vcdemo) and do the following
+1. On your laptop, launch the public demo website [https://aka.ms/vcdemo](https://aka.ms/vcdemo) and do the following
    1. Enter your First Name and Last Name and press **Next**
    1. Select **Verify with True Identity**
    1. Click **Take a selfie** and **Upload government issued ID**. The demo uses simulated data and you don't need to provide a real selfie or an ID.
    1. Click **Next** and **OK**
-1. Scan the QR code with your QR Code Reader app on your test device, then copy the full URL displayed in the QR Code Reader app. Remember the pin code.
-1. Switch back to WalletLibraryDemo app and paste in the URL from the clipboard
-1. Press **CREATE REQUEST** button
-1. When the app has downloaded the request, it shows a screen like below. Click on the white rectangle, which is a textbox, and enter the pin code that is displayed in the browser page. Then click the **COMPLETE** button.
+1. Scan the QR code with your QR Code Reader app on your test device, then copy the full URL displayed in the QR Code Reader app. Remember the pin code.
+1. Switch back to WalletLibraryDemo app and paste in the URL from the clipboard
+1. Press **CREATE REQUEST** button
+1. When the app has downloaded the request, it shows a screen like below. Click on the white rectangle, which is a textbox, and enter the pin code that is displayed in the browser page. Then click the **COMPLETE** button.
![Screenshot of Enter Pin Code on Android.](media/using-wallet-library/android-enter-pincode.png)
-1. Once issuance completes, the demo app displays the claims in the credential
+1. Once issuance completes, the demo app displays the claims in the credential
![Screenshot of Issuance Complete on Android.](media/using-wallet-library/android-issuance-complete.png)

## Presenting credentials using the Android sample

The sample app holds the issued credential in memory, so after issuance, you can use it for presentation.
-1. The WalletLibraryDemo app should display some credential details on the home screen if you have successfully issued a credential.
+1. The WalletLibraryDemo app should display some credential details on the home screen if you have successfully issued a credential.
![Screenshot of app with credential on Android.](media/using-wallet-library/android-have-credential.png)
-1. In the Woodgrove demo in the browser, click **Return to Woodgrove** if you havenΓÇÖt done so already and continue with step 3 **Access personalized portal**.
-1. Scan the QR code with the QR Code Reader app on your test device, then copy the full URL to the clipboard.
-1. Switch back to the WalletLibraryDemo app and paste in the URL and click **CREATE REQUEST** button
-1. The app retrieves the presentation request and display the matching credentials you have in memory. In this case you only have one. **Click on it** so that the little check mark appears, then click the **COMPLETE** button to submit the presentation response
+1. In the Woodgrove demo in the browser, click **Return to Woodgrove** if you haven't done so already and continue with step 3 **Access personalized portal**.
+1. Scan the QR code with the QR Code Reader app on your test device, then copy the full URL to the clipboard.
+1. Switch back to the WalletLibraryDemo app and paste in the URL and click **CREATE REQUEST** button
+1. The app retrieves the presentation request and displays the matching credentials you have in memory. In this case you only have one. **Click on it** so that the little check mark appears, then click the **COMPLETE** button to submit the presentation response.
![Screenshot of presenting credential on Android.](media/using-wallet-library/android-present-credential.png)

## Building the iOS sample

On your Mac developer machine with Xcode, do the following:
-1. Download or clone the iOS Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-ios/archive/refs/heads/dev.zip).
-1. Start Xcode and open the top level folder for the WalletLibrary
-1. Set focus on WalletLibraryDemo project
+1. Download or clone the iOS Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-ios/archive/refs/heads/dev.zip).
+1. Start Xcode and open the top level folder for the WalletLibrary
+1. Set focus on WalletLibraryDemo project
![Screenshot of Xcode.](media/using-wallet-library/xcode-screenshot.png)
-1. Change the Team ID to your [Apple Developer Team ID](https://developer.apple.com/help/account/manage-your-team/locate-your-team-id).
-1. Select Product menu and then **Build**. This step takes some time.
-1. Connect your iOS test device via USB cable to your laptop
-1. Select your test device in Xcode
-1. Select Product menu and then **Run** or click on run triangle
+1. Change the Team ID to your [Apple Developer Team ID](https://developer.apple.com/help/account/manage-your-team/locate-your-team-id).
+1. Select Product menu and then **Build**. This step takes some time.
+1. Connect your iOS test device via USB cable to your laptop
+1. Select your test device in Xcode
+1. Select Product menu and then **Run** or click on run triangle
## Issuing credentials using the iOS sample
-1. Start the WalletLibraryDemo app
+1. Start the WalletLibraryDemo app
![Screenshot of Create Request on iOS.](media/using-wallet-library/ios-create-request.png)
-1. On your laptop, launch the public demo website [https://aka.ms/vcdemo](https://aka.ms/vcdemo) and do the following
+1. On your laptop, launch the public demo website [https://aka.ms/vcdemo](https://aka.ms/vcdemo) and do the following
    1. Enter your First Name and Last Name and press **Next**
    1. Select **Verify with True Identity**
    1. Click **Take a selfie** and **Upload government issued ID**. The demo uses simulated data and you don't need to provide a real selfie or an ID.
    1. Click **Next** and **OK**
-1. Scan the QR code with your QR Code Reader app on your test device, then copy the full URL displayed in the QR Code Reader app. Remember the pin code.
-1. Switch back to WalletLibraryDemo app and paste in the URL from the clipboard
-1. Press **Create Request** button
-1. When the app has downloaded the request, it shows a screen like below. Click on the **Add Pin** text to go to a screen where you can input the pin code, then click **Add** button to get back and finally click the **Complete** button.
+1. Scan the QR code with your QR Code Reader app on your test device, then copy the full URL displayed in the QR Code Reader app. Remember the pin code.
+1. Switch back to WalletLibraryDemo app and paste in the URL from the clipboard
+1. Press **Create Request** button
+1. When the app has downloaded the request, it shows a screen like below. Click on the **Add Pin** text to go to a screen where you can input the pin code, then click **Add** button to get back and finally click the **Complete** button.
![Screenshot of Enter Pin Code on iOS.](media/using-wallet-library/ios-enter-pincode.png)
-1. Once issuance completes, the demo app displays the claims in the credential.
+1. Once issuance completes, the demo app displays the claims in the credential.
![Screenshot of Issuance Complete on iOS.](media/using-wallet-library/ios-issuance-complete.png)

## Presenting credentials using the iOS sample

The sample app holds the issued credential in memory, so after issuance, you can use it for presentation.
-1. The WalletLibraryDemo app should display credential type name on the home screen if you have successfully issued a credential.
+1. The WalletLibraryDemo app should display credential type name on the home screen if you have successfully issued a credential.
![Screenshot of app with credential on iOS.](media/using-wallet-library/ios-have-credential.png)
-1. In the Woodgrove demo in the browser, click **Return to Woodgrove** if you havenΓÇÖt done so already and continue with step 3 **Access personalized portal**.
-1. Scan the QR code with the QR Code Reader app on your test device, then copy the full URL to the clipboard.
-1. Switch back to the WalletLibraryDemo app, ***clear the previous request*** from the textbox, paste in the URL and click **Create Request** button
-1. The app retrieves the presentation request and display the matching credentials you have in memory. In this case you only have one. **Click on it** so that the little check mark switches from blue to green, then click the **Complete** button to submit the presentation response
+1. In the Woodgrove demo in the browser, click **Return to Woodgrove** if you haven't done so already and continue with step 3 **Access personalized portal**.
+1. Scan the QR code with the QR Code Reader app on your test device, then copy the full URL to the clipboard.
+1. Switch back to the WalletLibraryDemo app, ***clear the previous request*** from the textbox, paste in the URL and click **Create Request** button
+1. The app retrieves the presentation request and displays the matching credentials you have in memory. In this case you only have one. **Click on it** so that the little check mark switches from blue to green, then click the **Complete** button to submit the presentation response.
![Screenshot of presenting credential on iOS.](media/using-wallet-library/ios-present-credential.png)
ai-services Bring Your Own Storage Speech Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource.md
If you perform all actions in the section, your Storage account will be in the f
- Access to all external network traffic is prohibited. - Access to Storage account using Storage account key is prohibited. - Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited. (Except for [User delegation SAS](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens))-- Access to the BYOS-enanled Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
+- Access to the BYOS-enabled Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
So in effect your Storage account becomes completely "locked" and can only be accessed by your Speech resource, which will be able to:
- Write artifacts of your Speech data processing (see details in the [corresponding articles](#next-steps)),
If you perform all actions in the section, your Storage account will be in the f
- External network traffic is allowed. - Access to Storage account using Storage account key is prohibited. - Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited. (Except for [User delegation SAS](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens))-- Access to the BYOS-enanled Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) and [User delegation SAS](../../storage/common/storage-sas-overview.md#user-delegation-sas).
+- Access to the BYOS-enabled Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) and [User delegation SAS](../../storage/common/storage-sas-overview.md#user-delegation-sas).
These are the most restricted security settings possible for Text to speech scenario. You may further customize them according to your needs.
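To illustrate the second configuration, here's a minimal sketch of generating a user delegation SAS with an Azure AD identity instead of the account key, assuming the `Azure.Identity` and `Azure.Storage.Blobs` packages; the account, container, and blob names are placeholders:

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

class UserDelegationSasSketch
{
    static void Main()
    {
        var accountName = "mystorageaccount"; // placeholder account name
        var serviceClient = new BlobServiceClient(
            new Uri($"https://{accountName}.blob.core.windows.net"),
            new DefaultAzureCredential()); // Azure AD identity; no account key involved

        // The delegation key is backed by Azure AD, so SAS tokens built from it
        // keep working when shared key authorization is disallowed.
        UserDelegationKey key = serviceClient.GetUserDelegationKey(
            DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddHours(1));

        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = "mycontainer", // placeholder
            BlobName = "myblob.txt",           // placeholder
            Resource = "b",                    // "b" scopes the SAS to a single blob
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read);

        string sasToken = sasBuilder.ToSasQueryParameters(key, accountName).ToString();
        Console.WriteLine($"https://{accountName}.blob.core.windows.net/mycontainer/myblob.txt?{sasToken}");
    }
}
```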
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
Title: Migrate your Azure Kubernetes Service (AKS) pod to use workload identity
description: In this Azure Kubernetes Service (AKS) article, you learn how to configure your Azure Kubernetes Service pod to authenticate with workload identity. Previously updated : 07/26/2023 Last updated : 07/31/2023 # Migrate from pod managed-identity to workload identity
If your cluster is already using the latest version of the Azure Identity SDK, p
If your cluster isn't using the latest version of the Azure Identity SDK, you have two options: -- You can use a migration sidecar that we provide within your Linux applications, which proxies the IMDS transactions your application makes over to [OpenID Connect][openid-connect-overview] (OIDC). The migration sidecar isn't intended to be a long-term solution, but a way to get up and running quickly on workload identity. Perform the following steps to:
+- You can use a migration sidecar that we provide within your Linux applications, which proxies the IMDS transactions your application makes over to [OpenID Connect][openid-connect-overview] (OIDC). The migration sidecar isn't intended to be a long-term solution, but a way to get up and running quickly on workload identity. Perform the following steps to:
- [Deploy the workload with migration sidecar](#deploy-the-workload-with-migration-sidecar) to proxy the application IMDS transactions. - Verify the authentication transactions are completing successfully.
If your cluster isn't using the latest version of the Azure Identity SDK, you ha
- Once the SDK's are updated to the supported version, you can remove the proxy sidecar and redeploy the application. > [!NOTE]
- > The migration sidecar is **not supported for production use**. This feature is meant to give you time to migrate your application SDK's to a supported version, and not meant or intended to be a long-term solution.
- > The migration sidecar is only for Linux containers as pod-managed identities was available on Linux node pools only.
+ > The migration sidecar is **not supported for production use**. This feature is meant to give you time to migrate your application SDKs to a supported version, and isn't meant or intended to be a long-term solution.
+ > The migration sidecar is only available for Linux containers, because pod-managed identities were only available on Linux node pools.
- Rewrite your application to support the latest version of the [Azure Identity][azure-identity-supported-versions] client library. Afterwards, perform the following steps:
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
This scenario shows you how to configure your Azure API Management instance to protect an API. We'll use the Azure AD B2C SPA (Auth Code + PKCE) flow to acquire a token, alongside API Management to secure an Azure Functions backend using EasyAuth.
-For a conceptual overview of API authorization, see [Authentication and authorization to APIs in API Management](authentication-authorization-overview.md).
+For a conceptual overview of API authorization, see [Authentication and authorization to APIs in API Management](authentication-authorization-overview.md).
## Aims
Here's an illustration of the components in use and the flow between them once t
Here's a quick overview of the steps:

1. Create the Azure AD B2C Calling (Frontend, API Management) and API Applications with scopes and grant API Access
-1. Create the sign-up and sign-in policies to allow users to sign in with Azure AD B2C
+1. Create the sign-up and sign-in policies to allow users to sign in with Azure AD B2C
1. Configure API Management with the new Azure AD B2C Client IDs and keys to Enable OAuth2 user authorization in the Developer Console
1. Build the Function API
1. Configure the Function API to enable EasyAuth with the new Azure AD B2C Client IDs and Keys and lock down to APIM VIP
Here's a quick overview of the steps:
1. Set up the **CORS** policy and add the **validate-jwt** policy to validate the OAuth token for every incoming request
1. Build the calling application to consume the API
1. Upload the JS SPA Sample
-1. Configure the Sample JS Client App with the new Azure AD B2C Client IDΓÇÖs and keys
+1. Configure the Sample JS Client App with the new Azure AD B2C Client IDs and keys
1. Test the Client Application

> [!TIP]
- > We're going to capture quite a few pieces of information and keys etc as we walk this document, you might find it handy to have a text editor open to store the following items of configuration temporarily.
+ > We're going to capture quite a few pieces of information and keys as we walk through this document, so you might find it handy to have a text editor open to store the following items of configuration temporarily.
>
- > B2C BACKEND CLIENT ID:
- > B2C BACKEND CLIENT SECRET KEY:
- > B2C BACKEND API SCOPE URI:
- > B2C FRONTEND CLIENT ID:
- > B2C USER FLOW ENDPOINT URI:
- > B2C WELL-KNOWN OPENID ENDPOINT:
- > B2C POLICY NAME: Frontendapp_signupandsignin
- > FUNCTION URL:
- > APIM API BASE URL:
- > STORAGE PRIMARY ENDPOINT URL:
+ > B2C BACKEND CLIENT ID:
+ > B2C BACKEND CLIENT SECRET KEY:
+ > B2C BACKEND API SCOPE URI:
+ > B2C FRONTEND CLIENT ID:
+ > B2C USER FLOW ENDPOINT URI:
+ > B2C WELL-KNOWN OPENID ENDPOINT:
+ > B2C POLICY NAME: Frontendapp_signupandsignin
+ > FUNCTION URL:
+ > APIM API BASE URL:
+ > STORAGE PRIMARY ENDPOINT URL:
## Configure the backend application
Open the Azure AD B2C blade in the portal and do the following steps.
> [!NOTE] > B2C Policies allow you to expose the Azure AD B2C login endpoints to be able to capture different data components and sign in users in different ways.
- >
- > In this case we configured a sign-up or sign in flow (policy). This also exposed a well-known configuration endpoint, in both cases our created policy was identified in the URL by the "p=" query string parameter.
+ >
+ > In this case we configured a sign-up or sign in flow (policy). This also exposed a well-known configuration endpoint, in both cases our created policy was identified in the URL by the "p=" query string parameter.
> > Once this is done, you now have a functional Business to Consumer identity platform that will sign users into multiple applications.
Open the Azure AD B2C blade in the portal and do the following steps.
1. Select Save.

```csharp
-
+ using System.Net; using Microsoft.AspNetCore.Mvc; using Microsoft.Extensions.Primitives;
-
+ public static async Task<IActionResult> Run(HttpRequest req, ILogger log) { log.LogInformation("C# HTTP trigger function processed a request.");
-
+ return (ActionResult)new OkObjectResult($"Hello World, time and date are {DateTime.Now.ToString()}"); }
-
+ ``` > [!TIP]
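Once the function is deployed and EasyAuth is enabled (next steps), you can exercise both the rejected and the authorized paths from any HTTP client. A minimal sketch; the function URL and token are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CallHelloFunction
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var functionUrl = "https://myfunctionapp.azurewebsites.net/api/hello"; // placeholder URL

        // Without a token, EasyAuth should reject the call with a 401.
        var anonymous = await client.GetAsync(functionUrl);
        Console.WriteLine($"No token: {(int)anonymous.StatusCode}");

        // With a valid Azure AD B2C access token, the greeting should come back.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "eyJ..."); // placeholder token
        var authorized = await client.GetAsync(functionUrl);
        Console.WriteLine($"Bearer: {(int)authorized.StatusCode} {await authorized.Content.ReadAsStringAsync()}");
    }
}
```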
Open the Azure AD B2C blade in the portal and do the following steps.
1. Click 'Save' (at the top left of the blade). > [!IMPORTANT]
- > Now your Function API is deployed and should throw 401 responses if the correct JWT isn't supplied as an Authorization: Bearer header, and should return data when a valid request is presented.
- > You added additional defense-in-depth security in EasyAuth by configuring the 'Login With Azure AD' option to handle unauthenticated requests.
+ > Now your Function API is deployed and should throw 401 responses if the correct JWT isn't supplied as an Authorization: Bearer header, and should return data when a valid request is presented.
+ > You added additional defense-in-depth security in EasyAuth by configuring the 'Login With Azure AD' option to handle unauthenticated requests.
+ >
+ > We still have no IP security applied, if you have a valid key and OAuth2 token, anyone can call this from anywhere - ideally we want to force all requests to come via API Management.
>
- > We still have no IP security applied, if you have a valid key and OAuth2 token, anyone can call this from anywhere - ideally we want to force all requests to come via API Management.
- >
> If you're using APIM Consumption tier then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management Standard SKU and above [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the Azure API Management Consumption tier, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for the Consumption tier - steps 12-17 below do not apply. 1. Close the 'Authentication' blade from the App Service / Functions portal.
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
1. Click Browse, choose the function app you're hosting the API inside, and click select. Next, click select again. 1. Give the API a name and description for API Management's internal use and add it to the ΓÇÿunlimitedΓÇÖ Product. 1. Copy and record the API's 'base URL' and click 'create'.
-1. Click the 'settings' tab, then under subscription - switch off the 'Subscription Required' checkbox as we'll use the Oauth JWT token in this case to rate limit. Note that if you're using the consumption tier, this would still be required in a production environment.
+1. Click the 'settings' tab, then under subscription - switch off the 'Subscription Required' checkbox as we'll use the Oauth JWT token in this case to rate limit. Note that if you're using the consumption tier, this would still be required in a production environment.
> [!TIP]
- > If using the consumption tier of APIM the unlimited product won't be available as an out of the box. Instead, navigate to "Products" under "APIs" and hit "Add".
+ > If using the consumption tier of APIM the unlimited product won't be available as an out of the box. Instead, navigate to "Products" under "APIs" and hit "Add".
> Type "Unlimited" as the product name and description and select the API you just added from the "+" APIs callout at the bottom left of the screen. Select the "published" checkbox. Leave the rest as default. Finally, hit the "create" button. This created the "unlimited" product and assigned it to your API. You can customize your new product later. ## Configure and capture the correct storage endpoint settings
-1. Open the storage accounts blade in the Azure portal
+1. Open the storage accounts blade in the Azure portal
1. Select the account you created and select the 'Static Website' blade from the Settings section (if you don't see a 'Static Website' option, check you created a V2 account). 1. Set the static web hosting feature to 'enabled', and set the index document name to 'https://docsupdatetracker.net/index.html', then click 'save'. 1. Note down the contents of the 'Primary Endpoint' for later, as this location is where the frontend site will be hosted.
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
> [!NOTE] > Now Azure API management is able to respond to cross origin requests from your JavaScript SPA apps, and it will perform throttling, rate-limiting and pre-validation of the JWT auth token being passed BEFORE forwarding the request on to the Function API.
- >
+ >
> Congratulations, you now have Azure AD B2C, API Management and Azure Functions working together to publish, secure AND consume an API! > [!TIP]
- > If you're using the API Management consumption tier then instead of rate limiting by the JWT subject or incoming IP Address (Limit call rate by key policy isn't supported today for the "Consumption" tier), you can Limit by call rate quota see [here](rate-limit-policy.md).
+ > If you're using the API Management consumption tier then instead of rate limiting by the JWT subject or incoming IP Address (Limit call rate by key policy isn't supported today for the "Consumption" tier), you can Limit by call rate quota see [here](rate-limit-policy.md).
> As this example is a JavaScript Single Page Application, we use the API Management Key only for rate-limiting and billing calls. The actual Authorization and Authentication is handled by Azure AD B2C, and is encapsulated in the JWT, which gets validated twice, once by API Management, and then by the backend Azure Function. ## Upload the JavaScript SPA sample to static storage
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
<meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1">
- <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-BmbxuPwQa2lc/FVzBcNJ7UAyJxM6wuqIj61tLrc4wSX0szH/Ev+nYRRuWlolflfl" crossorigin="anonymous">
- <script type="text/javascript" src="https://alcdn.msauth.net/browser/2.11.1/js/msal-browser.min.js"></script>
+ <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-BmbxuPwQa2lc/FVzBcNJ7UAyJxM6wuqIj61tLrc4wSX0szH/Ev+nYRRuWlolflfl" crossorigin="anonymous">
+ <script type="text/javascript" src="https://alcdn.msauth.net/browser/2.11.1/js/msal-browser.min.js"></script>
</head> <body> <div class="container-fluid"> <div class="row"> <div class="col-md-12">
- <nav class="navbar navbar-expand-lg navbar-dark bg-dark">
- <div class="container-fluid">
- <a class="navbar-brand" href="#">Azure Active Directory B2C with Azure API Management</a>
- <div class="navbar-nav">
- <button class="btn btn-success" id="signinbtn" onClick="login()">Sign In</a>
- </div>
- </div>
- </nav>
+ <nav class="navbar navbar-expand-lg navbar-dark bg-dark">
+ <div class="container-fluid">
+ <a class="navbar-brand" href="#">Azure Active Directory B2C with Azure API Management</a>
+ <div class="navbar-nav">
+ <button class="btn btn-success" id="signinbtn" onClick="login()">Sign In</a>
+            <button class="btn btn-success" id="signinbtn" onClick="login()">Sign In</button>
+ </div>
+ </nav>
</div> </div> <div class="row"> <div class="col-md-12"> <div class="card" >
- <div id="cardheader" class="card-header">
- <div class="card-text"id="message">Please sign in to continue</div>
- </div>
- <div class="card-body">
- <button class="btn btn-warning" id="callapibtn" onClick="getAPIData()">Call API</a>
- <div id="progress" class="spinner-border" role="status">
- <span class="visually-hidden">Loading...</span>
- </div>
- </div>
+ <div id="cardheader" class="card-header">
+ <div class="card-text"id="message">Please sign in to continue</div>
+        <div class="card-text" id="message">Please sign in to continue</div>
+ <div class="card-body">
+ <button class="btn btn-warning" id="callapibtn" onClick="getAPIData()">Call API</a>
+        <button class="btn btn-warning" id="callapibtn" onClick="getAPIData()">Call API</button>
+ <span class="visually-hidden">Loading...</span>
+ </div>
+ </div>
</div> </div> </div> </div> <script lang="javascript">
- // Just change the values in this config object ONLY.
- var config = {
- msal: {
- auth: {
- clientId: "{CLIENTID}", // This is the client ID of your FRONTEND application that you registered with the SPA type in Azure Active Directory B2C
- authority: "{YOURAUTHORITYB2C}", // Formatted as https://{b2ctenantname}.b2clogin.com/tfp/{b2ctenantguid or full tenant name including onmicrosoft.com}/{signuporinpolicyname}
- redirectUri: "{StoragePrimaryEndpoint}", // The storage hosting address of the SPA, a web-enabled v2 storage account - recorded earlier as the Primary Endpoint.
- knownAuthorities: ["{B2CTENANTDOMAIN}"] // {b2ctenantname}.b2clogin.com
- },
- cache: {
- cacheLocation: "sessionStorage",
- storeAuthStateInCookie: false
- }
- },
- api: {
- scopes: ["{BACKENDAPISCOPE}"], // The scope that we request for the API from B2C, this should be the backend API scope, with the full URI.
- backend: "{APIBASEURL}/hello" // The location that we'll call for the backend api, this should be hosted in API Management, suffixed with the name of the API operation (in the sample this is '/hello').
- }
- }
- document.getElementById("callapibtn").hidden = true;
- document.getElementById("progress").hidden = true;
- const myMSALObj = new msal.PublicClientApplication(config.msal);
- myMSALObj.handleRedirectPromise().then((tokenResponse) => {
- if(tokenResponse !== null){
- console.log(tokenResponse.account);
- document.getElementById("message").innerHTML = "Welcome, " + tokenResponse.account.name;
- document.getElementById("signinbtn").hidden = true;
- document.getElementById("callapibtn").hidden = false;
- }}).catch((error) => {console.log("Error Signing in:" + error);
- });
- function login() {
- try {
- myMSALObj.loginRedirect({scopes: config.api.scopes});
- } catch (err) {console.log(err);}
- }
- function getAPIData() {
- document.getElementById("progress").hidden = false;
- document.getElementById("message").innerHTML = "Calling backend ... "
- document.getElementById("cardheader").classList.remove('bg-success','bg-warning','bg-danger');
- myMSALObj.acquireTokenSilent({scopes: config.api.scopes, account: getAccount()}).then(tokenResponse => {
- const headers = new Headers();
- headers.append("Authorization", `Bearer ${tokenResponse.accessToken}`);
- fetch(config.api.backend, {method: "GET", headers: headers})
- .then(async (response) => {
- if (!response.ok)
- {
- document.getElementById("message").innerHTML = "Error: " + response.status + " " + JSON.parse(await response.text()).message;
- document.getElementById("cardheader").classList.add('bg-warning');
- }
- else
- {
- document.getElementById("cardheader").classList.add('bg-success');
- document.getElementById("message").innerHTML = await response.text();
- }
- }).catch(async (error) => {
- document.getElementById("cardheader").classList.add('bg-danger');
- document.getElementById("message").innerHTML = "Error: " + error;
- });
- }).catch(error => {console.log("Error Acquiring Token Silently: " + error);
- return myMSALObj.acquireTokenRedirect({scopes: config.api.scopes, forceRefresh: false})
- });
- document.getElementById("progress").hidden = true;
+ // Just change the values in this config object ONLY.
+ var config = {
+ msal: {
+ auth: {
+ clientId: "{CLIENTID}", // This is the client ID of your FRONTEND application that you registered with the SPA type in Azure Active Directory B2C
+ authority: "{YOURAUTHORITYB2C}", // Formatted as https://{b2ctenantname}.b2clogin.com/tfp/{b2ctenantguid or full tenant name including onmicrosoft.com}/{signuporinpolicyname}
+ redirectUri: "{StoragePrimaryEndpoint}", // The storage hosting address of the SPA, a web-enabled v2 storage account - recorded earlier as the Primary Endpoint.
+ knownAuthorities: ["{B2CTENANTDOMAIN}"] // {b2ctenantname}.b2clogin.com
+ },
+ cache: {
+ cacheLocation: "sessionStorage",
+ storeAuthStateInCookie: false
+ }
+ },
+ api: {
+ scopes: ["{BACKENDAPISCOPE}"], // The scope that we request for the API from B2C, this should be the backend API scope, with the full URI.
+ backend: "{APIBASEURL}/hello" // The location that we'll call for the backend api, this should be hosted in API Management, suffixed with the name of the API operation (in the sample this is '/hello').
+ }
+ }
+ document.getElementById("callapibtn").hidden = true;
+ document.getElementById("progress").hidden = true;
+ const myMSALObj = new msal.PublicClientApplication(config.msal);
+ myMSALObj.handleRedirectPromise().then((tokenResponse) => {
+ if(tokenResponse !== null){
+ console.log(tokenResponse.account);
+ document.getElementById("message").innerHTML = "Welcome, " + tokenResponse.account.name;
+ document.getElementById("signinbtn").hidden = true;
+ document.getElementById("callapibtn").hidden = false;
+ }}).catch((error) => {console.log("Error Signing in:" + error);
+ });
+ function login() {
+ try {
+ myMSALObj.loginRedirect({scopes: config.api.scopes});
+ } catch (err) {console.log(err);}
+ }
+ function getAPIData() {
+ document.getElementById("progress").hidden = false;
+ document.getElementById("message").innerHTML = "Calling backend ... "
+ document.getElementById("cardheader").classList.remove('bg-success','bg-warning','bg-danger');
+ myMSALObj.acquireTokenSilent({scopes: config.api.scopes, account: getAccount()}).then(tokenResponse => {
+ const headers = new Headers();
+ headers.append("Authorization", `Bearer ${tokenResponse.accessToken}`);
+ fetch(config.api.backend, {method: "GET", headers: headers})
+ .then(async (response) => {
+ if (!response.ok)
+ {
+ document.getElementById("message").innerHTML = "Error: " + response.status + " " + JSON.parse(await response.text()).message;
+ document.getElementById("cardheader").classList.add('bg-warning');
+ }
+ else
+ {
+ document.getElementById("cardheader").classList.add('bg-success');
+ document.getElementById("message").innerHTML = await response.text();
+ }
+ }).catch(async (error) => {
+ document.getElementById("cardheader").classList.add('bg-danger');
+ document.getElementById("message").innerHTML = "Error: " + error;
+ });
+ }).catch(error => {console.log("Error Acquiring Token Silently: " + error);
+ return myMSALObj.acquireTokenRedirect({scopes: config.api.scopes, forceRefresh: false})
+ });
+ document.getElementById("progress").hidden = true;
} function getAccount() { var accounts = myMSALObj.getAllAccounts();
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
1. Browse to the Static Website Primary Endpoint you stored earlier in the last section. > [!NOTE]
- > Congratulations, you just deployed a JavaScript Single Page App to Azure Storage Static content hosting.
+ > Congratulations, you just deployed a JavaScript Single Page App to Azure Storage Static content hosting.
> Since we haven't configured the JS app with your Azure AD B2C details yet, the page won't work if you open it. ## Configure the JavaScript SPA for Azure AD B2C 1. Now we know where everything is: we can configure the SPA with the appropriate API Management API address and the correct Azure AD B2C application / client IDs.
-1. Go back to the Azure portal storage blade
-1. Select 'Containers' (under 'Settings')
+1. Go back to the Azure portal storage blade
+1. Select 'Containers' (under 'Settings')
1. Select the '$web' container from the list
-1. Select https://docsupdatetracker.net/index.html blob from the list
-1. Click 'Edit'
+1. Select https://docsupdatetracker.net/index.html blob from the list
+1. Click 'Edit'
1. Update the auth values in the msal config section to match the *front-end* application you registered in B2C earlier. Use the code comments for hints on how the config values should look. The *authority* value needs to be in the format https://{b2ctenantname}.b2clogin.com/tfp/{b2ctenantname}.onmicrosoft.com/{signupandsigninpolicyname}. If you used our sample names and your B2C tenant is called 'contoso', you would expect the authority to be 'https://contoso.b2clogin.com/tfp/contoso.onmicrosoft.com/Frontendapp_signupandsignin'. 1. Set the api values to match your backend address (the API Base URL you recorded earlier, and the 'b2cScopes' values recorded earlier for the *backend application*).
The *authority* value needs to be in the format:- https://{b2ctenantname}.b2clog
1. Add a new URI for the primary (storage) endpoint (minus the trailing forward slash). > [!NOTE]
- > This configuration will result in a client of the frontend application receiving an access token with appropriate claims from Azure AD B2C.
- > The SPA will be able to add this as a bearer token in the https header in the call to the backend API.
- >
- > API Management will pre-validate the token, rate-limit calls to the endpoint by both the subject of the JWT issued by Azure ID (the user) and by IP address of the caller (depending on the service tier of API Management, see the note above), before passing through the request to the receiving Azure Function API, adding the functions security key.
+ > This configuration will result in a client of the frontend application receiving an access token with appropriate claims from Azure AD B2C.
+   > The SPA will be able to add this as a bearer token in the HTTP Authorization header in the call to the backend API.
+ >
+   > API Management will pre-validate the token, rate-limit calls to the endpoint by both the subject of the JWT issued by Azure AD B2C (the user) and by the IP address of the caller (depending on the service tier of API Management; see the note above), before passing the request through to the receiving Azure Function API, adding the function's security key.
> The SPA will render the response in the browser. > > *Congratulations, you've configured Azure AD B2C, Azure API Management, Azure Functions, and Azure App Service Authorization to work in perfect harmony!*
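One way to sanity-check this chain by hand is to call the API Management operation with and without a bearer token. A minimal sketch, assuming `{APIBASEURL}/hello` is your API Management operation URL and `$ACCESS_TOKEN` holds a token the SPA acquired (for example, copied from the browser's network trace); both are placeholders, not values this article defines:

```bash
# Without a token, API Management's JWT validation should reject the call (HTTP 401).
curl -i "{APIBASEURL}/hello"

# With a valid Azure AD B2C access token, the request should reach the backend function (HTTP 200).
curl -i "{APIBASEURL}/hello" -H "Authorization: Bearer $ACCESS_TOKEN"
```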
api-management Publish Event Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-event-policy.md
The `publish-event` policy publishes an event to one or more subscriptions speci
<http-response> [...] <publish-event>
- <targets>
- <graphql-subscription id="subscription field" />
- </targets>
- </publish-event>
+ <targets>
+ <graphql-subscription id="subscription field" />
+ </targets>
+ </publish-event>
</http-response> </http-data-source> ```
The `publish-event` policy publishes an event to one or more subscriptions speci
### Usage notes
-* This policy is invoked only when a related GraphQL query or mutation is executed.
+* This policy is invoked only when a related GraphQL query or mutation is executed.
## Example
type Subscription {
```xml <http-data-source>
- <http-request>
- <set-method>POST</set-method>
- <set-url>https://contoso.com/api/user</set-url>
- <set-body template="liquid">{ "id" : {{body.arguments.id}}, "name" : "{{body.arguments.name}}"}</set-body>
- </http-request>
- <http-response>
- <publish-event>
- <targets>
- <graphql-subscription id="onUserCreated" />
- </targets>
- </publish-event>
- </http-response>
+ <http-request>
+ <set-method>POST</set-method>
+ <set-url>https://contoso.com/api/user</set-url>
+ <set-body template="liquid">{ "id" : {{body.arguments.id}}, "name" : "{{body.arguments.name}}"}</set-body>
+ </http-request>
+ <http-response>
+ <publish-event>
+ <targets>
+ <graphql-subscription id="onUserCreated" />
+ </targets>
+ </publish-event>
+ </http-response>
</http-data-source> ```
application-gateway How To Path Header Query String Routing Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-path-header-query-string-routing-gateway-api.md
+
+ Title: Path, header, and query string routing with Application Gateway for Containers - Gateway API (preview)
+description: Learn how to configure Application Gateway for Containers with support for path, header, and query string routing.
+++++ Last updated : 07/30/2023+++
+# Path, header, and query string routing with Application Gateway for Containers - Gateway API (preview)
+
+This document helps you set up an example application that uses the resources from Gateway API to demonstrate traffic routing based on URL path, query string, and header. Review the following Gateway API resources for more information:
+- [Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) - create a gateway with one HTTP listener.
+- [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha2/api-types/httproute/) - create an HTTP route that references a backend service.
+- [HTTPRouteMatch](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.HTTPRouteMatch) - Use `matches` to route based on path, header, and query string.
+
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
+2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy a sample HTTP application.
+   Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query string, and header-based routing.
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+ - a namespace called `test-infra`
+ - 2 services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - 2 deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
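+
+   Once applied, you can confirm these objects exist (an optional check; assumes your kubectl context targets the AKS cluster):
+   ```bash
+   # List the sample deployments and services in the test-infra namespace
+   kubectl get deployments,services -n test-infra
+   ```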
+
+## Deploy the required Gateway API resources
+
+# [ALB managed deployment](#tab/alb-managed)
+
+Create a gateway:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-namespace: alb-test-infra
+ alb.networking.azure.io/alb-name: alb-test
+spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: http-listener
+ port: 80
+ protocol: HTTP
+ allowedRoutes:
+ namespaces:
+ from: Same
+EOF
+```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+1. Set the following environment variables
+
+```bash
+RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+RESOURCE_NAME='alb-test'
+
+RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+FRONTEND_NAME='frontend'
+```
+
+2. Create a Gateway
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: http-listener
+ port: 80
+ protocol: HTTP
+ allowedRoutes:
+ namespaces:
+ from: Same
+ addresses:
+ - type: alb.networking.azure.io/alb-frontend
+ value: $FRONTEND_NAME
+EOF
+```
+++
+Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+```bash
+kubectl get gateway gateway-01 -n test-infra -o yaml
+```
+
+Example output of successful gateway creation.
+```yaml
+status:
+ addresses:
+ - type: IPAddress
+ value: xxxx.yyyy.alb.azure.com
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Valid Gateway
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ listeners:
+ - attachedRoutes: 0
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Listener is accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+    name: http-listener
+ supportedKinds:
+ - group: gateway.networking.k8s.io
+ kind: HTTPRoute
+```
+
+Once the gateway has been created, create an HTTPRoute to define two different matches and a default service to route traffic to.
+
+The rules read as follows:
+1) If the path is **/bar**, traffic is routed to the backend-v2 service on port 8080, OR
+2) If the request contains an HTTP header named **magic** with the value **foo**, AND the query string contains a parameter named **great** with the value **example**, AND the path is **/some/thing**, AND the method is **GET**, the request is sent to the backend-v2 service on port 8080.
+3) Otherwise, all other requests are routed to the backend-v1 service on port 8080.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+ name: http-route
+ namespace: test-infra
+spec:
+ parentRefs:
+ - name: gateway-01
+ namespace: test-infra
+ rules:
+ - matches:
+ - path:
+ type: PathPrefix
+ value: /bar
+ backendRefs:
+ - name: backend-v2
+ port: 8080
+ - matches:
+ - headers:
+ - type: Exact
+ name: magic
+ value: foo
+ queryParams:
+ - type: Exact
+ name: great
+ value: example
+ path:
+ type: PathPrefix
+ value: /some/thing
+ method: GET
+ backendRefs:
+ - name: backend-v2
+ port: 8080
+ - backendRefs:
+ - name: backend-v1
+ port: 8080
+EOF
+```
+
+Once the HTTPRoute resource has been created, ensure the route has been _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+```bash
+kubectl get httproute http-route -n test-infra -o yaml
+```
+
+Verify the status of the Application Gateway for Containers resource has been successfully updated.
+
+```yaml
+status:
+ parents:
+ - conditions:
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: Route is Accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ controllerName: alb.networking.azure.io/alb-controller
+ parentRef:
+ group: gateway.networking.k8s.io
+ kind: Gateway
+ name: gateway-01
+ namespace: test-infra
+ ```
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN.
+
+```bash
+fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}')
+```
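+
+A quick way to confirm the variable was populated (optional; assumes a Bash shell):
+```bash
+# Prints the frontend FQDN captured from the gateway status
+echo "$fqdn"
+```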
+
+By using the curl command, we can validate three different scenarios:
+
+### Path based routing
+In this scenario, the client request sent to http://frontend-fqdn/bar is routed to backend-v2 service.
+
+Run the following command:
+```bash
+curl http://$fqdn/bar
+```
+
+Notice the container serving the request is backend-v2.
+
+### Query string + header + path routing
+In this scenario, the client request sent to http://frontend-fqdn/some/thing?great=example with a header key/value pair of "magic: foo" is routed to the backend-v2 service.
+
+Run the following command:
+```bash
+curl "http://$fqdn/some/thing?great=example" -H "magic: foo"
+```
+
+Notice the container serving the request is backend-v2.
+
+### Default route
+If neither of the first two scenarios is satisfied, Application Gateway for Containers routes all other requests to the backend-v1 service.
+
+Run the following command:
+```bash
+curl http://$fqdn/
+```
+
+Notice the container serving the request is backend-v1.
+
+Congratulations, you have installed ALB Controller, deployed a backend application and routed traffic to the application via Gateway API on Application Gateway for Containers.
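+
+When you're finished experimenting, you can remove the sample resources (a minimal cleanup sketch; deleting the namespace removes the Gateway, HTTPRoute, services, and deployments created above, so only run it if nothing else lives there):
+```bash
+kubectl delete namespace test-infra
+```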
application-gateway How To Traffic Splitting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-traffic-splitting-gateway-api.md
Previously updated : 07/24/2023 Last updated : 07/31/2023
EOF
Once the HTTPRoute resource has been created, ensure the route has been _Accepted_ and the Application Gateway for Containers resource has been _Programmed_. ```bash
-kubectl get httproute https-route -n test-infra -o yaml
+kubectl get httproute traffic-split-route -n test-infra -o yaml
``` Verify the status of the Application Gateway for Containers resource has been successfully updated.
application-gateway Quickstart Deploy Application Gateway For Containers Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md
You need to complete the following tasks prior to deploying Application Gateway
1. Prepare your Azure subscription and your `az-cli` client.
- ```azurecli-interactive
- # Sign in to your Azure subscription.
- SUBSCRIPTION_ID='<your subscription id>'
- az login
- az account set --subscription $SUBSCRIPTION_ID
-
- # Register required resource providers on Azure.
- az provider register --namespace Microsoft.ContainerService
- az provider register --namespace Microsoft.Network
- az provider register --namespace Microsoft.NetworkFunction
- az provider register --namespace Microsoft.ServiceNetworking
-
- # Install Azure CLI extensions.
- az extension add --name alb
- ```
+ ```azurecli-interactive
+ # Sign in to your Azure subscription.
+ SUBSCRIPTION_ID='<your subscription id>'
+ az login
+ az account set --subscription $SUBSCRIPTION_ID
+
+ # Register required resource providers on Azure.
+ az provider register --namespace Microsoft.ContainerService
+ az provider register --namespace Microsoft.Network
+ az provider register --namespace Microsoft.NetworkFunction
+ az provider register --namespace Microsoft.ServiceNetworking
+
+ # Install Azure CLI extensions.
+ az extension add --name alb
+ ```
2. Set up an AKS cluster for your workload.
- > [!NOTE]
- > The AKS cluster needs to be in a [region where Application Gateway for Containers is available](overview.md#supported-regions)
- > AKS cluster should use [Azure CNI](../../aks/configure-azure-cni.md).
+ > [!NOTE]
+ > The AKS cluster needs to be in a [region where Application Gateway for Containers is available](overview.md#supported-regions)
+ > AKS cluster should use [Azure CNI](../../aks/configure-azure-cni.md).
> AKS cluster should have the workload identity feature enabled. [Learn how](../../aks/workload-identity-deploy-cluster.md#update-an-existing-aks-cluster) to enable and use an existing AKS cluster section.
- If using an existing cluster, ensure you enable Workload Identity support on your AKS cluster. Workload identities can be enabled via the following:
-
- ```azurecli-interactive
- AKS_NAME='<your cluster name>'
- RESOURCE_GROUP='<your resource group name>'
- az aks update -g $RESOURCE_GROUP -n $AKS_NAME --enable-oidc-issuer --enable-workload-identity --no-wait
- ```
+ If using an existing cluster, ensure you enable Workload Identity support on your AKS cluster. Workload identities can be enabled via the following:
+
+ ```azurecli-interactive
+ AKS_NAME='<your cluster name>'
+ RESOURCE_GROUP='<your resource group name>'
+ az aks update -g $RESOURCE_GROUP -n $AKS_NAME --enable-oidc-issuer --enable-workload-identity --no-wait
+ ```
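+
+   Because the update runs with `--no-wait`, you may want to confirm both features once provisioning completes (a minimal check using the same variables; the field names assume a recent AKS API version):
+   ```azurecli-interactive
+   az aks show -g $RESOURCE_GROUP -n $AKS_NAME --query "{oidcIssuerEnabled:oidcIssuerProfile.enabled, workloadIdentityEnabled:securityProfile.workloadIdentity.enabled}" -o table
+   ```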
- If you don't have an existing cluster, use the following commands to create a new AKS cluster with Azure CNI and workload identity enabled.
+ If you don't have an existing cluster, use the following commands to create a new AKS cluster with Azure CNI and workload identity enabled.
- ```azurecli-interactive
- AKS_NAME='<your cluster name>'
- RESOURCE_GROUP='<your resource group name>'
- LOCATION='northeurope' # The list of available regions may grow as we roll out to more preview regions
- VM_SIZE='<the size of the vm in AKS>' # The size needs to be available in your location
-
- az group create --name $RESOURCE_GROUP --location $LOCATION
- az aks create \
- --resource-group $RESOURCE_GROUP \
- --name $AKS_NAME \
- --location $LOCATION \
- --node-vm-size $VM_SIZE \
- --network-plugin azure \
- --enable-oidc-issuer \
- --enable-workload-identity \
- --generate-ssh-key
- ```
+ ```azurecli-interactive
+ AKS_NAME='<your cluster name>'
+ RESOURCE_GROUP='<your resource group name>'
+ LOCATION='northeurope' # The list of available regions may grow as we roll out to more preview regions
+ VM_SIZE='<the size of the vm in AKS>' # The size needs to be available in your location
+
+ az group create --name $RESOURCE_GROUP --location $LOCATION
+ az aks create \
+ --resource-group $RESOURCE_GROUP \
+ --name $AKS_NAME \
+ --location $LOCATION \
+ --node-vm-size $VM_SIZE \
+ --network-plugin azure \
+ --enable-oidc-issuer \
+ --enable-workload-identity \
+ --generate-ssh-key
+ ```
3. Install Helm
- [Helm](https://github.com/helm/helm) is an open-source packaging tool that is used to install ALB controller.
+ [Helm](https://github.com/helm/helm) is an open-source packaging tool that is used to install ALB controller.
- > [!NOTE]
- > Helm is already available in Azure Cloud Shell. If you are using Azure Cloud Shell, no additional Helm installation is necessary.
+ > [!NOTE]
+ > Helm is already available in Azure Cloud Shell. If you are using Azure Cloud Shell, no additional Helm installation is necessary.
- You can also use the following steps to install Helm on a local device running Windows or Linux. Ensure that you have the latest version of helm installed.
+ You can also use the following steps to install Helm on a local device running Windows or Linux. Ensure that you have the latest version of helm installed.
- # [Windows](#tab/install-helm-windows)
- See the [instructions for installation](https://github.com/helm/helm#install) for various options of installation. Similarly, if your version of Windows has [Windows Package Manager winget](/windows/package-manager/winget/) installed, you may execute the following command:
- ```powershell
- winget install helm.helm
- ```
+ # [Windows](#tab/install-helm-windows)
+   See the [installation instructions](https://github.com/helm/helm#install) for various installation options. If your version of Windows has [Windows Package Manager winget](/windows/package-manager/winget/) installed, you may execute the following command:
- # [Linux](#tab/install-helm-linux)
- The following command can be used to install Helm. Commands that use Helm with Azure CLI in this article can also be run using Bash.
- ```bash
- curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
- ```
+ ```powershell
+ winget install helm.helm
+ ```
+
+ # [Linux](#tab/install-helm-linux)
+ The following command can be used to install Helm. Commands that use Helm with Azure CLI in this article can also be run using Bash.
+ ```bash
+ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
+ ```
## Install the ALB Controller
You need to complete the following tasks prior to deploying Application Gateway
1. Create a user managed identity for ALB controller and federate the identity as Pod Identity to use in the AKS cluster. ```azurecli-interactive
- RESOURCE_GROUP='<your resource group name>'
- AKS_NAME='<your aks cluster name>'
- IDENTITY_RESOURCE_NAME='azure-alb-identity'
-
- mcResourceGroup=$(az aks show --resource-group $RESOURCE_GROUP --name $AKS_NAME --query "nodeResourceGroup" -o tsv)
- mcResourceGroupId=$(az group show --name $mcResourceGroup --query id -otsv)
-
- echo "Creating identity $IDENTITY_RESOURCE_NAME in resource group $RESOURCE_GROUP"
- az identity create --resource-group $RESOURCE_GROUP --name $IDENTITY_RESOURCE_NAME
- principalId="$(az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query principalId -otsv)"
-
- echo "Waiting 60 seconds to allow for replication of the identity..."
- sleep 60
+ RESOURCE_GROUP='<your resource group name>'
+ AKS_NAME='<your aks cluster name>'
+ IDENTITY_RESOURCE_NAME='azure-alb-identity'
+
+ mcResourceGroup=$(az aks show --resource-group $RESOURCE_GROUP --name $AKS_NAME --query "nodeResourceGroup" -o tsv)
+ mcResourceGroupId=$(az group show --name $mcResourceGroup --query id -otsv)
+
+ echo "Creating identity $IDENTITY_RESOURCE_NAME in resource group $RESOURCE_GROUP"
+ az identity create --resource-group $RESOURCE_GROUP --name $IDENTITY_RESOURCE_NAME
+ principalId="$(az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query principalId -otsv)"
+
+ echo "Waiting 60 seconds to allow for replication of the identity..."
+ sleep 60
- echo "Apply Reader role to the AKS managed cluster resource group for the newly provisioned identity"
- az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $mcResourceGroupId --role "acdd72a7-3385-48ef-bd42-f606fba81ae7" # Reader role
-
- echo "Set up federation with AKS OIDC issuer"
- AKS_OIDC_ISSUER="$(az aks show -n "$AKS_NAME" -g "$RESOURCE_GROUP" --query "oidcIssuerProfile.issuerUrl" -o tsv)"
- az identity federated-credential create --name "azure-alb-identity" \
- --identity-name "$IDENTITY_RESOURCE_NAME" \
- --resource-group $RESOURCE_GROUP \
- --issuer "$AKS_OIDC_ISSUER" \
- --subject "system:serviceaccount:azure-alb-system:alb-controller-sa"
+ echo "Apply Reader role to the AKS managed cluster resource group for the newly provisioned identity"
+ az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $mcResourceGroupId --role "acdd72a7-3385-48ef-bd42-f606fba81ae7" # Reader role
+
+ echo "Set up federation with AKS OIDC issuer"
+ AKS_OIDC_ISSUER="$(az aks show -n "$AKS_NAME" -g "$RESOURCE_GROUP" --query "oidcIssuerProfile.issuerUrl" -o tsv)"
+ az identity federated-credential create --name "azure-alb-identity" \
+ --identity-name "$IDENTITY_RESOURCE_NAME" \
+ --resource-group $RESOURCE_GROUP \
+ --issuer "$AKS_OIDC_ISSUER" \
+ --subject "system:serviceaccount:azure-alb-system:alb-controller-sa"
``` ALB Controller requires a federated credential with the name of _azure-alb-identity_. Any other federated credential name is unsupported.
You need to complete the following tasks prior to deploying Application Gateway
2. Install ALB Controller using Helm
- ### For new deployments
- ALB Controller can be installed by running the following commands:
-
- ```azurecli-interactive
- az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
- helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
- --version 0.4.023971 \
- --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
- ```
-
- > [!Note]
- > ALB Controller will automatically be provisioned into a namespace called azure-alb-system. The namespace name may be changed by defining the _--namespace <namespace_name>_ parameter when executing the helm command. During upgrade, please ensure you specify the --namespace parameter.
-
- ### For existing deployments
- ALB can be upgraded by running the following commands (ensure you add the `--namespace namespace_name` parameter to define the namespace if the previous installation did not use the namespace _azure-alb-system_):
- ```azurecli-interactive
- az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
- helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
- --version 0.4.023971 \
- --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
- ```
+ ### For new deployments
+ ALB Controller can be installed by running the following commands:
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
+ helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
+ --version 0.4.023971 \
+ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
+ ```
+
+ > [!Note]
+ > ALB Controller will automatically be provisioned into a namespace called azure-alb-system. The namespace name may be changed by defining the _--namespace <namespace_name>_ parameter when executing the helm command. During upgrade, please ensure you specify the --namespace parameter.
+
+ ### For existing deployments
+   ALB Controller can be upgraded by running the following commands (ensure you add the `--namespace namespace_name` parameter to define the namespace if the previous installation didn't use the namespace _azure-alb-system_):
+ ```azurecli-interactive
+ az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
+ helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
+ --version 0.4.023971 \
+ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
+ ```
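+
+   After an install or upgrade, you can list the release to confirm the deployed chart version (a quick check; assumes the default _azure-alb-system_ namespace):
+   ```azurecli-interactive
+   helm list --namespace azure-alb-system
+   ```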
### Verify the ALB Controller installation
application-gateway Mutual Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-overview.md
description: This article is an overview of mutual authentication on Application
Previously updated : 12/21/2022 Last updated : 07/29/2023
If you're uploading a certificate chain with root CA and intermediate CA certifi
> [!IMPORTANT] > Make sure you upload the entire trusted client CA certificate chain to the Application Gateway when using mutual authentication.
-Each SSL profile can support up to five trusted client CA certificate chains.
+Each SSL profile can support up to 100 trusted client CA certificate chains. A single Application Gateway can support a total of 200 trusted client CA certificate chains.
> [!NOTE] > Mutual authentication is only available on Standard_v2 and WAF_v2 SKUs.
application-gateway Retirement Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/retirement-faq.md
Once the deadline arrives V1 gateways aren't supported. Any V1 SKU resources tha
### What is the definition of a new customer on Application Gateway V1 SKU?
-Customers who didn't have Application Gateway V1 SKU in their subscriptions as of 4 July 2023 are considered as new customers. These customers won't be able to create new V1 gateways going forward.
+Customers who didn't have Application Gateway V1 SKU in their subscriptions as of 4 July 2023 are considered new customers. Going forward, these customers won't be able to create new V1 gateways in subscriptions that didn't have an existing V1 gateway as of 4 July 2023.
### What is the definition of an existing customer on Application Gateway V1 SKU?
application-gateway V1 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/v1-retirement.md
We announced the deprecation of Application Gateway V1 on **April 28 ,2023**. St
- Deprecation announcement: April 28 ,2023 -- No new subscriptions for V1 deployments: July 1,2023 onwards - Application Gateway V1 is no longer available for deployment on [new subscriptions](./retirement-faq.md#what-is-the-definition-of-a-new-customer-on-application-gateway-v1-sku) from July 1 2023 onwards.
+- No new subscriptions for V1 deployments: July 1, 2023 onwards - Application Gateway V1 is no longer available for deployment on subscriptions without V1 gateways (refer to the [FAQ](./retirement-faq.md#what-is-the-definition-of-a-new-customer-on-application-gateway-v1-sku) for details) from July 1, 2023 onwards.
- No new V1 deployments: August 28, 2024 - V1 creation is stopped completely for all customers from 28 August 2024 onwards. -- SKU retirement: April 28, 2026 - Any Application Gateway V1 that are in Running status will be stopped. Application Gateway V1s that is not migrated to Application Gateway V2 are informed regarding timelines for deleting them and subsequently force deleted.
+- SKU retirement: April 28, 2026 - Any Application Gateway V1 resources that are in Running status will be stopped. Customers with Application Gateway V1s that aren't migrated to Application Gateway V2 are informed about timelines for deleting them, and the gateways are then force deleted.
## Resources available for migration -- Follow the steps outlined in the [migration script](./migrate-v1-v2.md) to migrate from Application Gateway v1 to v2. Please review [pricing](./understanding-pricing.md) before making the transition.
+- Follow the steps outlined in the [migration script](./migrate-v1-v2.md) to migrate from Application Gateway v1 to v2. Review [pricing](./understanding-pricing.md) before making the transition.
- If your company/organization has partnered with Microsoft or works with Microsoft representatives (like cloud solution architects (CSAs) or customer success account managers (CSAMs)), please work with them for migration.
automation Automation Create Alert Triggered Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-create-alert-triggered-runbook.md
description: This article tells how to trigger a runbook to run when an Azure al
Last updated 12/15/2022-+ #Customer intent: As a developer, I want to trigger a runbook so that VMs can be stopped under certain conditions.
Assign permissions to the appropriate [managed identity](./automation-security-o
{ Connect-AzAccount }
-
+ # If you have multiple subscriptions, set the one to use # Select-AzSubscription -SubscriptionId <SUBSCRIPTIONID> ```
Use this example to create a runbook called **Stop-AzureVmInResponsetoVMAlert**.
[object] $WebhookData ) $ErrorActionPreference = "stop"
-
+ if ($WebhookData) { # Get the data object from WebhookData $WebhookBody = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)
-
+ # Get the info needed to identify the VM (depends on the payload schema) $schemaId = $WebhookBody.schemaId Write-Verbose "schemaId: $schemaId" -Verbose
Use this example to create a runbook called **Stop-AzureVmInResponsetoVMAlert**.
# Schema not supported Write-Error "The alert data schema - $schemaId - is not supported." }
-
+ Write-Verbose "status: $status" -Verbose if (($status -eq "Activated") -or ($status -eq "Fired")) {
Use this example to create a runbook called **Stop-AzureVmInResponsetoVMAlert**.
Write-Verbose "resourceName: $ResourceName" -Verbose Write-Verbose "resourceGroupName: $ResourceGroupName" -Verbose Write-Verbose "subscriptionId: $SubId" -Verbose
-
+ # Determine code path depending on the resourceType if ($ResourceType -eq "Microsoft.Compute/virtualMachines") { # This is an Resource Manager VM Write-Verbose "This is an Resource Manager VM." -Verbose
-
- # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave -Scope Process
-
- # Connect to Azure with system-assigned managed identity
- $AzureContext = (Connect-AzAccount -Identity).context
-
- # set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
-
+
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ $AzureContext = (Connect-AzAccount -Identity).context
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+ # Stop the Resource Manager VM Write-Verbose "Stopping the VM - $ResourceName - in resource group - $ResourceGroupName -" -Verbose Stop-AzVM -Name $ResourceName -ResourceGroupName $ResourceGroupName -DefaultProfile $AzureContext -Force
Alerts use action groups, which are collections of actions that are triggered by
:::image type="content" source="./media/automation-create-alert-triggered-runbook/create-alert-rule-portal.png" alt-text="The create alert rule page and subsections.":::
-1. Under **Scope**, select **Edit resource**.
+1. Under **Scope**, select **Edit resource**.
1. On the **Select a resource** page, from the **Filter by resource type** drop-down list, select **Virtual machines**.
Alerts use action groups, which are collections of actions that are triggered by
1. On the **Configure signal logic** page, under **Threshold value** enter an initial low value for testing purposes, such as `5`. You can go back and update this value once you've confirmed the alert works as expected. Then select **Done** to return to the **Create alert rule** page. :::image type="content" source="./media/automation-create-alert-triggered-runbook/configure-signal-logic-portal.png" alt-text="Entering CPU percentage threshold value.":::
-
+ 1. Under **Actions**, select **Add action groups**, and then **+Create action group**. :::image type="content" source="./media/automation-create-alert-triggered-runbook/create-action-group-portal.png" alt-text="The create action group page with Basics tab open.":::
Alerts use action groups, which are collections of actions that are triggered by
1. On the **Create action group** page: 1. On the **Basics** tab, enter an **Action group name** and **Display name**. 1. On the **Actions** tab, in the **Name** text box, enter a name. Then from the **Action type** drop-down list, select **Automation Runbook** to open the **Configure Runbook** page.
- 1. For the **Runbook source** item, select **User**.
+ 1. For the **Runbook source** item, select **User**.
1. From the **Subscription** drop-down list, select your subscription. 1. From the **Automation account** drop-down list, select your Automation account. 1. From the **Runbook** drop-down list, select **Stop-AzureVmInResponsetoVMAlert**. 1. For the **Enable the common alert schema** item, select **Yes**. 1. Select **OK** to return to the **Create action group** page.
-
+ :::image type="content" source="./media/automation-create-alert-triggered-runbook/configure-runbook-portal.png" alt-text="Configure runbook page with values."::: 1. Select **Review + create** and then **Create** to return to the **Create alert rule** page.
Ensure your VM is running. Navigate to the runbook **Stop-AzureVmInResponsetoVMA
## Common Azure VM management operations
-Azure Automation provides scripts for common Azure VM management operations like restart VM, stop VM, delete VM, scale up and down scenarios in Runbook gallery. The scripts can also be found in the Azure Automation [GitHub repository](https://github.com/azureautomation) You can also use these scripts as mentioned in the above steps.
+Azure Automation provides scripts for common Azure VM management operations, like restart VM, stop VM, delete VM, and scale up and down scenarios, in the Runbook gallery. The scripts can also be found in the Azure Automation [GitHub repository](https://github.com/azureautomation). You can use these scripts as mentioned in the above steps.
|**Azure VM management operations** | **Details**|
| --- | --- |
automation Enforce Job Execution Hybrid Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/enforce-job-execution-hybrid-worker.md
> [!IMPORTANT] > Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and won't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 October 2023**, creating new Agent-based Hybrid Workers won't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
-Starting a runbook on a Hybrid Runbook Worker uses a **Run on** option that allows you to specify the name of a Hybrid Runbook Worker group when initiating from the Azure portal, with the Azure PowerShell, or REST API. When a group is specified, one of the workers in that group retrieves and runs the runbook. If your runbook does not specify this option, Azure Automation runs the runbook in the Azure sandbox.
+Starting a runbook on a Hybrid Runbook Worker uses a **Run on** option that allows you to specify the name of a Hybrid Runbook Worker group when initiating from the Azure portal, with Azure PowerShell, or with the REST API. When a group is specified, one of the workers in that group retrieves and runs the runbook. If your runbook doesn't specify this option, Azure Automation runs the runbook in the Azure sandbox.
-Anyone in your organization who is a member of the [Automation Job Operator](automation-role-based-access-control.md#automation-job-operator) or higher can create runbook jobs. To manage runbook execution targeting a Hybrid Runbook Worker group in your Automation account, you can use [Azure Policy](../governance/policy/overview.md). This helps to enforce organizational standards and ensure your automation jobs are controlled and managed by those designated, and anyone cannot execute a runbook on an Azure sandbox, only on Hybrid Runbook workers.
+Anyone in your organization who is a member of the [Automation Job Operator](automation-role-based-access-control.md#automation-job-operator) role or higher can create runbook jobs. To manage runbook execution targeting a Hybrid Runbook Worker group in your Automation account, you can use [Azure Policy](../governance/policy/overview.md). This helps enforce organizational standards and ensures that automation jobs are controlled and managed by those designated, and that no one can execute a runbook in an Azure sandbox, only on Hybrid Runbook Workers.
A custom Azure Policy definition is included in this article to help you control these activities using the following Automation REST API operations. Specifically:
Here we compose the policy rule and then assign it to either a management group
1. Use the following JSON snippet to create a JSON file with the name AuditAutomationHRWJobExecution.json.
- ```json
+ ```json
{
- "properties": {
- "displayName": "Enforce job execution on Automation Hybrid Runbook Worker",
- "description": "Enforce job execution on Hybrid Runbook Workers in your Automation account.",
- "mode": "all",
- "parameters": {
- "effectType": {
- "type": "string",
- "defaultValue": "Deny",
- "allowedValues": [
- "Deny",
- "Disabled"
- ],
- "metadata": {
- "displayName": "Effect",
- "description": "Enable or disable execution of the policy"
- }
- }
- },
- "policyRule": {
+ "properties": {
+ "displayName": "Enforce job execution on Automation Hybrid Runbook Worker",
+ "description": "Enforce job execution on Hybrid Runbook Workers in your Automation account.",
+ "mode": "all",
+ "parameters": {
+ "effectType": {
+ "type": "string",
+ "defaultValue": "Deny",
+ "allowedValues": [
+ "Deny",
+ "Disabled"
+ ],
+ "metadata": {
+ "displayName": "Effect",
+ "description": "Enable or disable execution of the policy"
+ }
+ }
+ },
+ "policyRule": {
"if": { "anyOf": [ {
Here we compose the policy rule and then assign it to either a management group
} } }
- ```
+ ```
2. Run the following Azure PowerShell or Azure CLI command to create a policy definition using the AuditAutomationHRWJobExecution.json file.
- # [Azure CLI](#tab/azure-cli)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli
az policy definition create --name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' --display-name 'Audit Enforce Jobs on Automation Hybrid Runbook Workers' --description 'This policy enforces job execution on Automation account user Hybrid Runbook Workers.' --rules 'AuditAutomationHRWJobExecution.json' --mode All
- ```
+ ```
- The command creates a policy definition named **Audit Enforce Jobs on Automation Hybrid Runbook Workers**. For more information about other parameters that you can use, see [az policy definition create](/cli/azure/policy/definition#az-policy-definition-create).
+ The command creates a policy definition named **Audit Enforce Jobs on Automation Hybrid Runbook Workers**. For more information about other parameters that you can use, see [az policy definition create](/cli/azure/policy/definition#az-policy-definition-create).
- When called without location parameters, `az policy definition create` defaults to saving the policy definition in the selected subscription of the sessions context. To save the definition to a different location, use the following parameters:
+ When called without location parameters, `az policy definition create` defaults to saving the policy definition in the selected subscription of the sessions context. To save the definition to a different location, use the following parameters:
- * **subscription** - Save to a different subscription. Requires a *GUID* value for the subscription ID or a *string* value for the subscription name.
- * **management-group** - Save to a management group. Requires a *string* value.
+ * **subscription** - Save to a different subscription. Requires a *GUID* value for the subscription ID or a *string* value for the subscription name.
+ * **management-group** - Save to a management group. Requires a *string* value.
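+
+   For example, a minimal sketch that saves the same definition at management group scope (_myMgName_ is a placeholder for your management group name):
+
+   ```azurecli
+   az policy definition create --name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' --display-name 'Audit Enforce Jobs on Automation Hybrid Runbook Workers' --rules 'AuditAutomationHRWJobExecution.json' --mode All --management-group 'myMgName'
+   ```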
- # [PowerShell](#tab/azure-powershell)
+ # [PowerShell](#tab/azure-powershell)
- ```azurepowershell
- New-AzPolicyDefinition -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' -DisplayName 'Audit Enforce Jobs on Automation Hybrid Runbook Workers' -Policy 'AuditAutomationHRWJobExecution.json'
- ```
+ ```azurepowershell
+ New-AzPolicyDefinition -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' -DisplayName 'Audit Enforce Jobs on Automation Hybrid Runbook Workers' -Policy 'AuditAutomationHRWJobExecution.json'
+ ```
- The command creates a policy definition named **Audit Enforce Jobs on Automation Hybrid Runbook Workers**. For more information about other parameters that you can use, see [New-AzPolicyDefinition](/powershell/module/az.resources/new-azpolicydefinition).
+ The command creates a policy definition named **Audit Enforce Jobs on Automation Hybrid Runbook Workers**. For more information about other parameters that you can use, see [New-AzPolicyDefinition](/powershell/module/az.resources/new-azpolicydefinition).
- When called without location parameters, `New-AzPolicyDefinition` defaults to saving the policy definition in the selected subscription of the sessions context. To save the definition to a different location, use the following parameters:
+ When called without location parameters, `New-AzPolicyDefinition` defaults to saving the policy definition in the selected subscription of the sessions context. To save the definition to a different location, use the following parameters:
- * **SubscriptionId** - Save to a different subscription. Requires a *GUID* value.
- * **ManagementGroupName** - Save to a management group. Requires a *string* value.
+ * **SubscriptionId** - Save to a different subscription. Requires a *GUID* value.
+ * **ManagementGroupName** - Save to a management group. Requires a *string* value.
-
+
3. After you create your policy definition, you can create a policy assignment by running the following commands:
- # [Azure CLI](#tab/azure-cli)
-
- ```azurecli
- az policy assignment create --name '<name>' --scope '<scope>' --policy '<policy definition ID>'
- ```
-
- The **scope** parameter on `az policy assignment create` works with management group,
- subscription, resource group, or a single resource. The parameter uses a full resource path. The
- pattern for **scope** for each container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`,
- and `{mgName}` with your resource name, resource group name, subscription ID, and management
- group name, respectively. `{rType}` would be replaced with the **resource type** of the resource,
- such as `Microsoft.Compute/virtualMachines` for a VM.
-
- - Resource - `/subscriptions/{subID}/resourceGroups/{rgName}/providers/{rType}/{rName}`
- - Resource group - `/subscriptions/{subID}/resourceGroups/{rgName}`
- - Subscription - `/subscriptions/{subID}`
- - Management group - `/providers/Microsoft.Management/managementGroups/{mgName}`
-
- You can get the Azure Policy Definition ID by using PowerShell with the following command:
-
- ```azurecli
- az policy definition show --name 'Audit Enforce Jobs on Automation Hybrid Runbook Workers'
- ```
-
- The policy definition ID for the policy definition that you created should resemble the following
- example:
-
- ```output
- "/subscription/<subscriptionId>/providers/Microsoft.Authorization/policyDefinitions/Audit Enforce Jobs on Automation Hybrid Runbook Workers"
- ```
-
- # [PowerShell](#tab/azure-powershell)
-
- ```azurepowershell
- $rgName = Get-AzResourceGroup -Name 'ContosoRG'
- $Policy = Get-AzPolicyDefinition -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers'
- New-AzPolicyAssignment -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' -PolicyDefinition $Policy -Scope $rg.ResourceId
- ```
-
- Replace _ContosoRG_ with the name of your intended resource group.
-
- The **Scope** parameter on `New-AzPolicyAssignment` works with management group, subscription,
- resource group, or a single resource. The parameter uses a full resource path, which the
- **ResourceId** property on `Get-AzResourceGroup` returns. The pattern for **Scope** for each
- container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`, and `{mgName}` with your
- resource name, resource group name, subscription ID, and management group name, respectively.
- `{rType}` would be replaced with the **resource type** of the resource, such as
- `Microsoft.Compute/virtualMachines` for a VM.
-
- - Resource - `/subscriptions/{subID}/resourceGroups/{rgName}/providers/{rType}/{rName}`
- - Resource group - `/subscriptions/{subId}/resourceGroups/{rgName}`
- - Subscription - `/subscriptions/{subId}`
- - Management group - `/providers/Microsoft.Management/managementGroups/{mgName}`
-
-
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az policy assignment create --name '<name>' --scope '<scope>' --policy '<policy definition ID>'
+ ```
+
+ The **scope** parameter on `az policy assignment create` works with management group, subscription, resource group, or a single resource. The parameter uses a full resource path. The pattern for **scope** for each container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`, and `{mgName}` with your resource name, resource group name, subscription ID, and management group name, respectively. `{rType}` would be replaced with the **resource type** of the resource, such as `Microsoft.Compute/virtualMachines` for a VM.
+
+ - Resource - `/subscriptions/{subID}/resourceGroups/{rgName}/providers/{rType}/{rName}`
+ - Resource group - `/subscriptions/{subID}/resourceGroups/{rgName}`
+ - Subscription - `/subscriptions/{subID}`
+ - Management group - `/providers/Microsoft.Management/managementGroups/{mgName}`
+
+   You can get the Azure Policy definition ID by using Azure CLI with the following command:
+
+ ```azurecli
+   az policy definition show --name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers'
+ ```
+
+ The policy definition ID for the policy definition that you created should resemble the following example:
+
+ ```output
+   "/subscriptions/<subscriptionId>/providers/Microsoft.Authorization/policyDefinitions/audit-enforce-jobs-on-automation-hybrid-runbook-workers"
+ ```
+
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+   $rg = Get-AzResourceGroup -Name 'ContosoRG'
+ $Policy = Get-AzPolicyDefinition -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers'
+ New-AzPolicyAssignment -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' -PolicyDefinition $Policy -Scope $rg.ResourceId
+ ```
+
+ Replace _ContosoRG_ with the name of your intended resource group.
+
+ The **Scope** parameter on `New-AzPolicyAssignment` works with management group, subscription, resource group, or a single resource. The parameter uses a full resource path, which the **ResourceId** property on `Get-AzResourceGroup` returns. The pattern for **Scope** for each container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`, and `{mgName}` with your resource name, resource group name, subscription ID, and management group name, respectively. `{rType}` would be replaced with the **resource type** of the resource, such as `Microsoft.Compute/virtualMachines` for a VM.
+
 + - Resource - `/subscriptions/{subId}/resourceGroups/{rgName}/providers/{rType}/{rName}`
+ - Resource group - `/subscriptions/{subId}/resourceGroups/{rgName}`
+ - Subscription - `/subscriptions/{subId}`
+ - Management group - `/providers/Microsoft.Management/managementGroups/{mgName}`
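 +
 + Similarly, a minimal sketch of a subscription-scoped assignment (reusing `$Policy` from the previous snippet; the assignment name and use of the current context's subscription are illustrative assumptions):
 +
 + ```azurepowershell
 + # Assign the same definition at subscription scope (sketch)
 + $subId = (Get-AzContext).Subscription.Id
 + New-AzPolicyAssignment -Name 'audit-hrw-jobs-subscription' -PolicyDefinition $Policy -Scope "/subscriptions/$subId"
 + ```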
+
+
4. Sign in to the [Azure portal](https://portal.azure.com). 5. Launch the Azure Policy service in the Azure portal by selecting **All services**, then searching for and selecting **Policy**.
The attempted operation is also logged in the Automation account's Activity Log,
## Next steps
-To work with runbooks, see [Manage runbooks in Azure Automation](manage-runbooks.md).
+To work with runbooks, see [Manage runbooks in Azure Automation](manage-runbooks.md).
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md
description: This tutorial teaches you to create, test, and publish a PowerShell
Last updated 10/16/2022-+ #Customer intent: As a developer, I want to use workflow runbooks so that I can automate the parallel starting of VMs.
Assign permissions to the appropriate [managed identity](../automation-security-
:::image type="content" source="../media/automation-tutorial-runbook-textual/system-assigned-role-assignments-portal.png" alt-text="Selecting Azure role assignments in portal.":::
-1. Select **+ Add role assignment (Preview)** to open the **Add role assignment (Preview)** page.
+1. Select **+ Add role assignment (Preview)** to open the **Add role assignment (Preview)** page.
:::image type="content" source="../media/automation-tutorial-runbook-textual/system-assigned-add-role-assignment-portal.png" alt-text="Add role assignments in portal.":::
Assign permissions to the appropriate [managed identity](../automation-security-
:::image type="content" source="../media/automation-tutorial-runbook-textual/managed-identity-client-id-portal.png" alt-text="Showing Client ID for managed identity in portal":::
-1. From the left menu, select **Azure role assignments** and then **+ Add role assignment (Preview)** to open the **Add role assignment (Preview)** page.
+1. From the left menu, select **Azure role assignments** and then **+ Add role assignment (Preview)** to open the **Add role assignment (Preview)** page.
:::image type="content" source="../media/automation-tutorial-runbook-textual/user-assigned-add-role-assignment-portal.png" alt-text="Add role assignments in portal for user-assigned identity.":::
Assign permissions to the appropriate [managed identity](../automation-security-
Start by creating a simple [PowerShell Workflow runbook](../automation-runbook-types.md#powershell-workflow-runbooks). One advantage of Windows PowerShell Workflows is the ability to perform a set of commands in parallel instead of sequentially as with a typical script. >[!NOTE]
-> With release runbook creation has a new experience in the Azure portal. When you select **Runbooks** blade > **Create a runbook**, a new page **Create a runbook** opens with applicable options.
+> With this release, runbook creation has a new experience in the Azure portal. When you select the **Runbooks** blade > **Create a runbook**, a new **Create a runbook** page opens with applicable options.
1. From your open Automation account page, under **Process Automation**, select **Runbooks**
Start by creating a simple [PowerShell Workflow runbook](../automation-runbook-t
1. From the **Runtime version** drop-down, select **5.1**. 1. Enter applicable **Description**. 1. Select **Create**.
-
+ :::image type="content" source="../media/automation-tutorial-runbook-textual/create-powershell-workflow-runbook-options.png" alt-text="PowerShell workflow runbook options from portal":::
-
+ ## Add code to the runbook
Workflow MyFirstRunbook-Workflow
Write-Output "Non-Parallel" Get-Date Start-Sleep -s 3
- Get-Date
+ Get-Date
``` 1. Save the runbook by selecting **Save**.
Before you publish the runbook to make it available in production, you should te
:::image type="content" source="../media/automation-tutorial-runbook-textual/workflow-runbook-parallel-output.png" alt-text="PowerShell workflow runbook parallel output":::
- Review the output. Everything in the `Parallel` block, including the `Start-Sleep` command, executed at the same time. The same commands outside the `Parallel` block ran sequentially, as shown by the different date time stamps.
+ Review the output. Everything in the `Parallel` block, including the `Start-Sleep` command, executed at the same time. The same commands outside the `Parallel` block ran sequentially, as shown by the different date time stamps.
1. Close the **Test** page to return to the canvas.
You've tested and published your runbook, but so far it doesn't do anything usef
workflow MyFirstRunbook-Workflow { $resourceGroup = "resourceGroupName"
-
+ # Ensures you do not inherit an AzContext in your runbook Disable-AzContextAutosave -Scope Process
-
+ # Connect to Azure with system-assigned managed identity Connect-AzAccount -Identity
-
+ # set and store context
- $AzureContext = Set-AzContext ΓÇôSubscriptionId "<SubscriptionID>"
+ $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
} ```
You've tested and published your runbook, but so far it doesn't do anything usef
## Add code to start a virtual machine
-Now that your runbook is authenticating to the Azure subscription, you can manage resources. Add a command to start a virtual machine. You can pick any VM in your Azure subscription, and for now you're hardcoding that name in the runbook.
+Now that your runbook is authenticating to the Azure subscription, you can manage resources. Add a command to start a virtual machine. You can pick any VM in your Azure subscription, and for now you're hardcoding that name in the runbook.
-1. Add the code below as the last line immediately before the closing brace. Replace `VMName` with the actual name of a VM.
+1. Add the code below as the last line immediately before the closing brace. Replace `VMName` with the actual name of a VM.
```powershell Start-AzVM -Name "VMName" -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
You can use the `ForEach -Parallel` construct to process commands for each item
```powershell workflow MyFirstRunbook-Workflow {
- Param(
- [string]$resourceGroup,
- [string[]]$VMs,
- [string]$action
- )
-
- # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave -Scope Process
-
- # Connect to Azure with system-assigned managed identity
- Connect-AzAccount -Identity
-
- # set and store context
- $AzureContext = Set-AzContext ΓÇôSubscriptionId "<SubscriptionID>"
-
- # Start or stop VMs in parallel
- if($action -eq "Start")
- {
- ForEach -Parallel ($vm in $VMs)
- {
- Start-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
- }
- }
- elseif ($action -eq "Stop")
- {
- ForEach -Parallel ($vm in $VMs)
- {
- Stop-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext -Force
- }
- }
- else {
- Write-Output "`r`n Action not allowed. Please enter 'stop' or 'start'."
- }
- }
+ Param(
+ [string]$resourceGroup,
+ [string[]]$VMs,
+ [string]$action
+ )
+
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ Connect-AzAccount -Identity
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
+
+ # Start or stop VMs in parallel
+ if ($action -eq "Start") {
+ ForEach -Parallel ($vm in $VMs)
+ {
+ Start-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
+ }
+ }
+ elseif ($action -eq "Stop") {
+ ForEach -Parallel ($vm in $VMs)
+ {
+ Stop-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext -Force
+ }
+ }
+ else {
+ Write-Output "`r`n Action not allowed. Please enter 'stop' or 'start'."
+ }
+ }
``` 1. If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you prefer to use a user-assigned managed identity, then: 1. From line 9, remove `Connect-AzAccount -Identity`, 1. Replace it with `Connect-AzAccount -Identity -AccountId <ClientId>`, and 1. Enter the Client ID you obtained earlier, as shown in the sketch below.
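A minimal sketch of the resulting line (the client ID is a placeholder you must supply):

```powershell
# Connect to Azure with a user-assigned managed identity (placeholder client ID)
Connect-AzAccount -Identity -AccountId "<ClientId>"
```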
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md
Title: Use source control integration in Azure Automation
description: This article tells you how to synchronize Azure Automation source control with other repositories. Previously updated : 04/12/2023 Last updated : 07/31/2023
This example uses Azure PowerShell to show how to assign the Contributor role in
```powershell New-AzRoleAssignment `
- -ObjectId <automation-Identity-object-id> `
+ -ObjectId <automation-Identity-Object(Principal)-Id> `
-Scope "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}" ` -RoleDefinitionName "Contributor" ```
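One way to look up the identity's object (principal) ID is by display name; a sketch, assuming a system-assigned identity on an Automation account named *ContosoAutomation*:

```powershell
# The system-assigned identity appears as a service principal whose
# display name matches the Automation account name (sketch)
$identity = Get-AzADServicePrincipal -DisplayName "ContosoAutomation"
$identity.Id  # pass this value to -ObjectId
```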
automation Enable From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-template.md
Previously updated : 09/18/2020 Last updated : 09/18/2020 # Enable Update Management using Azure Resource Manager template
If you're new to Azure Automation and Azure Monitor, it's important that you und
} } },
- {
- "apiVersion": "2015-11-01-preview",
- "location": "[parameters('location')]",
- "name": "[variables('Updates').name]",
- "type": "Microsoft.OperationsManagement/solutions",
- "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.OperationsManagement/solutions/', variables('Updates').name)]",
- "dependsOn": [
- "[concat('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
- ],
- "properties": {
- "workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
- },
- "plan": {
- "name": "[variables('Updates').name]",
- "publisher": "Microsoft",
- "promotionCode": "",
- "product": "[concat('OMSGallery/', variables('Updates').galleryName)]"
- }
- },
+ {
+ "apiVersion": "2015-11-01-preview",
+ "location": "[parameters('location')]",
+ "name": "[variables('Updates').name]",
+ "type": "Microsoft.OperationsManagement/solutions",
+ "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.OperationsManagement/solutions/', variables('Updates').name)]",
+ "dependsOn": [
+ "[concat('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
+ ],
+ "properties": {
+ "workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
+ },
+ "plan": {
+ "name": "[variables('Updates').name]",
+ "publisher": "Microsoft",
+ "promotionCode": "",
+ "product": "[concat('OMSGallery/', variables('Updates').galleryName)]"
+ }
+ },
{ "type": "Microsoft.Automation/automationAccounts", "apiVersion": "2020-01-13-preview",
azure-app-configuration Howto Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md
Azure App Configuration supports data import and export operations. Use these operations to work with configuration data in bulk and exchange data between your App Configuration store and code project. For example, you can set up one App Configuration store for testing and another one for production. You can copy application settings between them so that you don't have to enter data twice.
-This article provides a guide for importing and exporting data with App Configuration. If youΓÇÖd like to set up an ongoing sync with your GitHub repo, take a look at [GitHub Actions](./concept-github-action.md) and [Azure Pipeline tasks](./pull-key-value-devops-pipeline.md).
+This article provides a guide for importing and exporting data with App Configuration. If youΓÇÖd like to set up an ongoing sync with your GitHub repo, take a look at [GitHub Actions](./concept-github-action.md) and [Azure Pipelines tasks](./pull-key-value-devops-pipeline.md).
You can import or export data using either the [Azure portal](https://portal.azure.com) or the [Azure CLI](./scripts/cli-import.md).
azure-app-configuration Quickstart Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-kubernetes-service.md
Now that you have an application running in AKS, you'll deploy the App Configura
```console helm install azureappconfiguration.kubernetesprovider \ oci://mcr.microsoft.com/azure-app-configuration/helmchart/kubernetes-provider \
- --version 1.0.0-preview \
+ --version 1.0.0-preview3 \
--namespace azappconfig-system \ --create-namespace ```
azure-app-configuration Reference Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/reference-kubernetes-provider.md
The following reference outlines the properties supported by the Azure App Confi
## Properties
-An `AzureAppConfigurationProvider` resource has the following top-level child properties under the `spec`.
+An `AzureAppConfigurationProvider` resource has the following top-level child properties under the `spec`. Either `endpoint` or `connectionStringReference` must be specified.
|Name|Description|Required|Type| |||||
-|endpoint|The endpoint of Azure App Configuration, which you would like to retrieve the key-values from|true|string|
+|endpoint|The endpoint of Azure App Configuration, which you would like to retrieve the key-values from|alternative|string|
+|connectionStringReference|The name of the Kubernetes Secret that contains the Azure App Configuration connection string|alternative|string|
|target|The destination of the retrieved key-values in Kubernetes|true|object| |auth|The authentication method to access Azure App Configuration|false|object| |keyValues|The settings for querying and processing key-values|false|object|
The `spec.keyValues` has the following child properties. The `spec.keyValues.key
|selectors|The list of selectors for key-value filtering|false|object array| |trimKeyPrefixes|The list of key prefixes to be trimmed|false|string array| |keyVaults|The settings for Key Vault references|conditional|object|
+|refresh|The settings for refreshing the key-values in ConfigMap or Secret|false|object|
If the `spec.keyValues.selectors` property isn't set, all key-values with no label will be downloaded. It contains an array of *selector* objects, which have the following child properties.
If the `spec.keyValues.selectors` property isn't set, all key-values with no lab
|keyFilter|The key filter for querying key-values|true|string| |labelFilter|The label filter for querying key-values|false|string| - The `spec.keyValues.keyVaults` property has the following child properties. |Name|Description|Required|Type|
The authentication method of each *vault* can be specified with the following pr
|managedIdentityClientId|The client ID of a user-assigned managed identity used for authentication with a vault|false|string| |servicePrincipalReference|The name of the Kubernetes Secret that contains the credentials of a service principal used for authentication with a vault|false|string|
+The `spec.keyValues.refresh` property has the following child properties.
+
+|Name|Description|Required|Type|
+|||||
+|monitoring|The key-values monitored by the provider. The provider automatically refreshes the ConfigMap or Secret when the value of any designated key-value changes|true|object|
+|interval|The interval for refreshing. The default value is 30 seconds; it must be greater than 1 second|false|duration string|
+
+The `spec.keyValues.refresh.monitoring.keyValues` property is an array of objects with the following child properties.
+
+|Name|Description|Required|Type|
+|||||
+|key|The key of a key-value|true|string|
+|label|The label of a key-value|false|string|
+ ## Examples ### Authentication
The authentication method of each *vault* can be specified with the following pr
servicePrincipalReference: <your-service-principal-secret-name> ```
+#### Use Connection String
+
+1. Create a Kubernetes Secret in the same namespace as the `AzureAppConfigurationProvider` resource and add the Azure App Configuration connection string with the key *azure_app_configuration_connection_string* to the Secret. (A sketch of this step follows the sample below.)
+2. Set the `spec.connectionStringReference` property to the name of the Secret in the following sample `AzureAppConfigurationProvider` resource and deploy it to the Kubernetes cluster.
+
+ ``` yaml
+ apiVersion: azconfig.io/v1beta1
+ kind: AzureAppConfigurationProvider
+ metadata:
+ name: appconfigurationprovider-sample
+ spec:
+ connectionStringReference: <your-connection-string-secret-name>
+ target:
+ configMapName: configmap-created-by-appconfig-provider
+ ```
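 +
 + As a sketch of step 1 (the Secret name and connection string are placeholders), the Secret could be created with kubectl:
 +
 + ``` console
 + kubectl create secret generic <your-connection-string-secret-name> \
 +   --from-literal=azure_app_configuration_connection_string='<your-store-connection-string>'
 + ```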
+ ### Key-value selection Use the `selectors` property to filter the key-values to be downloaded from Azure App Configuration.
spec:
- uri: <your-key-vault-uri> servicePrincipalReference: <name-of-secret-containing-service-principal-credentials> ```+
+### Dynamically refresh ConfigMap and Secret
+
+Setting the `spec.keyValues.refresh` property enables dynamic refresh of the configuration data in the ConfigMap and Secret by monitoring designated key-values. The provider polls the key-values at the configured interval; if any value changes, it refreshes the ConfigMap and Secret with the current data in Azure App Configuration.
+
+The following sample monitors two key-values with a one-minute polling interval.
+
+``` yaml
+apiVersion: azconfig.io/v1beta1
+kind: AzureAppConfigurationProvider
+metadata:
+ name: appconfigurationprovider-sample
+spec:
+ endpoint: <your-app-configuration-store-endpoint>
+ target:
+ configMapName: configmap-created-by-appconfig-provider
+ keyValues:
+ selectors:
+ - keyFilter: app1*
+ labelFilter: common
+ - keyFilter: app1*
+ labelFilter: development
+ refresh:
+ interval: 1m
+ monitoring:
+ keyValues:
+ - key: sentinelKey
+ label: common
+ - key: sentinelKey
+ label: development
+```
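+
+With this configuration, updating one of the monitored sentinel keys triggers a refresh; a sketch with the Azure CLI (the store name and value are placeholders):
+
+```azurecli
+az appconfig kv set --name <your-store-name> --key sentinelKey --label common --value "updated" --yes
+```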
azure-arc Monitor Gitops Flux 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/monitor-gitops-flux-2.md
Title: Monitor GitOps (Flux v2) status and activity Previously updated : 07/21/2023 Last updated : 07/28/2023 description: Learn how to monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2.
Follow these steps to import dashboards that let you monitor Flux extension depl
> [!NOTE] > These steps describe the process for importing the dashboard to [Azure Managed Grafana](/azure/managed-grafana/overview). You can also [import this dashboard to any Grafana instance](https://grafana.com/docs/grafana/latest/dashboards/manage-dashboards/#import-a-dashboard). With this option, a service principal must be used; managed identity is not supported for data connection outside of Azure Managed Grafana.
-1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). This connection lets the dashboard access Azure Resource Graph.
-1. [Create the Azure Monitor Data Source connection](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) in your Azure Managed Grafana instance.
-1. Ensure that the user account that will access the dashboard has the **Reader** role on the subscriptions and/or resource groups where the clusters are located.
-
- If you're using a managed identity, follow these steps to enable this access:
+1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least **Reader** level permissions. You can check your access by going to **Access control (IAM)** on the Grafana instance.
+1. If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it a Reader role on the subscription(s):
1. In the Azure portal, navigate to the subscription that you want to add. 1. Select **Access control (IAM)**.
Follow these steps to import dashboards that let you monitor Flux extension depl
If you're using a service principal, grant the **Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.)
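Either grant can also be scripted; a sketch with the Azure CLI (the principal ID and subscription ID are placeholders):

```azurecli
az role assignment create \
  --assignee <managed-identity-or-service-principal-id> \
  --role "Reader" \
  --scope "/subscriptions/<subscriptionId>"
```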
+1. [Create the Azure Monitor Data Source connection](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) in your Azure Managed Grafana instance. This connection lets the dashboard access Azure Resource Graph data.
1. Download the [GitOps Flux - Application Deployments Dashboard](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/GitOps%20Flux%20-%20Application%20Deployments%20Dashboard.json). 1. Follow the steps to [import the JSON dashboard to Grafana](/azure/managed-grafana/how-to-create-dashboard#import-a-json-dashboard).
azure-functions Durable Functions Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-troubleshooting-guide.md
Title: Durable Functions Troubleshooting Guide - Azure Functions description: Guide to troubleshoot common issues with durable functions.-+ Last updated 03/10/2023
azure-functions Functions Bindings Azure Data Explorer Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-input.md
The Azure Data Explorer input binding retrieves data from a database.
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Azure Data Explorer Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-output.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
For information on setup and configuration details, see the [overview](./functio
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
For information on setup and configuration details, see the [overview](./functio
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
The following example shows a function that retrieves a single document. The fun
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDbInputBindingFunction.cs" id="docsnippet_qtrigger_with_cosmosdb_inputbinding" :::
-# [C# Script](#tab/csharp-script)
-
-This section contains the following examples:
-
-* [Queue trigger, look up ID from string](#queue-trigger-look-up-id-from-string-c-script)
-* [Queue trigger, get multiple docs, using SqlQuery](#queue-trigger-get-multiple-docs-using-sqlquery-c-script)
-* [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c-script)
-* [HTTP trigger, look up ID from route data](#http-trigger-look-up-id-from-route-data-c-script)
-* [HTTP trigger, get multiple docs, using SqlQuery](#http-trigger-get-multiple-docs-using-sqlquery-c-script)
-* [HTTP trigger, get multiple docs, using DocumentClient](#http-trigger-get-multiple-docs-using-documentclient-c-script)
-
-The HTTP trigger examples refer to a simple `ToDoItem` type:
-
-```cs
-namespace CosmosDBSamplesV2
-{
- public class ToDoItem
- {
- public string Id { get; set; }
- public string Description { get; set; }
- }
-}
-```
-
-<a id="queue-trigger-look-up-id-from-string-c-script"></a>
-
-### Queue trigger, look up ID from string
-
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "inputDocument",
- "type": "cosmosDB",
- "databaseName": "MyDatabase",
- "collectionName": "MyCollection",
- "id" : "{queueTrigger}",
- "partitionKey": "{partition key value}",
- "connectionStringSetting": "MyAccount_COSMOSDB",
- "direction": "in"
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
- using System;
-
- // Change input document contents using Azure Cosmos DB input binding
- public static void Run(string myQueueItem, dynamic inputDocument)
- {
- inputDocument.text = "This has changed.";
- }
-```
-
-<a id="queue-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
-
-### Queue trigger, get multiple docs, using SqlQuery
-
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
-
-The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "documents",
- "type": "cosmosDB",
- "direction": "in",
- "databaseName": "MyDb",
- "collectionName": "MyCollection",
- "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
- "connectionStringSetting": "CosmosDBConnection"
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
- public static void Run(QueuePayload myQueueItem, IEnumerable<dynamic> documents)
- {
- foreach (var doc in documents)
- {
- // operate on each document
- }
- }
-
- public class QueuePayload
- {
- public string departmentId { get; set; }
- }
-```
-
-<a id="http-trigger-look-up-id-from-query-string-c-script"></a>
-
-### HTTP trigger, look up ID from query string
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "cosmosDB",
- "name": "toDoItem",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connectionStringSetting": "CosmosDBConnection",
- "direction": "in",
- "Id": "{Query.id}",
- "PartitionKey" : "{Query.partitionKeyValue}"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System.Net;
-using Microsoft.Extensions.Logging;
-
-public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- if (toDoItem == null)
- {
- log.LogInformation($"ToDo item not found");
- }
- else
- {
- log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
-
-<a id="http-trigger-look-up-id-from-route-data-c-script"></a>
-
-### HTTP trigger, look up ID from route data
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ],
- "route":"todoitems/{partitionKeyValue}/{id}"
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "cosmosDB",
- "name": "toDoItem",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connectionStringSetting": "CosmosDBConnection",
- "direction": "in",
- "id": "{id}",
- "partitionKey": "{partitionKeyValue}"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System.Net;
-using Microsoft.Extensions.Logging;
-
-public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- if (toDoItem == null)
- {
- log.LogInformation($"ToDo item not found");
- }
- else
- {
- log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
-
-<a id="http-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
-
-### HTTP trigger, get multiple docs, using SqlQuery
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The query is specified in the `SqlQuery` attribute property.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "cosmosDB",
- "name": "toDoItems",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connectionStringSetting": "CosmosDBConnection",
- "direction": "in",
- "sqlQuery": "SELECT top 2 * FROM c order by c._ts desc"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System.Net;
-using Microsoft.Extensions.Logging;
-
-public static HttpResponseMessage Run(HttpRequestMessage req, IEnumerable<ToDoItem> toDoItems, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- foreach (ToDoItem toDoItem in toDoItems)
- {
- log.LogInformation(toDoItem.Description);
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
-
-<a id="http-trigger-get-multiple-docs-using-documentclient-c-script"></a>
-
-### HTTP trigger, get multiple docs, using DocumentClient
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `DocumentClient` instance provided by the Azure Cosmos DB binding to read a list of documents. The `DocumentClient` instance could also be used for write operations.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "cosmosDB",
- "name": "client",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connectionStringSetting": "CosmosDBConnection",
- "direction": "inout"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-#r "Microsoft.Azure.Documents.Client"
-
-using System.Net;
-using Microsoft.Azure.Documents.Client;
-using Microsoft.Azure.Documents.Linq;
-using Microsoft.Extensions.Logging;
-
-public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, DocumentClient client, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoItems", "Items");
- string searchterm = req.GetQueryNameValuePairs()
- .FirstOrDefault(q => string.Compare(q.Key, "searchterm", true) == 0)
- .Value;
-
- if (searchterm == null)
- {
- return req.CreateResponse(HttpStatusCode.NotFound);
- }
-
- log.LogInformation($"Searching for word: {searchterm} using Uri: {collectionUri.ToString()}");
- IDocumentQuery<ToDoItem> query = client.CreateDocumentQuery<ToDoItem>(collectionUri)
- .Where(p => p.Description.Contains(searchterm))
- .AsDocumentQuery();
-
- while (query.HasMoreResults)
- {
- foreach (ToDoItem result in await query.ExecuteNextAsync())
- {
- log.LogInformation(result.Description);
- }
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
- ::: zone-end
Here's the binding data in the *function.json* file:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-input).
# [Extension 4.x+](#tab/extensionv4/in-process)
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
[!INCLUDE [functions-cosmosdb-input-attributes-v3](../../includes/functions-cosmosdb-input-attributes-v3.md)]
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
--
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-- ::: zone-end
See the [Example section](#example) for complete examples.
[!INCLUDE [functions-cosmosdb-usage](../../includes/functions-cosmosdb-usage.md)]
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
--
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-- The parameter type supported by the Cosmos DB input binding depends on the Functions runtime version, the extension package version, and the C# modality used.
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfuncti
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types.
-
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
- ::: zone-end
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
In the following example, the return type is an [`IReadOnlyList<T>`](/dotnet/api
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDBFunction.cs" range="4-35":::
-# [C# Script](#tab/csharp-script)
-
-This section contains the following examples:
-
-* [Queue trigger, write one doc](#queue-trigger-write-one-doc-c-script)
-* [Queue trigger, write docs using IAsyncCollector](#queue-trigger-write-docs-using-iasynccollector-c-script)
--
-<a id="queue-trigger-write-one-doc-c-script"></a>
-
-### Queue trigger, write one doc
-
-The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format:
-
-```json
-{
- "name": "John Henry",
- "employeeId": "123456",
- "address": "A town nearby"
-}
-```
-
-The function creates Azure Cosmos DB documents in the following format for each record:
-
-```json
-{
- "id": "John Henry-123456",
- "name": "John Henry",
- "employeeId": "123456",
- "address": "A town nearby"
-}
-```
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "employeeDocument",
- "type": "cosmosDB",
- "databaseName": "MyDatabase",
- "collectionName": "MyCollection",
- "createIfNotExists": true,
- "connectionStringSetting": "MyAccount_COSMOSDB",
- "direction": "out"
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
- #r "Newtonsoft.Json"
-
- using Microsoft.Azure.WebJobs.Host;
- using Newtonsoft.Json.Linq;
- using Microsoft.Extensions.Logging;
-
- public static void Run(string myQueueItem, out object employeeDocument, ILogger log)
- {
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
-
- dynamic employee = JObject.Parse(myQueueItem);
-
- employeeDocument = new {
- id = employee.name + "-" + employee.employeeId,
- name = employee.name,
- employeeId = employee.employeeId,
- address = employee.address
- };
- }
-```
-
-<a id="queue-trigger-write-docs-using-iasynccollector-c-script"></a>
-
-### Queue trigger, write docs using IAsyncCollector
-
-To create multiple documents, you can bind to `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the supported types.
-
-This example refers to a simple `ToDoItem` type:
-
-```cs
-namespace CosmosDBSamplesV2
-{
- public class ToDoItem
- {
- public string id { get; set; }
- public string Description { get; set; }
- }
-}
-```
-
-Here's the function.json file:
-
-```json
-{
- "bindings": [
- {
- "name": "toDoItemsIn",
- "type": "queueTrigger",
- "direction": "in",
- "queueName": "todoqueueforwritemulti",
- "connectionStringSetting": "AzureWebJobsStorage"
- },
- {
- "type": "cosmosDB",
- "name": "toDoItemsOut",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connectionStringSetting": "CosmosDBConnection",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System;
-using Microsoft.Extensions.Logging;
-
-public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> toDoItemsOut, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed {toDoItemsIn?.Length} items");
-
- foreach (ToDoItem toDoItem in toDoItemsIn)
- {
- log.LogInformation($"Description={toDoItem.Description}");
- await toDoItemsOut.AddAsync(toDoItem);
- }
-}
-```
- ::: zone-end
def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpRespon
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-output).
# [Extension 4.x+](#tab/extensionv4/in-process)
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
[!INCLUDE [functions-cosmosdb-output-attributes-v3](../../includes/functions-cosmosdb-output-attributes-v3.md)]
-# [Extension 4.x+](#tab/functionsv4/csharp-script)
--
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-- ::: zone-end
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfuncti
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types.
-
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
- ::: zone-end
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
-
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
This example requires the following `using` statements:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDBFunction.cs" range="4-7"::: -
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
-
-The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "type": "cosmosDBTrigger",
- "name": "documents",
- "direction": "in",
- "leaseContainerName": "leases",
- "connection": "<connection-app-setting>",
- "databaseName": "Tasks",
- "containerName": "Items",
- "createLeaseContainerIfNotExists": true
-}
-```
-
-Here's the C# script code:
-
-```cs
- using System;
- using System.Collections.Generic;
- using Microsoft.Extensions.Logging;
-
- // Customize the model with your own desired properties
- public class ToDoItem
- {
- public string id { get; set; }
- public string Description { get; set; }
- }
-
- public static void Run(IReadOnlyList<ToDoItem> documents, ILogger log)
- {
- log.LogInformation("Documents modified " + documents.Count);
- log.LogInformation("First document Id " + documents[0].id);
- }
-```
-
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-
-The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "type": "cosmosDBTrigger",
- "name": "documents",
- "direction": "in",
- "leaseCollectionName": "leases",
- "connectionStringSetting": "<connection-app-setting>",
- "databaseName": "Tasks",
- "collectionName": "Items",
- "createLeaseCollectionIfNotExists": true
-}
-```
-
-Here's the C# script code:
-
-```cs
- #r "Microsoft.Azure.DocumentDB.Core"
-
- using System;
- using Microsoft.Azure.Documents;
- using System.Collections.Generic;
- using Microsoft.Extensions.Logging;
-
- public static void Run(IReadOnlyList<Document> documents, ILogger log)
- {
- log.LogInformation("Documents modified " + documents.Count);
- log.LogInformation("First document Id " + documents[0].Id);
- }
-```
- ::: zone-end
Here's the Python code:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-trigger).
# [Extension 4.x+](#tab/extensionv4/in-process)
Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotn
[!INCLUDE [functions-cosmosdb-attributes-v3](../../includes/functions-cosmosdb-attributes-v3.md)]
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
--
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-- ::: zone-end
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfuncti
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types.
-
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
- ::: zone-end
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
This article supports both programming models.
The type of the output parameter used with an Event Grid output binding depends on the Functions runtime version, the binding extension version, and the modality of the C# function. The C# function can be created using one of the following C# modes: * [In-process class library](functions-dotnet-class-library.md): compiled C# function that runs in the same process as the Functions runtime.
-* [Isolated worker process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a worker process isolated from the runtime.
-* [C# script](functions-reference-csharp.md): used primarily when creating C# functions in the Azure portal.
+* [Isolated worker process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a worker process isolated from the runtime.
# [In-process](#tab/in-process)
The following example shows how the custom type is used in both the trigger and
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/EventGrid/EventGridFunction.cs" range="4-49":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows the Event Grid output binding data in the *function.json* file.
-
-```json
-{
- "type": "eventGrid",
- "name": "outputEvent",
- "topicEndpointUri": "MyEventGridTopicUriSetting",
- "topicKeySetting": "MyEventGridTopicKeySetting",
- "direction": "out"
-}
-```
-
-Here's C# script code that creates one event:
-
-```cs
-#r "Microsoft.Azure.EventGrid"
-using System;
-using Microsoft.Azure.EventGrid.Models;
-using Microsoft.Extensions.Logging;
-
-public static void Run(TimerInfo myTimer, out EventGridEvent outputEvent, ILogger log)
-{
- outputEvent = new EventGridEvent("message-id", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0");
-}
-```
-
-Here's C# script code that creates multiple events:
-
-```cs
-#r "Microsoft.Azure.EventGrid"
-using System;
-using Microsoft.Azure.EventGrid.Models;
-using Microsoft.Extensions.Logging;
-
-public static void Run(TimerInfo myTimer, ICollector<EventGridEvent> outputEvent, ILogger log)
-{
- outputEvent.Add(new EventGridEvent("message-id-1", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"));
- outputEvent.Add(new EventGridEvent("message-id-2", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"));
-}
-```
::: zone-end
def main(eventGridEvent: func.EventGridEvent,
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to configure the binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#event-grid-output).
The attribute's constructor takes the name of an application setting that contains the name of the custom topic, and the name of an application setting that contains the topic key.
The following table explains the parameters for the `EventGridOutputAttribute`.
|**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. | |**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
-# [C# Script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-|||-|
-|**type** | Must be set to `eventGrid`. |
-|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
-|**name** | The variable name used in function code that represents the event. |
-|**topicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
-|**topicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
- ::: zone-end
Requires you to define a custom type, or use a string. See the [Example section]
Functions version 1.x doesn't support isolated worker process.
-# [Extension v3.x](#tab/extensionv3/csharp-script)
-
-C# script functions support the following types:
-
-+ [Azure.Messaging.CloudEvent][CloudEvent]
-+ [Azure.Messaging.EventGrid][EventGridEvent]
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
-
-Send messages by using a method parameter such as `out EventGridEvent paramName`.
-To write multiple messages, you can instead use `ICollector<EventGridEvent>` or `IAsyncCollector<EventGridEvent>`.
-
-# [Extension v2.x](#tab/extensionv2/csharp-script)
-
-C# script functions support the following types:
-
-+ [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent]
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
-
-Send messages by using a method parameter such as `out EventGridEvent paramName`.
-To write multiple messages, you can instead use `ICollector<EventGridEvent>` or `IAsyncCollector<EventGridEvent>`.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-C# script functions support the following types:
-
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
- ::: zone-end
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
The following example shows how the custom type is used in both the trigger and
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/EventGrid/EventGridFunction.cs" range="11-33":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows an Event Grid trigger defined in the *function.json* file.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "type": "eventGridTrigger",
- "name": "eventGridEvent",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-Here's an example of a C# script function that uses an `EventGridEvent` binding parameter:
-
-```csharp
-#r "Microsoft.Azure.EventGrid"
-using Microsoft.Azure.EventGrid.Models;
-using Microsoft.Extensions.Logging;
-
-public static void Run(EventGridEvent eventGridEvent, ILogger log)
-{
- log.LogInformation(eventGridEvent.Data.ToString());
-}
-```
-
-For more information, see Packages, [Attributes](#attributes), [Configuration](#configuration), and [Usage](#usage).
--
-Here's an example of a C# script function that uses a `JObject` binding parameter:
-
-```cs
-#r "Newtonsoft.Json"
-
-using Newtonsoft.Json;
-using Newtonsoft.Json.Linq;
-
-public static void Run(JObject eventGridEvent, TraceWriter log)
-{
- log.Info(eventGridEvent.ToString(Formatting.Indented));
-}
-```
- ::: zone-end
def main(event: func.EventGridEvent):
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#event-grid-trigger).
# [In-process](#tab/in-process)
Here's an `EventGridTrigger` attribute in a method signature:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/EventGrid/EventGridFunction.cs" range="13-16":::
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file. There are no constructor parameters or properties to set in the `EventGridTrigger` attribute.
-
-|function.json property |Description|
-|||
-| **type** | Required - must be set to `eventGridTrigger`. |
-| **direction** | Required - must be set to `in`. |
-| **name** | Required - the variable name used in function code for the parameter that receives the event data. |
- ::: zone-end
Requires you to define a custom type, or use a string. See the [Example section]
Functions version 1.x doesn't support the isolated worker process.
-# [Extension v3.x](#tab/extensionv3/csharp-script)
-
-In-process C# class library functions supports the following types:
-
-+ [Azure.Messaging.CloudEvent][CloudEvent]
-+ [Azure.Messaging.EventGrid][EventGridEvent2]
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
-
-# [Extension v2.x](#tab/extensionv2/csharp-script)
-
-In-process C# class library functions supports the following types:
-
-+ [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent]
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-In-process C# class library functions supports the following types:
-
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
- ::: zone-end
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
The following example shows a [C# function](dotnet-isolated-process-guide.md) th
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/EventHubs/EventHubsFunction.cs" range="12-23":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows an event hub trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes a message to an event hub.
-
-The following examples show Event Hubs binding data in the *function.json* file for Functions runtime version 2.x and later versions.
-
-```json
-{
- "type": "eventHub",
- "name": "outputEventHubMessage",
- "eventHubName": "myeventhub",
- "connection": "MyEventHubSendAppSetting",
- "direction": "out"
-}
-```
-
-Here's C# script code that creates one message:
-
-```cs
-using System;
-using Microsoft.Extensions.Logging;
-
-public static void Run(TimerInfo myTimer, out string outputEventHubMessage, ILogger log)
-{
- String msg = $"TimerTriggerCSharp1 executed at: {DateTime.Now}";
- log.LogInformation(msg);
- outputEventHubMessage = msg;
-}
-```
-
-Here's C# script code that creates multiple messages:
-
-```cs
-public static void Run(TimerInfo myTimer, ICollector<string> outputEventHubMessage, ILogger log)
-{
- string message = $"Message created at: {DateTime.Now}";
- log.LogInformation(message);
- outputEventHubMessage.Add("1 " + message);
- outputEventHubMessage.Add("2 " + message);
-}
-```
- ::: zone-end
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to configure the binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#event-hubs-output).
# [In-process](#tab/in-process)
Use the [EventHubOutputAttribute] to define an output binding to an event hub, w
|**EventHubName** | The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. | |**Connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).|
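As a hedged sketch of these properties in use, the following in-process function sends one message per timer invocation through a return-value binding; the hub name and connection setting mirror the names used elsewhere in this article and are placeholders:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TimerToEventHub
{
    [FunctionName("TimerToEventHub")]
    [return: EventHub("myeventhub", Connection = "MyEventHubSendAppSetting")]
    public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
    {
        string message = $"Message created at: {DateTime.Now}";
        log.LogInformation(message);
        // The returned string is sent to the event hub by the output binding.
        return message;
    }
}
```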
-# [C# Script](#tab/csharp-script)
-
-The following table explains the binding configuration properties that you set in the *function.json* file.
-
-|function.json property | Description|
-|||
-|**type** | Must be set to `eventHub`. |
-|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
-|**name** | The variable name used in function code that represents the event. |
-|**eventHubName** | Functions 2.x and higher. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. In Functions 1.x, this property is named `path`.|
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).|
- ::: zone-end
Send messages by using a method parameter such as `out string paramName`. To wri
# [Extension v3.x+](#tab/extensionv3/isolated-process)
-Requires you to define a custom type, or use a string.
-
-# [Extension v5.x+](#tab/extensionv5/csharp-script)
-
-C# script functions support the following types:
-
-+ [Azure.Messaging.EventHubs.EventData](/dotnet/api/azure.messaging.eventhubs.eventdata)
-+ String
-+ Byte array
-+ Plain-old CLR object (POCO)
-
-This version of [EventData](/dotnet/api/azure.messaging.eventhubs.eventdata) drops support for the legacy `Body` type in favor of [EventBody](/dotnet/api/azure.messaging.eventhubs.eventdata.eventbody).
-
-Send messages by using a method parameter such as `out string paramName`, where `paramName` is the value specified in the `name` property of *function.json*. To write multiple messages, you can use `ICollector<string>` or `IAsyncCollector<string>` in place of `out string`.
-
-# [Extension v3.x+](#tab/extensionv3/csharp-script)
-
-C# script functions support the following types:
-
-+ [Microsoft.Azure.EventHubs.EventData](/dotnet/api/microsoft.azure.eventhubs.eventdata)
-+ String
-+ Byte array
-+ Plain-old CLR object (POCO)
-
-Send messages by using a method parameter such as `out string paramName`, where `paramName` is the value specified in the `name` property of *function.json*. To write multiple messages, you can use `ICollector<string>` or
-`IAsyncCollector<string>` in place of `out string`.
+Requires you to define a custom type, or use a string. Additional options are available in **Extension v5.x+**.
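For the isolated worker model, a minimal sketch that returns a string (supported across the extension versions noted above) might look like the following; the hub and connection names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;

public static class IsolatedEventHubOutput
{
    [Function("IsolatedEventHubOutput")]
    [EventHubOutput("myeventhub", Connection = "MyEventHubSendAppSetting")]
    public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        // The returned string becomes the body of a single event.
        return $"Message created at: {System.DateTime.Now}";
    }
}
```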
azure-functions Functions Bindings Http Webhook Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-output.md
The default return value for an HTTP-triggered function is:
::: zone pivot="programming-language-csharp" ## Attribute
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries don't require an attribute. C# script uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries don't require an attribute. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#http-output).
# [In-process](#tab/in-process)
A return value attribute isn't required. To learn more, see [Usage](#usage).
A return value attribute isn't required. To learn more, see [Usage](#usage).
-# [C# Script](#tab/csharp-script)
-
-The following table explains the binding configuration properties that you set in the *function.json* file.
-
-|Property |Description |
-|||
-| **type** |Must be set to `http`. |
-| **direction** | Must be set to `out`. |
-| **name** | The variable name used in function code for the response, or `$return` to use the return value. |
- ::: zone-end
The HTTP triggered function returns a type of [IActionResult] or `Task<IActionRe
The HTTP triggered function returns an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object or a `Task<HttpResponseData>`. If the app uses [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration-preview), it could also use [IActionResult], `Task<IActionResult>`, [HttpResponse], or `Task<HttpResponse>`.
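A minimal sketch of returning `HttpResponseData` from an isolated worker function, assuming the standard `Microsoft.Azure.Functions.Worker` packages:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class HttpHello
{
    [Function("HttpHello")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
    {
        // Build the response returned to the caller by the HTTP output binding.
        HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Hello from the isolated worker!");
        return response;
    }
}
```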
-# [C# Script](#tab/csharp-script)
-
-The HTTP triggered function returns a type of [IActionResult] or `Task<IActionResult>`.
- [IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult [HttpResponse]: /dotnet/api/microsoft.aspnetcore.http.httpresponse
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
public IActionResult Run(
[IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult
-# [C# Script](#tab/csharp-script)
-
-The following example shows a trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request.
-
-Here's the *function.json* file:
-
-```json
-{
- "disabled": false,
- "bindings": [
- {
- "authLevel": "function",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- }
- ]
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's C# script code that binds to `HttpRequest`:
-
-```cs
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-using Newtonsoft.Json;
-
-public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- string name = req.Query["name"];
-
- string requestBody = String.Empty;
- using (StreamReader streamReader = new StreamReader(req.Body))
- {
- requestBody = await streamReader.ReadToEndAsync();
- }
- dynamic data = JsonConvert.DeserializeObject(requestBody);
- name = name ?? data?.name;
-
- return name != null
- ? (ActionResult)new OkObjectResult($"Hello, {name}")
- : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
-}
-```
-
-You can bind to a custom object instead of `HttpRequest`. This object is created from the body of the request and parsed as JSON. Similarly, a type can be passed to the HTTP response output binding and returned as the response body, along with a `200` status code.
-
-```csharp
-using System.Net;
-using System.Threading.Tasks;
-using Microsoft.Extensions.Logging;
-
-public static string Run(Person person, ILogger log)
-{
- return person.Name != null
- ? (ActionResult)new OkObjectResult($"Hello, {person.Name}")
- : new BadRequestObjectResult("Please pass an instance of Person.");
-}
-
-public class Person {
- public string Name {get; set;}
-}
-```
- ::: zone-end
def main(req: func.HttpRequest) -> func.HttpResponse:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#http-trigger).
# [In-process](#tab/in-process)
In [isolated worker process](dotnet-isolated-process-guide.md) function apps, th
| **Methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). | | **Route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
-# [C# Script](#tab/csharp-script)
-
-The following table explains the trigger configuration properties that you set in the *function.json* file:
-
-|function.json property | Description|
-|||
-| **type** | Required - must be set to `httpTrigger`. |
-| **direction** | Required - must be set to `in`. |
-| **name** | Required - the variable name used in function code for the request or request body. |
-| **authLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). |
-| **methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
-| **route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
-| **webHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](#webhook-type).|
- ::: zone-end
FunctionContext executionContext)
} ```
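Here's a hedged in-process sketch of the same route-parameter binding; the `category` and `id` method parameters are bound from the route template:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class RouteDemo
{
    [FunctionName("RouteDemo")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "products/{category:alpha}/{id:int?}")] HttpRequest req,
        string category,  // bound from the {category} route parameter
        int? id,          // bound from the optional {id} route parameter
        ILogger log)
    {
        log.LogInformation($"Category: {category}, ID: {id}");
        return new OkObjectResult($"Category: {category}, ID: {id}");
    }
}
```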
-# [C# Script](#tab/csharp-script)
-
- The following C# function code makes use of both parameters.
-
-```csharp
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-
-public static IActionResult Run(HttpRequest req, string category, int? id, ILogger log)
-{
- var message = String.Format($"Category: {category}, ID: {id}");
- return (ActionResult)new OkObjectResult(message);
-}
-```
- ::: zone-end
public static void Run(JObject input, ClaimsPrincipal principal, ILogger log)
The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
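For example, a minimal in-process sketch that binds `ClaimsPrincipal` as an additional parameter; the identity is only populated when App Service Authentication is enabled:

```csharp
using System.Security.Claims;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class WhoAmI
{
    [FunctionName("WhoAmI")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
        ClaimsPrincipal principal) // injected alongside the trigger parameter
    {
        // Returns the authenticated user's name, or "anonymous" when no identity is present.
        return new OkObjectResult(principal.Identity?.Name ?? "anonymous");
    }
}
```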
-# [C# Script](#tab/csharp-script)
-
-```csharp
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using System.Security.Claims;
-
-public static IActionResult Run(HttpRequest req, ILogger log)
-{
- ClaimsPrincipal identities = req.HttpContext.User;
- // ...
- return new OkObjectResult();
-}
-```
-
-Alternatively, the ClaimsPrincipal can simply be included as an additional parameter in the function signature:
-
-```csharp
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using System.Security.Claims;
-using Newtonsoft.Json.Linq;
-
-public static void Run(JObject input, ClaimsPrincipal principal, ILogger log)
-{
- // ...
- return;
-}
-```
- ::: zone-end
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Functions execute in the same process as the Functions host. To learn more, see
Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-# [C# script](#tab/csharp-script)
-
-Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
- The functionality of the extension varies depending on the extension version:
Add the extension to your project by installing the [NuGet package](https://www.
Functions 1.x doesn't support running in an isolated worker process.
-# [Functions v2.x+](#tab/functionsv2/csharp-script)
-
-This version of the extension should already be available to your function app with [extension bundle], version 2.x.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
- ::: zone-end
azure-functions Functions Bindings Rabbitmq Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-sendgrid.md
You can add the extension to your project by explicitly installing the [NuGet pa
## Example ::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
The following example shows a [C# function](dotnet-isolated-process-guide.md) th
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/ServiceBus/ServiceBusFunction.cs" range="10-25":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a Service Bus output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a timer trigger to send a queue message every 15 seconds.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "schedule": "0/15 * * * * *",
- "name": "myTimer",
- "runsOnStartup": true,
- "type": "timerTrigger",
- "direction": "in"
- },
- {
- "name": "outputSbQueue",
- "type": "serviceBus",
- "queueName": "testqueue",
- "connection": "MyServiceBusConnection",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-Here's C# script code that creates a single message:
-
-```cs
-public static void Run(TimerInfo myTimer, ILogger log, out string outputSbQueue)
-{
- string message = $"Service Bus queue message created at: {DateTime.Now}";
- log.LogInformation(message);
- outputSbQueue = message;
-}
-```
-
-Here's C# script code that creates multiple messages:
-
-```cs
-public static async Task Run(TimerInfo myTimer, ILogger log, IAsyncCollector<string> outputSbQueue)
-{
- string message = $"Service Bus queue messages created at: {DateTime.Now}";
- log.LogInformation(message);
- await outputSbQueue.AddAsync("1 " + message);
- await outputSbQueue.AddAsync("2 " + message);
-}
-```
::: zone-end
def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#service-bus-output).
# [In-process](#tab/in-process)
The following table explains the properties you can set using the attribute:
|**QueueOrTopicName**|Name of the topic or queue to send messages to. Use `EntityType` to set the destination type.| |**Connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
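As a hedged sketch, the following in-process function applies these properties through a return-value binding; the queue name and connection setting mirror the example above:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TimerToServiceBusQueue
{
    [FunctionName("TimerToServiceBusQueue")]
    [return: ServiceBus("testqueue", Connection = "MyServiceBusConnection")]
    public static string Run([TimerTrigger("0/15 * * * * *")] TimerInfo myTimer, ILogger log)
    {
        string message = $"Service Bus queue message created at: {DateTime.Now}";
        log.LogInformation(message);
        // The returned string is sent to the queue by the output binding.
        return message;
    }
}
```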
-# [C# script](#tab/csharp-script)
-
-C# script uses a *function.json* file for configuration instead of attributes. The following table explains the binding configuration properties that you set in the *function.json* file.
-
-|function.json property | Description|
-|||-|
-|**type** |Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. |
-|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic.
-|**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
-|**connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
-|**accessRights** (v1 only)|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
- ::: zone-end
Use the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmes
# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-Messaging-specific types are not yet supported.
+Earlier versions of this extension in the isolated worker process don't support binding to messaging-specific types. Additional options are available in **Extension 5.x and higher**.
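A minimal isolated-worker sketch that sends a string message, which these versions do support; the queue and connection names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class IsolatedServiceBusOutput
{
    [Function("IsolatedServiceBusOutput")]
    [ServiceBusOutput("outputSbQueue", Connection = "MyServiceBusConnection")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        // The returned string is written to the queue by the output binding.
        return $"Message created at: {System.DateTime.UtcNow}";
    }
}
```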
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Messaging-specific types are not yet supported.
-
-# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
-
-Use the [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage) type when sending messages with metadata. Parameters are defined as `out` parameters. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
-
-When the parameter value is null when the function exits, Functions doesn't create a message.
-
-# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
-
-Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type when sending messages with metadata. Parameters are defined as `out` parameters. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
-
-When the parameter value is null when the function exits, Functions doesn't create a message.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-Use the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) type when sending messages with metadata. Parameters are defined as `out` parameters. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
-
-When the parameter value is null when the function exits, Functions doesn't create a message.
+Functions version 1.x doesn't support the isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x].
::: zone-end
For a complete example, see [the examples section](#example).
## Next steps - [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md)
+[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
The following example shows a [C# function](dotnet-isolated-process-guide.md) th
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/ServiceBus/ServiceBusFunction.cs" range="10-25":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a Service Bus trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads [message metadata](#message-metadata) and logs a Service Bus queue message.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
-"bindings": [
- {
- "queueName": "testqueue",
- "connection": "MyServiceBusConnection",
- "name": "myQueueItem",
- "type": "serviceBusTrigger",
- "direction": "in"
- }
-],
-"disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System;
-
-public static void Run(string myQueueItem,
- Int32 deliveryCount,
- DateTime enqueuedTimeUtc,
- string messageId,
- TraceWriter log)
-{
- log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
-
- log.Info($"EnqueuedTimeUtc={enqueuedTimeUtc}");
- log.Info($"DeliveryCount={deliveryCount}");
- log.Info($"MessageId={messageId}");
-}
-```
::: zone-end
def main(msg: azf.ServiceBusMessage) -> str:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#service-bus-trigger).
# [In-process](#tab/in-process)
The following table explains the properties you can set using this trigger attri
|**IsBatched**| Messages are delivered in batches. Requires an array or collection type. | |**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
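For illustration, a minimal in-process trigger sketch using these properties; the queue and connection names mirror the example above:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ServiceBusQueueLogger
{
    [FunctionName("ServiceBusQueueLogger")]
    public static void Run(
        [ServiceBusTrigger("testqueue", Connection = "MyServiceBusConnection")] string myQueueItem,
        ILogger log)
    {
        log.LogInformation($"Service Bus queue trigger processed message: {myQueueItem}");
    }
}
```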
-# [C# script](#tab/csharp-script)
-
-C# script uses a *function.json* file for configuration instead of attributes. The following table explains the binding configuration properties that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `serviceBusTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | The name of the variable that represents the queue or topic message in function code. |
-|**queueName**| Name of the queue to monitor. Set only if monitoring a queue, not for a topic.
-|**topicName**| Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
-|**subscriptionName**| Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
-|**connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
-|**accessRights**| Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
-|**isSessionsEnabled**| `true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
-|**autoComplete**| `true` when the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>Setting to `false` is only supported in C#.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br/><br/>This property is available only in Azure Functions 2.x and higher. |
- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ::: zone-end
In [C# class libraries](functions-dotnet-class-library.md), the attribute's cons
# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-Messaging-specific types are not yet supported.
+Earlier versions of this extension in the isolated worker process don't support binding to messaging-specific types. Additional options are available in **Extension 5.x and higher**.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Messaging-specific types are not yet supported.
-
-# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
-
-Use the [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) type to receive message metadata from Service Bus Queues and Subscriptions. To learn more, see [Messages, payloads, and serialization](../service-bus-messaging/service-bus-messages-payloads.md).
-
-# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
-
-Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type to receive messages with metadata. To learn more, see [Messages, payloads, and serialization](../service-bus-messaging/service-bus-messages-payloads.md).
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-The following parameter types are available for the queue or topic message:
-
-* [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method.
-* [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) - Used to receive and acknowledge messages from the message container, which is required when `autoComplete` is set to `false`.
+Functions version 1.x doesn't support the isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x].
These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.serv
# [Extension 5.x and higher](#tab/extensionv5/isolated-process)
-Messaging-specific types are not yet supported.
-
-# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-
-Messaging-specific types are not yet supported.
-
-# [Functions 1.x](#tab/functionsv1/isolated-process)
-
-Messaging-specific types are not yet supported.
-
-# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
- These properties are members of the [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) class. |Property|Type|Description|
These properties are members of the [ServiceBusReceivedMessage](/dotnet/api/azur
|`Subject`|`string`|The application-specific label which can be used in place of the `Label` metadata property.| |`To`|`string`|The send to address.|
-# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
-These properties are members of the [Message](/dotnet/api/microsoft.azure.servicebus.message) class.
-
-|Property|Type|Description|
-|--|-|--|
-|`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
-|`CorrelationId`|`string`|The correlation ID.|
-|`DeliveryCount`|`Int32`|The number of deliveries.|
-|`ScheduledEnqueueTimeUtc`|`DateTime`|The scheduled enqueued time in UTC.|
-|`ExpiresAtUtc`|`DateTime`|The expiration time in UTC.|
-|`Label`|`string`|The application-specific label.|
-|`MessageId`|`string`|A user-defined value that Service Bus can use to identify duplicate messages, if enabled.|
-|`ReplyTo`|`string`|The reply to queue address.|
-|`To`|`string`|The send to address.|
-|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender. |
+# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-# [Functions 1.x](#tab/functionsv1/csharp-script)
+Earlier versions of this extension in the isolated worker process don't support binding to messaging-specific types. Additional options are available in **Extension 5.x and higher**.
-These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) and [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) classes.
+# [Functions 1.x](#tab/functionsv1/isolated-process)
-|Property|Type|Description|
-|--|-|--|
-|`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
-|`CorrelationId`|`string`|The correlation ID.|
-|`DeadLetterSource`|`string`|The dead letter source.|
-|`DeliveryCount`|`Int32`|The number of deliveries.|
-|`EnqueuedTimeUtc`|`DateTime`|The enqueued time in UTC.|
-|`ExpiresAtUtc`|`DateTime`|The expiration time in UTC.|
-|`Label`|`string`|The application-specific label.|
-|`MessageId`|`string`|A user-defined value that Service Bus can use to identify duplicate messages, if enabled.|
-|`MessageReceiver`|`MessageReceiver`|Service Bus message receiver. Can be used to abandon, complete, or deadletter the message.|
-|`MessageSession`|`MessageSession`|A message receiver specifically for session-enabled queues and topics.|
-|`ReplyTo`|`string`|The reply to queue address.|
-|`SequenceNumber`|`long`|The unique number assigned to a message by the Service Bus.|
-|`To`|`string`|The send to address.|
-|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender. |
+Functions version 1.x doesn't support the isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x].
These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.serv
[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
+[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md
azure-functions Functions Bindings Signalr Service Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" ### Broadcast to all clients
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
The trigger input type is declared as either `InvocationContext` or a custom typ
### InvocationContext
-`InvocationContext` contains all the content in the message send from aa SignalR service, which includes the following properties:
+`InvocationContext` contains all the content in the message sent from a SignalR service, which includes the following properties:
|Property | Description| |||
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
The following example is a [C# function](dotnet-isolated-process-guide.md) that
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="9-26":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows blob input and output bindings in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the bindings. The function makes a copy of a text blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
-
-In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "myInputBlob",
- "type": "blob",
- "path": "samples-workitems/{queueTrigger}",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "in"
- },
- {
- "name": "myOutputBlob",
- "type": "blob",
- "path": "samples-workitems/{queueTrigger}-Copy",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
-public static void Run(string myQueueItem, string myInputBlob, out string myOutputBlob, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- myOutputBlob = myInputBlob;
-}
-```
::: zone-end
def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#blob-input).
# [In-process](#tab/in-process)
isolated worker process defines an input binding by using a `BlobInputAttribute`
|**BlobPath** | The path to the blob.| |**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
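A hedged in-process sketch of the blob-copy pattern from the example above, expressed with attributes rather than function.json:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobCopy
{
    [FunctionName("BlobCopy")]
    public static void Run(
        [QueueTrigger("myqueue-items", Connection = "MyStorageConnectionAppSetting")] string myQueueItem,
        [Blob("samples-workitems/{queueTrigger}", FileAccess.Read, Connection = "MyStorageConnectionAppSetting")] Stream myInputBlob,
        [Blob("samples-workitems/{queueTrigger}-Copy", FileAccess.Write, Connection = "MyStorageConnectionAppSetting")] Stream myOutputBlob,
        ILogger log)
    {
        log.LogInformation($"Copying blob named by queue message: {myQueueItem}");
        // Stream the input blob directly into the output blob.
        myInputBlob.CopyTo(myOutputBlob);
    }
}
```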
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `blob`. |
-|**direction** | Must be set to `in`. Exceptions are noted in the [usage](#usage) section. |
-|**name** | The name of the variable that represents the blob in function code.|
-|**path** | The path to the blob. |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
-|**dataType**| For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding
[!INCLUDE [functions-bindings-storage-blob-input-dotnet-isolated-types](../../includes/functions-bindings-storage-blob-input-dotnet-isolated-types.md)]
-# [C# Script](#tab/csharp-script)
-
-See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage).
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
The following example is a [C# function](dotnet-isolated-process-guide.md) that
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="4-26":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows blob input and output bindings in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the bindings. The function makes a copy of a text blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
-
-In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "myInputBlob",
- "type": "blob",
- "path": "samples-workitems/{queueTrigger}",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "in"
- },
- {
- "name": "myOutputBlob",
- "type": "blob",
- "path": "samples-workitems/{queueTrigger}-Copy",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
-public static void Run(string myQueueItem, string myInputBlob, out string myOutputBlob, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- myOutputBlob = myInputBlob;
-}
-```
- ::: zone-end
def main(queuemsg: func.QueueMessage, inputblob: bytes, outputblob: func.Out[byt
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#blob-output).
# [In-process](#tab/in-process)
The `BlobOutputAttribute` constructor takes the following parameters:
|**BlobPath** | The path to the blob.| |**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
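An equivalent hedged sketch for the isolated worker model, using `BlobInput` and `BlobOutput`; the paths and connection names mirror the example above:

```csharp
using Microsoft.Azure.Functions.Worker;

public static class IsolatedBlobCopy
{
    [Function("IsolatedBlobCopy")]
    [BlobOutput("samples-workitems/{queueTrigger}-Copy", Connection = "MyStorageConnectionAppSetting")]
    public static string Run(
        [QueueTrigger("myqueue-items", Connection = "MyStorageConnectionAppSetting")] string myQueueItem,
        [BlobInput("samples-workitems/{queueTrigger}", Connection = "MyStorageConnectionAppSetting")] string myInputBlob)
    {
        // Returning the input content writes it to the output blob path.
        return myInputBlob;
    }
}
```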
-# [C# script](#tab/csharp-script)
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `blob`. |
-|**direction** | Must be set to `in`. Exceptions are noted in the [usage](#usage) section. |
-|**name** | The name of the variable that represents the blob in function code.|
-|**path** | The path to the blob. |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
-|**dataType**| For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding
[!INCLUDE [functions-bindings-storage-blob-output-dotnet-isolated-types](../../includes/functions-bindings-storage-blob-output-dotnet-isolated-types.md)]
-# [C# Script](#tab/csharp-script)
-
-See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage).
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
The following example is a [C# function](dotnet-isolated-process-guide.md) that
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="9-25":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a blob trigger binding in a *function.json* file and code that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "disabled": false,
- "bindings": [
- {
- "name": "myBlob",
- "type": "blobTrigger",
- "direction": "in",
- "path": "samples-workitems/{name}",
- "connection":"MyStorageAccountAppSetting"
- }
- ]
-}
-```
-
-The string `{name}` in the blob trigger path `samples-workitems/{name}` creates a [binding expression](./functions-bindings-expressions-patterns.md) that you can use in function code to access the file name of the triggering blob. For more information, see [Blob name patterns](#blob-name-patterns) later in this article.
-
-For more information about *function.json* file properties, see the [Configuration](#configuration) section explains these properties.
-
-Here's C# script code that binds to a `Stream`:
-
-```cs
-public static void Run(Stream myBlob, string name, ILogger log)
-{
- log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
-}
-```
-
-Here's C# script code that binds to a `CloudBlockBlob`:
-
-```cs
-#r "Microsoft.WindowsAzure.Storage"
-
-using Microsoft.WindowsAzure.Storage.Blob;
-
-public static void Run(CloudBlockBlob myBlob, string name, ILogger log)
-{
- log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name}\nURI:{myBlob.StorageUri}");
-}
-```
- ::: zone-end
def main(myblob: func.InputStream):
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute) attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute) attribute to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#blob-trigger).
The attribute's constructor takes the following parameters:
Here's an `BlobTrigger` attribute in a method signature:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="11-16":::
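For illustration, a minimal in-process sketch of the trigger; the container path and connection setting mirror the example above:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobTriggerLogger
{
    [FunctionName("BlobTriggerLogger")]
    public static void Run(
        [BlobTrigger("samples-workitems/{name}", Connection = "MyStorageAccountAppSetting")] Stream myBlob,
        string name,  // bound from the {name} expression in the trigger path
        ILogger log)
    {
        log.LogInformation($"Blob trigger processed blob. Name: {name}, Size: {myBlob.Length} bytes");
    }
}
```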
-# [C# script](#tab/csharp-script)
-
-C# script uses a *function.json* file for configuration instead of attributes.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `blobTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the [usage](#usage) section. |
-|**name** | The name of the variable that represents the blob in function code. |
-|**path** | The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#blob-name-patterns). |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding
[!INCLUDE [functions-bindings-storage-blob-trigger-dotnet-isolated-types](../../includes/functions-bindings-storage-blob-trigger-dotnet-isolated-types.md)]
-# [C# Script](#tab/csharp-script)
-
-See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage).
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
public static class QueueFunctions
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_output_binding" :::
-# [C# Script](#tab/csharp-script)
-
-The following example shows an HTTP trigger binding in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the binding. The function creates a queue item with a **CustomQueueMessage** object payload for each HTTP request received.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "type": "httpTrigger",
- "direction": "in",
- "authLevel": "function",
- "name": "input"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "$return"
- },
- {
- "type": "queue",
- "direction": "out",
- "name": "$return",
- "queueName": "outqueue",
- "connection": "MyStorageConnectionAppSetting"
- }
- ]
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's C# script code that creates a single queue message:
-
-```cs
-public class CustomQueueMessage
-{
- public string PersonName { get; set; }
- public string Title { get; set; }
-}
-
-public static CustomQueueMessage Run(CustomQueueMessage input, ILogger log)
-{
- return input;
-}
-```
-
-You can send multiple messages at once by using an `ICollector` or `IAsyncCollector` parameter. Here's C# script code that sends multiple messages, one with the HTTP request data and one with hard-coded values:
-
-```cs
-public static void Run(
- CustomQueueMessage input,
- ICollector<CustomQueueMessage> myQueueItems,
- ILogger log)
-{
- myQueueItems.Add(input);
- myQueueItems.Add(new CustomQueueMessage { PersonName = "You", Title = "None" });
-}
-```
- ::: zone-end
def main(req: func.HttpRequest, msg: func.Out[typing.List[str]]) -> func.HttpRes
::: zone pivot="programming-language-csharp" ## Attributes
-The attribute that defines an output binding in C# libraries depends on the mode in which the C# class library runs. C# script instead uses a function.json configuration file.
+The attribute that defines an output binding in C# libraries depends on the mode in which the C# class library runs.
# [In-process](#tab/in-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [QueueAttribute](/dotnet/api/microsoft.azure.webjobs.queueattribute).
+In [C# class libraries](functions-dotnet-class-library.md), use the [QueueAttribute](/dotnet/api/microsoft.azure.webjobs.queueattribute). C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#queue-output).
The attribute applies to an `out` parameter or the return value of the function. The attribute's constructor takes the name of the queue, as shown in the following example:
When running in an isolated worker process, you use the [QueueOutputAttribute](h
Only returned variables are supported when running in an isolated worker process. Output parameters can't be used.
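For example, a minimal isolated-worker sketch that writes its return value to a queue; the queue and connection names are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class IsolatedQueueOutput
{
    [Function("IsolatedQueueOutput")]
    [QueueOutput("outqueue", Connection = "MyStorageConnectionAppSetting")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        // The returned value is written to the queue; out parameters aren't supported here.
        return "queued message";
    }
}
```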
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties that you set in the *function.json* file and the `Queue` attribute.
-
-|function.json property | Description|
-||-|
-|**type** |Must be set to `queue`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | The name of the variable that represents the queue in function code. Set to `$return` to reference the function return value.|
-|**queueName** | The name of the queue. |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).|
+ ::: zone-end ::: zone pivot="programming-language-python"
An in-process class library is a compiled C# function that runs in the same process a
An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime.
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
- Choose a version to see usage details for the mode and version.
You can write multiple messages to the queue by using one of the following types
Isolated worker process currently only supports binding to string parameters.
-# [Extension 5.x+](#tab/extensionv5/csharp-script)
-
-Write a single queue message by using a method parameter such as `out T paramName`. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types:
-
-* An object serializable as JSON
-* `string`
-* `byte[]`
-* [QueueMessage]
-
-For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
-
-You can write multiple messages to the queue by using one of the following types:
-
-* `ICollector<T>` or `IAsyncCollector<T>`
-* [QueueClient]
-
-For examples using [QueueMessage] and [QueueClient], see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
-
-# [Extension 2.x+](#tab/extensionv2/csharp-script)
-
-Write a single queue message by using a method parameter such as `out T paramName`. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types:
-
-* An object serializable as JSON
-* `string`
-* `byte[]`
-* [CloudQueueMessage]
-
-If you try to bind to [CloudQueueMessage] and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md#azure-storage-sdk-version-in-functions-1x).
-
-You can write multiple messages to the queue by using one of the following types:
-
-* `ICollector<T>` or `IAsyncCollector<T>`
-* [CloudQueue](/dotnet/api/microsoft.azure.storage.queue.cloudqueue)
- ::: zone-end
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
The following example shows a [C# function](dotnet-isolated-process-guide.md) th
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_output_binding":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a queue trigger binding in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the binding. The function polls the `myqueue-items` queue and writes a log each time a queue item is processed.
-
-Here's the *function.json* file:
-
-```json
-{
- "disabled": false,
- "bindings": [
- {
- "type": "queueTrigger",
- "direction": "in",
- "name": "myQueueItem",
- "queueName": "myqueue-items",
- "connection":"MyStorageConnectionAppSetting"
- }
- ]
-}
-```
-
-The [section below](#attributes) explains these properties.
-
-Here's the C# script code:
-
-```csharp
-#r "Microsoft.WindowsAzure.Storage"
-
-using Microsoft.Extensions.Logging;
-using Microsoft.WindowsAzure.Storage.Queue;
-using System;
-
-public static void Run(CloudQueueMessage myQueueItem,
- DateTimeOffset expirationTime,
- DateTimeOffset insertionTime,
- DateTimeOffset nextVisibleTime,
- string queueTrigger,
- string id,
- string popReceipt,
- int dequeueCount,
- ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem.AsString}\n" +
- $"queueTrigger={queueTrigger}\n" +
- $"expirationTime={expirationTime}\n" +
- $"insertionTime={insertionTime}\n" +
- $"nextVisibleTime={nextVisibleTime}\n" +
- $"id={id}\n" +
- $"popReceipt={popReceipt}\n" +
- $"dequeueCount={dequeueCount}");
-}
-```
-
-The [usage](#usage) section explains `myQueueItem`, which is named by the `name` property in function.json. The [message metadata section](#message-metadata) explains all of the other variables shown.
- ::: zone-end
def main(msg: func.QueueMessage):
::: zone pivot="programming-language-csharp"
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#queue-trigger).
# [In-process](#tab/in-process)
In [C# class libraries](dotnet-isolated-process-guide.md), the attribute's const
This example also demonstrates setting the [connection string setting](#connections) in the attribute itself.
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** |Must be set to `queueTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction**| In the *function.json* file only. Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | The name of the variable that contains the queue item payload in the function code. |
-|**queueName** | The name of the queue to poll. |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).|
- ::: zone-end
An in-process class library is a compiled C# function that runs in the same process a
# [Isolated process](#tab/isolated-process)
An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
-
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
When binding to an object, the Functions runtime tries to deserialize the JSON p
# [Extension 2.x+](#tab/extensionv2/isolated-process)
-Isolated worker process currently only supports binding to string parameters.
-
-# [Extension 5.x+](#tab/extensionv5/csharp-script)
-
-Access the message data by using a method parameter such as `string paramName`. The `paramName` is the value specified in the *function.json* file. You can bind to any of the following types:
-
-* Plain-old CLR object (POCO)
-* `string`
-* `byte[]`
-* [QueueMessage]
-
-When binding to an object, the Functions runtime tries to deserialize the JSON payload into an instance of an arbitrary class defined in your code. For examples using [QueueMessage], see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
--
-# [Extension 2.x+](#tab/extensionv2/csharp-script)
-
-Access the message data by using a method parameter such as `string paramName`. The `paramName` is the value specified in the *function.json* file. You can bind to any of the following types:
-
-* Plain-old CLR object (POCO)
-* `string`
-* `byte[]`
-* [CloudQueueMessage]
-
-When binding to an object, the Functions runtime tries to deserialize the JSON payload into an instance of an arbitrary class defined in your code. If you try to bind to [CloudQueueMessage] and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md).
+Earlier versions of this extension in the isolated worker process only support binding to strings. Additional options are available in **Extension 5.x+**.
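+
+As a minimal sketch (the queue name, connection setting, and function name here are illustrative, not from the article), binding the message payload to `string` in the isolated worker model might look like this:
+
+```csharp
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.Logging;
+
+public static class QueueTriggerString
+{
+    // The queue message body is delivered to the function as a plain string.
+    [Function("QueueTriggerString")]
+    public static void Run(
+        [QueueTrigger("myqueue-items", Connection = "MyStorageConnectionAppSetting")] string message,
+        FunctionContext context)
+    {
+        var logger = context.GetLogger("QueueTriggerString");
+        logger.LogInformation($"Queue message: {message}");
+    }
+}
+```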
::: zone-end
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
An [in-process class library](functions-dotnet-class-library.md) is a compiled C
An [isolated worker process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime.
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
- Choose a version to see examples for the mode and version.
The `Filter` and `Take` properties are used to limit the number of entities retu
Functions version 1.x doesn't support isolated worker process.
-# [Azure Tables extension](#tab/table-api/csharp-script)
-
-The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses a queue trigger to read a single table row.
-
-The *function.json* file specifies a `partitionKey` and a `rowKey`. The `rowKey` value `{queueTrigger}` indicates that the row key comes from the queue message string.
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "personEntity",
- "type": "table",
- "tableName": "Person",
- "partitionKey": "Test",
- "rowKey": "{queueTrigger}",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
-#r "Microsoft.WindowsAzure.Storage"
-using Microsoft.Extensions.Logging;
-using Azure.Data.Tables;
-
-public static void Run(string myQueueItem, Person personEntity, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- log.LogInformation($"Name in Person entity: {personEntity.Name}");
-}
-
-public class Person : ITableEntity
-{
- public string Name { get; set; }
-
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public DateTimeOffset? Timestamp { get; set; }
- public ETag ETag { get; set; }
-}
-```
-
-# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
-
-The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses a queue trigger to read a single table row.
-
-The *function.json* file specifies a `partitionKey` and a `rowKey`. The `rowKey` value `{queueTrigger}` indicates that the row key comes from the queue message string.
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "personEntity",
- "type": "table",
- "tableName": "Person",
- "partitionKey": "Test",
- "rowKey": "{queueTrigger}",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
-public static void Run(string myQueueItem, Person personEntity, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- log.LogInformation($"Name in Person entity: {personEntity.Name}");
-}
-
-public class Person
-{
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public string Name { get; set; }
-}
-```
-
-To read more than one row, use a `CloudTable` method parameter to read the table by using the Azure Storage SDK. Here's an example of a function that queries an Azure Functions log table:
-
-```json
-{
- "bindings": [
- {
- "name": "myTimer",
- "type": "timerTrigger",
- "direction": "in",
- "schedule": "0 */1 * * * *"
- },
- {
- "name": "cloudTable",
- "type": "table",
- "connection": "AzureWebJobsStorage",
- "tableName": "AzureWebJobsHostLogscommon",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-```csharp
-#r "Microsoft.WindowsAzure.Storage"
-using Microsoft.WindowsAzure.Storage.Table;
-using System;
-using System.Threading.Tasks;
-using Microsoft.Extensions.Logging;
-
-public static async Task Run(TimerInfo myTimer, CloudTable cloudTable, ILogger log)
-{
- log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
-
- TableQuery<LogEntity> rangeQuery = new TableQuery<LogEntity>().Where(
- TableQuery.CombineFilters(
- TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal,
- "FD2"),
- TableOperators.And,
- TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThan,
- "a")));
-
- // Execute the query and loop through the results
- foreach (LogEntity entity in
- await cloudTable.ExecuteQuerySegmentedAsync(rangeQuery, null))
- {
- log.LogInformation(
- $"{entity.PartitionKey}\t{entity.RowKey}\t{entity.Timestamp}\t{entity.OriginalName}");
- }
-}
-
-public class LogEntity : TableEntity
-{
- public string OriginalName { get; set; }
-}
-```
-
-For more information about how to use CloudTable, see [Get started with Azure Table storage](../cosmos-db/tutorial-develop-table-dotnet.md).
-
-If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x).
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses a queue trigger to read a single table row.
-
-The *function.json* file specifies a `partitionKey` and a `rowKey`. The `rowKey` value `{queueTrigger}` indicates that the row key comes from the queue message string.
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "personEntity",
- "type": "table",
- "tableName": "Person",
- "partitionKey": "Test",
- "rowKey": "{queueTrigger}",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
-public static void Run(string myQueueItem, Person personEntity, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- log.LogInformation($"Name in Person entity: {personEntity.Name}");
-}
-
-public class Person
-{
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public string Name { get; set; }
-}
-```
-
-The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses `IQueryable<T>` to read entities for a partition key that is specified in a queue message. `IQueryable<T>` is only supported by version 1.x of the Functions runtime.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "tableBinding",
- "type": "table",
- "connection": "MyStorageConnectionAppSetting",
- "tableName": "Person",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-The C# script code adds a reference to the Azure Storage SDK so that the entity type can derive from `TableEntity`:
-
-```csharp
-#r "Microsoft.WindowsAzure.Storage"
-using Microsoft.WindowsAzure.Storage.Table;
-using Microsoft.Extensions.Logging;
-
-public static void Run(string myQueueItem, IQueryable<Person> tableBinding, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- foreach (Person person in tableBinding.Where(p => p.PartitionKey == myQueueItem).ToList())
- {
- log.LogInformation($"Name: {person.Name}");
- }
-}
-
-public class Person : TableEntity
-{
- public string Name { get; set; }
-}
-```
- ::: zone-end
With this simple binding, you can't programmatically handle a case in which no r
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#table-input).
# [In-process](#tab/in-process)
In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttrib
|**Filter** | Optional. An OData filter expression for entities to read into an [`IEnumerable<T>`]. Can't be used with `RowKey`. |
|**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `table`. This property is set automatically when you create the binding in the Azure portal.|
-|**direction** | Must be set to `in`. This property is set automatically when you create the binding in the Azure portal. |
-|**name** | The name of the variable that represents the table or entity in function code. |
-|**tableName** | The name of the table.|
-|**partitionKey** | Optional. The partition key of the table entity to read. |
-|**rowKey** |Optional. The row key of the table entity to read. Can't be used with `take` or `filter`.|
-|**take** | Optional. The maximum number of entities to return. Can't be used with `rowKey`. |
-|**filter** | Optional. An OData filter expression for the entities to return from the table. Can't be used with `rowKey`.|
-|**connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
- ::: zone-end
An in-process class library is a compiled C# function that runs in the same proc
# [Isolated process](#tab/isolated-process)
-An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
-
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
To return a specific entity by key, use a plain-old CLR object (POCO). The speci
Functions version 1.x doesn't support isolated worker process.
-# [Azure Tables extension](#tab/table-api/csharp-script)
-
-To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
-
-To execute queries that return multiple entities, bind to a [TableClient] object. You can then use this object to create and execute queries against the bound table. Note that [TableClient] and related APIs belong to the [Azure.Data.Tables](/dotnet/api/azure.data.tables) namespace.
-
-# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
-
-To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
-
-To execute queries that return multiple entities, bind to a [CloudTable] object. You can then use this object to create and execute queries against the bound table. Note that [CloudTable] and related APIs belong to the [Microsoft.Azure.Cosmos.Table](/dotnet/api/microsoft.azure.cosmos.table) namespace.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-To return a specific entity by key, use a binding parameter that derives from [TableEntity]. The specific `TableName`, `PartitionKey`, and `RowKey` are used to try and get a specific entity from the table.
-
-To execute queries that return multiple entities, bind to an [`IQueryable<T>`] of a type that inherits from [TableEntity].
- ::: zone-end
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
public static MyTableData Run(
}
```
-# [C# Script](#tab/csharp-script)
-
-The following example shows a table output binding in a *function.json* file and [C# script](functions-reference-csharp.md) code that uses the binding. The function writes multiple table entities.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "name": "input",
- "type": "manualTrigger",
- "direction": "in"
- },
- {
- "tableName": "Person",
- "connection": "MyStorageConnectionAppSetting",
- "name": "tableBinding",
- "type": "table",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-The [attributes](#attributes) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
-public static void Run(string input, ICollector<Person> tableBinding, ILogger log)
-{
- for (int i = 1; i < 10; i++)
- {
- log.LogInformation($"Adding Person entity {i}");
- tableBinding.Add(
- new Person() {
- PartitionKey = "Test",
- RowKey = i.ToString(),
- Name = "Name" + i.ToString() }
- );
- }
-
-}
-
-public class Person
-{
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public string Name { get; set; }
-}
-
-```
- ::: zone-end
def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
::: zone pivot="programming-language-csharp"
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#table-output).
# [In-process](#tab/in-process)
In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttrib
|**PartitionKey** | The partition key of the table entity to write. |
|**RowKey** | The row key of the table entity to write. |
|**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
-
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-|||
-|**type** |Must be set to `table`. This property is set automatically when you create the binding in the Azure portal.|
-|**direction** | Must be set to `out`. This property is set automatically when you create the binding in the Azure portal. |
-|**name** | The variable name used in function code that represents the table or entity. Set to `$return` to reference the function return value.|
-|**tableName** |The name of the table to which to write.|
-|**partitionKey** |The partition key of the table entity to write. |
-|**rowKey** | The row key of the table entity to write. |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
An in-process class library is a compiled C# function that runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
-
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
Return a plain-old CLR object (POCO) with properties that can be mapped to the t
Functions version 1.x doesn't support isolated worker process.
-# [Azure Tables extension](#tab/table-api/csharp-script)
-
-The following types are supported for `out` parameters and return types:
-
-- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity`.
-- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity`.
-
-You can also bind to `TableClient` [from the Azure SDK](/dotnet/api/azure.data.tables.tableclient). You can then use that object to write to the table.
-
-# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
-
-The following types are supported for `out` parameters and return types:
-
-- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`.
-- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`.
-
-You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-The following types are supported for `out` parameters and return types:
-
-- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`.
-- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`.
-
-You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table.
- ::: zone-end
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Timer/TimerFunction.cs" range="11-17":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a timer trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence. The [`TimerInfo`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerInfo.cs) object is passed into the function.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "schedule": "0 */5 * * * *",
- "name": "myTimer",
- "type": "timerTrigger",
- "direction": "in"
-}
-```
-
-Here's the C# script code:
-
-```csharp
-public static void Run(TimerInfo myTimer, ILogger log)
-{
- if (myTimer.IsPastDue)
- {
- log.LogInformation("Timer is running late!");
- }
- log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}" );
-}
-```
 ::: zone-end
::: zone pivot="programming-language-java"
def main(mytimer: func.TimerRequest) -> None:
::: zone pivot="programming-language-csharp"
## Attributes
-[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [isolated worker process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function.
-
-C# script instead uses a function.json configuration file.
+[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [isolated worker process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#timer-trigger).
# [In-process](#tab/in-process)
C# script instead uses a function.json configuration file.
|**RunOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **RunOnStartup** should rarely if ever be set to `true`, especially in production. |
|**UseMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | The name of the variable that represents the timer object in function code. |
-|**schedule**| A [CRON expression](#ncrontab-expressions) or a [TimeSpan](#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
-|**runOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity. when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **runOnStartup** should rarely if ever be set to `true`, especially in production. |
-|**useMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
- ::: zone-end
azure-functions Functions Bindings Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md
You can add the extension to your project by explicitly installing the [NuGet pa
Unless otherwise noted, these examples are specific to version 2.x and later versions of the Functions runtime.
::: zone pivot="programming-language-csharp"
# [In-process](#tab/in-process)
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
The following considerations apply when using a warmup trigger:
<!--Optional intro text goes here, followed by the C# modes include.-->
# [In-process](#tab/in-process)
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
The following table lists the .NET attributes for each binding type and the pack
> | Storage table | [`Microsoft.Azure.WebJobs.TableAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs), [`Microsoft.Azure.WebJobs.StorageAccountAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs) | |
> | Twilio | [`Microsoft.Azure.WebJobs.TwilioSmsAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.Twilio"` |
+## Binding configuration and examples
+
+### Blob trigger
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `blobTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the blob in function code. |
+|**path** | The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](./functions-bindings-storage-blob-trigger.md#blob-name-patterns). |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](./functions-bindings-storage-blob-trigger.md#connections).|
++
+The following example shows a blob trigger binding in a *function.json* file and code that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ "name": "myBlob",
+ "type": "blobTrigger",
+ "direction": "in",
+ "path": "samples-workitems/{name}",
+ "connection":"MyStorageAccountAppSetting"
+ }
+ ]
+}
+```
+
+The string `{name}` in the blob trigger path `samples-workitems/{name}` creates a [binding expression](./functions-bindings-expressions-patterns.md) that you can use in function code to access the file name of the triggering blob. For more information, see [Blob name patterns](./functions-bindings-storage-blob-trigger.md#blob-name-patterns).
+
+Here's C# script code that binds to a `Stream`:
+
+```cs
+public static void Run(Stream myBlob, string name, ILogger log)
+{
+ log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
+}
+```
+
+Here's C# script code that binds to a `CloudBlockBlob`:
+
+```cs
+#r "Microsoft.WindowsAzure.Storage"
+
+using Microsoft.WindowsAzure.Storage.Blob;
+
+public static void Run(CloudBlockBlob myBlob, string name, ILogger log)
+{
+ log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name}\nURI:{myBlob.StorageUri}");
+}
+```
+
+### Blob input
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `blob`. |
+|**direction** | Must be set to `in`. |
+|**name** | The name of the variable that represents the blob in function code.|
+|**path** | The path to the blob. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](./functions-bindings-storage-blob-input.md#connections).|
+
+The following example shows blob input and output bindings in a *function.json* file and C# script code that uses the bindings. The function makes a copy of a text blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
+
+In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
+
+```json
+{
+ "bindings": [
+ {
+ "queueName": "myqueue-items",
+ "connection": "MyStorageConnectionAppSetting",
+ "name": "myQueueItem",
+ "type": "queueTrigger",
+ "direction": "in"
+ },
+ {
+ "name": "myInputBlob",
+ "type": "blob",
+ "path": "samples-workitems/{queueTrigger}",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "in"
+ },
+ {
+ "name": "myOutputBlob",
+ "type": "blob",
+ "path": "samples-workitems/{queueTrigger}-Copy",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+public static void Run(string myQueueItem, string myInputBlob, out string myOutputBlob, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+ myOutputBlob = myInputBlob;
+}
+```
+
+### Blob output
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `blob`. |
+|**direction** | Must be set to `out`. |
+|**name** | The name of the variable that represents the blob in function code.|
+|**path** | The path to the blob. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](./functions-bindings-storage-blob-output.md#connections).|
+
+The following example shows blob input and output bindings in a *function.json* file and C# script code that uses the bindings. The function makes a copy of a text blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
+
+In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
+
+```json
+{
+ "bindings": [
+ {
+ "queueName": "myqueue-items",
+ "connection": "MyStorageConnectionAppSetting",
+ "name": "myQueueItem",
+ "type": "queueTrigger",
+ "direction": "in"
+ },
+ {
+ "name": "myInputBlob",
+ "type": "blob",
+ "path": "samples-workitems/{queueTrigger}",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "in"
+ },
+ {
+ "name": "myOutputBlob",
+ "type": "blob",
+ "path": "samples-workitems/{queueTrigger}-Copy",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+public static void Run(string myQueueItem, string myInputBlob, out string myOutputBlob, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+ myOutputBlob = myInputBlob;
+}
+```
+
+### Queue trigger
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** |Must be set to `queueTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction**| In the *function.json* file only. Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that contains the queue item payload in the function code. |
+|**queueName** | The name of the queue to poll. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](./functions-bindings-storage-queue-trigger.md#connections).|
++
+The following example shows a queue trigger binding in a *function.json* file and C# script code that uses the binding. The function polls the `myqueue-items` queue and writes a log each time a queue item is processed.
+
+Here's the *function.json* file:
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ "type": "queueTrigger",
+ "direction": "in",
+ "name": "myQueueItem",
+ "queueName": "myqueue-items",
+ "connection":"MyStorageConnectionAppSetting"
+ }
+ ]
+}
+```
+
+Here's the C# script code:
+
+```csharp
+#r "Microsoft.WindowsAzure.Storage"
+
+using Microsoft.Extensions.Logging;
+using Microsoft.WindowsAzure.Storage.Queue;
+using System;
+
+public static void Run(CloudQueueMessage myQueueItem,
+ DateTimeOffset expirationTime,
+ DateTimeOffset insertionTime,
+ DateTimeOffset nextVisibleTime,
+ string queueTrigger,
+ string id,
+ string popReceipt,
+ int dequeueCount,
+ ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem.AsString}\n" +
+ $"queueTrigger={queueTrigger}\n" +
+ $"expirationTime={expirationTime}\n" +
+ $"insertionTime={insertionTime}\n" +
+ $"nextVisibleTime={nextVisibleTime}\n" +
+ $"id={id}\n" +
+ $"popReceipt={popReceipt}\n" +
+ $"dequeueCount={dequeueCount}");
+}
+```
+
+### Queue output
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** |Must be set to `queue`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the queue in function code. Set to `$return` to reference the function return value.|
+|**queueName** | The name of the queue. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](./functions-bindings-storage-queue-output.md#connections).|
+
+The following example shows an HTTP trigger binding in a *function.json* file and C# script code that uses the binding. The function creates a queue item with a **CustomQueueMessage** object payload for each HTTP request received.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "httpTrigger",
+ "direction": "in",
+ "authLevel": "function",
+ "name": "input"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ },
+ {
+ "type": "queue",
+ "direction": "out",
+ "name": "$return",
+ "queueName": "outqueue",
+ "connection": "MyStorageConnectionAppSetting"
+ }
+ ]
+}
+```
+
+Here's C# script code that creates a single queue message:
+
+```cs
+public class CustomQueueMessage
+{
+ public string PersonName { get; set; }
+ public string Title { get; set; }
+}
+
+public static CustomQueueMessage Run(CustomQueueMessage input, ILogger log)
+{
+ return input;
+}
+```
+
+You can send multiple messages at once by using an `ICollector` or `IAsyncCollector` parameter. Here's C# script code that sends multiple messages, one with the HTTP request data and one with hard-coded values:
+
+```cs
+public static void Run(
+ CustomQueueMessage input,
+ ICollector<CustomQueueMessage> myQueueItems,
+ ILogger log)
+{
+ myQueueItems.Add(input);
+ myQueueItems.Add(new CustomQueueMessage { PersonName = "You", Title = "None" });
+}
+```
+
+### Table input
+
+This section outlines support for the [Tables API version of the extension](./functions-bindings-storage-table.md?tabs=in-process%2Ctable-api) only.
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `table`. This property is set automatically when you create the binding in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the binding in the Azure portal. |
+|**name** | The name of the variable that represents the table or entity in function code. |
+|**tableName** | The name of the table.|
+|**partitionKey** | Optional. The partition key of the table entity to read. |
+|**rowKey** |Optional. The row key of the table entity to read. Can't be used with `take` or `filter`.|
+|**take** | Optional. The maximum number of entities to return. Can't be used with `rowKey`. |
+|**filter** | Optional. An OData filter expression for the entities to return from the table. Can't be used with `rowKey`.|
+|**connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](./functions-bindings-storage-table-input.md#connections). |
+
+The following example shows a table input binding in a *function.json* file and C# script code that uses the binding. The function uses a queue trigger to read a single table row.
+
+The *function.json* file specifies a `partitionKey` and a `rowKey`. The `rowKey` value `{queueTrigger}` indicates that the row key comes from the queue message string.
+
+```json
+{
+ "bindings": [
+ {
+ "queueName": "myqueue-items",
+ "connection": "MyStorageConnectionAppSetting",
+ "name": "myQueueItem",
+ "type": "queueTrigger",
+ "direction": "in"
+ },
+ {
+ "name": "personEntity",
+ "type": "table",
+ "tableName": "Person",
+ "partitionKey": "Test",
+ "rowKey": "{queueTrigger}",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "in"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```csharp
+#r "Azure.Data.Tables"
+using Microsoft.Extensions.Logging;
+using Azure.Data.Tables;
+
+public static void Run(string myQueueItem, Person personEntity, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+ log.LogInformation($"Name in Person entity: {personEntity.Name}");
+}
+
+public class Person : ITableEntity
+{
+ public string Name { get; set; }
+
+ public string PartitionKey { get; set; }
+ public string RowKey { get; set; }
+ public DateTimeOffset? Timestamp { get; set; }
+ public ETag ETag { get; set; }
+}
+```
+
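+To read multiple entities instead of a single row, omit `rowKey` and bind to a collection. As an illustrative sketch (the `take` and `filter` values here are assumptions, and `Person` is the class from the preceding example), the binding and code might look like this:
+
+```json
+{
+    "name": "people",
+    "type": "table",
+    "tableName": "Person",
+    "take": 50,
+    "filter": "PartitionKey eq 'Test'",
+    "connection": "MyStorageConnectionAppSetting",
+    "direction": "in"
+}
+```
+
+```csharp
+#r "Azure.Data.Tables"
+using Microsoft.Extensions.Logging;
+
+public static void Run(string myQueueItem, IEnumerable<Person> people, ILogger log)
+{
+    // Logs each entity that matches the filter, up to the `take` limit.
+    foreach (var person in people)
+    {
+        log.LogInformation($"Name: {person.Name}");
+    }
+}
+```
+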
+### Table output
+
+This section outlines support for the [Tables API version of the extension](./functions-bindings-storage-table.md?tabs=in-process%2Ctable-api) only.
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+|||
+|**type** |Must be set to `table`. This property is set automatically when you create the binding in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the table or entity. Set to `$return` to reference the function return value.|
+|**tableName** |The name of the table to which to write.|
+|**partitionKey** |The partition key of the table entity to write. |
+|**rowKey** | The row key of the table entity to write. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](./functions-bindings-storage-table-output.md#connections). |
+
+The following example shows a table output binding in a *function.json* file and C# script code that uses the binding. The function writes multiple table entities.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "name": "input",
+ "type": "manualTrigger",
+ "direction": "in"
+ },
+ {
+ "tableName": "Person",
+ "connection": "MyStorageConnectionAppSetting",
+ "name": "tableBinding",
+ "type": "table",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```csharp
+public static void Run(string input, ICollector<Person> tableBinding, ILogger log)
+{
+ for (int i = 1; i < 10; i++)
+ {
+ log.LogInformation($"Adding Person entity {i}");
+ tableBinding.Add(
+ new Person() {
+ PartitionKey = "Test",
+ RowKey = i.ToString(),
+ Name = "Name" + i.ToString() }
+ );
+ }
+
+}
+
+public class Person
+{
+ public string PartitionKey { get; set; }
+ public string RowKey { get; set; }
+ public string Name { get; set; }
+}
+
+```
+
+### Timer trigger
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the timer object in function code. |
+|**schedule**| A [CRON expression](./functions-bindings-timer.md#ncrontab-expressions) or a [TimeSpan](./functions-bindings-timer.md#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
+|**runOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **runOnStartup** should rarely if ever be set to `true`, especially in production. |
+|**useMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
+
+The following example shows a timer trigger binding in a *function.json* file and a C# script function that uses the binding. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence. The [`TimerInfo`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerInfo.cs) object is passed into the function.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "schedule": "0 */5 * * * *",
+ "name": "myTimer",
+ "type": "timerTrigger",
+ "direction": "in"
+}
+```
+
+Here's the C# script code:
+
+```csharp
+public static void Run(TimerInfo myTimer, ILogger log)
+{
+ if (myTimer.IsPastDue)
+ {
+ log.LogInformation("Timer is running late!");
+ }
+ log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}" );
+}
+```
+
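+On an App Service plan, the `schedule` can also be a `TimeSpan` value in *hh:mm:ss* format. As an illustrative sketch, this binding runs the function every five minutes:
+
+```json
+{
+    "schedule": "00:05:00",
+    "name": "myTimer",
+    "type": "timerTrigger",
+    "direction": "in"
+}
+```
+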
+### HTTP trigger
+
+The following table explains the trigger configuration properties that you set in the *function.json* file:
+
+|function.json property | Description|
+|||
+| **type** | Required - must be set to `httpTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the variable name used in function code for the request or request body. |
+| **authLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](./functions-bindings-http-webhook-trigger.md#http-auth). |
+| **methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](./functions-bindings-http-webhook-trigger.md#customize-the-http-endpoint). |
+| **route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](./functions-bindings-http-webhook-trigger.md#customize-the-http-endpoint). |
+| **webHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](./functions-bindings-http-webhook-trigger.md#webhook-type).|
+
+The following example shows a trigger binding in a *function.json* file and a C# script function that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request.
+
+Here's the *function.json* file:
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ }
+ ]
+}
+```
+
+Here's C# script code that binds to `HttpRequest`:
+
+```cs
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string name = req.Query["name"];
+
+ string requestBody = String.Empty;
+ using (StreamReader streamReader = new StreamReader(req.Body))
+ {
+ requestBody = await streamReader.ReadToEndAsync();
+ }
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ name = name ?? data?.name;
+
+ return name != null
+ ? (ActionResult)new OkObjectResult($"Hello, {name}")
+ : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
+}
+```
+
+You can bind to a custom object instead of `HttpRequest`. This object is created from the body of the request and parsed as JSON. Similarly, a type can be passed to the HTTP response output binding and returned as the response body, along with a `200` status code.
+
+```csharp
+using System.Net;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+
+public static IActionResult Run(Person person, ILogger log)
+{
+ return person.Name != null
+ ? (ActionResult)new OkObjectResult($"Hello, {person.Name}")
+ : new BadRequestObjectResult("Please pass an instance of Person.");
+}
+
+public class Person {
+ public string Name {get; set;}
+}
+```
+
+### HTTP output
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|Property |Description |
+|||
+| **type** |Must be set to `http`. |
+| **direction** | Must be set to `out`. |
+| **name** | The variable name used in function code for the response, or `$return` to use the return value. |
+
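+The HTTP output binding is typically paired with an HTTP trigger, as in the preceding trigger example. As a minimal sketch, naming the output `$return` lets the function return the response directly:
+
+```cs
+using Microsoft.AspNetCore.Mvc;
+
+public static IActionResult Run(HttpRequest req, ILogger log)
+{
+    // The returned IActionResult is bound to the `$return` HTTP output.
+    return new OkObjectResult("Hello from the HTTP output binding");
+}
+```
+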
+### Event Hubs trigger
+
+The following table explains the trigger configuration properties that you set in the *function.json* file:
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `eventHubTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the event item in function code. |
+|**eventHubName** | Functions 2.x and higher. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. Can be referenced via [app settings](./functions-bindings-expressions-patterns.md#binding-expressionsapp-settings) `%eventHubName%`. In version 1.x, this property is named `path`. |
+|**consumerGroup** |An optional property that sets the [consumer group](../event-hubs/event-hubs-features.md#event-consumers) used to subscribe to events in the hub. If omitted, the `$Default` consumer group is used. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. See [Connections](./functions-bindings-event-hubs-trigger.md#connections).|
++
+The following example shows an Event Hubs trigger binding in a *function.json* file and a C# script function that uses the binding. The function logs the message body of the Event Hubs trigger.
+
+The following examples show Event Hubs binding data in the *function.json* file for Functions runtime version 2.x and later versions.
+
+```json
+{
+ "type": "eventHubTrigger",
+ "name": "myEventHubMessage",
+ "direction": "in",
+ "eventHubName": "MyEventHub",
+ "connection": "myEventHubReadConnectionAppSetting"
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System;
+using Microsoft.Extensions.Logging;
+
+public static void Run(string myEventHubMessage, ILogger log)
+{
+    log.LogInformation($"C# function triggered to process a message: {myEventHubMessage}");
+}
+```
+
+To get access to event metadata in function code, bind to an [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) object. You can also access the same properties by using binding expressions in the method signature. The following example shows both ways to get the same data:
+
+```cs
+#r "Microsoft.Azure.EventHubs"
+
+using System.Text;
+using System;
+using Microsoft.Azure.EventHubs;
+using Microsoft.Extensions.Logging;
+
+public void Run(EventData myEventHubMessage,
+    DateTime enqueuedTimeUtc,
+    Int64 sequenceNumber,
+    string offset,
+    ILogger log)
+{
+    log.LogInformation($"Event: {Encoding.UTF8.GetString(myEventHubMessage.Body)}");
+    log.LogInformation($"EnqueuedTimeUtc={myEventHubMessage.SystemProperties.EnqueuedTimeUtc}");
+    log.LogInformation($"SequenceNumber={myEventHubMessage.SystemProperties.SequenceNumber}");
+    log.LogInformation($"Offset={myEventHubMessage.SystemProperties.Offset}");
+
+    // Metadata accessed by using binding expressions in the method signature
+    log.LogInformation($"EnqueuedTimeUtc={enqueuedTimeUtc}");
+    log.LogInformation($"SequenceNumber={sequenceNumber}");
+    log.LogInformation($"Offset={offset}");
+}
+```
+
+To receive events in a batch, make `string` or `EventData` an array:
+
+```cs
+public static void Run(string[] eventHubMessages, ILogger log)
+{
+    foreach (var message in eventHubMessages)
+    {
+        log.LogInformation($"C# function triggered to process a message: {message}");
+ }
+}
+```
+
+### Event Hubs output
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+|||
+|**type** | Must be set to `eventHub`. |
+|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the event. |
+|**eventHubName** | Functions 2.x and higher. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. In Functions 1.x, this property is named `path`.|
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](./functions-bindings-event-hubs-output.md#connections).|
+
+The following example shows an event hub trigger binding in a *function.json* file and a C# script function that uses the binding. The function writes a message to an event hub.
+
+The following examples show Event Hubs binding data in the *function.json* file for Functions runtime version 2.x and later versions.
+
+```json
+{
+ "type": "eventHub",
+ "name": "outputEventHubMessage",
+ "eventHubName": "myeventhub",
+ "connection": "MyEventHubSendAppSetting",
+ "direction": "out"
+}
+```
+
+Here's C# script code that creates one message:
+
+```cs
+using System;
+using Microsoft.Extensions.Logging;
+
+public static void Run(TimerInfo myTimer, out string outputEventHubMessage, ILogger log)
+{
+ String msg = $"TimerTriggerCSharp1 executed at: {DateTime.Now}";
+ log.LogInformation(msg);
+ outputEventHubMessage = msg;
+}
+```
+
+Here's C# script code that creates multiple messages:
+
+```cs
+public static void Run(TimerInfo myTimer, ICollector<string> outputEventHubMessage, ILogger log)
+{
+ string message = $"Message created at: {DateTime.Now}";
+ log.LogInformation(message);
+ outputEventHubMessage.Add("1 " + message);
+ outputEventHubMessage.Add("2 " + message);
+}
+```
+
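+For asynchronous code, the output binding also supports `IAsyncCollector<string>`. A minimal sketch, mirroring the `ICollector` example above:
+
+```cs
+using System;
+using System.Threading.Tasks;
+using Microsoft.Extensions.Logging;
+
+public static async Task Run(TimerInfo myTimer, IAsyncCollector<string> outputEventHubMessage, ILogger log)
+{
+    string message = $"Message created at: {DateTime.Now}";
+    log.LogInformation(message);
+
+    // Each AddAsync call stages an event; the events are published when the function completes.
+    await outputEventHubMessage.AddAsync("1 " + message);
+    await outputEventHubMessage.AddAsync("2 " + message);
+}
+```
+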
+### Event Grid trigger
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file. There are no constructor parameters or properties to set in the `EventGridTrigger` attribute.
+
+|function.json property |Description|
+|||
+| **type** | Required - must be set to `eventGridTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the variable name used in function code for the parameter that receives the event data. |
+
+The following example shows an Event Grid trigger defined in the *function.json* file.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "eventGridTrigger",
+ "name": "eventGridEvent",
+ "direction": "in"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's an example of a C# script function that uses an `EventGridEvent` binding parameter:
+
+```csharp
+#r "Azure.Messaging.EventGrid"
+using Azure.Messaging.EventGrid;
+using Microsoft.Extensions.Logging;
+
+public static void Run(EventGridEvent eventGridEvent, ILogger log)
+{
+ log.LogInformation(eventGridEvent.Data.ToString());
+}
+```
+
+Here's an example of a C# script function that uses a `JObject` binding parameter:
+
+```cs
+#r "Newtonsoft.Json"
+
+using Newtonsoft.Json;
+using Newtonsoft.Json.Linq;
+
+public static void Run(JObject eventGridEvent, TraceWriter log)
+{
+ log.Info(eventGridEvent.ToString(Formatting.Indented));
+}
+```
+
+### Event Grid output
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+|||
+|**type** | Must be set to `eventGrid`. |
+|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the event. |
+|**topicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
+|**topicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
+
+The following example shows the Event Grid output binding data in the *function.json* file.
+
+```json
+{
+ "type": "eventGrid",
+ "name": "outputEvent",
+ "topicEndpointUri": "MyEventGridTopicUriSetting",
+ "topicKeySetting": "MyEventGridTopicKeySetting",
+ "direction": "out"
+}
+```
+
+Here's C# script code that creates one event:
+
+```cs
+#r "Microsoft.Azure.EventGrid"
+using System;
+using Microsoft.Azure.EventGrid.Models;
+using Microsoft.Extensions.Logging;
+
+public static void Run(TimerInfo myTimer, out EventGridEvent outputEvent, ILogger log)
+{
+ outputEvent = new EventGridEvent("message-id", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0");
+}
+```
+
+Here's C# script code that creates multiple events:
+
+```cs
+#r "Microsoft.Azure.EventGrid"
+using System;
+using Microsoft.Azure.EventGrid.Models;
+using Microsoft.Extensions.Logging;
+
+public static void Run(TimerInfo myTimer, ICollector<EventGridEvent> outputEvent, ILogger log)
+{
+ outputEvent.Add(new EventGridEvent("message-id-1", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"));
+ outputEvent.Add(new EventGridEvent("message-id-2", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"));
+}
+```
+
+### Service Bus trigger
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `serviceBusTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the queue or topic message in function code. |
+|**queueName**| Name of the queue to monitor. Set only if monitoring a queue, not for a topic.|
+|**topicName**| Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
+|**subscriptionName**| Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
+|**connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](./functions-bindings-service-bus-trigger.md#connections).|
+|**accessRights**| Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
+|**isSessionsEnabled**| `true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
+|**autoComplete**| `true` when the trigger should automatically call complete after processing; `false` when the function code will manually call complete (see the sketch after the following example).<br/><br/>Setting to `false` is only supported in C#.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or dead-letter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented, and the lock is automatically renewed.<br/><br/>This property is available only in Azure Functions 2.x and higher. |
+
+The following example shows a Service Bus trigger binding in a *function.json* file and a C# script function that uses the binding. The function reads message metadata and logs a Service Bus queue message.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+"bindings": [
+ {
+ "queueName": "testqueue",
+ "connection": "MyServiceBusConnection",
+ "name": "myQueueItem",
+ "type": "serviceBusTrigger",
+ "direction": "in"
+ }
+],
+"disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System;
+
+public static void Run(string myQueueItem,
+ Int32 deliveryCount,
+ DateTime enqueuedTimeUtc,
+ string messageId,
+ TraceWriter log)
+{
+ log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
+
+ log.Info($"EnqueuedTimeUtc={enqueuedTimeUtc}");
+ log.Info($"DeliveryCount={deliveryCount}");
+ log.Info($"MessageId={messageId}");
+}
+```
+
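+When `autoComplete` is `false`, your function settles the message itself. Here's a minimal sketch of manual completion, assuming the Functions 2.x Service Bus extension; the binding and parameter names are illustrative:
+
+```cs
+#r "Microsoft.Azure.ServiceBus"
+
+using System.Threading.Tasks;
+using Microsoft.Azure.ServiceBus;
+using Microsoft.Azure.ServiceBus.Core;
+using Microsoft.Extensions.Logging;
+
+public static async Task Run(Message myQueueItem, MessageReceiver messageReceiver, string lockToken, ILogger log)
+{
+    log.LogInformation($"Processing message {myQueueItem.MessageId}");
+
+    // Because autoComplete is false, the function must complete (or abandon) the message explicitly.
+    await messageReceiver.CompleteAsync(lockToken);
+}
+```
+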
+### Service Bus output
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+|||
+|**type** |Must be set to `serviceBus`. This property is set automatically when you create the binding in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the binding in the Azure portal. |
+|**name** | The name of the variable that represents the queue or topic message in function code. Set to `$return` to reference the function return value (see the sketch after the following examples). |
+|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic.|
+|**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
+|**connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](./functions-bindings-service-bus-output.md#connections).|
+|**accessRights** (v1 only)|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
+
+The following example shows a Service Bus output binding in a *function.json* file and a C# script function that uses the binding. The function uses a timer trigger to send a queue message every 15 seconds.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "schedule": "0/15 * * * * *",
+ "name": "myTimer",
+ "runsOnStartup": true,
+ "type": "timerTrigger",
+ "direction": "in"
+ },
+ {
+ "name": "outputSbQueue",
+ "type": "serviceBus",
+ "queueName": "testqueue",
+ "connection": "MyServiceBusConnection",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's C# script code that creates a single message:
+
+```cs
+public static void Run(TimerInfo myTimer, ILogger log, out string outputSbQueue)
+{
+ string message = $"Service Bus queue message created at: {DateTime.Now}";
+ log.LogInformation(message);
+ outputSbQueue = message;
+}
+```
+
+Here's C# script code that creates multiple messages:
+
+```cs
+public static async Task Run(TimerInfo myTimer, ILogger log, IAsyncCollector<string> outputSbQueue)
+{
+ string message = $"Service Bus queue messages created at: {DateTime.Now}";
+ log.LogInformation(message);
+ await outputSbQueue.AddAsync("1 " + message);
+ await outputSbQueue.AddAsync("2 " + message);
+}
+```
+
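+When the binding's **name** is set to `$return`, a single message can be sent by returning it from the function instead of using an output parameter. A minimal sketch under that assumption:
+
+```cs
+public static string Run(TimerInfo myTimer, ILogger log)
+{
+    string message = $"Service Bus queue message created at: {DateTime.Now}";
+    log.LogInformation(message);
+
+    // The return value is written to the queue through the $return output binding.
+    return message;
+}
+```
+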
+### Cosmos DB trigger
+
+This section outlines support for [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only.
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
++
+The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "type": "cosmosDBTrigger",
+ "name": "documents",
+ "direction": "in",
+ "leaseContainerName": "leases",
+ "connection": "<connection-app-setting>",
+ "databaseName": "Tasks",
+ "containerName": "Items",
+ "createLeaseContainerIfNotExists": true
+}
+```
+
+Here's the C# script code:
+
+```cs
+ using System;
+ using System.Collections.Generic;
+ using Microsoft.Extensions.Logging;
+
+ // Customize the model with your own desired properties
+ public class ToDoItem
+ {
+ public string id { get; set; }
+ public string Description { get; set; }
+ }
+
+ public static void Run(IReadOnlyList<ToDoItem> documents, ILogger log)
+ {
+ log.LogInformation("Documents modified " + documents.Count);
+ log.LogInformation("First document Id " + documents[0].id);
+ }
+```
+
+### Cosmos DB input
+
+This section outlines support for [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only.
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
++
+This section contains the following examples:
+
+* [Queue trigger, look up ID from string](#queue-trigger-look-up-id-from-string-c-script)
+* [Queue trigger, get multiple docs, using SqlQuery](#queue-trigger-get-multiple-docs-using-sqlquery-c-script)
+* [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c-script)
+* [HTTP trigger, look up ID from route data](#http-trigger-look-up-id-from-route-data-c-script)
+* [HTTP trigger, get multiple docs, using SqlQuery](#http-trigger-get-multiple-docs-using-sqlquery-c-script)
+* [HTTP trigger, get multiple docs, using DocumentClient](#http-trigger-get-multiple-docs-using-documentclient-c-script)
+
+The HTTP trigger examples refer to a simple `ToDoItem` type:
+
+```cs
+namespace CosmosDBSamplesV2
+{
+ public class ToDoItem
+ {
+ public string Id { get; set; }
+ public string Description { get; set; }
+ }
+}
+```
+
+<a id="queue-trigger-look-up-id-from-string-c-script"></a>
+
+#### Queue trigger, look up ID from string
+
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a C# script function that uses the binding. The function reads a single document and updates the document's text value.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "inputDocument",
+ "type": "cosmosDB",
+ "databaseName": "MyDatabase",
+ "collectionName": "MyCollection",
+ "id" : "{queueTrigger}",
+ "partitionKey": "{partition key value}",
+ "connectionStringSetting": "MyAccount_COSMOSDB",
+ "direction": "in"
+}
+```
+
+Here's the C# script code:
+
+```cs
+ using System;
+
+ // Change input document contents using Azure Cosmos DB input binding
+ public static void Run(string myQueueItem, dynamic inputDocument)
+ {
+ inputDocument.text = "This has changed.";
+ }
+```
+
+<a id="queue-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
+
+#### Queue trigger, get multiple docs, using SqlQuery
+
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a C# script function that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
+
+The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "documents",
+ "type": "cosmosDB",
+ "direction": "in",
+ "databaseName": "MyDb",
+ "collectionName": "MyCollection",
+ "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
+ "connectionStringSetting": "CosmosDBConnection"
+}
+```
+
+Here's the C# script code:
+
+```csharp
+ public static void Run(QueuePayload myQueueItem, IEnumerable<dynamic> documents)
+ {
+ foreach (var doc in documents)
+ {
+ // operate on each document
+ }
+ }
+
+ public class QueuePayload
+ {
+ public string departmentId { get; set; }
+ }
+```
+
+<a id="http-trigger-look-up-id-from-query-string-c-script"></a>
+
+#### HTTP trigger, look up ID from query string
+
+The following example shows a C# script function that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "toDoItem",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connectionStringSetting": "CosmosDBConnection",
+ "direction": "in",
+ "Id": "{Query.id}",
+ "PartitionKey" : "{Query.partitionKeyValue}"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System.Net;
+using Microsoft.Extensions.Logging;
+
+public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ if (toDoItem == null)
+ {
+ log.LogInformation($"ToDo item not found");
+ }
+ else
+ {
+ log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
+<a id="http-trigger-look-up-id-from-route-data-c-script"></a>
+
+#### HTTP trigger, look up ID from route data
+
+The following example shows a C# script function that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ],
+ "route":"todoitems/{partitionKeyValue}/{id}"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "toDoItem",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connectionStringSetting": "CosmosDBConnection",
+ "direction": "in",
+ "id": "{id}",
+ "partitionKey": "{partitionKeyValue}"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System.Net;
+using Microsoft.Extensions.Logging;
+
+public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ if (toDoItem == null)
+ {
+ log.LogInformation($"ToDo item not found");
+ }
+ else
+ {
+ log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
+<a id="http-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
+
+#### HTTP trigger, get multiple docs, using SqlQuery
+
+The following example shows a C# script function that retrieves a list of documents. The function is triggered by an HTTP request. The query is specified in the `SqlQuery` attribute property.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "toDoItems",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connectionStringSetting": "CosmosDBConnection",
+ "direction": "in",
+ "sqlQuery": "SELECT top 2 * FROM c order by c._ts desc"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System.Net;
+using Microsoft.Extensions.Logging;
+
+public static HttpResponseMessage Run(HttpRequestMessage req, IEnumerable<ToDoItem> toDoItems, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ foreach (ToDoItem toDoItem in toDoItems)
+ {
+ log.LogInformation(toDoItem.Description);
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
+<a id="http-trigger-get-multiple-docs-using-documentclient-c-script"></a>
+
+#### HTTP trigger, get multiple docs, using DocumentClient
+
+The following example shows a C# script function that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `DocumentClient` instance provided by the Azure Cosmos DB binding to read a list of documents. The `DocumentClient` instance could also be used for write operations.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "client",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connectionStringSetting": "CosmosDBConnection",
+ "direction": "inout"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Microsoft.Azure.Documents.Client"
+
+using System.Net;
+using Microsoft.Azure.Documents.Client;
+using Microsoft.Azure.Documents.Linq;
+using Microsoft.Extensions.Logging;
+
+public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, DocumentClient client, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoItems", "Items");
+ string searchterm = req.GetQueryNameValuePairs()
+ .FirstOrDefault(q => string.Compare(q.Key, "searchterm", true) == 0)
+ .Value;
+
+ if (searchterm == null)
+ {
+ return req.CreateResponse(HttpStatusCode.NotFound);
+ }
+
+ log.LogInformation($"Searching for word: {searchterm} using Uri: {collectionUri.ToString()}");
+ IDocumentQuery<ToDoItem> query = client.CreateDocumentQuery<ToDoItem>(collectionUri)
+ .Where(p => p.Description.Contains(searchterm))
+ .AsDocumentQuery();
+
+ while (query.HasMoreResults)
+ {
+ foreach (ToDoItem result in await query.ExecuteNextAsync())
+ {
+ log.LogInformation(result.Description);
+ }
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
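+Because the binding provides a full `DocumentClient`, the same pattern extends to writes. Here's a minimal sketch of creating a document, assuming the same database and collection; the trigger and document shape are illustrative:
+
+```cs
+#r "Microsoft.Azure.Documents.Client"
+
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Documents.Client;
+using Microsoft.Extensions.Logging;
+
+public static async Task Run(string myQueueItem, DocumentClient client, ILogger log)
+{
+    Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoItems", "Items");
+
+    // Create a new document from the incoming queue message.
+    await client.CreateDocumentAsync(collectionUri, new { id = Guid.NewGuid().ToString(), Description = myQueueItem });
+    log.LogInformation("Document created.");
+}
+```
+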
+### Cosmos DB output
+
+This section outlines support for [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only.
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
++
+This section contains the following examples:
+
+* [Queue trigger, write one doc](#queue-trigger-write-one-doc-c-script)
+* [Queue trigger, write docs using IAsyncCollector](#queue-trigger-write-docs-using-iasynccollector-c-script)
+
+<a id="queue-trigger-write-one-doc-c-script"></a>
+
+#### Queue trigger, write one doc
+
+The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format:
+
+```json
+{
+ "name": "John Henry",
+ "employeeId": "123456",
+ "address": "A town nearby"
+}
+```
+
+The function creates Azure Cosmos DB documents in the following format for each record:
+
+```json
+{
+ "id": "John Henry-123456",
+ "name": "John Henry",
+ "employeeId": "123456",
+ "address": "A town nearby"
+}
+```
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "employeeDocument",
+ "type": "cosmosDB",
+ "databaseName": "MyDatabase",
+ "collectionName": "MyCollection",
+ "createIfNotExists": true,
+ "connectionStringSetting": "MyAccount_COSMOSDB",
+ "direction": "out"
+}
+```
+
+Here's the C# script code:
+
+```cs
+ #r "Newtonsoft.Json"
+
+ using Microsoft.Azure.WebJobs.Host;
+ using Newtonsoft.Json.Linq;
+ using Microsoft.Extensions.Logging;
+
+ public static void Run(string myQueueItem, out object employeeDocument, ILogger log)
+ {
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+
+ dynamic employee = JObject.Parse(myQueueItem);
+
+ employeeDocument = new {
+ id = employee.name + "-" + employee.employeeId,
+ name = employee.name,
+ employeeId = employee.employeeId,
+ address = employee.address
+ };
+ }
+```
+
+<a id="queue-trigger-write-docs-using-iasynccollector-c-script"></a>
+
+#### Queue trigger, write docs using IAsyncCollector
+
+To create multiple documents, you can bind to `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the supported types.
+
+This example refers to a simple `ToDoItem` type:
+
+```cs
+namespace CosmosDBSamplesV2
+{
+ public class ToDoItem
+ {
+ public string id { get; set; }
+ public string Description { get; set; }
+ }
+}
+```
+
+Here's the function.json file:
+
+```json
+{
+ "bindings": [
+ {
+ "name": "toDoItemsIn",
+ "type": "queueTrigger",
+ "direction": "in",
+ "queueName": "todoqueueforwritemulti",
+ "connectionStringSetting": "AzureWebJobsStorage"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "toDoItemsOut",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connectionStringSetting": "CosmosDBConnection",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System;
+using Microsoft.Extensions.Logging;
+
+public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> toDoItemsOut, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed {toDoItemsIn?.Length} items");
+
+ foreach (ToDoItem toDoItem in toDoItemsIn)
+ {
+ log.LogInformation($"Description={toDoItem.Description}");
+ await toDoItemsOut.AddAsync(toDoItem);
+ }
+}
+```
+ ## Next steps > [!div class="nextstepaction"]
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
To learn more about specific language version support policy timeline, visit the
|Language | Configuration guides | |--|--|
-|C# (class library) |[link](./functions-dotnet-class-library.md#supported-versions)|
+|C# (in-process model) |[link](./functions-dotnet-class-library.md#supported-versions)|
+|C# (isolated worker model) |[link](./dotnet-isolated-process-guide.md#supported-versions)|
|Node |[link](./functions-reference-node.md#setting-the-node-version)| |PowerShell |[link](./functions-reference-powershell.md#changing-the-powershell-version)| |Python |[link](./functions-reference-python.md#python-version)|
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
zone_pivot_groups: programming-languages-set-functions
-# Migrate apps from Azure Functions version 1.x to version 4.x
+# <a name="top"></a>Migrate apps from Azure Functions version 1.x to version 4.x
::: zone pivot="programming-language-java"+ > [!IMPORTANT] > Java isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your Java app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above. + ::: zone-end+ ::: zone pivot="programming-language-typescript"+ > [!IMPORTANT] > TypeScript isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your TypeScript app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above. + ::: zone-end+ ::: zone pivot="programming-language-powershell"+ > [!IMPORTANT] > PowerShell isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your PowerShell app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above. + ::: zone-end+ ::: zone pivot="programming-language-python"+ > [!IMPORTANT]
-> Python isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your Python app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+> Python isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your Python app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+ ::: zone-end++
+This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
++ ::: zone pivot="programming-language-csharp"
-If you're running on version 1.x of the Azure Functions runtime, it's likely because your C# app requires .NET Framework 2.1. Version 4.x of the runtime now lets you run .NET Framework 4.8 apps. At this point, you should consider migrating your version 1.x function apps to run on version 4.x. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md).
-Migrating a C# function app from version 1.x to version 4.x of the Functions runtime requires you to make changes to your project code. Many of these changes are a result of changes in the C# language and .NET APIs. JavaScript apps generally don't require code changes to migrate.
+## Choose your target .NET version
+
+On version 1.x of the Functions runtime, your C# function app targets .NET Framework.
-You can upgrade your C# project to one of the following versions of .NET, all of which can run on Functions version 4.x:
-| .NET version | Process model<sup>*</sup> |
-| | | |
-| .NET 7 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
-| .NET 6 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
-| .NET 6 | [In-process](./functions-dotnet-class-library.md) |
-| .NET&nbsp;Framework&nbsp;4.8 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+> [!TIP]
+> **Unless your app depends on a library or API only available to .NET Framework, we recommend upgrading to .NET 6 on the isolated worker model.** Many apps on version 1.x target .NET Framework only because that is what was available when they were created. Additional capabilities are available to more recent versions of .NET, and if your app is not forced to stay on .NET Framework due to a dependency, you should upgrade.
+>
+> Migrating to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. The [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
-<sup>*</sup> [In-process execution](./functions-dotnet-class-library.md) is only supported for Long Term Support (LTS) releases of .NET. Non-LTS releases and .NET Framework require you to run in an [isolated worker process](./dotnet-isolated-process-guide.md). For a feature and functionality comparison between the two process models, see [Differences between in-process and isolate worker process .NET Azure Functions](./dotnet-isolated-in-process-differences.md).
::: zone-end+ ::: zone pivot="programming-language-javascript,programming-language-csharp"
-This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime.
## Prepare for migration
Before you upgrade your app to version 4.x of the Functions runtime, you should
* Consider using a [staging slot](functions-deployment-slots.md) to test and verify your app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#upgrade-using-slots). ::: zone-end ::: zone pivot="programming-language-csharp"+ ## Update your project files The following sections describes the updates you must make to your C# project files to be able to run on one of the supported versions of .NET in Functions version 4.x. The updates shown are ones common to most projects. Your project code may require updates not mentioned in this article, especially when using custom NuGet packages.
+Migrating a C# function app from version 1.x to version 4.x of the Functions runtime requires you to make changes to your project code. Many of these changes are a result of changes in the C# language and .NET APIs.
+ Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process).
+> [!TIP]
+> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+ ### .csproj file The following example is a .csproj project file that runs on version 1.x:
In version 2.x, the following changes were made:
> [!div class="nextstepaction"] > [Learn more about Functions versions](functions-versions.md)+
+[.NET Upgrade Assistant]: /dotnet/core/porting/upgrade-assistant-overview
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
zone_pivot_groups: programming-languages-set-functions
Azure Functions version 4.x is highly backwards compatible to version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md). > [!IMPORTANT]
-> Beginning on December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support.
+> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support.
>
-> After the deadline, function apps can be created and deployed from your CI/CD DevOps pipeline, and all existing apps continue to run without breaking changes. However, your apps are not eligible for new features, security patches, and performance optimizations. You'll get related service support once you upgraded them to version 4.x.
+> Apps using versions 2.x and 3.x can still be created and deployed from your CI/CD DevOps pipeline, and all existing apps continue to run without breaking changes. However, your apps are not eligible for new features, security patches, and performance optimizations. You'll only get related service support once you upgrade them to version 4.x.
>
->End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these older runtime versions. This requirement affects all [languages supported by Azure Functions](supported-languages.md).
+> End of support for these older runtime versions is due to the end of support for .NET Core 3.1, which they had as a core dependency. This requirement affects all [languages supported by Azure Functions](supported-languages.md).
>
->We highly recommend you migrating your function apps to version 4.x of the Functions runtime by following this article.
->
->Functions version 1.x is still supported for C# function apps that require the .NET Framework. Preview support is now available in Functions 4.x to [run C# functions on .NET Framework 4.8](dotnet-isolated-process-guide.md#supported-versions).
-
+> We highly recommend that you migrate your function apps to version 4.x of the Functions runtime by following this article.
This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top). ::: zone pivot="programming-language-csharp"
-## Choose your target .NET
-
-On version 3.x of the Functions runtime, your C# function app targets .NET Core 3.1. When you migrate your function app to version 4.x, you have the opportunity to choose the target version of .NET. You can upgrade your C# project to one of the following versions of .NET, all of which can run on Functions version 4.x:
+## Choose your target .NET version
-| .NET version | Process model<sup>*</sup> |
-| | | |
-| .NET 7 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
-| .NET 6 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
-| .NET 6 | [In-process](./functions-dotnet-class-library.md) |
+On version 3.x of the Functions runtime, your C# function app targets .NET Core 3.1 using the in-process model or .NET 5 using the isolated worker model.
-<sup>*</sup> [In-process execution](./functions-dotnet-class-library.md) is only supported for Long Term Support (LTS) releases of .NET. Standard Terms Support (STS) releases and .NET Framework are supported .NET Azure functions [isolated worker process](./dotnet-isolated-process-guide.md).
> [!TIP]
-> On version 3.x of the Functions runtime, if you're on .NET 5, we recommend you upgrade to .NET 7. If you're on .NET Core 3.1, we recommend you upgrade to .NET 6 (in-process) for a quick upgrade path.
+> **If you're migrating from .NET 5 (on the isolated worker model), we recommend upgrading to .NET 6 on the isolated worker model.** This provides a quick upgrade path with the longest support window from .NET.
>
-> If you're looking for moving to a Long Term Support (LTS) .NET release, we recommend you upgrade to .NET 6 .
->
-> Migrating to .NET Isolated worker model to get all benefits provided by Azure Functions .NET isolated worker process. For more information about .NET isolated worker process advantages see [.NET isolated worker process enhancement](./dotnet-isolated-in-process-differences.md). For more information about .NET version support, see [Supported versions](./dotnet-isolated-process-guide.md#supported-versions).
+> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick upgrade path. However, you might also consider upgrading to .NET 6 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. The [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
-Upgrading from .NET Core 3.1 to .NET 6 running in-process requires minimal updates to your project and virtually no updates to code. Switching to the isolated worker process model requires you to make changes to your code, but provides the flexibility of being able to easily run on any future version of .NET. For a feature and functionality comparison between the two process models, see [Differences between in-process and isolate worker process .NET Azure Functions](./dotnet-isolated-in-process-differences.md).
::: zone-end ## Prepare for migration
Upgrading instructions are language dependent. If you don't see your language, c
Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process).
+> [!TIP]
+> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+ ### .csproj file The following example is a .csproj project file that uses .NET Core 3.1 on version 3.x:
If you don't see your programming language, go select it from the [top of the pa
> [!div class="nextstepaction"] > [Learn more about Functions versions](functions-versions.md)+
+[.NET Upgrade Assistant]: /dotnet/core/porting/upgrade-assistant-overview
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
To install DCR Config Generator:
1. Run the script:
- Option 1: Outputs **ready-to-deploy ARM template files** only, which creates the generated DCR in the specified subscription and resource group, when deployed.
-
- ```powershell
- .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath
- ```
- Option 2: Outputs **ready-to-deploy ARM template files** and **the DCR JSON files** separately for you to deploy via other means. You need to set the `GetDcrPayload` parameter.
-
- ```powershell
- .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath -GetDcrPayload
- ```
-
- **Parameters**
-
- | Parameter | Required? | Description |
- ||||
- | `SubscriptionId` | Yes | ID of the subscription that contains the target workspace. |
- | `ResourceGroupName` | Yes | Resource group that contains the target workspace. |
- | `WorkspaceName` | Yes | Name of the target workspace. |
- | `DCRName` | Yes | Name of the new DCR. |
- | `Location` | Yes | Region location for the new DCR. |
- | `GetDcrPayload` | No | When set, it generates additional DCR JSON files
- | `FolderPath` | No | Path in which to save the ARM template files and JSON files (optional). By default, Azure Monitor uses the current directory. |
-
+ Option 1: Outputs **ready-to-deploy ARM template files** only, which creates the generated DCR in the specified subscription and resource group, when deployed.
+
+ ```powershell
+ .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath
+ ```
+ Option 2: Outputs **ready-to-deploy ARM template files** and **the DCR JSON files** separately for you to deploy via other means. You need to set the `GetDcrPayload` parameter.
+
+ ```powershell
+ .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath -GetDcrPayload
+ ```
+
+ **Parameters**
+
+ | Parameter | Required? | Description |
+ ||||
+ | `SubscriptionId` | Yes | ID of the subscription that contains the target workspace. |
+ | `ResourceGroupName` | Yes | Resource group that contains the target workspace. |
+ | `WorkspaceName` | Yes | Name of the target workspace. |
+ | `DCRName` | Yes | Name of the new DCR. |
+ | `Location` | Yes | Region location for the new DCR. |
+ | `GetDcrPayload` | No | When set, it generates additional DCR JSON files. |
+ | `FolderPath` | No | Path in which to save the ARM template files and JSON files (optional). By default, Azure Monitor uses the current directory. |
+ 1. Review the output ARM template files. The script can produce two types of ARM template files, depending on the agent configuration in the target workspace:
- - Windows ARM template and parameter files - if the target workspace contains Windows performance counters or Windows events.
- - Linux ARM template and parameter files - if the target workspace contains Linux performance counters or Linux Syslog events.
-
- If the Log Analytics workspace wasn't [configured to collect data](./log-analytics-agent.md#data-collected) from connected agents, the generated files will be empty. This is a scenario in which the agent was connected to a Log Analytics workspace, but wasn't configured to send any data from the host machine.
+ - Windows ARM template and parameter files - if the target workspace contains Windows performance counters or Windows events.
+ - Linux ARM template and parameter files - if the target workspace contains Linux performance counters or Linux Syslog events.
+
+ If the Log Analytics workspace wasn't [configured to collect data](./log-analytics-agent.md#data-collected) from connected agents, the generated files will be empty. This is a scenario in which the agent was connected to a Log Analytics workspace, but wasn't configured to send any data from the host machine.
1. Deploy the generated ARM templates:
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
Overview of Azure Monitor Agent for Linux Syslog collection and supported RFC st
- Azure Monitor Agent ingests Syslog events via the previously mentioned socket and filters them based on facility or severity combination from data collection rule (DCR) configuration in `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Any `facility` or `severity` not present in the DCR is dropped. - Azure Monitor Agent attempts to parse events in accordance with **RFC3164** and **RFC5424**. It also knows how to parse the message formats listed on [this website](./azure-monitor-agent-overview.md#data-sources-and-destinations). - Azure Monitor Agent identifies the destination endpoint for Syslog events from the DCR configuration and attempts to upload the events.
- > [!NOTE]
- > Azure Monitor Agent uses local persistency by default. All events received from `rsyslog` or `syslog-ng` are queued in `/var/opt/microsoft/azuremonitoragent/events` if they fail to be uploaded.
+ > [!NOTE]
+ > Azure Monitor Agent uses local persistency by default. All events received from `rsyslog` or `syslog-ng` are queued in `/var/opt/microsoft/azuremonitoragent/events` if they fail to be uploaded.
## Issues
If you're sending a high log volume through rsyslog and your system is set up to
1. For example, to remove `local4` events from being logged at `/var/log/syslog` or `/var/log/messages`, change this line in `/etc/rsyslog.d/50-default.conf` from this snippet:
- ```config
- *.*;auth,authpriv.none -/var/log/syslog
- ```
+ ```config
+ *.*;auth,authpriv.none -/var/log/syslog
+ ```
- To this snippet (add `local4.none;`):
+ To this snippet (add `local4.none;`):
- ```config
- *.*;local4.none;auth,authpriv.none -/var/log/syslog
- ```
+ ```config
+ *.*;local4.none;auth,authpriv.none -/var/log/syslog
+ ```
1. `sudo systemctl restart rsyslog`
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorLinuxAgent'should show up with Status: 'Provisioning succeeded'
- 2. If you don't see the extension listed, check if machine can reach Azure and find the extension to install using the command below:
- ```azurecli
- az vm extension image list-versions --location <machine-region> --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor
- ```
- 3. Wait for 10-15 minutes as extension maybe in transitioning status. If it still doesn't show up as above, [uninstall and install the extension](./azure-monitor-agent-manage.md) again.
- 4. Check if you see any errors in extension logs located at `/var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/` on your machine
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
-
+ 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorLinuxAgent' should show up with Status: 'Provisioning succeeded'
+ 2. If you don't see the extension listed, check if the machine can reach Azure and find the extension to install using the command below:
+ ```azurecli
+ az vm extension image list-versions --location <machine-region> --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor
+ ```
+ 3. Wait 10-15 minutes, as the extension may be in a transitioning state. If it still doesn't show up as above, [uninstall and install the extension](./azure-monitor-agent-manage.md) again.
+ 4. Check if you see any errors in extension logs located at `/var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/` on your machine
+ 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+ 3. **Verify that the agent is running**:
- 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
- ```Kusto
- Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
- ```
- 2. Check if the agent service is running
- ```
- systemctl status azuremonitoragent
- ```
- 3. Check if you see any errors in core agent logs located at `/var/opt/microsoft/azuremonitoragent/log/mdsd.*` on your machine
- 3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
-
+ 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
+ ```Kusto
+ Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
+ ```
+ 2. Check if the agent service is running
+ ```
+ systemctl status azuremonitoragent
+ ```
+ 3. Check if you see any errors in core agent logs located at `/var/opt/microsoft/azuremonitoragent/log/mdsd.*` on your machine
+ 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+
4. **Verify that the DCR exists and is associated with the virtual machine:**
- 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
- 2. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here.
- 3. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
+ 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
+ 2. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here.
+ 3. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
+ 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
5. **Verify that agent was able to download the associated DCR(s) from AMCS service:**
- 1. Check if you see the latest DCR downloaded at this location `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`
- 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+ 1. Check if you see the latest DCR downloaded at this location `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`
+ 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
## Issues collecting Syslog For more information on how to troubleshoot syslog issues with Azure Monitor Agent, see [here](azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md).
-
-- The quality of service (QoS) file `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos` provides CSV-format 15-minute aggregations of the processed events and contains the information on the amount of the processed syslog events in the given timeframe. **This file is useful in tracking Syslog event ingestion drops**. -
- For example, the below fragment shows that in the 15 minutes preceding 2022-02-28T19:55:23.5432920Z, the agent received 77 syslog events with facility daemon and level info and sent 77 of said events to the upload task. Additionally, the agent upload task received 77 and successfully uploaded all 77 of these daemon.info messages.
-
- ```
- #Time: 2022-02-28T19:55:23.5432920Z
- #Fields: Operation,Object,TotalCount,SuccessCount,Retries,AverageDuration,AverageSize,AverageDelay,TotalSize,TotalRowsRead,TotalRowsSent
- ...
- MaRunTaskLocal,daemon.debug,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.info,15,15,0,60000,46.2,0,693,77,77
- MaRunTaskLocal,daemon.notice,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.warning,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.error,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.critical,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.alert,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.emergency,15,15,0,60000,0,0,0,0,0
- ...
- MaODSRequest,https://e73fd5e3-ea2b-4637-8da0-5c8144b670c8_LogManagement,15,15,0,455067,476.467,0,7147,77,77
- ```
-
+
+- The quality of service (QoS) file `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos` provides CSV-format 15-minute aggregations of the processed events and contains information on the number of syslog events processed in the given timeframe. **This file is useful in tracking Syslog event ingestion drops**.
+
+ For example, the below fragment shows that in the 15 minutes preceding 2022-02-28T19:55:23.5432920Z, the agent received 77 syslog events with facility daemon and level info and sent 77 of said events to the upload task. Additionally, the agent upload task received 77 and successfully uploaded all 77 of these daemon.info messages.
+
+ ```
+ #Time: 2022-02-28T19:55:23.5432920Z
+ #Fields: Operation,Object,TotalCount,SuccessCount,Retries,AverageDuration,AverageSize,AverageDelay,TotalSize,TotalRowsRead,TotalRowsSent
+ ...
+ MaRunTaskLocal,daemon.debug,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.info,15,15,0,60000,46.2,0,693,77,77
+ MaRunTaskLocal,daemon.notice,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.warning,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.error,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.critical,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.alert,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.emergency,15,15,0,60000,0,0,0,0,0
+ ...
+ MaODSRequest,https://e73fd5e3-ea2b-4637-8da0-5c8144b670c8_LogManagement,15,15,0,455067,476.467,0,7147,77,77
+ ```
+ **Troubleshooting steps** 1. Review the [generic Linux AMA troubleshooting steps](#basic-troubleshooting-steps) first. If agent is emitting heartbeats, proceed to step 2. 2. The parsed configuration is stored at `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Check that Syslog collection is defined and the log destinations are the same as constructed in DCR UI / DCR JSON.
- 1. If yes, proceed to step 3. If not, the issue is in the configuration workflow.
- 2. Investigate `mdsd.err`,`mdsd.warn`, `mdsd.info` files under `/var/opt/microsoft/azuremonitoragent/log` for possible configuration errors.
- 3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog DCR not available' and **Problem type** as 'I need help configuring data collection from a VM'.
+ 1. If yes, proceed to step 3. If not, the issue is in the configuration workflow.
+ 2. Investigate `mdsd.err`,`mdsd.warn`, `mdsd.info` files under `/var/opt/microsoft/azuremonitoragent/log` for possible configuration errors.
+ 3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog DCR not available' and **Problem type** as 'I need help configuring data collection from a VM'.
3. Validate the layout of the Syslog collection workflow to ensure all necessary pieces are in place and accessible:
- 1. For `rsyslog` users, ensure the `/etc/rsyslog.d/10-azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `rsyslog` daemon (syslog user).
- 1. Check your rsyslog configuration at `/etc/rsyslog.conf` and `/etc/rsyslog.d/*` to see if you have any inputs bound to a non-default ruleset, as messages from these inputs won't be forwarded to Azure Monitor Agent. For instance, messages from an input configured with a non-default ruleset like `input(type="imtcp" port="514" `**`ruleset="myruleset"`**`)` won't be forward.
- 2. For `syslog-ng` users, ensure the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `syslog-ng` daemon (syslog user).
- 3. Ensure the file `/run/azuremonitoragent/default_syslog.socket` exists and is accessible by `rsyslog` or `syslog-ng` respectively.
- 4. Check for a corresponding drop in count of processed syslog events in `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos`. If such drop isn't indicated in the file, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog data dropped in pipeline' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
- 5. Check that syslog daemon queue isn't overflowing, causing the upload to fail, by referring the guidance here: [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](./azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md)
+ 1. For `rsyslog` users, ensure the `/etc/rsyslog.d/10-azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `rsyslog` daemon (syslog user).
+        1. Check your rsyslog configuration at `/etc/rsyslog.conf` and `/etc/rsyslog.d/*` to see if you have any inputs bound to a non-default ruleset, because messages from these inputs won't be forwarded to Azure Monitor Agent. For instance, messages from an input configured with a non-default ruleset like `input(type="imtcp" port="514" `**`ruleset="myruleset"`**`)` won't be forwarded.
+ 2. For `syslog-ng` users, ensure the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `syslog-ng` daemon (syslog user).
+ 3. Ensure the file `/run/azuremonitoragent/default_syslog.socket` exists and is accessible by `rsyslog` or `syslog-ng` respectively.
+ 4. Check for a corresponding drop in count of processed syslog events in `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos`. If such drop isn't indicated in the file, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog data dropped in pipeline' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+    5. Check that the syslog daemon queue isn't overflowing and causing the upload to fail by referring to the guidance here: [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](./azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md)
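+
+       The presence checks in substeps 1-3 can be scripted. A minimal sketch, assuming PowerShell is available on the machine (only the configuration file for the syslog daemon you actually run needs to be present):
+
+       ```powershell
+       # Sketch: verify the syslog collection pieces referenced above exist.
+       foreach ($p in '/etc/rsyslog.d/10-azuremonitoragent.conf',
+                      '/etc/syslog-ng/conf.d/azuremonitoragent.conf',
+                      '/run/azuremonitoragent/default_syslog.socket') {
+           if (Test-Path $p) { "present: $p" } else { "missing: $p" }
+       }
+       ```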
4. To debug syslog events ingestion further, you can append trace flag **-T 0x2002** at the end of **MDSD_OPTIONS** in the file `/etc/default/azuremonitoragent`, and restart the agent:
- ```
- export MDSD_OPTIONS="-A -c /etc/opt/microsoft/azuremonitoragent/mdsd.xml -d -r $MDSD_ROLE_PREFIX -S $MDSD_SPOOL_DIRECTORY/eh -L $MDSD_SPOOL_DIRECTORY/events -e $MDSD_LOG_DIR/mdsd.err -w $MDSD_LOG_DIR/mdsd.warn -o $MDSD_LOG_DIR/mdsd.info -T 0x2002"
- ```
+ ```
+ export MDSD_OPTIONS="-A -c /etc/opt/microsoft/azuremonitoragent/mdsd.xml -d -r $MDSD_ROLE_PREFIX -S $MDSD_SPOOL_DIRECTORY/eh -L $MDSD_SPOOL_DIRECTORY/events -e $MDSD_LOG_DIR/mdsd.err -w $MDSD_LOG_DIR/mdsd.warn -o $MDSD_LOG_DIR/mdsd.info -T 0x2002"
+ ```
5. After the issue is reproduced with the trace flag on, you'll find more debug information in `/var/opt/microsoft/azuremonitoragent/log/mdsd.info`. Inspect the file for the possible cause of the syslog collection issue, such as parsing, processing, configuration, or upload errors.
- > [!WARNING]
- > Ensure to remove trace flag setting **-T 0x2002** after the debugging session, since it generates many trace statements that could fill up the disk more quickly or make visually parsing the log file difficult.
+ > [!WARNING]
+   > Be sure to remove the trace flag setting **-T 0x2002** after the debugging session. It generates many trace statements that can fill up the disk quickly and make the log file difficult to read.
6. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA fails to collect syslog events' and **Problem type** as 'I need help with Azure Monitor Linux Agent'. ## Troubleshooting issues on Arc-enabled server
azure-monitor Azure Monitor Agent Troubleshoot Windows Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-arc.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your Arc-enabled server > Open **Settings** : **Extensions** from the pane on the left > 'AzureMonitorWindowsAgent'should show up with Status: 'Succeeded'
- 2. If not, check if the Arc agent (Connected Machine Agent) is able to connect to Azure and the extension service is running.
- ```azurecli
- azcmagent show
- ```
- You should see the below output:
- ```
- Resource Name : <server name>
- [...]
- Dependent Service Status
- Agent Service (himds) : running
- GC Service (gcarcservice) : running
- Extension Service (extensionservice) : running
- ```
- If instead you see `Agent Status: Disconnected` or any other status, [file a ticket](#file-a-ticket) with **Summary** as 'Arc agent or extensions service not working' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
- 3. Wait for 10-15 minutes as extension maybe in transitioning status. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
- 4. If not, check if you see any errors in extension logs located at `C:\ProgramData\GuestConfig\extension_logs\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine
- 5. If none of the above works, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
+    1. Open the Azure portal > select your Arc-enabled server > Open **Settings** : **Extensions** from the pane on the left > 'AzureMonitorWindowsAgent' should show up with Status: 'Succeeded'.
+ 2. If not, check if the Arc agent (Connected Machine Agent) is able to connect to Azure and the extension service is running.
+ ```azurecli
+ azcmagent show
+ ```
+       You should see output similar to the following:
+ ```
+ Resource Name : <server name>
+ [...]
+ Dependent Service Status
+ Agent Service (himds) : running
+ GC Service (gcarcservice) : running
+ Extension Service (extensionservice) : running
+ ```
+ If instead you see `Agent Status: Disconnected` or any other status, [file a ticket](#file-a-ticket) with **Summary** as 'Arc agent or extensions service not working' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+    3. Wait 10-15 minutes, because the extension may be in a transitioning state. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
+    4. If not, check whether you see any errors in the extension logs located at `C:\ProgramData\GuestConfig\extension_logs\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine.
+ 5. If none of the above works, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ 3. **Verify that the agent is running**:
- 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
- ```Kusto
- Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
- ```
- 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up.
- 3. If not, check if you see any errors in core agent logs located at `C:\Resources\Directory\AMADataStore\Configuration` on your machine
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
+    1. Check if the agent is emitting heartbeat logs to the Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
+        ```Kusto
+        Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
+        ```
+    2. If not, open Task Manager and check if the 'MonAgentCore.exe' process is running. If it is, wait 5 minutes for the heartbeat to show up.
+    3. If not, check whether you see any errors in the core agent logs located at `C:\Resources\Directory\AMADataStore\Configuration` on your machine.
+    4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
4. **Verify that the DCR exists and is associated with the Arc-enabled server:**
- 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
- 2. On your Arc-enabled server, verify the existence of the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.latest.xml`. If this file doesn't exist, the Arc-enabled server may not be associated with a DCR.
- 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the Arc-enabled server listed here
- 4. If not listed, click 'Add' and select your Arc-enabled server from the resource picker. Repeat across all DCRs.
- 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
+    1. If you're using a Log Analytics workspace as the destination, verify that the DCR exists in the same physical region as the Log Analytics workspace.
+    2. On your Arc-enabled server, verify the existence of the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.latest.xml`. If this file doesn't exist, the Arc-enabled server may not be associated with a DCR.
+    3. Open the Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the Arc-enabled server listed here.
+    4. If not listed, click 'Add' and select your Arc-enabled server from the resource picker. Repeat across all DCRs.
+ 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
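+
+    You can also list the associations from the command line. A sketch using the `Az.Monitor` PowerShell module (assumes an authenticated Az session; the resource ID shown is a placeholder):
+
+    ```powershell
+    # Sketch: list every DCR associated with the Arc-enabled server.
+    Get-AzDataCollectionRuleAssociation -TargetResourceId `
+        '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HybridCompute/machines/<server-name>'
+    ```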
5. **Verify that the agent was able to download the associated DCR(s) from the AMCS service:**
- 1. Check if you see the latest DCR downloaded at this location `C:\Resources\Directory\AMADataStore\mcs\configchunks`
- 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ 1. Check if you see the latest DCR downloaded at this location `C:\Resources\Directory\AMADataStore\mcs\configchunks`
+ 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
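+
+    A quick sketch for this check, listing the most recently downloaded chunks:
+
+    ```powershell
+    # Sketch: confirm DCR config chunks were downloaded recently.
+    Get-ChildItem 'C:\Resources\Directory\AMADataStore\mcs\configchunks' -File |
+        Sort-Object LastWriteTime -Descending |
+        Select-Object -First 5 Name, LastWriteTime
+    ```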
## Issues collecting Performance counters 1. Check that your DCR JSON contains a section for 'performanceCounters'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md). 2. Check that the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'. 3. Open the file and check if it contains `CounterSet` nodes as shown in the example below:
- ```xml
- <CounterSet storeType="Local" duration="PT1M"
+ ```xml
+ <CounterSet storeType="Local" duration="PT1M"
eventName="c9302257006473204344_16355538690556228697" sampleRateInSeconds="15" format="Factored"> <Counter>\Processor(_Total)\% Processor Time</Counter>
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
<Counter>\LogicalDisk(_Total)\Free Megabytes</Counter> <Counter>\PhysicalDisk(_Total)\Avg. Disk Queue Length</Counter> </CounterSet>
- ```
- If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ ```
+ If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
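+
+    Independently of the agent, you can confirm that a configured counter is readable on the machine; a counter that can't be sampled locally can't be collected by the agent either. A sketch using one of the counters from the example above:
+
+    ```powershell
+    # Sketch: sample a configured performance counter locally.
+    Get-Counter -Counter '\Processor(_Total)\% Processor Time' -SampleInterval 1 -MaxSamples 2
+    ```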
### Issues using 'Custom Metrics' as destination 1. Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites). 2. Ensure that the associated DCR is correctly authored to collect performance counters and send them to Azure Monitor metrics. You should see this section in your DCR:
- ```json
- "destinations": {
- "azureMonitorMetrics": {
- "name":"myAmMetricsDest"
- }
- }
- ```
-
+ ```json
+ "destinations": {
+ "azureMonitorMetrics": {
+ "name":"myAmMetricsDest"
+ }
+ }
+ ```
+
3. Run PowerShell command:
- ```powershell
- Get-WmiObject Win32_Process -Filter "name = 'MetricsExtension.Native.exe'" | select Name,ExecutablePath,CommandLine | Format-List
- ```
-
- Verify that the *CommandLine* parameter in the output contains the argument "-TokenSource MSI"
+ ```powershell
+ Get-WmiObject Win32_Process -Filter "name = 'MetricsExtension.Native.exe'" | select Name,ExecutablePath,CommandLine | Format-List
+ ```
+
+    Verify that the *CommandLine* parameter in the output contains the argument "-TokenSource MSI".
4. Verify `C:\Resources\Directory\AMADataStore\mcs\AuthToken-MSI.json` file is present. 5. Verify `C:\Resources\Directory\AMADataStore\mcs\CUSTOMMETRIC_<subscription>_<region>_MonitoringAccount_Configuration.json` file is present. 6. Collect logs by running the command `C:\Packages\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\<version-number>\Monitoring\Agent\table2csv.exe C:\Resources\Directory\AMADataStore\Tables\MaMetricsExtensionEtw.tsf`
- 1. The command will generate the file 'MaMetricsExtensionEtw.csv'
- 2. Open it and look for any Level 2 errors and try to fix them.
+    1. The command generates the file 'MaMetricsExtensionEtw.csv'.
+    2. Open it, look for any Level 2 errors, and try to fix them.
7. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to collect custom metrics' and **Problem type** as 'I need help with Azure Monitor Windows Agent'. ## Issues collecting Windows event logs 1. Check that your DCR JSON contains a section for 'windowsEventLogs'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md). 2. Check that the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'. 3. Open the file and check if it contains `Subscription` nodes as shown in the example below:
- ```xml
- <Subscription eventName="c9302257006473204344_14882095577508259570"
+ ```xml
+ <Subscription eventName="c9302257006473204344_14882095577508259570"
query="System!*[System[(Level = 1 or Level = 2 or Level = 3)]]"> <Column name="ProviderGuid" type="mt:wstr" defaultAssignment="00000000-0000-0000-0000-000000000000"> <Value>/Event/System/Provider/@Guid</Value> </Column>
- ...
-
+ ...
+
</Column> </Subscription>
- ```
- If there are no `Subscription` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ ```
+ If there are no `Subscription` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
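+
+    You can also run the subscription's XPath query directly against the local event log to confirm it matches events. A sketch using the query from the example above:
+
+    ```powershell
+    # Sketch: test the subscription's XPath query against the System log.
+    Get-WinEvent -LogName 'System' -MaxEvents 5 `
+        -FilterXPath '*[System[(Level = 1 or Level = 2 or Level = 3)]]'
+    ```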
[!INCLUDE [azure-monitor-agent-file-a-ticket](../../../includes/azure-monitor-agent/azure-monitor-agent-file-a-ticket.md)]
azure-monitor Azure Monitor Agent Troubleshoot Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorWindowsAgent'should show up with Status: 'Provisioning succeeded'
- 2. If not, check if machine can reach Azure and find the extension to install using the command below:
- ```azurecli
- az vm extension image list-versions --location <machine-region> --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor
- ```
- 3. Wait for 10-15 minutes as extension maybe in transitioning status. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
- 4. If not, check if you see any errors in extension logs located at `C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine
- 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
+    1. Open the Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorWindowsAgent' should show up with Status: 'Provisioning succeeded'.
+    2. If not, check whether the machine can reach Azure and can find the extension to install by using the command below:
+        ```azurecli
+        az vm extension image list-versions --location <machine-region> --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor
+        ```
+    3. Wait 10-15 minutes, because the extension may be in a transitioning state. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
+    4. If not, check whether you see any errors in the extension logs located at `C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine.
+    5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
3. **Verify that the agent is running**:
- 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
- ```Kusto
- Heartbeat | where Category == "Azure Monitor Agent" and 'Computer' == "<computer-name>" | take 10
- ```
- 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up.
- 3. If not, check if you see any errors in core agent logs located at `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Configuration` on your machine
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
+    1. Check if the agent is emitting heartbeat logs to the Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
+        ```Kusto
+        Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
+        ```
+    2. If not, open Task Manager and check if the 'MonAgentCore.exe' process is running. If it is, wait 5 minutes for the heartbeat to show up.
+    3. If not, check whether you see any errors in the core agent logs located at `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Configuration` on your machine.
+    4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
4. **Verify that the DCR exists and is associated with the virtual machine:**
- 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
- 2. On your virtual machine, verify the existence of the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.latest.xml`. If this file doesn't exist:
- - The virtual machine may not be associated with a DCR. See step 3
- - The virtual machine may not have Managed Identity enabled. [See here](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-during-creation-of-a-vm) on how to enable.
- - IMDS service is not running/accessible from the virtual machine. [Check if you can access IMDS from the machine](../../virtual-machines/windows/instance-metadata-service.md?tabs=windows). If not, [file a ticket](#file-a-ticket) with **Summary** as 'IMDS service not running' and **Problem type** as 'I need help configuring data collection from a VM'.
- - AMA cannot access IMDS. Check if you see IMDS errors in `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MAEventTable.tsf` file. If yes, [file a ticket](#file-a-ticket) with **Summary** as 'AMA cannot access IMDS' and **Problem type** as 'I need help configuring data collection from a VM'.
- 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here
- 4. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
- 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
+    1. If you're using a Log Analytics workspace as the destination, verify that the DCR exists in the same physical region as the Log Analytics workspace.
+    2. On your virtual machine, verify the existence of the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.latest.xml`. If this file doesn't exist:
+        - The virtual machine may not be associated with a DCR. See step 3.
+        - The virtual machine may not have Managed Identity enabled. [See here](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-during-creation-of-a-vm) for how to enable it.
+        - The IMDS service isn't running or accessible from the virtual machine. [Check if you can access IMDS from the machine](../../virtual-machines/windows/instance-metadata-service.md?tabs=windows). If not, [file a ticket](#file-a-ticket) with **Summary** as 'IMDS service not running' and **Problem type** as 'I need help configuring data collection from a VM'.
+        - AMA can't access IMDS. Check whether you see IMDS errors in the `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MAEventTable.tsf` file. If yes, [file a ticket](#file-a-ticket) with **Summary** as 'AMA cannot access IMDS' and **Problem type** as 'I need help configuring data collection from a VM'.
+    3. Open the Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here.
+    4. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
+    5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
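+
+    To test IMDS access directly from the VM, a sketch (the `Metadata` header is required, and the request must not go through a proxy):
+
+    ```powershell
+    # Sketch: query IMDS from inside the VM; it's only reachable from the VM itself.
+    Invoke-RestMethod -Method GET -Headers @{ Metadata = 'true' } `
+        -Uri 'http://169.254.169.254/metadata/instance?api-version=2021-02-01'
+    ```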
5. **Verify that the agent was able to download the associated DCR(s) from the AMCS service:**
- 1. Check if you see the latest DCR downloaded at this location `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\configchunks`
- 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ 1. Check if you see the latest DCR downloaded at this location `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\configchunks`
+ 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
+
## Issues collecting Performance counters 1. Check that your DCR JSON contains a section for 'performanceCounters'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md). 2. Check that the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'. 3. Open the file and check if it contains `CounterSet` nodes as shown in the example below:
- ```xml
- <CounterSet storeType="Local" duration="PT1M"
+ ```xml
+ <CounterSet storeType="Local" duration="PT1M"
eventName="c9302257006473204344_16355538690556228697" sampleRateInSeconds="15" format="Factored"> <Counter>\Processor(_Total)\% Processor Time</Counter>
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
<Counter>\LogicalDisk(_Total)\Free Megabytes</Counter> <Counter>\PhysicalDisk(_Total)\Avg. Disk Queue Length</Counter> </CounterSet>
- ```
- If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ ```
+ If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
### Issues using 'Custom Metrics' as destination 1. Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites). 2. Ensure that the associated DCR is correctly authored to collect performance counters and send them to Azure Monitor metrics. You should see this section in your DCR:
- ```json
- "destinations": {
- "azureMonitorMetrics": {
- "name":"myAmMetricsDest"
- }
- }
- ```
+ ```json
+ "destinations": {
+ "azureMonitorMetrics": {
+ "name":"myAmMetricsDest"
+ }
+ }
+ ```
3. Run PowerShell command:
- ```powershell
- Get-WmiObject Win32_Process -Filter "name = 'MetricsExtension.Native.exe'" | select Name,ExecutablePath,CommandLine | Format-List
- ```
- Verify that the *CommandLine* parameter in the output contains the argument "-TokenSource MSI"
+ ```powershell
+ Get-WmiObject Win32_Process -Filter "name = 'MetricsExtension.Native.exe'" | select Name,ExecutablePath,CommandLine | Format-List
+ ```
+    Verify that the *CommandLine* parameter in the output contains the argument "-TokenSource MSI".
4. Verify `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\AuthToken-MSI.json` file is present. 5. Verify `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\CUSTOMMETRIC_<subscription>_<region>_MonitoringAccount_Configuration.json` file is present. 6. Collect logs by running the command `C:\Packages\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\<version-number>\Monitoring\Agent\table2csv.exe C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MaMetricsExtensionEtw.tsf`
- 1. The command will generate the file 'MaMetricsExtensionEtw.csv'
- 2. Open it and look for any Level 2 errors and try to fix them.
+    1. The command generates the file 'MaMetricsExtensionEtw.csv'.
+    2. Open it, look for any Level 2 errors, and try to fix them.
7. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to collect custom metrics' and **Problem type** as 'I need help with Azure Monitor Windows Agent'. ## Issues collecting Windows event logs 1. Check that your DCR JSON contains a section for 'windowsEventLogs'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md). 2. Check that the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'. 3. Open the file and check if it contains `Subscription` nodes as shown in the example below:
- ```xml
- <Subscription eventName="c9302257006473204344_14882095577508259570"
+ ```xml
+ <Subscription eventName="c9302257006473204344_14882095577508259570"
query="System!*[System[(Level = 1 or Level = 2 or Level = 3)]]"> <Column name="ProviderGuid" type="mt:wstr" defaultAssignment="00000000-0000-0000-0000-000000000000"> <Value>/Event/System/Provider/@Guid</Value> </Column>
- ...
-
+ ...
+
</Column> </Subscription>
- ```
- If there are no `Subscription`, nodes then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
-
-
+ ```
+   If there are no `Subscription` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ [!INCLUDE [azure-monitor-agent-file-a-ticket](../../../includes/azure-monitor-agent/azure-monitor-agent-file-a-ticket.md)]
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Here is a comparison between client installer and VM extension for Azure Monitor
| Authentication | Using Managed Identity | Using AAD device token | | Central configuration | Via Data collection rules | Same | | Associating config rules to agents | DCRs associates directly to individual VM resources | DCRs associate to Monitored Object (MO), which maps to all devices within the AAD tenant |
-| Data upload to Log Analytics | Via Log Analytics endpoints | Same |
+| Data upload to Log Analytics | Via Log Analytics endpoints | Same |
| Feature support | All features documented [here](./azure-monitor-agent-overview.md) | Features delivered by the AMA agent extension that don't require additional extensions, including support for Sentinel Windows Event filtering | | [Networking options](./azure-monitor-agent-overview.md#networking) | Proxy support, Private link support | Proxy support only |
Here is a comparison between client installer and VM extension for Azure Monitor
3. The machine must be domain joined to an Azure AD tenant (AADj or Hybrid AADj machines), which enables the agent to fetch Azure AD device tokens used to authenticate and fetch data collection rules from Azure. 4. You may need tenant admin permissions on the Azure AD tenant. 5. The device must have access to the following HTTPS endpoints:
- - global.handler.control.monitor.azure.com
- - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.azure.com)
- - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
+ - global.handler.control.monitor.azure.com
+    - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)
+ - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
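+
+    A sketch to spot-check reachability of these endpoints from the device over port 443 (substitute your region or workspace ID as needed):
+
+    ```powershell
+    # Sketch: confirm the control-plane endpoint is reachable over HTTPS.
+    Test-NetConnection -ComputerName 'global.handler.control.monitor.azure.com' -Port 443
+    ```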
(If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)) 6. A data collection rule you want to associate with the devices. If it doesn't exist already, [create a data collection rule](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule). **Do not associate the rule to any resources yet**. ## Install the agent 1. Download the Windows MSI installer for the agent using [this link](https://go.microsoft.com/fwlink/?linkid=2192409). You can also download it from **Monitor** > **Data Collection Rules** > **Create** experience on Azure portal (shown below):
- [![Diagram shows download agent link on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal-focus.png#lightbox)
+ [![Diagram shows download agent link on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal-focus.png#lightbox)
2. Open an elevated admin command prompt window and change directory to the location where you downloaded the installer. 3. To install with **default settings**, run the following command:
- ```cli
- msiexec /i AzureMonitorAgentClientSetup.msi /qn
- ```
+ ```cli
+ msiexec /i AzureMonitorAgentClientSetup.msi /qn
+ ```
4. To install with custom file paths, [network proxy settings](./azure-monitor-agent-overview.md#proxy-configuration), or on a non-public cloud, use the command below with the values from the following table:
- ```cli
- msiexec /i AzureMonitorAgentClientSetup.msi /qn DATASTOREDIR="C:\example\folder"
- ```
-
- | Parameter | Description |
- |:|:|
- | INSTALLDIR | Directory path where the agent binaries are installed |
- | DATASTOREDIR | Directory path where the agent stores its operational logs and data |
- | PROXYUSE | Must be set to "true" to use proxy |
- | PROXYADDRESS | Set to Proxy Address. PROXYUSE must be set to "true" to be correctly applied |
- | PROXYUSEAUTH | Set to "true" if proxy requires authentication |
- | PROXYUSERNAME | Set to Proxy username. PROXYUSE and PROXYUSEAUTH must be set to "true" |
- | PROXYPASSWORD | Set to Proxy password. PROXYUSE and PROXYUSEAUTH must be set to "true" |
- | CLOUDENV | Set to Cloud. "Azure Commercial", "Azure China", "Azure US Gov", "Azure USNat", or "Azure USSec
+ ```cli
+ msiexec /i AzureMonitorAgentClientSetup.msi /qn DATASTOREDIR="C:\example\folder"
+ ```
+
+ | Parameter | Description |
+ |:|:|
+ | INSTALLDIR | Directory path where the agent binaries are installed |
+ | DATASTOREDIR | Directory path where the agent stores its operational logs and data |
+ | PROXYUSE | Must be set to "true" to use proxy |
+ | PROXYADDRESS | Set to Proxy Address. PROXYUSE must be set to "true" to be correctly applied |
+ | PROXYUSEAUTH | Set to "true" if proxy requires authentication |
+ | PROXYUSERNAME | Set to Proxy username. PROXYUSE and PROXYUSEAUTH must be set to "true" |
+ | PROXYPASSWORD | Set to Proxy password. PROXYUSE and PROXYUSEAUTH must be set to "true" |
+    | CLOUDENV | Set to the cloud: "Azure Commercial", "Azure China", "Azure US Gov", "Azure USNat", or "Azure USSec" |
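+
+    For example, a hypothetical silent install through an authenticating proxy (the proxy address is a placeholder) might combine the parameters above like this:
+
+    ```powershell
+    # Hypothetical example: silent install behind an authenticating proxy.
+    msiexec /i AzureMonitorAgentClientSetup.msi /qn PROXYUSE="true" `
+        PROXYADDRESS="http://proxy.contoso.com:8080" PROXYUSEAUTH="true" `
+        PROXYUSERNAME="<username>" PROXYPASSWORD="<password>"
+    ```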
6. Verify successful installation:
-    - Open **Control Panel** -> **Programs and Features** OR **Settings** -> **Apps** -> **Apps & Features** and ensure you see 'Azure Monitor Agent' listed
-    - Open **Services** and confirm 'Azure Monitor Agent' is listed and shows as **Running**.
+    - Open **Control Panel** -> **Programs and Features** OR **Settings** -> **Apps** -> **Apps & Features** and ensure you see 'Azure Monitor Agent' listed
+    - Open **Services** and confirm 'Azure Monitor Agent' is listed and shows as **Running**.
7. Proceed to create the monitored object that you'll associate data collection rules to, so that the agent actually starts operating. > [!NOTE]
PUT https://management.azure.com/providers/microsoft.insights/providers/microsof
**Request Body** ```JSON {
- "properties":
- {
- "roleDefinitionId":"/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b",
- "principalId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
- }
+ "properties":
+ {
+ "roleDefinitionId":"/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b",
+ "principalId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
+ }
} ```
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
```JSON { "properties":
- {
+ {
"location":"eastus" } }
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
**Request Body** ```JSON {
- "properties":
- {
- "dataCollectionRuleId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{DCRName}"
- }
+ "properties":
+ {
+ "dataCollectionRuleId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{DCRName}"
+ }
} ``` **Body parameters**
In order to update the version, install the new version you wish to update to.
## Troubleshoot ### View agent diagnostic logs 1. Rerun the installation with logging turned on and specify the log file name:
- `Msiexec /I AzureMonitorAgentClientSetup.msi /L*V <log file name>`
+ `Msiexec /I AzureMonitorAgentClientSetup.msi /L*V <log file name>`
2. Runtime logs are collected automatically either at the default location `C:\Resources\Azure Monitor Agent\` or at the file path mentioned during installation.
- - If you can't locate the path, the exact location can be found on the registry as `AMADataRootDirPath` on `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMonitorAgent`.
+ - If you can't locate the path, the exact location can be found on the registry as `AMADataRootDirPath` on `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMonitorAgent`.
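+
+      A sketch to read that value from PowerShell:
+
+      ```powershell
+      # Sketch: read the agent's data root directory from the registry.
+      (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\AzureMonitorAgent').AMADataRootDirPath
+      ```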
3. The 'ServiceLogs' folder contains logs from the AMA Windows Service, which launches and manages AMA processes. 4. 'AzureMonitorAgent.MonitoringDataStore' contains data/logs from AMA processes.
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
To create the data collection rule in the Azure portal:
[ ![Screenshot that shows the Azure portal form to select basic performance counters in a data collection rule.](media/data-collection-iis/iis-data-collection-rule.png)](media/data-collection-iis/iis-data-collection-rule.png#lightbox) 1. Specify a file pattern to identify the directory where the log files are located.
-1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types. For instance, you can select multiple Log Analytics workspaces, which is also known as multihoming.
+1. On the **Destination** tab, add a destination for the data source.
[ ![Screenshot that shows the Azure portal form to add a data source in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
Application Insights JavaScript SDK feature extensions are extra features that c
In this article, we cover the Click Analytics plug-in, which automatically tracks click events on webpages and uses `data-*` attributes or customized tags on HTML elements to populate event telemetry.
-> [!IMPORTANT]
-> If you haven't already, you need to first [enable Azure Monitor Application Insights Real User Monitoring](./javascript-sdk.md) before you enable the Click Analytics plug-in.
+## Prerequisites
+
+[Install the JavaScript SDK](./javascript-sdk.md) before you enable the Click Analytics plug-in.
## What data does the plug-in collect?
Users can set up the Click Analytics Auto-Collection plug-in via JavaScript (Web
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-### 1. Add the code
+### Add the code
#### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript)
-Ignore this setup if you use the npm setup.
-
-```html
-<script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.min.js"></script>
-<script type="text/javascript">
- var clickPluginInstance = new Microsoft.ApplicationInsights.ClickAnalyticsPlugin();
- // Click Analytics configuration
- var clickPluginConfig = {
- autoCapture : true,
- dataTags: {
- useDefaultContentNameOrId: true
- }
- }
- // Application Insights configuration
- var configObj = {
- connectionString: "YOUR_CONNECTION_STRING",
- // Alternatively, you can pass in the instrumentation key,
- // but support for instrumentation key ingestion will end on March 31, 2025.
- // instrumentationKey: "YOUR INSTRUMENTATION KEY",
- extensions: [
- clickPluginInstance
- ],
- extensionConfig: {
- [clickPluginInstance.identifier] : clickPluginConfig
- },
- };
- // Application Insights JavaScript (Web) SDK Loader Script code
- !function(v,y,T){<!-- Removed the JavaScript (Web) SDK Loader Script code for brevity -->}(window,document,{
- src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
- crossOrigin: "anonymous",
- cfg: configObj // configObj is defined above.
- });
-</script>
-```
-
-> [!NOTE]
-> To add or update JavaScript (Web) SDK Loader Script configuration, see [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration).
+1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights.
+
+ ```html
+ <script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.min.js"></script>
+ <script type="text/javascript">
+ var clickPluginInstance = new Microsoft.ApplicationInsights.ClickAnalyticsPlugin();
+ // Click Analytics configuration
+ var clickPluginConfig = {
+ autoCapture : true,
+ dataTags: {
+ useDefaultContentNameOrId: true
+ }
+ }
+ // Application Insights configuration
+ var configObj = {
+ connectionString: "YOUR_CONNECTION_STRING",
+ // Alternatively, you can pass in the instrumentation key,
+ // but support for instrumentation key ingestion will end on March 31, 2025.
+ // instrumentationKey: "YOUR INSTRUMENTATION KEY",
+ extensions: [
+ clickPluginInstance
+ ],
+ extensionConfig: {
+ [clickPluginInstance.identifier] : clickPluginConfig
+ },
+ };
+ // Application Insights JavaScript (Web) SDK Loader Script code
+ !function(v,y,T){<!-- Removed the JavaScript (Web) SDK Loader Script code for brevity -->}(window,document,{
+ src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
+ crossOrigin: "anonymous",
+ cfg: configObj // configObj is defined above.
+ });
+ </script>
+ ```
+
+1. To add or update JavaScript (Web) SDK Loader Script configuration, see [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration).
#### [npm package](#tab/npmpackage)
appInsights.loadAppInsights();
> [!TIP]
-> If you want to add a framework extension or you've already added one, see the [React, React Native, and Angular code samples for how to add the Click Analytics plug-in](./javascript-framework-extensions.md#2-add-the-extension-to-your-code).
+> If you want to add a framework extension or you've already added one, see the [React, React Native, and Angular code samples for how to add the Click Analytics plug-in](./javascript-framework-extensions.md#add-the-extension-to-your-code).
-### 2. (Optional) Set the authenticated user context
+### (Optional) Set the authenticated user context
If you want to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext).
-> [!NOTE]
-> If you're using a HEART workbook with the Click Analytics plug-in, you don't need to set the authenticated user context to see telemetry data. For more information, see the [HEART workbook documentation](./usage-heart.md#confirm-that-data-is-flowing).
+If you're using a HEART workbook with the Click Analytics plug-in, you don't need to set the authenticated user context to see telemetry data. For more information, see the [HEART workbook documentation](./usage-heart.md#confirm-that-data-is-flowing).
## Use the plug-in
Telemetry data generated from the click events are stored as `customEvents` in t
The `name` column of the `customEvent` is populated based on the following rules: 1. The `id` provided in the `data-*-id`, which means it must start with `data` and end with `id`, is used as the `customEvent` name. For example, if the clicked HTML element has the attribute `"data-sample-id"="button1"`, then `"button1"` is the `customEvent` name. 1. If no such attribute exists and if the `useDefaultContentNameOrId` is set to `true` in the configuration, the clicked element's HTML attribute `id` or content name of the element is used as the `customEvent` name. If both `id` and the content name are present, precedence is given to `id`.
- 1. If `useDefaultContentNameOrId` is `false`, the `customEvent` name is `"not_specified"`.
-
- > [!TIP]
- > We recommend setting `useDefaultContentNameOrId` to `true` for generating meaningful data.
+   1. If `useDefaultContentNameOrId` is `false`, the `customEvent` name is `"not_specified"`. We recommend setting `useDefaultContentNameOrId` to `true` to generate meaningful data.
### `parentId` key
The value for `parentId` is fetched based on the following rules:
- If both `data-*-id` and `id` are defined, precedence is given to `data-*-id`. - If `parentDataTag` is defined but the plug-in can't find this tag under the DOM tree, the plug-in uses the `id` or `data-*-id` defined within the element that is closest to the clicked element as `parentId`. However, we recommend defining the `data-{parentDataTag}` or `customDataPrefix-{parentDataTag}` attribute to reduce the number of loops needed to find `parentId`. Declaring `parentDataTag` is useful when you need to use the plug-in with customized options. - If no `parentDataTag` is defined and no `parentId` information is included in the current element, no `parentId` value is collected.
-> [!NOTE]
-> If `parentDataTag` is defined, `useDefaultContentNameOrId` is set to `false`, and only an `id` attribute is defined within the element closest to the clicked element, the `parentId` populates as `"not_specified"`. To fetch the value of `id`, set `useDefaultContentNameOrId` to `true`.
+- If `parentDataTag` is defined, `useDefaultContentNameOrId` is set to `false`, and only an `id` attribute is defined within the element closest to the clicked element, the `parentId` populates as `"not_specified"`. To fetch the value of `id`, set `useDefaultContentNameOrId` to `true`.
When you define the `data-parentid` or `data-*-parentid` attribute, the plug-in fetches the instance of this attribute that is closest to the clicked element, including within the clicked element if applicable. If you declare `parentDataTag` and define the `data-parentid` or `data-*-parentid` attribute, precedence is given to `data-parentid` or `data-*-parentid`.
-> [!NOTE]
-> For examples showing which value is fetched as the `parentId` for different configurations, see [Examples of `parentid` key](#examples-of-parentid-key).
-
-> [!CAUTION]
-> Once `parentDataTag` is included in *any* HTML element across your application *the SDK begins looking for parents tags across your entire application* and not just the HTML element where you used it.
+For examples showing which value is fetched as the `parentId` for different configurations, see [Examples of `parentid` key](#examples-of-parentid-key).
> [!CAUTION]
-> If you're using the HEART workbook with the Click Analytics plug-in, for HEART events to be logged or detected, the tag `parentDataTag` must be declared in all other parts of an end user's application.
+> - Once `parentDataTag` is included in *any* HTML element across your application *the SDK begins looking for parents tags across your entire application* and not just the HTML element where you used it.
+> - If you're using the HEART workbook with the Click Analytics plug-in, for HEART events to be logged or detected, the tag `parentDataTag` must be declared in all other parts of an end user's application.
### `customDataPrefix`
export const clickPluginConfigWithParentDataTag = {
</div> ```
-For example 2, for clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` definition takes precedence.
-> [!NOTE]
-> If the `data-parentid` attribute was defined within the div element with `className=ΓÇ¥test2ΓÇ¥`, the value for `parentId` would still be `parentid2`.
+For example 2, for the clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` definition takes precedence. If the `data-parentid` attribute was defined within the div element with `className="test2"`, the value for `parentId` would still be `parentid2`.
### Example 3
export const clickPluginConfigWithParentDataTag = {
</div> ``` For example 3, for the clicked element `<Button>`, because `parentDataTag` is declared and the `data-parentid` or `data-*-parentid` attribute isn't defined, the value of `parentId` is `test6parent`. It's `test6parent` because when `parentDataTag` is declared, the plug-in fetches the value of the `id` or `data-*-id` attribute from the parent HTML element that is closest to the clicked element. Because `data-group="buttongroup1"` is defined, the plug-in finds the `parentId` more efficiently.
-> [!NOTE]
-> If you remove the `data-group="buttongroup1"` attribute, the value of `parentId` is still `test6parent`, because `parentDataTag` is still declared.
+
+If you remove the `data-group="buttongroup1"` attribute, the value of `parentId` is still `test6parent`, because `parentDataTag` is still declared.
## Troubleshooting
See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap
## Next steps -- [Confirm data is flowing](./javascript-sdk.md#5-confirm-data-is-flowing).
+- [Confirm data is flowing](./javascript-sdk.md#confirm-data-is-flowing).
- See the [documentation on utilizing HEART workbook](usage-heart.md) for expanded product analytics. - See the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection Plug-in. - Use [Events Analysis in the Usage experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
In addition to the core SDK, there are also plugins available for specific frame
These plugins provide extra functionality and integration with the specific framework.
-> [!IMPORTANT]
-> If you haven't already, you need to first [enable Azure Monitor Application Insights Real User Monitoring](./javascript-sdk.md) before you enable a framework extension.
- ## Prerequisites
+- Install the [JavaScript SDK](./javascript-sdk.md).
+ ### [React](#tab/react) None. ### [React Native](#tab/reactnative)
-You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin only works in react-native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/) or Create React Native App, which is based on the Expo framework.
+- You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin only works in react-native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/) or Create React Native App, which is based on the Expo framework.
### [Angular](#tab/angular)
-None.
+- The Angular plugin is NOT ECMAScript 3 (ES3) compatible.
+- When we add support for a new Angular version, our npm package becomes incompatible with down-level Angular versions. Continue to use older npm packages until you're ready to upgrade your Angular version.
The Angular plugin for the Application Insights JavaScript SDK enables:
- Track exceptions - Chain more custom exception handlers
-> [!WARNING]
-> Angular plugin is NOT ECMAScript 3 (ES3) compatible.
-
-> [!IMPORTANT]
-> When we add support for a new Angular version, our NPM package becomes incompatible with down-level Angular versions. Continue to use older NPM packages until you're ready to upgrade your Angular version.
- ## Add a plug-in To add a plug-in, follow the steps in this section.
-### 1. Install the package
+### Install the package
#### [React](#tab/react)
npm install @microsoft/applicationinsights-angularplugin-js
-### 2. Add the extension to your code
+### Add the extension to your code
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
npm install @microsoft/applicationinsights-angularplugin-js
Initialize a connection to Application Insights:
-> [!TIP]
-> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [reactPlugin],`.
- ```javascript import React from 'react'; import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
import { createBrowserHistory } from "history"; const browserHistory = createBrowserHistory({ basename: '' }); var reactPlugin = new ReactPlugin();
-// Add the Click Analytics plug-in.
+// *** Add the Click Analytics plug-in. ***
/* var clickPluginInstance = new ClickAnalyticsPlugin(); var clickPluginConfig = { autoCapture: true
var reactPlugin = new ReactPlugin();
var appInsights = new ApplicationInsights({ config: { connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- // If you're adding the Click Analytics plug-in, delete the next line.
+ // *** If you're adding the Click Analytics plug-in, delete the next line. ***
extensions: [reactPlugin],
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
// extensions: [reactPlugin, clickPluginInstance], extensionConfig: { [reactPlugin.identifier]: { history: browserHistory }
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
// [clickPluginInstance.identifier]: clickPluginConfig } }
var appInsights = new ApplicationInsights({
appInsights.loadAppInsights(); ```
-> [!TIP]
-> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
- #### [React Native](#tab/reactnative) - **React Native Plug-in** To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance.
- > [!TIP]
- > If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [RNPlugin]`.
- ```typescript import { ApplicationInsights } from '@microsoft/applicationinsights-web'; import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
// import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js'; var RNPlugin = new ReactNativePlugin();
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
/* var clickPluginInstance = new ClickAnalyticsPlugin(); var clickPluginConfig = { autoCapture: true
appInsights.loadAppInsights();
var appInsights = new ApplicationInsights({ config: { connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- // If you're adding the Click Analytics plug-in, delete the next line.
+ // *** If you're adding the Click Analytics plug-in, delete the next line. ***
extensions: [RNPlugin]
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
/*
    extensions: [RNPlugin, clickPluginInstance],
    extensionConfig: {
      [clickPluginInstance.identifier]: clickPluginConfig
appInsights.loadAppInsights();
```
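Put together, a minimal React Native setup is sketched below (connection string placeholder; the commented Click Analytics lines omitted):

```javascript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';

// Construct the plug-in and pass it as an extension.
var RNPlugin = new ReactNativePlugin();
var appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', // placeholder
    extensions: [RNPlugin]
  }
});
appInsights.loadAppInsights();
```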
- > [!TIP]
- > If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
---

- **React Native Manual Device Plugin**

To use this plugin, you must either disable automatic device info collection or use your own device info collection class after you add the extension to your code.
Set up an instance of Application Insights in the entry component in your app:
> [!IMPORTANT]
> When you use the ErrorService, there's an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. You MUST include either the `@microsoft/applicationinsights-web` package or the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled exceptions caught by the error service aren't sent.
-> [!TIP]
-> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [angularPlugin],`.
-
```js
import { Component } from '@angular/core';
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js';
-// Add the Click Analytics plug-in.
+// *** Add the Click Analytics plug-in. ***
// import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
import { Router } from '@angular/router';
export class AppComponent {
private router: Router
  ) {
    var angularPlugin = new AngularPlugin();
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
/*
    var clickPluginInstance = new ClickAnalyticsPlugin();
    var clickPluginConfig = {
      autoCapture: true
export class AppComponent {
const appInsights = new ApplicationInsights({
      config: {
        connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- // If you're adding the Click Analytics plug-in, delete the next line.
+ // *** If you're adding the Click Analytics plug-in, delete the next line. ***
extensions: [angularPlugin],
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
// extensions: [angularPlugin, clickPluginInstance],
        extensionConfig: {
          [angularPlugin.identifier]: { router: this.router }
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
// [clickPluginInstance.identifier]: clickPluginConfig
        }
      }
export class AppComponent {
}
```
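Consolidated, the Angular setup is sketched below. The `@Component` decorator is a minimal shell added for completeness (an assumption, since the fragments above only show the class body), and the connection string is a placeholder:

```js
import { Component } from '@angular/core';
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js';
import { Router } from '@angular/router';

@Component({ selector: 'app-root', templateUrl: './app.component.html' }) // hypothetical component metadata
export class AppComponent {
  constructor(private router: Router) {
    var angularPlugin = new AngularPlugin();
    const appInsights = new ApplicationInsights({
      config: {
        connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', // placeholder
        extensions: [angularPlugin],
        extensionConfig: {
          [angularPlugin.identifier]: { router: this.router }
        }
      }
    });
    appInsights.loadAppInsights();
  }
}
```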
-> [!TIP]
-> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
-
+### (Optional) Add the Click Analytics plug-in
+
+If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md):
+
+1. Uncomment the lines for Click Analytics.
+1. Do one of the following, depending on which plug-in you're adding:
+
+ - For React, delete `extensions: [reactPlugin],`.
+ - For React Native, delete `extensions: [RNPlugin]`.
+ - For Angular, delete `extensions: [angularPlugin],`.
+
+1. See [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
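Applied to the React configuration, the result of these steps might look like the following sketch. It reuses `reactPlugin` and `browserHistory` from the earlier React example, and `autoCapture: true` is the only Click Analytics option shown in the fragments above:

```javascript
import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';

// Click Analytics lines uncommented, per step 1.
var clickPluginInstance = new ClickAnalyticsPlugin();
var clickPluginConfig = { autoCapture: true };

var appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', // placeholder
    // `extensions: [reactPlugin],` deleted, per step 2.
    extensions: [reactPlugin, clickPluginInstance],
    extensionConfig: {
      [reactPlugin.identifier]: { history: browserHistory },
      [clickPluginInstance.identifier]: clickPluginConfig
    }
  }
});
appInsights.loadAppInsights();
```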
+
## Configuration

This section covers configuration settings for the framework extensions for the Application Insights JavaScript SDK.
To chain more custom exception handlers:
#### [React](#tab/react)
-N/A
-
-> [!NOTE]
-> The device information, which includes Browser, OS, version, and language, is already being collected by the Application Insights web package.
+The device information, which includes Browser, OS, version, and language, is already being collected by the Application Insights web package.
#### [React Native](#tab/reactnative)
N/A
#### [Angular](#tab/angular)
-N/A
-
-> [!NOTE]
-> The device information, which includes Browser, OS, version, and language, is already being collected by the Application Insights web package.
+The device information, which includes Browser, OS, version, and language, is already being collected by the Application Insights web package.
customMetrics
| summarize avg(value), count() by tostring(customDimensions["Component Name"])
```
-> [!NOTE]
-> It can take up to 10 minutes for new custom metrics to appear in the Azure portal.
+It can take up to 10 minutes for new custom metrics to appear in the Azure portal.
#### Use Application Insights with React Context
Check out the [Application Insights Angular demo](https://github.com/microsoft/a
## Next steps

-- [Confirm data is flowing](javascript-sdk.md#5-confirm-data-is-flowing).
+- [Confirm data is flowing](javascript-sdk.md#confirm-data-is-flowing).
azure-monitor Javascript Sdk Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-configuration.md
The Azure Application Insights JavaScript SDK provides configuration for trackin
These configuration fields are optional and default to false unless otherwise stated.
-| Name | Type | Default | Description |
-|------|------|---------|-------------|
-| accountId | string | null | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars |
-| addRequestContext | (requestContext: IRequestContext) => {[key: string]: any} | undefined | Provide a way to enrich dependency logs with context at the beginning of an API call. Default is undefined. You need to check if `xhr` exists if you configure `xhr`-related context. You need to check if `fetch request` and `fetch response` exist if you configure `fetch`-related context. Otherwise you may not get the data you need. |
-| ajaxPerfLookupDelay | numeric | 25 | Defaults to 25 ms. The amount of time to wait before reattempting to find the window.performance timings for an Ajax request. Time is in milliseconds and is passed directly to setTimeout(). |
-| appId | string | null | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it can't be used automatically, but can be set manually in the configuration. Default is null |
-| autoTrackPageVisitTime | boolean | false | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. |
-| convertUndefined | `any` | undefined | Provides an option to convert undefined fields to a user-defined value. |
-| cookieCfg | [ICookieCfgConfig](#cookie-management)<br>[Optional]<br>(Since 2.6.0) | undefined | Defaults to cookie usage enabled. See [ICookieCfgConfig](#cookie-management) settings for full defaults. |
-| cookieDomain | alias for [`cookieCfg.domain`](#cookie-management)<br>[Optional] | null | Custom cookie domain. It's helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it takes precedence over this value. |
-| cookiePath | alias for [`cookieCfg.path`](#cookie-management)<br>[Optional]<br>(Since 2.6.0) | null | Custom cookie path. It's helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it takes precedence. |
-| correlationHeaderDomains | string[] | undefined | Enable correlation headers for specific domains |
-| correlationHeaderExcludedDomains | string[] | undefined | Disable correlation headers for specific domains |
-| correlationHeaderExcludePatterns | regex[] | undefined | Disable correlation headers using regular expressions |
-| createPerfMgr | (core: IAppInsightsCore, notification
-| customHeaders | `[{header: string, value: string}]` | undefined | The ability for the user to provide extra headers when using a custom endpoint. Custom headers aren't added when the beacon sender is used during browser shutdown, and adding custom headers isn't supported on IE9 or earlier. |
-| diagnosticLogInterval | numeric | 10000 | (internal) Polling interval (in ms) for internal logging queue |
-| disableAjaxTracking | boolean | false | If true, Ajax calls aren't autocollected. Default is false. |
-| disableCookiesUsage | alias for [`cookieCfg.enabled`](#cookie-management)<br>[Optional] | false | Default false. A boolean that indicates whether to disable the use of cookies by the SDK. If true, the SDK doesn't store or read any data from cookies.<br>(Since v2.6.0) If `cookieCfg.enabled` is defined it takes precedence. Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). |
-| disableCorrelationHeaders | boolean | false | If false, the SDK adds two headers ('Request-Id' and 'Request-Context') to all dependency requests to correlate them with corresponding requests on the server side. Default is false. |
-| disableDataLossAnalysis | boolean | true | If false, internal telemetry sender buffers are checked at startup for items not yet sent. |
-| disableExceptionTracking | boolean | false | If true, exceptions aren't autocollected. Default is false. |
-| disableFetchTracking | boolean | false | The default setting for `disableFetchTracking` is `false`, meaning it's enabled. However, in versions prior to 2.8.10, it was disabled by default. When set to `true`, Fetch requests aren't automatically collected. The default setting changed from `true` to `false` in version 2.8.0. |
-| disableFlushOnBeforeUnload | boolean | false | Default false. If true, flush method isn't called when onBeforeUnload event triggers |
-| disableIkeyDeprecationMessage | boolean | true | Disable instrumentation Key deprecation error message. If true, error messages are NOT sent.
-| disableInstrumentationKeyValidation | boolean | false | If true, instrumentation key validation check is bypassed. Default value is false.
-| disableTelemetry | boolean | false | If true, telemetry isn't collected or sent. Default is false. |
-| disableXhr | boolean | false | Don't use XMLHttpRequest or XDomainRequest (for IE < 9) by default; instead, attempt to use fetch() or sendBeacon(). If no other transport is available, XMLHttpRequest is used. |
-| distributedTracingMode | numeric or `DistributedTracingModes` | `DistributedTracingModes.AI_AND_W3C` | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) are generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services.
-| enableAjaxErrorStatusText | boolean | false | Default false. If true, include response error data text boolean in dependency event on failed AJAX requests. |
-| enableAjaxPerfTracking | boolean | false | Default false. Flag to enable looking up and including extra browser window.performance timings in the reported Ajax (XHR and fetch) reported metrics.
-| enableAutoRouteTracking | boolean | false | Automatically track route changes in Single Page Applications (SPA). If true, each route change sends a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.<br>***Note***: If you enable this field, don't enable the `history` object for [React router configuration](./javascript-framework-extensions.md?tabs=react#track-router-history) because you'll get multiple page view events.
-| enableCorsCorrelation | boolean | false | If true, the SDK adds two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. Default is false |
-| enableDebug | boolean | false | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting results in dropped telemetry whenever an internal error occurs. It can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. |
-| enablePerfMgr | boolean | false | When enabled (true) it creates local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). It can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code.
-| enableRequestHeaderTracking | boolean | false | If true, AJAX & Fetch request headers are tracked. Default is false. If ignoreHeaders isn't configured, Authorization and X-API-Key headers aren't logged. |
-| enableResponseHeaderTracking | boolean | false | If true, AJAX & Fetch response headers are tracked. Default is false. If ignoreHeaders isn't configured, the WWW-Authenticate header isn't logged. |
-| enableSessionStorageBuffer | boolean | true | Default true. If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load |
-| enableUnhandledPromiseRejectionTracking | boolean | false | If true, unhandled promise rejections are autocollected as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value is ignored and unhandled promise rejections aren't reported.
-| eventsLimitInMem | number | 10000 | The number of events that can be kept in memory before the SDK starts to drop events when not using Session Storage (the default).
-| excludeRequestFromAutoTrackingPatterns | string[] \| RegExp[] | undefined | Provide a way to exclude specific route from automatic tracking for XMLHttpRequest or Fetch request. If defined, for an Ajax / fetch request that the request url matches with the regex patterns, auto tracking is turned off. Default is undefined. |
-| idLength | numeric | 22 | Identifies the default length used to generate new random session and user IDs. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set the value to 5.
-| ignoreHeaders | string[] | ["Authorization", "X-API-Key", "WWW-Authenticate"] | AJAX & Fetch request and response headers to be ignored in log data. To override or discard the default, add an array with all headers to be excluded or an empty array to the configuration.
-| isBeaconApiDisabled | boolean | true | If false, the SDK sends all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) |
-| isBrowserLinkTrackingEnabled | boolean | false | Default is false. If true, the SDK tracks all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. |
-| isRetryDisabled | boolean | false | Default false. If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) |
-| isStorageUseDisabled | boolean | false | If true, the SDK doesn't store or read any data from local and session storage. Default is false. |
-| loggingLevelConsole | numeric | 0 | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) |
-| loggingLevelTelemetry | numeric | 1 | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) |
-| maxAjaxCallsPerView | numeric | 500 | Default 500 - controls how many Ajax calls are monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. |
-| maxAjaxPerfLookupAttempts | numeric | 3 | Defaults to 3. The maximum number of times to look for the window.performance timings (if available). Not all browsers populate window.performance before reporting the end of the XHR request; for fetch requests, timings are added after the request completes. |
-| maxBatchInterval | numeric | 15000 | How long to batch telemetry for before sending (milliseconds) |
-| maxBatchSizeInBytes | numeric | 10000 | Max size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started |
-| namePrefix | string | undefined | An optional value used as a name postfix for the localStorage and session cookie names. |
-| onunloadDisableBeacon | boolean | false | Default false. When the tab is closed, the SDK sends all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon). |
-| onunloadDisableFetch | boolean | false | If fetch keepalive is supported, don't use it to send events during unload; the SDK may still fall back to fetch() without keepalive. |
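As a usage illustration only, these fields are set on the `config` object at initialization. A minimal sketch using a few of the options from the table above (values chosen for illustration; the connection string is a placeholder):

```javascript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', // placeholder
    enableAutoRouteTracking: true, // send a page view on SPA route changes
    loggingLevelConsole: 1,        // log critical internal errors to the console
    maxBatchInterval: 15000,       // batch telemetry for up to 15 s before sending (default)
    cookieCfg: { enabled: true }   // cookie usage enabled (the default)
  }
});
appInsights.loadAppInsights();
```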