Updates from: 08/01/2023 01:27:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Network Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/network-considerations.md
Previously updated : 03/14/2023 Last updated : 07/31/2023
A managed domain connects to a subnet in an Azure virtual network. Design this s
* A managed domain requires 3-5 IP addresses. Make sure that your subnet IP address range can provide this number of addresses.
* Restricting the available IP addresses can prevent the managed domain from maintaining two domain controllers.
+ >[!NOTE]
+ >You shouldn't use public IP addresses for virtual networks and their subnets due to the following issues:
+ >
+ >- **IP address scarcity**: IPv4 public IP addresses are limited, and demand often exceeds the available supply. Public ranges can also overlap with IP addresses already in use by public endpoints.
+ >- **Security risks**: Using public IPs for virtual networks exposes your devices directly to the internet, increasing the risk of unauthorized access and potential attacks. Without proper security measures, your devices may become vulnerable to various threats.
+ >
+ >- **Complexity**: Managing a virtual network with public IPs can be more complex than using private IPs, as it requires dealing with external IP ranges and ensuring proper network segmentation and security.
+ >
+ >We strongly recommend using private IP addresses. If you must use public IP addresses, ensure that you own, or are the dedicated user of, the chosen IPs in that public range.
+ The following example diagram outlines a valid design where the managed domain has its own subnet, there's a gateway subnet for external connectivity, and application workloads are in a connected subnet within the virtual network:

  ![Recommended subnet design](./media/active-directory-domain-services-design-guide/vnet-subnet-design.png)
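A minimal sketch of that subnet guidance with Az PowerShell follows; the resource group, region, and the 10.0.0.0/16 private address space are placeholder assumptions, not values from the article:

```powershell
# Create a virtual network with a private (RFC 1918) address space and a dedicated subnet for the managed domain
$subnet = New-AzVirtualNetworkSubnetConfig -Name "aadds-subnet" -AddressPrefix "10.0.0.0/24"
New-AzVirtualNetwork -Name "aadds-vnet" -ResourceGroupName "myResourceGroup" `
    -Location "westus2" -AddressPrefix "10.0.0.0/16" -Subnet $subnet
```

A /24 subnet easily covers the 3-5 addresses the managed domain needs while leaving room for separate workload subnets.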
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
Previously updated : 01/29/2023 Last updated : 07/31/2023 #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To quickly create a managed domain, you can select **Review + create** to accept
* Creates a subnet named *aadds-subnet* using the IP address range of *10.0.2.0/24*.
* Synchronizes *All* users from Azure AD into the managed domain.
+>[!NOTE]
+>You shouldn't use public IP addresses for virtual networks and their subnets due to the following issues:
+>
+>- **IP address scarcity**: IPv4 public IP addresses are limited, and demand often exceeds the available supply. Public ranges can also overlap with IP addresses already in use by public endpoints.
+>- **Security risks**: Using public IPs for virtual networks exposes your devices directly to the internet, increasing the risk of unauthorized access and potential attacks. Without proper security measures, your devices may become vulnerable to various threats.
+>
+>- **Complexity**: Managing a virtual network with public IPs can be more complex than using private IPs, as it requires dealing with external IP ranges and ensuring proper network segmentation and security.
+>
+>We strongly recommend using private IP addresses. If you must use public IP addresses, ensure that you own, or are the dedicated user of, the chosen IPs in that public range.
+ Select **Review + create** to accept these default configuration options.

## Deploy the managed domain
active-directory Application Provisioning Configuration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-configuration-api.md
Content-type: application/json
{ "value": [ {
- "id": "8b1025e4-1dd2-430b-a150-2ef79cd700f5",
+ "id": "8b1025e4-1dd2-430b-a150-2ef79cd700f5",
"displayName": "AWS Single-Account Access", "homePageUrl": "http://aws.amazon.com/", "supportedSingleSignOnModes": [
active-directory Application Provisioning When Will Provisioning Finish Specific User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md
Summary of factors that influence the time it takes to complete an **initial cyc
- Whether users in scope for provisioning are matched to existing users in the target application, or need to be created for the first time. Sync jobs for which all users are created for the first time take about *twice as long* as sync jobs for which all users are matched to existing users. -- Number of errors in the [provisioning logs](check-status-user-account-provisioning.md). Performance is slower if there are many errors and the provisioning service has gone into a quarantine state.
+- Number of errors in the [provisioning logs](check-status-user-account-provisioning.md). Performance is slower if there are many errors and the provisioning service has gone into a quarantine state.
- Request rate limits and throttling implemented by the target system. Some target systems implement request rate limits and throttling, which can impact performance during large sync operations. Under these conditions, an app that receives too many requests too fast might slow its response rate or close the connection. To improve performance, the connector needs to adjust by not sending the app requests faster than the app can process them. Provisioning connectors built by Microsoft make this adjustment.
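As a rough way to gauge the error volume mentioned above, the sketch below pulls recent provisioning events with the Microsoft Graph PowerShell SDK and counts the failures; the module, scope, and property names are assumptions rather than part of the article:

```powershell
Install-Module Microsoft.Graph.Reports -Scope CurrentUser
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Fetch recent provisioning log entries and count the ones that ended in failure
$events = Get-MgAuditLogProvisioning -Top 200
($events | Where-Object { $_.ProvisioningStatusInfo.Status -eq 'failure' }).Count
```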
active-directory Inbound Provisioning Api Grant Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-grant-access.md
This section describes how you can assign the necessary permissions to a managed
[![Screenshot of managed identity name.](media/inbound-provisioning-api-grant-access/managed-identity-name.png)](media/inbound-provisioning-api-grant-access/managed-identity-name.png#lightbox)
-1. Run the following PowerShell script to assign permissions to your managed identity.
+1. Run the following PowerShell script to assign permissions to your managed identity.
+   ```powershell
+   Install-Module Microsoft.Graph -Scope CurrentUser
+   Connect-MgGraph -Scopes "Application.Read.All","AppRoleAssignment.ReadWrite.All","RoleManagement.ReadWrite.Directory"
+   Select-MgProfile Beta
+   $graphApp = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
This section describes how you can assign the necessary permissions to a managed
    $managedID = Get-MgServicePrincipal -Filter "DisplayName eq 'CSV2SCIMBulkUpload'"
    New-MgServicePrincipalAppRoleAssignment -PrincipalId $managedID.Id -ServicePrincipalId $managedID.Id -ResourceId $graphApp.Id -AppRoleId $AppRole.Id
    ```
-1. To confirm that the permission was applied, find the managed identity service principal under **Enterprise Applications** in Azure AD. Remove the **Application type** filter to see all service principals.
+1. To confirm that the permission was applied, find the managed identity service principal under **Enterprise Applications** in Azure AD. Remove the **Application type** filter to see all service principals.
[![Screenshot of managed identity principal.](media/inbound-provisioning-api-grant-access/managed-identity-principal.png)](media/inbound-provisioning-api-grant-access/managed-identity-principal.png#lightbox) 1. Click on the **Permissions** blade under **Security**. Ensure the permission is set. [![Screenshot of managed identity permissions.](media/inbound-provisioning-api-grant-access/managed-identity-permissions.png)](media/inbound-provisioning-api-grant-access/managed-identity-permissions.png#lightbox)
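If you prefer to verify from PowerShell instead of the portal, a minimal sketch using the same Microsoft Graph session might look like this (the `CSV2SCIMBulkUpload` display name comes from the script above):

```powershell
# List the app roles granted to the managed identity's service principal
$managedID = Get-MgServicePrincipal -Filter "DisplayName eq 'CSV2SCIMBulkUpload'"
Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $managedID.Id |
    Format-Table AppRoleId, ResourceDisplayName, CreatedDateTime
```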
active-directory Inbound Provisioning Api Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-powershell.md
The PowerShell sample script published in the [Microsoft Entra ID inbound provis
- Test-ScriptCommands.ps1 (sample usage commands)
- UseClientCertificate.ps1 (script to generate self-signed certificate and upload it as service principal credential for use in OAuth flow)
- `Sample1` (folder with more examples of how CSV file columns can be mapped to SCIM standard attributes. If you get different CSV files for employees, contractors, interns, you can create a separate AttributeMapping.psd1 file for each entity.)
-1. Download and install the latest version of PowerShell.
-1. Run the command to enable execution of remote signed scripts:
+1. Download and install the latest version of PowerShell.
+1. Run the command to enable execution of remote signed scripts:
   ```powershell
   set-executionpolicy remotesigned
   ```
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
In this example, the users and or groups are created in a cloud HR application l
![Picture 2](./media/plan-auto-user-provisioning/workdayprovisioning.png)
-1. **HR team** performs the transactions in the cloud HR app tenant.
-2. **Azure AD provisioning service** runs the scheduled cycles from the cloud HR app tenant and identifies changes that need to be processed for sync with AD.
-3. **Azure AD provisioning service** invokes the Azure AD Connect provisioning agent with a request payload containing AD account create/update/enable/disable operations.
-4. **Azure AD Connect provisioning agent** uses a service account to manage AD account data.
-5. **Azure AD Connect** runs delta sync to pull updates in AD.
-6. **AD** updates are synced with Azure AD.
-7. **Azure AD provisioning service** writebacks email attribute and username from Azure AD to the cloud HR app tenant.
+1. **HR team** performs the transactions in the cloud HR app tenant.
+2. **Azure AD provisioning service** runs the scheduled cycles from the cloud HR app tenant and identifies changes that need to be processed for sync with AD.
+3. **Azure AD provisioning service** invokes the Azure AD Connect provisioning agent with a request payload containing AD account create/update/enable/disable operations.
+4. **Azure AD Connect provisioning agent** uses a service account to manage AD account data.
+5. **Azure AD Connect** runs delta sync to pull updates in AD.
+6. **AD** updates are synced with Azure AD.
+7. **Azure AD provisioning service** writes back the email attribute and username from Azure AD to the cloud HR app tenant.
## Plan the deployment project
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Once schema extensions are created, these extension attributes are automatically
When you have more than 1,000 service principals, you may find extensions missing in the source attribute list. If an attribute you've created doesn't automatically appear, then verify the attribute was created and add it manually to your schema. To verify it was created, use Microsoft Graph and [Graph Explorer](/graph/graph-explorer/graph-explorer-overview). To add it manually to your schema, see [Editing the list of supported attributes](customize-application-attributes.md#editing-the-list-of-supported-attributes).

### Create an extension attribute for cloud only users using Microsoft Graph
-You can extend the schema of Azure AD users using [Microsoft Graph](/graph/overview).
+You can extend the schema of Azure AD users using [Microsoft Graph](/graph/overview).
First, list the apps in your tenant to get the ID of the app you're working on. To learn more, see [List extensionProperties](/graph/api/application-list-extensionproperty).
Content-type: application/json
"name": "extensionName", "dataType": "string", "targetObjects": [
- "User"
+ "User"
] } ```
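If you'd rather make this call from PowerShell than with a raw Graph request, a sketch with the Microsoft Graph PowerShell SDK follows; the application object ID is a placeholder, and the extension name mirrors the example body above:

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Create the directory extension on the application that will own it
New-MgApplicationExtensionProperty -ApplicationId "<application-object-id>" `
    -Name "extensionName" -DataType "String" -TargetObjects @("User")
```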
GET https://graph.microsoft.com/v1.0/users/{id}?$select=displayName,extension_in
### Create an extension attribute on a cloud only user using PowerShell
-Create a custom extension using PowerShell and assign a value to a user.
+Create a custom extension using PowerShell and assign a value to a user.
```
-#Connect to your Azure AD tenant
+#Connect to your Azure AD tenant
Connect-AzureAD

#Create an application (you can instead use an existing application if you would like)
Cloud sync will automatically discover your extensions in on-premises Active Dir
4. Select the configuration to which you want to add the extension attribute and mapping.
5. Under **Manage attributes** select **click to edit mappings**.
6. Click **Add attribute mapping**. The attributes will automatically be discovered.
-7. The new attributes will be available in the drop-down under **source attribute**.
+7. The new attributes will be available in the drop-down under **source attribute**.
8. Fill in the type of mapping you want and click **Apply**. [![Custom attribute mapping](media/user-provisioning-sync-attributes-for-mapping/schema-1.png)](media/user-provisioning-sync-attributes-for-mapping/schema-1.png#lightbox)
If users who will access the applications originate in on-premises Active Direct
1. Open the Azure AD Connect wizard, choose Tasks, and then choose **Customize synchronization options**. ![Azure Active Directory Connect wizard Additional tasks page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-customize.png)
-
-2. Sign in as an Azure AD Global Administrator.
+
+2. Sign in as an Azure AD Global Administrator.
3. On the **Optional Features** page, select **Directory extension attribute sync**.
-
+ ![Azure Active Directory Connect wizard Optional features page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-directory-extension-attribute-sync.png) 4. Select the attribute(s) you want to extend to Azure AD.
If users who will access the applications originate in on-premises Active Direct
![Screenshot that shows the "Directory extensions" selection page](./media/user-provisioning-sync-attributes-for-mapping/active-directory-connect-directory-extensions.png) 5. Finish the Azure AD Connect wizard and allow a full synchronization cycle to run. When the cycle is complete, the schema is extended and the new values are synchronized between your on-premises AD and Azure AD.
-
+ 6. In the Azure portal, while you're [editing user attribute mappings](customize-application-attributes.md), the **Source attribute** list will now contain the added attribute in the format `<attributename> (extension_<appID>_<attributename>)`, where appID is the identifier of a placeholder application in your tenant. Select the attribute and map it to the target application for provisioning. ![Azure Active Directory Connect wizard Directory extensions selection page](./media/user-provisioning-sync-attributes-for-mapping/attribute-mapping-extensions.png) > [!NOTE]
-> The ability to provision reference attributes from on-premises AD, such as **managedby** or **DN/DistinguishedName**, is not supported today. You can request this feature on [User Voice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
+> The ability to provision reference attributes from on-premises AD, such as **managedby** or **DN/DistinguishedName**, is not supported today. You can request this feature on [User Voice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
## Next steps
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
# What is app provisioning in Azure Active Directory?

In Azure Active Directory (Azure AD), the term *app provisioning* refers to automatically creating user identities and roles for applications.
-
+ ![Diagram that shows provisioning scenarios.](../governance/media/what-is-provisioning/provisioning.png) Azure AD application provisioning refers to automatically creating user identities and roles in the applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into SaaS applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and many more.
active-directory Application Proxy Azure Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-azure-front-door.md
This article guides you through the steps to securely expose a web application o
### Application Proxy Configuration Follow these steps to configure Application Proxy for Front Door:
-1. Install connector for the location that your app instances will be in (For example US West). For the connector group assign the connector to the right region (For example North America).
-2. Set up your app instance with Application Proxy as follows:
+1. Install connector for the location that your app instances will be in (For example US West). For the connector group assign the connector to the right region (For example North America).
+2. Set up your app instance with Application Proxy as follows:
- Set the Internal URL to the address users access the app from the internal network, for example contoso.org
- Set the External URL to the domain address you want the users to access the app from. For this you must configure a custom domain for our application here, for example, contoso.org. Reference: [Custom domains in Azure Active Directory Application Proxy][appproxy-custom-domain]
- Assign the application to the appropriate connector group (For example: North America)
- Note down the URL generated by Application Proxy to access the application. For example, contoso.msappproxy.net
- For the application configure a CNAME entry in your DNS provider which points the external URL to the Front Door's endpoint, for example 'contoso.org' to contoso.msappproxy.net
-3. In the Front Door service, utilize the URL generated for the app by Application Proxy as a backend for the backend pool. For example, contoso.msappproxy.net
+3. In the Front Door service, utilize the URL generated for the app by Application Proxy as a backend for the backend pool. For example, contoso.msappproxy.net
#### Sample Application Proxy Configuration

The following table shows a sample Application Proxy configuration. The sample scenario uses the sample application domain www.contoso.org as the External URL.
The configuration steps that follow refer to the following definitions:
- Origin host header: This represents the host header value being sent to the backend for each request. For example, contoso.org. For more information refer here: [Origins and origin groups - Azure Front Door][front-door-origin]

Follow these steps to configure the Front Door Service (Standard):
-1. Create a Front Door (Standard) with the configuration below:
+1. Create a Front Door (Standard) with the configuration below:
- Add an Endpoint name for generating the Front Door's default domain, i.e. azurefd.net. For example, contoso-nam that generated the Endpoint hostname contoso-nam.azurefd.net
- Add an Origin Type for the type of backend resource. For example, Custom here for the Application Proxy resource
- Add an Origin host name to represent the backend host name. For example, contoso.msappproxy.net
- Optional: Enable Caching for the routing rule for Front Door to cache your static content.
-2. Verify if the deployment is complete and the Front Door Service is ready
-3. To give your Front Door service a user-friendly domain host name URL, create a CNAME record with your DNS provider for your Application Proxy External URL that points to Front Door's domain host name (generated by the Front Door service). For example, contoso.org points to contoso.azurefd.net Reference: [How to add a custom domain - Azure Front Door][front-door-custom-domain]
-4. As per the reference, on the Front Door Service Dashboard navigate to Front Door Manager and add a Domain with the Custom Hostname. For example, contoso.org
-5. Navigate to the Origin groups in the Front Door Service Dashboard, select the origin name and validate the Origin host header matches the domain of the backend. For example here the Origin host header should be: contoso.org
+2. Verify if the deployment is complete and the Front Door Service is ready
+3. To give your Front Door service a user-friendly domain host name URL, create a CNAME record with your DNS provider for your Application Proxy External URL that points to Front Door's domain host name (generated by the Front Door service), as shown in the sketch after these steps. For example, contoso.org points to contoso.azurefd.net. Reference: [How to add a custom domain - Azure Front Door][front-door-custom-domain]
+4. As per the reference, on the Front Door Service Dashboard navigate to Front Door Manager and add a Domain with the Custom Hostname. For example, contoso.org
+5. Navigate to the Origin groups in the Front Door Service Dashboard, select the origin name and validate the Origin host header matches the domain of the backend. For example here the Origin host header should be: contoso.org
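A sketch of the CNAME record from step 3, assuming the zone happens to be hosted in Azure DNS; the record name, zone, resource group, and endpoint host are placeholders:

```powershell
# Point the custom host name at the Front Door endpoint host name
New-AzDnsRecordSet -Name "www" -RecordType CNAME -ZoneName "contoso.org" `
    -ResourceGroupName "myResourceGroup" -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Cname "contoso-nam.azurefd.net")
```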
| | Configuration | Additional Information |
|- | -- | - |
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
# Understanding Azure Active Directory Application Proxy Complex application scenario (Preview)

When applications are made up of multiple individual web applications using different domain suffixes or different ports or paths in the URL, the individual web application instances must be published in separate Azure AD Application Proxy apps and the following problems might arise:
-1. Pre-authentication- The client must separately acquire an access token or cookie for each Azure AD Application Proxy app. This might lead to additional redirects to login.microsoftonline.com and CORS issues.
-2. CORS issues- Cross-origin resource sharing calls (OPTIONS request) might be triggered to validate if the caller web app is allowed to access the URL of the targeted web app. These will be blocked by the Azure AD Application Proxy Cloud service, since these requests cannot contain authentication information.
-3. Poor app management- Multiple enterprise apps are created to enable access to a private app adding friction to the app management experience.
+1. Pre-authentication- The client must separately acquire an access token or cookie for each Azure AD Application Proxy app. This might lead to additional redirects to login.microsoftonline.com and CORS issues.
+2. CORS issues- Cross-origin resource sharing calls (OPTIONS request) might be triggered to validate if the caller web app is allowed to access the URL of the targeted web app. These will be blocked by the Azure AD Application Proxy Cloud service, since these requests cannot contain authentication information.
+3. Poor app management- Multiple enterprise apps are created to enable access to a private app adding friction to the app management experience.
The following figure shows an example of a complex application domain structure.
active-directory Application Proxy Configure Connectors With Proxy Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-connectors-with-proxy-servers.md
To enable this, please follow the next steps:
`UseDefaultProxyForBackendRequests = 1` to the Connector configuration registry key located in "HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft AAD App Proxy Connector".

### Step 2: Configure the proxy server manually using netsh command
-1. Enable the group policy Make proxy settings per-machine. This is found in: Computer Configuration\Policies\Administrative Templates\Windows Components\Internet Explorer. This needs to be set rather than having this policy set to per-user.
-2. Run `gpupdate /force` on the server or reboot the server to ensure it uses the updated group policy settings.
-3. Launch an elevated command prompt with admin rights and enter `control inetcpl.cpl`.
-4. Configure the required proxy settings.
+1. Enable the group policy Make proxy settings per-machine. This is found in: Computer Configuration\Policies\Administrative Templates\Windows Components\Internet Explorer. This needs to be set rather than having this policy set to per-user.
+2. Run `gpupdate /force` on the server or reboot the server to ensure it uses the updated group policy settings.
+3. Launch an elevated command prompt with admin rights and enter `control inetcpl.cpl`.
+4. Configure the required proxy settings.
These settings make the connector use the same forward proxy for the communication to Azure and to the backend application. If the connector to Azure communication requires no forward proxy or a different forward proxy, you can set this up with modifying the file ApplicationProxyConnectorService.exe.config as described in the sections Bypass outbound proxies or Use the outbound proxy server.
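As a sketch of adding the registry value described above from an elevated PowerShell session (treating the value as a DWORD is an assumption):

```powershell
# Create or overwrite the connector's UseDefaultProxyForBackendRequests value
New-ItemProperty -Path "HKLM:\Software\Microsoft\Microsoft AAD App Proxy Connector" `
    -Name "UseDefaultProxyForBackendRequests" -Value 1 -PropertyType DWord -Force
```

The connector service typically needs a restart to pick up the change.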
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
To enable the certificate-based authentication in the Azure portal, complete the
1. Sign in to the [Azure portal](https://portal.azure.com) as an Authentication Policy Administrator. 1. Select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side. 1. Under **Manage**, select **Authentication methods** > **Certificate-based Authentication**.
-1. Under **Enable and Target**, click **Enable**.
+1. Under **Enable and Target**, click **Enable**.
1. Click **All users**, or click **Add groups** to select specific groups. :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/enable.png" alt-text="Screenshot of how to enable CBA.":::
As a first configuration test, you should try to sign in to the [MyApps portal](
1. Select **Sign in with a certificate**.
-1. Pick the correct user certificate in the client certificate picker UI and click **OK**.
+1. Pick the correct user certificate in the client certificate picker UI and click **OK**.
:::image type="content" border="true" source="./media/how-to-certificate-based-authentication/picker.png" alt-text="Screenshot of the certificate picker UI.":::
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
If the upgrade had issues, follow these steps to roll back:
>[!NOTE] >Any changes since the backup was made will be lost, but should be minimal if backup was made right before upgrade and upgrade was unsuccessful.
-1. Run the installer for your previous version (for example, 8.0.x.x).
+1. Run the installer for your previous version (for example, 8.0.x.x).
1. Configure Azure AD to accept MFA requests to your on-premises federation server. Use Graph PowerShell to set [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) to `enforceMfaByFederatedIdp`, as shown in the following example. **Request**
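A minimal sketch of such a request using the Graph PowerShell SDK's generic request cmdlet; the domain name and federation configuration ID are placeholders:

```powershell
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

# Set federatedIdpMfaBehavior on the domain's federation configuration
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/domains/contoso.com/federationConfiguration/<configuration-id>" `
    -Body @{ federatedIdpMfaBehavior = "enforceMfaByFederatedIdp" }
```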
active-directory Concept Continuous Access Evaluation Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-workload.md
When a client's access to a resource is blocked due to CAE being triggered, th
The following steps detail how an admin can verify sign in activity in the sign-in logs:
-1. Sign into the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Sign-in logs** > **Service Principal Sign-ins**. You can use filters to ease the debugging process.
-1. Select an entry to see activity details. The **Continuous access evaluation** field indicates whether a CAE token was issued in a particular sign-in attempt.
+1. Sign into the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Sign-in logs** > **Service Principal Sign-ins**. You can use filters to ease the debugging process.
+1. Select an entry to see activity details. The **Continuous access evaluation** field indicates whether a CAE token was issued in a particular sign-in attempt.
## Next steps
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Customers who have configured CAE settings under Security before have to migrate
:::image type="content" source="media/concept-continuous-access-evaluation/migrate-continuous-access-evaluation.png" alt-text="Portal view showing the option to migrate continuous access evaluation to a Conditional Access policy." lightbox="media/concept-continuous-access-evaluation/migrate-continuous-access-evaluation.png"::: 1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Continuous access evaluation**.
-1. You have the option to **Migrate** your policy. This action is the only one that you have access to at this point.
+1. Browse to **Azure Active Directory** > **Security** > **Continuous access evaluation**.
+1. You have the option to **Migrate** your policy. This action is the only one that you have access to at this point.
1. Browse to **Conditional Access** and you find a new policy named **Conditional Access policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it. The following table describes the migration experience of each customer group based on previously configured CAE settings.
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
Administrators can monitor and troubleshoot sign in events where [continuous acc
Administrators can monitor user sign-ins where continuous access evaluation (CAE) is applied. This information is found in the Azure AD sign-in logs:
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Sign-in logs**.
-1. Apply the **Is CAE Token** filter.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Sign-in logs**.
+1. Apply the **Is CAE Token** filter.
[ ![Screenshot showing how to add a filter to the Sign-ins log to see where CAE is being applied or not.](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png) ](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png#lightbox)
The continuous access evaluation insights workbook allows administrators to view
Log Analytics integration must be completed before workbooks are displayed. For more information about how to stream Azure AD sign-in logs to a Log Analytics workspace, see the article [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Workbooks**.
-1. Under **Public Templates**, search for **Continuous access evaluation insights**.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Workbooks**.
+1. Under **Public Templates**, search for **Continuous access evaluation insights**.
The **Continuous access evaluation insights** workbook contains the following table:
Admins can view records filtered by time range and application. Admins can compa
To unblock users, administrators can add specific IP addresses to a trusted named location.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. Here you can create or update trusted IP locations.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. Here you can create or update trusted IP locations.
> [!NOTE] > Before adding an IP address as a trusted named location, confirm that the IP address does in fact belong to the intended organization.
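If you'd rather script the trusted location than use the portal, a sketch with the Microsoft Graph PowerShell SDK follows; the display name and CIDR range are placeholders:

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Create an IP-based named location and mark it as trusted
$params = @{
    "@odata.type" = "#microsoft.graph.ipNamedLocation"
    displayName   = "Trusted egress IPs"
    isTrusted     = $true
    ipRanges      = @(
        @{
            "@odata.type" = "#microsoft.graph.iPv4CidrRange"
            cidrAddress   = "203.0.113.0/24"
        }
    )
}
New-MgIdentityConditionalAccessNamedLocation -BodyParameter $params
```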
active-directory Reference Office 365 Application Contents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/reference-office-365-application-contents.md
# Apps included in Conditional Access Office 365 app suite

The following list is provided as a reference and includes a detailed list of services and applications that are included in the Conditional Access [Office 365](concept-conditional-access-cloud-apps.md#office-365) app.

- Augmentation Loop
- Call Recorder
- Connectors
- Device Management Service
- EnrichmentSvc
- IC3 Gateway
- Media Analysis and Transformation Service
- Message Recall app
- Messaging Async Media
- MessagingAsyncMediaProd
- Microsoft 365 Reporting Service
- Microsoft Discovery Service
- Microsoft Exchange Online Protection
- Microsoft Flow
- Microsoft Flow GCC
- Microsoft Forms
- Microsoft Forms Web
- Microsoft Forms Web in Azure Government
- Microsoft Legacy To-Do WebApp
- Microsoft Office 365 Portal
- Microsoft Office client application
- Microsoft People Cards Service
- Microsoft SharePoint Online - SharePoint Home
- Microsoft Stream Portal
- Microsoft Stream Service
- Microsoft Teams
- Microsoft Teams - T4L Web Client
- Microsoft Teams - Teams And Channels Service
- Microsoft Teams Chat Aggregator
- Microsoft Teams Graph Service
- Microsoft Teams Retail Service
- Microsoft Teams Services
- Microsoft Teams UIS
- Microsoft Teams Web Client
- Microsoft To-Do WebApp
- Microsoft Whiteboard Services
- O365 Suite UX
- OCPS Checkin Service
- Office 365 app, corresponding to a migrated siteId.
- Office 365 Exchange Microservices
- Office 365 Exchange Online
- Office 365 Search Service
- Office 365 SharePoint Online
- Office 365 Yammer
- Office Delve
- Office Hive
- Office Hive Azure Government
- Office Online
- Office Services Manager
- Office Services Manager in USGov
- Office Shredding Service
- Office365 Shell WCSS-Client
- Office365 Shell WCSS-Client in Azure Government
- OfficeClientService
- OfficeHome
- OneDrive
- OneDrive SyncEngine
- OneNote
- Outlook Browser Extension
- Outlook Service for Exchange
- PowerApps Service
- PowerApps Web
- PowerApps Web GCC
- ProjectWorkManagement
- ProjectWorkManagement_USGov
- Reply at mention
- Security & Compliance Center
- SharePoint Online Web Client Extensibility
- SharePoint Online Web Client Extensibility Isolated
- Skype and Teams Tenant Admin API
- Skype for Business Online
- Skype meeting broadcast
- Skype Presence Service
- SmartCompose
- Sway
- Targeted Messaging Service
- The GCC DoD app for office.com
- The Office365 Shell DoD WCSS-Client
active-directory Resilience Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md
If there was an outage of the primary authentication service, the Azure Active D
For authentications protected by Conditional Access, policies are reevaluated before access tokens are issued to determine:
-1. Which Conditional Access policies apply?
-1. For policies that do apply, were the required controls are satisfied?
+1. Which Conditional Access policies apply?
+1. For policies that do apply, are the required controls satisfied?
During an outage, not all conditions can be evaluated in real time by the Backup Authentication Service to determine whether a Conditional Access policy should apply. Conditional Access resilience defaults are a new session control that lets admins decide between:
You can configure Conditional Access resilience defaults from the Azure portal,
### Azure portal
-1. Navigate to the **Azure portal** > **Security** > **Conditional Access**
-1. Create a new policy or select an existing policy
-1. Open the Session control settings
-1. Select Disable resilience defaults to disable the setting for this policy. Sign-ins in scope of the policy will be blocked during an Azure AD outage
-1. Save changes to the policy
+1. Navigate to the **Azure portal** > **Security** > **Conditional Access**
+1. Create a new policy or select an existing policy
+1. Open the Session control settings
+1. Select Disable resilience defaults to disable the setting for this policy. Sign-ins in scope of the policy will be blocked during an Azure AD outage
+1. Save changes to the policy
### MS Graph APIs
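For example, a sketch that disables resilience defaults on an existing policy through the Microsoft Graph PowerShell SDK rather than the raw API; the policy ID is a placeholder:

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Turn off resilience defaults in the policy's session controls
Update-MgIdentityConditionalAccessPolicy -ConditionalAccessPolicyId "<policy-id>" `
    -SessionControls @{ disableResilienceDefaults = $true }
```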
active-directory Howto Restrict Your App To A Set Of Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md
Once you've configured your app to enable user assignment, you can go ahead and
Follow the steps in this section to secure app-to-app authentication access for your tenant.
-1. Navigate to Service Principal sign-in logs in your tenant to find services authenticating to access resources in your tenant.
-1. Check using app ID if a Service Principal exists for both resource and client apps in your tenant that you wish to manage access.
+1. Navigate to Service Principal sign-in logs in your tenant to find services authenticating to access resources in your tenant.
+1. Check, using the app ID, if a Service Principal exists for both the resource and client apps in your tenant for which you wish to manage access.
```powershell Get-MgServicePrincipal ` -Filter "AppId eq '$appId'" ```
-1. Create a Service Principal using app ID, if it doesn't exist:
+1. Create a Service Principal using app ID, if it doesn't exist:
```powershell New-MgServicePrincipal ` -AppId $appId ```
-1. Explicitly assign client apps to resource apps (this functionality is available only in API and not in the Azure AD Portal):
+1. Explicitly assign client apps to resource apps (this functionality is available only in API and not in the Azure AD Portal):
```powershell $clientAppId = ΓÇ£[guid]ΓÇ¥ $clientId = (Get-MgServicePrincipal -Filter "AppId eq '$clientAppId'").Id
Follow the steps in this section to secure app-to-app authentication access for
-ResourceId (Get-MgServicePrincipal -Filter "AppId eq '$appId'").Id ` -AppRoleId "00000000-0000-0000-0000-000000000000" ```
-1. Require assignment for the resource application to restrict access only to the explicitly assigned users or services.
+1. Require assignment for the resource application to restrict access only to the explicitly assigned users or services.
```powershell Update-MgServicePrincipal -ServicePrincipalId (Get-MgServicePrincipal -Filter "AppId eq '$appId'").Id -AppRoleAssignmentRequired:$true ```
active-directory Scenario Daemon Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-acquire-token.md
Here's an example of defining the scopes for the web API as part of the configur
```json {
- "AzureAd": {
- // Same AzureAd section as before.
- },
-
- "MyWebApi": {
- "BaseUrl": "https://localhost:44372/",
- "RelativePath": "api/TodoList",
- "RequestAppToken": true,
- "Scopes": [ "[Enter here the scopes for your web API]" ]
- }
+ "AzureAd": {
+ // Same AzureAd section as before.
+ },
+
+ "MyWebApi": {
+ "BaseUrl": "https://localhost:44372/",
+ "RelativePath": "api/TodoList",
+ "RequestAppToken": true,
+ "Scopes": [ "[Enter here the scopes for your web API]" ]
+ }
} ```
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-configuration.md
ConfidentialClientApplication cca =
```JavaScript const msalConfig = {
- auth: {
- clientId: process.env.CLIENT_ID,
- authority: process.env.AAD_ENDPOINT + process.env.TENANT_ID,
- clientSecret: process.env.CLIENT_SECRET,
- }
+ auth: {
+ clientId: process.env.CLIENT_ID,
+ authority: process.env.AAD_ENDPOINT + process.env.TENANT_ID,
+ clientSecret: process.env.CLIENT_SECRET,
+ }
}; const apiConfig = {
- uri: process.env.GRAPH_ENDPOINT + 'v1.0/users',
+ uri: process.env.GRAPH_ENDPOINT + 'v1.0/users',
}; const tokenRequest = {
- scopes: [process.env.GRAPH_ENDPOINT + '.default'],
+ scopes: [process.env.GRAPH_ENDPOINT + '.default'],
}; const cca = new msal.ConfidentialClientApplication(msalConfig);
active-directory Scenario Mobile Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-acquire-token.md
UIViewController *viewController = ...; // Pass a reference to the view controll
MSALWebviewParameters *webParameters = [[MSALWebviewParameters alloc] initWithAuthPresentationViewController:viewController]; MSALInteractiveTokenParameters *interactiveParams = [[MSALInteractiveTokenParameters alloc] initWithScopes:scopes webviewParameters:webParameters]; [application acquireTokenWithParameters:interactiveParams completionBlock:^(MSALResult *result, NSError *error) {
- if (!error)
- {
- // You'll want to get the account identifier to retrieve and reuse the account
- // for later acquireToken calls
- NSString *accountIdentifier = result.account.identifier;
-
- NSString *accessToken = result.accessToken;
- }
+ if (!error)
+ {
+ // You'll want to get the account identifier to retrieve and reuse the account
+ // for later acquireToken calls
+ NSString *accountIdentifier = result.account.identifier;
+
+ NSString *accessToken = result.accessToken;
+ }
}]; ```
let webviewParameters = MSALWebviewParameters(authPresentationViewController: vi
let interactiveParameters = MSALInteractiveTokenParameters(scopes: scopes, webviewParameters: webviewParameters) application.acquireToken(with: interactiveParameters, completionBlock: { (result, error) in
- guard let authResult = result, error == nil else {
- print(error!.localizedDescription)
- return
- }
+ guard let authResult = result, error == nil else {
+ print(error!.localizedDescription)
+ return
+ }
- // Get access token from result
- let accessToken = authResult.accessToken
+ // Get access token from result
+ let accessToken = authResult.accessToken
}) ```
active-directory Scenario Spa Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-acquire-token.md
import { filter, Subject, takeUntil } from 'rxjs';
// In app.component.ts export class AppComponent implements OnInit {
- private readonly _destroying$ = new Subject<void>();
-
- constructor(private broadcastService: MsalBroadcastService) { }
-
- ngOnInit() {
- this.broadcastService.msalSubject$
- .pipe(
- filter((msg: EventMessage) => msg.eventType === EventType.ACQUIRE_TOKEN_SUCCESS),
- takeUntil(this._destroying$)
- )
- .subscribe((result: EventMessage) => {
- // Do something with event payload here
- });
- }
-
- ngOnDestroy(): void {
- this._destroying$.next(undefined);
- this._destroying$.complete();
- }
+ private readonly _destroying$ = new Subject<void>();
+
+ constructor(private broadcastService: MsalBroadcastService) { }
+
+ ngOnInit() {
+ this.broadcastService.msalSubject$
+ .pipe(
+ filter((msg: EventMessage) => msg.eventType === EventType.ACQUIRE_TOKEN_SUCCESS),
+ takeUntil(this._destroying$)
+ )
+ .subscribe((result: EventMessage) => {
+ // Do something with event payload here
+ });
+ }
+
+ ngOnDestroy(): void {
+ this._destroying$.next(undefined);
+ this._destroying$.complete();
+ }
} ```
active-directory Tutorial V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-console.md
This should result in some JSON response from Microsoft Graph API and you should
You have selected: getUsers request made to web API at: Fri Jan 22 2021 09:31:52 GMT-0800 (Pacific Standard Time) {
- '@odata.context': 'https://graph.microsoft.com/v1.0/$metadata#users',
- value: [
- {
- displayName: 'Adele Vance'
- givenName: 'Adele',
- jobTitle: 'Retail Manager',
- mail: 'AdeleV@msaltestingjs.onmicrosoft.com',
- mobilePhone: null,
- officeLocation: '18/2111',
- preferredLanguage: 'en-US',
- surname: 'Vance',
- userPrincipalName: 'AdeleV@msaltestingjs.onmicrosoft.com',
- id: 'a6a218a5-f5ae-462a-acd3-581af4bcca00'
- }
- ]
+ '@odata.context': 'https://graph.microsoft.com/v1.0/$metadata#users',
+ value: [
+ {
+ displayName: 'Adele Vance'
+ givenName: 'Adele',
+ jobTitle: 'Retail Manager',
+ mail: 'AdeleV@msaltestingjs.onmicrosoft.com',
+ mobilePhone: null,
+ officeLocation: '18/2111',
+ preferredLanguage: 'en-US',
+ surname: 'Vance',
+ userPrincipalName: 'AdeleV@msaltestingjs.onmicrosoft.com',
+ id: 'a6a218a5-f5ae-462a-acd3-581af4bcca00'
+ }
+ ]
} ``` :::image type="content" source="media/tutorial-v2-nodejs-console/screenshot.png" alt-text="Command-line interface displaying Graph response":::
active-directory Tutorial V2 Windows Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-uwp.md
In the current sample, the `WithRedirectUri("https://login.microsoftonline.com/c
.Build(); ```
-2. Find the callback URI for your app by adding the `redirectURI` field in *MainPage.xaml.cs* and setting a breakpoint on it:
+2. Find the callback URI for your app by adding the `redirectURI` field in *MainPage.xaml.cs* and setting a breakpoint on it:
```csharp
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
To uninstall old packages:
1. If the command fails, try the low-level tools with scripts disabled: 1. For Ubuntu/Debian, run `sudo dpkg --purge aadlogin`. If it's still failing because of the script, delete the `/var/lib/dpkg/info/aadlogin.prerm` file and try again. 1. For everything else, run `rpm -e --noscripts aadlogin`.
-1. Repeat steps 3-4 for package `aadlogin-selinux`.
+1. Repeat steps 3-4 for package `aadlogin-selinux`.
### Extension installation errors
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
# Take over an unmanaged directory as administrator in Azure Active Directory
-This article describes two ways to take over a DNS domain name in an unmanaged directory in Azure Active Directory (Azure AD), part of Microsoft Entra. When a self-service user signs up for a cloud service that uses Azure AD, they are added to an unmanaged Azure AD directory based on their email domain. For more about self-service or "viral" sign-up for a service, see [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md)
+This article describes two ways to take over a DNS domain name in an unmanaged directory in Azure Active Directory (Azure AD), part of Microsoft Entra. When a self-service user signs up for a cloud service that uses Azure AD, they're added to an unmanaged Azure AD directory based on their email domain. For more about self-service or "viral" sign-up for a service, see [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md)
> [!VIDEO https://www.youtube.com/embed/GOSpjHtrRsg]
This article describes two ways to take over a DNS domain name in an unmanaged d
## Decide how you want to take over an unmanaged directory During the process of admin takeover, you can prove ownership as described in [Add a custom domain name to Azure AD](../fundamentals/add-custom-domain.md). The next sections explain the admin experience in more detail, but here's a summary:
-* When you perform an ["internal" admin takeover](#internal-admin-takeover) of an unmanaged Azure directory, you are added as the global administrator of the unmanaged directory. No users, domains, or service plans are migrated to any other directory you administer.
+* When you perform an ["internal" admin takeover](#internal-admin-takeover) of an unmanaged Azure directory, you're added as the global administrator of the unmanaged directory. No users, domains, or service plans are migrated to any other directory you administer.
* When you perform an ["external" admin takeover](#external-admin-takeover) of an unmanaged Azure directory, you add the DNS domain name of the unmanaged directory to your managed Azure directory. When you add the domain name, a mapping of users to resources is created in your managed Azure directory so that users can continue to access services without interruption. ## Internal admin takeover
-Some products that include SharePoint and OneDrive, such as Microsoft 365, do not support external takeover. If that is your scenario, or if you are an admin and want to take over an unmanaged or "shadow" Azure AD organization create by users who used self-service sign-up, you can do this with an internal admin takeover.
+Some products that include SharePoint and OneDrive, such as Microsoft 365, don't support external takeover. If that is your scenario, or if you're an admin and want to take over an unmanaged or "shadow" Azure AD organization created by users who used self-service sign-up, you can do this with an internal admin takeover.
1. Create a user context in the unmanaged organization through signing up for Power BI. For convenience of example, these steps assume that path.
Some products that include SharePoint and OneDrive, such as Microsoft 365, do no
![first screenshot for Become the Admin](./media/domains-admin-takeover/become-admin-first.png)
-5. Add the TXT record to prove that you own the domain name **fourthcoffee.xyz** at your domain name registrar. In this example, it is GoDaddy.com.
+5. Add the TXT record to prove that you own the domain name **fourthcoffee.xyz** at your domain name registrar. In this example, it's GoDaddy.com.
![Add a txt record for the domain name](./media/domains-admin-takeover/become-admin-txt-record.png) When the DNS TXT records are verified at your domain name registrar, you can manage the Azure AD organization.
-When you complete the preceding steps, you are now the global administrator of the Fourth Coffee organization in Microsoft 365. To integrate the domain name with your other Azure services, you can remove it from Microsoft 365 and add it to a different managed organization in Azure.
+When you complete the preceding steps, you're now the global administrator of the Fourth Coffee organization in Microsoft 365. To integrate the domain name with your other Azure services, you can remove it from Microsoft 365 and add it to a different managed organization in Azure.
### Adding the domain name to a managed organization in Azure AD [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] 1. Open the [Microsoft 365 admin center](https://admin.microsoft.com).
-2. Select **Users** tab, and create a new user account with a name like *user\@fourthcoffeexyz.onmicrosoft.com* that does not use the custom domain name.
+2. Select **Users** tab, and create a new user account with a name like *user\@fourthcoffeexyz.onmicrosoft.com* that doesn't use the custom domain name.
3. Ensure that the new user account has Global Administrator privileges for the Azure AD organization. 4. Open **Domains** tab in the Microsoft 365 admin center, select the domain name and select **Remove**.
When you complete the preceding steps, you are now the global administrator of t
## External admin takeover
-If you already manage an organization with Azure services or Microsoft 365, you cannot add a custom domain name if it is already verified in another Azure AD organization. However, from your managed organization in Azure AD you can take over an unmanaged organization as an external admin takeover. The general procedure follows the article [Add a custom domain to Azure AD](../fundamentals/add-custom-domain.md).
+If you already manage an organization with Azure services or Microsoft 365, you can't add a custom domain name if it's already verified in another Azure AD organization. However, from your managed organization in Azure AD you can take over an unmanaged organization as an external admin takeover. The general procedure follows the article [Add a custom domain to Azure AD](../fundamentals/add-custom-domain.md).
When you verify ownership of the domain name, Azure AD removes the domain name from the unmanaged organization and moves it to your existing organization. External admin takeover of an unmanaged directory requires the same DNS TXT validation process as internal admin takeover. The difference is that the following are also moved over with the domain name:
The supported service plans include:
- Microsoft Stream - Dynamics 365 free trial
-External admin takeover is not supported for any service that has service plans that include SharePoint, OneDrive, or Skype For Business; for example, through an Office free subscription.
+External admin takeover isn't supported for any service that has service plans that include SharePoint, OneDrive, or Skype For Business; for example, through an Office free subscription.
> [!NOTE]
> External admin takeover is not supported across cloud boundaries (for example, Azure Commercial to Azure Government). In these scenarios, it is recommended to perform external admin takeover into another Azure Commercial tenant, and then delete the domain from that tenant so you may verify successfully into the destination Azure Government tenant.
You can optionally use the [**ForceTakeover** option](#azure-ad-powershell-cmdle
For [RMS for individuals](/azure/information-protection/rms-for-individuals), when the unmanaged organization is in the same region as the organization that you own, the automatically created [Azure Information Protection organization key](/azure/information-protection/plan-implement-tenant-key) and [default protection templates](/azure/information-protection/configure-usage-rights#rights-included-in-the-default-templates) are additionally moved over with the domain name.
-The key and templates are not moved over when the unmanaged organization is in a different region. For example, if the unmanaged organization is in Europe and the organization that you own is in North America.
+The key and templates aren't moved over when the unmanaged organization is in a different region. For example, if the unmanaged organization is in Europe and the organization that you own is in North America.
-Although RMS for individuals is designed to support Azure AD authentication to open protected content, it doesn't prevent users from also protecting content. If users did protect content with the RMS for individuals subscription, and the key and templates were not moved over, that content is not accessible after the domain takeover.
+Although RMS for individuals is designed to support Azure AD authentication to open protected content, it doesn't prevent users from also protecting content. If users did protect content with the RMS for individuals subscription, and the key and templates weren't moved over, that content isn't accessible after the domain takeover.
### Azure AD PowerShell cmdlets for the ForceTakeover option

You can see these cmdlets used in [PowerShell example](#powershell-example).

cmdlet | Usage
- | -
-`connect-msolservice` | When prompted, sign in to your managed organization.
-`get-msoldomain` | Shows your domain names associated with the current organization.
-`new-msoldomain -name <domainname>` | Adds the domain name to organization as Unverified (no DNS verification has been performed yet).
-`get-msoldomain` | The domain name is now included in the list of domain names associated with your managed organization, but is listed as **Unverified**.
-`get-msoldomainverificationdns -Domainname <domainname> -Mode DnsTxtRecord` | Provides the information to put into new DNS TXT record for the domain (MS=xxxxx). Verification might not happen immediately because it takes some time for the TXT record to propagate, so wait a few minutes before considering the **-ForceTakeover** option.
-`confirm-msoldomain -Domainname <domainname> -ForceTakeover Force` | <li>If your domain name is still not verified, you can proceed with the **-ForceTakeover** option. It verifies that the TXT record was created and kicks off the takeover process.<li>The **-ForceTakeover** option should be added to the cmdlet only when forcing an external admin takeover, such as when the unmanaged organization has Microsoft 365 services blocking the takeover.
-`get-msoldomain` | The domain list now shows the domain name as **Verified**.
+`Connect-MgGraph` | When prompted, sign in to your managed organization.
+`Get-MgDomain` | Shows the domain names associated with the current organization.
+`New-MgDomain -BodyParameter @{Id="<your domain name>"; IsDefault=$false}` | Adds the domain name to the organization as Unverified (no DNS verification has been performed yet).
+`Get-MgDomain` | The domain name is now included in the list of domain names associated with your managed organization, but is listed as **Unverified**.
+`Get-MgDomainVerificationDnsRecord` | Provides the information to put into a new DNS TXT record for the domain (MS=xxxxx). Verification might not happen immediately because it takes some time for the TXT record to propagate, so wait a few minutes before considering the **-ForceTakeover** option.
+`Confirm-MgDomain -DomainId <domainname>` | - If your domain name is still not verified, you can proceed with the **-ForceTakeover** option. It verifies that the TXT record was created and kicks off the takeover process.<br>- The **-ForceTakeover** option should be added to the cmdlet only when forcing an external admin takeover, such as when the unmanaged organization has Microsoft 365 services blocking the takeover.
+`Get-MgDomain` | The domain list now shows the domain name as **Verified**.
> [!NOTE]
> The unmanaged Azure AD organization is deleted 10 days after you exercise the external takeover force option.

### PowerShell example
-1. Connect to Azure AD using the credentials that were used to respond to the self-service offering:
+1. Connect to Microsoft Graph using the credentials that were used to respond to the self-service offering:
```powershell
- Install-Module -Name MSOnline
- $msolcred = get-credential
-
- connect-msolservice -credential $msolcred
+ Install-Module -Name Microsoft.Graph
+
+ Connect-MgGraph -Scopes "User.ReadWrite.All","Domain.ReadWrite.All"
``` 2. Get a list of domains: ```powershell
- Get-MsolDomain
+ Get-MgDomain
```
-3. Run the Get-MsolDomainVerificationDns cmdlet to create a challenge:
+3. Run the New-MgDomain cmdlet to add a new domain in Azure:
```powershell
- Get-MsolDomainVerificationDns -DomainName *your_domain_name* -Mode DnsTxtRecord
+ New-MgDomain -BodyParameter @{Id="<your domain name>"; IsDefault=$false}
```
- For example:
+4. Run the Get-MgDomainVerificationDnsRecord cmdlet to view the DNS challenge:
+ ```powershell
+ (Get-MgDomainVerificationDnsRecord -DomainId "<your domain name>" | ?{$_.recordtype -eq "Txt"}).AdditionalProperties.text
```
- Get-MsolDomainVerificationDns -DomainName contoso.com -Mode DnsTxtRecord
+ For example:
+ ```powershell
+ (Get-MgDomainVerificationDnsRecord -DomainId "contoso.com" | ?{$_.recordtype -eq "Txt"}).AdditionalProperties.text
``` 4. Copy the value (the challenge) that is returned from this command. For example: ```powershell
- MS=32DD01B82C05D27151EA9AE93C5890787F0E65D9
+ MS=ms18939161
``` 5. In your public DNS namespace, create a DNS txt record that contains the value that you copied in the previous step. The name for this record is the name of the parent domain, so if you create this resource record by using the DNS role from Windows Server, leave the Record name blank and just paste the value into the Text box.
-6. Run the Confirm-MsolDomain cmdlet to verify the challenge:
+6. Run the Confirm-MgDomain cmdlet to verify the challenge:
```powershell
- Confirm-MsolDomain -DomainName *your_domain_name* -ForceTakeover Force
+ Confirm-MgDomain -DomainId "<your domain name>"
``` For example: ```powershell
- Confirm-MsolDomain -DomainName contoso.com -ForceTakeover Force
+ Confirm-MgDomain -DomainId "contoso.com"
``` A successful challenge returns you to the prompt without an error.
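As an optional check (not part of the documented procedure), you can read the domain back with `Get-MgDomain` to confirm it now shows as verified. A minimal sketch, assuming you're still signed in through `Connect-MgGraph` and using *contoso.com* as a placeholder domain:

```powershell
# Optional: confirm the takeover completed and the domain is verified.
$domain = Get-MgDomain -DomainId "contoso.com"   # replace with your domain name
$domain | Select-Object Id, IsVerified, IsDefault
```

If `IsVerified` is still `False`, wait for DNS propagation and rerun the confirmation step above.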
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
This feature can be used in the Azure portal, Microsoft Graph, and in PowerShell
1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has Global Administrator, Intune Administrator, or User Administrator role permissions. 1. Select **Azure Active Directory** > **Groups**, and then select **New group**. 1. Fill in group details. The group type can be Security or Microsoft 365, and the membership type can be set to **Dynamic User** or **Dynamic Device**.
-1. Select **Add dynamic query**.
+1. Select **Add dynamic query**.
1. MemberOf isn't yet supported in the rule builder. Select **Edit** to write the rule in the **Rule syntax** box. 1. Example user rule: `user.memberof -any (group.objectId -in ['groupId', 'groupId'])` 1. Example device rule: `device.memberof -any (group.objectId -in ['groupId', 'groupId'])`
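Because the memberOf rule can only be entered through the rule syntax box (or programmatically), here's a minimal Microsoft Graph PowerShell sketch for creating such a group. The display name, mail nickname, and group IDs are placeholders, and the `Group.ReadWrite.All` scope is an assumption:

```powershell
Connect-MgGraph -Scopes "Group.ReadWrite.All"

# Placeholder object IDs of the groups whose members should be pulled in.
$rule = "user.memberof -any (group.objectId -in ['<groupId1>', '<groupId2>'])"

$groupParams = @{
    DisplayName                   = "Dynamic - members of selected groups"
    MailEnabled                   = $false
    MailNickname                  = "dynamicMemberOfExample"
    SecurityEnabled               = $true
    GroupTypes                    = @("DynamicMembership")
    MembershipRule                = $rule
    MembershipRuleProcessingState = "On"
}

# Creates the dynamic group and starts evaluating the memberOf rule.
New-MgGroup @groupParams
```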
active-directory Groups Settings V2 Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md
To disable group creation for non-admin users:
2. If it returns `UsersPermissionToCreateGroupsEnabled : True`, then non-admin users can create groups. To disable this feature:
- ```powershell
+ ```powershell
Set-MsolCompanySettings -UsersPermissionToCreateGroupsEnabled $False ```
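To double-check that the change took effect, you can read the company information back. A quick sketch using the same MSOnline module, assuming you're already connected:

```powershell
# Returns False once group creation by non-admin users has been disabled.
(Get-MsolCompanyInformation).UsersPermissionToCreateGroupsEnabled
```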
active-directory Allow Deny List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/allow-deny-list.md
If the module is not installed, or you don't have a required version, do one of
- If no results are returned, run the following command to install the latest version of the AzureADPreview module:
- ```powershell
+ ```powershell
Install-Module AzureADPreview ``` - If only the AzureAD module is shown in the results, run the following commands to install the AzureADPreview module:
- ```powershell
+ ```powershell
Uninstall-Module AzureAD Install-Module AzureADPreview ```
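Once the AzureADPreview module is installed, you typically sign in before running the allow/deny list cmdlets. A minimal sketch, assuming an account with permission to manage external collaboration settings:

```powershell
# Sign in to Azure AD with the AzureADPreview module.
Connect-AzureAD
```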
active-directory B2b Quickstart Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-invite-powershell.md
description: In this quickstart, you learn how to use PowerShell to send an invi
- Previously updated : 03/21/2023+ Last updated : 07/31/2023
Remove-MgUser -UserId '3f80a75e-750b-49aa-a6b0-d9bf6df7b4c6'
## Next steps
-In this quickstart, you invited and added a single guest user to your directory using PowerShell. Next, learn how to [invite guest users in bulk using PowerShell](tutorial-bulk-invite.md).
+In this quickstart, you invited and added a single guest user to your directory using PowerShell. You can also invite a guest user using the [Azure portal](b2b-quickstart-add-guest-users-portal.md). Additionally, you can [invite guest users in bulk using PowerShell](tutorial-bulk-invite.md).
active-directory Bulk Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md
Previously updated : 11/18/2022 Last updated : 07/31/2023 -+ # Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
# Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users
-If you use [Azure Active Directory (Azure AD) B2B collaboration](what-is-b2b.md) to work with external partners, you can invite multiple guest users to your organization at the same time [via the portal](tutorial-bulk-invite.md) or via PowerShell. In this tutorial, you learn how to use PowerShell to send bulk invitations to external users. Specifically, you do the following:
+If you use Azure Active Directory (Azure AD) B2B collaboration to work with external partners, you can invite multiple guest users to your organization at the same time via the portal or via PowerShell. In this tutorial, you learn how to use PowerShell to send bulk invitations to external users. Specifically, you do the following:
> [!div class="checklist"] > * Prepare a comma-separated value (.csv) file with the user information
To verify that the invited users were added to Azure AD, run the following comma
Get-AzureADUser -Filter "UserType eq 'Guest'" ```
-You should see the users that you invited listed, with a [user principal name (UPN)](../hybrid/plan-connect-userprincipalname.md#what-is-userprincipalname) in the format *emailaddress*#EXT#\@*domain*. For example, *lstokes_fabrikam.com#EXT#\@contoso.onmicrosoft.com*, where contoso.onmicrosoft.com is the organization from which you sent the invitations.
+You should see the users that you invited listed, with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example, *msullivan_fabrikam.com#EXT#\@contoso.onmicrosoft.com*, where contoso.onmicrosoft.com is the organization from which you sent the invitations.
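If you want to narrow the check to guests invited from one partner domain, you can filter on that UPN pattern. A small sketch; *fabrikam.com* is just an example domain:

```powershell
# List guest users whose UPN indicates they were invited from fabrikam.com.
Get-AzureADUser -Filter "UserType eq 'Guest'" -All $true |
    Where-Object { $_.UserPrincipalName -like "*_fabrikam.com#EXT#*" }
```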
## Clean up resources
When no longer needed, you can delete the test user accounts in the directory. R
Remove-AzureADUser -ObjectId "<UPN>" ```
-For example: `Remove-AzureADUser -ObjectId "lstokes_fabrikam.com#EXT#@contoso.onmicrosoft.com"`
+For example: `Remove-AzureADUser -ObjectId "msullivan_fabrikam.com#EXT#@contoso.onmicrosoft.com"`
## Next steps
-In this tutorial, you sent bulk invitations to guest users outside of your organization. Next, learn how the invitation redemption process works and how to enforce MFA for guest users.
+In this tutorial, you sent bulk invitations to guest users outside of your organization. Next, learn how to bulk invite guest users in the portal and how to enforce MFA for them.
-- [Learn about the Azure AD B2B collaboration invitation redemption process](redemption-experience.md)
+- [Bulk invite guest users via the portal](tutorial-bulk-invite.md)
- [Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)
active-directory Concept Branding Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/concept-branding-customers.md
The customer tenant is unique in that it doesn't have any default branding, but
The following list and image outline the elements of the default Microsoft sign-in experience in an Azure AD tenant:
-1. Microsoft background image and color.
-2. Microsoft favicon.
-3. Microsoft banner logo.
-4. Footer as a page layout element.
-5. Microsoft footer hyperlinks, for example, Privacy & cookies, Terms of use and troubleshooting details also known as ellipsis in the right bottom corner of the screen.
-6. Microsoft overlay.
+1. Microsoft background image and color.
+2. Microsoft favicon.
+3. Microsoft banner logo.
+4. Footer as a page layout element.
+5. Microsoft footer hyperlinks, for example, Privacy & cookies, Terms of use and troubleshooting details also known as ellipsis in the right bottom corner of the screen.
+6. Microsoft overlay.
:::image type="content" source="media/how-to-customize-branding-customers/microsoft-branding.png" alt-text="Screenshot of the Azure AD default Microsoft branding." lightbox="media/how-to-customize-branding-customers/microsoft-branding.png":::
active-directory How To Customize Branding Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-branding-customers.md
Microsoft provides a neutral branding as the default for the customer tenant, wh
The following list and image outline the elements of the default Microsoft sign-in experience in an Azure AD tenant:
-1. Microsoft background image and color.
-2. Microsoft favicon.
-3. Microsoft banner logo.
-4. Footer as a page layout element.
-5. Microsoft footer hyperlinks, for example, Privacy & cookies, Terms of use and troubleshooting details also known as ellipsis in the right bottom corner of the screen.
-6. Microsoft overlay.
+1. Microsoft background image and color.
+2. Microsoft favicon.
+3. Microsoft banner logo.
+4. Footer as a page layout element.
+5. Microsoft footer hyperlinks, for example, Privacy & cookies, Terms of use and troubleshooting details also known as ellipsis in the right bottom corner of the screen.
+6. Microsoft overlay.
:::image type="content" source="media/how-to-customize-branding-customers/microsoft-branding.png" alt-text="Screenshot of the Azure AD default Microsoft branding." lightbox="media/how-to-customize-branding-customers/microsoft-branding.png":::
Before you customize any settings, the neutral default branding will appear in y
For your customer tenant, you might have different requirements for the information you want to collect during sign-up and sign-in. The customer tenant comes with a built-in set of information stored in attributes, such as Given Name, Surname, City, and Postal Code. You can create custom attributes in your customer tenant using the Microsoft Graph API or in the portal under the **Text** tab in **Company Branding**.
-1. On the **Text** tab select **Add Custom Text**.
-1. Select any of the options:
+1. On the **Text** tab select **Add Custom Text**.
+1. Select any of the options:
- Select **Attributes** to override the default values. - Select **Attribute collection** to add a new attribute option that you would like to collect during the sign-up process.
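For the Microsoft Graph route mentioned above, here's a minimal sketch using `Invoke-MgGraphRequest`. It assumes that customer-tenant custom attributes are created through the user flow attributes endpoint, and the attribute name, description, and data type shown are examples only:

```powershell
Connect-MgGraph -Scopes "IdentityUserFlow.ReadWrite.All"

# Assumption: the custom attribute is created via the user flow attributes endpoint;
# the displayName, description, and dataType values below are illustrative only.
$body = @{
    displayName = "LoyaltyNumber"
    description = "Loyalty program number collected during sign-up"
    dataType    = "string"
}

Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/identity/userFlowAttributes" -Body $body
```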
When no longer needed, you can remove the sign-in customization from your custom
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. 1. In the search bar, type and select **Company branding**. 1. Under **Default sign-in experience**, select **Edit**.
-1. Remove the elements you no longer need.
-1. Once finished select **Review + save**.
+1. Remove the elements you no longer need.
+1. Once finished select **Review + save**.
1. Wait a few minutes for the changes to take effect. ## Clean up resources via the Microsoft Graph API
active-directory How To Enable Password Reset Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-enable-password-reset-customers.md
Title: Enable self-service password reset
description: Learn how to enable self-service password reset so your customers can reset their own passwords without admin assistance. -+ Previously updated : 07/12/2023 Last updated : 07/28/2023
To enable self-service password reset, you need to enable the email one-time pas
1. Select **Save**.
-## Customize the password reset flow
+### Enable the password reset link
-You can configure options for showing, hiding, or customizing the self-service password reset link on the sign-in page. For details, see [To customize self-service password reset](how-to-customize-branding-customers.md#to-customize-self-service-password-reset) in the article [Customize the neutral branding in your customer tenant](how-to-customize-branding-customers.md).
+You can hide, show, or customize the self-service password reset link on the sign-in page.
+
+1. In the search bar, type and select **Company Branding**.
+1. Under **Default sign-in**, select **Edit**.
+1. On the **Sign-in form** tab, scroll to the **Self-service password reset** section and select **Show self-service password reset**.
+
+ :::image type="content" source="media/how-to-customize-branding-customers/company-branding-self-service-password-reset.png" alt-text="Screenshot of the company branding Self-service password reset.":::
+
+1. Select **Review + save** and **Save** on the **Review** tab.
+
+For more details, check out the [Customize the neutral branding in your customer tenant](how-to-customize-branding-customers.md#to-customize-self-service-password-reset) article.
## Test self-service password reset
active-directory How To Web App Dotnet Sign In Sign Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-dotnet-sign-in-sign-out.md
After installing the NuGet packages and adding necessary code for authentication
1. Next, add a reference to `_LoginPartial` in the *Layout.cshtml* file, which is located in the same folder. It's recommended to place this after the `navbar-collapse` class as shown in the following snippet:
- ```html
+ ```html
<div class="navbar-collapse collapse d-sm-inline-flex flex-sm-row-reverse"> <partial name="_LoginPartial" /> </div>
active-directory Tutorial Single Page App React Sign In Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-react-sign-in-prepare-app.md
All parts of the app that require authentication must be wrapped in the [`MsalPr
root.render( <App instance={msalInstance}/> );
- ```
+ ```
## Next steps
active-directory Hybrid Cloud To On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md
You must do the following:
- Assign Azure AD B2B Users to the SAML Application. When you've completed the steps above, your app should be up and running. To test Azure AD B2B access:
-1. Open a browser and navigate to the external URL that you created when you published the app.
-2. Sign in with the Azure AD B2B account that you assigned to the app. You should be able to open the app and access it with single sign-on.
+1. Open a browser and navigate to the external URL that you created when you published the app.
+2. Sign in with the Azure AD B2B account that you assigned to the app. You should be able to open the app and access it with single sign-on.
## Access to IWA and KCD apps
The following diagram provides a high-level overview of how Azure AD Application
![Diagram of MIM and B2B script solutions.](media/hybrid-cloud-to-on-premises/MIMScriptSolution.PNG)
-1. A user from a partner organization (the Fabrikam tenant) is invited to the Contoso tenant.
-2. A guest user object is created in the Contoso tenant (for example, a user object with a UPN of guest_fabrikam.com#EXT#@contoso.onmicrosoft.com).
-3. The Fabrikam guest is imported from Contoso through MIM or through the B2B PowerShell script.
-4. A representation or "footprint" of the Fabrikam guest user object (Guest#EXT#) is created in the on-premises directory, Contoso.com, through MIM or through the B2B PowerShell script.
-5. The guest user accesses the on-premises application, app.contoso.com.
-6. The authentication request is authorized through Application Proxy, using Kerberos constrained delegation.
-7. Because the guest user object exists locally, the authentication is successful.
+1. A user from a partner organization (the Fabrikam tenant) is invited to the Contoso tenant.
+2. A guest user object is created in the Contoso tenant (for example, a user object with a UPN of guest_fabrikam.com#EXT#@contoso.onmicrosoft.com).
+3. The Fabrikam guest is imported from Contoso through MIM or through the B2B PowerShell script.
+4. A representation or "footprint" of the Fabrikam guest user object (Guest#EXT#) is created in the on-premises directory, Contoso.com, through MIM or through the B2B PowerShell script.
+5. The guest user accesses the on-premises application, app.contoso.com.
+6. The authentication request is authorized through Application Proxy, using Kerberos constrained delegation.
+7. Because the guest user object exists locally, the authentication is successful.
### Lifecycle management policies
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
However, the following scenarios should continue to work:
- Signing back into an application after redemption process using [SAML/WS-Fed IdP](./direct-federation.md) and [Google Federation](./google-federation.md) accounts. To unblock users who can't redeem an invitation due to a conflicting [Contact object](/graph/api/resources/contact), follow these steps:
-1. Delete the conflicting Contact object.
-2. Delete the guest user in the Azure portal (the user's "Invitation accepted" property should be in a pending state).
-3. Reinvite the guest user.
-4. Wait for the user to redeem invitation.
-5. Add the user's Contact email back into Exchange and any DLs they should be a part of.
+1. Delete the conflicting Contact object.
+2. Delete the guest user in the Azure portal (the user's "Invitation accepted" property should be in a pending state).
+3. Reinvite the guest user.
+4. Wait for the user to redeem invitation.
+5. Add the user's Contact email back into Exchange and any DLs they should be a part of.
## Invitation redemption flow
active-directory Tutorial Bulk Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md
Previously updated : 07/04/2023 Last updated : 07/31/2023 -+ # Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
active-directory Groups View Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/groups-view-azure-portal.md
The group you just created is used in other articles in the Azure AD Fundamental
1. On the **Groups - All groups** page, search for the **MDM policy - West** group.
-1. Select the **MDM policy - West** group.
+1. Select the **MDM policy - West** group.
The **MDM policy - West Overview** page appears.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## July 2023
+
+### General Availability: Azure Active Directory (Azure AD) is being renamed.
+
+**Type:** Changed feature
+**Service category:** N/A
+**Product capability:** End User Experiences
+
+**No action is required from you, but you may need to update some of your own documentation.**
+
+Azure AD is being renamed to Microsoft Entra ID. The name change rolls out across all Microsoft products and experiences throughout the second half of 2023.
+
+Capabilities, licensing, and usage of the product aren't changing. To make the transition seamless for you, the pricing, terms, service level agreements, URLs, APIs, PowerShell cmdlets, Microsoft Authentication Library (MSAL), and developer tooling remain the same.
+
+Learn more and get renaming details: [New name for Azure Active Directory](../fundamentals/new-name.md).
+++
+### General Availability - Include/exclude My Apps in Conditional Access policies
+
+**Type:** Fixed
+**Service category:** Conditional Access
+**Product capability:** End User Experiences
+
+My Apps can now be targeted in conditional access policies. This solves a top customer blocker. The functionality is available in all clouds. GA also brings a new app launcher, which improves app launch performance for both SAML and other app types.
+
+Learn More about setting up conditional access policies here: [Azure AD Conditional Access documentation](../conditional-access/index.yml).
+++
+### General Availability - Conditional Access for Protected Actions
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+Protected actions are high-risk operations, such as altering access policies or changing trust settings, that can significantly impact an organization's security. To add an extra layer of protection, Conditional Access for Protected Actions lets organizations define specific conditions for users to perform these sensitive tasks. For more information, see: [What are protected actions in Azure AD?](../roles/protected-actions-overview.md).
+++
+### General Availability - Access Reviews for Inactive Users
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+This new feature, part of the Microsoft Entra ID Governance SKU, allows admins to review and address stale accounts that haven't been active for a specified period. Admins can set a specific duration to determine inactive accounts that weren't used for either interactive or non-interactive sign-in activities. As part of the review process, stale accounts can automatically be removed. For more information, see: [Microsoft Entra ID Governance Introduces Two New Features in Access Reviews](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-id-governance-introduces-two-new-features-in/ba-p/2466930).
+++
+### General Availability - Automatic assignments to access packages in Microsoft Entra ID Governance
+
+**Type:** Changed feature
+**Service category:** Entitlement Management
+**Product capability:** Entitlement Management
+
+Microsoft Entra ID Governance includes the ability for a customer to configure an assignment policy in an entitlement management access package that includes an attribute-based rule, similar to dynamic groups, of the users who should be assigned access. For more information, see: [Configure an automatic assignment policy for an access package in entitlement management](../governance/entitlement-management-access-package-auto-assignment-policy.md).
+++
+### General Availability - Custom Extensions in Entitlement Management
+
+**Type:** New feature
+**Service category:** Entitlement Management
+**Product capability:** Entitlement Management
+
+Custom extensions in Entitlement Management are now generally available, and allow you to extend the access lifecycle with your organization-specific processes and business logic when access is requested or about to expire. With custom extensions you can create tickets for manual access provisioning in disconnected systems, send custom notifications to additional stakeholders, or automate additional access-related configuration in your business applications such as assigning the correct sales region in Salesforce. You can also leverage custom extensions to embed external governance, risk, and compliance (GRC) checks in the access request.
+
+For more information, see:
+
+- [Microsoft Entra ID Governance Entitlement Management New Generally Available Capabilities](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-id-governance-entitlement-management-new/ba-p/2466929)
+- [Trigger Logic Apps with custom extensions in entitlement management](../governance/entitlement-management-logic-apps-integration.md)
+++
+### General Availability - Conditional Access templates
+
+**Type:** Plan for change
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+Conditional Access templates are a predefined set of conditions and controls that provide a convenient method to deploy new policies aligned with Microsoft recommendations. Customers are assured that their policies reflect modern best practices for securing corporate assets, promoting secure, optimal access for their hybrid workforce. For more information, see: [Conditional Access templates](../conditional-access/concept-conditional-access-policy-common.md).
+++
+### General Availability - Lifecycle Workflows
+
+**Type:** New feature
+**Service category:** Lifecycle Workflows
+**Product capability:** Identity Governance
+
+User identity lifecycle is a critical part of an organization's security posture, and when managed correctly, can have a positive impact on their users' productivity for Joiners, Movers, and Leavers. The ongoing digital transformation is accelerating the need for good identity lifecycle management. However, IT and security teams face enormous challenges managing the complex, time-consuming, and error-prone manual processes necessary to execute the required onboarding and offboarding tasks for hundreds of employees at once. This is an ever-present and complex issue IT admins continue to face with digital transformation across security, governance, and compliance.
+
+Lifecycle Workflows, one of our newest Microsoft Entra ID Governance capabilities, is now generally available to help organizations further optimize their user identity lifecycle. For more information, see: [Lifecycle Workflows is now generally available!](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/lifecycle-workflows-is-now-generally-available/ba-p/2466931)
+++
+### General Availability - Enabling extended customization capabilities for sign-in and sign-up pages in Company Branding capabilities.
+
+**Type:** New feature
+**Service category:** User Experience and Management
+**Product capability:** User Authentication
+
+Update the Microsoft Entra ID and Microsoft 365 sign-in experience with new Company Branding capabilities. You can apply your company's brand guidance to authentication experiences with predefined templates. For more information, see: [Company Branding](../fundamentals/how-to-customize-branding.md)
+++
+### General Availability - Enabling customization capabilities for the Self-Service Password Reset (SSPR) hyperlinks, footer hyperlinks and browser icons in Company Branding.
+
+**Type:** Changed feature
+**Service category:** User Experience and Management
+**Product capability:** End User Experiences
+
+Update the Company Branding functionality on the Microsoft Entra ID/Microsoft 365 sign-in experience to allow customizing Self Service Password Reset (SSPR) hyperlinks, footer hyperlinks, and a browser icon. For more information, see: [Company Branding](../fundamentals/how-to-customize-branding.md)
+++
+### General Availability - User-to-Group Affiliation recommendation for group Access Reviews
+
+**Type:** New feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+This feature provides Machine Learning based recommendations to the reviewers of Azure AD Access Reviews to make the review experience easier and more accurate. The recommendation leverages a machine learning-based scoring mechanism and compares users' relative affiliation with other users in the group, based on the organization's reporting structure. For more information, see: [Review recommendations for Access reviews](../governance/review-recommendations-access-reviews.md) and [Introducing Machine Learning based recommendations in Azure AD Access reviews](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/introducing-machine-learning-based-recommendations-in-azure-ad/ba-p/2466923)
+++
+### Public Preview - Inactive guest insights
+
+**Type:** New feature
+**Service category:** Reporting
+**Product capability:** Identity Governance
+
+Monitor guest accounts at scale with intelligent insights into inactive guest users in your organization. Customize the inactivity threshold depending on your organization's needs, narrow down the scope of guest users you want to monitor and identify the guest users that may be inactive. For more information, see: [Monitor and clean up stale guest accounts using access reviews](../enterprise-users/clean-up-stale-guest-accounts.md).
+++
+### Public Preview - Just-in-time application access with PIM for Groups
+
+**Type:** New feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+You can minimize the number of persistent administrators in applications such as [AWS](../saas-apps/aws-single-sign-on-provisioning-tutorial.md#just-in-time-jit-application-access-with-pim-for-groups-preview)/[GCP](../saas-apps/g-suite-provisioning-tutorial.md#just-in-time-jit-application-access-with-pim-for-groups-preview) and get JIT access to groups in AWS and GCP. While PIM for Groups is publicly available, we've released a public preview that integrates PIM with provisioning and reduces the activation delay from 40+ minutes to 1–2 minutes.
+++
+### Public Preview - Graph beta API for PIM security alerts on Azure AD roles
+
+**Type:** New feature
+**Service category:** Privileged Identity Management
+**Product capability:** Privileged Identity Management
+
+Announcing API support (beta) for managing PIM security alerts for Azure AD roles. [Azure Privileged Identity Management (PIM)](../privileged-identity-management/index.yml) generates alerts when there's suspicious or unsafe activity in your organization in Azure Active Directory (Azure AD), part of Microsoft Entra. You can now manage these alerts using REST APIs. These alerts can also be [managed through the Azure portal](../privileged-identity-management/pim-resource-roles-configure-alerts.md). For more information, see: [unifiedRoleManagementAlert resource type](/graph/api/resources/unifiedrolemanagementalert).
+++
+### General Availability - Reset Password on Azure Mobile App
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** End User Experiences
+
+The Azure mobile app has been enhanced to empower admins with specific permissions to conveniently reset their users' passwords. Self Service Password Reset won't be supported at this time. However, users can still more efficiently control and streamline their authentication methods. For more information, see: [What authentication and verification methods are available in Azure Active Directory?](../authentication/concept-authentication-methods.md).
+++
+### Public Preview - API-driven inbound user provisioning
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Inbound to Azure AD
+
+With API-driven inbound provisioning, the Microsoft Entra ID provisioning service now supports integration with any system of record. Customers and partners can use any automation tool of their choice to retrieve workforce data from any system of record for provisioning into Entra ID and connected on-premises Active Directory domains. The IT admin has full control over how the data is processed and transformed with attribute mappings. Once the workforce data is available in Entra ID, the IT admin can configure appropriate joiner-mover-leaver business processes using Entra ID Governance Lifecycle Workflows. For more information, see: [API-driven inbound provisioning concepts (Public preview)](../app-provisioning/inbound-provisioning-api-concepts.md).
+++
+### Public Preview - Dynamic Groups based on EmployeeHireDate User attribute
+
+**Type:** New feature
+**Service category:** Group Management
+**Product capability:** Directory
+
+This feature enables admins to create dynamic group rules based on the user objects' employeeHireDate attribute. For more information, see: [Properties of type string](../enterprise-users/groups-dynamic-membership.md#properties-of-type-string).
+++
+### General Availability - Enhanced Create User and Invite User Experiences
+
+**Type:** Changed feature
+**Service category:** User Management
+**Product capability:** User Management
+
+We have increased the number of properties admins are able to define when creating and inviting a user in the Entra admin portal, bringing our UX to parity with our Create User APIs. Additionally, admins can now add users to a group or administrative unit, and assign roles. For more information, see: [Add or delete users using Azure Active Directory](../fundamentals/add-users-azure-active-directory.md).
+++
+### General Availability - All Users and User Profile
+
+**Type:** Changed feature
+**Service category:** User Management
+**Product capability:** User Management
+
+The All Users list now features an infinite scroll, and admins can now modify more properties in the User Profile. For more information, see: [How to create, invite, and delete users](../fundamentals/how-to-create-delete-users.md).
+++
+### Public Preview - Windows MAM
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+“*When will you have MAM for Windows?*” is one of our most frequently asked customer questions. We’re happy to report that the answer is: “Now!” We’re excited to offer this new and long-awaited MAM Conditional Access capability in Public Preview for Microsoft Edge for Business on Windows.
+
+Using MAM Conditional Access, Microsoft Edge for Business provides users with secure access to organizational data on personal Windows devices with a customizable user experience. We've combined the familiar security features of app protection policies (APP), Windows Defender client threat defense, and conditional access, all anchored to Azure AD identity to ensure unmanaged devices are healthy and protected before granting data access. This can help businesses to improve their security posture and protect sensitive data from unauthorized access, without requiring full mobile device enrollment.
+
+The new capability extends the benefits of app layer management to the Windows platform via Microsoft Edge for Business. Admins are empowered to configure the user experience and protect organizational data within Microsoft Edge for Business on unmanaged Windows devices.
+
+For more information, see: [Require an app protection policy on Windows devices (preview)](../conditional-access/how-to-app-protection-policy-windows.md).
+++
+### General Availability - New Federated Apps available in Azure AD Application gallery - July 2023
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In July 2023, we've added the following new applications in our App gallery with Federation support:
+
+[Gainsight SAML](../saas-apps/gainsight-saml-tutorial.md), [Dataddo](https://www.dataddo.com/), [Puzzel](https://www.puzzel.com/), [Worthix App](../saas-apps/worthix-app-tutorial.md), [iOps360 IdConnect](https://iops360.com/iops360-id-connect-azuread-single-sign-on/), [Airbase](../saas-apps/airbase-tutorial.md), [Couchbase Capella - SSO](../saas-apps/couchbase-capella-sso-tutorial.md), [SSO for Jama Connect®](../saas-apps/sso-for-jama-connect-tutorial.md), [mediment (メディメント)](https://mediment.jp/), [Netskope Cloud Exchange Administration Console](../saas-apps/netskope-cloud-exchange-administration-console-tutorial.md), [Uber](../saas-apps/uber-tutorial.md), [Plenda](https://app.plenda.nl/), [Deem Mobile](../saas-apps/deem-mobile-tutorial.md), [40SEAS](https://www.40seas.com/), [Vivantio](https://www.vivantio.com/), [AppTweak](https://www.apptweak.com/), [ioTORQ EMIS](https://www.iotorq.com/), [Vbrick Rev Cloud](../saas-apps/vbrick-rev-cloud-tutorial.md), [OptiTurn](../saas-apps/optiturn-tutorial.md), [Application Experience with Mist](https://www.mist.com/), [クラウド勤怠管理システムKING OF TIME](../saas-apps/cloud-attendance-management-system-king-of-time-tutorial.md), [Connect1](../saas-apps/connect1-tutorial.md), [DB Education Portal for Schools](../saas-apps/db-education-portal-for-schools-tutorial.md), [SURFconext](../saas-apps/surfconext-tutorial.md), [Chengliye Smart SMS Platform](../saas-apps/chengliye-smart-sms-platform-tutorial.md), [CivicEye SSO](../saas-apps/civic-eye-sso-tutorial.md), [Colloquial](../saas-apps/colloquial-tutorial.md), [BigPanda](../saas-apps/bigpanda-tutorial.md), [Foreman](https://foreman.mn/)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - July 2023
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting of user accounts for these newly integrated apps:
+
+- [Albert](../saas-apps/albert-provisioning-tutorial.md)
+- [Rhombus Systems](../saas-apps/rhombus-systems-provisioning-tutorial.md)
+- [Axiad Cloud](../saas-apps/axiad-cloud-provisioning-tutorial.md)
+- [Dagster Cloud](../saas-apps/dagster-cloud-provisioning-tutorial.md)
+- [WATS](../saas-apps/wats-provisioning-tutorial.md)
+- [Funnel Leasing](../saas-apps/funnel-leasing-provisioning-tutorial.md)
++
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
++++
+### General Availability - Microsoft Authentication Library for .NET 4.55.0
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** User Authentication
+
+Earlier this month we announced the release of [MSAL.NET 4.55.0](https://www.nuget.org/packages/Microsoft.Identity.Client/4.55.0), the latest version of the [Microsoft Authentication Library for the .NET platform](/entra/msal/dotnet/). The new version introduces support for user-assigned [managed identity](/entra/msal/dotnet/advanced/managed-identity) being specified through object IDs, CIAM authorities in the `WithTenantId` API, better error messages when dealing with cache serialization, and improved logging when using the [Windows authentication broker](/entra/msal/dotnet/acquiring-tokens/desktop-mobile/wam).
+++
+### General Availability - Microsoft Authentication Library for Python 1.23.0
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** User Authentication
+
+Earlier this month, the Microsoft Authentication Library team announced the release of [MSAL for Python version 1.23.0](https://pypi.org/project/msal/1.23.0/). The new version of the library adds support for better caching when using client credentials, eliminating the need to request new tokens repeatedly when cached tokens exist.
+
+To learn more about MSAL for Python, see: [Microsoft Authentication Library (MSAL) for Python](/entra/msal/python/).
+++ ## June 2023 ### Public Preview - New provisioning connectors in the Azure AD Application Gallery - June 2023
Starting today the modernized experience for viewing previously accepted terms o
**Service category:** Privileged Identity Management **Product capability:** Privileged Identity Management
-Privileged Identity Management for Groups is now generally available. With this feature, you have the ability to grant users just-in-time membership in a group, which in turn provides access to Azure Active Directory roles, Azure roles, Azure SQL, Azure Key Vault, Intune, other application roles, as well as third-party applications. Through one activation, you can conveniently assign a combination of permissions across different applications and RBAC systems.
+Privileged Identity Management for Groups is now generally available. With this feature, you have the ability to grant users just-in-time membership in a group, which in turn provides access to Azure Active Directory roles, Azure roles, Azure SQL, Azure Key Vault, Intune, other application roles, and third-party applications. Through one activation, you can conveniently assign a combination of permissions across different applications and RBAC systems.
PIM for Groups can also be used for just-in-time ownership. As the owner of the group, you can manage group properties, including membership. For more information, see: [Privileged Identity Management (PIM) for Groups](../privileged-identity-management/concept-pim-for-groups.md).
PIM for Groups offers can also be used for just-in-time ownership. As the owner
**Service category:** Privileged Identity Management **Product capability:** Privileged Identity Management
-The Privileged Identity Management (PIM) integration with Conditional Access authentication context is generally available. You can require users to meet a variety of requirements during role activation such as:
+The Privileged Identity Management (PIM) integration with Conditional Access authentication context is generally available. You can require users to meet various requirements during role activation such as:
- Have specific authentication method through [Authentication Strengths](../authentication/concept-authentication-strengths.md) - Activate from a compliant device
The Converged Authentication Methods Policy enables you to manage all authentica
**Service category:** Provisioning **Product capability:** Azure Active Directory Connect Cloud Sync
-Hybrid IT Admins can now sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure Active Directory, thereby, allowing customers to simply map the needed attributes using Cloud Sync's attribute mapping experience. For more information, see: [Cloud Sync directory extensions and custom attribute mapping](../hybrid/cloud-sync/custom-attribute-mapping.md).
+Hybrid IT Admins can now sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure Active Directory, thereby, allowing customers to map the needed attributes using Cloud Sync's attribute mapping experience. For more information, see: [Cloud Sync directory extensions and custom attribute mapping](../hybrid/cloud-sync/custom-attribute-mapping.md).
To address this challenge, we're introducing a new system-preferred authenticati
**Service category:** User Management **Product capability:** User Management
-Admins can now define more properties when creating and inviting a user in the Entra admin portal. These improvements bring our UX to parity with our [Create User APIS](/graph/api/user-post-users). Additionally, admins can now add users to a group or administrative unit, and assign roles. For more information, see: [Add or delete users using Azure Active Directory](../fundamentals/add-users-azure-active-directory.md).
+We have increased the number of properties that admins are able to define when creating and inviting a user in the Entra admin portal. This brings our UX to parity with our Create User APIs. Additionally, admins can now add users to a group or administrative unit, and assign roles. For more information, see: [How to create, invite, and delete users](../fundamentals/how-to-create-delete-users.md).
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
This article describes how to create one or more access reviews for group member
## Prerequisites - Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance licenses. -- Creating a review on [inactive user](review-recommendations-access-reviews.md#inactive-user-recommendations) and with [use-to-group affiliation](review-recommendations-access-reviews.md#user-to-group-affiliation) recommendations requires a Microsoft Entra ID Governance license.
+- Creating a review on inactive users with [user-to-group affiliation](review-recommendations-access-reviews.md#user-to-group-affiliation) recommendations requires a Microsoft Entra ID Governance license.
- Global administrator, User administrator, or Identity Governance administrator to create reviews on groups or applications. - Global administrators and Privileged Role administrators can create reviews on role-assignable groups. For more information, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md). - Microsoft 365 and Security group owner.
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md
In some cases, you might want to directly assign specific users to an access pac
![Assignments - Add user to access package](./media/entitlement-management-access-package-assignments/assignments-add-user.png)
-1. In the **Select policy** list, select a policy that the users' future requests and lifecycle will be governed and tracked by. If you want the selected users to have different policy settings, you can select **Create new policy** to add a new policy.
+1. In the **Select policy** list, select a policy that the users' future requests and lifecycle will be governed and tracked by. If you want the selected users to have different policy settings, you can select **Create new policy** to add a new policy.
-1. Once you select a policy, you'll be able to Add users to select the users you want to assign this access package to, under the chosen policy.
+1. Once you select a policy, you'll be able to Add users to select the users you want to assign this access package to, under the chosen policy.
> [!NOTE] > If you select a policy with questions, you can only assign one user at a time. 1. Set the date and time you want the selected users' assignment to start and end. If an end date isn't provided, the policy's lifecycle settings will be used.
-1. Optionally provide a justification for your direct assignment for record keeping.
+1. Optionally provide a justification for your direct assignment for record keeping.
-1. If the selected policy includes additional requestor information, select **View questions** to answer them on behalf of the users, then select **Save**.
+1. If the selected policy includes additional requestor information, select **View questions** to answer them on behalf of the users, then select **Save**.
![Assignments - click view questions](./media/entitlement-management-access-package-assignments/assignments-view-questions.png)
Entitlement management also allows you to directly assign external users to an a
**Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager
-1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
+1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package in which you want to add a user.
+1. In the left menu, select **Access packages** and then open the access package in which you want to add a user.
-1. In the left menu, select **Assignments**.
+1. In the left menu, select **Assignments**.
-1. Select **New assignment** to open **Add user to access package**.
+1. Select **New assignment** to open **Add user to access package**.
-1. In the **Select policy** list, select a policy that allows that is set to **For users not in your directory**
+1. In the **Select policy** list, select a policy that allows that is set to **For users not in your directory**
1. Select **Any user**. You'll be able to specify which users you want to assign to this access package. ![Assignments - Add any user to access package](./media/entitlement-management-access-package-assignments/assignments-add-any-user.png)
Entitlement management also allows you to directly assign external users to an a
> - Similarly, if you set your policy to include **All configured connected organizations**, the user's email address must be from one of your configured connected organizations. Otherwise, the user won't be added to the access package.
> - If you wish to add any user to the access package, you'll need to ensure that you select **All users (All connected organizations + any external user)** when configuring your policy.
-1. Set the date and time you want the selected users' assignment to start and end. If an end date isn't provided, the policy's lifecycle settings will be used.
-1. Select **Add** to directly assign the selected users to the access package.
-1. After a few moments, select **Refresh** to see the users in the Assignments list.
+1. Set the date and time you want the selected users' assignment to start and end. If an end date isn't provided, the policy's lifecycle settings will be used.
+1. Select **Add** to directly assign the selected users to the access package.
+1. After a few moments, select **Refresh** to see the users in the Assignments list.
## Directly assigning users programmatically

### Assign a user to an access package with Microsoft Graph
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md
Then, create the access package:
```powershell $params = @{
- CatalogId = $catalog.id
- DisplayName = "sales reps"
- Description = "outside sales representatives"
+ CatalogId = $catalog.id
+ DisplayName = "sales reps"
+ Description = "outside sales representatives"
} $ap = New-MgEntitlementManagementAccessPackage -BodyParameter $params
After you create the access package, assign the resource roles to it. For examp
```powershell $rparams = @{
- AccessPackageResourceRole = @{
- OriginId = $rr[2].OriginId
- DisplayName = $rr[2].DisplayName
- OriginSystem = $rr[2].OriginSystem
- AccessPackageResource = @{
- Id = $rsc[0].Id
- ResourceType = $rsc[0].ResourceType
- OriginId = $rsc[0].OriginId
- OriginSystem = $rsc[0].OriginSystem
- }
- }
- AccessPackageResourceScope = @{
- OriginId = $rsc[0].OriginId
- OriginSystem = $rsc[0].OriginSystem
- }
+ AccessPackageResourceRole = @{
+ OriginId = $rr[2].OriginId
+ DisplayName = $rr[2].DisplayName
+ OriginSystem = $rr[2].OriginSystem
+ AccessPackageResource = @{
+ Id = $rsc[0].Id
+ ResourceType = $rsc[0].ResourceType
+ OriginId = $rsc[0].OriginId
+ OriginSystem = $rsc[0].OriginSystem
+ }
+ }
+ AccessPackageResourceScope = @{
+ OriginId = $rsc[0].OriginId
+ OriginSystem = $rsc[0].OriginSystem
+ }
} New-MgEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $ap.Id -BodyParameter $rparams ```
Finally, create the policies. In this policy, only the administrator can assign
```powershell $pparams = @{
- AccessPackageId = $ap.Id
- DisplayName = "direct"
- Description = "direct assignments by administrator"
- AccessReviewSettings = $null
- RequestorSettings = @{
- ScopeType = "NoSubjects"
- AcceptRequests = $true
- AllowedRequestors = @(
- )
- }
- RequestApprovalSettings = @{
- IsApprovalRequired = $false
- IsApprovalRequiredForExtension = $false
- IsRequestorJustificationRequired = $false
- ApprovalMode = "NoApproval"
- ApprovalStages = @(
- )
- }
+ AccessPackageId = $ap.Id
+ DisplayName = "direct"
+ Description = "direct assignments by administrator"
+ AccessReviewSettings = $null
+ RequestorSettings = @{
+ ScopeType = "NoSubjects"
+ AcceptRequests = $true
+ AllowedRequestors = @(
+ )
+ }
+ RequestApprovalSettings = @{
+ IsApprovalRequired = $false
+ IsApprovalRequiredForExtension = $false
+ IsRequestorJustificationRequired = $false
+ ApprovalMode = "NoApproval"
+ ApprovalStages = @(
+ )
+ }
} New-MgEntitlementManagementAccessPackageAssignmentPolicy -BodyParameter $pparams
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
To use entitlement management and assign users to access packages, you must have
Follow these steps to change the list of incompatible groups or other access packages for an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package which users will request.
+1. In the left menu, select **Access packages** and then open the access package which users will request.
-1. In the left menu, select **Separation of duties**.
+1. In the left menu, select **Separation of duties**.
1. If you wish to prevent users who already have another access package assignment from requesting this access package, select **Add access package** and then select the access package to which the user would already be assigned.
New-MgEntitlementManagementAccessPackageIncompatibleAccessPackageByRef -AccessPa
Follow these steps to view the list of other access packages that have indicated that they're incompatible with an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package.
+1. In the left menu, select **Access packages** and then open the access package.
-1. In the left menu, select **Separation of duties**.
+1. In the left menu, select **Separation of duties**.
1. Select **Incompatible With**.
If you've configured incompatible access settings on an access package that alre
Follow these steps to view the list of users who have assignments to two access packages.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package where you've configured another access package as incompatible.
+1. In the left menu, select **Access packages** and then open the access package where you've configured another access package as incompatible.
-1. In the left menu, select **Separation of duties**.
+1. In the left menu, select **Separation of duties**.
1. In the table, a non-zero value in the **Additional access** column for the second access package indicates that one or more users hold assignments to both access packages.
If you're configuring incompatible access settings on an access package that alr
Follow these steps to view the list of users who have assignments to two access packages.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package where you'll be configuring incompatible assignments.
+1. In the left menu, select **Access packages** and then open the access package where you'll be configuring incompatible assignments.
-1. In the left menu, select **Assignments**.
+1. In the left menu, select **Assignments**.
1. In the **Status** field, ensure that the **Delivered** status is selected.
Follow these steps to view the list of users who have assignments to two access
1. In the navigation bar, select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package that you plan to indicate as incompatible.
+1. In the left menu, select **Access packages** and then open the access package that you plan to indicate as incompatible.
-1. In the left menu, select **Assignments**.
+1. In the left menu, select **Assignments**.
1. In the **Status** field, ensure that the **Delivered** status is selected.
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
Select-MgProfile -Name "beta"
$apid = "cdd5f06b-752a-4c9f-97a6-82f4eda6c76d" $pparams = @{
- AccessPackageId = $apid
- DisplayName = "direct"
- Description = "direct assignments by administrator"
- AccessReviewSettings = $null
- RequestorSettings = @{
- ScopeType = "NoSubjects"
- AcceptRequests = $true
- AllowedRequestors = @(
- )
- }
- RequestApprovalSettings = @{
- IsApprovalRequired = $false
- IsApprovalRequiredForExtension = $false
- IsRequestorJustificationRequired = $false
- ApprovalMode = "NoApproval"
- ApprovalStages = @(
- )
- }
+ AccessPackageId = $apid
+ DisplayName = "direct"
+ Description = "direct assignments by administrator"
+ AccessReviewSettings = $null
+ RequestorSettings = @{
+ ScopeType = "NoSubjects"
+ AcceptRequests = $true
+ AllowedRequestors = @(
+ )
+ }
+ RequestApprovalSettings = @{
+ IsApprovalRequired = $false
+ IsApprovalRequiredForExtension = $false
+ IsRequestorJustificationRequired = $false
+ ApprovalMode = "NoApproval"
+ ApprovalStages = @(
+ )
+ }
} New-MgEntitlementManagementAccessPackageAssignmentPolicy -BodyParameter $pparams ```
active-directory Entitlement Management Access Reviews Review Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-review-access.md
If there are multiple reviewers, the last submitted response is recorded. Consid
To review access for multiple users more quickly, you can use the system-generated recommendations, accepting the recommendations with a single select. The recommendations are generated based on the user's sign-in activity.
-1. In the bar at the top of the page, select **Accept recommendations**.
+1. In the bar at the top of the page, select **Accept recommendations**.
![Select Accept recommendations](./media/entitlement-management-access-reviews-review-access/review-access-use-recommendations.png) You see a summary of the recommended actions.
-1. Select **Submit** to accept the recommendations.
+1. Select **Submit** to accept the recommendations.
## Next steps
active-directory Entitlement Management Access Reviews Self Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-self-review.md
To do an access review, you must first open the access review. Use the following
1. Select **Access reviews** on the left navigation bar to see a list of pending access reviews assigned to you.
-1. Select the review that you’d like to begin.
+1. Select the review that you’d like to begin.
## Perform the access review Once you open the access review, you can see your access. Use the following procedure to do the access review:
-1. Decide whether you still need access to the access package. For example, the project you're working on isn't complete, so you still need access to continue working on the project.
+1. Decide whether you still need access to the access package. For example, the project you're working on isn't complete, so you still need access to continue working on the project.
-1. Select **Yes** to keep your access or select **No** to remove your access.
+1. Select **Yes** to keep your access or select **No** to remove your access.
>[!NOTE] >If you stated that you no longer need access, you aren't removed from the access package immediately. You will be removed from the access package when the review ends or if an administrator stops the review.
-1. If you chose **Yes**, you may need to include a justification statement in the **Reason** box.
+1. If you chose **Yes**, you may need to include a justification statement in the **Reason** box.
-1. Select **Submit**.
+1. Select **Submit**.
You can return to the review if you change your mind and decide to change your response before the end of the review.
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
To require attributes for access requests:
![Screenshot that shows selecting Require attributes](./media/entitlement-management-catalog-create/resources-require-attributes.png)
-1. Select the attribute type:
+1. Select the attribute type:
1. **Built-in** includes Azure AD user profile attributes. 1. **Directory schema extension** provides a way to store more data in Azure AD on user objects and other directory objects. This includes groups, tenant details, and service principals. Only extension attributes on user objects can be used to send out claims to applications.
To require attributes for access requests:
> [!NOTE] > The User.mobilePhone attribute is a sensitive property that can be updated only by some administrators. Learn more at [Who can update sensitive user attributes?](/graph/api/resources/users#who-can-update-sensitive-attributes).
-1. Select the answer format you want requestors to use for their answer. Answer formats include **short text**, **multiple choice**, and **long text**.
+1. Select the answer format you want requestors to use for their answer. Answer formats include **short text**, **multiple choice**, and **long text**.
-1. If you select multiple choice, select **Edit and localize** to configure the answer options.
+1. If you select multiple choice, select **Edit and localize** to configure the answer options.
1. In the **View/edit question** pane that appears, enter the response options you want to give the requestor when they answer the question in the **Answer values** boxes. 1. Select the language for the response option. You can localize response options if you choose more languages. 1. Enter as many responses as you need, and then select **Save**.
To require attributes for access requests:
![Screenshot that shows adding localizations.](./media/entitlement-management-catalog-create/add-attributes-questions.png)
-1. If you want to add localization, select **Add localization**.
+1. If you want to add localization, select **Add localization**.
1. In the **Add localizations for question** pane, select the language code for the language in which you want to localize the question related to the selected attribute. 1. In the language you configured, enter the question in the **Localized Text** box.
To require attributes for access requests:
![Screenshot that shows saving the localizations.](./media/entitlement-management-catalog-create/attributes-add-localization.png)
-1. After all attribute information is completed on the **Require attributes** page, select **Save**.
+1. After all attribute information is completed on the **Require attributes** page, select **Save**.
### Add a Multi-Geo SharePoint site
active-directory Entitlement Management Reprocess Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md
To use entitlement management and assign users to access packages, you must have
If you have users who are in the "Delivered" state but don't have access to resources that are a part of the access package, you'll likely need to reprocess the assignments to reassign those users to the access package's resources. Follow these steps to reprocess assignments for an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Azure Active Directory**, and then select **Identity Governance**.
-1. In the left menu, select **Access packages** and then open the access package with the user assignment you want to reprocess.
+1. In the left menu, select **Access packages** and then open the access package with the user assignment you want to reprocess.
-1. Underneath **Manage** on the left side, select **Assignments**.
+1. Underneath **Manage** on the left side, select **Assignments**.
![Entitlement management in the Azure portal](./media/entitlement-management-reprocess-access-package-assignments/reprocess-access-package-assignment.png)
-1. Select all users whose assignments you wish to reprocess.
+1. Select all users whose assignments you wish to reprocess.
-1. Select **Reprocess**.
+1. Select **Reprocess**.
## Next steps
active-directory Entitlement Management Reprocess Access Package Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-requests.md
To use entitlement management and assign users to access packages, you must have
If you have a set of users whose requests are in the "Partially Delivered" or "Failed" state, you might need to reprocess some of those requests. Follow these steps to reprocess requests for an existing access package:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Click **Azure Active Directory**, and then click **Identity Governance**.
-1. In the left menu, click **Access packages** and then open the access package.
+1. In the left menu, click **Access packages** and then open the access package.
-1. Underneath **Manage** on the left side, click **Requests**.
+1. Underneath **Manage** on the left side, click **Requests**.
-1. Select all users whose requests you wish to reprocess.
+1. Select all users whose requests you wish to reprocess.
-1. Click **Reprocess**.
+1. Click **Reprocess**.
## Next steps
active-directory Entitlement Management Ticketed Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-ticketed-provisioning.md
After registering your application, you must add a client secret by following th
To authorize the created application to call the [MS Graph resume API](/graph/api/accesspackageassignmentrequest-resume), follow these steps:
-1. Navigate to the Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
+1. Navigate to the Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
1. In the left menu, select **Catalogs**.
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
For Microsoft Graph, the parameters for the **Send welcome email to new hire** t
|arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
+Example of usage within the workflow:
-```Example for usage within the workflow
+```json
{
- "category": "joiner",
- "continueOnError": false,
- "description": "Send welcome email to new hire",
- "displayName": "Send Welcome Email",
- "isEnabled": true,
- "taskDefinitionId": "70b29d51-b59a-4773-9280-8841dfd3f2ea",
- "arguments": [
- {
- "name": "cc",
- "value": "e94ad2cd-d590-4b39-8e46-bb4f8e293f85,ac17d108-60cd-4eb2-a4b4-084cacda33f2"
- },
- {
- "name": "customSubject",
- "value": "Welcome to the organization {{userDisplayName}}!"
- },
- {
- "name": "customBody",
- "value": "Welcome to our organization {{userGivenName}} {{userSurname}}.\n\nFor more information, reach out to your manager {{managerDisplayName}} at {{managerEmail}}."
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "joiner",
+ "continueOnError": false,
+ "description": "Send welcome email to new hire",
+ "displayName": "Send Welcome Email",
+ "isEnabled": true,
+ "taskDefinitionId": "70b29d51-b59a-4773-9280-8841dfd3f2ea",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "e94ad2cd-d590-4b39-8e46-bb4f8e293f85,ac17d108-60cd-4eb2-a4b4-084cacda33f2"
+ },
+ {
+ "name": "customSubject",
+ "value": "Welcome to the organization {{userDisplayName}}!"
+ },
+ {
+ "name": "customBody",
+ "value": "Welcome to our organization {{userGivenName}} {{userSurname}}.\n\nFor more information, reach out to your manager {{managerDisplayName}} at {{managerEmail}}."
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Send onboarding reminder email** t
|taskDefinitionId | 3C860712-2D37-42A4-928F-5C93935D26A1 | |arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
+Example of usage within the workflow:
-```Example for usage within the workflow
+```json
{
- "category": "joiner",
- "continueOnError": false,
- "description": "Send onboarding reminder email to user\u2019s manager",
- "displayName": "Send onboarding reminder email",
- "isEnabled": true,
- "taskDefinitionId": "3C860712-2D37-42A4-928F-5C93935D26A1",
- "arguments": [
- {
- "name": "cc",
- "value": "e94ad2cd-d590-4b39-8e46-bb4f8e293f85,068fa0c1-fa00-4f4f-8411-e968d921c3e7"
- },
- {
- "name": "customSubject",
- "value": "Reminder: {{userDisplayName}} is starting soon"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}} is starting soon.\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "joiner",
+ "continueOnError": false,
+ "description": "Send onboarding reminder email to user\u2019s manager",
+ "displayName": "Send onboarding reminder email",
+ "isEnabled": true,
+ "taskDefinitionId": "3C860712-2D37-42A4-928F-5C93935D26A1",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "e94ad2cd-d590-4b39-8e46-bb4f8e293f85,068fa0c1-fa00-4f4f-8411-e968d921c3e7"
+ },
+ {
+ "name": "customSubject",
+ "value": "Reminder: {{userDisplayName}} is starting soon"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}} is starting soon.\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Generate Temporary Access Pass and
|taskDefinitionId | 1b555e50-7f65-41d5-b514-5894a026d10d | |arguments | Argument contains the name parameter "tapLifetimeInMinutes", which is the lifetime of the temporaryAccessPass in minutes starting at startDateTime. Minimum 10, Maximum 43200 (equivalent to 30 days). The argument also contains the tapIsUsableOnce parameter, which determines whether the passcode is limited to a one time use. If true, the pass can be used once; if false, the pass can be used multiple times within the temporaryAccessPass lifetime. Additionally, the optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
+Example of usage within the workflow:
-```Example for usage within the workflow
+```json
{
- "category": "joiner",
- "continueOnError": false,
- "description": "Generate Temporary Access Pass and send via email to user's manager",
- "displayName": "Generate TAP and Send Email",
- "isEnabled": true,
- "taskDefinitionId": "1b555e50-7f65-41d5-b514-5894a026d10d",
- "arguments": [
- {
- "name": "tapLifetimeMinutes",
- "value": "480"
- },
- {
- "name": "tapIsUsableOnce",
- "value": "false"
- },
- {
- "name": "cc",
- "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,9d208c40-7eb6-46ff-bebd-f30148c39b47"
- },
- {
- "name": "customSubject",
- "value": "Temporary access pass for your new employee {{userDisplayName}}"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nPlease find the temporary access pass for your new employee {{userDisplayName}} below:\n\n{{temporaryAccessPass}}\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "joiner",
+ "continueOnError": false,
+ "description": "Generate Temporary Access Pass and send via email to user's manager",
+ "displayName": "Generate TAP and Send Email",
+ "isEnabled": true,
+ "taskDefinitionId": "1b555e50-7f65-41d5-b514-5894a026d10d",
+ "arguments": [
+ {
+ "name": "tapLifetimeMinutes",
+ "value": "480"
+ },
+ {
+ "name": "tapIsUsableOnce",
+ "value": "false"
+ },
+ {
+ "name": "cc",
+ "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,9d208c40-7eb6-46ff-bebd-f30148c39b47"
+ },
+ {
+ "name": "customSubject",
+ "value": "Temporary access pass for your new employee {{userDisplayName}}"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nPlease find the temporary access pass for your new employee {{userDisplayName}} below:\n\n{{temporaryAccessPass}}\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph the parameters for the **Send email to notify manager of use
|taskDefinitionId | aab41899-9972-422a-9d97-f626014578b7 | |arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
-```Example for usage within the workflow
+Example of usage within the workflow:
+
+```json
{
- "category": "mover",
- "continueOnError": false,
- "description": "Send email to notify user\u2019s manager of user move",
- "displayName": "Send email to notify manager of user move",
- "isEnabled": true,
- "taskDefinitionId": "aab41899-9972-422a-9d97-f626014578b7",
- "arguments": [
- {
- "name": "cc",
- "value": "ac17d108-60cd-4eb2-a4b4-084cacda33f2,7d3ee937-edcc-46b0-9e2c-f832e01231ea"
- },
- {
- "name": "customSubject",
- "value": "{{userDisplayName}} has moved"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nwe are reaching out to let you know {{userDisplayName}} has moved in the organization.\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "mover",
+ "continueOnError": false,
+ "description": "Send email to notify user\u2019s manager of user move",
+ "displayName": "Send email to notify manager of user move",
+ "isEnabled": true,
+ "taskDefinitionId": "aab41899-9972-422a-9d97-f626014578b7",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "ac17d108-60cd-4eb2-a4b4-084cacda33f2,7d3ee937-edcc-46b0-9e2c-f832e01231ea"
+ },
+ {
+ "name": "customSubject",
+ "value": "{{userDisplayName}} has moved"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nwe are reaching out to let you know {{userDisplayName}} has moved in the organization.\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Request user access package assign
|taskDefinitionId | c1ec1e76-f374-4375-aaa6-0bb6bd4c60be | |arguments | The argument contains two name parameters: "assignmentPolicyId" and "accessPackageId". |
+Example of usage within the workflow:
-```Example for usage within the workflow
+```json
{
- "category": "joiner,mover",
- "continueOnError": false,
- "description": "Request user assignment to selected access package",
- "displayName": "Request user access package assignment",
- "isEnabled": true,
- "taskDefinitionId": "c1ec1e76-f374-4375-aaa6-0bb6bd4c60be",
- "arguments": [
- {
- "name": "assignmentPolicyId",
- "value": "00d6fd25-6695-4f4a-8186-e4c6f901d2c1"
- },
- {
- "name": "accessPackageId",
- "value": "2ae5d6e5-6cbe-4710-82f2-09ef6ffff0d0"
- }
- ]
+ "category": "joiner,mover",
+ "continueOnError": false,
+ "description": "Request user assignment to selected access package",
+ "displayName": "Request user access package assignment",
+ "isEnabled": true,
+ "taskDefinitionId": "c1ec1e76-f374-4375-aaa6-0bb6bd4c60be",
+ "arguments": [
+ {
+ "name": "assignmentPolicyId",
+ "value": "00d6fd25-6695-4f4a-8186-e4c6f901d2c1"
+ },
+ {
+ "name": "accessPackageId",
+ "value": "2ae5d6e5-6cbe-4710-82f2-09ef6ffff0d0"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Remove access package assignment f
Example of usage within the workflow:

```json
{
- "category": "leaver,mover",
- "continueOnError": false,
- "description": "Remove user assignment of selected access package",
- "displayName": "Remove access package assignment for user",
- "isEnabled": true,
- "taskDefinitionId": "4a0b64f2-c7ec-46ba-b117-18f262946c50",
- "arguments": [
- {
- "name": "accessPackageId",
- "value": "2ae5d6e5-6cbe-4710-82f2-09ef6ffff0d0"
- }
- ]
+ "category": "leaver,mover",
+ "continueOnError": false,
+ "description": "Remove user assignment of selected access package",
+ "displayName": "Remove access package assignment for user",
+ "isEnabled": true,
+ "taskDefinitionId": "4a0b64f2-c7ec-46ba-b117-18f262946c50",
+ "arguments": [
+ {
+ "name": "accessPackageId",
+ "value": "2ae5d6e5-6cbe-4710-82f2-09ef6ffff0d0"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Remove all access package assignme
|description | Remove all access packages assigned to the user (Customizable by user) | |taskDefinitionId | 42ae2956-193d-4f39-be06-691b8ac4fa1d |
+Example of usage within the workflow:
-```Example for usage within the workflow
+```json
{
- "category": "leaver",
- "continueOnError": false,
- "description": "Remove all access packages assigned to the user",
- "displayName": "Remove all access package assignments for user",
- "isEnabled": true,
- "taskDefinitionId": "42ae2956-193d-4f39-be06-691b8ac4fa1d",
- "arguments": []
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Remove all access packages assigned to the user",
+ "displayName": "Remove all access package assignments for user",
+ "isEnabled": true,
+ "taskDefinitionId": "42ae2956-193d-4f39-be06-691b8ac4fa1d",
+ "arguments": []
} ```
For Microsoft Graph, the parameters for the **Cancel all pending access package
|taskDefinitionId | 498770d9-bab7-4e4c-b73d-5ded82a1d0b3 |
-```Example for usage within the workflow
+Example of usage within the workflow:
+
+```json
{
- "category": "leaver",
- "continueOnError": false,
- "description": "Cancel all access package assignment requests pending for the user",
- "displayName": "Cancel all pending access package assignment requests for user",
- "isEnabled": true,
- "taskDefinitionId": "498770d9-bab7-4e4c-b73d-5ded82a1d0b3",
- "arguments": []
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Cancel all access package assignment requests pending for the user",
+ "displayName": "Cancel all pending access package assignment requests for user",
+ "isEnabled": true,
+ "taskDefinitionId": "498770d9-bab7-4e4c-b73d-5ded82a1d0b3",
+ "arguments": []
} ```
For Microsoft Graph the parameters for the **Send email before user's last day**
|taskDefinitionId | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 | |arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
-```Example for usage within the workflow
+Example of usage within the workflow:
+
+```json
{
- "category": "leaver",
- "continueOnError": false,
- "description": "Send offboarding email to userΓÇÖs manager before the last day of work",
- "displayName": "Send email before userΓÇÖs last day",
- "isEnabled": true,
- "taskDefinitionId": "52853a3e-f4e5-4eb8-bb24-1ac09a1da935",
- "arguments": [
- {
- "name": "cc",
- "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,e94ad2cd-d590-4b39-8e46-bb4f8e293f85"
- },
- {
- "name": "customSubject",
- "value": "Reminder that {{userDisplayName}}'s last day is coming up"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}}'s last day is coming up.\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Send offboarding email to userΓÇÖs manager before the last day of work",
+ "displayName": "Send email before userΓÇÖs last day",
+ "isEnabled": true,
+ "taskDefinitionId": "52853a3e-f4e5-4eb8-bb24-1ac09a1da935",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,e94ad2cd-d590-4b39-8e46-bb4f8e293f85"
+ },
+ {
+ "name": "customSubject",
+ "value": "Reminder that {{userDisplayName}}'s last day is coming up"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}}'s last day is coming up.\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Send email on user last day** task
|taskDefinitionId | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 | |arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
-```Example for usage within the workflow
+Example of usage within the workflow:
+
+```json
{
- "category": "leaver",
- "continueOnError": false,
- "description": "Send offboarding email to userΓÇÖs manager on the last day of work",
- "displayName": "Send email on userΓÇÖs last day",
- "isEnabled": true,
- "taskDefinitionId": "9c0a1eaf-5bda-4392-9d9e-6e155bb57411",
- "arguments": [
- {
- "name": "cc",
- "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,e94ad2cd-d590-4b39-8e46-bb4f8e293f85"
- },
- {
- "name": "customSubject",
- "value": "{{userDisplayName}}'s last day"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}}'s last day is today and their access will be revoked.\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Send offboarding email to userΓÇÖs manager on the last day of work",
+ "displayName": "Send email on userΓÇÖs last day",
+ "isEnabled": true,
+ "taskDefinitionId": "9c0a1eaf-5bda-4392-9d9e-6e155bb57411",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "068fa0c1-fa00-4f4f-8411-e968d921c3e7,e94ad2cd-d590-4b39-8e46-bb4f8e293f85"
+ },
+ {
+ "name": "customSubject",
+ "value": "{{userDisplayName}}'s last day"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}}'s last day is today and their access will be revoked.\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
For Microsoft Graph, the parameters for the **Send email to users manager after
|taskDefinitionId | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce | |arguments | The optional common email task parameters can be specified; if they are not included, the default behavior takes effect. |
-```Example for usage within the workflow
+Example of usage within the workflow:
+
+```json
{
- "category": "leaver",
- "continueOnError": false,
- "description": "Send offboarding email to userΓÇÖs manager after the last day of work",
- "displayName": "Send email after userΓÇÖs last day",
- "isEnabled": true,
- "taskDefinitionId": "6f22ddd4-b3a5-47a4-a846-0d7c201a49ce",
- "arguments": [
- {
- "name": "cc",
- "value": "ac17d108-60cd-4eb2-a4b4-084cacda33f2,7d3ee937-edcc-46b0-9e2c-f832e01231ea"
- },
- {
- "name": "customSubject",
- "value": "{{userDisplayName}}'s accounts will be deleted today"
- },
- {
- "name": "customBody",
- "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}} left the organization a while ago and today their disabled accounts will be deleted.\n\nRegards\nYour IT department"
- },
- {
- "name": "locale",
- "value": "en-us"
- }
- ]
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Send offboarding email to userΓÇÖs manager after the last day of work",
+ "displayName": "Send email after userΓÇÖs last day",
+ "isEnabled": true,
+ "taskDefinitionId": "6f22ddd4-b3a5-47a4-a846-0d7c201a49ce",
+ "arguments": [
+ {
+ "name": "cc",
+ "value": "ac17d108-60cd-4eb2-a4b4-084cacda33f2,7d3ee937-edcc-46b0-9e2c-f832e01231ea"
+ },
+ {
+ "name": "customSubject",
+ "value": "{{userDisplayName}}'s accounts will be deleted today"
+ },
+ {
+ "name": "customBody",
+ "value": "Hello {{managerDisplayName}}\n\nthis is a reminder that {{userDisplayName}} left the organization a while ago and today their disabled accounts will be deleted.\n\nRegards\nYour IT department"
+ },
+ {
+ "name": "locale",
+ "value": "en-us"
+ }
+ ]
} ```
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-prerequisites.md
You need the following to use Azure AD Connect cloud sync:
A group Managed Service Account is a managed domain account that provides automatic password management, simplified service principal name (SPN) management, the ability to delegate the management to other administrators, and also extends this functionality over multiple servers. Azure AD Connect Cloud Sync supports and uses a gMSA for running the agent. You will be prompted for administrative credentials during setup, in order to create this account. The account will appear as (domain\provAgentgMSA$). For more information on a gMSA, see [group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) ### Prerequisites for gMSA:
-1. The Active Directory schema in the gMSA domain's forest needs to be updated to Windows Server 2012 or later.
-2. [PowerShell RSAT modules](/windows-server/remote/remote-server-administration-tools) on a domain controller
-3. At least one domain controller in the domain must be running Windows Server 2012 or later.
-4. A domain joined server where the agent is being installed needs to be either Windows Server 2016 or later.
+1. The Active Directory schema in the gMSA domain's forest needs to be updated to Windows Server 2012 or later.
+2. [PowerShell RSAT modules](/windows-server/remote/remote-server-administration-tools) on a domain controller
+3. At least one domain controller in the domain must be running Windows Server 2012 or later.
+4. A domain joined server where the agent is being installed needs to be either Windows Server 2016 or later.
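If you plan to pre-create a custom gMSA yourself rather than let setup create *provAgentgMSA$*, the following is a minimal sketch using the ActiveDirectory PowerShell module; the account, group, and server names are placeholders, not values from the article. The next section lists the permissions such a custom account needs.

```powershell
# Sketch only: create a custom gMSA for the provisioning agent.
Import-Module ActiveDirectory

# One-time per forest: a KDS root key is required before any gMSA can be created.
if (-not (Get-KdsRootKey)) {
    Add-KdsRootKey -EffectiveImmediately   # allow ~10 hours for replication in production
}

# Group whose members (the agent servers) may retrieve the managed password.
New-ADGroup -Name "AADCloudSyncServers" -GroupScope Global
Add-ADGroupMember -Identity "AADCloudSyncServers" -Members 'AGENTSERVER01$'

# Create the gMSA itself.
New-ADServiceAccount -Name "CloudSyncGMSA" `
    -DNSHostName "CloudSyncGMSA.contoso.com" `
    -PrincipalsAllowedToRetrieveManagedPassword "AADCloudSyncServers"
```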
### Custom gMSA account If you are creating a custom gMSA account, you need to ensure that the account has the following permissions.
active-directory Deprecated Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/deprecated-azure-ad-connect.md
We regularly update Azure AD Connect with [newer versions](reference-connect-ver
If you're still using a deprecated and unsupported version of Azure AD Connect, here's what you should do:
- 1. Verify which version you should install. Most customers no longer need Azure AD Connect and can now use [Azure AD Cloud Sync](../cloud-sync/what-is-cloud-sync.md). Cloud sync is the next generation of sync tools to provision users and groups from AD into Azure AD. It features a lightweight agent and is fully managed from the cloud – and it upgrades to newer versions automatically, so you never have to worry about upgrading again!
+ 1. Verify which version you should install. Most customers no longer need Azure AD Connect and can now use [Azure AD Cloud Sync](../cloud-sync/what-is-cloud-sync.md). Cloud sync is the next generation of sync tools to provision users and groups from AD into Azure AD. It features a lightweight agent and is fully managed from the cloud – and it upgrades to newer versions automatically, so you never have to worry about upgrading again!
- 2. If you're not yet eligible for Azure AD Cloud Sync, please follow this [link to download](https://www.microsoft.com/download/details.aspx?id=47594) and install the latest version of Azure AD Connect. In most cases, upgrading to the latest version will only take a few moments. For more information, see [Upgrading Azure AD Connect from a previous version.](how-to-upgrade-previous-version.md).
+ 2. If you're not yet eligible for Azure AD Cloud Sync, please follow this [link to download](https://www.microsoft.com/download/details.aspx?id=47594) and install the latest version of Azure AD Connect. In most cases, upgrading to the latest version will only take a few moments. For more information, see [Upgrading Azure AD Connect from a previous version](how-to-upgrade-previous-version.md).
## Next steps
active-directory How To Connect Device Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-device-options.md
The following documentation provides information about the various device option
## Configure device options in Azure AD Connect
-1. Run Azure AD Connect. In the **Additional tasks** page, select **Configure device options**. Click **Next**.
+1. Run Azure AD Connect. In the **Additional tasks** page, select **Configure device options**. Click **Next**.
![Configure device options](./media/how-to-connect-device-options/deviceoptions.png) The **Overview** page displays the details.
The following documentation provides information about the various device option
>[!NOTE] > The new Configure device options is available only in version 1.1.819.0 and newer.
-2. After providing the credentials for Azure AD, you can chose the operation to be performed on the Device options page.
+2. After providing the credentials for Azure AD, you can choose the operation to be performed on the Device options page.
![Device operations](./media/how-to-connect-device-options/deviceoptionsselection.png) ## Next steps
active-directory How To Connect Health Adfs Risky Ip Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-adfs-risky-ip-workbook.md
Additionally, it is possible for a single IP address to attempt multiple logins
- Expanded functionality from the previous Risky IP report, which will be deprecated after January 24, 2022. ## Requirements
-1. Connect Health for AD FS installed and updated to the latest agent.
-2. A Log Analytics Workspace with the “ADFSSignInLogs” stream enabled.
-3. Permissions to use the Azure AD Monitor Workbooks. To use Workbooks, you need:
+1. Connect Health for AD FS installed and updated to the latest agent.
+2. A Log Analytics Workspace with the “ADFSSignInLogs” stream enabled.
+3. Permissions to use the Azure AD Monitor Workbooks. To use Workbooks, you need:
- An Azure Active Directory tenant with a premium (P1 or P2) license. - Access to a Log Analytics Workspace and the following roles in Azure AD (if accessing Log Analytics through Azure portal): Security administrator, Security reader, Reports reader, Global administrator
Alerting threshold can be updated through Threshold Settings. To start with, sys
## Configure notification alerts using Azure Monitor Alerts through the Azure portal: [![Azure Alerts Rule](./media/how-to-connect-health-adfs-risky-ip-workbook/azure-alerts-rule-1.png)](./media/how-to-connect-health-adfs-risky-ip-workbook/azure-alerts-rule-1.png#lightbox)
-1. In the Azure portal, search for “Monitor” in the search bar to navigate to the Azure “Monitor” service. Select “Alerts” from the left menu, then “+ New alert rule”.
-2. On the “Create alert rule” blade:
+1. In the Azure portal, search for “Monitor” in the search bar to navigate to the Azure “Monitor” service. Select “Alerts” from the left menu, then “+ New alert rule”.
+2. On the “Create alert rule” blade:
* Scope: Click “Select resource” and select your Log Analytics workspace that contains the ADFSSignInLogs you wish to monitor. * Condition: Click “Add condition”. Select “Log” for Signal type and “Log analytics” for Monitor service. Choose “Custom log search”.
active-directory How To Connect Health Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-data-retrieval.md
This document describes how to use Azure AD Connect to retrieve data from Azure
To retrieve the email addresses for all of your users that are configured in Azure AD Connect Health to receive alerts, use the following steps.
-1. Start at the Azure Active Directory Connect health blade and select **Sync Services** from the left-hand navigation bar.
+1. Start at the Azure Active Directory Connect health blade and select **Sync Services** from the left-hand navigation bar.
![Sync Services](./media/how-to-connect-health-data-retrieval/retrieve1.png)
-2. Click on the **Alerts** tile.</br>
+2. Click on the **Alerts** tile.</br>
![Alert](./media/how-to-connect-health-data-retrieval/retrieve3.png)
-3. Click on **Notification Settings**.
+3. Click on **Notification Settings**.
![Notification](./media/how-to-connect-health-data-retrieval/retrieve4.png)
-4. On the **Notification Setting** blade, you will find the list of email addresses that have been enabled as recipients for health Alert notifications.
+4. On the **Notification Setting** blade, you will find the list of email addresses that have been enabled as recipients for health Alert notifications.
![Emails](./media/how-to-connect-health-data-retrieval/retrieve5a.png) ## Retrieve all sync errors To retrieve a list of all sync errors, use the following steps.
-1. Starting on the Azure Active Directory Health blade, select **Sync Errors**.
+1. Starting on the Azure Active Directory Health blade, select **Sync Errors**.
![Sync errors](./media/how-to-connect-health-data-retrieval/retrieve6.png)
-2. In the **Sync Errors** blade, click on **Export**. This will export a list of the recorded sync errors.
+2. In the **Sync Errors** blade, click on **Export**. This will export a list of the recorded sync errors.
![Export](./media/how-to-connect-health-data-retrieval/retrieve7.png) ## Next Steps
active-directory How To Connect Health Diagnose Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-diagnose-sync-errors.md
Follow the steps from the Azure portal to narrow down the sync error details and
![Sync error diagnosis steps](./media/how-to-connect-health-diagnose-sync-errors/IIdFixSteps.png) From the Azure portal, take a few steps to identify specific fixable scenarios:
-1. Check the **Diagnose status** column. The status shows if there's a possible way to fix a sync error directly from Azure Active Directory. In other words, a troubleshooting flow exists that can narrow down the error case and potentially fix it.
+1. Check the **Diagnose status** column. The status shows if there's a possible way to fix a sync error directly from Azure Active Directory. In other words, a troubleshooting flow exists that can narrow down the error case and potentially fix it.
| Status | What does it mean? | | | --|
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-custom.md
For more information, see [Directory extensions](how-to-connect-sync-feature-dir
### Enabling single sign-on On the **Single sign-on** page, you configure single sign-on for use with password synchronization or pass-through authentication. You do this step once for each forest that's being synchronized to Azure AD. Configuration involves two steps:
-1. Create the necessary computer account in your on-premises instance of Active Directory.
-2. Configure the intranet zone of the client machines to support single sign-on.
+1. Create the necessary computer account in your on-premises instance of Active Directory.
+2. Configure the intranet zone of the client machines to support single sign-on.
#### Create the computer account in Active Directory For each forest that has been added in Azure AD Connect, you need to supply domain administrator credentials so that the computer account can be created in each forest. The credentials are used only to create the account. They aren't stored or used for any other operation. Add the credentials on the **Enable single sign-on** page, as the following image shows.
To ensure that the client signs in automatically in the intranet zone, make sure
On a computer that has Group Policy management tools:
-1. Open the Group Policy management tools.
-2. Edit the group policy that will be applied to all users. For example, the Default Domain policy.
-3. Go to **User Configuration** > **Administrative Templates** > **Windows Components** > **Internet Explorer** > **Internet Control Panel** > **Security Page**. Then select **Site to Zone Assignment List**.
-4. Enable the policy. Then, in the dialog box, enter a value name of `https://autologon.microsoftazuread-sso.com` and value of `1`. Your setup should look like the following image.
+1. Open the Group Policy management tools.
+2. Edit the group policy that will be applied to all users. For example, the Default Domain policy.
+3. Go to **User Configuration** > **Administrative Templates** > **Windows Components** > **Internet Explorer** > **Internet Control Panel** > **Security Page**. Then select **Site to Zone Assignment List**.
+4. Enable the policy. Then, in the dialog box, enter a value name of `https://autologon.microsoftazuread-sso.com` and value of `1`. Your setup should look like the following image.
![Screenshot showing intranet zones.](./media/how-to-connect-install-custom/sitezone.png)
-6. Select **OK** twice.
+6. Select **OK** twice.
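The same zone assignment can also be scripted. Here's a minimal sketch, assuming the GroupPolicy PowerShell module (RSAT) is installed and that you're editing the Default Domain Policy as in step 2; the registry key is the one the **Site to Zone Assignment List** setting writes to:

```powershell
# Sketch only: add the Seamless SSO URL to the intranet zone (zone data "1")
# through the Site to Zone Assignment List policy of the chosen GPO.
Import-Module GroupPolicy

Set-GPRegistryValue -Name "Default Domain Policy" `
    -Key "HKCU\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMapKey" `
    -ValueName "https://autologon.microsoftazuread-sso.com" `
    -Type String `
    -Value "1"
```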
## Configuring federation with AD FS You can configure AD FS with Azure AD Connect in just a few clicks. Before you start, you need:
active-directory How To Connect Install Existing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-existing-database.md
Important notes to take note of before you proceed:
- You cannot have multiple Azure AD Connect servers share the same ADSync database. The “use existing database” method allows you to reuse an existing ADSync database with a new Azure AD Connect server. It does not support sharing. ## Steps to install Azure AD Connect with “use existing database” mode
-1. Download Azure AD Connect installer (AzureADConnect.MSI) to the Windows server. Double-click the Azure AD Connect installer to start installing Azure AD Connect.
-2. Once the MSI installation completes, the Azure AD Connect wizard starts with the Express mode setup. Close the screen by clicking the Exit icon.
+1. Download Azure AD Connect installer (AzureADConnect.MSI) to the Windows server. Double-click the Azure AD Connect installer to start installing Azure AD Connect.
+2. Once the MSI installation completes, the Azure AD Connect wizard starts with the Express mode setup. Close the screen by clicking the Exit icon.
![Screenshot that shows the "Welcome to Azure A D Connect" page, with "Express Settings" in the left-side menu highlighted.](./media/how-to-connect-install-existing-database/db1.png)
-3. Start a new command prompt or PowerShell session. Navigate to folder "C:\Program Files\Microsoft Azure Active Directory Connect". Run command .\AzureADConnect.exe /useexistingdatabase to start the Azure AD Connect wizard in “Use existing database” setup mode.
+3. Start a new command prompt or PowerShell session. Navigate to the folder "C:\Program Files\Microsoft Azure Active Directory Connect". Run the command `.\AzureADConnect.exe /useexistingdatabase` to start the Azure AD Connect wizard in “Use existing database” setup mode.
> [!NOTE] > Use the switch **/UseExistingDatabase** only when the database already contains data from an earlier Azure AD Connect installation. For instance, when you are moving from a local database to a full SQL Server database or when the Azure AD Connect server was rebuilt and you restored a SQL backup of the ADSync database from an earlier installation of Azure AD Connect. If the database is empty, that is, it doesn't contain any data from a previous Azure AD Connect installation, skip this step.
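For convenience, step 3 can be run as two lines from an elevated PowerShell session, assuming the default installation path shown above:

```powershell
# Launch the Azure AD Connect wizard in "use existing database" mode (step 3).
Set-Location "C:\Program Files\Microsoft Azure Active Directory Connect"
.\AzureADConnect.exe /useexistingdatabase
```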
active-directory How To Connect Pta Upgrade Preview Authentication Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-upgrade-preview-authentication-agents.md
To check the versions of your Authentication Agents, on each server identified i
Before upgrading, ensure that you have the following items in place: 1. **Create cloud-only Global Administrator account**: Don’t upgrade without having a cloud-only Global Administrator account to use in emergency situations where your Pass-through Authentication Agents are not working properly. Learn about [adding a cloud-only Global Administrator account](../../fundamentals/add-users-azure-active-directory.md). Doing this step is critical and ensures that you don't get locked out of your tenant.
-2. **Ensure high availability**: If not completed previously, install a second standalone Authentication Agent to provide high availability for sign-in requests, using these [instructions](how-to-connect-pta-quick-start.md#step-4-ensure-high-availability).
+2. **Ensure high availability**: If not completed previously, install a second standalone Authentication Agent to provide high availability for sign-in requests, using these [instructions](how-to-connect-pta-quick-start.md#step-4-ensure-high-availability).
## Upgrading the Authentication Agent on your Azure AD Connect server
active-directory How To Connect Pta User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-pta-user-privacy.md
Azure AD Pass-through Authentication creates the following log type, which can c
Improve user privacy for Pass-through Authentication in two ways:
-1. Upon request, extract data for a person and remove data from that person from the installations.
-2. Ensure no data is retained beyond 48 hours.
+1. Upon request, extract data for a person and remove data from that person from the installations.
+2. Ensure no data is retained beyond 48 hours.
We strongly recommend the second option as it is easier to implement and maintain. Following are the instructions for each log type:
Foreach ($file in $files) {
To schedule this script to run every 48 hours, follow these steps; a scripted alternative is shown after the list:
-1. Save the script in a file with the ".PS1" extension.
-2. Open **Control Panel** and click on **System and Security**.
-3. Under the **Administrative Tools** heading, click on “**Schedule Tasks**”.
-4. In **Task Scheduler**, right-click on “**Task Schedule Library**” and click on “**Create Basic task…**”.
-5. Enter the name for the new task and click **Next**.
-6. Select “**Daily**” for the **Task Trigger** and click **Next**.
-7. Set the recurrence to two days and click **Next**.
-8. Select “**Start a program**” as the action and click **Next**.
-9. Type “**PowerShell**” in the box for the Program/script, and in box labeled “**Add arguments (optional)**”, enter the full path to the script that you created earlier, then click **Next**.
-10. The next screen shows a summary of the task you are about to create. Verify the values and click **Finish** to create the task:
+1. Save the script in a file with the ".PS1" extension.
+2. Open **Control Panel** and click on **System and Security**.
+3. Under the **Administrative Tools** heading, click on “**Schedule Tasks**”.
+4. In **Task Scheduler**, right-click on “**Task Schedule Library**” and click on “**Create Basic task…**”.
+5. Enter the name for the new task and click **Next**.
+6. Select “**Daily**” for the **Task Trigger** and click **Next**.
+7. Set the recurrence to two days and click **Next**.
+8. Select “**Start a program**” as the action and click **Next**.
+9. Type “**PowerShell**” in the box for the Program/script, and in the box labeled “**Add arguments (optional)**”, enter the full path to the script that you created earlier, then click **Next**.
+10. The next screen shows a summary of the task you are about to create. Verify the values and click **Finish** to create the task:
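As an alternative to the Task Scheduler UI, the same task can be registered with the built-in ScheduledTasks cmdlets. A minimal sketch follows; the script path is a placeholder for wherever you saved the cleanup script in step 1:

```powershell
# Sketch: register a task that runs the cleanup script every 2 days at 3 AM.
$action  = New-ScheduledTaskAction -Execute "PowerShell.exe" `
    -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Remove-PtaLogs.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -DaysInterval 2 -At 3am

Register-ScheduledTask -TaskName "Purge PTA agent logs" -Action $action -Trigger $trigger
```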
### Note about Domain controller logs
active-directory How To Connect Sso User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sso-user-privacy.md
Azure AD Seamless SSO creates the following log type, which can contain Personal
Improve user privacy for Seamless SSO in two ways:
-1. Upon request, extract data for a person and remove data from that person from the installations.
-2. Ensure no data is retained beyond 48 hours.
+1. Upon request, extract data for a person and remove data from that person from the installations.
+2. Ensure no data is retained beyond 48 hours.
We strongly recommend the second option as it is easier to implement and maintain. See following instructions for each log type:
active-directory How To Connect Sync Change The Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-change-the-configuration.md
Before enabling synchronization of the UserType attribute, you must first decide
The steps to enable synchronization of the UserType attribute can be summarized as:
-1. Disable the sync scheduler and verify there is no synchronization in progress.
-2. Add the source attribute to the on-premises AD Connector schema.
-3. Add the UserType to the Azure AD Connector schema.
-4. Create an inbound synchronization rule to flow the attribute value from on-premises Active Directory.
-5. Create an outbound synchronization rule to flow the attribute value to Azure AD.
-6. Run a full synchronization cycle.
-7. Enable the sync scheduler.
+1. Disable the sync scheduler and verify there is no synchronization in progress.
+2. Add the source attribute to the on-premises AD Connector schema.
+3. Add the UserType to the Azure AD Connector schema.
+4. Create an inbound synchronization rule to flow the attribute value from on-premises Active Directory.
+5. Create an outbound synchronization rule to flow the attribute value to Azure AD.
+6. Run a full synchronization cycle.
+7. Enable the sync scheduler.
>[!NOTE] > The rest of this section covers these steps. They are described in the context of an Azure AD deployment with single-forest topology and without custom synchronization rules. If you have multi-forest topology, custom synchronization rules configured, or have a staging server, you need to adjust the steps accordingly.
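For reference, the scheduler-related steps (1, 6, and 7) map to ADSync module cmdlets; this is a minimal sketch only, with the schema and synchronization rule changes from steps 2-5 still done in the Synchronization Rules Editor:

```powershell
# Step 1: disable the scheduler and confirm no synchronization is in progress.
Set-ADSyncScheduler -SyncCycleEnabled $false
Get-ADSyncConnectorRunStatus        # empty output means nothing is currently running

# Steps 2-5: make the connector schema and sync rule changes in the UI tools.

# Step 6: run a full synchronization cycle.
Start-ADSyncSyncCycle -PolicyType Initial

# Step 7: re-enable the scheduler.
Set-ADSyncScheduler -SyncCycleEnabled $true
```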
active-directory How To Connect Sync Staging Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-staging-server.md
See the section [verify](#verify) on how to use this script.
```powershell Param(
- [Parameter(Mandatory=$true, HelpMessage="Must be a file generated using csexport 'Name of Connector' export.xml /f:x)")]
- [string]$xmltoimport="%temp%\exportedStage1a.xml",
- [Parameter(Mandatory=$false, HelpMessage="Maximum number of users per output file")][int]$batchsize=1000,
- [Parameter(Mandatory=$false, HelpMessage="Show console output")][bool]$showOutput=$false
+ [Parameter(Mandatory=$true, HelpMessage="Must be a file generated using csexport 'Name of Connector' export.xml /f:x)")]
+ [string]$xmltoimport="%temp%\exportedStage1a.xml",
+ [Parameter(Mandatory=$false, HelpMessage="Maximum number of users per output file")][int]$batchsize=1000,
+ [Parameter(Mandatory=$false, HelpMessage="Show console output")][bool]$showOutput=$false
) #LINQ isn't loaded automatically, so force it
$result=$reader = [System.Xml.XmlReader]::Create($resolvedXMLtoimport) 
$result=$reader.ReadToDescendant('cs-object') if($result) {
- do 
- {
- #create the object placeholder
- #adding them up here means we can enforce consistency
- $objOutputUser=New-Object psobject
- Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name ID -Value ""
- Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name Type -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name DN -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name operation -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name UPN -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name displayName -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name sourceAnchor -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name alias -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name primarySMTP -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name onPremisesSamAccountName -Value ""
- Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name mail -Value ""
-
- $user = [System.Xml.Linq.XElement]::ReadFrom($reader)
- if ($showOutput) {Write-Host Found an exported object... -ForegroundColor Green}
-
- #object id
- $outID=$user.Attribute('id').Value
- if ($showOutput) {Write-Host ID: $outID}
- $objOutputUser.ID=$outID
-
- #object type
- $outType=$user.Attribute('object-type').Value
- if ($showOutput) {Write-Host Type: $outType}
- $objOutputUser.Type=$outType
-
- #dn
- $outDN= $user.Element('unapplied-export').Element('delta').Attribute('dn').Value
- if ($showOutput) {Write-Host DN: $outDN}
- $objOutputUser.DN=$outDN
-
- #operation
- $outOperation= $user.Element('unapplied-export').Element('delta').Attribute('operation').Value
- if ($showOutput) {Write-Host Operation: $outOperation}
- $objOutputUser.operation=$outOperation
-
- #now that we have the basics, go get the details
-
- foreach ($attr in $user.Element('unapplied-export-hologram').Element('entry').Elements("attr"))
- {
- $attrvalue=$attr.Attribute('name').Value
- $internalvalue= $attr.Element('value').Value
-
- switch ($attrvalue)
- {
- "userPrincipalName"
- {
- if ($showOutput) {Write-Host UPN: $internalvalue}
- $objOutputUser.UPN=$internalvalue
- }
- "displayName"
- {
- if ($showOutput) {Write-Host displayName: $internalvalue}
- $objOutputUser.displayName=$internalvalue
- }
- "sourceAnchor"
- {
- if ($showOutput) {Write-Host sourceAnchor: $internalvalue}
- $objOutputUser.sourceAnchor=$internalvalue
- }
- "alias"
- {
- if ($showOutput) {Write-Host alias: $internalvalue}
- $objOutputUser.alias=$internalvalue
- }
- "proxyAddresses"
- {
- if ($showOutput) {Write-Host primarySMTP: ($internalvalue -replace "SMTP:","")}
- $objOutputUser.primarySMTP=$internalvalue -replace "SMTP:",""
- }
- }
- }
-
- $objOutputUsers += $objOutputUser
-
- Write-Progress -activity "Processing ${xmltoimport} in batches of ${batchsize}" -status "Batch ${outputfilecount}: " -percentComplete (($objOutputUsers.Count / $batchsize) * 100)
-
- #every so often, dump the processed users in case we blow up somewhere
- if ($count % $batchsize -eq 0)
- {
- Write-Host Hit the maximum users processed without completion... -ForegroundColor Yellow
-
- #export the collection of users as a CSV
- Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
- $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation
-
- #increment the output file counter
- $outputfilecount+=1
-
- #reset the collection and the user counter
- $objOutputUsers = $null
- $count=0
- }
-
- $count+=1
-
- #need to bail out of the loop if no more users to process
- if ($reader.NodeType -eq [System.Xml.XmlNodeType]::EndElement)
- {
- break
- }
-
- } while ($reader.Read)
-
- #need to write out any users that didn't get picked up in a batch of 1000
- #export the collection of users as CSV
- Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
- $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation
+ do 
+ {
+ #create the object placeholder
+ #adding them up here means we can enforce consistency
+ $objOutputUser=New-Object psobject
+ Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name ID -Value ""
+ Add-Member -InputObject $objOutputUser -MemberType NoteProperty -Name Type -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name DN -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name operation -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name UPN -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name displayName -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name sourceAnchor -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name alias -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name primarySMTP -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name onPremisesSamAccountName -Value ""
+ Add-Member -inputobject $objOutputUser -MemberType NoteProperty -Name mail -Value ""
+
+ $user = [System.Xml.Linq.XElement]::ReadFrom($reader)
+ if ($showOutput) {Write-Host Found an exported object... -ForegroundColor Green}
+
+ #object id
+ $outID=$user.Attribute('id').Value
+ if ($showOutput) {Write-Host ID: $outID}
+ $objOutputUser.ID=$outID
+
+ #object type
+ $outType=$user.Attribute('object-type').Value
+ if ($showOutput) {Write-Host Type: $outType}
+ $objOutputUser.Type=$outType
+
+ #dn
+ $outDN= $user.Element('unapplied-export').Element('delta').Attribute('dn').Value
+ if ($showOutput) {Write-Host DN: $outDN}
+ $objOutputUser.DN=$outDN
+
+ #operation
+ $outOperation= $user.Element('unapplied-export').Element('delta').Attribute('operation').Value
+ if ($showOutput) {Write-Host Operation: $outOperation}
+ $objOutputUser.operation=$outOperation
+
+ #now that we have the basics, go get the details
+
+ foreach ($attr in $user.Element('unapplied-export-hologram').Element('entry').Elements("attr"))
+ {
+ $attrvalue=$attr.Attribute('name').Value
+ $internalvalue= $attr.Element('value').Value
+
+ switch ($attrvalue)
+ {
+ "userPrincipalName"
+ {
+ if ($showOutput) {Write-Host UPN: $internalvalue}
+ $objOutputUser.UPN=$internalvalue
+ }
+ "displayName"
+ {
+ if ($showOutput) {Write-Host displayName: $internalvalue}
+ $objOutputUser.displayName=$internalvalue
+ }
+ "sourceAnchor"
+ {
+ if ($showOutput) {Write-Host sourceAnchor: $internalvalue}
+ $objOutputUser.sourceAnchor=$internalvalue
+ }
+ "alias"
+ {
+ if ($showOutput) {Write-Host alias: $internalvalue}
+ $objOutputUser.alias=$internalvalue
+ }
+ "proxyAddresses"
+ {
+ if ($showOutput) {Write-Host primarySMTP: ($internalvalue -replace "SMTP:","")}
+ $objOutputUser.primarySMTP=$internalvalue -replace "SMTP:",""
+ }
+ }
+ }
+
+ $objOutputUsers += $objOutputUser
+
+ Write-Progress -activity "Processing ${xmltoimport} in batches of ${batchsize}" -status "Batch ${outputfilecount}: " -percentComplete (($objOutputUsers.Count / $batchsize) * 100)
+
+ #every so often, dump the processed users in case we blow up somewhere
+ if ($count % $batchsize -eq 0)
+ {
+ Write-Host Hit the maximum users processed without completion... -ForegroundColor Yellow
+
+ #export the collection of users as a CSV
+ Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
+ $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation
+
+ #increment the output file counter
+ $outputfilecount+=1
+
+ #reset the collection and the user counter
+ $objOutputUsers = $null
+ $count=0
+ }
+
+ $count+=1
+
+ #need to bail out of the loop if no more users to process
+ if ($reader.NodeType -eq [System.Xml.XmlNodeType]::EndElement)
+ {
+ break
+ }
+
+ } while ($reader.Read)
+
+ #need to write out any users that didn't get picked up in a batch of 1000
+ #export the collection of users as CSV
+ Write-Host Writing processedusers${outputfilecount}.csv -ForegroundColor Yellow
+ $objOutputUsers | Export-Csv -path processedusers${outputfilecount}.csv -NoTypeInformation
} else {
active-directory How To Upgrade Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-upgrade-previous-version.md
These steps also work to move from Azure AD Sync or a solution with FIM + Azure
### Use a swing migration to upgrade 1. If you only have one Azure AD Connect server, if you are upgrading from AD Sync, or upgrading from an old version, it's a good idea to install the new version on a new Windows Server. If you already have two Azure AD Connect servers, upgrade the staging server first, and then promote the staging server to active. It's recommended to always keep a pair of active/staging servers running the same version, but it's not required. 2. If you have made a custom configuration and your staging server doesn't have it, follow the steps under [Move a custom configuration from the active server to the staging server](#move-a-custom-configuration-from-the-active-server-to-the-staging-server).
-3. Let the sync engine run full import and full synchronization on your staging server.
+3. Let the sync engine run full import and full synchronization on your staging server.
4. Verify that the new configuration did not cause any unexpected changes by using the steps under "Verify" in [Verify the configuration of a server](how-to-connect-sync-staging-server.md#verify-the-configuration-of-a-server). If something is not as expected, correct it, run a sync cycle, and verify the data until it looks good. 5. Before upgrading the other server, switch it to staging mode and promote the staging server to be the active server. This is the last step "Switch active server" in the process to [Verify the configuration of a server](how-to-connect-sync-staging-server.md#verify-the-configuration-of-a-server). 6. Upgrade the server that is now in staging mode to the latest release. Follow the same steps as before to get the data and configuration upgraded. If you upgrade from Azure AD Sync, you can now turn off and decommission your old server.
active-directory Plan Connect Userprincipalname https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-connect-userprincipalname.md
The following are example scenarios of how the UPN is calculated based on the gi
![Scenario1](./media/plan-connect-userprincipalname/example1.png) On-Premises user object:-- mailNickName : &lt;not set&gt;-- proxyAddresses : {SMTP:us1@contoso.com}-- mail : us2@contoso.com-- userPrincipalName : us3@contoso.com
+- mailNickName: &lt;not set&gt;
+- proxyAddresses: {SMTP:us1@contoso.com}
+- mail: us2@contoso.com
+- userPrincipalName: us3@contoso.com
Synchronized the user object to Azure AD Tenant for the first time - Set Azure AD MailNickName attribute to primary SMTP address prefix.
Synchronized the user object to Azure AD Tenant for the first time
- Set Azure AD UserPrincipalName attribute to MOERA. Azure AD Tenant user object:-- MailNickName : us1 -- UserPrincipalName : us1@contoso.onmicrosoft.com-
+- MailNickName : us1
+- UserPrincipalName: us1@contoso.onmicrosoft.com
### Scenario 2: Non-verified UPN suffix – set on-premises mailNickName attribute ![Scenario2](./media/plan-connect-userprincipalname/example2.png) On-Premises user object:-- mailNickName : us4-- proxyAddresses : {SMTP:us1@contoso.com}-- mail : us2@contoso.com-- userPrincipalName : us3@contoso.com
+- mailNickName: us4
+- proxyAddresses: {SMTP:us1@contoso.com}
+- mail: us2@contoso.com
+- userPrincipalName: us3@contoso.com
Synchronize update on on-premises mailNickName attribute to Azure AD Tenant - Update Azure AD MailNickName attribute with on-premises mailNickName attribute. - Because there is no update to the on-premises userPrincipalName attribute, there is no change to the Azure AD UserPrincipalName attribute. Azure AD Tenant user object:-- MailNickName : us4-- UserPrincipalName : us1@contoso.onmicrosoft.com
+- MailNickName: us4
+- UserPrincipalName: us1@contoso.onmicrosoft.com
### Scenario 3: Non-verified UPN suffix – update on-premises userPrincipalName attribute ![Scenario3](./media/plan-connect-userprincipalname/example3.png) On-Premises user object:-- mailNickName : us4-- proxyAddresses : {SMTP:us1@contoso.com}-- mail : us2@contoso.com-- userPrincipalName : us5@contoso.com
+- mailNickName: us4
+- proxyAddresses: {SMTP:us1@contoso.com}
+- mail: us2@contoso.com
+- userPrincipalName: us5@contoso.com
Synchronize update on on-premises userPrincipalName attribute to Azure AD Tenant - Update on on-premises userPrincipalName attribute triggers recalculation of MOERA and Azure AD UserPrincipalName attribute.
Synchronize update on on-premises userPrincipalName attribute to Azure AD Tenant
- Set Azure AD UserPrincipalName attribute to MOERA. Azure AD Tenant user object:-- MailNickName : us4-- UserPrincipalName : us4@contoso.onmicrosoft.com
+- MailNickName: us4
+- UserPrincipalName: us4@contoso.onmicrosoft.com
### Scenario 4: Non-verified UPN suffix – update primary SMTP address and on-premises mail attribute ![Scenario4](./media/plan-connect-userprincipalname/example4.png) On-Premises user object:-- mailNickName : us4-- proxyAddresses : {SMTP:us6@contoso.com}-- mail : us7@contoso.com-- userPrincipalName : us5@contoso.com
+- mailNickName: us4
+- proxyAddresses: {SMTP:us6@contoso.com}
+- mail: us7@contoso.com
+- userPrincipalName: us5@contoso.com
Synchronize update on on-premises mail attribute and primary SMTP address to Azure AD Tenant - After the initial synchronization of the user object, updates to the on-premises mail attribute and the primary SMTP address will not affect the Azure AD MailNickName or the UserPrincipalName attribute. Azure AD Tenant user object:-- MailNickName : us4-- UserPrincipalName : us4@contoso.onmicrosoft.com
+- MailNickName: us4
+- UserPrincipalName: us4@contoso.onmicrosoft.com
### Scenario 5: Verified UPN suffix – update on-premises userPrincipalName attribute suffix ![Scenario5](./media/plan-connect-userprincipalname/example5.png) On-Premises user object:-- mailNickName : us4-- proxyAddresses : {SMTP:us6@contoso.com}-- mail : us7@contoso.com-- userPrincipalName : us5@verified.contoso.com
+- mailNickName: us4
+- proxyAddresses: {SMTP:us6@contoso.com}
+- mail: us7@contoso.com
+- userPrincipalName: us5@verified.contoso.com
Synchronize update on on-premises userPrincipalName attribute to the Azure AD Tenant - Update on on-premises userPrincipalName attribute triggers recalculation of Azure AD UserPrincipalName attribute. - Set Azure AD UserPrincipalName attribute to on-premises userPrincipalName attribute as the UPN suffix is verified with the Azure AD Tenant. Azure AD Tenant user object:-- MailNickName : us4 -- UserPrincipalName : us5@verified.contoso.com
+- MailNickName: us4
+- UserPrincipalName: us5@verified.contoso.com
## Next Steps - [Integrate your on-premises directories with Azure Active Directory](../whatis-hybrid-identity.md)
active-directory Reference Connect User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-user-privacy.md
Improve user privacy for Azure AD Connect installations in two ways:
-1. Upon request, extract data for a person and remove data from that person from the installations
-2. Ensure no data is retained beyond 48 hours.
+1. Upon request, extract data for a person and remove data from that person from the installations
+2. Ensure no data is retained beyond 48 hours.
The Azure AD Connect team recommends the second option since it is much easier to implement and maintain. An Azure AD Connect sync server stores the following user privacy data:
-1. Data about a person in the **Azure AD Connect database**
-2. Data in the **Windows Event log** files that may contain information about a person
-3. Data in the **Azure AD Connect installation log files** that may contain about a person
+1. Data about a person in the **Azure AD Connect database**
+2. Data in the **Windows Event log** files that may contain information about a person
+3. Data in the **Azure AD Connect installation log files** that may contain information about a person
Azure AD Connect customers should use the following guidelines when removing user data:
-1. Delete the contents of the folder that contains the Azure AD Connect installation log files on a regular basis ΓÇô at least every 48 hours
-2. This product may also create Event Logs. To learn more about Event Logs logs, please see the [documentation here](/windows/win32/wes/windows-event-log).
+1. Delete the contents of the folder that contains the Azure AD Connect installation log files on a regular basis – at least every 48 hours
+2. This product may also create Event Logs. To learn more about Event Logs, please see the [documentation here](/windows/win32/wes/windows-event-log).
Data about a person is automatically removed from the Azure AD Connect database when that person's data is removed from the source system where it originated. No specific action from administrators is required to be GDPR compliant. However, it does require that the Azure AD Connect data is synced with your data source at least every two days.
If ($File.ToUpper() -ne "$env:programdata\aadconnect\PERSISTEDSTATE.XML".toupper
### Schedule this script to run every 48 hours Use the following steps to schedule the script to run every 48 hours.
-1. Save the script in a file with the extension **&#46;PS1**, then open the Control Panel and click on **Systems and Security**.
+1. Save the script in a file with the extension **&#46;PS1**, then open the Control Panel and click on **Systems and Security**.
![System](./media/reference-connect-user-privacy/gdpr2.png)
-2. Under the Administrative Tools heading, click on **Schedule Tasks**.
+2. Under the Administrative Tools heading, click on **Schedule Tasks**.
![Task](./media/reference-connect-user-privacy/gdpr3.png)
-3. In Task Scheduler, right click on **Task Schedule Library** and click on **Create Basic task…**
-4. Enter the name for the new task and click **Next**.
-5. Select **Daily** for the task trigger and click on **Next**.
-6. Set the recurrence to **2 days** and click **Next**.
-7. Select **Start a program** as the action and click on **Next**.
-8. Type **PowerShell** in the box for the Program/script, and in box labeled **Add arguments (optional)**, enter the full path to the script that you created earlier, then click **Next**.
-9. The next screen shows a summary of the task you are about to create. Verify the values and click **Finish** to create the task.
+3. In Task Scheduler, right click on **Task Schedule Library** and click on **Create Basic task…**
+4. Enter the name for the new task and click **Next**.
+5. Select **Daily** for the task trigger and click on **Next**.
+6. Set the recurrence to **2 days** and click **Next**.
+7. Select **Start a program** as the action and click on **Next**.
+8. Type **PowerShell** in the box for the Program/script, and in the box labeled **Add arguments (optional)**, enter the full path to the script that you created earlier, then click **Next**.
+9. The next screen shows a summary of the task you are about to create. Verify the values and click **Finish** to create the task. To confirm and test the task afterwards, see the sketch after these steps.
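To confirm that the task was registered with the expected two-day recurrence, and to trigger it once immediately, a short sketch follows. The task name is a placeholder for whatever you entered in step 4.

```powershell
# Check the registered task and its last/next run times, then start it once manually.
# 'AAD Connect log cleanup' is a placeholder task name.
Get-ScheduledTask -TaskName 'AAD Connect log cleanup' | Get-ScheduledTaskInfo
Start-ScheduledTask -TaskName 'AAD Connect log cleanup'
```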
active-directory Tshoot Connect Largeobjecterror Usercertificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-largeobjecterror-usercertificate.md
The steps can be summarized as:
8. Export the changes to Azure AD. 9. Re-enable sync scheduler.
-### Step 1. Disable sync scheduler and verify there is no synchronization in progress
+### Step 1. Disable sync scheduler and verify there is no synchronization in progress
Ensure no synchronization takes place while you are in the middle of implementing a new sync rule to avoid unintended changes being exported to Azure AD. To disable the built-in sync scheduler: 1. Start a PowerShell session on the Azure AD Connect server.
Ensure no synchronization takes place while you are in the middle of implementin
1. Go to the **Operations** tab and confirm there is no operation whose status is *"in progress."*
-### Step 2. Find the existing outbound sync rule for userCertificate attribute
+### Step 2. Find the existing outbound sync rule for userCertificate attribute
There should be an existing sync rule that is enabled and configured to export userCertificate attribute for User objects to Azure AD. Locate this sync rule to find out its **precedence** and **scoping filter** configuration: 1. Start the **Synchronization Rules Editor** by going to START → Synchronization Rules Editor.
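If you prefer PowerShell over the Synchronization Rules Editor for this lookup, a sketch like the following lists the outbound rules with their precedence so you can locate the one that exports userCertificate. Property names are taken from the ADSync module; verify them on your server.

```powershell
# List outbound synchronization rules sorted by precedence to help find the rule
# that flows userCertificate to Azure AD.
Get-ADSyncRule |
    Where-Object { $_.Direction -eq 'Outbound' } |
    Sort-Object Precedence |
    Select-Object Name, Precedence, Disabled
```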
The new sync rule must have the same **scoping filter** and **higher precedence*
6. Click the **Add** button to create the sync rule.
-### Step 4. Verify the new sync rule on an existing object with LargeObject error
+### Step 4. Verify the new sync rule on an existing object with LargeObject error
This is to verify that the sync rule created is working correctly on an existing AD object with LargeObject error before you apply it to other objects: 1. Go to the **Operations** tab in the Synchronization Service Manager. 2. Select the most recent Export to Azure AD operation and click on one of the objects with LargeObject errors.
-3. In the Connector Space Object Properties pop-up screen, click on the **Preview** button.
+3. In the Connector Space Object Properties pop-up screen, click on the **Preview** button.
4. In the Preview pop-up screen, select **Full synchronization** and click **Commit Preview**. 5. Close the Preview screen and the Connector Space Object Properties screen. 6. Go to the **Connectors** tab in the Synchronization Service Manager.
This is to verify that the sync rule created is working correctly on an existing
8. In the Run Connector pop-up, select **Export** step and click **OK**. 9. Wait for Export to Azure AD to complete and confirm there is no more LargeObject error on this specific object.
-### Step 5. Apply the new sync rule to remaining objects with LargeObject error
+### Step 5. Apply the new sync rule to remaining objects with LargeObject error
Once the sync rule has been added, you need to run a full synchronization step on the AD Connector: 1. Go to the **Connectors** tab in the Synchronization Service Manager. 2. Right-click on the **AD** Connector and select **Run...**
Once the sync rule has been added, you need to run a full synchronization step o
4. Wait for the Full Synchronization step to complete. 5. Repeat the above steps for the remaining AD Connectors if you have more than one AD Connectors. Usually, multiple connectors are required if you have multiple on-premises directories.
-### Step 6. Verify there are no unexpected changes waiting to be exported to Azure AD
+### Step 6. Verify there are no unexpected changes waiting to be exported to Azure AD
1. Go to the **Connectors** tab in the Synchronization Service Manager. 2. Right-click on the **Azure AD** Connector and select **Search Connector Space**. 3. In the Search Connector Space pop-up:
Once the sync rule has been added, you need to run a full synchronization step o
3. Click **Search** button to return all objects with changes waiting to be exported to Azure AD. 4. Verify there are no unexpected changes. To examine the changes for a given object, double-click on the object.
-### Step 7. Export the changes to Azure AD
+### Step 7. Export the changes to Azure AD
To export the changes to Azure AD: 1. Go to the **Connectors** tab in the Synchronization Service Manager. 2. Right-click on the **Azure AD** Connector and select **Run...** 4. In the Run Connector pop-up, select **Export** step and click **OK**. 5. Wait for Export to Azure AD to complete and confirm there are no more LargeObject errors.
-### Step 8. Re-enable sync scheduler
+### Step 8. Re-enable sync scheduler
Now that the issue is resolved, re-enable the built-in sync scheduler: 1. Start PowerShell session. 2. Re-enable scheduled synchronization by running cmdlet: `Set-ADSyncScheduler -SyncCycleEnabled $true`
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
Once you determine if the workload identity was compromised, dismiss the account
## Remediate risky workload identities
-1. Inventory credentials assigned to the risky workload identity, whether for the service principal or application objects.
+1. Inventory credentials assigned to the risky workload identity, whether for the service principal or application objects.
1. Add a new credential. Microsoft recommends using x509 certificates. 1. Remove the compromised credentials. If you believe the account is at risk, we recommend removing all existing credentials.
-1. Remediate any Azure KeyVault secrets that the Service Principal has access to by rotating them.
+1. Remediate any Azure KeyVault secrets that the Service Principal has access to by rotating them.
The [Azure AD Toolkit](https://github.com/microsoft/AzureADToolkit) is a PowerShell module that can help you perform some of these actions.
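As an alternative, with the Microsoft Graph PowerShell SDK, steps 1 and 3 might look like the sketch below. The service principal object ID and key ID are placeholders, and the exact parameter shape of the removal cmdlet should be verified against your SDK version.

```powershell
# Sketch: inventory credentials on a risky service principal, then remove a
# compromised password credential. Object ID and key ID are placeholders.
Connect-MgGraph -Scopes 'Application.ReadWrite.All'

$sp = Get-MgServicePrincipal -ServicePrincipalId '<service-principal-object-id>'
$sp.KeyCredentials      | Select-Object DisplayName, KeyId, EndDateTime
$sp.PasswordCredentials | Select-Object DisplayName, KeyId, EndDateTime

Remove-MgServicePrincipalPassword -ServicePrincipalId $sp.Id `
    -BodyParameter @{ keyId = '<compromised-key-id>' }
```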
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
If you already have risk policies enabled in Identity Protection, we highly reco
### Migrating to Conditional Access
-1. **Create an equivalent** [user risk-based](#user-risk-policy-in-conditional-access) and [sign-in risk-based ](#sign-in-risk-policy-in-conditional-access) policy in Conditional Access in report-only mode. You can create a policy with the steps above or using [Conditional Access templates](../conditional-access/concept-conditional-access-policy-common.md) based on Microsoft's recommendations and your organizational requirements.
+1. **Create an equivalent** [user risk-based](#user-risk-policy-in-conditional-access) and [sign-in risk-based](#sign-in-risk-policy-in-conditional-access) policy in Conditional Access in report-only mode. You can create a policy with the steps above or using [Conditional Access templates](../conditional-access/concept-conditional-access-policy-common.md) based on Microsoft's recommendations and your organizational requirements.
1. Ensure that the new Conditional Access risk policy works as expected by testing it in [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md).
-1. **Enable** the new Conditional Access risk policy. You can choose to have both policies running side-by-side to confirm the new policies are working as expected before turning off the Identity Protection risk policies.
+1. **Enable** the new Conditional Access risk policy. You can choose to have both policies running side-by-side to confirm the new policies are working as expected before turning off the Identity Protection risk policies.
1. Browse back to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select this new policy to edit it. 1. Set **Enable policy** to **On** to enable the policy
-1. **Disable** the old risk policies in Identity Protection.
+1. **Disable** the old risk policies in Identity Protection.
1. Browse to **Azure Active Directory** > **Identity Protection** > Select the **User risk** or **Sign-in risk** policy. 1. Set **Enforce policy** to **Off**
-1. Create other risk policies if needed in [Conditional Access](../conditional-access/concept-conditional-access-policy-common.md).
+1. Create other risk policies if needed in [Conditional Access](../conditional-access/concept-conditional-access-policy-common.md).
## Next steps
active-directory Id Protection Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/id-protection-dashboard.md
Customers with P2 licenses can view a comprehensive list of recommendations that
Recent Activity provides a summary of recent risk-related activities in your tenant. Possible activity types are:
-1. Attack Activity
-1. Admin Remediation Activity
-1. Self-Remediation Activity
-1. New High-Risk Users
+1. Attack Activity
+1. Admin Remediation Activity
+1. Self-Remediation Activity
+1. New High-Risk Users
[![Screenshot showing recent activities in the dashboard.](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-recent-activities.png)](./media/id-protection-dashboard/microsoft-entra-id-protection-dashboard-recent-activities.png)
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
You'll need to consent to the `Application.ReadWrite.All` permission.
Import-Module Microsoft.Graph.Applications $params = @{
- Tags = @(
- "HR"
- "Payroll"
- "HideApp"
- )
- Info = @{
- LogoUrl = "https://cdn.pixabay.com/photo/2016/03/21/23/25/link-1271843_1280.png"
- MarketingUrl = "https://www.contoso.com/app/marketing"
- PrivacyStatementUrl = "https://www.contoso.com/app/privacy"
- SupportUrl = "https://www.contoso.com/app/support"
- TermsOfServiceUrl = "https://www.contoso.com/app/termsofservice"
- }
- Web = @{
- HomePageUrl = "https://www.contoso.com/"
- LogoutUrl = "https://www.contoso.com/frontchannel_logout"
- RedirectUris = @(
- "https://localhost"
- )
- }
- ServiceManagementReference = "Owners aliases: Finance @ contosofinance@contoso.com; The Phone Company HR consulting @ hronsite@thephone-company.com;"
+ Tags = @(
+ "HR"
+ "Payroll"
+ "HideApp"
+ )
+ Info = @{
+ LogoUrl = "https://cdn.pixabay.com/photo/2016/03/21/23/25/link-1271843_1280.png"
+ MarketingUrl = "https://www.contoso.com/app/marketing"
+ PrivacyStatementUrl = "https://www.contoso.com/app/privacy"
+ SupportUrl = "https://www.contoso.com/app/support"
+ TermsOfServiceUrl = "https://www.contoso.com/app/termsofservice"
+ }
+ Web = @{
+ HomePageUrl = "https://www.contoso.com/"
+ LogoutUrl = "https://www.contoso.com/frontchannel_logout"
+ RedirectUris = @(
+ "https://localhost"
+ )
+ }
+ ServiceManagementReference = "Owners aliases: Finance @ contosofinance@contoso.com; The Phone Company HR consulting @ hronsite@thephone-company.com;"
} Update-MgApplication -ApplicationId $applicationId -BodyParameter $params
active-directory Configure Linked Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-linked-sign-on.md
To configure linked-based SSO in your Azure AD tenant, you need:
## Configure linked-based single sign-on
-1. Sign in to the [Azure portal](https://portal.azure.com) with the appropriate role.
-2. Select **Azure Active Directory** in Azure Services, and then select **Enterprise applications**.
-3. Search for and select the application that you want to add linked SSO.
-4. Select **Single sign-on** and then select **Linked**.
-5. Enter the URL for the sign-in page of the application.
-6. Select **Save**.
+1. Sign in to the [Azure portal](https://portal.azure.com) with the appropriate role.
+2. Select **Azure Active Directory** in Azure Services, and then select **Enterprise applications**.
+3. Search for and select the application that you want to add linked SSO.
+4. Select **Single sign-on** and then select **Linked**.
+5. Enter the URL for the sign-in page of the application.
+6. Select **Save**.
## Next steps
active-directory Configure Password Single Sign On Non Gallery Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
To configure password-based SSO in your Azure AD tenant, you need:
## Configure password-based single sign-on
-1. Sign in to the [Azure portal](https://portal.azure.com) with the appropriate role.
-1. Select **Azure Active Directory** in Azure Services, and then select **Enterprise applications**.
-1. Search for and select the application that you want to add password-based SSO.
-1. Select **Single sign-on** and then select **Password-based**.
-1. Enter the URL for the sign-in page of the application.
-1. Select **Save**.
+1. Sign in to the [Azure portal](https://portal.azure.com) with the appropriate role.
+1. Select **Azure Active Directory** in Azure Services, and then select **Enterprise applications**.
+1. Search for and select the application that you want to add password-based SSO.
+1. Select **Single sign-on** and then select **Password-based**.
+1. Enter the URL for the sign-in page of the application.
+1. Select **Save**.
Azure AD parses the HTML of the sign-in page for username and password input fields. If the attempt succeeds, you're done. Your next step is to [Assign users or groups](add-application-portal-assign-users.md) to the application.
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
In Service Provider settings, define SAML SP instance settings for the SHA-prote
3. (Optional) In Security Settings, select **Enable Encryption Assertion** to enable Azure AD to encrypt issued SAML assertions. Encrypting assertions between Azure AD and BIG-IP APM helps ensure that content tokens aren't intercepted and that personal or corporate data isn't compromised.
-4. In **Security Settings**, from the **Assertion Decryption Private Key** list, select **Create New**.
+4. In **Security Settings**, from the **Assertion Decryption Private Key** list, select **Create New**.
![Screenshot of the Create New option in the Assertion Decryption Private Key list.](./media/f5-big-ip-oracle/configure-security-create-new.png)
-5. Select **OK**.
-6. The **Import SSL Certificate and Keys** dialog appears.
-7. For **Import Type**, select **PKCS 12 (IIS)**. This action imports the certificate and private key.
-8. For **Certificate and Key Name**, select **New** and enter the input.
-9. Enter the **Password**.
-10. Select **Import**.
-11. Close the browser tab to return to the main tab.
+5. Select **OK**.
+6. The **Import SSL Certificate and Keys** dialog appears.
+7. For **Import Type**, select **PKCS 12 (IIS)**. This action imports the certificate and private key.
+8. For **Certificate and Key Name**, select **New** and enter the input.
+9. Enter the **Password**.
+10. Select **Import**.
+11. Close the browser tab to return to the main tab.
![Screenshot of selections and entries for SSL Certificate Key Source.](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)
-12. Check the box for **Enable Encrypted Assertion**.
-13. If you enabled encryption, from the **Assertion Decryption Private Key** list, select the certificate. BIG-IP APM uses this certificate private key to decrypt Azure AD assertions.
-14. If you enabled encryption, from the **Assertion Decryption Certificate** list, select the certificate. BIG-IP uploads this certificate to Azure AD to encrypt the issued SAML assertions.
+12. Check the box for **Enable Encrypted Assertion**.
+13. If you enabled encryption, from the **Assertion Decryption Private Key** list, select the certificate. BIG-IP APM uses this certificate private key to decrypt Azure AD assertions.
+14. If you enabled encryption, from the **Assertion Decryption Certificate** list, select the certificate. BIG-IP uploads this certificate to Azure AD to encrypt the issued SAML assertions.
![Screenshot of two entries and one option for Security Settings.](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
Conditional Access policies control access based on device, application, locatio
To select a policy to be applied to the application being published:
-1. On the **Conditional Access Policy** tab, in the **Available Policies** list, select a policy.
-2. Select the **right arrow** and move it to the **Selected Policies** list.
+1. On the **Conditional Access Policy** tab, in the **Available Policies** list, select a policy.
+2. Select the **right arrow** and move it to the **Selected Policies** list.
> [!NOTE]
> You can select the **Include** or **Exclude** option for a policy. If both options are selected, the policy is unenforced.
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
Use the Service Provider settings to define SAML SP instance properties of the a
![Screenshot of the Create New option from the Assertion Decryption Private Key list.](./media/f5-big-ip-oracle/configure-security-create-new.png)
-5. Select **OK**.
-6. The **Import SSL Certificate and Keys** dialog appears in a new tab.
+5. Select **OK**.
+6. The **Import SSL Certificate and Keys** dialog appears in a new tab.
-7. To import the certificate and private key, select **PKCS 12 (IIS)**.
-8. Close the browser tab to return to the main tab.
+7. To import the certificate and private key, select **PKCS 12 (IIS)**.
+8. Close the browser tab to return to the main tab.
![Screenshot of options and selections for Import SSL Certificates and Keys.](./media/f5-big-ip-easy-button-sap-erp/import-ssl-certificates-and-keys.png)
-9. For **Enable Encrypted Assertion**, check the box.
+9. For **Enable Encrypted Assertion**, check the box.
10. If you enabled encryption, from the **Assertion Decryption Private Key** list, select the private key for the certificate BIG-IP APM uses to decrypt Azure AD assertions. 11. If you enabled encryption, from the **Assertion Decryption Certificate** list, select the certificate BIG-IP uploads to Azure AD to encrypt the issued SAML assertions.
The **Selected Policies** view lists policies targeting cloud apps. You can't de
To select a policy for the application being published:
-1. From the **Available Policies** list, select the policy.
-2. Select the right arrow.
-3. Move the policy to the **Selected Policies** list.
+1. From the **Available Policies** list, select the policy.
+2. Select the right arrow.
+3. Move the policy to the **Selected Policies** list.
Selected policies have an **Include** or **Exclude** option checked. If both options are checked, the selected policy isn't enforced.
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
For BIG-IP to be pre-configured and ready for SHA scenarios, provision Client an
![Screenshot of certificate, key, and chain selections.](./media/f5ve-deployment-plan/contoso-wildcard.png)
-13. Repeat steps to create an **SSL server certificate profile**.
-14. From the top ribbon, select **SSL** > **Server** > **Create**.
-15. In the **New Server SSL Profile** page, enter a unique, friendly **Name**.
-16. Ensure the Parent profile is set to **serverssl**.
-17. Select the far-right check box for the **Certificate** and **Key** rows
-18. From the **Certificate** and **Key** drop-down lists, select your imported certificate.
-19. Select **Finished**.
+13. Repeat steps to create an **SSL server certificate profile**.
+14. From the top ribbon, select **SSL** > **Server** > **Create**.
+15. In the **New Server SSL Profile** page, enter a unique, friendly **Name**.
+16. Ensure the Parent profile is set to **serverssl**.
+17. Select the far-right check box for the **Certificate** and **Key** rows
+18. From the **Certificate** and **Key** drop-down lists, select your imported certificate.
+19. Select **Finished**.
![Screenshot of general properties and configuration selections.](./media/f5ve-deployment-plan/server-ssl-profile.png)
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
In the example, the resource enterprise application is Microsoft Graph of object
1. Grant the delegated permissions to the client enterprise application by running the following request.
- ```http
+ ```http
POST https://graph.microsoft.com/v1.0/oauth2PermissionGrants Request body
active-directory Grant Consent Single User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-consent-single-user.md
In the example, the resource enterprise application is Microsoft Graph of object
1. Grant the delegated permissions to the client enterprise application on behalf of the user by running the following request.
- ```http
+ ```http
POST https://graph.microsoft.com/v1.0/oauth2PermissionGrants Request body
active-directory Tutorial Govern Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-govern-monitor.md
To send logs to your logs analytics workspace:
1. Select **Diagnostic settings**, and then select **Add diagnostic setting**. You can also select Export Settings from the Audit Logs or Sign-ins page to get to the diagnostic settings configuration page. 1. In the Diagnostic settings menu, select **Send to Log Analytics workspace**, and then select Configure. 1. Select the Log Analytics workspace you want to send the logs to, or create a new workspace in the provided dialog box.
-1. Select the logs that you would like to send to the workspace.
-1. Select **Save** to save the setting.
+1. Select the logs that you would like to send to the workspace.
+1. Select **Save** to save the setting.
After about 15 minutes, verify that events are streamed to your Log Analytics workspace.
active-directory How To Assign Managed Identity Via Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md
The policy is designed to implement this recommendation.
When executed, the policy takes the following actions:
-1. Create, if not exist, a new built-in user-assigned managed identity in the subscription and each Azure region based on the VMs that are in scope of the policy.
-2. Once created, put a lock on the user-assigned managed identity so that it will not be accidentally deleted.
-3. Assign the built-in user-assigned managed identity to Virtual Machines from the subscription and region based on the VMs that are in scope of the policy.
+1. Create, if one doesn't already exist, a new built-in user-assigned managed identity in the subscription and in each Azure region, based on the VMs that are in scope of the policy.
+2. Once created, put a lock on the user-assigned managed identity so that it isn't accidentally deleted.
+3. Assign the built-in user-assigned managed identity to virtual machines in the subscription and region that are in scope of the policy. (A rough Azure PowerShell equivalent of these actions is sketched after this list.)
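For a single VM, the manual equivalent of these three actions might look like the sketch below. Resource group, identity, and VM names are placeholders, and the policy's built-in identity naming convention may differ.

```powershell
# Sketch only: create a user-assigned managed identity, lock it against deletion,
# and assign it to an existing VM. All names are placeholders.
$rg = 'rg-identities'
$id = New-AzUserAssignedIdentity -ResourceGroupName $rg -Name 'uami-policy-westus' -Location 'westus'

New-AzResourceLock -LockName 'do-not-delete-uami' -LockLevel CanNotDelete `
    -ResourceGroupName $rg -ResourceName $id.Name `
    -ResourceType 'Microsoft.ManagedIdentity/userAssignedIdentities' -Force

$vm = Get-AzVM -ResourceGroupName 'rg-workloads' -Name 'vm-app-01'
Update-AzVM -ResourceGroupName 'rg-workloads' -VM $vm `
    -IdentityType UserAssigned -IdentityId $id.Id
```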
> [!NOTE]
> If the Virtual Machine has exactly 1 user-assigned managed identity already assigned, then the policy skips this VM to assign the built-in identity. This is to make sure assignment of the policy does not break applications that take a dependency on [the default behavior of the token endpoint on IMDS.](managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request)
active-directory How To View Associated Resources For An Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-associated-resources-for-an-identity.md
Notice a sample response from the REST API:
```json {
- "totalCount": 2,
- "value": [{
- "id": "/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.CognitiveServices/accounts/test1",
- "name": "test1",
- "type": "microsoft.cognitiveservices/accounts",
- "resourceGroup": "testrg",
- "subscriptionId": "{subId}",
- "subscriptionDisplayName": "TestSubscription"
- },
- {
- "id": "/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.CognitiveServices/accounts/test2",
- "name": "test2",
- "type": "microsoft.cognitiveservices/accounts",
- "resourceGroup": "testrg",
- "subscriptionId": "{subId}",
- "subscriptionDisplayName": "TestSubscription"
- }
- ],
- "nextLink": "https://management.azure.com/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testid?skiptoken=ew0KICAiJGlkIjogIjEiLA0KICAiTWF4Um93cyI6IDIsDQogICJSb3dzVG9Ta2lwIjogMiwNCiAgIkt1c3RvQ2x1c3RlclVybCI6ICJodHRwczovL2FybXRvcG9sb2d5Lmt1c3RvLndpbmRvd3MubmV0Ig0KfQ%253d%253d&api-version=2021"
+ "totalCount": 2,
+ "value": [
+ {
+ "id": "/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.CognitiveServices/accounts/test1",
+ "name": "test1",
+ "type": "microsoft.cognitiveservices/accounts",
+ "resourceGroup": "testrg",
+ "subscriptionId": "{subId}",
+ "subscriptionDisplayName": "TestSubscription"
+ },
+ {
+ "id": "/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.CognitiveServices/accounts/test2",
+ "name": "test2",
+ "type": "microsoft.cognitiveservices/accounts",
+ "resourceGroup": "testrg",
+ "subscriptionId": "{subId}",
+ "subscriptionDisplayName": "TestSubscription"
+ }
+ ],
+ "nextLink": "https://management.azure.com/subscriptions/{subId}/resourceGroups/testrg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testid?skiptoken=ew0KICAiJGlkIjogIjEiLA0KICAiTWF4Um93cyI6IDIsDQogICJSb3dzVG9Ta2lwIjogMiwNCiAgIkt1c3RvQ2x1c3RlclVybCI6ICJodHRwczovL2FybXRvcG9sb2d5Lmt1c3RvLndpbmRvd3MubmV0Ig0KfQ%253d%253d&api-version=2021"
} ```
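Because the response is paged, keep following `nextLink` until it's no longer returned. A minimal PowerShell sketch of that pattern, assuming you already hold a valid ARM access token in `$armToken` and the initial request URI in `$firstPageUri` (both placeholders):

```powershell
# Sketch: page through the results by following nextLink until it disappears.
# Adjust the HTTP method if the API you're calling requires POST instead of GET.
$uri = $firstPageUri
$all = @()
while ($uri) {
    $page = Invoke-RestMethod -Uri $uri -Method GET -Headers @{ Authorization = "Bearer $armToken" }
    $all += $page.value
    $uri  = $page.nextLink
}
$all | Select-Object name, type, resourceGroup
```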
active-directory How To View Managed Identity Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity.md
System-assigned identity:
![Browse to active directory](./media/how-to-view-managed-identity-activity/browse-to-active-directory.png)
-2. Select **Sign-in logs** from the **Monitoring** section.
+2. Select **Sign-in logs** from the **Monitoring** section.
![Select sign-in logs](./media/how-to-view-managed-identity-activity/sign-in-logs-menu-item.png)
System-assigned identity:
![managed identity sign-in events](./media/how-to-view-managed-identity-activity/msi-sign-in-events.png)
-5. To view the identity's Enterprise application in Azure Active Directory, select the ΓÇ£Managed Identity IDΓÇ¥ column.
-6. To view the Azure resource or user-assigned managed identity, search by name in the search bar of the Azure portal.
+5. To view the identity's Enterprise application in Azure Active Directory, select the "Managed Identity ID" column.
+6. To view the Azure resource or user-assigned managed identity, search by name in the search bar of the Azure portal.
## Next steps
active-directory Tutorial Windows Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md
Using managed identities for Azure resources, your application can get access to
You'll need to use **PowerShell** in this portion. If you don't have **PowerShell** installed, download it [here](/powershell/azure/).
-1. In the portal, navigate to **Virtual Machines** and go to your Windows virtual machine and in the **Overview**, select **Connect**.
-2. Enter in your **Username** and **Password** for which you added when you created the Windows VM.
-3. Now that you've created a **Remote Desktop Connection** with the virtual machine, open **PowerShell** in the remote session.
-4. Using the Invoke-WebRequest cmdlet, make a request to the local managed identity for Azure resources endpoint to get an access token for Azure Resource Manager.
+1. In the portal, navigate to **Virtual Machines** and go to your Windows virtual machine and in the **Overview**, select **Connect**.
+2. Enter the **Username** and **Password** that you used when you created the Windows VM.
+3. Now that you've created a **Remote Desktop Connection** with the virtual machine, open **PowerShell** in the remote session.
+4. Using the Invoke-WebRequest cmdlet, make a request to the local managed identity for Azure resources endpoint to get an access token for Azure Resource Manager.
```powershell $response = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Method GET -Headers @{Metadata="true"}
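# (Sketch, not part of the original excerpt.) Parse the IMDS response and keep the
# bearer token for the subsequent Azure Resource Manager calls.
$content = $response.Content | ConvertFrom-Json
$ArmToken = $content.access_token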
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
Up until January 2023, it was required that every Privileged Access Group (forme
## Making group of users eligible for Azure AD role There are two ways to make a group of users eligible for Azure AD role:
-1. Make active assignments of users to the group, and then assign the group to a role as eligible for activation.
-2. Make active assignment of a role to a group and assign users to be eligible to group membership.
+1. Make active assignments of users to the group, and then assign the group to a role as eligible for activation.
+2. Make active assignment of a role to a group and assign users to be eligible to group membership.
To provide a group of users with just-in-time access to Azure AD directory roles with permissions in SharePoint, Exchange, or Security & Microsoft Purview compliance portal (for example, Exchange Administrator role), be sure to make active assignments of users to the group, and then assign the group to a role as eligible for activation (Option #1 above). If you choose to make active assignment of a group to a role and assign users to be eligible to group membership instead, it may take significant time to have all permissions of the role activated and ready to use.
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md
When you need to take on a group membership or ownership, you can request activa
:::image type="content" source="media/pim-for-groups/pim-group-7.png" alt-text="Screenshot of where to provide a justification in the Reason box." lightbox="media/pim-for-groups/pim-group-7.png":::
-1. Select **Activate**.
+1. Select **Activate**.
If the [role requires approval](pim-resource-roles-approval-workflow.md) to activate, an Azure notification appears in the upper right corner of your browser informing you the request is pending approval.
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
Follow these steps to make a user eligible member or owner of a group. You will
> For groups used for elevating into Azure AD roles, Microsoft recommends that you require an approval process for eligible member assignments. Assignments that can be activated without approval can leave you vulnerable to a security risk from another administrator with permission to reset an eligible user's passwords. - Active assignments don't require the member to perform any activations to use the role. Members or owners assigned as active have the privileges assigned to the role at all times.
-1. If the assignment should be permanent (permanently eligible or permanently assigned), select the **Permanently** checkbox. Depending on the group's settings, the check box might not appear or might not be editable. For more information, check out the [Configure PIM for Groups settings in Privileged Identity Management](groups-role-settings.md#assignment-duration) article.
+1. If the assignment should be permanent (permanently eligible or permanently assigned), select the **Permanently** checkbox. Depending on the group's settings, the check box might not appear or might not be editable. For more information, check out the [Configure PIM for Groups settings in Privileged Identity Management](groups-role-settings.md#assignment-duration) article.
:::image type="content" source="media/pim-for-groups/pim-group-5.png" alt-text="Screenshot of where to configure the setting for add assignments." lightbox="media/pim-for-groups/pim-group-5.png":::
-1. Select **Assign**.
+1. Select **Assign**.
## Update or remove an existing role assignment
Follow these steps to update or remove an existing role assignment. You will nee
:::image type="content" source="media/pim-for-groups/pim-group-3.png" alt-text="Screenshot of where to review existing membership or ownership assignments for selected group." lightbox="media/pim-for-groups/pim-group-3.png":::
-1. Select **Update** or **Remove** to update or remove the membership or ownership assignment.
+1. Select **Update** or **Remove** to update or remove the membership or ownership assignment.
## Next steps
active-directory Groups Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-audit.md
Follow these steps to view the audit history for groups in Privileged Identity M
:::image type="content" source="media/pim-for-groups/pim-group-19.png" alt-text="Screenshot of where to select Resource audit." lightbox="media/pim-for-groups/pim-group-19.png":::
-1. Filter the history using a predefined date or custom range.
+1. Filter the history using a predefined date or custom range.
## View my audit
Follow these steps to view the audit history for groups in Privileged Identity M
:::image type="content" source="media/pim-for-groups/pim-group-20.png" alt-text="Screenshot of where to select My audit." lightbox="media/pim-for-groups/pim-group-20.png":::
-1. Filter the history using a predefined date or custom range.
+1. Filter the history using a predefined date or custom range.
## Next steps
active-directory Groups Discover Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md
You need appropriate permissions to bring groups in Azure AD PIM. For role-assig
:::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png":::
-1. Select **Discover groups** and select a group that you want to bring under management with PIM.
+1. Select **Discover groups** and select a group that you want to bring under management with PIM.
:::image type="content" source="media/pim-for-groups/pim-group-2.png" alt-text="Screenshot of where to select a group that you want to bring under management with PIM." lightbox="media/pim-for-groups/pim-group-2.png":::
active-directory Overview Flagged Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-flagged-sign-ins.md
Flagged sign-ins gives you the ability to enable flagging when signing in using
3. In **Troubleshooting details**, select **Enable Flagging**. The text changes to **Disable Flagging**. Flagging is now enabled. 4. Close the browser window. 5. Open a new browser window (in the same browser application) and attempt the same sign-in that failed.
-6. Reproduce the sign-in error that was seen before.
+6. Reproduce the sign-in error that was seen before.
With flagging enabled, the same browser application and client must be used or the events won't be flagged. ### Admin: Find flagged events in reports
-1. In the Azure portal, go to **Sign-in logs** > **Add Filters**.
-1. From the **Pick a field** menu, select **Flagged for review** and **Apply**.
-1. All events that were flagged by users are shown.
-1. If needed, apply more filters to refine the event view.
-1. Select the event to review what happened.
+1. In the Azure portal, go to **Sign-in logs** > **Add Filters**.
+1. From the **Pick a field** menu, select **Flagged for review** and **Apply**.
+1. All events that were flagged by users are shown.
+1. If needed, apply more filters to refine the event view.
+1. Select the event to review what happened.
### Admin or Developer: Find flagged events using MS Graph
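A hedged sketch of one way to query these events with the Microsoft Graph PowerShell SDK follows. The `flaggedForReview` property on the sign-in resource is exposed on the beta endpoint; treat the exact property name and endpoint as assumptions to verify against the Graph documentation.

```powershell
# Sketch: query flagged sign-in events from the Microsoft Graph beta endpoint.
Connect-MgGraph -Scopes 'AuditLog.Read.All'
$flagged = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/auditLogs/signIns?`$filter=flaggedForReview eq true"
$flagged.value | Select-Object userPrincipalName, createdDateTime, appDisplayName
```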
active-directory Recommendation Migrate To Authenticator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-to-authenticator.md
The Microsoft Authenticator app is available for Android and iOS. Microsoft Auth
## Action plan
-1. Ensure that notification through mobile app and/or verification code from mobile app are available to users as authentication methods. How to Configure Verification Options
+1. Ensure that notification through mobile app and/or verification code from mobile app are available to users as authentication methods. How to Configure Verification Options
-2. Educate users on how to add a work or school account.
+2. Educate users on how to add a work or school account.
## Next steps
active-directory Recommendation Turn Off Per User Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-turn-off-per-user-mfa.md
This recommendation improves your user's productivity and minimizes the sign-in
1. Confirm that there's an existing CA policy with an MFA requirement. Ensure that you're covering all resources and users you would like to secure with MFA. - Review your [Conditional Access policies](https://portal.azure.com/?Microsoft_AAD_IAM_enableAadvisorFeaturePreview=true&amp%3BMicrosoft_AAD_IAM_enableAadvisorFeature=true#blade/Microsoft_AAD_IAM/PoliciesTemplateBlade).
-2. Require MFA using a Conditional Access policy.
+2. Require MFA using a Conditional Access policy.
- [Secure user sign-in events with Azure AD Multi-Factor Authentication](../authentication/tutorial-enable-azure-mfa.md). 3. Ensure that the per-user MFA configuration is turned off.
active-directory Reference Audit Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-audit-activities.md
If you're using Entitlement Management to streamline how you assign members of A
|Audit Category|Activity| ||| |EntitlementManagement|Add Entitlement Management role assignment|
-|EntitlementManagement|Administrator directly assigns user to access package|
+|EntitlementManagement|Administrator directly assigns user to access package|
|EntitlementManagement|Administrator directly removes user access package assignment| |EntitlementManagement|Approval stage completed for access package assignment request| |EntitlementManagement|Approve access package assignment request|
If you're using Entitlement Management to streamline how you assign members of A
|EntitlementManagement|Cancel access package assignment request| |EntitlementManagement|Create access package| |EntitlementManagement|Create access package assignment policy|
-|EntitlementManagement|Create access package assignment user update request|
+|EntitlementManagement|Create access package assignment user update request|
|EntitlementManagement|Create access package catalog|
-|EntitlementManagement|Create connected organization|
+|EntitlementManagement|Create connected organization|
|EntitlementManagement|Create custom extension| |EntitlementManagement|Create incompatible access package| |EntitlementManagement|Create incompatible group|
active-directory Acoustic Connect Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/acoustic-connect-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Acoustic Connect
+description: Learn how to configure single sign-on between Azure Active Directory and Acoustic Connect.
++++++++ Last updated : 07/20/2023++++
+# Azure Active Directory SSO integration with Acoustic Connect
+
+In this article, you'll learn how to integrate Acoustic Connect with Azure Active Directory (Azure AD). Acoustic Connect is a platform that helps you create marketing campaigns that resonate with people, build a loyal following, and drive revenue. When you integrate Acoustic Connect with Azure AD, you can:
+
+* Control in Azure AD who has access to Acoustic Connect.
+* Enable your users to be automatically signed-in to Acoustic Connect with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Acoustic Connect in a test environment. Acoustic Connect supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Acoustic Connect, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Acoustic Connect single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Acoustic Connect application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Acoustic Connect from the Azure AD gallery
+
+Add Acoustic Connect from the Azure AD application gallery to configure single sign-on with Acoustic Connect. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Acoustic Connect** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://www.okta.com/saml2/service-provider/<Acoustic_ID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://login.goacoustic.com/sso/saml2/<ID>`
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://login.goacoustic.com/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Acoustic Connect support team](mailto:support@acoustic.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Acoustic Connect** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+## Configure Acoustic Connect SSO
+
+To configure single sign-on on the **Acoustic Connect** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Acoustic Connect support team](mailto:support@acoustic.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Acoustic Connect test user
+
+In this section, a user called B.Simon is created in Acoustic Connect. Acoustic Connect supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Acoustic Connect, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Acoustic Connect Sign-on URL where you can initiate the login flow.
+
+* Go to Acoustic Connect Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Acoustic Connect for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Acoustic Connect tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Acoustic Connect for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Acoustic Connect, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Cloudbees Ci Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudbees-ci-tutorial.md
+
+ Title: Azure Active Directory SSO integration with CloudBees CI
+description: Learn how to configure single sign-on between Azure Active Directory and CloudBees CI.
++++++++ Last updated : 07/21/2023++++
+# Azure Active Directory SSO integration with CloudBees CI
+
+In this article, you'll learn how to integrate CloudBees CI with Azure Active Directory (Azure AD). Centralize management, ensure compliance, and automate at scale with CloudBees CI - the secure, scalable, and flexible CI solution based on Jenkins. When you integrate CloudBees CI with Azure AD, you can:
+
+* Control in Azure AD who has access to CloudBees CI.
+* Enable your users to be automatically signed-in to CloudBees CI with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for CloudBees CI in a test environment. CloudBees CI supports only **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with CloudBees CI, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* CloudBees CI single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the CloudBees CI application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add CloudBees CI from the Azure AD gallery
+
+Add CloudBees CI from the Azure AD application gallery to configure single sign-on with CloudBees CI. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **CloudBees CI** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `<Customer_EntityID>`
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://<CustomerDomain>/cjoc/securityRealm/finishLogin` |
+ | `https://<CustomerDomain>/<Environment>/securityRealm/finishLogin` |
+ | `https://cjoc.<CustomerDomain>/securityRealm/finishLogin` |
+ | `https://<Environment>.<CustomerDomain>/securityRealm/finishLogin` |
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type the URL using one of the following patterns:
+
+ | **Sign on URL** |
+ ||
+ | `https://<CustomerDomain>/cjoc` |
+ | `https://<CustomerDomain>/<Environment>` |
+ | `https://cjoc.<CustomerDomain>` |
+ | `https://<Environment>.<CustomerDomain>` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [CloudBees CI support team](mailto:support@cloudbees.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The CloudBees CI application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the CloudBees CI application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | username | user.userprincipalname |
+ | displayname | user.givenname |
+ | groups | user.groups |
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up CloudBees CI** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+## Configure CloudBees CI SSO
+
+To configure single sign-on on the **CloudBees CI** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [CloudBees CI support team](mailto:support@cloudbees.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create CloudBees CI test user
+
+In this section, you create a user called Britta Simon at CloudBees CI SSO. Work with [CloudBees CI support team](mailto:support@cloudbees.com) to add the users in the CloudBees CI SSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to CloudBees CI Sign-on URL where you can initiate the login flow.
+
+* Go to CloudBees CI Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the CloudBees CI tile in the My Apps, this will redirect to CloudBees CI Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure CloudBees CI, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Kanbanbox Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kanbanbox-tutorial.md
+
+ Title: Azure Active Directory SSO integration with KanbanBOX
+description: Learn how to configure single sign-on between Azure Active Directory and KanbanBOX.
++++++++ Last updated : 07/17/2023++++
+# Azure Active Directory SSO integration with KanbanBOX
+
+In this article, you'll learn how to integrate KanbanBOX with Azure Active Directory (Azure AD). KanbanBOX digitizes kanban material flows along the supply chain. KanbanBOX supports internal production and logistic flows, as well as collaboration with external suppliers and customers. When you integrate KanbanBOX with Azure AD, you can:
+
+* Control in Azure AD who has access to KanbanBOX.
+* Enable your users to be automatically signed-in to KanbanBOX with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for KanbanBOX in a test environment. KanbanBOX supports both **SP** and **IDP** initiated single sign-on.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with KanbanBOX, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* KanbanBOX single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the KanbanBOX application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add KanbanBOX from the Azure AD gallery
+
+Add KanbanBOX from the Azure AD application gallery to configure single sign-on with KanbanBOX. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **KanbanBOX** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+ In the **Relay State** textbox, type the URL:
+ `https://app.kanbanbox.com/auth/idp_initiated_sso_login`
+
+1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://app.kanbanbox.com/auth/login`
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up KanbanBOX** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+## Configure KanbanBOX SSO
+
+To configure single sign-on on the **KanbanBOX** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [KanbanBOX support team](mailto:help@kanbanbox.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create KanbanBOX test user
+
+In this section, you create a user called Britta Simon at KanbanBOX SSO. Work with [KanbanBOX support team](mailto:help@kanbanbox.com) to add the users in the KanbanBOX SSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to KanbanBOX Sign-on URL where you can initiate the login flow.
+
+* Go to KanbanBOX Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the KanbanBOX for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the KanbanBOX tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the KanbanBOX for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure KanbanBOX, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Whosoff Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whosoff-tutorial.md
Previously updated : 07/14/2023 Last updated : 07/31/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+1. On the **Basic SAML Configuration** section, the user doesn't have to perform any step as the app is already preintegrated with Azure.
1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
Complete the following steps to enable Azure AD single sign-on in the Azure port
`https://app.whosoff.com/int/<Integration_ID>/sso/azure/` > [!NOTE]
- > This value is not real. Update this value with the actual Sign on URL. Contact [WhosOff support team](mailto:support@whosoff.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > This value is not real. Update this value with the actual Sign on URL. You can collect the `Integration_ID` from your WhosOff account when activating Azure SSO, which is explained later in this tutorial. For any queries, contact the [WhosOff support team](mailto:support@whosoff.com). You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure WhosOff SSO
-To configure single sign-on on **WhosOff** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [WhosOff support team](mailto:support@whosoff.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. Log in to your WhosOff company site as an administrator.
+
+1. Go to **ADMINISTRATION** on the left hand menu and click **COMPANY SETTINGS** > **Single Sign On**.
+
+1. In the **Setup Single Sign On** section, perform the following steps:
+
+ ![Screenshot shows settings of metadata and configuration.](./media/whosoff-tutorial/metadata.png "Account")
+
+ 1. Select **Azure** SSO provider from the drop-down and click **Active SSO**.
+
+ 1. Once activated, copy the **Integration GUID** and save it on your computer.
+
+    1. Upload the **Federation Metadata XML** file that you downloaded from the Azure portal by clicking the **Choose File** option.
+
+ 1. Click **Save changes**.
### Create WhosOff test user
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
You are able to [search](how-to-issuer-revoke.md) for verifiable credentials wit
string claimvalue = "Bowen"; string contractid = "ZjViZjJmYzYtNzEzNS00ZDk0LWE2ZmUtYzI2ZTQ1NDNiYzVhdGVzdDM"; string output;
-
+ using (var sha256 = SHA256.Create()) {
- var input = contractid + claimvalue;
- byte[] inputasbytes = Encoding.UTF8.GetBytes(input);
- hashedsearchclaimvalue = Convert.ToBase64String(sha256.ComputeHash(inputasbytes));
+ var input = contractid + claimvalue;
+ byte[] inputasbytes = Encoding.UTF8.GetBytes(input);
+ hashedsearchclaimvalue = Convert.ToBase64String(sha256.ComputeHash(inputasbytes));
} ```
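For reference, here's the same hashing logic as a self-contained console sketch; the variable names follow the snippet above, and the claim and contract ID values are the sample placeholders, not real data:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class HashedSearchClaim
{
    static void Main()
    {
        // Sample placeholder values from the snippet above.
        string claimvalue = "Bowen";
        string contractid = "ZjViZjJmYzYtNzEzNS00ZDk0LWE2ZmUtYzI2ZTQ1NDNiYzVhdGVzdDM";

        // Hash the contract ID concatenated with the claim value, then Base64-encode the digest.
        string hashedsearchclaimvalue;
        using (var sha256 = SHA256.Create())
        {
            var input = contractid + claimvalue;
            byte[] inputasbytes = Encoding.UTF8.GetBytes(input);
            hashedsearchclaimvalue = Convert.ToBase64String(sha256.ComputeHash(inputasbytes));
        }

        Console.WriteLine(hashedsearchclaimvalue);
    }
}
```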
active-directory Using Wallet Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/using-wallet-library.md
In order to test the demo app, you need a webapp that issues credentials and mak
## Building the Android sample On your developer machine with Android Studio, do the following:
-1. Download or clone the Android Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-android/archive/refs/heads/dev.zip).
+1. Download or clone the Android Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-android/archive/refs/heads/dev.zip).
You don't need the walletlibrary folder and you can delete it if you like.
-1. Start Android Studio and open the parent folder of walletlibrarydemo
+1. Start Android Studio and open the parent folder of walletlibrarydemo
![Screenshot of Android Studio.](media/using-wallet-library/androidstudio-screenshot.png)
-1. Select **Build** menu and then **Make Project**. This step takes some time.
-1. Connect your Android test device via USB cable to your laptop
-1. Select your test device in Android Studio and click **run** button (green triangle)
+1. Select **Build** menu and then **Make Project**. This step takes some time.
+1. Connect your Android test device via USB cable to your laptop
+1. Select your test device in Android Studio and click **run** button (green triangle)
## Issuing credentials using the Android sample
-1. Start the WalletLibraryDemo app
+1. Start the WalletLibraryDemo app
![Screenshot of Create Request on Android.](media/using-wallet-library/android-create-request.png)
-1. On your laptop, launch the public demo website [https://aka.ms/vcdemo](https://aka.ms/vcdemo) and do the following
+1. On your laptop, launch the public demo website [https://aka.ms/vcdemo](https://aka.ms/vcdemo) and do the following
1. Enter your First Name and Last Name and press **Next** 1. Select **Verify with True Identity** 1. Click **Take a selfie** and **Upload government issued ID**. The demo uses simulated data and you don't need to provide a real selfie or an ID. 1. Click **Next** and **OK**
-1. Scan the QR code with your QR Code Reader app on your test device, then copy the full URL displayed in the QR Code Reader app. Remember the pin code.
-1. Switch back to WalletLibraryDemo app and paste in the URL from the clipboard
-1. Press **CREATE REQUEST** button
-1. When the app has downloaded the request, it shows a screen like below. Click on the white rectangle, which is a textbox, and enter the pin code that is displayed in the browser page. Then click the **COMPLETE** button.
+1. Scan the QR code with your QR Code Reader app on your test device, then copy the full URL displayed in the QR Code Reader app. Remember the pin code.
+1. Switch back to WalletLibraryDemo app and paste in the URL from the clipboard
+1. Press **CREATE REQUEST** button
+1. When the app has downloaded the request, it shows a screen like below. Click on the white rectangle, which is a textbox, and enter the pin code that is displayed in the browser page. Then click the **COMPLETE** button.
![Screenshot of Enter Pin Code on Android.](media/using-wallet-library/android-enter-pincode.png)
-1. Once issuance completes, the demo app displays the claims in the credential
+1. Once issuance completes, the demo app displays the claims in the credential
![Screenshot of Issuance Complete on Android.](media/using-wallet-library/android-issuance-complete.png) ## Presenting credentials using the Android sample The sample app holds the issued credential in memory, so after issuance, you can use it for presentation.
-1. The WalletLibraryDemo app should display some credential details on the home screen if you have successfully issued a credential.
+1. The WalletLibraryDemo app should display some credential details on the home screen if you have successfully issued a credential.
![Screenshot of app with credential on Android.](media/using-wallet-library/android-have-credential.png)
-1. In the Woodgrove demo in the browser, click **Return to Woodgrove** if you havenΓÇÖt done so already and continue with step 3 **Access personalized portal**.
-1. Scan the QR code with the QR Code Reader app on your test device, then copy the full URL to the clipboard.
-1. Switch back to the WalletLibraryDemo app and paste in the URL and click **CREATE REQUEST** button
-1. The app retrieves the presentation request and display the matching credentials you have in memory. In this case you only have one. **Click on it** so that the little check mark appears, then click the **COMPLETE** button to submit the presentation response
+1. In the Woodgrove demo in the browser, click **Return to Woodgrove** if you haven't done so already and continue with step 3 **Access personalized portal**.
+1. Scan the QR code with the QR Code Reader app on your test device, then copy the full URL to the clipboard.
+1. Switch back to the WalletLibraryDemo app and paste in the URL and click **CREATE REQUEST** button
+1. The app retrieves the presentation request and displays the matching credentials you have in memory. In this case you only have one. **Click on it** so that the little check mark appears, then click the **COMPLETE** button to submit the presentation response
![Screenshot of presenting credential on Android.](media/using-wallet-library/android-present-credential.png) ## Building the iOS sample On your Mac developer machine with Xcode, do the following:
-1. Download or clone the iOS Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-ios/archive/refs/heads/dev.zip).
-1. Start Xcode and open the top level folder for the WalletLibrary
-1. Set focus on WalletLibraryDemo project
+1. Download or clone the iOS Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-ios/archive/refs/heads/dev.zip).
+1. Start Xcode and open the top level folder for the WalletLibrary
+1. Set focus on WalletLibraryDemo project
![Screenshot of Xcode.](media/using-wallet-library/xcode-screenshot.png)
-1. Change the Team ID to your [Apple Developer Team ID](https://developer.apple.com/help/account/manage-your-team/locate-your-team-id).
-1. Select Product menu and then **Build**. This step takes some time.
-1. Connect your iOS test device via USB cable to your laptop
-1. Select your test device in Xcode
-1. Select Product menu and then **Run** or click on run triangle
+1. Change the Team ID to your [Apple Developer Team ID](https://developer.apple.com/help/account/manage-your-team/locate-your-team-id).
+1. Select Product menu and then **Build**. This step takes some time.
+1. Connect your iOS test device via USB cable to your laptop
+1. Select your test device in Xcode
+1. Select Product menu and then **Run** or click on run triangle
## Issuing credentials using the iOS sample
-1. Start the WalletLibraryDemo app
+1. Start the WalletLibraryDemo app
![Screenshot of Create Request on iOS.](media/using-wallet-library/ios-create-request.png)
-1. On your laptop, launch the public demo website [https://aka.ms/vcdemo](https://aka.ms/vcdemo) and do the following
+1. On your laptop, launch the public demo website [https://aka.ms/vcdemo](https://aka.ms/vcdemo) and do the following
1. Enter your First Name and Last Name and press **Next** 1. Select **Verify with True Identity** 1. Click **Take a selfie** and **Upload government issued ID**. The demo uses simulated data and you don't need to provide a real selfie or an ID. 1. Click **Next** and **OK**
-1. Scan the QR code with your QR Code Reader app on your test device, then copy the full URL displayed in the QR Code Reader app. Remember the pin code.
-1. Switch back to WalletLibraryDemo app and paste in the URL from the clipboard
-1. Press **Create Request** button
-1. When the app has downloaded the request, it shows a screen like below. Click on the **Add Pin** text to go to a screen where you can input the pin code, then click **Add** button to get back and finally click the **Complete** button.
+1. Scan the QR code with your QR Code Reader app on your test device, then copy the full URL displayed in the QR Code Reader app. Remember the pin code.
+1. Switch back to WalletLibraryDemo app and paste in the URL from the clipboard
+1. Press **Create Request** button
+1. When the app has downloaded the request, it shows a screen like below. Click on the **Add Pin** text to go to a screen where you can enter the pin code, then click the **Add** button to go back, and finally click the **Complete** button.
![Screenshot of Enter Pin Code on iOS.](media/using-wallet-library/ios-enter-pincode.png)
-1. Once issuance completes, the demo app displays the claims in the credential.
+1. Once issuance completes, the demo app displays the claims in the credential.
![Screenshot of Issuance Complete on iOS.](media/using-wallet-library/ios-issuance-complete.png) ## Presenting credentials using the iOS sample The sample app holds the issued credential in memory, so after issuance, you can use it for presentation.
-1. The WalletLibraryDemo app should display credential type name on the home screen if you have successfully issued a credential.
+1. The WalletLibraryDemo app should display credential type name on the home screen if you have successfully issued a credential.
![Screenshot of app with credential on iOS.](media/using-wallet-library/ios-have-credential.png)
-1. In the Woodgrove demo in the browser, click **Return to Woodgrove** if you havenΓÇÖt done so already and continue with step 3 **Access personalized portal**.
-1. Scan the QR code with the QR Code Reader app on your test device, then copy the full URL to the clipboard.
-1. Switch back to the WalletLibraryDemo app, ***clear the previous request*** from the textbox, paste in the URL and click **Create Request** button
-1. The app retrieves the presentation request and display the matching credentials you have in memory. In this case you only have one. **Click on it** so that the little check mark switches from blue to green, then click the **Complete** button to submit the presentation response
+1. In the Woodgrove demo in the browser, click **Return to Woodgrove** if you haven't done so already and continue with step 3 **Access personalized portal**.
+1. Scan the QR code with the QR Code Reader app on your test device, then copy the full URL to the clipboard.
+1. Switch back to the WalletLibraryDemo app, ***clear the previous request*** from the textbox, paste in the URL and click **Create Request** button
+1. The app retrieves the presentation request and displays the matching credentials you have in memory. In this case you only have one. **Click on it** so that the little check mark switches from blue to green, then click the **Complete** button to submit the presentation response
![Screenshot of presenting credential on iOS.](media/using-wallet-library/ios-present-credential.png)
ai-services Bring Your Own Storage Speech Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource.md
If you perform all actions in the section, your Storage account will be in the f
- Access to all external network traffic is prohibited. - Access to Storage account using Storage account key is prohibited. - Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited. (Except for [User delegation SAS](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens))-- Access to the BYOS-enanled Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
+- Access to the BYOS-enabled Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
So in effect your Storage account becomes completely "locked" and can only be accessed by your Speech resource, which will be able to: - Write artifacts of your Speech data processing (see details in the [correspondent articles](#next-steps)),
If you perform all actions in the section, your Storage account will be in the f
- External network traffic is allowed. - Access to Storage account using Storage account key is prohibited. - Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited. (Except for [User delegation SAS](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens))-- Access to the BYOS-enanled Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) and [User delegation SAS](../../storage/common/storage-sas-overview.md#user-delegation-sas).
+- Access to the BYOS-enabled Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) and [User delegation SAS](../../storage/common/storage-sas-overview.md#user-delegation-sas).
These are the most restricted security settings possible for the text to speech scenario. You may further customize them according to your needs.
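For context on the allowed access methods above, a user delegation SAS is signed with an Azure AD identity rather than the account key. The following is a minimal sketch using the Azure.Storage.Blobs SDK; the account, container, and blob names are placeholders, and the signed-in identity is assumed to have a role that permits requesting a user delegation key (for example, Storage Blob Data Contributor):

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class UserDelegationSasSketch
{
    static void Main()
    {
        // Placeholder names - replace with your own Storage account, container, and blob.
        string accountName = "contosospeechstorage";
        var serviceClient = new BlobServiceClient(
            new Uri($"https://{accountName}.blob.core.windows.net"),
            new DefaultAzureCredential());

        // Request a short-lived user delegation key with the caller's Azure AD identity.
        var delegationKey = serviceClient.GetUserDelegationKey(
            DateTimeOffset.UtcNow,
            DateTimeOffset.UtcNow.AddHours(1));

        // Build a read-only SAS for a single blob and sign it with the delegation key.
        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = "customvoice-artifacts",
            BlobName = "sample.wav",
            Resource = "b",
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read);

        var sasUri = new BlobUriBuilder(serviceClient.Uri)
        {
            BlobContainerName = sasBuilder.BlobContainerName,
            BlobName = sasBuilder.BlobName,
            Sas = sasBuilder.ToSasQueryParameters(delegationKey.Value, accountName)
        }.ToUri();

        Console.WriteLine(sasUri);
    }
}
```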
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
Title: Migrate your Azure Kubernetes Service (AKS) pod to use workload identity
description: In this Azure Kubernetes Service (AKS) article, you learn how to configure your Azure Kubernetes Service pod to authenticate with workload identity. Previously updated : 07/26/2023 Last updated : 07/31/2023 # Migrate from pod managed-identity to workload identity
If your cluster is already using the latest version of the Azure Identity SDK, p
If your cluster isn't using the latest version of the Azure Identity SDK, you have two options: -- You can use a migration sidecar that we provide within your Linux applications, which proxies the IMDS transactions your application makes over to [OpenID Connect][openid-connect-overview] (OIDC). The migration sidecar isn't intended to be a long-term solution, but a way to get up and running quickly on workload identity. Perform the following steps to:
+- You can use a migration sidecar that we provide within your Linux applications, which proxies the IMDS transactions your application makes over to [OpenID Connect][openid-connect-overview] (OIDC). The migration sidecar isn't intended to be a long-term solution, but a way to get up and running quickly on workload identity. Perform the following steps to:
- [Deploy the workload with migration sidecar](#deploy-the-workload-with-migration-sidecar) to proxy the application IMDS transactions. - Verify the authentication transactions are completing successfully.
If your cluster isn't using the latest version of the Azure Identity SDK, you ha
- Once the SDK's are updated to the supported version, you can remove the proxy sidecar and redeploy the application. > [!NOTE]
- > The migration sidecar is **not supported for production use**. This feature is meant to give you time to migrate your application SDK's to a supported version, and not meant or intended to be a long-term solution.
- > The migration sidecar is only for Linux containers as pod-managed identities was available on Linux node pools only.
+ > The migration sidecar is **not supported for production use**. This feature is meant to give you time to migrate your application SDKs to a supported version, and isn't intended to be a long-term solution.
+ > The migration sidecar is only available for Linux containers, because pod-managed identities were only available on Linux node pools.
- Rewrite your application to support the latest version of the [Azure Identity][azure-identity-supported-versions] client library. Afterwards, perform the following steps:
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
This scenario shows you how to configure your Azure API Management instance to protect an API. We'll use the Azure AD B2C SPA (Auth Code + PKCE) flow to acquire a token, alongside API Management to secure an Azure Functions backend using EasyAuth.
-For a conceptual overview of API authorization, see [Authentication and authorization to APIs in API Management](authentication-authorization-overview.md).
+For a conceptual overview of API authorization, see [Authentication and authorization to APIs in API Management](authentication-authorization-overview.md).
## Aims
Here's an illustration of the components in use and the flow between them once t
Here's a quick overview of the steps: 1. Create the Azure AD B2C Calling (Frontend, API Management) and API Applications with scopes and grant API Access
-1. Create the sign-up and sign-in policies to allow users to sign in with Azure AD B2C
+1. Create the sign-up and sign-in policies to allow users to sign in with Azure AD B2C
1. Configure API Management with the new Azure AD B2C Client IDs and keys to Enable OAuth2 user authorization in the Developer Console 1. Build the Function API 1. Configure the Function API to enable EasyAuth with the new Azure AD B2C Client IDΓÇÖs and Keys and lock down to APIM VIP
Here's a quick overview of the steps:
1. Set up the **CORS** policy and add the **validate-jwt** policy to validate the OAuth token for every incoming request 1. Build the calling application to consume the API 1. Upload the JS SPA Sample
-1. Configure the Sample JS Client App with the new Azure AD B2C Client IDΓÇÖs and keys
+1. Configure the Sample JS Client App with the new Azure AD B2C Client IDΓÇÖs and keys
1. Test the Client Application > [!TIP]
- > We're going to capture quite a few pieces of information and keys etc as we walk this document, you might find it handy to have a text editor open to store the following items of configuration temporarily.
+ > We're going to capture quite a few pieces of information and keys etc as we walk this document, you might find it handy to have a text editor open to store the following items of configuration temporarily.
>
- > B2C BACKEND CLIENT ID:
- > B2C BACKEND CLIENT SECRET KEY:
- > B2C BACKEND API SCOPE URI:
- > B2C FRONTEND CLIENT ID:
- > B2C USER FLOW ENDPOINT URI:
- > B2C WELL-KNOWN OPENID ENDPOINT:
- > B2C POLICY NAME: Frontendapp_signupandsignin
- > FUNCTION URL:
- > APIM API BASE URL:
- > STORAGE PRIMARY ENDPOINT URL:
+ > B2C BACKEND CLIENT ID:
+ > B2C BACKEND CLIENT SECRET KEY:
+ > B2C BACKEND API SCOPE URI:
+ > B2C FRONTEND CLIENT ID:
+ > B2C USER FLOW ENDPOINT URI:
+ > B2C WELL-KNOWN OPENID ENDPOINT:
+ > B2C POLICY NAME: Frontendapp_signupandsignin
+ > FUNCTION URL:
+ > APIM API BASE URL:
+ > STORAGE PRIMARY ENDPOINT URL:
## Configure the backend application
Open the Azure AD B2C blade in the portal and do the following steps.
> [!NOTE] > B2C Policies allow you to expose the Azure AD B2C login endpoints to be able to capture different data components and sign in users in different ways.
- >
- > In this case we configured a sign-up or sign in flow (policy). This also exposed a well-known configuration endpoint, in both cases our created policy was identified in the URL by the "p=" query string parameter.
+ >
+ > In this case we configured a sign-up or sign in flow (policy). This also exposed a well-known configuration endpoint, in both cases our created policy was identified in the URL by the "p=" query string parameter.
> > Once this is done, you now have a functional Business to Consumer identity platform that will sign users into multiple applications.
Open the Azure AD B2C blade in the portal and do the following steps.
1. Select Save. ```csharp
-
+ using System.Net; using Microsoft.AspNetCore.Mvc; using Microsoft.Extensions.Primitives;
-
+ public static async Task<IActionResult> Run(HttpRequest req, ILogger log) { log.LogInformation("C# HTTP trigger function processed a request.");
-
+ return (ActionResult)new OkObjectResult($"Hello World, time and date are {DateTime.Now.ToString()}"); }
-
+ ``` > [!TIP]
Open the Azure AD B2C blade in the portal and do the following steps.
1. Click 'Save' (at the top left of the blade). > [!IMPORTANT]
- > Now your Function API is deployed and should throw 401 responses if the correct JWT isn't supplied as an Authorization: Bearer header, and should return data when a valid request is presented.
- > You added additional defense-in-depth security in EasyAuth by configuring the 'Login With Azure AD' option to handle unauthenticated requests.
+ > Now your Function API is deployed and should throw 401 responses if the correct JWT isn't supplied as an Authorization: Bearer header, and should return data when a valid request is presented.
+ > You added additional defense-in-depth security in EasyAuth by configuring the 'Login With Azure AD' option to handle unauthenticated requests.
+ >
+ > We still have no IP security applied. If you have a valid key and OAuth2 token, anyone can call this from anywhere; ideally we want to force all requests to come via API Management.
>
- > We still have no IP security applied, if you have a valid key and OAuth2 token, anyone can call this from anywhere - ideally we want to force all requests to come via API Management.
- >
> If you're using APIM Consumption tier then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management Standard SKU and above [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the Azure API Management Consumption tier, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for the Consumption tier - steps 12-17 below do not apply. 1. Close the 'Authentication' blade from the App Service / Functions portal.
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
1. Click Browse, choose the function app you're hosting the API inside, and click select. Next, click select again. 1. Give the API a name and description for API Management's internal use and add it to the ΓÇÿunlimitedΓÇÖ Product. 1. Copy and record the API's 'base URL' and click 'create'.
-1. Click the 'settings' tab, then under subscription - switch off the 'Subscription Required' checkbox as we'll use the Oauth JWT token in this case to rate limit. Note that if you're using the consumption tier, this would still be required in a production environment.
+1. Click the 'settings' tab, then under subscription switch off the 'Subscription Required' checkbox, as we'll use the OAuth JWT token in this case to rate limit. Note that if you're using the consumption tier, this would still be required in a production environment.
> [!TIP]
- > If using the consumption tier of APIM the unlimited product won't be available as an out of the box. Instead, navigate to "Products" under "APIs" and hit "Add".
+ > If you're using the consumption tier of APIM, the unlimited product won't be available out of the box. Instead, navigate to "Products" under "APIs" and hit "Add".
> Type "Unlimited" as the product name and description and select the API you just added from the "+" APIs callout at the bottom left of the screen. Select the "published" checkbox. Leave the rest as default. Finally, hit the "create" button. This created the "unlimited" product and assigned it to your API. You can customize your new product later. ## Configure and capture the correct storage endpoint settings
-1. Open the storage accounts blade in the Azure portal
+1. Open the storage accounts blade in the Azure portal
1. Select the account you created and select the 'Static Website' blade from the Settings section (if you don't see a 'Static Website' option, check you created a V2 account). 1. Set the static web hosting feature to 'enabled', and set the index document name to 'index.html', then click 'save'. 1. Note down the contents of the 'Primary Endpoint' for later, as this location is where the frontend site will be hosted.
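If you prefer to script the preceding static website steps instead of using the portal, the same settings can be applied through the blob service properties with the Azure.Storage.Blobs SDK. This is a minimal sketch; the account name is a placeholder, and the signed-in identity is assumed to be allowed to read and write the blob service properties:

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class EnableStaticWebsite
{
    static void Main()
    {
        // Placeholder - use the V2 storage account you created earlier.
        string accountName = "contosospastorage";
        var serviceClient = new BlobServiceClient(
            new Uri($"https://{accountName}.blob.core.windows.net"),
            new DefaultAzureCredential());

        // Read the current blob service properties, enable static website hosting,
        // and set index.html as the index document.
        BlobServiceProperties properties = serviceClient.GetProperties().Value;
        properties.StaticWebsite.Enabled = true;
        properties.StaticWebsite.IndexDocument = "index.html";
        serviceClient.SetProperties(properties);

        Console.WriteLine("Static website hosting enabled.");
    }
}
```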
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
> [!NOTE] > Now Azure API management is able to respond to cross origin requests from your JavaScript SPA apps, and it will perform throttling, rate-limiting and pre-validation of the JWT auth token being passed BEFORE forwarding the request on to the Function API.
- >
+ >
> Congratulations, you now have Azure AD B2C, API Management and Azure Functions working together to publish, secure AND consume an API! > [!TIP]
- > If you're using the API Management consumption tier then instead of rate limiting by the JWT subject or incoming IP Address (Limit call rate by key policy isn't supported today for the "Consumption" tier), you can Limit by call rate quota see [here](rate-limit-policy.md).
+ > If you're using the API Management consumption tier, then instead of rate limiting by the JWT subject or incoming IP Address (Limit call rate by key policy isn't supported today for the "Consumption" tier), you can limit by call rate quota; see [here](rate-limit-policy.md).
> As this example is a JavaScript Single Page Application, we use the API Management Key only for rate-limiting and billing calls. The actual Authorization and Authentication is handled by Azure AD B2C, and is encapsulated in the JWT, which gets validated twice, once by API Management, and then by the backend Azure Function. ## Upload the JavaScript SPA sample to static storage
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
<meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1">
- <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-BmbxuPwQa2lc/FVzBcNJ7UAyJxM6wuqIj61tLrc4wSX0szH/Ev+nYRRuWlolflfl" crossorigin="anonymous">
- <script type="text/javascript" src="https://alcdn.msauth.net/browser/2.11.1/js/msal-browser.min.js"></script>
+ <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-BmbxuPwQa2lc/FVzBcNJ7UAyJxM6wuqIj61tLrc4wSX0szH/Ev+nYRRuWlolflfl" crossorigin="anonymous">
+ <script type="text/javascript" src="https://alcdn.msauth.net/browser/2.11.1/js/msal-browser.min.js"></script>
</head> <body> <div class="container-fluid"> <div class="row"> <div class="col-md-12">
- <nav class="navbar navbar-expand-lg navbar-dark bg-dark">
- <div class="container-fluid">
- <a class="navbar-brand" href="#">Azure Active Directory B2C with Azure API Management</a>
- <div class="navbar-nav">
- <button class="btn btn-success" id="signinbtn" onClick="login()">Sign In</a>
- </div>
- </div>
- </nav>
+ <nav class="navbar navbar-expand-lg navbar-dark bg-dark">
+ <div class="container-fluid">
+ <a class="navbar-brand" href="#">Azure Active Directory B2C with Azure API Management</a>
+ <div class="navbar-nav">
+ <button class="btn btn-success" id="signinbtn" onClick="login()">Sign In</a>
+ </div>
+ </div>
+ </nav>
</div> </div> <div class="row"> <div class="col-md-12"> <div class="card" >
- <div id="cardheader" class="card-header">
- <div class="card-text"id="message">Please sign in to continue</div>
- </div>
- <div class="card-body">
- <button class="btn btn-warning" id="callapibtn" onClick="getAPIData()">Call API</a>
- <div id="progress" class="spinner-border" role="status">
- <span class="visually-hidden">Loading...</span>
- </div>
- </div>
+ <div id="cardheader" class="card-header">
+ <div class="card-text"id="message">Please sign in to continue</div>
+ </div>
+ <div class="card-body">
+ <button class="btn btn-warning" id="callapibtn" onClick="getAPIData()">Call API</a>
+ <div id="progress" class="spinner-border" role="status">
+ <span class="visually-hidden">Loading...</span>
+ </div>
+ </div>
</div> </div> </div> </div> <script lang="javascript">
- // Just change the values in this config object ONLY.
- var config = {
- msal: {
- auth: {
- clientId: "{CLIENTID}", // This is the client ID of your FRONTEND application that you registered with the SPA type in Azure Active Directory B2C
- authority: "{YOURAUTHORITYB2C}", // Formatted as https://{b2ctenantname}.b2clogin.com/tfp/{b2ctenantguid or full tenant name including onmicrosoft.com}/{signuporinpolicyname}
- redirectUri: "{StoragePrimaryEndpoint}", // The storage hosting address of the SPA, a web-enabled v2 storage account - recorded earlier as the Primary Endpoint.
- knownAuthorities: ["{B2CTENANTDOMAIN}"] // {b2ctenantname}.b2clogin.com
- },
- cache: {
- cacheLocation: "sessionStorage",
- storeAuthStateInCookie: false
- }
- },
- api: {
- scopes: ["{BACKENDAPISCOPE}"], // The scope that we request for the API from B2C, this should be the backend API scope, with the full URI.
- backend: "{APIBASEURL}/hello" // The location that we'll call for the backend api, this should be hosted in API Management, suffixed with the name of the API operation (in the sample this is '/hello').
- }
- }
- document.getElementById("callapibtn").hidden = true;
- document.getElementById("progress").hidden = true;
- const myMSALObj = new msal.PublicClientApplication(config.msal);
- myMSALObj.handleRedirectPromise().then((tokenResponse) => {
- if(tokenResponse !== null){
- console.log(tokenResponse.account);
- document.getElementById("message").innerHTML = "Welcome, " + tokenResponse.account.name;
- document.getElementById("signinbtn").hidden = true;
- document.getElementById("callapibtn").hidden = false;
- }}).catch((error) => {console.log("Error Signing in:" + error);
- });
- function login() {
- try {
- myMSALObj.loginRedirect({scopes: config.api.scopes});
- } catch (err) {console.log(err);}
- }
- function getAPIData() {
- document.getElementById("progress").hidden = false;
- document.getElementById("message").innerHTML = "Calling backend ... "
- document.getElementById("cardheader").classList.remove('bg-success','bg-warning','bg-danger');
- myMSALObj.acquireTokenSilent({scopes: config.api.scopes, account: getAccount()}).then(tokenResponse => {
- const headers = new Headers();
- headers.append("Authorization", `Bearer ${tokenResponse.accessToken}`);
- fetch(config.api.backend, {method: "GET", headers: headers})
- .then(async (response) => {
- if (!response.ok)
- {
- document.getElementById("message").innerHTML = "Error: " + response.status + " " + JSON.parse(await response.text()).message;
- document.getElementById("cardheader").classList.add('bg-warning');
- }
- else
- {
- document.getElementById("cardheader").classList.add('bg-success');
- document.getElementById("message").innerHTML = await response.text();
- }
- }).catch(async (error) => {
- document.getElementById("cardheader").classList.add('bg-danger');
- document.getElementById("message").innerHTML = "Error: " + error;
- });
- }).catch(error => {console.log("Error Acquiring Token Silently: " + error);
- return myMSALObj.acquireTokenRedirect({scopes: config.api.scopes, forceRefresh: false})
- });
- document.getElementById("progress").hidden = true;
+ // Just change the values in this config object ONLY.
+ var config = {
+ msal: {
+ auth: {
+ clientId: "{CLIENTID}", // This is the client ID of your FRONTEND application that you registered with the SPA type in Azure Active Directory B2C
+ authority: "{YOURAUTHORITYB2C}", // Formatted as https://{b2ctenantname}.b2clogin.com/tfp/{b2ctenantguid or full tenant name including onmicrosoft.com}/{signuporinpolicyname}
+ redirectUri: "{StoragePrimaryEndpoint}", // The storage hosting address of the SPA, a web-enabled v2 storage account - recorded earlier as the Primary Endpoint.
+ knownAuthorities: ["{B2CTENANTDOMAIN}"] // {b2ctenantname}.b2clogin.com
+ },
+ cache: {
+ cacheLocation: "sessionStorage",
+ storeAuthStateInCookie: false
+ }
+ },
+ api: {
+ scopes: ["{BACKENDAPISCOPE}"], // The scope that we request for the API from B2C, this should be the backend API scope, with the full URI.
+ backend: "{APIBASEURL}/hello" // The location that we'll call for the backend api, this should be hosted in API Management, suffixed with the name of the API operation (in the sample this is '/hello').
+ }
+ }
+ document.getElementById("callapibtn").hidden = true;
+ document.getElementById("progress").hidden = true;
+ const myMSALObj = new msal.PublicClientApplication(config.msal);
+ myMSALObj.handleRedirectPromise().then((tokenResponse) => {
+ if(tokenResponse !== null){
+ console.log(tokenResponse.account);
+ document.getElementById("message").innerHTML = "Welcome, " + tokenResponse.account.name;
+ document.getElementById("signinbtn").hidden = true;
+ document.getElementById("callapibtn").hidden = false;
+ }}).catch((error) => {console.log("Error Signing in:" + error);
+ });
+ function login() {
+ try {
+ myMSALObj.loginRedirect({scopes: config.api.scopes});
+ } catch (err) {console.log(err);}
+ }
+ function getAPIData() {
+ document.getElementById("progress").hidden = false;
+ document.getElementById("message").innerHTML = "Calling backend ... "
+ document.getElementById("cardheader").classList.remove('bg-success','bg-warning','bg-danger');
+ myMSALObj.acquireTokenSilent({scopes: config.api.scopes, account: getAccount()}).then(tokenResponse => {
+ const headers = new Headers();
+ headers.append("Authorization", `Bearer ${tokenResponse.accessToken}`);
+ fetch(config.api.backend, {method: "GET", headers: headers})
+ .then(async (response) => {
+ if (!response.ok)
+ {
+ document.getElementById("message").innerHTML = "Error: " + response.status + " " + JSON.parse(await response.text()).message;
+ document.getElementById("cardheader").classList.add('bg-warning');
+ }
+ else
+ {
+ document.getElementById("cardheader").classList.add('bg-success');
+ document.getElementById("message").innerHTML = await response.text();
+ }
+ }).catch(async (error) => {
+ document.getElementById("cardheader").classList.add('bg-danger');
+ document.getElementById("message").innerHTML = "Error: " + error;
+ });
+ }).catch(error => {console.log("Error Acquiring Token Silently: " + error);
+ return myMSALObj.acquireTokenRedirect({scopes: config.api.scopes, forceRefresh: false})
+ });
+ document.getElementById("progress").hidden = true;
} function getAccount() { var accounts = myMSALObj.getAllAccounts();
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
1. Browse to the Static Website Primary Endpoint you stored earlier in the last section. > [!NOTE]
- > Congratulations, you just deployed a JavaScript Single Page App to Azure Storage Static content hosting.
+ > Congratulations, you just deployed a JavaScript Single Page App to Azure Storage Static content hosting.
> Since we haven't configured the JS app with your Azure AD B2C details yet, the page won't work if you open it. ## Configure the JavaScript SPA for Azure AD B2C 1. Now we know where everything is: we can configure the SPA with the appropriate API Management API address and the correct Azure AD B2C application / client IDs.
-1. Go back to the Azure portal storage blade
-1. Select 'Containers' (under 'Settings')
+1. Go back to the Azure portal storage blade
+1. Select 'Containers' (under 'Settings')
1. Select the '$web' container from the list
-1. Select https://docsupdatetracker.net/index.html blob from the list
-1. Click 'Edit'
+1. Select the index.html blob from the list
+1. Click 'Edit'
1. Update the auth values in the msal config section to match your *front-end* application you registered in B2C earlier. Use the code comments for hints on how the config values should look. The *authority* value needs to be in the format https://{b2ctenantname}.b2clogin.com/tfp/{b2ctenantname}.onmicrosoft.com/{signupandsigninpolicyname}. If you have used our sample names and your B2C tenant is called 'contoso', then you would expect the authority to be 'https://contoso.b2clogin.com/tfp/contoso.onmicrosoft.com/Frontendapp_signupandsignin'. 1. Set the api values to match your backend address (the API Base URL you recorded earlier, and the 'b2cScopes' values recorded earlier for the *backend application*).
The *authority* value needs to be in the format:- https://{b2ctenantname}.b2clog
1. Add a new URI for the primary (storage) endpoint (minus the trailing forward slash). > [!NOTE]
- > This configuration will result in a client of the frontend application receiving an access token with appropriate claims from Azure AD B2C.
- > The SPA will be able to add this as a bearer token in the https header in the call to the backend API.
- >
- > API Management will pre-validate the token, rate-limit calls to the endpoint by both the subject of the JWT issued by Azure ID (the user) and by IP address of the caller (depending on the service tier of API Management, see the note above), before passing through the request to the receiving Azure Function API, adding the functions security key.
+ > This configuration will result in a client of the frontend application receiving an access token with appropriate claims from Azure AD B2C.
+    > The SPA will be able to add this as a bearer token in the HTTP Authorization header in the call to the backend API.
+    >
+    > API Management will pre-validate the token and rate-limit calls to the endpoint by both the subject of the JWT issued by Azure AD B2C (the user) and by the IP address of the caller (depending on the service tier of API Management, see the note above), before passing the request through to the receiving Azure Function API, adding the function's security key.
> The SPA will render the response in the browser. > > *Congratulations, you've configured Azure AD B2C, Azure API Management, Azure Functions, and Azure App Service Authorization to work in perfect harmony!*
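To spot-check this chain outside the SPA, here is a minimal sketch using curl. The token and base URL values are hypothetical placeholders you supply yourself (for example, copy the access token from the browser's network trace after signing in); they aren't produced by any step above.

```bash
# Hypothetical placeholders: substitute your own values.
ACCESS_TOKEN='<access token issued by Azure AD B2C for the backend API scope>'
API_BASE_URL='<your API Management base URL>'

# Expect 401 without a token, 200 with a valid token,
# and 429 once the configured rate limit is exceeded.
curl -i "$API_BASE_URL/hello" -H "Authorization: Bearer $ACCESS_TOKEN"
```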
api-management Publish Event Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-event-policy.md
The `publish-event` policy publishes an event to one or more subscriptions speci
<http-response> [...] <publish-event>
- <targets>
- <graphql-subscription id="subscription field" />
- </targets>
- </publish-event>
+ <targets>
+ <graphql-subscription id="subscription field" />
+ </targets>
+ </publish-event>
</http-response> </http-data-source> ```
The `publish-event` policy publishes an event to one or more subscriptions speci
### Usage notes
-* This policy is invoked only when a related GraphQL query or mutation is executed.
+* This policy is invoked only when a related GraphQL query or mutation is executed.
## Example
type Subscription {
```xml <http-data-source>
- <http-request>
- <set-method>POST</set-method>
- <set-url>https://contoso.com/api/user</set-url>
- <set-body template="liquid">{ "id" : {{body.arguments.id}}, "name" : "{{body.arguments.name}}"}</set-body>
- </http-request>
- <http-response>
- <publish-event>
- <targets>
- <graphql-subscription id="onUserCreated" />
- </targets>
- </publish-event>
- </http-response>
+ <http-request>
+ <set-method>POST</set-method>
+ <set-url>https://contoso.com/api/user</set-url>
+ <set-body template="liquid">{ "id" : {{body.arguments.id}}, "name" : "{{body.arguments.name}}"}</set-body>
+ </http-request>
+ <http-response>
+ <publish-event>
+ <targets>
+ <graphql-subscription id="onUserCreated" />
+ </targets>
+ </publish-event>
+ </http-response>
</http-data-source> ```
application-gateway How To Path Header Query String Routing Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-path-header-query-string-routing-gateway-api.md
+
+ Title: Path, header, and query string routing with Application Gateway for Containers - Gateway API (preview)
+description: Learn how to configure Application Gateway for Containers with support for path, header, and query string routing.
+++++ Last updated : 07/30/2023+++
+# Path, header, and query string routing with Application Gateway for Containers - Gateway API (preview)
+
+This document helps you set up an example application that uses the resources from Gateway API to demonstrate traffic routing based on URL path, query string, and header. Review the following Gateway API resources for more information:
+- [Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) - create a gateway with one HTTPS listener.
+- [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha2/api-types/httproute/) - create an HTTP route that references a backend service.
+- [HTTPRouteMatch](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.HTTPRouteMatch) - Use `matches` to route based on path, header, and query string.
+
+## Prerequisites
+
+> [!IMPORTANT]
+> Application Gateway for Containers is currently in PREVIEW.<br>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+ - a namespace called `test-infra`
+ - 2 services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - 2 deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+
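+ As a quick sanity check (a sketch, assuming kubectl is pointed at the same cluster), you can confirm the sample resources exist before moving on:
+ ```bash
+ kubectl get deployments,services -n test-infra
+ ```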
+## Deploy the required Gateway API resources
+
+# [ALB managed deployment](#tab/alb-managed)
+
+Create a gateway:
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-namespace: alb-test-infra
+ alb.networking.azure.io/alb-name: alb-test
+spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: http-listener
+ port: 80
+ protocol: HTTP
+ allowedRoutes:
+ namespaces:
+ from: Same
+EOF
+```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+1. Set the following environment variables
+
+```bash
+RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+RESOURCE_NAME='alb-test'
+
+RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+FRONTEND_NAME='frontend'
+```
+
+2. Create a Gateway
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: Gateway
+metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: http-listener
+ port: 80
+ protocol: HTTP
+ allowedRoutes:
+ namespaces:
+ from: Same
+ addresses:
+ - type: alb.networking.azure.io/alb-frontend
+ value: $FRONTEND_NAME
+EOF
+```
+++
+Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+```bash
+kubectl get gateway gateway-01 -n test-infra -o yaml
+```
+
+Example output of successful gateway creation.
+```yaml
+status:
+ addresses:
+ - type: IPAddress
+ value: xxxx.yyyy.alb.azure.com
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Valid Gateway
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ listeners:
+ - attachedRoutes: 0
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Listener is accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+    name: http-listener
+ supportedKinds:
+ - group: gateway.networking.k8s.io
+ kind: HTTPRoute
+```
+
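+If you prefer a scripted check over reading the full YAML, a minimal sketch using a JSONPath query is shown below; it prints `True` once the gateway's Programmed condition is met.
+```bash
+kubectl get gateway gateway-01 -n test-infra \
+  -o jsonpath='{.status.conditions[?(@.type=="Programmed")].status}'
+```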
+Once the gateway has been created, create an HTTPRoute to define two different matches and a default service to route traffic to.
+
+The rules read as follows:
+1) If the path is **/bar**, traffic is routed to the backend-v2 service on port 8080, OR
+2) If the request contains an HTTP header named **magic** with the value **foo**, the query string contains a parameter named **great** with the value **example**, AND the path is **/some/thing**, the request is sent to backend-v2 on port 8080.
+3) Otherwise, all other requests are routed to the backend-v1 service on port 8080.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+ name: http-route
+ namespace: test-infra
+spec:
+ parentRefs:
+ - name: gateway-01
+ namespace: test-infra
+ rules:
+ - matches:
+ - path:
+ type: PathPrefix
+ value: /bar
+ backendRefs:
+ - name: backend-v2
+ port: 8080
+ - matches:
+ - headers:
+ - type: Exact
+ name: magic
+ value: foo
+ queryParams:
+ - type: Exact
+ name: great
+ value: example
+ path:
+ type: PathPrefix
+ value: /some/thing
+ method: GET
+ backendRefs:
+ - name: backend-v2
+ port: 8080
+ - backendRefs:
+ - name: backend-v1
+ port: 8080
+EOF
+```
+
+Once the HTTPRoute resource has been created, ensure the route has been _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+```bash
+kubectl get httproute http-route -n test-infra -o yaml
+```
+
+Verify the status of the Application Gateway for Containers resource has been successfully updated.
+
+```yaml
+status:
+ parents:
+ - conditions:
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: Route is Accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ controllerName: alb.networking.azure.io/alb-controller
+ parentRef:
+ group: gateway.networking.k8s.io
+ kind: Gateway
+ name: gateway-01
+ namespace: test-infra
+ ```
+
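+Similarly, here is a minimal scripted check for the route (a sketch; an HTTPRoute reports its conditions under each parent gateway in its status):
+```bash
+kubectl get httproute http-route -n test-infra \
+  -o jsonpath='{.status.parents[0].conditions[?(@.type=="Accepted")].status}'
+```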
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN.
+
+```bash
+fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}')
+```
+
+By using the curl command, we can validate three different scenarios:
+
+### Path based routing
+In this scenario, the client request sent to http://frontend-fqdn/bar is routed to backend-v2 service.
+
+Run the following command:
+```bash
+curl http://$fqdn/bar
+```
+
+Notice the container serving the request is backend-v2.
+
+### Query string + header + path routing
+In this scenario, the client request sent to http://frontend-fqdn/some/thing?great=example with a header key/value pair of "magic: foo" is routed to the backend-v2 service.
+
+Run the following command:
+```bash
+curl "http://$fqdn/some/thing?great=example" -H "magic: foo"
+```
+
+Notice the container serving the request is backend-v2.
+
+### Default route
+If neither of the first two scenarios is satisfied, Application Gateway for Containers routes all other requests to the backend-v1 service.
+
+Run the following command:
+```bash
+curl http://$fqdn/
+```
+
+Notice the container serving the request is backend-v1.
+
+Congratulations, you have installed ALB Controller, deployed a backend application and routed traffic to the application via Gateway API on Application Gateway for Containers.
application-gateway How To Traffic Splitting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-traffic-splitting-gateway-api.md
Previously updated : 07/24/2023 Last updated : 07/31/2023
EOF
Once the HTTPRoute resource has been created, ensure the route has been _Accepted_ and the Application Gateway for Containers resource has been _Programmed_. ```bash
-kubectl get httproute https-route -n test-infra -o yaml
+kubectl get httproute traffic-split-route -n test-infra -o yaml
``` Verify the status of the Application Gateway for Containers resource has been successfully updated.
application-gateway Quickstart Deploy Application Gateway For Containers Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md
You need to complete the following tasks prior to deploying Application Gateway
1. Prepare your Azure subscription and your `az-cli` client.
- ```azurecli-interactive
- # Sign in to your Azure subscription.
- SUBSCRIPTION_ID='<your subscription id>'
- az login
- az account set --subscription $SUBSCRIPTION_ID
-
- # Register required resource providers on Azure.
- az provider register --namespace Microsoft.ContainerService
- az provider register --namespace Microsoft.Network
- az provider register --namespace Microsoft.NetworkFunction
- az provider register --namespace Microsoft.ServiceNetworking
-
- # Install Azure CLI extensions.
- az extension add --name alb
- ```
+ ```azurecli-interactive
+ # Sign in to your Azure subscription.
+ SUBSCRIPTION_ID='<your subscription id>'
+ az login
+ az account set --subscription $SUBSCRIPTION_ID
+
+ # Register required resource providers on Azure.
+ az provider register --namespace Microsoft.ContainerService
+ az provider register --namespace Microsoft.Network
+ az provider register --namespace Microsoft.NetworkFunction
+ az provider register --namespace Microsoft.ServiceNetworking
+
+ # Install Azure CLI extensions.
+ az extension add --name alb
+ ```
2. Set an AKS cluster for your workload.
- > [!NOTE]
- > The AKS cluster needs to be in a [region where Application Gateway for Containers is available](overview.md#supported-regions)
- > AKS cluster should use [Azure CNI](../../aks/configure-azure-cni.md).
+ > [!NOTE]
+    > The AKS cluster needs to be in a [region where Application Gateway for Containers is available](overview.md#supported-regions).
+    > The AKS cluster should use [Azure CNI](../../aks/configure-azure-cni.md).
> AKS cluster should have the workload identity feature enabled. [Learn how](../../aks/workload-identity-deploy-cluster.md#update-an-existing-aks-cluster) to enable and use an existing AKS cluster section.
- If using an existing cluster, ensure you enable Workload Identity support on your AKS cluster. Workload identities can be enabled via the following:
-
- ```azurecli-interactive
- AKS_NAME='<your cluster name>'
- RESOURCE_GROUP='<your resource group name>'
- az aks update -g $RESOURCE_GROUP -n $AKS_NAME --enable-oidc-issuer --enable-workload-identity --no-wait
- ```
+ If using an existing cluster, ensure you enable Workload Identity support on your AKS cluster. Workload identities can be enabled via the following:
+
+ ```azurecli-interactive
+ AKS_NAME='<your cluster name>'
+ RESOURCE_GROUP='<your resource group name>'
+ az aks update -g $RESOURCE_GROUP -n $AKS_NAME --enable-oidc-issuer --enable-workload-identity --no-wait
+ ```
- If you don't have an existing cluster, use the following commands to create a new AKS cluster with Azure CNI and workload identity enabled.
+ If you don't have an existing cluster, use the following commands to create a new AKS cluster with Azure CNI and workload identity enabled.
- ```azurecli-interactive
- AKS_NAME='<your cluster name>'
- RESOURCE_GROUP='<your resource group name>'
- LOCATION='northeurope' # The list of available regions may grow as we roll out to more preview regions
- VM_SIZE='<the size of the vm in AKS>' # The size needs to be available in your location
-
- az group create --name $RESOURCE_GROUP --location $LOCATION
- az aks create \
- --resource-group $RESOURCE_GROUP \
- --name $AKS_NAME \
- --location $LOCATION \
- --node-vm-size $VM_SIZE \
- --network-plugin azure \
- --enable-oidc-issuer \
- --enable-workload-identity \
- --generate-ssh-key
- ```
+ ```azurecli-interactive
+ AKS_NAME='<your cluster name>'
+ RESOURCE_GROUP='<your resource group name>'
+ LOCATION='northeurope' # The list of available regions may grow as we roll out to more preview regions
+ VM_SIZE='<the size of the vm in AKS>' # The size needs to be available in your location
+
+ az group create --name $RESOURCE_GROUP --location $LOCATION
+ az aks create \
+ --resource-group $RESOURCE_GROUP \
+ --name $AKS_NAME \
+ --location $LOCATION \
+ --node-vm-size $VM_SIZE \
+ --network-plugin azure \
+ --enable-oidc-issuer \
+ --enable-workload-identity \
+ --generate-ssh-key
+ ```
3. Install Helm
- [Helm](https://github.com/helm/helm) is an open-source packaging tool that is used to install ALB controller.
+ [Helm](https://github.com/helm/helm) is an open-source packaging tool that is used to install ALB controller.
- > [!NOTE]
- > Helm is already available in Azure Cloud Shell. If you are using Azure Cloud Shell, no additional Helm installation is necessary.
+ > [!NOTE]
+ > Helm is already available in Azure Cloud Shell. If you are using Azure Cloud Shell, no additional Helm installation is necessary.
- You can also use the following steps to install Helm on a local device running Windows or Linux. Ensure that you have the latest version of helm installed.
+ You can also use the following steps to install Helm on a local device running Windows or Linux. Ensure that you have the latest version of helm installed.
- # [Windows](#tab/install-helm-windows)
- See the [instructions for installation](https://github.com/helm/helm#install) for various options of installation. Similarly, if your version of Windows has [Windows Package Manager winget](/windows/package-manager/winget/) installed, you may execute the following command:
- ```powershell
- winget install helm.helm
- ```
+ # [Windows](#tab/install-helm-windows)
+    See the [installation instructions](https://github.com/helm/helm#install) for the various installation options. Alternatively, if your version of Windows has [Windows Package Manager winget](/windows/package-manager/winget/) installed, you may execute the following command:
- # [Linux](#tab/install-helm-linux)
- The following command can be used to install Helm. Commands that use Helm with Azure CLI in this article can also be run using Bash.
- ```bash
- curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
- ```
+ ```powershell
+ winget install helm.helm
+ ```
+
+ # [Linux](#tab/install-helm-linux)
+ The following command can be used to install Helm. Commands that use Helm with Azure CLI in this article can also be run using Bash.
+ ```bash
+ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
+ ```
## Install the ALB Controller
You need to complete the following tasks prior to deploying Application Gateway
1. Create a user managed identity for ALB controller and federate the identity as Pod Identity to use in the AKS cluster. ```azurecli-interactive
- RESOURCE_GROUP='<your resource group name>'
- AKS_NAME='<your aks cluster name>'
- IDENTITY_RESOURCE_NAME='azure-alb-identity'
-
- mcResourceGroup=$(az aks show --resource-group $RESOURCE_GROUP --name $AKS_NAME --query "nodeResourceGroup" -o tsv)
- mcResourceGroupId=$(az group show --name $mcResourceGroup --query id -otsv)
-
- echo "Creating identity $IDENTITY_RESOURCE_NAME in resource group $RESOURCE_GROUP"
- az identity create --resource-group $RESOURCE_GROUP --name $IDENTITY_RESOURCE_NAME
- principalId="$(az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query principalId -otsv)"
-
- echo "Waiting 60 seconds to allow for replication of the identity..."
- sleep 60
+ RESOURCE_GROUP='<your resource group name>'
+ AKS_NAME='<your aks cluster name>'
+ IDENTITY_RESOURCE_NAME='azure-alb-identity'
+
+ mcResourceGroup=$(az aks show --resource-group $RESOURCE_GROUP --name $AKS_NAME --query "nodeResourceGroup" -o tsv)
+ mcResourceGroupId=$(az group show --name $mcResourceGroup --query id -otsv)
+
+ echo "Creating identity $IDENTITY_RESOURCE_NAME in resource group $RESOURCE_GROUP"
+ az identity create --resource-group $RESOURCE_GROUP --name $IDENTITY_RESOURCE_NAME
+ principalId="$(az identity show -g $RESOURCE_GROUP -n $IDENTITY_RESOURCE_NAME --query principalId -otsv)"
+
+ echo "Waiting 60 seconds to allow for replication of the identity..."
+ sleep 60
- echo "Apply Reader role to the AKS managed cluster resource group for the newly provisioned identity"
- az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $mcResourceGroupId --role "acdd72a7-3385-48ef-bd42-f606fba81ae7" # Reader role
-
- echo "Set up federation with AKS OIDC issuer"
- AKS_OIDC_ISSUER="$(az aks show -n "$AKS_NAME" -g "$RESOURCE_GROUP" --query "oidcIssuerProfile.issuerUrl" -o tsv)"
- az identity federated-credential create --name "azure-alb-identity" \
- --identity-name "$IDENTITY_RESOURCE_NAME" \
- --resource-group $RESOURCE_GROUP \
- --issuer "$AKS_OIDC_ISSUER" \
- --subject "system:serviceaccount:azure-alb-system:alb-controller-sa"
+ echo "Apply Reader role to the AKS managed cluster resource group for the newly provisioned identity"
+ az role assignment create --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $mcResourceGroupId --role "acdd72a7-3385-48ef-bd42-f606fba81ae7" # Reader role
+
+ echo "Set up federation with AKS OIDC issuer"
+ AKS_OIDC_ISSUER="$(az aks show -n "$AKS_NAME" -g "$RESOURCE_GROUP" --query "oidcIssuerProfile.issuerUrl" -o tsv)"
+ az identity federated-credential create --name "azure-alb-identity" \
+ --identity-name "$IDENTITY_RESOURCE_NAME" \
+ --resource-group $RESOURCE_GROUP \
+ --issuer "$AKS_OIDC_ISSUER" \
+ --subject "system:serviceaccount:azure-alb-system:alb-controller-sa"
``` ALB Controller requires a federated credential with the name of _azure-alb-identity_. Any other federated credential name is unsupported.
You need to complete the following tasks prior to deploying Application Gateway
2. Install ALB Controller using Helm
- ### For new deployments
- ALB Controller can be installed by running the following commands:
-
- ```azurecli-interactive
- az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
- helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
- --version 0.4.023971 \
- --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
- ```
-
- > [!Note]
- > ALB Controller will automatically be provisioned into a namespace called azure-alb-system. The namespace name may be changed by defining the _--namespace <namespace_name>_ parameter when executing the helm command. During upgrade, please ensure you specify the --namespace parameter.
-
- ### For existing deployments
- ALB can be upgraded by running the following commands (ensure you add the `--namespace namespace_name` parameter to define the namespace if the previous installation did not use the namespace _azure-alb-system_):
- ```azurecli-interactive
- az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
- helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
- --version 0.4.023971 \
- --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
- ```
+ ### For new deployments
+ ALB Controller can be installed by running the following commands:
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
+ helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
+ --version 0.4.023971 \
+ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
+ ```
+
+ > [!Note]
+ > ALB Controller will automatically be provisioned into a namespace called azure-alb-system. The namespace name may be changed by defining the _--namespace <namespace_name>_ parameter when executing the helm command. During upgrade, please ensure you specify the --namespace parameter.
+
+ ### For existing deployments
+    ALB Controller can be upgraded by running the following commands (ensure you add the `--namespace namespace_name` parameter to define the namespace if the previous installation did not use the namespace _azure-alb-system_):
+ ```azurecli-interactive
+ az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME
+ helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \
+ --version 0.4.023971 \
+ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv)
+ ```
### Verify the ALB Controller installation
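As a minimal sketch of this verification step (assuming the default _azure-alb-system_ namespace), you can check that the controller pods are running:

```bash
# Both alb-controller pods should report Running status.
kubectl get pods -n azure-alb-system
```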
application-gateway Mutual Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-overview.md
description: This article is an overview of mutual authentication on Application
Previously updated : 12/21/2022 Last updated : 07/29/2023
If you're uploading a certificate chain with root CA and intermediate CA certifi
> [!IMPORTANT] > Make sure you upload the entire trusted client CA certificate chain to the Application Gateway when using mutual authentication.
-Each SSL profile can support up to five trusted client CA certificate chains.
+Each SSL profile can support up to 100 trusted client CA certificate chains. A single Application Gateway can support a total of 200 trusted client CA certificate chains.
> [!NOTE] > Mutual authentication is only available on Standard_v2 and WAF_v2 SKUs.
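If your intermediate and root CA certificates live in separate PEM files, here is a minimal sketch for assembling the full chain before upload; the file names are hypothetical placeholders.

```bash
# Hypothetical file names; place the intermediate CA(s) before the root CA.
cat intermediate-ca.pem root-ca.pem > trusted-client-ca-chain.pem
```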
application-gateway Retirement Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/retirement-faq.md
Once the deadline arrives V1 gateways aren't supported. Any V1 SKU resources tha
### What is the definition of a new customer on Application Gateway V1 SKU?
-Customers who didn't have Application Gateway V1 SKU in their subscriptions as of 4 July 2023 are considered as new customers. These customers wonΓÇÖt be able to create new V1 gateways going forward.
+Customers who didn't have the Application Gateway V1 SKU in their subscriptions as of 4 July 2023 are considered new customers. Going forward, these customers won't be able to create new V1 gateways in subscriptions that didn't have an existing V1 gateway as of 4 July 2023.
### What is the definition of an existing customer on Application Gateway V1 SKU?
application-gateway V1 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/v1-retirement.md
We announced the deprecation of Application Gateway V1 on **April 28 ,2023**. St
- Deprecation announcement: April 28 ,2023 -- No new subscriptions for V1 deployments: July 1,2023 onwards - Application Gateway V1 is no longer available for deployment on [new subscriptions](./retirement-faq.md#what-is-the-definition-of-a-new-customer-on-application-gateway-v1-sku) from July 1 2023 onwards.
+- No new subscriptions for V1 deployments: July 1, 2023 onwards - Application Gateway V1 is no longer available for deployment on subscriptions without existing V1 gateways (refer to the [FAQ](./retirement-faq.md#what-is-the-definition-of-a-new-customer-on-application-gateway-v1-sku) for details).
- No new V1 deployments: August 28, 2024 - V1 creation is stopped completely for all customers 28 August 2024 onwards. -- SKU retirement: April 28, 2026 - Any Application Gateway V1 that are in Running status will be stopped. Application Gateway V1s that is not migrated to Application Gateway V2 are informed regarding timelines for deleting them and subsequently force deleted.
+- SKU retirement: April 28, 2026 - Any Application Gateway V1 resources that are in Running status will be stopped. Customers with Application Gateway V1 resources that aren't migrated to Application Gateway V2 will be informed of the timelines for deleting them, and the resources will then be force deleted.
## Resources available for migration -- Follow the steps outlined in the [migration script](./migrate-v1-v2.md) to migrate from Application Gateway v1 to v2. Please review [pricing](./understanding-pricing.md) before making the transition.
+- Follow the steps outlined in the [migration script](./migrate-v1-v2.md) to migrate from Application Gateway v1 to v2. Review [pricing](./understanding-pricing.md) before making the transition.
- If your company/organization has partnered with Microsoft or works with Microsoft representatives (like cloud solution architects (CSAs) or customer success account managers (CSAMs)), please work with them for migration.
automation Automation Create Alert Triggered Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-create-alert-triggered-runbook.md
description: This article tells how to trigger a runbook to run when an Azure al
Last updated 12/15/2022-+ #Customer intent: As a developer, I want to trigger a runbook so that VMs can be stopped under certain conditions.
Assign permissions to the appropriate [managed identity](./automation-security-o
{ Connect-AzAccount }
-
+ # If you have multiple subscriptions, set the one to use # Select-AzSubscription -SubscriptionId <SUBSCRIPTIONID> ```
Use this example to create a runbook called **Stop-AzureVmInResponsetoVMAlert**.
[object] $WebhookData ) $ErrorActionPreference = "stop"
-
+ if ($WebhookData) { # Get the data object from WebhookData $WebhookBody = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)
-
+ # Get the info needed to identify the VM (depends on the payload schema) $schemaId = $WebhookBody.schemaId Write-Verbose "schemaId: $schemaId" -Verbose
Use this example to create a runbook called **Stop-AzureVmInResponsetoVMAlert**.
# Schema not supported Write-Error "The alert data schema - $schemaId - is not supported." }
-
+ Write-Verbose "status: $status" -Verbose if (($status -eq "Activated") -or ($status -eq "Fired")) {
Use this example to create a runbook called **Stop-AzureVmInResponsetoVMAlert**.
Write-Verbose "resourceName: $ResourceName" -Verbose Write-Verbose "resourceGroupName: $ResourceGroupName" -Verbose Write-Verbose "subscriptionId: $SubId" -Verbose
-
+    # Determine code path depending on the resourceType if ($ResourceType -eq "Microsoft.Compute/virtualMachines") { # This is a Resource Manager VM Write-Verbose "This is a Resource Manager VM." -Verbose
-
- # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave -Scope Process
-
- # Connect to Azure with system-assigned managed identity
- $AzureContext = (Connect-AzAccount -Identity).context
-
- # set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
-
+
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ $AzureContext = (Connect-AzAccount -Identity).context
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile $AzureContext
+ # Stop the Resource Manager VM Write-Verbose "Stopping the VM - $ResourceName - in resource group - $ResourceGroupName -" -Verbose Stop-AzVM -Name $ResourceName -ResourceGroupName $ResourceGroupName -DefaultProfile $AzureContext -Force
Alerts use action groups, which are collections of actions that are triggered by
:::image type="content" source="./media/automation-create-alert-triggered-runbook/create-alert-rule-portal.png" alt-text="The create alert rule page and subsections.":::
-1. Under **Scope**, select **Edit resource**.
+1. Under **Scope**, select **Edit resource**.
1. On the **Select a resource** page, from the **Filter by resource type** drop-down list, select **Virtual machines**.
Alerts use action groups, which are collections of actions that are triggered by
1. On the **Configure signal logic** page, under **Threshold value** enter an initial low value for testing purposes, such as `5`. You can go back and update this value once you've confirmed the alert works as expected. Then select **Done** to return to the **Create alert rule** page. :::image type="content" source="./media/automation-create-alert-triggered-runbook/configure-signal-logic-portal.png" alt-text="Entering CPU percentage threshold value.":::
-
+ 1. Under **Actions**, select **Add action groups**, and then **+Create action group**. :::image type="content" source="./media/automation-create-alert-triggered-runbook/create-action-group-portal.png" alt-text="The create action group page with Basics tab open.":::
Alerts use action groups, which are collections of actions that are triggered by
1. On the **Create action group** page: 1. On the **Basics** tab, enter an **Action group name** and **Display name**. 1. On the **Actions** tab, in the **Name** text box, enter a name. Then from the **Action type** drop-down list, select **Automation Runbook** to open the **Configure Runbook** page.
- 1. For the **Runbook source** item, select **User**.
+ 1. For the **Runbook source** item, select **User**.
1. From the **Subscription** drop-down list, select your subscription. 1. From the **Automation account** drop-down list, select your Automation account. 1. From the **Runbook** drop-down list, select **Stop-AzureVmInResponsetoVMAlert**. 1. For the **Enable the common alert schema** item, select **Yes**. 1. Select **OK** to return to the **Create action group** page.
-
+ :::image type="content" source="./media/automation-create-alert-triggered-runbook/configure-runbook-portal.png" alt-text="Configure runbook page with values."::: 1. Select **Review + create** and then **Create** to return to the **Create alert rule** page.
Ensure your VM is running. Navigate to the runbook **Stop-AzureVmInResponsetoVMA
## Common Azure VM management operations
-Azure Automation provides scripts for common Azure VM management operations like restart VM, stop VM, delete VM, scale up and down scenarios in Runbook gallery. The scripts can also be found in the Azure Automation [GitHub repository](https://github.com/azureautomation) You can also use these scripts as mentioned in the above steps.
+Azure Automation provides scripts for common Azure VM management operations, such as restart VM, stop VM, delete VM, and scale up and down scenarios, in the Runbook gallery. The scripts can also be found in the Azure Automation [GitHub repository](https://github.com/azureautomation). You can also use these scripts as mentioned in the above steps.
|**Azure VM management operations** | **Details**| | | |
automation Enforce Job Execution Hybrid Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/enforce-job-execution-hybrid-worker.md
> [!IMPORTANT] > Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 October 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
-Starting a runbook on a Hybrid Runbook Worker uses a **Run on** option that allows you to specify the name of a Hybrid Runbook Worker group when initiating from the Azure portal, with the Azure PowerShell, or REST API. When a group is specified, one of the workers in that group retrieves and runs the runbook. If your runbook does not specify this option, Azure Automation runs the runbook in the Azure sandbox.
+Starting a runbook on a Hybrid Runbook Worker uses a **Run on** option that allows you to specify the name of a Hybrid Runbook Worker group when initiating the runbook from the Azure portal, with Azure PowerShell, or through the REST API. When a group is specified, one of the workers in that group retrieves and runs the runbook. If your runbook does not specify this option, Azure Automation runs the runbook in the Azure sandbox.
-Anyone in your organization who is a member of the [Automation Job Operator](automation-role-based-access-control.md#automation-job-operator) or higher can create runbook jobs. To manage runbook execution targeting a Hybrid Runbook Worker group in your Automation account, you can use [Azure Policy](../governance/policy/overview.md). This helps to enforce organizational standards and ensure your automation jobs are controlled and managed by those designated, and anyone cannot execute a runbook on an Azure sandbox, only on Hybrid Runbook workers.
+Anyone in your organization who is a member of the [Automation Job Operator](automation-role-based-access-control.md#automation-job-operator) role or higher can create runbook jobs. To manage runbook execution targeting a Hybrid Runbook Worker group in your Automation account, you can use [Azure Policy](../governance/policy/overview.md). This helps enforce organizational standards and ensures that automation jobs are controlled and managed by those designated, and that runbooks can only be executed on Hybrid Runbook Workers, not in an Azure sandbox.
A custom Azure Policy definition is included in this article to help you control these activities using the following Automation REST API operations. Specifically:
Here we compose the policy rule and then assign it to either a management group
1. Use the following JSON snippet to create a JSON file with the name AuditAutomationHRWJobExecution.json.
- ```json
+ ```json
{
- "properties": {
- "displayName": "Enforce job execution on Automation Hybrid Runbook Worker",
- "description": "Enforce job execution on Hybrid Runbook Workers in your Automation account.",
- "mode": "all",
- "parameters": {
- "effectType": {
- "type": "string",
- "defaultValue": "Deny",
- "allowedValues": [
- "Deny",
- "Disabled"
- ],
- "metadata": {
- "displayName": "Effect",
- "description": "Enable or disable execution of the policy"
- }
- }
- },
- "policyRule": {
+ "properties": {
+ "displayName": "Enforce job execution on Automation Hybrid Runbook Worker",
+ "description": "Enforce job execution on Hybrid Runbook Workers in your Automation account.",
+ "mode": "all",
+ "parameters": {
+ "effectType": {
+ "type": "string",
+ "defaultValue": "Deny",
+ "allowedValues": [
+ "Deny",
+ "Disabled"
+ ],
+ "metadata": {
+ "displayName": "Effect",
+ "description": "Enable or disable execution of the policy"
+ }
+ }
+ },
+ "policyRule": {
"if": { "anyOf": [ {
Here we compose the policy rule and then assign it to either a management group
} } }
- ```
+ ```
2. Run the following Azure PowerShell or Azure CLI command to create a policy definition using the AuditAutomationHRWJobExecution.json file.
- # [Azure CLI](#tab/azure-cli)
+ # [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli
az policy definition create --name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' --display-name 'Audit Enforce Jobs on Automation Hybrid Runbook Workers' --description 'This policy enforces job execution on Automation account user Hybrid Runbook Workers.' --rules 'AuditAutomationHRWJobExecution.json' --mode All
- ```
+ ```
- The command creates a policy definition named **Audit Enforce Jobs on Automation Hybrid Runbook Workers**. For more information about other parameters that you can use, see [az policy definition create](/cli/azure/policy/definition#az-policy-definition-create).
+ The command creates a policy definition named **Audit Enforce Jobs on Automation Hybrid Runbook Workers**. For more information about other parameters that you can use, see [az policy definition create](/cli/azure/policy/definition#az-policy-definition-create).
- When called without location parameters, `az policy definition create` defaults to saving the policy definition in the selected subscription of the sessions context. To save the definition to a different location, use the following parameters:
+    When called without location parameters, `az policy definition create` defaults to saving the policy definition in the selected subscription of the session's context. To save the definition to a different location, use the following parameters:
- * **subscription** - Save to a different subscription. Requires a *GUID* value for the subscription ID or a *string* value for the subscription name.
- * **management-group** - Save to a management group. Requires a *string* value.
+ * **subscription** - Save to a different subscription. Requires a *GUID* value for the subscription ID or a *string* value for the subscription name.
+ * **management-group** - Save to a management group. Requires a *string* value.
- # [PowerShell](#tab/azure-powershell)
+ # [PowerShell](#tab/azure-powershell)
- ```azurepowershell
- New-AzPolicyDefinition -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' -DisplayName 'Audit Enforce Jobs on Automation Hybrid Runbook Workers' -Policy 'AuditAutomationHRWJobExecution.json'
- ```
+ ```azurepowershell
+ New-AzPolicyDefinition -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' -DisplayName 'Audit Enforce Jobs on Automation Hybrid Runbook Workers' -Policy 'AuditAutomationHRWJobExecution.json'
+ ```
- The command creates a policy definition named **Audit Enforce Jobs on Automation Hybrid Runbook Workers**. For more information about other parameters that you can use, see [New-AzPolicyDefinition](/powershell/module/az.resources/new-azpolicydefinition).
+ The command creates a policy definition named **Audit Enforce Jobs on Automation Hybrid Runbook Workers**. For more information about other parameters that you can use, see [New-AzPolicyDefinition](/powershell/module/az.resources/new-azpolicydefinition).
- When called without location parameters, `New-AzPolicyDefinition` defaults to saving the policy definition in the selected subscription of the sessions context. To save the definition to a different location, use the following parameters:
+    When called without location parameters, `New-AzPolicyDefinition` defaults to saving the policy definition in the selected subscription of the session's context. To save the definition to a different location, use the following parameters:
- * **SubscriptionId** - Save to a different subscription. Requires a *GUID* value.
- * **ManagementGroupName** - Save to a management group. Requires a *string* value.
+ * **SubscriptionId** - Save to a different subscription. Requires a *GUID* value.
+ * **ManagementGroupName** - Save to a management group. Requires a *string* value.
-
+
3. After you create your policy definition, you can create a policy assignment by running the following commands:
- # [Azure CLI](#tab/azure-cli)
-
- ```azurecli
- az policy assignment create --name '<name>' --scope '<scope>' --policy '<policy definition ID>'
- ```
-
- The **scope** parameter on `az policy assignment create` works with management group,
- subscription, resource group, or a single resource. The parameter uses a full resource path. The
- pattern for **scope** for each container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`,
- and `{mgName}` with your resource name, resource group name, subscription ID, and management
- group name, respectively. `{rType}` would be replaced with the **resource type** of the resource,
- such as `Microsoft.Compute/virtualMachines` for a VM.
-
- - Resource - `/subscriptions/{subID}/resourceGroups/{rgName}/providers/{rType}/{rName}`
- - Resource group - `/subscriptions/{subID}/resourceGroups/{rgName}`
- - Subscription - `/subscriptions/{subID}`
- - Management group - `/providers/Microsoft.Management/managementGroups/{mgName}`
-
- You can get the Azure Policy Definition ID by using PowerShell with the following command:
-
- ```azurecli
- az policy definition show --name 'Audit Enforce Jobs on Automation Hybrid Runbook Workers'
- ```
-
- The policy definition ID for the policy definition that you created should resemble the following
- example:
-
- ```output
- "/subscription/<subscriptionId>/providers/Microsoft.Authorization/policyDefinitions/Audit Enforce Jobs on Automation Hybrid Runbook Workers"
- ```
-
- # [PowerShell](#tab/azure-powershell)
-
- ```azurepowershell
- $rgName = Get-AzResourceGroup -Name 'ContosoRG'
- $Policy = Get-AzPolicyDefinition -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers'
- New-AzPolicyAssignment -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' -PolicyDefinition $Policy -Scope $rg.ResourceId
- ```
-
- Replace _ContosoRG_ with the name of your intended resource group.
-
- The **Scope** parameter on `New-AzPolicyAssignment` works with management group, subscription,
- resource group, or a single resource. The parameter uses a full resource path, which the
- **ResourceId** property on `Get-AzResourceGroup` returns. The pattern for **Scope** for each
- container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`, and `{mgName}` with your
- resource name, resource group name, subscription ID, and management group name, respectively.
- `{rType}` would be replaced with the **resource type** of the resource, such as
- `Microsoft.Compute/virtualMachines` for a VM.
-
- - Resource - `/subscriptions/{subID}/resourceGroups/{rgName}/providers/{rType}/{rName}`
- - Resource group - `/subscriptions/{subId}/resourceGroups/{rgName}`
- - Subscription - `/subscriptions/{subId}`
- - Management group - `/providers/Microsoft.Management/managementGroups/{mgName}`
-
-
+ # [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az policy assignment create --name '<name>' --scope '<scope>' --policy '<policy definition ID>'
+ ```
+
+ The **scope** parameter on `az policy assignment create` works with management group, subscription, resource group, or a single resource. The parameter uses a full resource path. The pattern for **scope** for each container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`, and `{mgName}` with your resource name, resource group name, subscription ID, and management group name, respectively. `{rType}` would be replaced with the **resource type** of the resource, such as `Microsoft.Compute/virtualMachines` for a VM.
+
+ - Resource - `/subscriptions/{subID}/resourceGroups/{rgName}/providers/{rType}/{rName}`
+ - Resource group - `/subscriptions/{subID}/resourceGroups/{rgName}`
+ - Subscription - `/subscriptions/{subID}`
+ - Management group - `/providers/Microsoft.Management/managementGroups/{mgName}`
+
+    You can get the Azure Policy definition ID by using the Azure CLI with the following command:
+
+ ```azurecli
+    az policy definition show --name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers'
+ ```
+
+ The policy definition ID for the policy definition that you created should resemble the following example:
+
+ ```output
+ "/subscription/<subscriptionId>/providers/Microsoft.Authorization/policyDefinitions/Audit Enforce Jobs on Automation Hybrid Runbook Workers"
+    "/subscriptions/<subscriptionId>/providers/Microsoft.Authorization/policyDefinitions/audit-enforce-jobs-on-automation-hybrid-runbook-workers"
+
+ # [PowerShell](#tab/azure-powershell)
+
+ ```azurepowershell
+ $rgName = Get-AzResourceGroup -Name 'ContosoRG'
+ $Policy = Get-AzPolicyDefinition -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers'
+    New-AzPolicyAssignment -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' -PolicyDefinition $Policy -Scope $rgName.ResourceId
+ ```
+
+ Replace _ContosoRG_ with the name of your intended resource group.
+
+ The **Scope** parameter on `New-AzPolicyAssignment` works with management group, subscription, resource group, or a single resource. The parameter uses a full resource path, which the **ResourceId** property on `Get-AzResourceGroup` returns. The pattern for **Scope** for each container is as follows. Replace `{rName}`, `{rgName}`, `{subId}`, and `{mgName}` with your resource name, resource group name, subscription ID, and management group name, respectively. `{rType}` would be replaced with the **resource type** of the resource, such as `Microsoft.Compute/virtualMachines` for a VM.
+
+ - Resource - `/subscriptions/{subId}/resourceGroups/{rgName}/providers/{rType}/{rName}`
+ - Resource group - `/subscriptions/{subId}/resourceGroups/{rgName}`
+ - Subscription - `/subscriptions/{subId}`
+ - Management group - `/providers/Microsoft.Management/managementGroups/{mgName}`
+
+
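As an illustrative sketch of the subscription-scope pattern above (the subscription ID is a placeholder):

```azurepowershell
# Assign the same policy definition at subscription scope instead of resource group scope.
$Policy = Get-AzPolicyDefinition -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers'
New-AzPolicyAssignment -Name 'audit-enforce-jobs-on-automation-hybrid-runbook-workers' `
    -PolicyDefinition $Policy `
    -Scope '/subscriptions/<subscriptionId>'
```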
4. Sign in to the [Azure portal](https://portal.azure.com). 5. Launch the Azure Policy service in the Azure portal by selecting **All services**, then searching for and selecting **Policy**.
The attempted operation is also logged in the Automation account's Activity Log,
## Next steps
-To work with runbooks, see [Manage runbooks in Azure Automation](manage-runbooks.md).
+To work with runbooks, see [Manage runbooks in Azure Automation](manage-runbooks.md).
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md
description: This tutorial teaches you to create, test, and publish a PowerShell
Last updated 10/16/2022-+ #Customer intent: As a developer, I want to use workflow runbooks so that I can automate the parallel starting of VMs.
Assign permissions to the appropriate [managed identity](../automation-security-
:::image type="content" source="../media/automation-tutorial-runbook-textual/system-assigned-role-assignments-portal.png" alt-text="Selecting Azure role assignments in portal.":::
-1. Select **+ Add role assignment (Preview)** to open the **Add role assignment (Preview)** page.
+1. Select **+ Add role assignment (Preview)** to open the **Add role assignment (Preview)** page.
:::image type="content" source="../media/automation-tutorial-runbook-textual/system-assigned-add-role-assignment-portal.png" alt-text="Add role assignments in portal.":::
Assign permissions to the appropriate [managed identity](../automation-security-
:::image type="content" source="../media/automation-tutorial-runbook-textual/managed-identity-client-id-portal.png" alt-text="Showing Client ID for managed identity in portal":::
-1. From the left menu, select **Azure role assignments** and then **+ Add role assignment (Preview)** to open the **Add role assignment (Preview)** page.
+1. From the left menu, select **Azure role assignments** and then **+ Add role assignment (Preview)** to open the **Add role assignment (Preview)** page.
:::image type="content" source="../media/automation-tutorial-runbook-textual/user-assigned-add-role-assignment-portal.png" alt-text="Add role assignments in portal for user-assigned identity.":::
Assign permissions to the appropriate [managed identity](../automation-security-
Start by creating a simple [PowerShell Workflow runbook](../automation-runbook-types.md#powershell-workflow-runbooks). One advantage of Windows PowerShell Workflows is the ability to perform a set of commands in parallel instead of sequentially as with a typical script. >[!NOTE]
-> With release runbook creation has a new experience in the Azure portal. When you select **Runbooks** blade > **Create a runbook**, a new page **Create a runbook** opens with applicable options.
+> With this release, runbook creation has a new experience in the Azure portal. When you select the **Runbooks** blade > **Create a runbook**, a new **Create a runbook** page opens with applicable options.
1. From your open Automation account page, under **Process Automation**, select **Runbooks**
Start by creating a simple [PowerShell Workflow runbook](../automation-runbook-t
1. From the **Runtime version** drop-down, select **5.1**. 1. Enter applicable **Description**. 1. Select **Create**.
-
+ :::image type="content" source="../media/automation-tutorial-runbook-textual/create-powershell-workflow-runbook-options.png" alt-text="PowerShell workflow runbook options from portal":::
-
+ ## Add code to the runbook
Workflow MyFirstRunbook-Workflow
Write-Output "Non-Parallel" Get-Date Start-Sleep -s 3
- Get-Date
+ Get-Date
``` 1. Save the runbook by selecting **Save**.
Before you publish the runbook to make it available in production, you should te
:::image type="content" source="../media/automation-tutorial-runbook-textual/workflow-runbook-parallel-output.png" alt-text="PowerShell workflow runbook parallel output":::
- Review the output. Everything in the `Parallel` block, including the `Start-Sleep` command, executed at the same time. The same commands outside the `Parallel` block ran sequentially, as shown by the different date time stamps.
+ Review the output. Everything in the `Parallel` block, including the `Start-Sleep` command, executed at the same time. The same commands outside the `Parallel` block ran sequentially, as shown by the different date time stamps.
1. Close the **Test** page to return to the canvas.
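For reference, the complete pattern tested above looks roughly like the following sketch: the commands inside the `Parallel` block run at the same time, while the trailing commands run one after another.

```powershell
Workflow MyFirstRunbook-Workflow
{
    Parallel
    {
        # These commands run concurrently.
        Write-Output "Parallel"
        Get-Date
        Start-Sleep -Seconds 3
        Get-Date
    }

    # These commands run sequentially.
    Write-Output "Non-Parallel"
    Get-Date
    Start-Sleep -Seconds 3
    Get-Date
}
```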
You've tested and published your runbook, but so far it doesn't do anything usef
workflow MyFirstRunbook-Workflow { $resourceGroup = "resourceGroupName"
-
+ # Ensures you do not inherit an AzContext in your runbook Disable-AzContextAutosave -Scope Process
-
+ # Connect to Azure with system-assigned managed identity Connect-AzAccount -Identity
-
+ # set and store context
- $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
+ $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
} ```
You've tested and published your runbook, but so far it doesn't do anything usef
## Add code to start a virtual machine
-Now that your runbook is authenticating to the Azure subscription, you can manage resources. Add a command to start a virtual machine. You can pick any VM in your Azure subscription, and for now you're hardcoding that name in the runbook.
+Now that your runbook is authenticating to the Azure subscription, you can manage resources. Add a command to start a virtual machine. You can pick any VM in your Azure subscription, and for now you're hardcoding that name in the runbook.
-1. Add the code below as the last line immediately before the closing brace. Replace `VMName` with the actual name of a VM.
+1. Add the code below as the last line immediately before the closing brace. Replace `VMName` with the actual name of a VM.
```powershell Start-AzVM -Name "VMName" -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
You can use the `ForEach -Parallel` construct to process commands for each item
```powershell workflow MyFirstRunbook-Workflow {
- Param(
- [string]$resourceGroup,
- [string[]]$VMs,
- [string]$action
- )
-
- # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave -Scope Process
-
- # Connect to Azure with system-assigned managed identity
- Connect-AzAccount -Identity
-
- # set and store context
- $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
-
- # Start or stop VMs in parallel
- if($action -eq "Start")
- {
- ForEach -Parallel ($vm in $VMs)
- {
- Start-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
- }
- }
- elseif ($action -eq "Stop")
- {
- ForEach -Parallel ($vm in $VMs)
- {
- Stop-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext -Force
- }
- }
- else {
- Write-Output "`r`n Action not allowed. Please enter 'stop' or 'start'."
- }
- }
+ Param(
+ [string]$resourceGroup,
+ [string[]]$VMs,
+ [string]$action
+ )
+
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ Connect-AzAccount -Identity
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
+
+ # Start or stop VMs in parallel
+ if ($action -eq "Start") {
+ ForEach -Parallel ($vm in $VMs)
+ {
+ Start-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
+ }
+ }
+ elseif ($action -eq "Stop") {
+ ForEach -Parallel ($vm in $VMs)
+ {
+ Stop-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext -Force
+ }
+ }
+ else {
+ Write-Output "`r`n Action not allowed. Please enter 'stop' or 'start'."
+ }
+ }
``` 1. If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you prefer to use a user-assigned managed identity, then:+ 1. From line 9, remove `Connect-AzAccount -Identity`, 1. Replace it with `Connect-AzAccount -Identity -AccountId <ClientId>`, and 1. Enter the Client ID you obtained earlier.
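As a rough sketch of that substitution (the client ID and subscription ID placeholders are values you supply):

```powershell
# Connect to Azure with a user-assigned managed identity instead of the system-assigned one.
Connect-AzAccount -Identity -AccountId "<ClientId>"

# Set and store context.
$AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
```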
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md
Title: Use source control integration in Azure Automation
description: This article tells you how to synchronize Azure Automation source control with other repositories. Previously updated : 04/12/2023 Last updated : 07/31/2023
This example uses Azure PowerShell to show how to assign the Contributor role in
```powershell New-AzRoleAssignment `
- -ObjectId <automation-Identity-object-id> `
+ -ObjectId <automation-Identity-Object(Principal)-Id> `
-Scope "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}" ` -RoleDefinitionName "Contributor" ```
automation Enable From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-template.md
Previously updated : 09/18/2020 Last updated : 09/18/2020 # Enable Update Management using Azure Resource Manager template
If you're new to Azure Automation and Azure Monitor, it's important that you und
} } },
- {
- "apiVersion": "2015-11-01-preview",
- "location": "[parameters('location')]",
- "name": "[variables('Updates').name]",
- "type": "Microsoft.OperationsManagement/solutions",
- "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.OperationsManagement/solutions/', variables('Updates').name)]",
- "dependsOn": [
- "[concat('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
- ],
- "properties": {
- "workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
- },
- "plan": {
- "name": "[variables('Updates').name]",
- "publisher": "Microsoft",
- "promotionCode": "",
- "product": "[concat('OMSGallery/', variables('Updates').galleryName)]"
- }
- },
+ {
+ "apiVersion": "2015-11-01-preview",
+ "location": "[parameters('location')]",
+ "name": "[variables('Updates').name]",
+ "type": "Microsoft.OperationsManagement/solutions",
+ "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.OperationsManagement/solutions/', variables('Updates').name)]",
+ "dependsOn": [
+ "[concat('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
+ ],
+ "properties": {
+ "workspaceResourceId": "[resourceId('Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
+ },
+ "plan": {
+ "name": "[variables('Updates').name]",
+ "publisher": "Microsoft",
+ "promotionCode": "",
+ "product": "[concat('OMSGallery/', variables('Updates').galleryName)]"
+ }
+ },
{ "type": "Microsoft.Automation/automationAccounts", "apiVersion": "2020-01-13-preview",
azure-app-configuration Howto Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md
Azure App Configuration supports data import and export operations. Use these operations to work with configuration data in bulk and exchange data between your App Configuration store and code project. For example, you can set up one App Configuration store for testing and another one for production. You can copy application settings between them so that you don't have to enter data twice.
-This article provides a guide for importing and exporting data with App Configuration. If you'd like to set up an ongoing sync with your GitHub repo, take a look at [GitHub Actions](./concept-github-action.md) and [Azure Pipeline tasks](./pull-key-value-devops-pipeline.md).
+This article provides a guide for importing and exporting data with App Configuration. If you'd like to set up an ongoing sync with your GitHub repo, take a look at [GitHub Actions](./concept-github-action.md) and [Azure Pipelines tasks](./pull-key-value-devops-pipeline.md).
You can import or export data using either the [Azure portal](https://portal.azure.com) or the [Azure CLI](./scripts/cli-import.md).
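For example, with the Azure CLI you can round-trip settings through a local JSON file; the store name and file path below are placeholders:

```azurecli
# Export key-values from a store to a local JSON file.
az appconfig kv export --name <your-store-name> --destination file --path ./appsettings.json --format json

# Import key-values from a local JSON file into a store.
az appconfig kv import --name <your-store-name> --source file --path ./appsettings.json --format json
```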
azure-app-configuration Quickstart Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-kubernetes-service.md
Now that you have an application running in AKS, you'll deploy the App Configura
```console helm install azureappconfiguration.kubernetesprovider \ oci://mcr.microsoft.com/azure-app-configuration/helmchart/kubernetes-provider \
- --version 1.0.0-preview \
+ --version 1.0.0-preview3 \
--namespace azappconfig-system \ --create-namespace ```
azure-app-configuration Reference Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/reference-kubernetes-provider.md
The following reference outlines the properties supported by the Azure App Confi
## Properties
-An `AzureAppConfigurationProvider` resource has the following top-level child properties under the `spec`.
+An `AzureAppConfigurationProvider` resource has the following top-level child properties under the `spec`. Either `endpoint` or `connectionStringReference` has to be specified.
|Name|Description|Required|Type| |||||
-|endpoint|The endpoint of Azure App Configuration, which you would like to retrieve the key-values from|true|string|
+|endpoint|The endpoint of Azure App Configuration from which you'd like to retrieve the key-values|alternative|string|
+|connectionStringReference|The name of the Kubernetes Secret that contains the Azure App Configuration connection string|alternative|string|
|target|The destination of the retrieved key-values in Kubernetes|true|object| |auth|The authentication method to access Azure App Configuration|false|object| |keyValues|The settings for querying and processing key-values|false|object|
The `spec.keyValues` has the following child properties. The `spec.keyValues.key
|selectors|The list of selectors for key-value filtering|false|object array| |trimKeyPrefixes|The list of key prefixes to be trimmed|false|string array| |keyVaults|The settings for Key Vault references|conditional|object|
+|refresh|The settings for refreshing the key-values in ConfigMap or Secret|false|object|
If the `spec.keyValues.selectors` property isn't set, all key-values with no label will be downloaded. It contains an array of *selector* objects, which have the following child properties.
If the `spec.keyValues.selectors` property isn't set, all key-values with no lab
|keyFilter|The key filter for querying key-values|true|string| |labelFilter|The label filter for querying key-values|false|string| - The `spec.keyValues.keyVaults` property has the following child properties. |Name|Description|Required|Type|
The authentication method of each *vault* can be specified with the following pr
|managedIdentityClientId|The client ID of a user-assigned managed identity used for authentication with a vault|false|string| |servicePrincipalReference|The name of the Kubernetes Secret that contains the credentials of a service principal used for authentication with a vault|false|string|
+The `spec.keyValues.refresh` property has the following child properties.
+
+|Name|Description|Required|Type|
+|||||
+|monitoring|The key-values monitored by the provider. The provider automatically refreshes the ConfigMap or Secret if the value of any designated key-value changes|true|object|
+|interval|The interval at which the key-values are refreshed. The default value is 30 seconds; it must be greater than 1 second|false|duration string|
+
+The `spec.keyValues.refresh.monitoring.keyValues` property is an array of objects, which have the following child properties.
+
+|Name|Description|Required|Type|
+|||||
+|key|The key of a key-value|true|string|
+|label|The label of a key-value|false|string|
+ ## Examples ### Authentication
The authentication method of each *vault* can be specified with the following pr
servicePrincipalReference: <your-service-principal-secret-name> ```
+#### Use Connection String
+
+1. Create a Kubernetes Secret in the same namespace as the `AzureAppConfigurationProvider` resource and add the Azure App Configuration connection string to the Secret under the key *azure_app_configuration_connection_string*. (A `kubectl` sketch of this step follows the sample below.)
+2. Set the `spec.connectionStringReference` property to the name of the Secret in the following sample `AzureAppConfigurationProvider` resource and deploy it to the Kubernetes cluster.
+
+ ``` yaml
+ apiVersion: azconfig.io/v1beta1
+ kind: AzureAppConfigurationProvider
+ metadata:
+ name: appconfigurationprovider-sample
+ spec:
+ connectionStringReference: <your-connection-string-secret-name>
+ target:
+ configMapName: configmap-created-by-appconfig-provider
+ ```
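For step 1, one way to create such a Secret is with `kubectl`; the Secret name, namespace, and connection string below are placeholders:

```console
kubectl create secret generic <your-connection-string-secret-name> \
    --namespace <your-provider-namespace> \
    --from-literal=azure_app_configuration_connection_string='<your-connection-string>'
```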
+ ### Key-value selection Use the `selectors` property to filter the key-values to be downloaded from Azure App Configuration.
spec:
- uri: <your-key-vault-uri> servicePrincipalReference: <name-of-secret-containing-service-principal-credentials> ```+
+### Dynamically refresh ConfigMap and Secret
+
+Setting the `spec.keyValues.refresh` property enables dynamic configuration data refresh in the ConfigMap and Secret by monitoring designated key-values. The provider periodically polls those key-values; if any of their values change, it refreshes the ConfigMap and Secret with the current data in Azure App Configuration.
+
+The following sample monitors two key-values with a one-minute polling interval.
+
+``` yaml
+apiVersion: azconfig.io/v1beta1
+kind: AzureAppConfigurationProvider
+metadata:
+ name: appconfigurationprovider-sample
+spec:
+ endpoint: <your-app-configuration-store-endpoint>
+ target:
+ configMapName: configmap-created-by-appconfig-provider
+ keyValues:
+ selectors:
+ - keyFilter: app1*
+ labelFilter: common
+ - keyFilter: app1*
+ labelFilter: development
+ refresh:
+ interval: 1m
+ monitoring:
+ keyValues:
+ - key: sentinelKey
+ label: common
+ - key: sentinelKey
+ label: development
+```
azure-arc Monitor Gitops Flux 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/monitor-gitops-flux-2.md
Title: Monitor GitOps (Flux v2) status and activity Previously updated : 07/21/2023 Last updated : 07/28/2023 description: Learn how to monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2.
Follow these steps to import dashboards that let you monitor Flux extension depl
> [!NOTE] > These steps describe the process for importing the dashboard to [Azure Managed Grafana](/azure/managed-grafana/overview). You can also [import this dashboard to any Grafana instance](https://grafana.com/docs/grafana/latest/dashboards/manage-dashboards/#import-a-dashboard). With this option, a service principal must be used; managed identity is not supported for data connection outside of Azure Managed Grafana.
-1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). This connection lets the dashboard access Azure Resource Graph.
-1. [Create the Azure Monitor Data Source connection](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) in your Azure Managed Grafana instance.
-1. Ensure that the user account that will access the dashboard has the **Reader** role on the subscriptions and/or resource groups where the clusters are located.
-
- If you're using a managed identity, follow these steps to enable this access:
+1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least **Reader** level permissions. You can check your access by going to **Access control (IAM)** on the Grafana instance.
+1. If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it a Reader role on the subscription(s) (a CLI sketch of the equivalent role assignment follows this list):
1. In the Azure portal, navigate to the subscription that you want to add. 1. Select **Access control (IAM)**.
Follow these steps to import dashboards that let you monitor Flux extension depl
If you're using a service principal, grant the **Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.)
+1. [Create the Azure Monitor Data Source connection](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) in your Azure Managed Grafana instance. This connection lets the dashboard access Azure Resource Graph data.
1. Download the [GitOps Flux - Application Deployments Dashboard](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/GitOps%20Flux%20-%20Application%20Deployments%20Dashboard.json). 1. Follow the steps to [import the JSON dashboard to Grafana](/azure/managed-grafana/how-to-create-dashboard#import-a-json-dashboard).
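The Reader role assignment described above can also be scripted; the following sketch uses placeholder values for the managed identity's (or service principal's) object ID and the subscription ID:

```azurecli
az role assignment create \
    --assignee "<managed-identity-or-service-principal-object-id>" \
    --role "Reader" \
    --scope "/subscriptions/<subscription-id>"
```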
azure-functions Durable Functions Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-troubleshooting-guide.md
Title: Durable Functions Troubleshooting Guide - Azure Functions description: Guide to troubleshoot common issues with durable functions.-+ Last updated 03/10/2023
azure-functions Functions Bindings Azure Data Explorer Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-input.md
The Azure Data Explorer input binding retrieves data from a database.
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Azure Data Explorer Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-output.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
For information on setup and configuration details, see the [overview](./functio
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
For information on setup and configuration details, see the [overview](./functio
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
The following example shows a function that retrieves a single document. The fun
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDbInputBindingFunction.cs" id="docsnippet_qtrigger_with_cosmosdb_inputbinding" :::
-# [C# Script](#tab/csharp-script)
-
-This section contains the following examples:
-
-* [Queue trigger, look up ID from string](#queue-trigger-look-up-id-from-string-c-script)
-* [Queue trigger, get multiple docs, using SqlQuery](#queue-trigger-get-multiple-docs-using-sqlquery-c-script)
-* [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c-script)
-* [HTTP trigger, look up ID from route data](#http-trigger-look-up-id-from-route-data-c-script)
-* [HTTP trigger, get multiple docs, using SqlQuery](#http-trigger-get-multiple-docs-using-sqlquery-c-script)
-* [HTTP trigger, get multiple docs, using DocumentClient](#http-trigger-get-multiple-docs-using-documentclient-c-script)
-
-The HTTP trigger examples refer to a simple `ToDoItem` type:
-
-```cs
-namespace CosmosDBSamplesV2
-{
- public class ToDoItem
- {
- public string Id { get; set; }
- public string Description { get; set; }
- }
-}
-```
-
-<a id="queue-trigger-look-up-id-from-string-c-script"></a>
-
-### Queue trigger, look up ID from string
-
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads a single document and updates the document's text value.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "inputDocument",
- "type": "cosmosDB",
- "databaseName": "MyDatabase",
- "collectionName": "MyCollection",
- "id" : "{queueTrigger}",
- "partitionKey": "{partition key value}",
- "connectionStringSetting": "MyAccount_COSMOSDB",
- "direction": "in"
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
- using System;
-
- // Change input document contents using Azure Cosmos DB input binding
- public static void Run(string myQueueItem, dynamic inputDocument)
- {
- inputDocument.text = "This has changed.";
- }
-```
-
-<a id="queue-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
-
-### Queue trigger, get multiple docs, using SqlQuery
-
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
-
-The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "documents",
- "type": "cosmosDB",
- "direction": "in",
- "databaseName": "MyDb",
- "collectionName": "MyCollection",
- "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
- "connectionStringSetting": "CosmosDBConnection"
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
- public static void Run(QueuePayload myQueueItem, IEnumerable<dynamic> documents)
- {
- foreach (var doc in documents)
- {
- // operate on each document
- }
- }
-
- public class QueuePayload
- {
- public string departmentId { get; set; }
- }
-```
-
-<a id="http-trigger-look-up-id-from-query-string-c-script"></a>
-
-### HTTP trigger, look up ID from query string
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "cosmosDB",
- "name": "toDoItem",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connectionStringSetting": "CosmosDBConnection",
- "direction": "in",
- "Id": "{Query.id}",
- "PartitionKey" : "{Query.partitionKeyValue}"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System.Net;
-using Microsoft.Extensions.Logging;
-
-public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- if (toDoItem == null)
- {
- log.LogInformation($"ToDo item not found");
- }
- else
- {
- log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
-
-<a id="http-trigger-look-up-id-from-route-data-c-script"></a>
-
-### HTTP trigger, look up ID from route data
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ],
- "route":"todoitems/{partitionKeyValue}/{id}"
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "cosmosDB",
- "name": "toDoItem",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connectionStringSetting": "CosmosDBConnection",
- "direction": "in",
- "id": "{id}",
- "partitionKey": "{partitionKeyValue}"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System.Net;
-using Microsoft.Extensions.Logging;
-
-public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- if (toDoItem == null)
- {
- log.LogInformation($"ToDo item not found");
- }
- else
- {
- log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
-
-<a id="http-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
-
-### HTTP trigger, get multiple docs, using SqlQuery
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The query is specified in the `SqlQuery` attribute property.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "cosmosDB",
- "name": "toDoItems",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connectionStringSetting": "CosmosDBConnection",
- "direction": "in",
- "sqlQuery": "SELECT top 2 * FROM c order by c._ts desc"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System.Net;
-using Microsoft.Extensions.Logging;
-
-public static HttpResponseMessage Run(HttpRequestMessage req, IEnumerable<ToDoItem> toDoItems, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- foreach (ToDoItem toDoItem in toDoItems)
- {
- log.LogInformation(toDoItem.Description);
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
-
-<a id="http-trigger-get-multiple-docs-using-documentclient-c-script"></a>
-
-### HTTP trigger, get multiple docs, using DocumentClient
-
-The following example shows a [C# script function](functions-reference-csharp.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `DocumentClient` instance provided by the Azure Cosmos DB binding to read a list of documents. The `DocumentClient` instance could also be used for write operations.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "authLevel": "anonymous",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- },
- {
- "type": "cosmosDB",
- "name": "client",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connectionStringSetting": "CosmosDBConnection",
- "direction": "inout"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-#r "Microsoft.Azure.Documents.Client"
-
-using System.Net;
-using Microsoft.Azure.Documents.Client;
-using Microsoft.Azure.Documents.Linq;
-using Microsoft.Extensions.Logging;
-
-public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, DocumentClient client, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoItems", "Items");
- string searchterm = req.GetQueryNameValuePairs()
- .FirstOrDefault(q => string.Compare(q.Key, "searchterm", true) == 0)
- .Value;
-
- if (searchterm == null)
- {
- return req.CreateResponse(HttpStatusCode.NotFound);
- }
-
- log.LogInformation($"Searching for word: {searchterm} using Uri: {collectionUri.ToString()}");
- IDocumentQuery<ToDoItem> query = client.CreateDocumentQuery<ToDoItem>(collectionUri)
- .Where(p => p.Description.Contains(searchterm))
- .AsDocumentQuery();
-
- while (query.HasMoreResults)
- {
- foreach (ToDoItem result in await query.ExecuteNextAsync())
- {
- log.LogInformation(result.Description);
- }
- }
- return req.CreateResponse(HttpStatusCode.OK);
-}
-```
- ::: zone-end
Here's the binding data in the *function.json* file:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-input).
# [Extension 4.x+](#tab/extensionv4/in-process)
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
[!INCLUDE [functions-cosmosdb-input-attributes-v3](../../includes/functions-cosmosdb-input-attributes-v3.md)]
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
--
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-- ::: zone-end
See the [Example section](#example) for complete examples.
[!INCLUDE [functions-cosmosdb-usage](../../includes/functions-cosmosdb-usage.md)]
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
--
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-- The parameter type supported by the Cosmos DB input binding depends on the Functions runtime version, the extension package version, and the C# modality used.
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfuncti
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types.
-
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
- ::: zone-end
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
In the following example, the return type is an [`IReadOnlyList<T>`](/dotnet/api
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDBFunction.cs" range="4-35":::
-# [C# Script](#tab/csharp-script)
-
-This section contains the following examples:
-
-* [Queue trigger, write one doc](#queue-trigger-write-one-doc-c-script)
-* [Queue trigger, write docs using IAsyncCollector](#queue-trigger-write-docs-using-iasynccollector-c-script)
--
-<a id="queue-trigger-write-one-doc-c-script"></a>
-
-### Queue trigger, write one doc
-
-The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format:
-
-```json
-{
- "name": "John Henry",
- "employeeId": "123456",
- "address": "A town nearby"
-}
-```
-
-The function creates Azure Cosmos DB documents in the following format for each record:
-
-```json
-{
- "id": "John Henry-123456",
- "name": "John Henry",
- "employeeId": "123456",
- "address": "A town nearby"
-}
-```
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "name": "employeeDocument",
- "type": "cosmosDB",
- "databaseName": "MyDatabase",
- "collectionName": "MyCollection",
- "createIfNotExists": true,
- "connectionStringSetting": "MyAccount_COSMOSDB",
- "direction": "out"
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
- #r "Newtonsoft.Json"
-
- using Microsoft.Azure.WebJobs.Host;
- using Newtonsoft.Json.Linq;
- using Microsoft.Extensions.Logging;
-
- public static void Run(string myQueueItem, out object employeeDocument, ILogger log)
- {
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
-
- dynamic employee = JObject.Parse(myQueueItem);
-
- employeeDocument = new {
- id = employee.name + "-" + employee.employeeId,
- name = employee.name,
- employeeId = employee.employeeId,
- address = employee.address
- };
- }
-```
-
-<a id="queue-trigger-write-docs-using-iasynccollector-c-script"></a>
-
-### Queue trigger, write docs using IAsyncCollector
-
-To create multiple documents, you can bind to `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the supported types.
-
-This example refers to a simple `ToDoItem` type:
-
-```cs
-namespace CosmosDBSamplesV2
-{
- public class ToDoItem
- {
- public string id { get; set; }
- public string Description { get; set; }
- }
-}
-```
-
-Here's the function.json file:
-
-```json
-{
- "bindings": [
- {
- "name": "toDoItemsIn",
- "type": "queueTrigger",
- "direction": "in",
- "queueName": "todoqueueforwritemulti",
- "connectionStringSetting": "AzureWebJobsStorage"
- },
- {
- "type": "cosmosDB",
- "name": "toDoItemsOut",
- "databaseName": "ToDoItems",
- "collectionName": "Items",
- "connectionStringSetting": "CosmosDBConnection",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System;
-using Microsoft.Extensions.Logging;
-
-public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> toDoItemsOut, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed {toDoItemsIn?.Length} items");
-
- foreach (ToDoItem toDoItem in toDoItemsIn)
- {
- log.LogInformation($"Description={toDoItem.Description}");
- await toDoItemsOut.AddAsync(toDoItem);
- }
-}
-```
- ::: zone-end
def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpRespon
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-output).
# [Extension 4.x+](#tab/extensionv4/in-process)
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
[!INCLUDE [functions-cosmosdb-output-attributes-v3](../../includes/functions-cosmosdb-output-attributes-v3.md)]
-# [Extension 4.x+](#tab/functionsv4/csharp-script)
--
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-- ::: zone-end
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfuncti
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types.
-
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
- ::: zone-end
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
-
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
This example requires the following `using` statements:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/CosmosDB/CosmosDBFunction.cs" range="4-7"::: -
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
-
-The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "type": "cosmosDBTrigger",
- "name": "documents",
- "direction": "in",
- "leaseContainerName": "leases",
- "connection": "<connection-app-setting>",
- "databaseName": "Tasks",
- "containerName": "Items",
- "createLeaseContainerIfNotExists": true
-}
-```
-
-Here's the C# script code:
-
-```cs
- using System;
- using System.Collections.Generic;
- using Microsoft.Extensions.Logging;
-
- // Customize the model with your own desired properties
- public class ToDoItem
- {
- public string id { get; set; }
- public string Description { get; set; }
- }
-
- public static void Run(IReadOnlyList<ToDoItem> documents, ILogger log)
- {
- log.LogInformation("Documents modified " + documents.Count);
- log.LogInformation("First document Id " + documents[0].id);
- }
-```
-
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-
-The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "type": "cosmosDBTrigger",
- "name": "documents",
- "direction": "in",
- "leaseCollectionName": "leases",
- "connectionStringSetting": "<connection-app-setting>",
- "databaseName": "Tasks",
- "collectionName": "Items",
- "createLeaseCollectionIfNotExists": true
-}
-```
-
-Here's the C# script code:
-
-```cs
- #r "Microsoft.Azure.DocumentDB.Core"
-
- using System;
- using Microsoft.Azure.Documents;
- using System.Collections.Generic;
- using Microsoft.Extensions.Logging;
-
- public static void Run(IReadOnlyList<Document> documents, ILogger log)
- {
- log.LogInformation("Documents modified " + documents.Count);
- log.LogInformation("First document Id " + documents[0].Id);
- }
-```
- ::: zone-end
Here's the Python code:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#cosmos-db-trigger).
# [Extension 4.x+](#tab/extensionv4/in-process)
Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotn
[!INCLUDE [functions-cosmosdb-attributes-v3](../../includes/functions-cosmosdb-attributes-v3.md)]
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
--
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-- ::: zone-end
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfuncti
See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
-# [Extension 4.x+](#tab/extensionv4/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-csharp#binding-types) for a list of supported types.
-
-# [Functions 2.x+](#tab/functionsv2/csharp-script)
-
-See [Binding types](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#binding-types) for a list of supported types.
- ::: zone-end
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
This article supports both programming models.
The type of the output parameter used with an Event Grid output binding depends on the Functions runtime version, the binding extension version, and the modality of the C# function. The C# function can be created using one of the following C# modes: * [In-process class library](functions-dotnet-class-library.md): compiled C# function that runs in the same process as the Functions runtime.
-* [Isolated worker process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a worker process isolated from the runtime.
-* [C# script](functions-reference-csharp.md): used primarily when creating C# functions in the Azure portal.
+* [Isolated worker process class library](dotnet-isolated-process-guide.md): compiled C# function that runs in a worker process isolated from the runtime.
# [In-process](#tab/in-process)
The following example shows how the custom type is used in both the trigger and
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/EventGrid/EventGridFunction.cs" range="4-49":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows the Event Grid output binding data in the *function.json* file.
-
-```json
-{
- "type": "eventGrid",
- "name": "outputEvent",
- "topicEndpointUri": "MyEventGridTopicUriSetting",
- "topicKeySetting": "MyEventGridTopicKeySetting",
- "direction": "out"
-}
-```
-
-Here's C# script code that creates one event:
-
-```cs
-#r "Microsoft.Azure.EventGrid"
-using System;
-using Microsoft.Azure.EventGrid.Models;
-using Microsoft.Extensions.Logging;
-
-public static void Run(TimerInfo myTimer, out EventGridEvent outputEvent, ILogger log)
-{
- outputEvent = new EventGridEvent("message-id", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0");
-}
-```
-
-Here's C# script code that creates multiple events:
-
-```cs
-#r "Microsoft.Azure.EventGrid"
-using System;
-using Microsoft.Azure.EventGrid.Models;
-using Microsoft.Extensions.Logging;
-
-public static void Run(TimerInfo myTimer, ICollector<EventGridEvent> outputEvent, ILogger log)
-{
- outputEvent.Add(new EventGridEvent("message-id-1", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"));
- outputEvent.Add(new EventGridEvent("message-id-2", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"));
-}
-```
::: zone-end
def main(eventGridEvent: func.EventGridEvent,
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to configure the binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#event-grid-output).
The attribute's constructor takes the name of an application setting that contains the name of the custom topic, and the name of an application setting that contains the topic key.
The following table explains the parameters for the `EventGridOutputAttribute`.
|**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. | |**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
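As a minimal sketch of how this attribute might be applied in an isolated worker function (the function name, timer schedule, custom event type, and property values are hypothetical; the app setting names mirror those in the table above):

```csharp
using System;
using Microsoft.Azure.Functions.Worker;

public static class EmitEvent
{
    [Function(nameof(EmitEvent))]
    [EventGridOutput(TopicEndpointUri = "MyTopicEndpointUri", TopicKeySetting = "MyTopicKeySetting")]
    public static MyEventType Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        // The returned object is serialized and published to the custom topic.
        return new MyEventType
        {
            Id = "message-id",
            Subject = "subject-name",
            EventType = "event-type",
            EventTime = DateTime.UtcNow,
            Data = "event-data"
        };
    }
}

// Hypothetical custom event type shaped like the Event Grid event schema.
public class MyEventType
{
    public string Id { get; set; }
    public string Subject { get; set; }
    public string EventType { get; set; }
    public DateTime EventTime { get; set; }
    public object Data { get; set; }
}
```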
-# [C# Script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-|||-|
-|**type** | Must be set to `eventGrid`. |
-|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
-|**name** | The variable name used in function code that represents the event. |
-|**topicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
-|**topicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
- ::: zone-end
Requires you to define a custom type, or use a string. See the [Example section]
Functions version 1.x doesn't support the isolated worker process.
-# [Extension v3.x](#tab/extensionv3/csharp-script)
-
-C# script functions support the following types:
-
-+ [Azure.Messaging.CloudEvent][CloudEvent]
-+ [Azure.Messaging.EventGrid][EventGridEvent]
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
-
-Send messages by using a method parameter such as `out EventGridEvent paramName`.
-To write multiple messages, you can instead use `ICollector<EventGridEvent>` or `IAsyncCollector<EventGridEvent>`.
-
-# [Extension v2.x](#tab/extensionv2/csharp-script)
-
-C# script functions support the following types:
-
-+ [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent]
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
-
-Send messages by using a method parameter such as `out EventGridEvent paramName`.
-To write multiple messages, you can instead use `ICollector<EventGridEvent>` or `IAsyncCollector<EventGridEvent>`.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-C# script functions support the following types:
-
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
- ::: zone-end
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
The following example shows how the custom type is used in both the trigger and
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/EventGrid/EventGridFunction.cs" range="11-33":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows an Event Grid trigger defined in the *function.json* file.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "type": "eventGridTrigger",
- "name": "eventGridEvent",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-Here's an example of a C# script function that uses an `EventGridEvent` binding parameter:
-
-```csharp
-#r "Microsoft.Azure.EventGrid"
-using Microsoft.Azure.EventGrid.Models;
-using Microsoft.Extensions.Logging;
-
-public static void Run(EventGridEvent eventGridEvent, ILogger log)
-{
- log.LogInformation(eventGridEvent.Data.ToString());
-}
-```
-
-For more information, see Packages, [Attributes](#attributes), [Configuration](#configuration), and [Usage](#usage).
--
-Here's an example of a C# script function that uses a `JObject` binding parameter:
-
-```cs
-#r "Newtonsoft.Json"
-
-using Newtonsoft.Json;
-using Newtonsoft.Json.Linq;
-
-public static void Run(JObject eventGridEvent, TraceWriter log)
-{
- log.Info(eventGridEvent.ToString(Formatting.Indented));
-}
-```
- ::: zone-end
def main(event: func.EventGridEvent):
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [EventGridTrigger](https://github.com/Azure/azure-functions-eventgrid-extension/blob/master/src/EventGridExtension/TriggerBinding/EventGridTriggerAttribute.cs) attribute. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#event-grid-trigger).
# [In-process](#tab/in-process)
Here's an `EventGridTrigger` attribute in a method signature:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/EventGrid/EventGridFunction.cs" range="13-16":::
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file. There are no constructor parameters or properties to set in the `EventGridTrigger` attribute.
-
-|function.json property |Description|
-|||
-| **type** | Required - must be set to `eventGridTrigger`. |
-| **direction** | Required - must be set to `in`. |
-| **name** | Required - the variable name used in function code for the parameter that receives the event data. |
- ::: zone-end
Requires you to define a custom type, or use a string. See the [Example section]
Functions version 1.x doesn't support the isolated worker process.
-# [Extension v3.x](#tab/extensionv3/csharp-script)
-
-In-process C# class library functions supports the following types:
-
-+ [Azure.Messaging.CloudEvent][CloudEvent]
-+ [Azure.Messaging.EventGrid][EventGridEvent2]
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
-
-# [Extension v2.x](#tab/extensionv2/csharp-script)
-
-In-process C# class library functions supports the following types:
-
-+ [Microsoft.Azure.EventGrid.Models.EventGridEvent][EventGridEvent]
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-In-process C# class library functions supports the following types:
-
-+ [Newtonsoft.Json.Linq.JObject][JObject]
-+ [System.String][String]
- ::: zone-end
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
The following example shows a [C# function](dotnet-isolated-process-guide.md) th
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/EventHubs/EventHubsFunction.cs" range="12-23":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows an event hub trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes a message to an event hub.
-
-The following examples show Event Hubs binding data in the *function.json* file for Functions runtime version 2.x and later versions.
-
-```json
-{
- "type": "eventHub",
- "name": "outputEventHubMessage",
- "eventHubName": "myeventhub",
- "connection": "MyEventHubSendAppSetting",
- "direction": "out"
-}
-```
-
-Here's C# script code that creates one message:
-
-```cs
-using System;
-using Microsoft.Extensions.Logging;
-
-public static void Run(TimerInfo myTimer, out string outputEventHubMessage, ILogger log)
-{
- String msg = $"TimerTriggerCSharp1 executed at: {DateTime.Now}";
- log.LogInformation(msg);
- outputEventHubMessage = msg;
-}
-```
-
-Here's C# script code that creates multiple messages:
-
-```cs
-public static void Run(TimerInfo myTimer, ICollector<string> outputEventHubMessage, ILogger log)
-{
- string message = $"Message created at: {DateTime.Now}";
- log.LogInformation(message);
- outputEventHubMessage.Add("1 " + message);
- outputEventHubMessage.Add("2 " + message);
-}
-```
- ::: zone-end
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to configure the binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to configure the binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#event-hubs-output).
# [In-process](#tab/in-process)
Use the [EventHubOutputAttribute] to define an output binding to an event hub, w
|**EventHubName** | The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. | |**Connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).|
-# [C# Script](#tab/csharp-script)
-
-The following table explains the binding configuration properties that you set in the *function.json* file.
-
-|function.json property | Description|
-|||
-|**type** | Must be set to `eventHub`. |
-|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
-|**name** | The variable name used in function code that represents the event. |
-|**eventHubName** | Functions 2.x and higher. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. In Functions 1.x, this property is named `path`.|
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections).|
- ::: zone-end
Send messages by using a method parameter such as `out string paramName`. To wri
# [Extension v3.x+](#tab/extensionv3/isolated-process)
-Requires you to define a custom type, or use a string.
-
-# [Extension v5.x+](#tab/extensionv5/csharp-script)
-
-C# script functions support the following types:
-
-+ [Azure.Messaging.EventHubs.EventData](/dotnet/api/azure.messaging.eventhubs.eventdata)
-+ String
-+ Byte array
-+ Plain-old CLR object (POCO)
-
-This version of [EventData](/dotnet/api/azure.messaging.eventhubs.eventdata) drops support for the legacy `Body` type in favor of [EventBody](/dotnet/api/azure.messaging.eventhubs.eventdata.eventbody).
-
-Send messages by using a method parameter such as `out string paramName`, where `paramName` is the value specified in the `name` property of *function.json*. To write multiple messages, you can use `ICollector<string>` or `IAsyncCollector<string>` in place of `out string`.
-
-# [Extension v3.x+](#tab/extensionv3/csharp-script)
-
-C# script functions support the following types:
-
-+ [Microsoft.Azure.EventHubs.EventData](/dotnet/api/microsoft.azure.eventhubs.eventdata)
-+ String
-+ Byte array
-+ Plain-old CLR object (POCO)
-
-Send messages by using a method parameter such as `out string paramName`, where `paramName` is the value specified in the `name` property of *function.json*. To write multiple messages, you can use `ICollector<string>` or
-`IAsyncCollector<string>` in place of `out string`.
+Requires you to define a custom type, or use a string. Additional options are available in **Extension v5.x+**.
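To illustrate the string-based approach in the isolated worker model, here's a hedged sketch; the hub name, connection setting name, and the HTTP trigger are assumptions made for this example, not taken from the article.

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class EventHubsOutputSample
{
    // The returned string is sent as a single event to the hub named below.
    [Function("EventHubsOutputSample")]
    [EventHubOutput("myeventhub", Connection = "EventHubConnectionAppSetting")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        return $"Event generated at {System.DateTime.UtcNow:O}";
    }
}
```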
azure-functions Functions Bindings Http Webhook Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-output.md
The default return value for an HTTP-triggered function is:
::: zone pivot="programming-language-csharp" ## Attribute
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries don't require an attribute. C# script uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries don't require an attribute. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#http-output).
# [In-process](#tab/in-process)
A return value attribute isn't required. To learn more, see [Usage](#usage).
A return value attribute isn't required. To learn more, see [Usage](#usage).
-# [C# Script](#tab/csharp-script)
-
-The following table explains the binding configuration properties that you set in the *function.json* file.
-
-|Property |Description |
-|||
-| **type** |Must be set to `http`. |
-| **direction** | Must be set to `out`. |
-| **name** | The variable name used in function code for the response, or `$return` to use the return value. |
- ::: zone-end
The HTTP triggered function returns a type of [IActionResult] or `Task<IActionRe
The HTTP triggered function returns an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object or a `Task<HttpResponseData>`. If the app uses [ASP.NET Core integration in .NET Isolated](./dotnet-isolated-process-guide.md#aspnet-core-integration-preview), it could also use [IActionResult], `Task<IActionResult>`, [HttpResponse], or `Task<HttpResponse>`.
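As a rough illustration of returning `HttpResponseData` from the isolated worker model, consider this hedged sketch; the function name and response text are illustrative only.

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class HttpOutputSample
{
    [Function("HttpOutputSample")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
    {
        // Build the response that is returned through the HTTP output binding.
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Hello from the isolated worker model.");
        return response;
    }
}
```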
-# [C# Script](#tab/csharp-script)
-
-The HTTP triggered function returns a type of [IActionResult] or `Task<IActionResult>`.
- [IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult [HttpResponse]: /dotnet/api/microsoft.aspnetcore.http.httpresponse
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
public IActionResult Run(
[IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult
-# [C# Script](#tab/csharp-script)
-
-The following example shows a trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request.
-
-Here's the *function.json* file:
-
-```json
-{
- "disabled": false,
- "bindings": [
- {
- "authLevel": "function",
- "name": "req",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "name": "$return",
- "type": "http",
- "direction": "out"
- }
- ]
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's C# script code that binds to `HttpRequest`:
-
-```cs
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-using Newtonsoft.Json;
-
-public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- string name = req.Query["name"];
-
- string requestBody = String.Empty;
- using (StreamReader streamReader = new StreamReader(req.Body))
- {
- requestBody = await streamReader.ReadToEndAsync();
- }
- dynamic data = JsonConvert.DeserializeObject(requestBody);
- name = name ?? data?.name;
-
- return name != null
- ? (ActionResult)new OkObjectResult($"Hello, {name}")
- : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
-}
-```
-
-You can bind to a custom object instead of `HttpRequest`. This object is created from the body of the request and parsed as JSON. Similarly, a type can be passed to the HTTP response output binding and returned as the response body, along with a `200` status code.
-
-```csharp
-using System.Net;
-using System.Threading.Tasks;
-using Microsoft.Extensions.Logging;
-
-public static string Run(Person person, ILogger log)
-{
- return person.Name != null
- ? (ActionResult)new OkObjectResult($"Hello, {person.Name}")
- : new BadRequestObjectResult("Please pass an instance of Person.");
-}
-
-public class Person {
- public string Name {get; set;}
-}
-```
- ::: zone-end
def main(req: func.HttpRequest) -> func.HttpResponse:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the `HttpTriggerAttribute` to define the trigger binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#http-trigger).
# [In-process](#tab/in-process)
In [isolated worker process](dotnet-isolated-process-guide.md) function apps, th
| **Methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). | | **Route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
-# [C# Script](#tab/csharp-script)
-
-The following table explains the trigger configuration properties that you set in the *function.json* file:
-
-|function.json property | Description|
-|||
-| **type** | Required - must be set to `httpTrigger`. |
-| **direction** | Required - must be set to `in`. |
-| **name** | Required - the variable name used in function code for the request or request body. |
-| **authLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](#http-auth). |
-| **methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](#customize-the-http-endpoint). |
-| **route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](#customize-the-http-endpoint). |
-| **webHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](#webhook-type).|
- ::: zone-end
FunctionContext executionContext)
} ```
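For comparison with the removed C# script sample shown below, a hedged isolated worker sketch that binds similar route parameters might look like this; the route template and parameter names are illustrative assumptions.

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class RouteParameterSample
{
    // {category} and {id} in the route template bind to the matching method parameters.
    [Function("RouteParameterSample")]
    public static HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "products/{category}/{id}")] HttpRequestData req,
        string category,
        string id)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString($"Category: {category}, ID: {id}");
        return response;
    }
}
```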
-# [C# Script](#tab/csharp-script)
-
- The following C# function code makes use of both parameters.
-
-```csharp
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-
-public static IActionResult Run(HttpRequest req, string category, int? id, ILogger log)
-{
- var message = String.Format($"Category: {category}, ID: {id}");
- return (ActionResult)new OkObjectResult(message);
-}
-```
- ::: zone-end
public static void Run(JObject input, ClaimsPrincipal principal, ILogger log)
The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
-# [C# Script](#tab/csharp-script)
-
-```csharp
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using System.Security.Claims;
-
-public static IActionResult Run(HttpRequest req, ILogger log)
-{
- ClaimsPrincipal identities = req.HttpContext.User;
- // ...
- return new OkObjectResult();
-}
-```
-
-Alternatively, the ClaimsPrincipal can simply be included as an additional parameter in the function signature:
-
-```csharp
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using System.Security.Claims;
-using Newtonsoft.Json.Linq;
-
-public static void Run(JObject input, ClaimsPrincipal principal, ILogger log)
-{
- // ...
- return;
-}
-```
- ::: zone-end
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Functions execute in the same process as the Functions host. To learn more, see
Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-# [C# script](#tab/csharp-script)
-
-Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
- The functionality of the extension varies depending on the extension version:
Add the extension to your project by installing the [NuGet package](https://www.
Functions 1.x doesn't support running in an isolated worker process.
-# [Functions v2.x+](#tab/functionsv2/csharp-script)
-
-This version of the extension should already be available to your function app with [extension bundle], version 2.x.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
- ::: zone-end
azure-functions Functions Bindings Rabbitmq Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-sendgrid.md
You can add the extension to your project by explicitly installing the [NuGet pa
## Example ::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
The following example shows a [C# function](dotnet-isolated-process-guide.md) th
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/ServiceBus/ServiceBusFunction.cs" range="10-25":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a Service Bus output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a timer trigger to send a queue message every 15 seconds.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "schedule": "0/15 * * * * *",
- "name": "myTimer",
- "runsOnStartup": true,
- "type": "timerTrigger",
- "direction": "in"
- },
- {
- "name": "outputSbQueue",
- "type": "serviceBus",
- "queueName": "testqueue",
- "connection": "MyServiceBusConnection",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-Here's C# script code that creates a single message:
-
-```cs
-public static void Run(TimerInfo myTimer, ILogger log, out string outputSbQueue)
-{
- string message = $"Service Bus queue message created at: {DateTime.Now}";
- log.LogInformation(message);
- outputSbQueue = message;
-}
-```
-
-Here's C# script code that creates multiple messages:
-
-```cs
-public static async Task Run(TimerInfo myTimer, ILogger log, IAsyncCollector<string> outputSbQueue)
-{
- string message = $"Service Bus queue messages created at: {DateTime.Now}";
- log.LogInformation(message);
- await outputSbQueue.AddAsync("1 " + message);
- await outputSbQueue.AddAsync("2 " + message);
-}
-```
::: zone-end
def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the output binding. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#service-bus-output).
# [In-process](#tab/in-process)
The following table explains the properties you can set using the attribute:
|**QueueOrTopicName**|Name of the topic or queue to send messages to. Use `EntityType` to set the destination type.| |**Connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
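To make the attribute usage concrete, here's a hedged in-process sketch that writes the function's return value to a queue; the queue name, connection setting name, and HTTP trigger are placeholders for this example.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ServiceBusOutputSample
{
    // The return value is sent as a single message to the queue named below.
    [FunctionName("ServiceBusOutputSample")]
    [return: ServiceBus("outqueue", Connection = "ServiceBusConnection")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Queuing a Service Bus message.");
        return $"Order received at {System.DateTime.UtcNow:O}";
    }
}
```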
-# [C# script](#tab/csharp-script)
-
-C# script uses a *function.json* file for configuration instead of attributes. The following table explains the binding configuration properties that you set in the *function.json* file.
-
-|function.json property | Description|
-|||-|
-|**type** |Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. |
-|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic.
-|**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
-|**connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
-|**accessRights** (v1 only)|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
- ::: zone-end
Use the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmes
# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-Messaging-specific types are not yet supported.
+Earlier versions of this extension in the isolated worker process only support binding to messaging-specific types. Additional options are available in **Extension 5.x and higher**.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Messaging-specific types are not yet supported.
-
-# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
-
-Use the [ServiceBusMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessage) type when sending messages with metadata. Parameters are defined as `out` parameters. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
-
-When the parameter value is null when the function exits, Functions doesn't create a message.
-
-# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
-
-Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type when sending messages with metadata. Parameters are defined as `out` parameters. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
-
-When the parameter value is null when the function exits, Functions doesn't create a message.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-Use the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) type when sending messages with metadata. Parameters are defined as `out` parameters. Use an `ICollector<T>` or `IAsyncCollector<T>` to write multiple messages. A message is created when you call the `Add` method.
-
-When the parameter value is null when the function exits, Functions doesn't create a message.
+Functions version 1.x doesn't support the isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x].
::: zone-end
For a complete example, see [the examples section](#example).
## Next steps - [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md)+
+[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
The following example shows a [C# function](dotnet-isolated-process-guide.md) th
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/ServiceBus/ServiceBusFunction.cs" range="10-25":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a Service Bus trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function reads [message metadata](#message-metadata) and logs a Service Bus queue message.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
-"bindings": [
- {
- "queueName": "testqueue",
- "connection": "MyServiceBusConnection",
- "name": "myQueueItem",
- "type": "serviceBusTrigger",
- "direction": "in"
- }
-],
-"disabled": false
-}
-```
-
-Here's the C# script code:
-
-```cs
-using System;
-
-public static void Run(string myQueueItem,
- Int32 deliveryCount,
- DateTime enqueuedTimeUtc,
- string messageId,
- TraceWriter log)
-{
- log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
-
- log.Info($"EnqueuedTimeUtc={enqueuedTimeUtc}");
- log.Info($"DeliveryCount={deliveryCount}");
- log.Info($"MessageId={messageId}");
-}
-```
::: zone-end
def main(msg: azf.ServiceBusMessage) -> str:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [ServiceBusTriggerAttribute](https://github.com/Azure/azure-functions-servicebus-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.ServiceBus/ServiceBusTriggerAttribute.cs) attribute to define the function trigger. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#service-bus-trigger).
# [In-process](#tab/in-process)
The following table explains the properties you can set using this trigger attri
|**IsBatched**| Messages are delivered in batches. Requires an array or collection type. | |**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
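A hedged in-process sketch of the trigger attribute, roughly equivalent to the removed C# script sample, follows; the queue name and connection setting name are placeholders.

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ServiceBusTriggerSample
{
    // Logs the message body plus a few metadata values bound by parameter name.
    [FunctionName("ServiceBusTriggerSample")]
    public static void Run(
        [ServiceBusTrigger("testqueue", Connection = "ServiceBusConnection")] string myQueueItem,
        Int32 deliveryCount,
        DateTime enqueuedTimeUtc,
        string messageId,
        ILogger log)
    {
        log.LogInformation($"Processed message: {myQueueItem}");
        log.LogInformation($"EnqueuedTimeUtc={enqueuedTimeUtc}, DeliveryCount={deliveryCount}, MessageId={messageId}");
    }
}
```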
-# [C# script](#tab/csharp-script)
-
-C# script uses a *function.json* file for configuration instead of attributes. The following table explains the binding configuration properties that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `serviceBusTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | The name of the variable that represents the queue or topic message in function code. |
-|**queueName**| Name of the queue to monitor. Set only if monitoring a queue, not for a topic.
-|**topicName**| Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
-|**subscriptionName**| Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
-|**connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections).|
-|**accessRights**| Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
-|**isSessionsEnabled**| `true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
-|**autoComplete**| `true` when the trigger should automatically call complete after processing, or if the function code will manually call complete.<br/><br/>Setting to `false` is only supported in C#.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br/><br/>This property is available only in Azure Functions 2.x and higher. |
- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ::: zone-end
In [C# class libraries](functions-dotnet-class-library.md), the attribute's cons
# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-Messaging-specific types are not yet supported.
+Earlier versions of this extension in the isolated worker process only support binding to messaging-specific types. Additional options are available in **Extension 5.x and higher**.
# [Functions 1.x](#tab/functionsv1/isolated-process)
-Messaging-specific types are not yet supported.
-
-# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
-
-Use the [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) type to receive message metadata from Service Bus Queues and Subscriptions. To learn more, see [Messages, payloads, and serialization](../service-bus-messaging/service-bus-messages-payloads.md).
-
-# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
-
-Use the [Message](/dotnet/api/microsoft.azure.servicebus.message) type to receive messages with metadata. To learn more, see [Messages, payloads, and serialization](../service-bus-messaging/service-bus-messages-payloads.md).
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-The following parameter types are available for the queue or topic message:
-
-* [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) - Gives you the deserialized message with the [BrokeredMessage.GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method.
-* [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) - Used to receive and acknowledge messages from the message container, which is required when `autoComplete` is set to `false`.
+Functions version 1.x doesn't support the isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x].
These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.serv
# [Extension 5.x and higher](#tab/extensionv5/isolated-process)
-Messaging-specific types are not yet supported.
-
-# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-
-Messaging-specific types are not yet supported.
-
-# [Functions 1.x](#tab/functionsv1/isolated-process)
-
-Messaging-specific types are not yet supported.
-
-# [Extension 5.x and higher](#tab/extensionv5/csharp-script)
- These properties are members of the [ServiceBusReceivedMessage](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) class. |Property|Type|Description|
These properties are members of the [ServiceBusReceivedMessage](/dotnet/api/azur
|`Subject`|`string`|The application-specific label which can be used in place of the `Label` metadata property.| |`To`|`string`|The send to address.|
-# [Functions 2.x and higher](#tab/functionsv2/csharp-script)
-These properties are members of the [Message](/dotnet/api/microsoft.azure.servicebus.message) class.
-
-|Property|Type|Description|
-|--|-|--|
-|`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
-|`CorrelationId`|`string`|The correlation ID.|
-|`DeliveryCount`|`Int32`|The number of deliveries.|
-|`ScheduledEnqueueTimeUtc`|`DateTime`|The scheduled enqueued time in UTC.|
-|`ExpiresAtUtc`|`DateTime`|The expiration time in UTC.|
-|`Label`|`string`|The application-specific label.|
-|`MessageId`|`string`|A user-defined value that Service Bus can use to identify duplicate messages, if enabled.|
-|`ReplyTo`|`string`|The reply to queue address.|
-|`To`|`string`|The send to address.|
-|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender. |
+# [Functions 2.x and higher](#tab/functionsv2/isolated-process)
-# [Functions 1.x](#tab/functionsv1/csharp-script)
+Earlier versions of this extension in the isolated worker process only support binding to messaging-specific types. Additional options are available in **Extension 5.x and higher**.
-These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) and [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) classes.
+# [Functions 1.x](#tab/functionsv1/isolated-process)
-|Property|Type|Description|
-|--|-|--|
-|`ContentType`|`string`|A content type identifier utilized by the sender and receiver for application-specific logic.|
-|`CorrelationId`|`string`|The correlation ID.|
-|`DeadLetterSource`|`string`|The dead letter source.|
-|`DeliveryCount`|`Int32`|The number of deliveries.|
-|`EnqueuedTimeUtc`|`DateTime`|The enqueued time in UTC.|
-|`ExpiresAtUtc`|`DateTime`|The expiration time in UTC.|
-|`Label`|`string`|The application-specific label.|
-|`MessageId`|`string`|A user-defined value that Service Bus can use to identify duplicate messages, if enabled.|
-|`MessageReceiver`|`MessageReceiver`|Service Bus message receiver. Can be used to abandon, complete, or deadletter the message.|
-|`MessageSession`|`MessageSession`|A message receiver specifically for session-enabled queues and topics.|
-|`ReplyTo`|`string`|The reply to queue address.|
-|`SequenceNumber`|`long`|The unique number assigned to a message by the Service Bus.|
-|`To`|`string`|The send to address.|
-|`UserProperties`|`IDictionary<string, object>`|Properties set by the sender. |
+Functions version 1.x doesn't support the isolated worker process. To use the isolated worker model, [upgrade your application to Functions 4.x].
These properties are members of the [BrokeredMessage](/dotnet/api/microsoft.serv
[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
+[upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md
azure-functions Functions Bindings Signalr Service Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" ### Broadcast to all clients
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
For information on setup and configuration details, see the [overview](functions
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
The trigger input type is declared as either `InvocationContext` or a custom typ
### InvocationContext
-`InvocationContext` contains all the content in the message send from aa SignalR service, which includes the following properties:
+`InvocationContext` contains all the content in the message sent from a SignalR service, which includes the following properties:
|Property | Description| |||
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
The following example is a [C# function](dotnet-isolated-process-guide.md) that
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="9-26":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows blob input and output bindings in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the bindings. The function makes a copy of a text blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
-
-In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "myInputBlob",
- "type": "blob",
- "path": "samples-workitems/{queueTrigger}",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "in"
- },
- {
- "name": "myOutputBlob",
- "type": "blob",
- "path": "samples-workitems/{queueTrigger}-Copy",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
-public static void Run(string myQueueItem, string myInputBlob, out string myOutputBlob, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- myOutputBlob = myInputBlob;
-}
-```
::: zone-end
def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#blob-input).
# [In-process](#tab/in-process)
isolated worker process defines an input binding by using a `BlobInputAttribute`
|**BlobPath** | The path to the blob.| |**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
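As a hedged sketch of the isolated worker attributes working together (mirroring the removed C# script copy sample), the container, queue, and connection names below are placeholders:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class BlobInputSample
{
    // Copies the blob named by the queue message; the return value is written
    // through the BlobOutput binding to a "-Copy" blob.
    [Function("BlobInputSample")]
    [BlobOutput("samples-workitems/{queueTrigger}-Copy", Connection = "MyStorageConnectionAppSetting")]
    public static string Run(
        [QueueTrigger("myqueue-items", Connection = "MyStorageConnectionAppSetting")] string myQueueItem,
        [BlobInput("samples-workitems/{queueTrigger}", Connection = "MyStorageConnectionAppSetting")] string myInputBlob,
        FunctionContext context)
    {
        context.GetLogger("BlobInputSample").LogInformation("Copying blob {name}", myQueueItem);
        return myInputBlob;
    }
}
```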
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `blob`. |
-|**direction** | Must be set to `in`. Exceptions are noted in the [usage](#usage) section. |
-|**name** | The name of the variable that represents the blob in function code.|
-|**path** | The path to the blob. |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
-|**dataType**| For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding
[!INCLUDE [functions-bindings-storage-blob-input-dotnet-isolated-types](../../includes/functions-bindings-storage-blob-input-dotnet-isolated-types.md)]
-# [C# Script](#tab/csharp-script)
-
-See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
 Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage).
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
The following example is a [C# function](dotnet-isolated-process-guide.md) that
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="4-26":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows blob input and output bindings in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the bindings. The function makes a copy of a text blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
-
-In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "myInputBlob",
- "type": "blob",
- "path": "samples-workitems/{queueTrigger}",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "in"
- },
- {
- "name": "myOutputBlob",
- "type": "blob",
- "path": "samples-workitems/{queueTrigger}-Copy",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```cs
-public static void Run(string myQueueItem, string myInputBlob, out string myOutputBlob, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- myOutputBlob = myInputBlob;
-}
-```
- ::: zone-end
def main(queuemsg: func.QueueMessage, inputblob: bytes, outputblob: func.Out[byt
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#blob-output).
# [In-process](#tab/in-process)
The `BlobOutputAttribute` constructor takes the following parameters:
|**BlobPath** | The path to the blob.| |**Connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).| -
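For the in-process model, the equivalent pattern uses the `Blob` attribute with an `out` parameter; this is a hedged sketch modeled on the removed C# script copy sample, with placeholder container and connection names.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobCopySample
{
    // Copies the blob named by the queue message to a "-Copy" blob.
    [FunctionName("BlobCopySample")]
    public static void Run(
        [QueueTrigger("myqueue-items", Connection = "MyStorageConnectionAppSetting")] string myQueueItem,
        [Blob("samples-workitems/{queueTrigger}", FileAccess.Read, Connection = "MyStorageConnectionAppSetting")] string myInputBlob,
        [Blob("samples-workitems/{queueTrigger}-Copy", FileAccess.Write, Connection = "MyStorageConnectionAppSetting")] out string myOutputBlob,
        ILogger log)
    {
        log.LogInformation($"Copying blob: {myQueueItem}");
        myOutputBlob = myInputBlob;
    }
}
```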
-# [C# script](#tab/csharp-script)
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `blob`. |
-|**direction** | Must be set to `in`. Exceptions are noted in the [usage](#usage) section. |
-|**name** | The name of the variable that represents the blob in function code.|
-|**path** | The path to the blob. |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
-|**dataType**| For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding
[!INCLUDE [functions-bindings-storage-blob-output-dotnet-isolated-types](../../includes/functions-bindings-storage-blob-output-dotnet-isolated-types.md)]
-# [C# Script](#tab/csharp-script)
-
-See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
 Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage).
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
The following example is a [C# function](dotnet-isolated-process-guide.md) that
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="9-25":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a blob trigger binding in a *function.json* file and code that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "disabled": false,
- "bindings": [
- {
- "name": "myBlob",
- "type": "blobTrigger",
- "direction": "in",
- "path": "samples-workitems/{name}",
- "connection":"MyStorageAccountAppSetting"
- }
- ]
-}
-```
-
-The string `{name}` in the blob trigger path `samples-workitems/{name}` creates a [binding expression](./functions-bindings-expressions-patterns.md) that you can use in function code to access the file name of the triggering blob. For more information, see [Blob name patterns](#blob-name-patterns) later in this article.
-
-For more information about *function.json* file properties, see the [Configuration](#configuration) section explains these properties.
-
-Here's C# script code that binds to a `Stream`:
-
-```cs
-public static void Run(Stream myBlob, string name, ILogger log)
-{
- log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
-}
-```
-
-Here's C# script code that binds to a `CloudBlockBlob`:
-
-```cs
-#r "Microsoft.WindowsAzure.Storage"
-
-using Microsoft.WindowsAzure.Storage.Blob;
-
-public static void Run(CloudBlockBlob myBlob, string name, ILogger log)
-{
- log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name}\nURI:{myBlob.StorageUri}");
-}
-```
- ::: zone-end
def main(myblob: func.InputStream):
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [BlobAttribute](/dotnet/api/microsoft.azure.webjobs.blobattribute) attribute to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [BlobTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.blobtriggerattribute) attribute to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#blob-trigger).
The attribute's constructor takes the following parameters:
Here's a `BlobTrigger` attribute in a method signature:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Blob/BlobFunction.cs" range="11-16"::: -
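Since the referenced sample isn't reproduced in this digest, here's a hedged in-process sketch of a blob trigger bound to a `Stream`; the container and connection setting names are placeholders.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BlobTriggerSample
{
    // Runs when a blob is added or updated in the samples-workitems container.
    [FunctionName("BlobTriggerSample")]
    public static void Run(
        [BlobTrigger("samples-workitems/{name}", Connection = "MyStorageAccountAppSetting")] Stream myBlob,
        string name,
        ILogger log)
    {
        log.LogInformation($"Blob trigger processed blob. Name: {name}, Size: {myBlob.Length} bytes");
    }
}
```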
-# [C# script](#tab/csharp-script)
-
-C# script uses a *function.json* file for configuration instead of attributes.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `blobTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the [usage](#usage) section. |
-|**name** | The name of the variable that represents the blob in function code. |
-|**path** | The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#blob-name-patterns). |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](#connections).|
- [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding
[!INCLUDE [functions-bindings-storage-blob-trigger-dotnet-isolated-types](../../includes/functions-bindings-storage-blob-trigger-dotnet-isolated-types.md)]
-# [C# Script](#tab/csharp-script)
-
-See [Binding types](./functions-bindings-storage-blob.md?tabs=in-process#binding-types) for a list of supported types.
 Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage).
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
public static class QueueFunctions
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_output_binding" :::
-# [C# Script](#tab/csharp-script)
-
-The following example shows an HTTP trigger binding in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the binding. The function creates a queue item with a **CustomQueueMessage** object payload for each HTTP request received.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "type": "httpTrigger",
- "direction": "in",
- "authLevel": "function",
- "name": "input"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "$return"
- },
- {
- "type": "queue",
- "direction": "out",
- "name": "$return",
- "queueName": "outqueue",
- "connection": "MyStorageConnectionAppSetting"
- }
- ]
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's C# script code that creates a single queue message:
-
-```cs
-public class CustomQueueMessage
-{
- public string PersonName { get; set; }
- public string Title { get; set; }
-}
-
-public static CustomQueueMessage Run(CustomQueueMessage input, ILogger log)
-{
- return input;
-}
-```
-
-You can send multiple messages at once by using an `ICollector` or `IAsyncCollector` parameter. Here's C# script code that sends multiple messages, one with the HTTP request data and one with hard-coded values:
-
-```cs
-public static void Run(
- CustomQueueMessage input,
- ICollector<CustomQueueMessage> myQueueItems,
- ILogger log)
-{
- myQueueItems.Add(input);
- myQueueItems.Add(new CustomQueueMessage { PersonName = "You", Title = "None" });
-}
-```
- ::: zone-end
def main(req: func.HttpRequest, msg: func.Out[typing.List[str]]) -> func.HttpRes
::: zone pivot="programming-language-csharp" ## Attributes
-The attribute that defines an output binding in C# libraries depends on the mode in which the C# class library runs. C# script instead uses a function.json configuration file.
+The attribute that defines an output binding in C# libraries depends on the mode in which the C# class library runs.
# [In-process](#tab/in-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [QueueAttribute](/dotnet/api/microsoft.azure.webjobs.queueattribute).
+In [C# class libraries](functions-dotnet-class-library.md), use the [QueueAttribute](/dotnet/api/microsoft.azure.webjobs.queueattribute). C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#queue-output).
The attribute applies to an `out` parameter or the return value of the function. The attribute's constructor takes the name of the queue, as shown in the following example:
When running in an isolated worker process, you use the [QueueOutputAttribute](h
Only returned variables are supported when running in an isolated worker process. Output parameters can't be used.
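A hedged isolated worker sketch of that return-value pattern follows; the queue name, connection setting name, and HTTP trigger are assumptions made for this example.

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public static class QueueOutputSample
{
    // The return value is written as a single message to the queue named below.
    [Function("QueueOutputSample")]
    [QueueOutput("outqueue", Connection = "MyStorageConnectionAppSetting")]
    public static string Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        return "Queued from an HTTP request";
    }
}
```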
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties that you set in the *function.json* file and the `Queue` attribute.
-
-|function.json property | Description|
-||-|
-|**type** |Must be set to `queue`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | The name of the variable that represents the queue in function code. Set to `$return` to reference the function return value.|
-|**queueName** | The name of the queue. |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).|
+ ::: zone-end ::: zone pivot="programming-language-python"
An in-process class library is a compiled C# function runs in the same process a
An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
- Choose a version to see usage details for the mode and version.
You can write multiple messages to the queue by using one of the following types
Isolated worker process currently only supports binding to string parameters.
-# [Extension 5.x+](#tab/extensionv5/csharp-script)
-
-Write a single queue message by using a method parameter such as `out T paramName`. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types:
-
-* An object serializable as JSON
-* `string`
-* `byte[]`
-* [QueueMessage]
-
-For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
-
-You can write multiple messages to the queue by using one of the following types:
-
-* `ICollector<T>` or `IAsyncCollector<T>`
-* [QueueClient]
-
-For examples using [QueueMessage] and [QueueClient], see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
-
-# [Extension 2.x+](#tab/extensionv2/csharp-script)
-
-Write a single queue message by using a method parameter such as `out T paramName`. You can use the method return type instead of an `out` parameter, and `T` can be any of the following types:
-
-* An object serializable as JSON
-* `string`
-* `byte[]`
-* [CloudQueueMessage]
-
-If you try to bind to [CloudQueueMessage] and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md#azure-storage-sdk-version-in-functions-1x).
-
-You can write multiple messages to the queue by using one of the following types:
-
-* `ICollector<T>` or `IAsyncCollector<T>`
-* [CloudQueue](/dotnet/api/microsoft.azure.storage.queue.cloudqueue)
- ::: zone-end
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
The following example shows a [C# function](dotnet-isolated-process-guide.md) th
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_output_binding":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a queue trigger binding in a *function.json* file and [C# script (.csx)](functions-reference-csharp.md) code that uses the binding. The function polls the `myqueue-items` queue and writes a log each time a queue item is processed.
-
-Here's the *function.json* file:
-
-```json
-{
- "disabled": false,
- "bindings": [
- {
- "type": "queueTrigger",
- "direction": "in",
- "name": "myQueueItem",
- "queueName": "myqueue-items",
- "connection":"MyStorageConnectionAppSetting"
- }
- ]
-}
-```
-
-The [section below](#attributes) explains these properties.
-
-Here's the C# script code:
-
-```csharp
-#r "Microsoft.WindowsAzure.Storage"
-
-using Microsoft.Extensions.Logging;
-using Microsoft.WindowsAzure.Storage.Queue;
-using System;
-
-public static void Run(CloudQueueMessage myQueueItem,
- DateTimeOffset expirationTime,
- DateTimeOffset insertionTime,
- DateTimeOffset nextVisibleTime,
- string queueTrigger,
- string id,
- string popReceipt,
- int dequeueCount,
- ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem.AsString}\n" +
- $"queueTrigger={queueTrigger}\n" +
- $"expirationTime={expirationTime}\n" +
- $"insertionTime={insertionTime}\n" +
- $"nextVisibleTime={nextVisibleTime}\n" +
- $"id={id}\n" +
- $"popReceipt={popReceipt}\n" +
- $"dequeueCount={dequeueCount}");
-}
-```
-
-The [usage](#usage) section explains `myQueueItem`, which is named by the `name` property in function.json. The [message metadata section](#message-metadata) explains all of the other variables shown.
- ::: zone-end
def main(msg: func.QueueMessage):
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use the [QueueTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Storage/Queues/QueueTriggerAttribute.cs) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#queue-trigger).
# [In-process](#tab/in-process)
In [C# class libraries](dotnet-isolated-process-guide.md), the attribute's const
This example also demonstrates setting the [connection string setting](#connections) in the attribute itself.
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** |Must be set to `queueTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction**| In the *function.json* file only. Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | The name of the variable that contains the queue item payload in the function code. |
-|**queueName** | The name of the queue to poll. |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).|
- ::: zone-end
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process) An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
-
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
When binding to an object, the Functions runtime tries to deserialize the JSON p
# [Extension 2.x+](#tab/extensionv2/isolated-process)
-Isolated worker process currently only supports binding to string parameters.
-
-# [Extension 5.x+](#tab/extensionv5/csharp-script)
-
-Access the message data by using a method parameter such as `string paramName`. The `paramName` is the value specified in the *function.json* file. You can bind to any of the following types:
-
-* Plain-old CLR object (POCO)
-* `string`
-* `byte[]`
-* [QueueMessage]
-
-When binding to an object, the Functions runtime tries to deserialize the JSON payload into an instance of an arbitrary class defined in your code. For examples using [QueueMessage], see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Queues#examples).
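-
-For instance, a minimal C# script sketch (the binding name `myQueueItem` is assumed to be set in *function.json*) that binds to [QueueMessage] to read message metadata:
-
-```csharp
-using Azure.Storage.Queues.Models;
-using Microsoft.Extensions.Logging;
-
-public static void Run(QueueMessage myQueueItem, ILogger log)
-{
-    // QueueMessage exposes the body plus metadata such as the message ID and dequeue count.
-    log.LogInformation($"Id: {myQueueItem.MessageId}, DequeueCount: {myQueueItem.DequeueCount}, Body: {myQueueItem.Body}");
-}
-```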
--
-# [Extension 2.x+](#tab/extensionv2/csharp-script)
-
-Access the message data by using a method parameter such as `string paramName`. The `paramName` is the value specified in the *function.json* file. You can bind to any of the following types:
-
-* Plain-old CLR object (POCO)
-* `string`
-* `byte[]`
-* [CloudQueueMessage]
-
-When binding to an object, the Functions runtime tries to deserialize the JSON payload into an instance of an arbitrary class defined in your code. If you try to bind to [CloudQueueMessage] and get an error message, make sure that you have a reference to [the correct Storage SDK version](functions-bindings-storage-queue.md).
+Earlier versions of this extension in the isolated worker process only support binding to strings. Additional options are available in **Extension 5.x+**.
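+
+For illustration, a minimal isolated worker sketch that binds the queue payload to a string (the class, queue, and connection setting names are placeholders):
+
+```csharp
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.Logging;
+
+public class QueueStringExample
+{
+    private readonly ILogger _logger;
+
+    public QueueStringExample(ILoggerFactory loggerFactory)
+    {
+        _logger = loggerFactory.CreateLogger<QueueStringExample>();
+    }
+
+    [Function("QueueStringExample")]
+    public void Run([QueueTrigger("myqueue-items", Connection = "MyStorageConnectionAppSetting")] string message)
+    {
+        // The queue message body is delivered as a string.
+        _logger.LogInformation($"Queue message: {message}");
+    }
+}
+```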
::: zone-end
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
An [in-process class library](functions-dotnet-class-library.md) is a compiled C
An [isolated worker process class library](dotnet-isolated-process-guide.md) compiled C# function runs in a process isolated from the runtime.
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
- Choose a version to see examples for the mode and version.
The `Filter` and `Take` properties are used to limit the number of entities retu
Functions version 1.x doesn't support isolated worker process.
-# [Azure Tables extension](#tab/table-api/csharp-script)
-
-The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses a queue trigger to read a single table row.
-
-The *function.json* file specifies a `partitionKey` and a `rowKey`. The `rowKey` value `{queueTrigger}` indicates that the row key comes from the queue message string.
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "personEntity",
- "type": "table",
- "tableName": "Person",
- "partitionKey": "Test",
- "rowKey": "{queueTrigger}",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
-#r "Microsoft.WindowsAzure.Storage"
-using Microsoft.Extensions.Logging;
-using Azure.Data.Tables;
-
-public static void Run(string myQueueItem, Person personEntity, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- log.LogInformation($"Name in Person entity: {personEntity.Name}");
-}
-
-public class Person : ITableEntity
-{
- public string Name { get; set; }
-
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public DateTimeOffset? Timestamp { get; set; }
- public ETag ETag { get; set; }
-}
-```
-
-# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
-
-The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses a queue trigger to read a single table row.
-
-The *function.json* file specifies a `partitionKey` and a `rowKey`. The `rowKey` value `{queueTrigger}` indicates that the row key comes from the queue message string.
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "personEntity",
- "type": "table",
- "tableName": "Person",
- "partitionKey": "Test",
- "rowKey": "{queueTrigger}",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
-public static void Run(string myQueueItem, Person personEntity, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- log.LogInformation($"Name in Person entity: {personEntity.Name}");
-}
-
-public class Person
-{
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public string Name { get; set; }
-}
-```
-
-To read more than one row, use a `CloudTable` method parameter to read the table by using the Azure Storage SDK. Here's an example of a function that queries an Azure Functions log table:
-
-```json
-{
- "bindings": [
- {
- "name": "myTimer",
- "type": "timerTrigger",
- "direction": "in",
- "schedule": "0 */1 * * * *"
- },
- {
- "name": "cloudTable",
- "type": "table",
- "connection": "AzureWebJobsStorage",
- "tableName": "AzureWebJobsHostLogscommon",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-```csharp
-#r "Microsoft.WindowsAzure.Storage"
-using Microsoft.WindowsAzure.Storage.Table;
-using System;
-using System.Threading.Tasks;
-using Microsoft.Extensions.Logging;
-
-public static async Task Run(TimerInfo myTimer, CloudTable cloudTable, ILogger log)
-{
- log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
-
- TableQuery<LogEntity> rangeQuery = new TableQuery<LogEntity>().Where(
- TableQuery.CombineFilters(
- TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal,
- "FD2"),
- TableOperators.And,
- TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThan,
- "a")));
-
- // Execute the query and loop through the results
- foreach (LogEntity entity in
- await cloudTable.ExecuteQuerySegmentedAsync(rangeQuery, null))
- {
- log.LogInformation(
- $"{entity.PartitionKey}\t{entity.RowKey}\t{entity.Timestamp}\t{entity.OriginalName}");
- }
-}
-
-public class LogEntity : TableEntity
-{
- public string OriginalName { get; set; }
-}
-```
-
-For more information about how to use CloudTable, see [Get started with Azure Table storage](../cosmos-db/tutorial-develop-table-dotnet.md).
-
-If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x).
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses a queue trigger to read a single table row.
-
-The *function.json* file specifies a `partitionKey` and a `rowKey`. The `rowKey` value `{queueTrigger}` indicates that the row key comes from the queue message string.
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "personEntity",
- "type": "table",
- "tableName": "Person",
- "partitionKey": "Test",
- "rowKey": "{queueTrigger}",
- "connection": "MyStorageConnectionAppSetting",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
-public static void Run(string myQueueItem, Person personEntity, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- log.LogInformation($"Name in Person entity: {personEntity.Name}");
-}
-
-public class Person
-{
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public string Name { get; set; }
-}
-```
-
-The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses `IQueryable<T>` to read entities for a partition key that is specified in a queue message. `IQueryable<T>` is only supported by version 1.x of the Functions runtime.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "queueName": "myqueue-items",
- "connection": "MyStorageConnectionAppSetting",
- "name": "myQueueItem",
- "type": "queueTrigger",
- "direction": "in"
- },
- {
- "name": "tableBinding",
- "type": "table",
- "connection": "MyStorageConnectionAppSetting",
- "tableName": "Person",
- "direction": "in"
- }
- ],
- "disabled": false
-}
-```
-
-The [configuration](#configuration) section explains these properties.
-
-The C# script code adds a reference to the Azure Storage SDK so that the entity type can derive from `TableEntity`:
-
-```csharp
-#r "Microsoft.WindowsAzure.Storage"
-using Microsoft.WindowsAzure.Storage.Table;
-using Microsoft.Extensions.Logging;
-
-public static void Run(string myQueueItem, IQueryable<Person> tableBinding, ILogger log)
-{
- log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
- foreach (Person person in tableBinding.Where(p => p.PartitionKey == myQueueItem).ToList())
- {
- log.LogInformation($"Name: {person.Name}");
- }
-}
-
-public class Person : TableEntity
-{
- public string Name { get; set; }
-}
-```
- ::: zone-end
With this simple binding, you can't programmatically handle a case in which no r
## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#table-input).
# [In-process](#tab/in-process)
In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttrib
|**Filter** | Optional. An OData filter expression for entities to read into an [`IEnumerable<T>`]. Can't be used with `RowKey`. | |**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to `table`. This property is set automatically when you create the binding in the Azure portal.|
-|**direction** | Must be set to `in`. This property is set automatically when you create the binding in the Azure portal. |
-|**name** | The name of the variable that represents the table or entity in function code. |
-|**tableName** | The name of the table.|
-|**partitionKey** | Optional. The partition key of the table entity to read. |
-|**rowKey** |Optional. The row key of the table entity to read. Can't be used with `take` or `filter`.|
-|**take** | Optional. The maximum number of entities to return. Can't be used with `rowKey`. |
-|**filter** | Optional. An OData filter expression for the entities to return from the table. Can't be used with `rowKey`.|
-|**connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
- ::: zone-end
An in-process class library is a compiled C# function that runs in the same proc
# [Isolated process](#tab/isolated-process)
-An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
-
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
To return a specific entity by key, use a plain-old CLR object (POCO). The speci
Functions version 1.x doesn't support isolated worker process.
-# [Azure Tables extension](#tab/table-api/csharp-script)
-
-To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
-
-To execute queries that return multiple entities, bind to a [TableClient] object. You can then use this object to create and execute queries against the bound table. Note that [TableClient] and related APIs belong to the [Azure.Data.Tables](/dotnet/api/azure.data.tables) namespace.
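-
-For illustration, a minimal C# script sketch (assuming a table input binding named `tableClient` in *function.json*) that queries the bound table for the partition named by the queue message:
-
-```csharp
-#r "Azure.Data.Tables"
-using Azure.Data.Tables;
-using Microsoft.Extensions.Logging;
-
-public static void Run(string myQueueItem, TableClient tableClient, ILogger log)
-{
-    // Query returns every entity whose partition key matches the queue message.
-    foreach (TableEntity entity in tableClient.Query<TableEntity>(e => e.PartitionKey == myQueueItem))
-    {
-        log.LogInformation($"{entity.PartitionKey}\t{entity.RowKey}");
-    }
-}
-```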
-
-# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
-
-To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
-
-To execute queries that return multiple entities, bind to a [CloudTable] object. You can then use this object to create and execute queries against the bound table. Note that [CloudTable] and related APIs belong to the [Microsoft.Azure.Cosmos.Table](/dotnet/api/microsoft.azure.cosmos.table) namespace.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-To return a specific entity by key, use a binding parameter that derives from [TableEntity]. The specific `TableName`, `PartitionKey`, and `RowKey` are used to try and get a specific entity from the table.
-
-To execute queries that return multiple entities, bind to an [`IQueryable<T>`] of a type that inherits from [TableEntity].
- ::: zone-end
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
public static MyTableData Run(
} ```
-# [C# Script](#tab/csharp-script)
-
-The following example shows a table output binding in a *function.json* file and [C# script](functions-reference-csharp.md) code that uses the binding. The function writes multiple table entities.
-
-Here's the *function.json* file:
-
-```json
-{
- "bindings": [
- {
- "name": "input",
- "type": "manualTrigger",
- "direction": "in"
- },
- {
- "tableName": "Person",
- "connection": "MyStorageConnectionAppSetting",
- "name": "tableBinding",
- "type": "table",
- "direction": "out"
- }
- ],
- "disabled": false
-}
-```
-
-The [attributes](#attributes) section explains these properties.
-
-Here's the C# script code:
-
-```csharp
-public static void Run(string input, ICollector<Person> tableBinding, ILogger log)
-{
- for (int i = 1; i < 10; i++)
- {
- log.LogInformation($"Adding Person entity {i}");
- tableBinding.Add(
- new Person() {
- PartitionKey = "Test",
- RowKey = i.ToString(),
- Name = "Name" + i.ToString() }
- );
- }
-
-}
-
-public class Person
-{
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public string Name { get; set; }
-}
-
-```
- ::: zone-end
def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
::: zone pivot="programming-language-csharp" ## Attributes
-Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file.
+Both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# libraries use attributes to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#table-output).
# [In-process](#tab/in-process)
In [C# class libraries](dotnet-isolated-process-guide.md), the `TableInputAttrib
|**PartitionKey** | The partition key of the table entity to write. | |**RowKey** | The row key of the table entity to write. | |**Connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
-
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-|||
-|**type** |Must be set to `table`. This property is set automatically when you create the binding in the Azure portal.|
-|**direction** | Must be set to `out`. This property is set automatically when you create the binding in the Azure portal. |
-|**name** | The variable name used in function code that represents the table or entity. Set to `$return` to reference the function return value.|
-|**tableName** |The name of the table to which to write.|
-|**partitionKey** |The partition key of the table entity to write. |
-|**rowKey** | The row key of the table entity to write. |
-|**connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
An in-process class library is a compiled C# function runs in the same process a
# [Isolated process](#tab/isolated-process)
-An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
-
-# [C# script](#tab/csharp-script)
-
-C# script is used primarily when creating C# functions in the Azure portal.
+An isolated worker process class library compiled C# function runs in a process isolated from the runtime.
Return a plain-old CLR object (POCO) with properties that can be mapped to the t
Functions version 1.x doesn't support isolated worker process.
-# [Azure Tables extension](#tab/table-api/csharp-script)
-
-The following types are supported for `out` parameters and return types:
-
-- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity`.
-- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity`.
-
-You can also bind to `TableClient` [from the Azure SDK](/dotnet/api/azure.data.tables.tableclient). You can then use that object to write to the table.
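-
-For illustration, a minimal C# script sketch (assuming a table output binding named `tableBinding` in *function.json*) that writes a single entity through `TableClient`:
-
-```csharp
-#r "Azure.Data.Tables"
-using System;
-using System.Threading.Tasks;
-using Azure.Data.Tables;
-using Microsoft.Extensions.Logging;
-
-public static async Task Run(string input, TableClient tableBinding, ILogger log)
-{
-    // AddEntityAsync inserts one entity into the bound table.
-    await tableBinding.AddEntityAsync(new TableEntity("Test", Guid.NewGuid().ToString())
-    {
-        ["Name"] = input
-    });
-    log.LogInformation("Added one entity.");
-}
-```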
-
-# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
-
-The following types are supported for `out` parameters and return types:
-
-- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`.
-- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`.
-
-You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table.
-
-# [Functions 1.x](#tab/functionsv1/csharp-script)
-
-The following types are supported for `out` parameters and return types:
-
-- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`.
-- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can accompany these properties by implementing `ITableEntity` or inheriting `TableEntity`.
-
-You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table.
- ::: zone-end
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Timer/TimerFunction.cs" range="11-17":::
-# [C# Script](#tab/csharp-script)
-
-The following example shows a timer trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence. The [`TimerInfo`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerInfo.cs) object is passed into the function.
-
-Here's the binding data in the *function.json* file:
-
-```json
-{
- "schedule": "0 */5 * * * *",
- "name": "myTimer",
- "type": "timerTrigger",
- "direction": "in"
-}
-```
-
-Here's the C# script code:
-
-```csharp
-public static void Run(TimerInfo myTimer, ILogger log)
-{
- if (myTimer.IsPastDue)
- {
- log.LogInformation("Timer is running late!");
- }
- log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}" );
-}
-```
- ::: zone-end ::: zone pivot="programming-language-java"
def main(mytimer: func.TimerRequest) -> None:
::: zone pivot="programming-language-csharp" ## Attributes
-[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [isolated worker process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function.
-
-C# script instead uses a function.json configuration file.
+[In-process](functions-dotnet-class-library.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerTriggerAttribute.cs) from [Microsoft.Azure.WebJobs.Extensions](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions) whereas [isolated worker process](dotnet-isolated-process-guide.md) C# library uses [TimerTriggerAttribute](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/extensions/Worker.Extensions.Timer/src/TimerTriggerAttribute.cs) from [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) to define the function. C# script instead uses a function.json configuration file as described in the [C# scripting guide](./functions-reference-csharp.md#timer-trigger).
# [In-process](#tab/in-process)
C# script instead uses a function.json configuration file.
|**RunOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity. when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **RunOnStartup** should rarely if ever be set to `true`, especially in production. | |**UseMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
-# [C# script](#tab/csharp-script)
-
-C# script uses a function.json file for configuration instead of attributes.
-
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
-
-|function.json property | Description|
-||-|
-|**type** | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
-|**name** | The name of the variable that represents the timer object in function code. |
-|**schedule**| A [CRON expression](#ncrontab-expressions) or a [TimeSpan](#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
-|**runOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **runOnStartup** should rarely if ever be set to `true`, especially in production. |
-|**useMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
- ::: zone-end
azure-functions Functions Bindings Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-twilio.md
You can add the extension to your project by explicitly installing the [NuGet pa
Unless otherwise noted, these examples are specific to version 2.x and later version of the Functions runtime. ::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
azure-functions Functions Bindings Warmup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md
The following considerations apply when using a warmup trigger:
<!--Optional intro text goes here, followed by the C# modes include.--> # [In-process](#tab/in-process)
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
The following table lists the .NET attributes for each binding type and the pack
> | Storage table | [`Microsoft.Azure.WebJobs.TableAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs), [`Microsoft.Azure.WebJobs.StorageAccountAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs) | | > | Twilio | [`Microsoft.Azure.WebJobs.TwilioSmsAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.Twilio"` |
+## Binding configuration and examples
+
+### Blob trigger
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `blobTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the blob in function code. |
+|**path** | The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](./functions-bindings-storage-blob-trigger.md#blob-name-patterns). |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](./functions-bindings-storage-blob-trigger.md#connections).|
++
+The following example shows a blob trigger binding in a *function.json* file and code that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ "name": "myBlob",
+ "type": "blobTrigger",
+ "direction": "in",
+ "path": "samples-workitems/{name}",
+ "connection":"MyStorageAccountAppSetting"
+ }
+ ]
+}
+```
+
+The string `{name}` in the blob trigger path `samples-workitems/{name}` creates a [binding expression](./functions-bindings-expressions-patterns.md) that you can use in function code to access the file name of the triggering blob. For more information, see [Blob name patterns](./functions-bindings-storage-blob-trigger.md#blob-name-patterns).
+
+Here's C# script code that binds to a `Stream`:
+
+```cs
+public static void Run(Stream myBlob, string name, ILogger log)
+{
+ log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
+}
+```
+
+Here's C# script code that binds to a `CloudBlockBlob`:
+
+```cs
+#r "Microsoft.WindowsAzure.Storage"
+
+using Microsoft.WindowsAzure.Storage.Blob;
+
+public static void Run(CloudBlockBlob myBlob, string name, ILogger log)
+{
+ log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name}\nURI:{myBlob.StorageUri}");
+}
+```
+
+### Blob input
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `blob`. |
+|**direction** | Must be set to `in`. |
+|**name** | The name of the variable that represents the blob in function code.|
+|**path** | The path to the blob. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](./functions-bindings-storage-blob-input.md#connections).|
+
+The following example shows blob input and output bindings in a *function.json* file and C# script code that uses the bindings. The function makes a copy of a text blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
+
+In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
+
+```json
+{
+ "bindings": [
+ {
+ "queueName": "myqueue-items",
+ "connection": "MyStorageConnectionAppSetting",
+ "name": "myQueueItem",
+ "type": "queueTrigger",
+ "direction": "in"
+ },
+ {
+ "name": "myInputBlob",
+ "type": "blob",
+ "path": "samples-workitems/{queueTrigger}",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "in"
+ },
+ {
+ "name": "myOutputBlob",
+ "type": "blob",
+ "path": "samples-workitems/{queueTrigger}-Copy",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+public static void Run(string myQueueItem, string myInputBlob, out string myOutputBlob, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+ myOutputBlob = myInputBlob;
+}
+```
+
+### Blob output
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `blob`. |
+|**direction** | Must be set to `out`. |
+|**name** | The name of the variable that represents the blob in function code.|
+|**path** | The path to the blob. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](./functions-bindings-storage-blob-output.md#connections).|
+
+The following example shows blob input and output bindings in a *function.json* file and C# script code that uses the bindings. The function makes a copy of a text blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
+
+In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
+
+```json
+{
+ "bindings": [
+ {
+ "queueName": "myqueue-items",
+ "connection": "MyStorageConnectionAppSetting",
+ "name": "myQueueItem",
+ "type": "queueTrigger",
+ "direction": "in"
+ },
+ {
+ "name": "myInputBlob",
+ "type": "blob",
+ "path": "samples-workitems/{queueTrigger}",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "in"
+ },
+ {
+ "name": "myOutputBlob",
+ "type": "blob",
+ "path": "samples-workitems/{queueTrigger}-Copy",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+public static void Run(string myQueueItem, string myInputBlob, out string myOutputBlob, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+ myOutputBlob = myInputBlob;
+}
+```
+
+### Queue trigger
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** |Must be set to `queueTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction**| In the *function.json* file only. Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that contains the queue item payload in the function code. |
+|**queueName** | The name of the queue to poll. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](./functions-bindings-storage-queue-trigger.md#connections).|
++
+The following example shows a queue trigger binding in a *function.json* file and C# script code that uses the binding. The function polls the `myqueue-items` queue and writes a log each time a queue item is processed.
+
+Here's the *function.json* file:
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ "type": "queueTrigger",
+ "direction": "in",
+ "name": "myQueueItem",
+ "queueName": "myqueue-items",
+ "connection":"MyStorageConnectionAppSetting"
+ }
+ ]
+}
+```
+
+Here's the C# script code:
+
+```csharp
+#r "Microsoft.WindowsAzure.Storage"
+
+using Microsoft.Extensions.Logging;
+using Microsoft.WindowsAzure.Storage.Queue;
+using System;
+
+public static void Run(CloudQueueMessage myQueueItem,
+ DateTimeOffset expirationTime,
+ DateTimeOffset insertionTime,
+ DateTimeOffset nextVisibleTime,
+ string queueTrigger,
+ string id,
+ string popReceipt,
+ int dequeueCount,
+ ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem.AsString}\n" +
+ $"queueTrigger={queueTrigger}\n" +
+ $"expirationTime={expirationTime}\n" +
+ $"insertionTime={insertionTime}\n" +
+ $"nextVisibleTime={nextVisibleTime}\n" +
+ $"id={id}\n" +
+ $"popReceipt={popReceipt}\n" +
+ $"dequeueCount={dequeueCount}");
+}
+```
+
+### Queue output
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** |Must be set to `queue`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the queue in function code. Set to `$return` to reference the function return value.|
+|**queueName** | The name of the queue. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](./functions-bindings-storage-queue-output.md#connections).|
+
+The following example shows an HTTP trigger binding in a *function.json* file and C# script code that uses the binding. The function creates a queue item with a **CustomQueueMessage** object payload for each HTTP request received.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "httpTrigger",
+ "direction": "in",
+ "authLevel": "function",
+ "name": "input"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ },
+ {
+ "type": "queue",
+ "direction": "out",
+ "name": "$return",
+ "queueName": "outqueue",
+ "connection": "MyStorageConnectionAppSetting"
+ }
+ ]
+}
+```
+
+Here's C# script code that creates a single queue message:
+
+```cs
+public class CustomQueueMessage
+{
+ public string PersonName { get; set; }
+ public string Title { get; set; }
+}
+
+public static CustomQueueMessage Run(CustomQueueMessage input, ILogger log)
+{
+ return input;
+}
+```
+
+You can send multiple messages at once by using an `ICollector` or `IAsyncCollector` parameter. Here's C# script code that sends multiple messages, one with the HTTP request data and one with hard-coded values:
+
+```cs
+public static void Run(
+ CustomQueueMessage input,
+ ICollector<CustomQueueMessage> myQueueItems,
+ ILogger log)
+{
+ myQueueItems.Add(input);
+ myQueueItems.Add(new CustomQueueMessage { PersonName = "You", Title = "None" });
+}
+```
+
+### Table input
+
+This section outlines support for the [Tables API version of the extension](./functions-bindings-storage-table.md?tabs=in-process%2Ctable-api) only.
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `table`. This property is set automatically when you create the binding in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the binding in the Azure portal. |
+|**name** | The name of the variable that represents the table or entity in function code. |
+|**tableName** | The name of the table.|
+|**partitionKey** | Optional. The partition key of the table entity to read. |
+|**rowKey** |Optional. The row key of the table entity to read. Can't be used with `take` or `filter`.|
+|**take** | Optional. The maximum number of entities to return. Can't be used with `rowKey`. |
+|**filter** | Optional. An OData filter expression for the entities to return from the table. Can't be used with `rowKey`.|
+|**connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](./functions-bindings-storage-table-input.md#connections). |
+
+The following example shows a table input binding in a *function.json* file and C# script code that uses the binding. The function uses a queue trigger to read a single table row.
+
+The *function.json* file specifies a `partitionKey` and a `rowKey`. The `rowKey` value `{queueTrigger}` indicates that the row key comes from the queue message string.
+
+```json
+{
+ "bindings": [
+ {
+ "queueName": "myqueue-items",
+ "connection": "MyStorageConnectionAppSetting",
+ "name": "myQueueItem",
+ "type": "queueTrigger",
+ "direction": "in"
+ },
+ {
+ "name": "personEntity",
+ "type": "table",
+ "tableName": "Person",
+ "partitionKey": "Test",
+ "rowKey": "{queueTrigger}",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "in"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```csharp
+#r "Azure.Data.Tables"
+using Microsoft.Extensions.Logging;
+using Azure.Data.Tables;
+
+public static void Run(string myQueueItem, Person personEntity, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+ log.LogInformation($"Name in Person entity: {personEntity.Name}");
+}
+
+public class Person : ITableEntity
+{
+ public string Name { get; set; }
+
+ public string PartitionKey { get; set; }
+ public string RowKey { get; set; }
+ public DateTimeOffset? Timestamp { get; set; }
+ public ETag ETag { get; set; }
+}
+```
+
+### Table output
+
+This section outlines support for the [Tables API version of the extension](./functions-bindings-storage-table.md?tabs=in-process%2Ctable-api) only.
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+|||
+|**type** |Must be set to `table`. This property is set automatically when you create the binding in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the table or entity. Set to `$return` to reference the function return value.|
+|**tableName** |The name of the table to which to write.|
+|**partitionKey** |The partition key of the table entity to write. |
+|**rowKey** | The row key of the table entity to write. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](./functions-bindings-storage-table-output.md#connections). |
+
+The following example shows a table output binding in a *function.json* file and C# script code that uses the binding. The function writes multiple table entities.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "name": "input",
+ "type": "manualTrigger",
+ "direction": "in"
+ },
+ {
+ "tableName": "Person",
+ "connection": "MyStorageConnectionAppSetting",
+ "name": "tableBinding",
+ "type": "table",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```csharp
+public static void Run(string input, ICollector<Person> tableBinding, ILogger log)
+{
+ for (int i = 1; i < 10; i++)
+ {
+ log.LogInformation($"Adding Person entity {i}");
+ tableBinding.Add(
+ new Person() {
+ PartitionKey = "Test",
+ RowKey = i.ToString(),
+ Name = "Name" + i.ToString() }
+ );
+ }
+
+}
+
+public class Person
+{
+ public string PartitionKey { get; set; }
+ public string RowKey { get; set; }
+ public string Name { get; set; }
+}
+
+```
+
+### Timer trigger
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the timer object in function code. |
+|**schedule**| A [CRON expression](./functions-bindings-timer.md#ncrontab-expressions) or a [TimeSpan](./functions-bindings-timer.md#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
+|**runOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **runOnStartup** should rarely if ever be set to `true`, especially in production. |
+|**useMonitor**| Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
+
+The following example shows a timer trigger binding in a *function.json* file and a C# script function that uses the binding. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence. The [`TimerInfo`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/TimerInfo.cs) object is passed into the function.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "schedule": "0 */5 * * * *",
+ "name": "myTimer",
+ "type": "timerTrigger",
+ "direction": "in"
+}
+```
+
+Here's the C# script code:
+
+```csharp
+public static void Run(TimerInfo myTimer, ILogger log)
+{
+ if (myTimer.IsPastDue)
+ {
+ log.LogInformation("Timer is running late!");
+ }
+ log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}" );
+}
+```
+
+### HTTP trigger
+
+The following table explains the trigger configuration properties that you set in the *function.json* file:
+
+|function.json property | Description|
+|||
+| **type** | Required - must be set to `httpTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the variable name used in function code for the request or request body. |
+| **authLevel** | Determines what keys, if any, need to be present on the request in order to invoke the function. For supported values, see [Authorization level](./functions-bindings-http-webhook-trigger.md#http-auth). |
+| **methods** | An array of the HTTP methods to which the function responds. If not specified, the function responds to all HTTP methods. See [customize the HTTP endpoint](./functions-bindings-http-webhook-trigger.md#customize-the-http-endpoint). |
+| **route** | Defines the route template, controlling to which request URLs your function responds. The default value if none is provided is `<functionname>`. For more information, see [customize the HTTP endpoint](./functions-bindings-http-webhook-trigger.md#customize-the-http-endpoint). |
+| **webHookType** | _Supported only for the version 1.x runtime._<br/><br/>Configures the HTTP trigger to act as a [webhook](https://en.wikipedia.org/wiki/Webhook) receiver for the specified provider. For supported values, see [WebHook type](./functions-bindings-http-webhook-trigger.md#webhook-type).|
+
+The following example shows a trigger binding in a *function.json* file and a C# script function that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request.
+
+Here's the *function.json* file:
+
+```json
+{
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ }
+ ]
+}
+```
+
+Here's C# script code that binds to `HttpRequest`:
+
+```cs
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string name = req.Query["name"];
+
+ string requestBody = String.Empty;
+ using (StreamReader streamReader = new StreamReader(req.Body))
+ {
+ requestBody = await streamReader.ReadToEndAsync();
+ }
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ name = name ?? data?.name;
+
+ return name != null
+ ? (ActionResult)new OkObjectResult($"Hello, {name}")
+ : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
+}
+```
+
+You can bind to a custom object instead of `HttpRequest`. This object is created from the body of the request and parsed as JSON. Similarly, a type can be passed to the HTTP response output binding and returned as the response body, along with a `200` status code.
+
+```csharp
+using System.Net;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+
+public static IActionResult Run(Person person, ILogger log)
+{
+ return person.Name != null
+ ? (ActionResult)new OkObjectResult($"Hello, {person.Name}")
+ : new BadRequestObjectResult("Please pass an instance of Person.");
+}
+
+public class Person {
+ public string Name {get; set;}
+}
+```
+
+### HTTP output
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|Property |Description |
+|||
+| **type** |Must be set to `http`. |
+| **direction** | Must be set to `out`. |
+| **name** | The variable name used in function code for the response, or `$return` to use the return value. |
+
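+For example, the HTTP trigger sample earlier in this article declares its response binding as `$return` in *function.json*; whatever the C# script function returns (an `IActionResult` in that sample) is written to this binding as the HTTP response:
+
+```json
+{
+    "name": "$return",
+    "type": "http",
+    "direction": "out"
+}
+```
+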
+### Event Hubs trigger
+
+The following table explains the trigger configuration properties that you set in the *function.json* file:
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `eventHubTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the event item in function code. |
+|**eventHubName** | Functions 2.x and higher. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. Can be referenced via [app settings](./functions-bindings-expressions-patterns.md#binding-expressionsapp-settings) `%eventHubName%`. In version 1.x, this property is named `path`. |
+|**consumerGroup** |An optional property that sets the [consumer group](../event-hubs/event-hubs-features.md#event-consumers) used to subscribe to events in the hub. If omitted, the `$Default` consumer group is used. |
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. See [Connections](./functions-bindings-event-hubs-trigger.md#connections).|
++
+The following example shows an Event Hubs trigger binding in a *function.json* file and a C# script function that uses the binding. The function logs the message body of the Event Hubs trigger.
+
+The following examples show Event Hubs binding data in the *function.json* file for Functions runtime version 2.x and later versions.
+
+```json
+{
+ "type": "eventHubTrigger",
+ "name": "myEventHubMessage",
+ "direction": "in",
+ "eventHubName": "MyEventHub",
+ "connection": "myEventHubReadConnectionAppSetting"
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System;
+
+public static void Run(string myEventHubMessage, TraceWriter log)
+{
+ log.Info($"C# function triggered to process a message: {myEventHubMessage}");
+}
+```
+
+To get access to event metadata in function code, bind to an [EventData](/dotnet/api/microsoft.servicebus.messaging.eventdata) object. You can also access the same properties by using binding expressions in the method signature. The following example shows both ways to get the same data:
+
+```cs
+#r "Microsoft.Azure.EventHubs"
+
+using System.Text;
+using System;
+using Microsoft.ServiceBus.Messaging;
+using Microsoft.Azure.EventHubs;
+
+public void Run(EventData myEventHubMessage,
+ DateTime enqueuedTimeUtc,
+ Int64 sequenceNumber,
+ string offset,
+ TraceWriter log)
+{
+ log.Info($"Event: {Encoding.UTF8.GetString(myEventHubMessage.Body)}");
+ log.Info($"EnqueuedTimeUtc={myEventHubMessage.SystemProperties.EnqueuedTimeUtc}");
+ log.Info($"SequenceNumber={myEventHubMessage.SystemProperties.SequenceNumber}");
+ log.Info($"Offset={myEventHubMessage.SystemProperties.Offset}");
+
+ // Metadata accessed by using binding expressions
+ log.Info($"EnqueuedTimeUtc={enqueuedTimeUtc}");
+ log.Info($"SequenceNumber={sequenceNumber}");
+ log.Info($"Offset={offset}");
+}
+```
+
+To receive events in a batch, make `string` or `EventData` an array:
+
+```cs
+public static void Run(string[] eventHubMessages, TraceWriter log)
+{
+ foreach (var message in eventHubMessages)
+ {
+ log.Info($"C# function triggered to process a message: {message}");
+ }
+}
+```
+
+### Event Hubs output
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+|||
+|**type** | Must be set to `eventHub`. |
+|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the event. |
+|**eventHubName** | Functions 2.x and higher. The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. In Functions 1.x, this property is named `path`.|
+|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](./functions-bindings-event-hubs-output.md#connections).|
+
+The following example shows an Event Hubs output binding in a *function.json* file and a C# script function that uses the binding. The function writes a message to an event hub.
+
+The following examples show Event Hubs binding data in the *function.json* file for Functions runtime version 2.x and later versions.
+
+```json
+{
+ "type": "eventHub",
+ "name": "outputEventHubMessage",
+ "eventHubName": "myeventhub",
+ "connection": "MyEventHubSendAppSetting",
+ "direction": "out"
+}
+```
+
+Here's C# script code that creates one message:
+
+```cs
+using System;
+using Microsoft.Extensions.Logging;
+
+public static void Run(TimerInfo myTimer, out string outputEventHubMessage, ILogger log)
+{
+ String msg = $"TimerTriggerCSharp1 executed at: {DateTime.Now}";
+ log.LogInformation(msg);
+ outputEventHubMessage = msg;
+}
+```
+
+Here's C# script code that creates multiple messages:
+
+```cs
+using System;
+using Microsoft.Extensions.Logging;
+
+public static void Run(TimerInfo myTimer, ICollector<string> outputEventHubMessage, ILogger log)
+{
+ string message = $"Message created at: {DateTime.Now}";
+ log.LogInformation(message);
+ outputEventHubMessage.Add("1 " + message);
+ outputEventHubMessage.Add("2 " + message);
+}
+```
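+
+You can also write events asynchronously by binding to `IAsyncCollector<string>`. The following is a minimal sketch, not from the original article, that follows the same pattern as the examples above:
+
+```cs
+using System;
+using System.Threading.Tasks;
+using Microsoft.Extensions.Logging;
+
+public static async Task Run(TimerInfo myTimer, IAsyncCollector<string> outputEventHubMessage, ILogger log)
+{
+    string message = $"Message created at: {DateTime.Now}";
+    log.LogInformation(message);
+    // Each AddAsync call queues one event for the output binding.
+    await outputEventHubMessage.AddAsync("1 " + message);
+    await outputEventHubMessage.AddAsync("2 " + message);
+}
+```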
+
+### Event Grid trigger
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file. There are no constructor parameters or properties to set in the `EventGridTrigger` attribute.
+
+|function.json property |Description|
+|||
+| **type** | Required - must be set to `eventGridTrigger`. |
+| **direction** | Required - must be set to `in`. |
+| **name** | Required - the variable name used in function code for the parameter that receives the event data. |
+
+The following example shows an Event Grid trigger defined in the *function.json* file.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "eventGridTrigger",
+ "name": "eventGridEvent",
+ "direction": "in"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's an example of a C# script function that uses an `EventGridEvent` binding parameter:
+
+```csharp
+#r "Azure.Messaging.EventGrid"
+using Azure.Messaging.EventGrid;
+using Microsoft.Extensions.Logging;
+
+public static void Run(EventGridEvent eventGridEvent, ILogger log)
+{
+ log.LogInformation(eventGridEvent.Data.ToString());
+}
+```
+
+Here's an example of a C# script function that uses a `JObject` binding parameter:
+
+```cs
+#r "Newtonsoft.Json"
+
+using Newtonsoft.Json;
+using Newtonsoft.Json.Linq;
+
+public static void Run(JObject eventGridEvent, TraceWriter log)
+{
+ log.Info(eventGridEvent.ToString(Formatting.Indented));
+}
+```
+
+### Event Grid output
+
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `eventGrid`. |
+|**direction** | Must be set to `out`. This parameter is set automatically when you create the binding in the Azure portal. |
+|**name** | The variable name used in function code that represents the event. |
+|**topicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. |
+|**topicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
+
+The following example shows the Event Grid output binding data in the *function.json* file.
+
+```json
+{
+ "type": "eventGrid",
+ "name": "outputEvent",
+ "topicEndpointUri": "MyEventGridTopicUriSetting",
+ "topicKeySetting": "MyEventGridTopicKeySetting",
+ "direction": "out"
+}
+```
+
+Here's C# script code that creates one event:
+
+```cs
+#r "Microsoft.Azure.EventGrid"
+using System;
+using Microsoft.Azure.EventGrid.Models;
+using Microsoft.Extensions.Logging;
+
+public static void Run(TimerInfo myTimer, out EventGridEvent outputEvent, ILogger log)
+{
+ outputEvent = new EventGridEvent("message-id", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0");
+}
+```
+
+Here's C# script code that creates multiple events:
+
+```cs
+#r "Microsoft.Azure.EventGrid"
+using System;
+using Microsoft.Azure.EventGrid.Models;
+using Microsoft.Extensions.Logging;
+
+public static void Run(TimerInfo myTimer, ICollector<EventGridEvent> outputEvent, ILogger log)
+{
+ outputEvent.Add(new EventGridEvent("message-id-1", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"));
+ outputEvent.Add(new EventGridEvent("message-id-2", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"));
+}
+```
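+
+An asynchronous variant is also possible by binding to `IAsyncCollector<EventGridEvent>`. The following is a hedged sketch, not from the original article, reusing the same `EventGridEvent` constructor as the example above:
+
+```cs
+#r "Microsoft.Azure.EventGrid"
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.EventGrid.Models;
+using Microsoft.Extensions.Logging;
+
+public static async Task Run(TimerInfo myTimer, IAsyncCollector<EventGridEvent> outputEvent, ILogger log)
+{
+    // Each AddAsync call queues one event for publishing to the custom topic.
+    await outputEvent.AddAsync(new EventGridEvent("message-id-1", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"));
+    await outputEvent.AddAsync(new EventGridEvent("message-id-2", "subject-name", "event-data", "event-type", DateTime.UtcNow, "1.0"));
+}
+```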
+
+### Service Bus trigger
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `serviceBusTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
+|**name** | The name of the variable that represents the queue or topic message in function code. |
+|**queueName**| Name of the queue to monitor. Set only if monitoring a queue, not for a topic.|
+|**topicName**| Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
+|**subscriptionName**| Name of the subscription to monitor. Set only if monitoring a topic, not for a queue.|
+|**connection**| The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](./functions-bindings-service-bus-trigger.md#connections).|
+|**accessRights**| Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to `listen`. Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
+|**isSessionsEnabled**| `true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
+|**autoComplete**| `true` when the trigger should automatically complete the message after successful processing; `false` when your function code settles the message itself.<br/><br/>Setting to `false` is only supported in C#.<br/><br/>If set to `true`, the trigger completes the message automatically if the function execution completes successfully, and abandons the message otherwise.<br/><br/>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or deadletter the message (a sketch follows the trigger example below). If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br/><br/>This property is available only in Azure Functions 2.x and higher. |
+
+The following example shows a Service Bus trigger binding in a *function.json* file and a C# script function that uses the binding. The function reads message metadata and logs a Service Bus queue message.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+"bindings": [
+ {
+ "queueName": "testqueue",
+ "connection": "MyServiceBusConnection",
+ "name": "myQueueItem",
+ "type": "serviceBusTrigger",
+ "direction": "in"
+ }
+],
+"disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System;
+
+public static void Run(string myQueueItem,
+ Int32 deliveryCount,
+ DateTime enqueuedTimeUtc,
+ string messageId,
+ TraceWriter log)
+{
+ log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
+
+ log.Info($"EnqueuedTimeUtc={enqueuedTimeUtc}");
+ log.Info($"DeliveryCount={deliveryCount}");
+ log.Info($"MessageId={messageId}");
+}
+```
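+
+When `autoComplete` is `false`, your code must settle the message itself, as noted in the table above. The following is a minimal sketch, not from the original article, assuming the Functions 2.x+ Service Bus extension, where you can bind to the `Message` and `MessageReceiver` types:
+
+```cs
+#r "Microsoft.Azure.ServiceBus"
+
+using System;
+using System.Text;
+using System.Threading.Tasks;
+using Microsoft.Azure.ServiceBus;
+using Microsoft.Azure.ServiceBus.Core;
+using Microsoft.Extensions.Logging;
+
+public static async Task Run(Message message, MessageReceiver messageReceiver, ILogger log)
+{
+    try
+    {
+        log.LogInformation($"Processing message: {Encoding.UTF8.GetString(message.Body)}");
+
+        // Explicitly complete the message because autoComplete is false.
+        await messageReceiver.CompleteAsync(message.SystemProperties.LockToken);
+    }
+    catch (Exception ex)
+    {
+        log.LogError(ex, "Processing failed; abandoning message.");
+
+        // Return the message to the queue so it can be retried.
+        await messageReceiver.AbandonAsync(message.SystemProperties.LockToken);
+    }
+}
+```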
+
+### Service Bus output
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** |Must be set to `serviceBus`. This property is set automatically when you create the binding in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the binding in the Azure portal. |
+|**name** | The name of the variable that represents the queue or topic message in function code. Set to `$return` to reference the function return value (a sketch follows the output examples below). |
+|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic.|
+|**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
+|**connection**|The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](./functions-bindings-service-bus-output.md#connections).|
+|**accessRights** (v1 only)|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.|
+
+The following example shows a Service Bus output binding in a *function.json* file and a C# script function that uses the binding. The function uses a timer trigger to send a queue message every 15 seconds.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "schedule": "0/15 * * * * *",
+ "name": "myTimer",
+ "runsOnStartup": true,
+ "type": "timerTrigger",
+ "direction": "in"
+ },
+ {
+ "name": "outputSbQueue",
+ "type": "serviceBus",
+ "queueName": "testqueue",
+ "connection": "MyServiceBusConnection",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's C# script code that creates a single message:
+
+```cs
+using System;
+using Microsoft.Extensions.Logging;
+
+public static void Run(TimerInfo myTimer, ILogger log, out string outputSbQueue)
+{
+ string message = $"Service Bus queue message created at: {DateTime.Now}";
+ log.LogInformation(message);
+ outputSbQueue = message;
+}
+```
+
+Here's C# script code that creates multiple messages:
+
+```cs
+using System;
+using System.Threading.Tasks;
+using Microsoft.Extensions.Logging;
+
+public static async Task Run(TimerInfo myTimer, ILogger log, IAsyncCollector<string> outputSbQueue)
+{
+ string message = $"Service Bus queue messages created at: {DateTime.Now}";
+ log.LogInformation(message);
+ await outputSbQueue.AddAsync("1 " + message);
+ await outputSbQueue.AddAsync("2 " + message);
+}
+```
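+
+As the table above notes, setting `name` to `$return` lets you use the function's return value as the output message. Here's a minimal sketch of that pattern; the `$return` binding configuration is an assumption and isn't shown in the *function.json* example above:
+
+```cs
+using System;
+using Microsoft.Extensions.Logging;
+
+// Assumes the Service Bus output binding's "name" property is set to "$return" in function.json.
+public static string Run(TimerInfo myTimer, ILogger log)
+{
+    string message = $"Service Bus queue message created at: {DateTime.Now}";
+    log.LogInformation(message);
+    return message;
+}
+```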
+
+### Cosmos DB trigger
+
+This section outlines support for [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only.
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
++
+The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are added or modified.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "type": "cosmosDBTrigger",
+ "name": "documents",
+ "direction": "in",
+ "leaseContainerName": "leases",
+ "connection": "<connection-app-setting>",
+ "databaseName": "Tasks",
+ "containerName": "Items",
+ "createLeaseContainerIfNotExists": true
+}
+```
+
+Here's the C# script code:
+
+```cs
+ using System;
+ using System.Collections.Generic;
+ using Microsoft.Extensions.Logging;
+
+ // Customize the model with your own desired properties
+ public class ToDoItem
+ {
+ public string id { get; set; }
+ public string Description { get; set; }
+ }
+
+ public static void Run(IReadOnlyList<ToDoItem> documents, ILogger log)
+ {
+ log.LogInformation("Documents modified " + documents.Count);
+ log.LogInformation("First document Id " + documents[0].id);
+ }
+```
+
+### Cosmos DB input
+
+This section outlines support for [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only.
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
++
+This section contains the following examples:
+
+* [Queue trigger, look up ID from string](#queue-trigger-look-up-id-from-string-c-script)
+* [Queue trigger, get multiple docs, using SqlQuery](#queue-trigger-get-multiple-docs-using-sqlquery-c-script)
+* [HTTP trigger, look up ID from query string](#http-trigger-look-up-id-from-query-string-c-script)
+* [HTTP trigger, look up ID from route data](#http-trigger-look-up-id-from-route-data-c-script)
+* [HTTP trigger, get multiple docs, using SqlQuery](#http-trigger-get-multiple-docs-using-sqlquery-c-script)
+* [HTTP trigger, get multiple docs, using DocumentClient](#http-trigger-get-multiple-docs-using-documentclient-c-script)
+
+The HTTP trigger examples refer to a simple `ToDoItem` type:
+
+```cs
+namespace CosmosDBSamplesV2
+{
+ public class ToDoItem
+ {
+ public string Id { get; set; }
+ public string Description { get; set; }
+ }
+}
+```
+
+<a id="queue-trigger-look-up-id-from-string-c-script"></a>
+
+#### Queue trigger, look up ID from string
+
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a C# script function that uses the binding. The function reads a single document and updates the document's text value.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "inputDocument",
+ "type": "cosmosDB",
+ "databaseName": "MyDatabase",
+ "collectionName": "MyCollection",
+ "id" : "{queueTrigger}",
+ "partitionKey": "{partition key value}",
+ "connectionStringSetting": "MyAccount_COSMOSDB",
+ "direction": "in"
+}
+```
+
+Here's the C# script code:
+
+```cs
+ using System;
+
+ // Change input document contents using Azure Cosmos DB input binding
+ public static void Run(string myQueueItem, dynamic inputDocument)
+ {
+ inputDocument.text = "This has changed.";
+ }
+```
+
+<a id="queue-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
+
+#### Queue trigger, get multiple docs, using SqlQuery
+
+The following example shows an Azure Cosmos DB input binding in a *function.json* file and a C# script function that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
+
+The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "documents",
+ "type": "cosmosDB",
+ "direction": "in",
+ "databaseName": "MyDb",
+ "collectionName": "MyCollection",
+ "sqlQuery": "SELECT * from c where c.departmentId = {departmentId}",
+ "connectionStringSetting": "CosmosDBConnection"
+}
+```
+
+Here's the C# script code:
+
+```csharp
+ public static void Run(QueuePayload myQueueItem, IEnumerable<dynamic> documents)
+ {
+ foreach (var doc in documents)
+ {
+ // operate on each document
+ }
+ }
+
+ public class QueuePayload
+ {
+ public string departmentId { get; set; }
+ }
+```
+
+<a id="http-trigger-look-up-id-from-query-string-c-script"></a>
+
+#### HTTP trigger, look up ID from query string
+
+The following example shows a C# script function that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "toDoItem",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connectionStringSetting": "CosmosDBConnection",
+ "direction": "in",
+ "Id": "{Query.id}",
+ "PartitionKey" : "{Query.partitionKeyValue}"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System.Net;
+using Microsoft.Extensions.Logging;
+
+public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ if (toDoItem == null)
+ {
+ log.LogInformation($"ToDo item not found");
+ }
+ else
+ {
+ log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
+<a id="http-trigger-look-up-id-from-route-data-c-script"></a>
+
+#### HTTP trigger, look up ID from route data
+
+The following example shows a C# script function that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ],
+ "route":"todoitems/{partitionKeyValue}/{id}"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "toDoItem",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connectionStringSetting": "CosmosDBConnection",
+ "direction": "in",
+ "id": "{id}",
+ "partitionKey": "{partitionKeyValue}"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System.Net;
+using Microsoft.Extensions.Logging;
+
+public static HttpResponseMessage Run(HttpRequestMessage req, ToDoItem toDoItem, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ if (toDoItem == null)
+ {
+ log.LogInformation($"ToDo item not found");
+ }
+ else
+ {
+ log.LogInformation($"Found ToDo item, Description={toDoItem.Description}");
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
+<a id="http-trigger-get-multiple-docs-using-sqlquery-c-script"></a>
+
+#### HTTP trigger, get multiple docs, using SqlQuery
+
+The following example shows a C# script function that retrieves a list of documents. The function is triggered by an HTTP request. The query is specified in the `SqlQuery` attribute property.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "toDoItems",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connectionStringSetting": "CosmosDBConnection",
+ "direction": "in",
+ "sqlQuery": "SELECT top 2 * FROM c order by c._ts desc"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System.Net;
+using Microsoft.Extensions.Logging;
+
+public static HttpResponseMessage Run(HttpRequestMessage req, IEnumerable<ToDoItem> toDoItems, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ foreach (ToDoItem toDoItem in toDoItems)
+ {
+ log.LogInformation(toDoItem.Description);
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
+
+<a id="http-trigger-get-multiple-docs-using-documentclient-c-script"></a>
+
+#### HTTP trigger, get multiple docs, using DocumentClient
+
+The following example shows a C# script function that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `DocumentClient` instance provided by the Azure Cosmos DB binding to read a list of documents. The `DocumentClient` instance could also be used for write operations.
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "client",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connectionStringSetting": "CosmosDBConnection",
+ "direction": "inout"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+#r "Microsoft.Azure.Documents.Client"
+
+using System.Net;
+using Microsoft.Azure.Documents.Client;
+using Microsoft.Azure.Documents.Linq;
+using Microsoft.Extensions.Logging;
+
+public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, DocumentClient client, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoItems", "Items");
+ string searchterm = req.GetQueryNameValuePairs()
+ .FirstOrDefault(q => string.Compare(q.Key, "searchterm", true) == 0)
+ .Value;
+
+ if (searchterm == null)
+ {
+ return req.CreateResponse(HttpStatusCode.NotFound);
+ }
+
+ log.LogInformation($"Searching for word: {searchterm} using Uri: {collectionUri.ToString()}");
+ IDocumentQuery<ToDoItem> query = client.CreateDocumentQuery<ToDoItem>(collectionUri)
+ .Where(p => p.Description.Contains(searchterm))
+ .AsDocumentQuery();
+
+ while (query.HasMoreResults)
+ {
+ foreach (ToDoItem result in await query.ExecuteNextAsync())
+ {
+ log.LogInformation(result.Description);
+ }
+ }
+ return req.CreateResponse(HttpStatusCode.OK);
+}
+```
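+
+As mentioned above, the same `DocumentClient` instance can also be used for writes. The following fragment is a hedged sketch, not from the original article, showing a hypothetical queue-triggered function that upserts a `ToDoItem` into the same collection:
+
+```cs
+#r "Microsoft.Azure.Documents.Client"
+
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Documents.Client;
+using Microsoft.Extensions.Logging;
+
+public static async Task Run(string myQueueItem, DocumentClient client, ILogger log)
+{
+    Uri collectionUri = UriFactory.CreateDocumentCollectionUri("ToDoItems", "Items");
+
+    // Upsert a document using the client provided by the binding (direction "inout").
+    var item = new ToDoItem { Id = Guid.NewGuid().ToString(), Description = myQueueItem };
+    await client.UpsertDocumentAsync(collectionUri, item);
+
+    log.LogInformation($"Upserted item {item.Id}");
+}
+```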
+
+### Cosmos DB output
+
+This section outlines support for [version 4.x+ of the extension](./functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4) only.
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
++
+This section contains the following examples:
+
+* [Queue trigger, write one doc](#queue-trigger-write-one-doc-c-script)
+* [Queue trigger, write docs using IAsyncCollector](#queue-trigger-write-docs-using-iasynccollector-c-script)
+
+<a id="queue-trigger-write-one-doc-c-script"></a>
+
+#### Queue trigger, write one doc
+
+The following example shows an Azure Cosmos DB output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function uses a queue input binding for a queue that receives JSON in the following format:
+
+```json
+{
+ "name": "John Henry",
+ "employeeId": "123456",
+ "address": "A town nearby"
+}
+```
+
+The function creates Azure Cosmos DB documents in the following format for each record:
+
+```json
+{
+ "id": "John Henry-123456",
+ "name": "John Henry",
+ "employeeId": "123456",
+ "address": "A town nearby"
+}
+```
+
+Here's the binding data in the *function.json* file:
+
+```json
+{
+ "name": "employeeDocument",
+ "type": "cosmosDB",
+ "databaseName": "MyDatabase",
+ "collectionName": "MyCollection",
+ "createIfNotExists": true,
+ "connectionStringSetting": "MyAccount_COSMOSDB",
+ "direction": "out"
+}
+```
+
+Here's the C# script code:
+
+```cs
+ #r "Newtonsoft.Json"
+
+ using Microsoft.Azure.WebJobs.Host;
+ using Newtonsoft.Json.Linq;
+ using Microsoft.Extensions.Logging;
+
+ public static void Run(string myQueueItem, out object employeeDocument, ILogger log)
+ {
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+
+ dynamic employee = JObject.Parse(myQueueItem);
+
+ employeeDocument = new {
+ id = employee.name + "-" + employee.employeeId,
+ name = employee.name,
+ employeeId = employee.employeeId,
+ address = employee.address
+ };
+ }
+```
+
+<a id="queue-trigger-write-docs-using-iasynccollector-c-script"></a>
+
+#### Queue trigger, write docs using IAsyncCollector
+
+To create multiple documents, you can bind to `ICollector<T>` or `IAsyncCollector<T>` where `T` is one of the supported types.
+
+This example refers to a simple `ToDoItem` type:
+
+```cs
+namespace CosmosDBSamplesV2
+{
+ public class ToDoItem
+ {
+ public string id { get; set; }
+ public string Description { get; set; }
+ }
+}
+```
+
+Here's the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "name": "toDoItemsIn",
+ "type": "queueTrigger",
+ "direction": "in",
+ "queueName": "todoqueueforwritemulti",
+ "connectionStringSetting": "AzureWebJobsStorage"
+ },
+ {
+ "type": "cosmosDB",
+ "name": "toDoItemsOut",
+ "databaseName": "ToDoItems",
+ "collectionName": "Items",
+ "connectionStringSetting": "CosmosDBConnection",
+ "direction": "out"
+ }
+ ],
+ "disabled": false
+}
+```
+
+Here's the C# script code:
+
+```cs
+using System;
+using Microsoft.Extensions.Logging;
+
+public static async Task Run(ToDoItem[] toDoItemsIn, IAsyncCollector<ToDoItem> toDoItemsOut, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed {toDoItemsIn?.Length} items");
+
+ foreach (ToDoItem toDoItem in toDoItemsIn)
+ {
+ log.LogInformation($"Description={toDoItem.Description}");
+ await toDoItemsOut.AddAsync(toDoItem);
+ }
+}
+```
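+
+For comparison, here's a minimal sketch, not from the original article, of the synchronous `ICollector<T>` variant mentioned above, using the same bindings:
+
+```cs
+using System;
+using Microsoft.Extensions.Logging;
+
+public static void Run(ToDoItem[] toDoItemsIn, ICollector<ToDoItem> toDoItemsOut, ILogger log)
+{
+    log.LogInformation($"C# Queue trigger function processed {toDoItemsIn?.Length} items");
+
+    foreach (ToDoItem toDoItem in toDoItemsIn)
+    {
+        // Each Add call stages a document; the documents are written when the function completes.
+        toDoItemsOut.Add(toDoItem);
+    }
+}
+```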
+ ## Next steps > [!div class="nextstepaction"]
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
To learn more about specific language version support policy timeline, visit the
|Language | Configuration guides | |--|--|
-|C# (class library) |[link](./functions-dotnet-class-library.md#supported-versions)|
+|C# (in-process model) |[link](./functions-dotnet-class-library.md#supported-versions)|
+|C# (isolated worker model) |[link](./dotnet-isolated-process-guide.md#supported-versions)|
|Node |[link](./functions-reference-node.md#setting-the-node-version)| |PowerShell |[link](./functions-reference-powershell.md#changing-the-powershell-version)| |Python |[link](./functions-reference-python.md#python-version)|
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
zone_pivot_groups: programming-languages-set-functions
-# Migrate apps from Azure Functions version 1.x to version 4.x
+# <a name="top"></a>Migrate apps from Azure Functions version 1.x to version 4.x
::: zone pivot="programming-language-java"+ > [!IMPORTANT] > Java isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your Java app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above. + ::: zone-end+ ::: zone pivot="programming-language-typescript"+ > [!IMPORTANT] > TypeScript isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your TypeScript app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above. + ::: zone-end+ ::: zone pivot="programming-language-powershell"+ > [!IMPORTANT] > PowerShell isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your PowerShell app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above. + ::: zone-end+ ::: zone pivot="programming-language-python"+ > [!IMPORTANT]
-> Python isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your Python app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+> Python isn't supported by version 1.x of the Azure Functions runtime. Perhaps you're instead looking to [migrate your Python app from version 3.x to version 4.x](./migrate-version-3-version-4.md). If you're migrating a version 1.x function app, select either C# or JavaScript above.
+ ::: zone-end++
+This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
++ ::: zone pivot="programming-language-csharp"
-If you're running on version 1.x of the Azure Functions runtime, it's likely because your C# app requires .NET Framework 2.1. Version 4.x of the runtime now lets you run .NET Framework 4.8 apps. At this point, you should consider migrating your version 1.x function apps to run on version 4.x. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md).
-Migrating a C# function app from version 1.x to version 4.x of the Functions runtime requires you to make changes to your project code. Many of these changes are a result of changes in the C# language and .NET APIs. JavaScript apps generally don't require code changes to migrate.
+## Choose your target .NET version
+
+On version 1.x of the Functions runtime, your C# function app targets .NET Framework.
-You can upgrade your C# project to one of the following versions of .NET, all of which can run on Functions version 4.x:
-| .NET version | Process model<sup>*</sup> |
-| | | |
-| .NET 7 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
-| .NET 6 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
-| .NET 6 | [In-process](./functions-dotnet-class-library.md) |
-| .NET&nbsp;Framework&nbsp;4.8 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
+> [!TIP]
+> **Unless your app depends on a library or API only available to .NET Framework, we recommend upgrading to .NET 6 on the isolated worker model.** Many apps on version 1.x target .NET Framework only because that is what was available when they were created. Additional capabilities are available in more recent versions of .NET, and if your app is not forced to stay on .NET Framework due to a dependency, you should upgrade.
+>
+> Migrating to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. The [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
-<sup>*</sup> [In-process execution](./functions-dotnet-class-library.md) is only supported for Long Term Support (LTS) releases of .NET. Non-LTS releases and .NET Framework require you to run in an [isolated worker process](./dotnet-isolated-process-guide.md). For a feature and functionality comparison between the two process models, see [Differences between in-process and isolate worker process .NET Azure Functions](./dotnet-isolated-in-process-differences.md).
::: zone-end+ ::: zone pivot="programming-language-javascript,programming-language-csharp"
-This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime.
## Prepare for migration
Before you upgrade your app to version 4.x of the Functions runtime, you should
* Consider using a [staging slot](functions-deployment-slots.md) to test and verify your app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#upgrade-using-slots). ::: zone-end ::: zone pivot="programming-language-csharp"+ ## Update your project files The following sections describe the updates you must make to your C# project files to be able to run on one of the supported versions of .NET in Functions version 4.x. The updates shown are ones common to most projects. Your project code may require updates not mentioned in this article, especially when using custom NuGet packages.
+Migrating a C# function app from version 1.x to version 4.x of the Functions runtime requires you to make changes to your project code. Many of these changes are a result of changes in the C# language and .NET APIs.
+ Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process).
+> [!TIP]
+> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+ ### .csproj file The following example is a .csproj project file that runs on version 1.x:
In version 2.x, the following changes were made:
> [!div class="nextstepaction"] > [Learn more about Functions versions](functions-versions.md)+
+[.NET Upgrade Assistant]: /dotnet/core/porting/upgrade-assistant-overview
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
zone_pivot_groups: programming-languages-set-functions
Azure Functions version 4.x is highly backwards compatible to version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md). > [!IMPORTANT]
-> Beginning on December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support.
+> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support.
>
-> After the deadline, function apps can be created and deployed from your CI/CD DevOps pipeline, and all existing apps continue to run without breaking changes. However, your apps are not eligible for new features, security patches, and performance optimizations. You'll get related service support once you upgraded them to version 4.x.
+> Apps using versions 2.x and 3.x can still be created and deployed from your CI/CD DevOps pipeline, and all existing apps continue to run without breaking changes. However, your apps are not eligible for new features, security patches, and performance optimizations. You'll only get related service support once you upgrade them to version 4.x.
>
->End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these older runtime versions. This requirement affects all [languages supported by Azure Functions](supported-languages.md).
+> End of support for these older runtime versions is due to the end of support for .NET Core 3.1, which they had as a core dependency. This requirement affects all [languages supported by Azure Functions](supported-languages.md).
>
->We highly recommend you migrating your function apps to version 4.x of the Functions runtime by following this article.
->
->Functions version 1.x is still supported for C# function apps that require the .NET Framework. Preview support is now available in Functions 4.x to [run C# functions on .NET Framework 4.8](dotnet-isolated-process-guide.md#supported-versions).
-
+> We highly recommend that you migrate your function apps to version 4.x of the Functions runtime by following this article.
This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top). ::: zone pivot="programming-language-csharp"
-## Choose your target .NET
-
-On version 3.x of the Functions runtime, your C# function app targets .NET Core 3.1. When you migrate your function app to version 4.x, you have the opportunity to choose the target version of .NET. You can upgrade your C# project to one of the following versions of .NET, all of which can run on Functions version 4.x:
+## Choose your target .NET version
-| .NET version | Process model<sup>*</sup> |
-| | | |
-| .NET 7 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
-| .NET 6 | [Isolated worker process](./dotnet-isolated-process-guide.md) |
-| .NET 6 | [In-process](./functions-dotnet-class-library.md) |
+On version 3.x of the Functions runtime, your C# function app targets .NET Core 3.1 using the in-process model or .NET 5 using the isolated worker model.
-<sup>*</sup> [In-process execution](./functions-dotnet-class-library.md) is only supported for Long Term Support (LTS) releases of .NET. Standard Terms Support (STS) releases and .NET Framework are supported .NET Azure functions [isolated worker process](./dotnet-isolated-process-guide.md).
> [!TIP]
-> On version 3.x of the Functions runtime, if you're on .NET 5, we recommend you upgrade to .NET 7. If you're on .NET Core 3.1, we recommend you upgrade to .NET 6 (in-process) for a quick upgrade path.
+> **If you're migrating from .NET 5 (on the isolated worker model), we recommend upgrading to .NET 6 on the isolated worker model.** This provides a quick upgrade path with the longest support window from .NET.
>
-> If you're looking for moving to a Long Term Support (LTS) .NET release, we recommend you upgrade to .NET 6 .
->
-> Migrating to .NET Isolated worker model to get all benefits provided by Azure Functions .NET isolated worker process. For more information about .NET isolated worker process advantages see [.NET isolated worker process enhancement](./dotnet-isolated-in-process-differences.md). For more information about .NET version support, see [Supported versions](./dotnet-isolated-process-guide.md#supported-versions).
+> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick upgrade path. However, you might also consider upgrading to .NET 6 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. The [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
-Upgrading from .NET Core 3.1 to .NET 6 running in-process requires minimal updates to your project and virtually no updates to code. Switching to the isolated worker process model requires you to make changes to your code, but provides the flexibility of being able to easily run on any future version of .NET. For a feature and functionality comparison between the two process models, see [Differences between in-process and isolate worker process .NET Azure Functions](./dotnet-isolated-in-process-differences.md).
::: zone-end ## Prepare for migration
Upgrading instructions are language dependent. If you don't see your language, c
Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process).
+> [!TIP]
+> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+ ### .csproj file The following example is a .csproj project file that uses .NET Core 3.1 on version 3.x:
If you don't see your programming language, go select it from the [top of the pa
> [!div class="nextstepaction"] > [Learn more about Functions versions](functions-versions.md)+
+[.NET Upgrade Assistant]: /dotnet/core/porting/upgrade-assistant-overview
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
To install DCR Config Generator:
1. Run the script:
- Option 1: Outputs **ready-to-deploy ARM template files** only, which creates the generated DCR in the specified subscription and resource group, when deployed.
-
- ```powershell
- .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath
- ```
- Option 2: Outputs **ready-to-deploy ARM template files** and **the DCR JSON files** separately for you to deploy via other means. You need to set the `GetDcrPayload` parameter.
-
- ```powershell
- .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath -GetDcrPayload
- ```
-
- **Parameters**
-
- | Parameter | Required? | Description |
- ||||
- | `SubscriptionId` | Yes | ID of the subscription that contains the target workspace. |
- | `ResourceGroupName` | Yes | Resource group that contains the target workspace. |
- | `WorkspaceName` | Yes | Name of the target workspace. |
- | `DCRName` | Yes | Name of the new DCR. |
- | `Location` | Yes | Region location for the new DCR. |
- | `GetDcrPayload` | No | When set, it generates additional DCR JSON files
- | `FolderPath` | No | Path in which to save the ARM template files and JSON files (optional). By default, Azure Monitor uses the current directory. |
-
+    Option 1: Outputs **ready-to-deploy ARM template files** only, which create the generated DCR in the specified subscription and resource group when deployed.
+
+ ```powershell
+ .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath
+ ```
+ Option 2: Outputs **ready-to-deploy ARM template files** and **the DCR JSON files** separately for you to deploy via other means. You need to set the `GetDcrPayload` parameter.
+
+ ```powershell
+ .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath -GetDcrPayload
+ ```
+
+ **Parameters**
+
+ | Parameter | Required? | Description |
+ ||||
+ | `SubscriptionId` | Yes | ID of the subscription that contains the target workspace. |
+ | `ResourceGroupName` | Yes | Resource group that contains the target workspace. |
+ | `WorkspaceName` | Yes | Name of the target workspace. |
+ | `DCRName` | Yes | Name of the new DCR. |
+ | `Location` | Yes | Region location for the new DCR. |
+    | `GetDcrPayload` | No | When set, it generates additional DCR JSON files. |
+ | `FolderPath` | No | Path in which to save the ARM template files and JSON files (optional). By default, Azure Monitor uses the current directory. |
+ 1. Review the output ARM template files. The script can produce two types of ARM template files, depending on the agent configuration in the target workspace:
- - Windows ARM template and parameter files - if the target workspace contains Windows performance counters or Windows events.
- - Linux ARM template and parameter files - if the target workspace contains Linux performance counters or Linux Syslog events.
-
- If the Log Analytics workspace wasn't [configured to collect data](./log-analytics-agent.md#data-collected) from connected agents, the generated files will be empty. This is a scenario in which the agent was connected to a Log Analytics workspace, but wasn't configured to send any data from the host machine.
+ - Windows ARM template and parameter files - if the target workspace contains Windows performance counters or Windows events.
+ - Linux ARM template and parameter files - if the target workspace contains Linux performance counters or Linux Syslog events.
+
+ If the Log Analytics workspace wasn't [configured to collect data](./log-analytics-agent.md#data-collected) from connected agents, the generated files will be empty. This is a scenario in which the agent was connected to a Log Analytics workspace, but wasn't configured to send any data from the host machine.
1. Deploy the generated ARM templates:
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
Overview of Azure Monitor Agent for Linux Syslog collection and supported RFC st
- Azure Monitor Agent ingests Syslog events via the previously mentioned socket and filters them based on facility or severity combination from data collection rule (DCR) configuration in `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Any `facility` or `severity` not present in the DCR is dropped. - Azure Monitor Agent attempts to parse events in accordance with **RFC3164** and **RFC5424**. It also knows how to parse the message formats listed on [this website](./azure-monitor-agent-overview.md#data-sources-and-destinations). - Azure Monitor Agent identifies the destination endpoint for Syslog events from the DCR configuration and attempts to upload the events.
- > [!NOTE]
- > Azure Monitor Agent uses local persistency by default. All events received from `rsyslog` or `syslog-ng` are queued in `/var/opt/microsoft/azuremonitoragent/events` if they fail to be uploaded.
+ > [!NOTE]
+ > Azure Monitor Agent uses local persistency by default. All events received from `rsyslog` or `syslog-ng` are queued in `/var/opt/microsoft/azuremonitoragent/events` if they fail to be uploaded.
## Issues
If you're sending a high log volume through rsyslog and your system is set up to
1. For example, to remove `local4` events from being logged at `/var/log/syslog` or `/var/log/messages`, change this line in `/etc/rsyslog.d/50-default.conf` from this snippet:
- ```config
- *.*;auth,authpriv.none -/var/log/syslog
- ```
+ ```config
+ *.*;auth,authpriv.none -/var/log/syslog
+ ```
- To this snippet (add `local4.none;`):
+ To this snippet (add `local4.none;`):
- ```config
- *.*;local4.none;auth,authpriv.none -/var/log/syslog
- ```
+ ```config
+ *.*;local4.none;auth,authpriv.none -/var/log/syslog
+ ```
1. `sudo systemctl restart rsyslog`
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorLinuxAgent'should show up with Status: 'Provisioning succeeded'
- 2. If you don't see the extension listed, check if machine can reach Azure and find the extension to install using the command below:
- ```azurecli
- az vm extension image list-versions --location <machine-region> --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor
- ```
- 3. Wait for 10-15 minutes as extension maybe in transitioning status. If it still doesn't show up as above, [uninstall and install the extension](./azure-monitor-agent-manage.md) again.
- 4. Check if you see any errors in extension logs located at `/var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/` on your machine
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
-
+    1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorLinuxAgent' should show up with Status: 'Provisioning succeeded'
+ 2. If you don't see the extension listed, check if machine can reach Azure and find the extension to install using the command below:
+ ```azurecli
+ az vm extension image list-versions --location <machine-region> --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor
+ ```
+    3. Wait 10-15 minutes, as the extension may be in a transitioning status. If it still doesn't show up as above, [uninstall and install the extension](./azure-monitor-agent-manage.md) again.
+ 4. Check if you see any errors in extension logs located at `/var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/` on your machine
+ 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+ 3. **Verify that the agent is running**:
- 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
- ```Kusto
- Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
- ```
- 2. Check if the agent service is running
- ```
- systemctl status azuremonitoragent
- ```
- 3. Check if you see any errors in core agent logs located at `/var/opt/microsoft/azuremonitoragent/log/mdsd.*` on your machine
- 3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
-
+ 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
+ ```Kusto
+ Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
+ ```
+ 2. Check if the agent service is running
+ ```
+ systemctl status azuremonitoragent
+ ```
+ 3. Check if you see any errors in core agent logs located at `/var/opt/microsoft/azuremonitoragent/log/mdsd.*` on your machine
+    4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+
4. **Verify that the DCR exists and is associated with the virtual machine:**
- 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
- 2. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here.
- 3. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
+ 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
+ 2. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here.
+ 3. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
+ 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
5. **Verify that agent was able to download the associated DCR(s) from AMCS service:**
- 1. Check if you see the latest DCR downloaded at this location `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`
- 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+ 1. Check if you see the latest DCR downloaded at this location `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`
+ 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
## Issues collecting Syslog For more information on how to troubleshoot syslog issues with Azure Monitor Agent, see [here](azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md).
-
-- The quality of service (QoS) file `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos` provides CSV-format 15-minute aggregations of the processed events and contains the information on the amount of the processed syslog events in the given timeframe. **This file is useful in tracking Syslog event ingestion drops**. -
- For example, the below fragment shows that in the 15 minutes preceding 2022-02-28T19:55:23.5432920Z, the agent received 77 syslog events with facility daemon and level info and sent 77 of said events to the upload task. Additionally, the agent upload task received 77 and successfully uploaded all 77 of these daemon.info messages.
-
- ```
- #Time: 2022-02-28T19:55:23.5432920Z
- #Fields: Operation,Object,TotalCount,SuccessCount,Retries,AverageDuration,AverageSize,AverageDelay,TotalSize,TotalRowsRead,TotalRowsSent
- ...
- MaRunTaskLocal,daemon.debug,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.info,15,15,0,60000,46.2,0,693,77,77
- MaRunTaskLocal,daemon.notice,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.warning,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.error,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.critical,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.alert,15,15,0,60000,0,0,0,0,0
- MaRunTaskLocal,daemon.emergency,15,15,0,60000,0,0,0,0,0
- ...
- MaODSRequest,https://e73fd5e3-ea2b-4637-8da0-5c8144b670c8_LogManagement,15,15,0,455067,476.467,0,7147,77,77
- ```
-
+
+- The quality of service (QoS) file `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos` provides CSV-format 15-minute aggregations of the processed events and contains information on the number of processed syslog events in the given timeframe. **This file is useful in tracking Syslog event ingestion drops**.
+
+ For example, the below fragment shows that in the 15 minutes preceding 2022-02-28T19:55:23.5432920Z, the agent received 77 syslog events with facility daemon and level info and sent 77 of said events to the upload task. Additionally, the agent upload task received 77 and successfully uploaded all 77 of these daemon.info messages.
+
+ ```
+ #Time: 2022-02-28T19:55:23.5432920Z
+ #Fields: Operation,Object,TotalCount,SuccessCount,Retries,AverageDuration,AverageSize,AverageDelay,TotalSize,TotalRowsRead,TotalRowsSent
+ ...
+ MaRunTaskLocal,daemon.debug,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.info,15,15,0,60000,46.2,0,693,77,77
+ MaRunTaskLocal,daemon.notice,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.warning,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.error,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.critical,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.alert,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.emergency,15,15,0,60000,0,0,0,0,0
+ ...
+ MaODSRequest,https://e73fd5e3-ea2b-4637-8da0-5c8144b670c8_LogManagement,15,15,0,455067,476.467,0,7147,77,77
+ ```
+ **Troubleshooting steps** 1. Review the [generic Linux AMA troubleshooting steps](#basic-troubleshooting-steps) first. If agent is emitting heartbeats, proceed to step 2. 2. The parsed configuration is stored at `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Check that Syslog collection is defined and the log destinations are the same as constructed in DCR UI / DCR JSON.
- 1. If yes, proceed to step 3. If not, the issue is in the configuration workflow.
- 2. Investigate `mdsd.err`,`mdsd.warn`, `mdsd.info` files under `/var/opt/microsoft/azuremonitoragent/log` for possible configuration errors.
- 3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog DCR not available' and **Problem type** as 'I need help configuring data collection from a VM'.
+ 1. If yes, proceed to step 3. If not, the issue is in the configuration workflow.
+ 2. Investigate `mdsd.err`,`mdsd.warn`, `mdsd.info` files under `/var/opt/microsoft/azuremonitoragent/log` for possible configuration errors.
+ 3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog DCR not available' and **Problem type** as 'I need help configuring data collection from a VM'.
3. Validate the layout of the Syslog collection workflow to ensure all necessary pieces are in place and accessible:
- 1. For `rsyslog` users, ensure the `/etc/rsyslog.d/10-azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `rsyslog` daemon (syslog user).
- 1. Check your rsyslog configuration at `/etc/rsyslog.conf` and `/etc/rsyslog.d/*` to see if you have any inputs bound to a non-default ruleset, as messages from these inputs won't be forwarded to Azure Monitor Agent. For instance, messages from an input configured with a non-default ruleset like `input(type="imtcp" port="514" `**`ruleset="myruleset"`**`)` won't be forward.
- 2. For `syslog-ng` users, ensure the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `syslog-ng` daemon (syslog user).
- 3. Ensure the file `/run/azuremonitoragent/default_syslog.socket` exists and is accessible by `rsyslog` or `syslog-ng` respectively.
- 4. Check for a corresponding drop in count of processed syslog events in `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos`. If such drop isn't indicated in the file, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog data dropped in pipeline' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
- 5. Check that syslog daemon queue isn't overflowing, causing the upload to fail, by referring the guidance here: [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](./azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md)
+ 1. For `rsyslog` users, ensure the `/etc/rsyslog.d/10-azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `rsyslog` daemon (syslog user).
+      1. Check your rsyslog configuration at `/etc/rsyslog.conf` and `/etc/rsyslog.d/*` to see if you have any inputs bound to a non-default ruleset, as messages from these inputs won't be forwarded to Azure Monitor Agent. For instance, messages from an input configured with a non-default ruleset like `input(type="imtcp" port="514" `**`ruleset="myruleset"`**`)` won't be forwarded.
+ 2. For `syslog-ng` users, ensure the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `syslog-ng` daemon (syslog user).
+ 3. Ensure the file `/run/azuremonitoragent/default_syslog.socket` exists and is accessible by `rsyslog` or `syslog-ng` respectively.
+      4. Check for a corresponding drop in the count of processed syslog events in `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos`. If no such drop is indicated in the file, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog data dropped in pipeline' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+      5. Check that the syslog daemon queue isn't overflowing and causing the upload to fail by referring to the guidance here: [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](./azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md)
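The checks above can be scripted. The following sketch covers the `rsyslog` case; adjust the paths for `syslog-ng` if that's the daemon in use:

```bash
# Confirm the agent's rsyslog drop-in configuration and forwarding socket exist and are readable.
ls -l /etc/rsyslog.d/10-azuremonitoragent.conf
ls -l /run/azuremonitoragent/default_syslog.socket

# Look for inputs bound to a non-default ruleset; messages from such inputs
# aren't forwarded to Azure Monitor Agent.
grep -rn "ruleset=" /etc/rsyslog.conf /etc/rsyslog.d/ 2>/dev/null
```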
4. To debug syslog events ingestion further, you can append trace flag **-T 0x2002** at the end of **MDSD_OPTIONS** in the file `/etc/default/azuremonitoragent`, and restart the agent:
- ```
- export MDSD_OPTIONS="-A -c /etc/opt/microsoft/azuremonitoragent/mdsd.xml -d -r $MDSD_ROLE_PREFIX -S $MDSD_SPOOL_DIRECTORY/eh -L $MDSD_SPOOL_DIRECTORY/events -e $MDSD_LOG_DIR/mdsd.err -w $MDSD_LOG_DIR/mdsd.warn -o $MDSD_LOG_DIR/mdsd.info -T 0x2002"
- ```
+ ```
+ export MDSD_OPTIONS="-A -c /etc/opt/microsoft/azuremonitoragent/mdsd.xml -d -r $MDSD_ROLE_PREFIX -S $MDSD_SPOOL_DIRECTORY/eh -L $MDSD_SPOOL_DIRECTORY/events -e $MDSD_LOG_DIR/mdsd.err -w $MDSD_LOG_DIR/mdsd.warn -o $MDSD_LOG_DIR/mdsd.info -T 0x2002"
+ ```
5. After the issue is reproduced with the trace flag on, you'll find more debug information in `/var/opt/microsoft/azuremonitoragent/log/mdsd.info`. Inspect the file for the possible cause of the syslog collection issue, such as parsing, processing, configuration, or upload errors.
- > [!WARNING]
- > Ensure to remove trace flag setting **-T 0x2002** after the debugging session, since it generates many trace statements that could fill up the disk more quickly or make visually parsing the log file difficult.
+ > [!WARNING]
+   > Be sure to remove the trace flag setting **-T 0x2002** after the debugging session. It generates many trace statements that can fill up the disk quickly and make the log file difficult to read.
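With the trace flag enabled, a simple filter over `mdsd.info` can help surface the failing stage. The exact message text varies by agent version, so treat this only as a starting point:

```bash
# Show the most recent debug lines that mention errors, failures, or drops.
grep -inE "error|fail|drop" /var/opt/microsoft/azuremonitoragent/log/mdsd.info | tail -n 50
```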
6. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA fails to collect syslog events' and **Problem type** as 'I need help with Azure Monitor Linux Agent'. ## Troubleshooting issues on Arc-enabled server
azure-monitor Azure Monitor Agent Troubleshoot Windows Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-arc.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your Arc-enabled server > Open **Settings** : **Extensions** from the pane on the left > 'AzureMonitorWindowsAgent'should show up with Status: 'Succeeded'
- 2. If not, check if the Arc agent (Connected Machine Agent) is able to connect to Azure and the extension service is running.
- ```azurecli
- azcmagent show
- ```
- You should see the below output:
- ```
- Resource Name : <server name>
- [...]
- Dependent Service Status
- Agent Service (himds) : running
- GC Service (gcarcservice) : running
- Extension Service (extensionservice) : running
- ```
- If instead you see `Agent Status: Disconnected` or any other status, [file a ticket](#file-a-ticket) with **Summary** as 'Arc agent or extensions service not working' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
- 3. Wait for 10-15 minutes as extension maybe in transitioning status. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
- 4. If not, check if you see any errors in extension logs located at `C:\ProgramData\GuestConfig\extension_logs\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine
- 5. If none of the above works, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
+      1. Open Azure portal > select your Arc-enabled server > Open **Settings** : **Extensions** from the pane on the left > 'AzureMonitorWindowsAgent' should show up with Status: 'Succeeded'
+ 2. If not, check if the Arc agent (Connected Machine Agent) is able to connect to Azure and the extension service is running.
+ ```azurecli
+ azcmagent show
+ ```
+         You should see output similar to the following:
+ ```
+ Resource Name : <server name>
+ [...]
+ Dependent Service Status
+ Agent Service (himds) : running
+ GC Service (gcarcservice) : running
+ Extension Service (extensionservice) : running
+ ```
+ If instead you see `Agent Status: Disconnected` or any other status, [file a ticket](#file-a-ticket) with **Summary** as 'Arc agent or extensions service not working' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+      3. Wait for 10-15 minutes, as the extension may be in a transitioning state. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
+      4. If not, check if you see any errors in the extension logs located at `C:\ProgramData\GuestConfig\extension_logs\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine.
+ 5. If none of the above works, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
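To review the extension logs from step 4 without browsing manually, a PowerShell sketch such as the following can help (it assumes the log folder exists at the path shown above):

```powershell
# List the Azure Monitor Agent extension logs and show the tail of the most recent one.
$logDir = "C:\ProgramData\GuestConfig\extension_logs\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent"
Get-ChildItem -Path $logDir -File | Sort-Object LastWriteTime -Descending | Select-Object -First 5
Get-ChildItem -Path $logDir -File | Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 | Get-Content -Tail 50
```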
+ 3. **Verify that the agent is running**:
- 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
- ```Kusto
- Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
- ```
- 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up.
- 3. If not, check if you see any errors in core agent logs located at `C:\Resources\Directory\AMADataStore\Configuration` on your machine
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
+ 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
+ ```Kusto
+ Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
+ ```
+ 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up.
+ 3. If not, check if you see any errors in core agent logs located at `C:\Resources\Directory\AMADataStore\Configuration` on your machine
+ 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
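If you prefer a console check over Task Manager, this sketch looks for the agent core process:

```powershell
# Report whether the Azure Monitor Agent core process is running.
$proc = Get-Process -Name "MonAgentCore" -ErrorAction SilentlyContinue
if ($proc) { $proc | Select-Object Name, Id, StartTime } else { "MonAgentCore.exe is not running." }
```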
4. **Verify that the DCR exists and is associated with the Arc-enabled server:**
- 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
- 2. On your Arc-enabled server, verify the existence of the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.latest.xml`. If this file doesn't exist, the Arc-enabled server may not be associated with a DCR.
- 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the Arc-enabled server listed here
- 4. If not listed, click 'Add' and select your Arc-enabled server from the resource picker. Repeat across all DCRs.
- 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
+ 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
+ 2. On your Arc-enabled server, verify the existence of the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.latest.xml`. If this file doesn't exist, the Arc-enabled server may not be associated with a DCR.
+ 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the Arc-enabled server listed here
+ 4. If not listed, click 'Add' and select your Arc-enabled server from the resource picker. Repeat across all DCRs.
+ 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
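As an alternative to the portal, the association can also be created with a REST PUT against the Data Collection Rule Associations API. This is only a sketch: the association name is arbitrary and the `api-version` shown is an assumption, so substitute one that's current for your environment.

PUT https://management.azure.com/{arc-enabled-server-resource-id}/providers/Microsoft.Insights/dataCollectionRuleAssociations/{association-name}?api-version=2022-06-01

**Request Body**
```JSON
{
    "properties":
    {
        "dataCollectionRuleId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{DCRName}"
    }
}
```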
5. **Verify that agent was able to download the associated DCR(s) from AMCS service:**
- 1. Check if you see the latest DCR downloaded at this location `C:\Resources\Directory\AMADataStore\mcs\configchunks`
- 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ 1. Check if you see the latest DCR downloaded at this location `C:\Resources\Directory\AMADataStore\mcs\configchunks`
+ 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
## Issues collecting Performance counters 1. Check that your DCR JSON contains a section for 'performanceCounters'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md). 2. Check that the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'. 3. Open the file and check if it contains `CounterSet` nodes as shown in the example below:
- ```xml
- <CounterSet storeType="Local" duration="PT1M"
+ ```xml
+ <CounterSet storeType="Local" duration="PT1M"
eventName="c9302257006473204344_16355538690556228697" sampleRateInSeconds="15" format="Factored"> <Counter>\Processor(_Total)\% Processor Time</Counter>
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
<Counter>\LogicalDisk(_Total)\Free Megabytes</Counter> <Counter>\PhysicalDisk(_Total)\Avg. Disk Queue Length</Counter> </CounterSet>
- ```
- If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ ```
+ If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
### Issues using 'Custom Metrics' as destination 1. Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites). 2. Ensure that the associated DCR is correctly authored to collect performance counters and send them to Azure Monitor metrics. You should see this section in your DCR:
- ```json
- "destinations": {
- "azureMonitorMetrics": {
- "name":"myAmMetricsDest"
- }
- }
- ```
-
+ ```json
+ "destinations": {
+ "azureMonitorMetrics": {
+ "name":"myAmMetricsDest"
+ }
+ }
+ ```
+
3. Run PowerShell command:
- ```powershell
- Get-WmiObject Win32_Process -Filter "name = 'MetricsExtension.Native.exe'" | select Name,ExecutablePath,CommandLine | Format-List
- ```
-
- Verify that the *CommandLine* parameter in the output contains the argument "-TokenSource MSI"
+ ```powershell
+ Get-WmiObject Win32_Process -Filter "name = 'MetricsExtension.Native.exe'" | select Name,ExecutablePath,CommandLine | Format-List
+ ```
+
+ Verify that the *CommandLine* parameter in the output contains the argument "-TokenSource MSI"
4. Verify `C:\Resources\Directory\AMADataStore\mcs\AuthToken-MSI.json` file is present. 5. Verify `C:\Resources\Directory\AMADataStore\mcs\CUSTOMMETRIC_<subscription>_<region>_MonitoringAccount_Configuration.json` file is present. 6. Collect logs by running the command `C:\Packages\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\<version-number>\Monitoring\Agent\table2csv.exe C:\Resources\Directory\AMADataStore\Tables\MaMetricsExtensionEtw.tsf`
- 1. The command will generate the file 'MaMetricsExtensionEtw.csv'
- 2. Open it and look for any Level 2 errors and try to fix them.
+ 1. The command will generate the file 'MaMetricsExtensionEtw.csv'
+ 2. Open it and look for any Level 2 errors and try to fix them.
7. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to collect custom metrics' and **Problem type** as 'I need help with Azure Monitor Windows Agent'. ## Issues collecting Windows event logs 1. Check that your DCR JSON contains a section for 'windowsEventLogs'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md). 2. Check that the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'. 3. Open the file and check if it contains `Subscription` nodes as shown in the example below:
- ```xml
- <Subscription eventName="c9302257006473204344_14882095577508259570"
+ ```xml
+ <Subscription eventName="c9302257006473204344_14882095577508259570"
query="System!*[System[(Level = 1 or Level = 2 or Level = 3)]]"> <Column name="ProviderGuid" type="mt:wstr" defaultAssignment="00000000-0000-0000-0000-000000000000"> <Value>/Event/System/Provider/@Guid</Value> </Column>
- ...
-
+ ...
+
</Column> </Subscription>
- ```
- If there are no `Subscription` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ ```
+ If there are no `Subscription` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
[!INCLUDE [azure-monitor-agent-file-a-ticket](../../../includes/azure-monitor-agent/azure-monitor-agent-file-a-ticket.md)]
azure-monitor Azure Monitor Agent Troubleshoot Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).** 2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
- 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorWindowsAgent'should show up with Status: 'Provisioning succeeded'
- 2. If not, check if machine can reach Azure and find the extension to install using the command below:
- ```azurecli
- az vm extension image list-versions --location <machine-region> --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor
- ```
- 3. Wait for 10-15 minutes as extension maybe in transitioning status. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
- 4. If not, check if you see any errors in extension logs located at `C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine
- 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
+      1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** from the pane on the left > 'AzureMonitorWindowsAgent' should show up with Status: 'Provisioning succeeded'
+ 2. If not, check if machine can reach Azure and find the extension to install using the command below:
+ ```azurecli
+ az vm extension image list-versions --location <machine-region> --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor
+ ```
+      3. Wait for 10-15 minutes, as the extension may be in a transitioning state. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
+      4. If not, check if you see any errors in the extension logs located at `C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine.
+ 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
3. **Verify that the agent is running**:
- 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
- ```Kusto
- Heartbeat | where Category == "Azure Monitor Agent" and 'Computer' == "<computer-name>" | take 10
- ```
- 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up.
- 3. If not, check if you see any errors in core agent logs located at `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Configuration` on your machine
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
+ 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
+ ```Kusto
+         Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
+ ```
+ 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up.
+ 3. If not, check if you see any errors in core agent logs located at `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Configuration` on your machine
+ 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
4. **Verify that the DCR exists and is associated with the virtual machine:**
- 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
- 2. On your virtual machine, verify the existence of the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.latest.xml`. If this file doesn't exist:
- - The virtual machine may not be associated with a DCR. See step 3
- - The virtual machine may not have Managed Identity enabled. [See here](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-during-creation-of-a-vm) on how to enable.
- - IMDS service is not running/accessible from the virtual machine. [Check if you can access IMDS from the machine](../../virtual-machines/windows/instance-metadata-service.md?tabs=windows). If not, [file a ticket](#file-a-ticket) with **Summary** as 'IMDS service not running' and **Problem type** as 'I need help configuring data collection from a VM'.
- - AMA cannot access IMDS. Check if you see IMDS errors in `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MAEventTable.tsf` file. If yes, [file a ticket](#file-a-ticket) with **Summary** as 'AMA cannot access IMDS' and **Problem type** as 'I need help configuring data collection from a VM'.
- 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here
- 4. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
- 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
+ 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
+ 2. On your virtual machine, verify the existence of the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.latest.xml`. If this file doesn't exist:
+ - The virtual machine may not be associated with a DCR. See step 3
+ - The virtual machine may not have Managed Identity enabled. [See here](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-during-creation-of-a-vm) on how to enable.
+ - IMDS service is not running/accessible from the virtual machine. [Check if you can access IMDS from the machine](../../virtual-machines/windows/instance-metadata-service.md?tabs=windows). If not, [file a ticket](#file-a-ticket) with **Summary** as 'IMDS service not running' and **Problem type** as 'I need help configuring data collection from a VM'.
+ - AMA cannot access IMDS. Check if you see IMDS errors in `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MAEventTable.tsf` file. If yes, [file a ticket](#file-a-ticket) with **Summary** as 'AMA cannot access IMDS' and **Problem type** as 'I need help configuring data collection from a VM'.
+ 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here
+ 4. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
+ 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
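To test IMDS reachability from inside the virtual machine (third bullet above), you can query the instance metadata endpoint directly. A minimal PowerShell sketch:

```powershell
# IMDS requires the Metadata header and must be called from inside the VM without going through a proxy.
Invoke-RestMethod -Headers @{ Metadata = "true" } -Method GET `
    -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
```

A JSON response with instance details means IMDS is reachable; a timeout or error points to the IMDS issue described above.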
5. **Verify that agent was able to download the associated DCR(s) from AMCS service:**
- 1. Check if you see the latest DCR downloaded at this location `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\configchunks`
- 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ 1. Check if you see the latest DCR downloaded at this location `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\configchunks`
+ 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
+
## Issues collecting Performance counters 1. Check that your DCR JSON contains a section for 'performanceCounters'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md). 2. Check that the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'. 3. Open the file and check if it contains `CounterSet` nodes as shown in the example below:
- ```xml
- <CounterSet storeType="Local" duration="PT1M"
+ ```xml
+ <CounterSet storeType="Local" duration="PT1M"
eventName="c9302257006473204344_16355538690556228697" sampleRateInSeconds="15" format="Factored"> <Counter>\Processor(_Total)\% Processor Time</Counter>
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
<Counter>\LogicalDisk(_Total)\Free Megabytes</Counter> <Counter>\PhysicalDisk(_Total)\Avg. Disk Queue Length</Counter> </CounterSet>
- ```
- If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ ```
+ If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
### Issues using 'Custom Metrics' as destination 1. Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites). 2. Ensure that the associated DCR is correctly authored to collect performance counters and send them to Azure Monitor metrics. You should see this section in your DCR:
- ```json
- "destinations": {
- "azureMonitorMetrics": {
- "name":"myAmMetricsDest"
- }
- }
- ```
+ ```json
+ "destinations": {
+ "azureMonitorMetrics": {
+ "name":"myAmMetricsDest"
+ }
+ }
+ ```
3. Run PowerShell command:
- ```powershell
- Get-WmiObject Win32_Process -Filter "name = 'MetricsExtension.Native.exe'" | select Name,ExecutablePath,CommandLine | Format-List
- ```
- Verify that the *CommandLine* parameter in the output contains the argument "-TokenSource MSI"
+ ```powershell
+ Get-WmiObject Win32_Process -Filter "name = 'MetricsExtension.Native.exe'" | select Name,ExecutablePath,CommandLine | Format-List
+ ```
+ Verify that the *CommandLine* parameter in the output contains the argument "-TokenSource MSI"
4. Verify `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\AuthToken-MSI.json` file is present. 5. Verify `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\CUSTOMMETRIC_<subscription>_<region>_MonitoringAccount_Configuration.json` file is present. 6. Collect logs by running the command `C:\Packages\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\<version-number>\Monitoring\Agent\table2csv.exe C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MaMetricsExtensionEtw.tsf`
- 1. The command will generate the file 'MaMetricsExtensionEtw.csv'
- 2. Open it and look for any Level 2 errors and try to fix them.
+ 1. The command will generate the file 'MaMetricsExtensionEtw.csv'
+ 2. Open it and look for any Level 2 errors and try to fix them.
7. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to collect custom metrics' and **Problem type** as 'I need help with Azure Monitor Windows Agent'. ## Issues collecting Windows event logs 1. Check that your DCR JSON contains a section for 'windowsEventLogs'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md). 2. Check that the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'. 3. Open the file and check if it contains `Subscription` nodes as shown in the example below:
- ```xml
- <Subscription eventName="c9302257006473204344_14882095577508259570"
+ ```xml
+ <Subscription eventName="c9302257006473204344_14882095577508259570"
query="System!*[System[(Level = 1 or Level = 2 or Level = 3)]]"> <Column name="ProviderGuid" type="mt:wstr" defaultAssignment="00000000-0000-0000-0000-000000000000"> <Value>/Event/System/Provider/@Guid</Value> </Column>
- ...
-
+ ...
+
</Column> </Subscription>
- ```
- If there are no `Subscription`, nodes then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
-
-
+ ```
+   If there are no `Subscription` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ [!INCLUDE [azure-monitor-agent-file-a-ticket](../../../includes/azure-monitor-agent/azure-monitor-agent-file-a-ticket.md)]
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Here is a comparison between client installer and VM extension for Azure Monitor
| Authentication | Using Managed Identity | Using AAD device token | | Central configuration | Via Data collection rules | Same | | Associating config rules to agents | DCRs associate directly to individual VM resources | DCRs associate to Monitored Object (MO), which maps to all devices within the AAD tenant |
-| Data upload to Log Analytics | Via Log Analytics endpoints | Same |
+| Data upload to Log Analytics | Via Log Analytics endpoints | Same |
| Feature support | All features documented [here](./azure-monitor-agent-overview.md) | Features dependent on AMA agent extension that don't require additional extensions. This includes support for Sentinel Windows Event filtering | | [Networking options](./azure-monitor-agent-overview.md#networking) | Proxy support, Private link support | Proxy support only |
Here is a comparison between client installer and VM extension for Azure Monitor
3. The machine must be domain joined to an Azure AD tenant (AADj or Hybrid AADj machines), which enables the agent to fetch Azure AD device tokens used to authenticate and fetch data collection rules from Azure. 4. You may need tenant admin permissions on the Azure AD tenant. 5. The device must have access to the following HTTPS endpoints:
- - global.handler.control.monitor.azure.com
- - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.azure.com)
- - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
+ - global.handler.control.monitor.azure.com
+    - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)
+ - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
(If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)) 6. A data collection rule you want to associate with the devices. If it doesn't exist already, [create a data collection rule](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule). **Do not associate the rule to any resources yet**. ## Install the agent 1. Download the Windows MSI installer for the agent using [this link](https://go.microsoft.com/fwlink/?linkid=2192409). You can also download it from **Monitor** > **Data Collection Rules** > **Create** experience on Azure portal (shown below):
- [![Diagram shows download agent link on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal-focus.png#lightbox)
+ [![Diagram shows download agent link on Azure portal.](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal.png)](media/azure-monitor-agent-windows-client/azure-monitor-agent-client-installer-portal-focus.png#lightbox)
2. Open an elevated admin command prompt window and change directory to the location where you downloaded the installer. 3. To install with **default settings**, run the following command:
- ```cli
- msiexec /i AzureMonitorAgentClientSetup.msi /qn
- ```
+ ```cli
+ msiexec /i AzureMonitorAgentClientSetup.msi /qn
+ ```
4. To install with custom file paths, [network proxy settings](./azure-monitor-agent-overview.md#proxy-configuration), or on a non-public cloud, use the command below with the values from the following table (an example invocation follows the table):
- ```cli
- msiexec /i AzureMonitorAgentClientSetup.msi /qn DATASTOREDIR="C:\example\folder"
- ```
-
- | Parameter | Description |
- |:|:|
- | INSTALLDIR | Directory path where the agent binaries are installed |
- | DATASTOREDIR | Directory path where the agent stores its operational logs and data |
- | PROXYUSE | Must be set to "true" to use proxy |
- | PROXYADDRESS | Set to Proxy Address. PROXYUSE must be set to "true" to be correctly applied |
- | PROXYUSEAUTH | Set to "true" if proxy requires authentication |
- | PROXYUSERNAME | Set to Proxy username. PROXYUSE and PROXYUSEAUTH must be set to "true" |
- | PROXYPASSWORD | Set to Proxy password. PROXYUSE and PROXYUSEAUTH must be set to "true" |
- | CLOUDENV | Set to Cloud. "Azure Commercial", "Azure China", "Azure US Gov", "Azure USNat", or "Azure USSec
+ ```cli
+ msiexec /i AzureMonitorAgentClientSetup.msi /qn DATASTOREDIR="C:\example\folder"
+ ```
+
+ | Parameter | Description |
+ |:|:|
+ | INSTALLDIR | Directory path where the agent binaries are installed |
+ | DATASTOREDIR | Directory path where the agent stores its operational logs and data |
+ | PROXYUSE | Must be set to "true" to use proxy |
+ | PROXYADDRESS | Set to Proxy Address. PROXYUSE must be set to "true" to be correctly applied |
+ | PROXYUSEAUTH | Set to "true" if proxy requires authentication |
+ | PROXYUSERNAME | Set to Proxy username. PROXYUSE and PROXYUSEAUTH must be set to "true" |
+ | PROXYPASSWORD | Set to Proxy password. PROXYUSE and PROXYUSEAUTH must be set to "true" |
+    | CLOUDENV | Set to the cloud environment: "Azure Commercial", "Azure China", "Azure US Gov", "Azure USNat", or "Azure USSec" |
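For example, a hypothetical silent installation that routes agent traffic through an authenticated proxy could look like the following; the proxy address and credentials are placeholders:

```cli
msiexec /i AzureMonitorAgentClientSetup.msi /qn PROXYUSE="true" PROXYADDRESS="http://proxy.contoso.com:8080" PROXYUSEAUTH="true" PROXYUSERNAME="<proxy-user>" PROXYPASSWORD="<proxy-password>"
```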
6. Verify successful installation:
-    - Open **Control Panel** -> **Programs and Features** OR **Settings** -> **Apps** -> **Apps & Features** and ensure you see 'Azure Monitor Agent' listed
-    - Open **Services** and confirm 'Azure Monitor Agent' is listed and shows as **Running**.
+    - Open **Control Panel** -> **Programs and Features** OR **Settings** -> **Apps** -> **Apps & Features** and ensure you see 'Azure Monitor Agent' listed
+    - Open **Services** and confirm 'Azure Monitor Agent' is listed and shows as **Running**.
7. Proceed to create the monitored object that you'll associate data collection rules to, for the agent to actually start operating. > [!NOTE]
PUT https://management.azure.com/providers/microsoft.insights/providers/microsof
**Request Body** ```JSON {
- "properties":
- {
- "roleDefinitionId":"/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b",
- "principalId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
- }
+ "properties":
+ {
+ "roleDefinitionId":"/providers/Microsoft.Authorization/roleDefinitions/56be40e24db14ccf93c37e44c597135b",
+ "principalId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
+ }
} ```
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
```JSON { "properties":
- {
+ {
"location":"eastus" } }
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
**Request Body** ```JSON {
- "properties":
- {
- "dataCollectionRuleId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{DCRName}"
- }
+ "properties":
+ {
+ "dataCollectionRuleId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{DCRName}"
+ }
} ``` **Body parameters**
In order to update the version, install the new version you wish to update to.
## Troubleshoot ### View agent diagnostic logs 1. Rerun the installation with logging turned on and specify the log file name:
- `Msiexec /I AzureMonitorAgentClientSetup.msi /L*V <log file name>`
+ `Msiexec /I AzureMonitorAgentClientSetup.msi /L*V <log file name>`
2. Runtime logs are collected automatically either at the default location `C:\Resources\Azure Monitor Agent\` or at the file path mentioned during installation.
- - If you can't locate the path, the exact location can be found on the registry as `AMADataRootDirPath` on `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMonitorAgent`.
+ - If you can't locate the path, the exact location can be found on the registry as `AMADataRootDirPath` on `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMonitorAgent`.
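For example, the following PowerShell sketch reads that location (assuming `AMADataRootDirPath` is a value under the listed key):

```powershell
# Read the data root directory recorded by the Azure Monitor Agent installer.
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\AzureMonitorAgent" -Name "AMADataRootDirPath" |
    Select-Object -ExpandProperty AMADataRootDirPath
```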
3. The 'ServiceLogs' folder contains logs from the AMA Windows Service, which launches and manages AMA processes. 4. 'AzureMonitorAgent.MonitoringDataStore' contains data/logs from AMA processes.
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
To create the data collection rule in the Azure portal:
[ ![Screenshot that shows the Azure portal form to select basic performance counters in a data collection rule.](media/data-collection-iis/iis-data-collection-rule.png)](media/data-collection-iis/iis-data-collection-rule.png#lightbox) 1. Specify a file pattern to identify the directory where the log files are located.
-1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types. For instance, you can select multiple Log Analytics workspaces, which is also known as multihoming.
+1. On the **Destination** tab, add a destination for the data source.
[ ![Screenshot that shows the Azure portal form to add a data source in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
Application Insights JavaScript SDK feature extensions are extra features that c
In this article, we cover the Click Analytics plug-in, which automatically tracks click events on webpages and uses `data-*` attributes or customized tags on HTML elements to populate event telemetry.
-> [!IMPORTANT]
-> If you haven't already, you need to first [enable Azure Monitor Application Insights Real User Monitoring](./javascript-sdk.md) before you enable the Click Analytics plug-in.
+## Prerequisites
+
+[Install the JavaScript SDK](./javascript-sdk.md) before you enable the Click Analytics plug-in.
## What data does the plug-in collect?
Users can set up the Click Analytics Auto-Collection plug-in via JavaScript (Web
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-### 1. Add the code
+### Add the code
#### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript)
-Ignore this setup if you use the npm setup.
-
-```html
-<script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.min.js"></script>
-<script type="text/javascript">
- var clickPluginInstance = new Microsoft.ApplicationInsights.ClickAnalyticsPlugin();
- // Click Analytics configuration
- var clickPluginConfig = {
- autoCapture : true,
- dataTags: {
- useDefaultContentNameOrId: true
- }
- }
- // Application Insights configuration
- var configObj = {
- connectionString: "YOUR_CONNECTION_STRING",
- // Alternatively, you can pass in the instrumentation key,
- // but support for instrumentation key ingestion will end on March 31, 2025.
- // instrumentationKey: "YOUR INSTRUMENTATION KEY",
- extensions: [
- clickPluginInstance
- ],
- extensionConfig: {
- [clickPluginInstance.identifier] : clickPluginConfig
- },
- };
- // Application Insights JavaScript (Web) SDK Loader Script code
- !function(v,y,T){<!-- Removed the JavaScript (Web) SDK Loader Script code for brevity -->}(window,document,{
- src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
- crossOrigin: "anonymous",
- cfg: configObj // configObj is defined above.
- });
-</script>
-```
-
-> [!NOTE]
-> To add or update JavaScript (Web) SDK Loader Script configuration, see [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration).
+1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights.
+
+ ```html
+ <script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.min.js"></script>
+ <script type="text/javascript">
+ var clickPluginInstance = new Microsoft.ApplicationInsights.ClickAnalyticsPlugin();
+ // Click Analytics configuration
+ var clickPluginConfig = {
+ autoCapture : true,
+ dataTags: {
+ useDefaultContentNameOrId: true
+ }
+ }
+ // Application Insights configuration
+ var configObj = {
+ connectionString: "YOUR_CONNECTION_STRING",
+ // Alternatively, you can pass in the instrumentation key,
+ // but support for instrumentation key ingestion will end on March 31, 2025.
+ // instrumentationKey: "YOUR INSTRUMENTATION KEY",
+ extensions: [
+ clickPluginInstance
+ ],
+ extensionConfig: {
+ [clickPluginInstance.identifier] : clickPluginConfig
+ },
+ };
+ // Application Insights JavaScript (Web) SDK Loader Script code
+ !function(v,y,T){<!-- Removed the JavaScript (Web) SDK Loader Script code for brevity -->}(window,document,{
+ src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
+ crossOrigin: "anonymous",
+ cfg: configObj // configObj is defined above.
+ });
+ </script>
+ ```
+
+1. To add or update JavaScript (Web) SDK Loader Script configuration, see [JavaScript (Web) SDK Loader Script configuration](./javascript-sdk.md?tabs=javascriptwebsdkloaderscript#javascript-web-sdk-loader-script-configuration).
#### [npm package](#tab/npmpackage)
appInsights.loadAppInsights();
> [!TIP]
-> If you want to add a framework extension or you've already added one, see the [React, React Native, and Angular code samples for how to add the Click Analytics plug-in](./javascript-framework-extensions.md#2-add-the-extension-to-your-code).
+> If you want to add a framework extension or you've already added one, see the [React, React Native, and Angular code samples for how to add the Click Analytics plug-in](./javascript-framework-extensions.md#add-the-extension-to-your-code).
-### 2. (Optional) Set the authenticated user context
+### (Optional) Set the authenticated user context
If you want to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext).
-> [!NOTE]
-> If you're using a HEART workbook with the Click Analytics plug-in, you don't need to set the authenticated user context to see telemetry data. For more information, see the [HEART workbook documentation](./usage-heart.md#confirm-that-data-is-flowing).
+If you're using a HEART workbook with the Click Analytics plug-in, you don't need to set the authenticated user context to see telemetry data. For more information, see the [HEART workbook documentation](./usage-heart.md#confirm-that-data-is-flowing).
## Use the plug-in
Telemetry data generated from the click events are stored as `customEvents` in t
The `name` column of the `customEvent` is populated based on the following rules: 1. The `id` provided in the `data-*-id`, which means it must start with `data` and end with `id`, is used as the `customEvent` name. For example, if the clicked HTML element has the attribute `"data-sample-id"="button1"`, then `"button1"` is the `customEvent` name. 1. If no such attribute exists and if the `useDefaultContentNameOrId` is set to `true` in the configuration, the clicked element's HTML attribute `id` or content name of the element is used as the `customEvent` name. If both `id` and the content name are present, precedence is given to `id`.
- 1. If `useDefaultContentNameOrId` is `false`, the `customEvent` name is `"not_specified"`.
-
- > [!TIP]
- > We recommend setting `useDefaultContentNameOrId` to `true` for generating meaningful data.
+ 1. If `useDefaultContentNameOrId` is `false`, the `customEvent` name is `"not_specified"`. We recommend setting `useDefaultContentNameOrId` to `true` for generating meaningful data.
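For example, under rule 1, a button tagged as follows produces a `customEvent` named `button1` when clicked. The attribute name `data-sample-id` is only an illustration; any attribute that starts with `data` and ends with `id` works the same way:

```html
<!-- The data-*-id value becomes the customEvent name. -->
<button type="button" data-sample-id="button1">Buy now</button>
```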
### `parentId` key
The value for `parentId` is fetched based on the following rules:
- If both `data-*-id` and `id` are defined, precedence is given to `data-*-id`. - If `parentDataTag` is defined but the plug-in can't find this tag under the DOM tree, the plug-in uses the `id` or `data-*-id` defined within the element that is closest to the clicked element as `parentId`. However, we recommend defining the `data-{parentDataTag}` or `customDataPrefix-{parentDataTag}` attribute to reduce the number of loops needed to find `parentId`. Declaring `parentDataTag` is useful when you need to use the plug-in with customized options. - If no `parentDataTag` is defined and no `parentId` information is included in current element, no `parentId` value is collected. -
-> [!NOTE]
-> If `parentDataTag` is defined, `useDefaultContentNameOrId` is set to `false`, and only an `id` attribute is defined within the element closest to the clicked element, the `parentId` populates as `"not_specified"`. To fetch the value of `id`, set `useDefaultContentNameOrId` to `true`.
+- If `parentDataTag` is defined, `useDefaultContentNameOrId` is set to `false`, and only an `id` attribute is defined within the element closest to the clicked element, the `parentId` populates as `"not_specified"`. To fetch the value of `id`, set `useDefaultContentNameOrId` to `true`.
When you define the `data-parentid` or `data-*-parentid` attribute, the plug-in fetches the instance of this attribute that is closest to the clicked element, including within the clicked element if applicable. If you declare `parentDataTag` and define the `data-parentid` or `data-*-parentid` attribute, precedence is given to `data-parentid` or `data-*-parentid`.
-> [!NOTE]
-> For examples showing which value is fetched as the `parentId` for different configurations, see [Examples of `parentid` key](#examples-of-parentid-key).
-
-> [!CAUTION]
-> Once `parentDataTag` is included in *any* HTML element across your application *the SDK begins looking for parents tags across your entire application* and not just the HTML element where you used it.
+For examples showing which value is fetched as the `parentId` for different configurations, see [Examples of `parentid` key](#examples-of-parentid-key).
> [!CAUTION]
-> If you're using the HEART workbook with the Click Analytics plug-in, for HEART events to be logged or detected, the tag `parentDataTag` must be declared in all other parts of an end user's application.
+> - Once `parentDataTag` is included in *any* HTML element across your application *the SDK begins looking for parents tags across your entire application* and not just the HTML element where you used it.
+> - If you're using the HEART workbook with the Click Analytics plug-in, for HEART events to be logged or detected, the tag `parentDataTag` must be declared in all other parts of an end user's application.
### `customDataPrefix`
export const clickPluginConfigWithParentDataTag = {
</div> ```
-For example 2, for clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` definition takes precedence.
-> [!NOTE]
-> If the `data-parentid` attribute was defined within the div element with `className=ΓÇ¥test2ΓÇ¥`, the value for `parentId` would still be `parentid2`.
+For example 2, for clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` definition takes precedence. If the `data-parentid` attribute was defined within the div element with `className="test2"`, the value for `parentId` would still be `parentid2`.
### Example 3
export const clickPluginConfigWithParentDataTag = {
</div> ``` For example 3, for clicked element `<Button>`, because `parentDataTag` is declared and the `data-parentid` or `data-*-parentid` attribute isn't defined, the value of `parentId` is `test6parent`. It's `test6parent` because when `parentDataTag` is declared, the plug-in fetches the value of the `id` or `data-*-id` attribute from the parent HTML element that is closest to the clicked element. Because `data-group="buttongroup1"` is defined, the plug-in finds the `parentId` more efficiently.
-> [!NOTE]
-> If you remove the `data-group="buttongroup1"` attribute, the value of `parentId` is still `test6parent`, because `parentDataTag` is still declared.
+
+If you remove the `data-group="buttongroup1"` attribute, the value of `parentId` is still `test6parent`, because `parentDataTag` is still declared.
## Troubleshooting
See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap
## Next steps -- [Confirm data is flowing](./javascript-sdk.md#5-confirm-data-is-flowing).
+- [Confirm data is flowing](./javascript-sdk.md#confirm-data-is-flowing).
- See the [documentation on utilizing HEART workbook](usage-heart.md) for expanded product analytics. - See the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection Plug-in. - Use [Events Analysis in the Usage experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
In addition to the core SDK, there are also plugins available for specific frame
These plugins provide extra functionality and integration with the specific framework.
-> [!IMPORTANT]
-> If you haven't already, you need to first [enable Azure Monitor Application Insights Real User Monitoring](./javascript-sdk.md) before you enable a framework extension.
- ## Prerequisites
+- Install the [JavaScript SDK](./javascript-sdk.md).
+ ### [React](#tab/react) None. ### [React Native](#tab/reactnative)
-You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin only works in react-native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/) or Create React Native App, which is based on the Expo framework.
+- You must use version 2.0.0 or later of `@microsoft/applicationinsights-web`. This plugin works only in React Native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/) or Create React Native App, which is based on the Expo framework.
### [Angular](#tab/angular)
-None.
+- The Angular plugin is NOT ECMAScript 3 (ES3) compatible.
+- When we add support for a new Angular version, our npm package becomes incompatible with down-level Angular versions. Continue to use older npm packages until you're ready to upgrade your Angular version.
The Angular plugin for the Application Insights JavaScript SDK enables:
- Track exceptions - Chain more custom exception handlers
-> [!WARNING]
-> Angular plugin is NOT ECMAScript 3 (ES3) compatible.
-
-> [!IMPORTANT]
-> When we add support for a new Angular version, our NPM package becomes incompatible with down-level Angular versions. Continue to use older NPM packages until you're ready to upgrade your Angular version.
- ## Add a plug-in To add a plug-in, follow the steps in this section.
-### 1. Install the package
+### Install the package
#### [React](#tab/react)
npm install @microsoft/applicationinsights-angularplugin-js
-### 2. Add the extension to your code
+### Add the extension to your code
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
npm install @microsoft/applicationinsights-angularplugin-js
Initialize a connection to Application Insights:
-> [!TIP]
-> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [reactPlugin],`.
- ```javascript import React from 'react'; import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
import { createBrowserHistory } from "history"; const browserHistory = createBrowserHistory({ basename: '' }); var reactPlugin = new ReactPlugin();
-// Add the Click Analytics plug-in.
+// *** Add the Click Analytics plug-in. ***
/* var clickPluginInstance = new ClickAnalyticsPlugin(); var clickPluginConfig = { autoCapture: true
var reactPlugin = new ReactPlugin();
var appInsights = new ApplicationInsights({ config: { connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- // If you're adding the Click Analytics plug-in, delete the next line.
+ // *** If you're adding the Click Analytics plug-in, delete the next line. ***
extensions: [reactPlugin],
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
// extensions: [reactPlugin, clickPluginInstance], extensionConfig: { [reactPlugin.identifier]: { history: browserHistory }
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
// [clickPluginInstance.identifier]: clickPluginConfig } }
var appInsights = new ApplicationInsights({
appInsights.loadAppInsights(); ```
-> [!TIP]
-> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
- #### [React Native](#tab/reactnative) - **React Native Plug-in** To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance.
- > [!TIP]
- > If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [RNPlugin]`.
- ```typescript import { ApplicationInsights } from '@microsoft/applicationinsights-web'; import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
// import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js'; var RNPlugin = new ReactNativePlugin();
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
/* var clickPluginInstance = new ClickAnalyticsPlugin(); var clickPluginConfig = { autoCapture: true
appInsights.loadAppInsights();
var appInsights = new ApplicationInsights({ config: { connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- // If you're adding the Click Analytics plug-in, delete the next line.
+ // *** If you're adding the Click Analytics plug-in, delete the next line. ***
extensions: [RNPlugin]
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
/* extensions: [RNPlugin, clickPluginInstance], extensionConfig: { [clickPluginInstance.identifier]: clickPluginConfig
appInsights.loadAppInsights();
```
- > [!TIP]
- > If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
-- - **React Native Manual Device Plugin** To use this plugin, you must either disable automatic device info collection or use your own device info collection class after you add the extension to your code.
Set up an instance of Application Insights in the entry component in your app:
> [!IMPORTANT] > When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. You must include either the `@microsoft/applicationinsights-web` package or the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled exceptions caught by the error service won't be sent.
-> [!TIP]
-> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [angularPlugin],`.
- ```js import { Component } from '@angular/core'; import { ApplicationInsights } from '@microsoft/applicationinsights-web'; import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js';
-// Add the Click Analytics plug-in.
+// *** Add the Click Analytics plug-in. ***
// import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js'; import { Router } from '@angular/router';
export class AppComponent {
private router: Router ){ var angularPlugin = new AngularPlugin();
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
/* var clickPluginInstance = new ClickAnalyticsPlugin(); var clickPluginConfig = { autoCapture: true
export class AppComponent {
const appInsights = new ApplicationInsights({ config: { connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- // If you're adding the Click Analytics plug-in, delete the next line.
+ // *** If you're adding the Click Analytics plug-in, delete the next line. ***
extensions: [angularPlugin],
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
// extensions: [angularPlugin, clickPluginInstance], extensionConfig: { [angularPlugin.identifier]: { router: this.router }
- // Add the Click Analytics plug-in.
+ // *** Add the Click Analytics plug-in. ***
// [clickPluginInstance.identifier]: clickPluginConfig } }
export class AppComponent {
} ```
-> [!TIP]
-> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
-
+### (Optional) Add the Click Analytics plug-in
+
+If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md):
+
+1. Uncomment the lines for Click Analytics.
+1. Do one of the following, depending on which framework extension you're using:
+
+ - For React, delete `extensions: [reactPlugin],`.
+ - For React Native, delete `extensions: [RNPlugin]`.
+ - For Angular, delete `extensions: [angularPlugin],`.
+
+1. See [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
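For orientation, the following sketch shows what the React configuration from the previous section can look like after these steps, with the Click Analytics lines uncommented and the React-only `extensions` line removed. It assumes the `@microsoft/applicationinsights-clickanalytics-js` package is installed and uses a placeholder connection string.

```javascript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
import { createBrowserHistory } from "history";

const browserHistory = createBrowserHistory({ basename: '' });
var reactPlugin = new ReactPlugin();
// Click Analytics plug-in instance and configuration, now uncommented.
var clickPluginInstance = new ClickAnalyticsPlugin();
var clickPluginConfig = {
    autoCapture: true
};
var appInsights = new ApplicationInsights({
    config: {
        connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
        // The reactPlugin-only extensions line is deleted; both plug-ins are registered here.
        extensions: [reactPlugin, clickPluginInstance],
        extensionConfig: {
            [reactPlugin.identifier]: { history: browserHistory },
            [clickPluginInstance.identifier]: clickPluginConfig
        }
    }
});
appInsights.loadAppInsights();
```

The React Native and Angular setups follow the same pattern, with `RNPlugin` or `angularPlugin` in place of `reactPlugin`.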
+ ## Configuration This section covers configuration settings for the framework extensions for Application Insights JavaScript SDK.
To chain more custom exception handlers:
#### [React](#tab/react)
-N/A
-
-> [!NOTE]
-> The device information, which includes Browser, OS, version, and language, is already being collected by the Application Insights web package.
+The device information, which includes Browser, OS, version, and language, is already being collected by the Application Insights web package.
#### [React Native](#tab/reactnative)
N/A
#### [Angular](#tab/angular)
-N/A
-
-> [!NOTE]
-> The device information, which includes Browser, OS, version, and language, is already being collected by the Application Insights web package.
+The device information, which includes Browser, OS, version, and language, is already being collected by the Application Insights web package.
customMetrics
| summarize avg(value), count() by tostring(customDimensions["Component Name"]) ```
-> [!NOTE]
-> It can take up to 10 minutes for new custom metrics to appear in the Azure portal.
+It can take up to 10 minutes for new custom metrics to appear in the Azure portal.
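For context, component-level entries like the ones this query aggregates are typically emitted when you wrap a component with the React plug-in's `withAITracking` higher-order component. The sketch below assumes a `reactPlugin` instance exported from your own setup module; the file name and component name are illustrative.

```javascript
import React from 'react';
import { withAITracking } from '@microsoft/applicationinsights-react-js';
import { reactPlugin } from './application-insights'; // your own module that creates and exports reactPlugin

function MyComponent() {
    return <div>Hello from MyComponent</div>;
}

// Engagement time for MyComponent is reported as a custom metric, and the component
// name is available as a custom dimension for queries like the one above.
export default withAITracking(reactPlugin, MyComponent, 'MyComponent');
```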
#### Use Application Insights with React Context
Check out the [Application Insights Angular demo](https://github.com/microsoft/a
## Next steps -- [Confirm data is flowing](javascript-sdk.md#5-confirm-data-is-flowing).
+- [Confirm data is flowing](javascript-sdk.md#confirm-data-is-flowing).
azure-monitor Javascript Sdk Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-configuration.md
The Azure Application Insights JavaScript SDK provides configuration for trackin
These configuration fields are optional and default to false unless otherwise stated.
-| Name | Type | Default | Description |
-||||-|
-| accountId | string | null | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars |
-| addRequestContext | (requestContext: IRequestionContext) => {[key: string]: any} | undefined | Provide a way to enrich dependencies logs with context at the beginning of api call. Default is undefined. You need to check if `xhr` exists if you configure `xhr` related context. You need to check if `fetch request` and `fetch response` exist if you configure `fetch` related context. Otherwise you may not get the data you need. |
-| ajaxPerfLookupDelay | numeric | 25 | Defaults to 25 ms. The amount of time to wait before reattempting to find the windows.performance timings for an Ajax request, time is in milliseconds and is passed directly to setTimeout().
-| appId | string | null | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it can't be used automatically, but can be set manually in the configuration. Default is null |
-| autoTrackPageVisitTime | boolean | false | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. |
-| convertUndefined | `any` | undefined | Provide user an option to convert undefined field to user defined value.
-| cookieCfg | [ICookieCfgConfig](#cookie-management)<br>[Optional]<br>(Since 2.6.0) | undefined | Defaults to cookie usage enabled see [ICookieCfgConfig](#cookie-management) settings for full defaults. |
-| cookieDomain | alias for [`cookieCfg.domain`](#cookie-management)<br>[Optional] | null | Custom cookie domain. It's helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it takes precedence over this value. |
-| cookiePath | alias for [`cookieCfg.path`](#cookie-management)<br>[Optional]<br>(Since 2.6.0) | null | Custom cookie path. It's helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it takes precedence. |
-| correlationHeaderDomains | string[] | undefined | Enable correlation headers for specific domains |
-| correlationHeaderExcludedDomains | string[] | undefined | Disable correlation headers for specific domains |
-| correlationHeaderExcludePatterns | regex[] | undefined | Disable correlation headers using regular expressions |
-| createPerfMgr | (core: IAppInsightsCore, notification
-| customHeaders | `[{header: string, value: string}]` | undefined | The ability for the user to provide extra headers when using a custom endpoint. customHeaders aren't added on browser shutdown moment when beacon sender is used. And adding custom headers isn't supported on IE9 or earlier.
-| diagnosticLogInterval | numeric | 10000 | (internal) Polling interval (in ms) for internal logging queue |
-| disableAjaxTracking | boolean | false | If true, Ajax calls aren't autocollected. Default is false. |
-| disableCookiesUsage | alias for [`cookieCfg.enabled`](#cookie-management)<br>[Optional] | false | Default false. A boolean that indicates whether to disable the use of cookies by the SDK. If true, the SDK doesn't store or read any data from cookies.<br>(Since v2.6.0) If `cookieCfg.enabled` is defined it takes precedence. Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). |
-| disableCorrelationHeaders | boolean | false | If false, the SDK adds two headers ('Request-Id' and 'Request-Context') to all dependency requests to correlate them with corresponding requests on the server side. Default is false. |
-| disableDataLossAnalysis | boolean | true | If false, internal telemetry sender buffers are checked at startup for items not yet sent. |
-| disableExceptionTracking | boolean | false | If true, exceptions aren't autocollected. Default is false. |
-| disableFetchTracking | boolean | false | The default setting for `disableFetchTracking` is `false`, meaning it's enabled. However, in versions prior to 2.8.10, it was disabled by default. When set to `true`, Fetch requests aren't automatically collected. The default setting changed from `true` to `false` in version 2.8.0. |
-| disableFlushOnBeforeUnload | boolean | false | Default false. If true, flush method isn't called when onBeforeUnload event triggers |
-| disableIkeyDeprecationMessage | boolean | true | Disable instrumentation Key deprecation error message. If true, error messages are NOT sent.
-| disableInstrumentationKeyValidation | boolean | false | If true, instrumentation key validation check is bypassed. Default value is false.
-| disableTelemetry | boolean | false | If true, telemetry isn't collected or sent. Default is false. |
-| disableXhr | boolean | false | Don't use XMLHttpRequest or XDomainRequest (for IE < 9) by default instead attempt to use fetch() or sendBeacon. If no other transport is available, it uses XMLHttpRequest |
-| distributedTracingMode | numeric or `DistributedTracingModes` | `DistributedTracingModes.AI_AND_W3C` | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) are generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services.
-| enableAjaxErrorStatusText | boolean | false | Default false. If true, include response error data text boolean in dependency event on failed AJAX requests. |
-| enableAjaxPerfTracking | boolean | false | Default false. Flag to enable looking up and including extra browser window.performance timings in the reported Ajax (XHR and fetch) reported metrics.
-| enableAutoRouteTracking | boolean | false | Automatically track route changes in Single Page Applications (SPA). If true, each route change sends a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.<br>***Note***: If you enable this field, don't enable the `history` object for [React router configuration](./javascript-framework-extensions.md?tabs=react#track-router-history) because you'll get multiple page view events.
-| enableCorsCorrelation | boolean | false | If true, the SDK adds two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. Default is false |
-| enableDebug | boolean | false | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting results in dropped telemetry whenever an internal error occurs. It can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. |
-| enablePerfMgr | boolean | false | When enabled (true) it creates local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). It can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code.
-| enableRequestHeaderTracking | boolean | false | If true, AJAX & Fetch request headers is tracked, default is false. If ignoreHeaders isn't configured, Authorization and X-API-Key headers aren't logged.
-| enableResponseHeaderTracking | boolean | false | If true, AJAX & Fetch request's response headers is tracked, default is false. If ignoreHeaders isn't configured, WWW-Authenticate header isn't logged.
-| enableSessionStorageBuffer | boolean | true | Default true. If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load |
-| enableUnhandledPromiseRejectionTracking | boolean | false | If true, unhandled promise rejections are autocollected as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value is ignored and unhandled promise rejections aren't reported.
-| eventsLimitInMem | number | 10000 | The number of events that can be kept in memory before the SDK starts to drop events when not using Session Storage (the default).
-| excludeRequestFromAutoTrackingPatterns | string[] \| RegExp[] | undefined | Provide a way to exclude specific route from automatic tracking for XMLHttpRequest or Fetch request. If defined, for an Ajax / fetch request that the request url matches with the regex patterns, auto tracking is turned off. Default is undefined. |
-| idLength | numeric | 22 | Identifies the default length used to generate new random session and user IDs. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set the value to 5.
-| ignoreHeaders | string[] | ["Authorization", "X-API-Key", "WWW-Authenticate"] | AJAX & Fetch request and response headers to be ignored in log data. To override or discard the default, add an array with all headers to be excluded or an empty array to the configuration.
-| isBeaconApiDisabled | boolean | true | If false, the SDK sends all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) |
-| isBrowserLinkTrackingEnabled | boolean | false | Default is false. If true, the SDK tracks all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. |
-| isRetryDisabled | boolean | false | Default false. If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) |
-| isStorageUseDisabled | boolean | false | If true, the SDK doesn't store or read any data from local and session storage. Default is false. |
-| loggingLevelConsole | numeric | 0 | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) |
-| loggingLevelTelemetry | numeric | 1 | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) |
-| maxAjaxCallsPerView | numeric | 500 | Default 500 - controls how many Ajax calls are monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. |
-| maxAjaxPerfLookupAttempts | numeric | 3 | Defaults to 3. The maximum number of times to look for the window.performance timings (if available) is required. Not all browsers populate the window.performance before reporting the end of the XHR request. For fetch requests, it's added after it's complete.
-| maxBatchInterval | numeric | 15000 | How long to batch telemetry for before sending (milliseconds) |
-| maxBatchSizeInBytes | numeric | 10000 | Max size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started |
-| namePrefix | string | undefined | An optional value that is used as name postfix for localStorage and session cookie name.
-| onunloadDisableBeacon | boolean | false | Default false. when tab is closed, the SDK sends all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon) |
-| onunloadDisableFetch | boolean | false | If fetch keepalive is supported don't use it for sending events during unload, it may still fall back to fetch() without keepalive |
-| overridePageViewDuration | boolean | false | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. Default is false. |
-| perfEvtsSendAll | boolean | false | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of the event being created and its _parent_ property isn't null or undefined. Since v2.5.7
-| samplingPercentage | numeric | 100 | Percentage of events that is sent. Default is 100, meaning all events are sent. Set it if you wish to preserve your data cap for large-scale applications. |
-| sdkExtension | string | null | Sets the sdk extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). Default is null. |
-| sessionCookiePostfix | string | undefined | An optional value that is used as name postfix for session cookie name. If undefined, namePrefix is used as name postfix for session cookie name.
-| sessionExpirationMs | numeric | 86400000 | A session is logged if it has continued for this amount of time in milliseconds. Default is 24 hours |
-| sessionRenewalMs | numeric | 1800000 | A session is logged if the user is inactive for this amount of time in milliseconds. Default is 30 minutes |
-| userCookiePostfix | string | undefined | An optional value that is used as name postfix for user cookie name. If undefined, no postfix is added on user cookie name.
+| Name | Type | Default |
+||||
+| accountId<br><br>An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars | string | null |
+| addRequestContext<br><br>Provide a way to enrich dependencies logs with context at the beginning of api call. Default is undefined. You need to check if `xhr` exists if you configure `xhr` related context. You need to check if `fetch request` and `fetch response` exist if you configure `fetch` related context. Otherwise you may not get the data you need. | (requestContext: IRequestionContext) => {[key: string]: any} | undefined |
+| ajaxPerfLookupDelay<br><br>Defaults to 25 ms. The amount of time to wait before reattempting to find the windows.performance timings for an Ajax request, time is in milliseconds and is passed directly to setTimeout(). | numeric | 25 |
+| appId<br><br>AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it can't be used automatically, but can be set manually in the configuration. Default is null | string | null |
+| autoTrackPageVisitTime<br><br>If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | boolean | false |
+| convertUndefined<br><br>Provide user an option to convert undefined field to user defined value. | `any` | undefined |
+| cookieCfg<br><br>Defaults to cookie usage enabled see [ICookieCfgConfig](#cookie-management) settings for full defaults. | [ICookieCfgConfig](#cookie-management)<br>[Optional]<br>(Since 2.6.0) | undefined |
+| cookieDomain<br><br>Custom cookie domain. It's helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it takes precedence over this value. | alias for [`cookieCfg.domain`](#cookie-management)<br>[Optional] | null |
+| cookiePath<br><br>Custom cookie path. It's helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it takes precedence. | alias for [`cookieCfg.path`](#cookie-management)<br>[Optional]<br>(Since 2.6.0) | null |
+| correlationHeaderDomains<br><br>Enable correlation headers for specific domains | string[] | undefined |
+| correlationHeaderExcludedDomains<br><br>Disable correlation headers for specific domains | string[] | undefined |
+| correlationHeaderExcludePatterns<br><br>Disable correlation headers using regular expressions | regex[] | undefined |
+| createPerfMgr<br><br>Callback function that will be called to create an IPerfManager instance when required and ```enablePerfMgr``` is enabled. It enables you to override the default creation of a PerfManager() without needing to call ```setPerfMgr()``` after initialization. | (core: IAppInsightsCore, notification
+| customHeaders<br><br>The ability for the user to provide extra headers when using a custom endpoint. customHeaders aren't added on browser shutdown moment when beacon sender is used. And adding custom headers isn't supported on IE9 or earlier. | `[{header: string, value: string}]` | undefined |
+| diagnosticLogInterval<br><br>(internal) Polling interval (in ms) for internal logging queue | numeric | 10000 |
+| disableAjaxTracking<br><br>If true, Ajax calls aren't autocollected. Default is false. | boolean | false |
+| disableCookiesUsage<br><br>Default false. A boolean that indicates whether to disable the use of cookies by the SDK. If true, the SDK doesn't store or read any data from cookies.<br>(Since v2.6.0) If `cookieCfg.enabled` is defined it takes precedence. Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). | alias for [`cookieCfg.enabled`](#cookie-management)<br>[Optional] | false |
+| disableCorrelationHeaders<br><br>If false, the SDK adds two headers ('Request-Id' and 'Request-Context') to all dependency requests to correlate them with corresponding requests on the server side. Default is false. | boolean | false |
+| disableDataLossAnalysis<br><br>If false, internal telemetry sender buffers are checked at startup for items not yet sent. | boolean | true |
+| disableExceptionTracking<br><br>If true, exceptions aren't autocollected. Default is false. | boolean | false |
+| disableFetchTracking<br><br>The default setting for `disableFetchTracking` is `false`, meaning it's enabled. However, in versions prior to 2.8.10, it was disabled by default. When set to `true`, Fetch requests aren't automatically collected. The default setting changed from `true` to `false` in version 2.8.0. | boolean | false |
+| disableFlushOnBeforeUnload<br><br>Default false. If true, flush method isn't called when onBeforeUnload event triggers | boolean | false |
+| disableIkeyDeprecationMessage<br><br>Disable instrumentation Key deprecation error message. If true, error messages are NOT sent. | boolean | true |
+| disableInstrumentationKeyValidation<br><br>If true, instrumentation key validation check is bypassed. Default value is false. | boolean | false |
+| disableTelemetry<br><br>If true, telemetry isn't collected or sent. Default is false. | boolean | false |
+| disableXhr<br><br>Don't use XMLHttpRequest or XDomainRequest (for IE < 9) by default; instead, attempt to use fetch() or sendBeacon. If no other transport is available, XMLHttpRequest is used. | boolean | false |
+| distributedTracingMode<br><br>Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) are generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services. | numeric or `DistributedTracingModes` | `DistributedTracingModes.AI_AND_W3C` |
+| enableAjaxErrorStatusText<br><br>Default false. If true, include response error data text boolean in dependency event on failed AJAX requests. | boolean | false |
+| enableAjaxPerfTracking<br><br>Default false. Flag to enable looking up and including extra browser window.performance timings in the reported Ajax (XHR and fetch) reported metrics. | boolean | false |
+| enableAutoRouteTracking<br><br>Automatically track route changes in Single Page Applications (SPA). If true, each route change sends a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.<br>***Note***: If you enable this field, don't enable the `history` object for [React router configuration](./javascript-framework-extensions.md?tabs=react#track-router-history) because you'll get multiple page view events. | boolean | false |
+| enableCorsCorrelation<br><br>If true, the SDK adds two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. Default is false | boolean | false |
+| enableDebug<br><br>If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting results in dropped telemetry whenever an internal error occurs. It can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean | false |
+| enablePerfMgr<br><br>When enabled (true) it creates local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). It can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. | boolean | false |
+| enableRequestHeaderTracking<br><br>If true, AJAX & Fetch request headers are tracked. Default is false. If ignoreHeaders isn't configured, Authorization and X-API-Key headers aren't logged. | boolean | false |
+| enableResponseHeaderTracking<br><br>If true, AJAX & Fetch response headers are tracked. Default is false. If ignoreHeaders isn't configured, the WWW-Authenticate header isn't logged. | boolean | false |
+| enableSessionStorageBuffer<br><br>Default true. If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load | boolean | true |
+| enableUnhandledPromiseRejectionTracking<br><br>If true, unhandled promise rejections are autocollected as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value is ignored and unhandled promise rejections aren't reported. | boolean | false |
+| eventsLimitInMem<br><br>The number of events that can be kept in memory before the SDK starts to drop events when not using Session Storage (the default). | number | 10000 |
+| excludeRequestFromAutoTrackingPatterns<br><br>Provide a way to exclude specific route from automatic tracking for XMLHttpRequest or Fetch request. If defined, for an Ajax / fetch request that the request url matches with the regex patterns, auto tracking is turned off. Default is undefined. | string[] \| RegExp[] | undefined |
+| idLength<br><br>Identifies the default length used to generate new random session and user IDs. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set the value to 5. | numeric | 22 |
+| ignoreHeaders<br><br>AJAX & Fetch request and response headers to be ignored in log data. To override or discard the default, add an array with all headers to be excluded or an empty array to the configuration. | string[] | ["Authorization", "X-API-Key", "WWW-Authenticate"] |
+| isBeaconApiDisabled<br><br>If false, the SDK sends all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean | true |
+| isBrowserLinkTrackingEnabled<br><br>Default is false. If true, the SDK tracks all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. | boolean | false |
+| isRetryDisabled<br><br>Default false. If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) | boolean | false |
+| isStorageUseDisabled<br><br>If true, the SDK doesn't store or read any data from local and session storage. Default is false. | boolean | false |
+| loggingLevelConsole<br><br>Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric | 0 |
+| loggingLevelTelemetry<br><br>Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric | 1 |
+| maxAjaxCallsPerView<br><br>Default 500 - controls how many Ajax calls are monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. | numeric | 500 |
+| maxAjaxPerfLookupAttempts<br><br>Defaults to 3. The maximum number of times to look for the window.performance timings (if available) is required. Not all browsers populate the window.performance before reporting the end of the XHR request. For fetch requests, it's added after it's complete. | numeric | 3 |
+| maxBatchInterval<br><br>How long to batch telemetry for before sending (milliseconds) | numeric | 15000 |
+| maxBatchSizeInBytes<br><br>Max size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started | numeric | 10000 |
+| namePrefix<br><br>An optional value that is used as name postfix for localStorage and session cookie name. | string | undefined |
+| onunloadDisableBeacon<br><br>Default false. When the tab is closed, the SDK sends all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean | false |
+| onunloadDisableFetch<br><br>If fetch keepalive is supported don't use it for sending events during unload, it may still fall back to fetch() without keepalive | boolean | false |
+| overridePageViewDuration<br><br>If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. Default is false. | boolean | false |
+| perfEvtsSendAll<br><br>When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of the event being created and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean | false |
+| samplingPercentage<br><br>Percentage of events that is sent. Default is 100, meaning all events are sent. Set it if you wish to preserve your data cap for large-scale applications. | numeric | 100 |
+| sdkExtension<br><br>Sets the SDK extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). Default is null. | string | null |
+| sessionCookiePostfix<br><br>An optional value that is used as name postfix for session cookie name. If undefined, namePrefix is used as name postfix for session cookie name. | string | undefined |
+| sessionExpirationMs<br><br>A session is logged if it has continued for this amount of time in milliseconds. Default is 24 hours | numeric | 86400000 |
+| sessionRenewalMs<br><br>A session is logged if the user is inactive for this amount of time in milliseconds. Default is 30 minutes | numeric | 1800000 |
+| userCookiePostfix<br><br>An optional value that is used as name postfix for user cookie name. If undefined, no postfix is added on user cookie name. | string | undefined |
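As a minimal sketch of how these fields are applied, the following example passes a few of them during initialization. The values and the extra header name are illustrative only, not recommendations.

```javascript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
    config: {
        connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
        samplingPercentage: 50,   // send only half of the events to preserve the data cap
        maxBatchInterval: 5000,   // batch telemetry for at most 5 seconds before sending
        // Override the default ignore list and add a custom header to exclude from logs.
        ignoreHeaders: ["Authorization", "X-API-Key", "WWW-Authenticate", "X-Custom-Secret"]
    }
});
appInsights.loadAppInsights();
```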
## Cookie management
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
# Enable Azure Monitor Application Insights Real User Monitoring
-The Microsoft Azure Monitor Application Insights JavaScript SDK allows you to monitor and analyze the performance of JavaScript web applications. This is commonly referred to as Real User Monitoring or RUM.
+The Microsoft Azure Monitor Application Insights JavaScript SDK collects usage data, which allows you to monitor and analyze the performance of JavaScript web applications. This capability is commonly referred to as Real User Monitoring (RUM).
+
+We collect page views by default. But if you want to also collect clicks by default, consider adding the [Click Analytics Auto-Collection plug-in](./javascript-feature-extensions.md):
+
+- If you're adding a [framework extension](./javascript-framework-extensions.md), which you can [add](#optional-add-advanced-sdk-configuration) after you complete the steps to get started below, you have the option to include Click Analytics when you add the framework extension.
+- If you're not adding a framework extension, [add the Click Analytics plug-in](./javascript-feature-extensions.md) after you follow the steps to get started.
## Prerequisites
Follow the steps in this section to instrument your application with the Applica
> [!TIP] > Good news! We're making it even easier to enable JavaScript. Check out where [JavaScript (Web) SDK Loader Script injection by configuration is available](./codeless-overview.md#javascript-web-sdk-loader-script-injection-by-configuration)!
-> [!NOTE]
-> If you have a React, React Native, or Angular application, you can [optionally add these plug-ins after you follow the steps to get started](#4-optional-add-advanced-sdk-configuration).
-
-### 1. Add the JavaScript code
+### Add the JavaScript code
Two methods are available to add the code to enable Application Insights via the Application Insights JavaScript SDK:
Two methods are available to add the code to enable Application Insights via the
#### [JavaScript (Web) SDK Loader Script](#tab/javascriptwebsdkloaderscript)
-1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights.
+1. Paste the JavaScript (Web) SDK Loader Script at the top of each page for which you want to enable Application Insights.
- > [!NOTE]
- > Preferably, you should add it as the first script in your <head> section so that it can monitor any potential issues with all of your dependencies.
+    Preferably, you should add it as the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies.
```html <script type="text/javascript">
Two methods are available to add the code to enable Application Insights via the
npm i --save @microsoft/applicationinsights-web ```
- > [!Note]
- > *Typings are included with this package*, so you do *not* need to install a separate typings package.
+ *Typings are included with this package*, so you do *not* need to install a separate typings package.
1. Add the following JavaScript to your application's code.
- > [!NOTE]
- > Where and also how you add this JavaScript code depends on your application code. For example, you might be able to add it exactly as it appears below or you may need to create wrappers around it.
+ Where and also how you add this JavaScript code depends on your application code. For example, you might be able to add it exactly as it appears below or you may need to create wrappers around it.
```js import { ApplicationInsights } from '@microsoft/applicationinsights-web'
Two methods are available to add the code to enable Application Insights via the
-### 2. Paste the connection string in your environment
+### Paste the connection string in your environment
To paste the connection string in your environment, follow these steps:
To paste the connection string in your environment, follow these steps:
:::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
- 1. Replace the placeholder `"YOUR_CONNECTION_STRING"` in the JavaScript code with your connection string copied to the clipboard.
+ 1. Replace the placeholder `"YOUR_CONNECTION_STRING"` in the JavaScript code with your [connection string](./sdk-connection-string.md) copied to the clipboard.
- > [!NOTE]
- > An Application Insights [connection string](sdk-connection-string.md) contains information to connect to the Azure cloud and associate telemetry data with a specific Application Insights resource. The connection string includes the Instrumentation Key (a unique identifier), the endpoint suffix (to specify the Azure cloud), and optional explicit endpoints for individual services. The connection string isn't considered a security token or key.
+ The connection string isn't considered a security token or key. For more information, see [Do new Azure regions require the use of connection strings?](../faq.yml#do-new-azure-regions-require-the-use-of-connection-strings-).
-### 3. (Optional) Add SDK configuration
+### (Optional) Add SDK configuration
The optional [SDK configuration](./javascript-sdk-configuration.md#sdk-configuration) is passed to the Application Insights JavaScript SDK during initialization.
To add SDK configuration, add each configuration option directly under `connecti
:::image type="content" source="media/javascript-sdk/example-sdk-configuration.png" alt-text="Screenshot of JavaScript code with SDK configuration options added and highlighted." lightbox="media/javascript-sdk/example-sdk-configuration.png":::
-### 4. (Optional) Add advanced SDK configuration
+### (Optional) Add advanced SDK configuration
If you want to use the extra features provided by plugins for specific frameworks and optionally enable the Click Analytics plug-in, see:
If you want to use the extra features provided by plugins for specific framework
- [React Native plugin](javascript-framework-extensions.md?tabs=reactnative) - [Angular plugin](javascript-framework-extensions.md?tabs=angular)
-> [!TIP]
-> We collect page views by default. But if you want to also collect clicks by default, consider adding the Click Analytics Auto-Collection plug-in. If you're adding a framework extension, you'll have the option to add Click Analytics when you add the framework extension. If you're not adding a framework extension, [add the Click Analytics plug-in](./javascript-feature-extensions.md).
-
-### 5. Confirm data is flowing
+### Confirm data is flowing
1. Go to your Application Insights resource that you've enabled the SDK for. 1. In the Application Insights resource menu on the left, under **Investigate**, select the **Transaction search** pane.
If you want to use the extra features provided by plugins for specific framework
:::image type="content" source="media/javascript-sdk/confirm-data-flowing.png" alt-text="Screenshot of the Application Insights Transaction search pane in the Azure portal with the Page View option selected. The page views are highlighted." lightbox="media/javascript-sdk/confirm-data-flowing.png":::
+1. If you want to query data to confirm data is flowing:
+
+ 1. Select **Logs** in the left pane.
+
+ When you select Logs, the [Queries dialog](../logs/queries.md#queries-dialog) opens, which contains sample queries relevant to your data.
+
+ 1. Select **Run** for the sample query you want to run.
+
+ 1. If needed, you can update the sample query or write a new query by using [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/).
+
+ For essential KQL operators, see [Learn common KQL operators](/azure/data-explorer/kusto/query/tutorials/learn-common-operators).
+ ## Support - If you can't run the application or you aren't getting data as expected, see the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Follow the link to the *Azure status* page and check if there's an activate outa
### Unexpected large number of requests to livediagnostics.monitor.azure.com
-Heavier traffic is expected while the LiveMetrics pane is open. Navigate away from the LiveMetrics pane to restore normal traffic flow of traffic. Application Insights SDKs poll QuickPulse endpoints with REST API calls once every five seconds to check if the LiveMetrics pane is being viewed.
+Application Insights SDKs use a REST API to communicate with QuickPulse endpoints, which provide live metrics for your web application. By default, the SDKs poll the endpoints once every five seconds to check if you are viewing the Live Metrics pane in the Azure portal.
-The SDKs send new metrics to QuickPulse every one second while the LiveMetrics pane is open.
+If you open the Live Metrics pane, the SDKs switch to a higher frequency mode and send new metrics to QuickPulse every second. This mode allows you to monitor and diagnose your live application with 1-second latency, but it also generates more network traffic. To restore the normal flow of traffic, navigate away from the Live Metrics pane.
+
+> [!NOTE]
+> The REST API calls made by the SDKs to QuickPulse endpoints are not tracked by Application Insights and do not affect your dependency calls or other metrics. However, you may see them in other network monitoring tools.
## Next steps
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
You only have to interact with the main workbook, **HEART Analytics - All Sectio
To validate that data is flowing as expected to light up the metrics accurately, select the **Development Requirements** tab. > [!IMPORTANT]
-> Unless you [set the authenticated user context](./javascript-feature-extensions.md#2-optional-set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data.
+> Unless you [set the authenticated user context](./javascript-feature-extensions.md#optional-set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data.
:::image type="content" source="media/usage-overview/development-requirements-1.png" alt-text="Screenshot that shows the Development Requirements tab of the HEART Analytics - All Sections workbook.":::
The tabs are:
Happiness is a user-reported dimension that measures how users feel about the product offered to them.
-A common approach to measure happiness is to ask users a CSAT question like How satisfied are you with this product?. Users' responses on a three- or a five-point scale (for example, *no, maybe,* and *yes*) are aggregated to create a product-level score that ranges from 1 to 5. Because user-initiated feedback tends to be negatively biased, HEART tracks happiness from surveys displayed to users at predefined intervals.
+A common approach to measure happiness is to ask users a CSAT question like "How satisfied are you with this product?" Users' responses on a three- or a five-point scale (for example, *no, maybe,* and *yes*) are aggregated to create a product-level score that ranges from 1 to 5. Because user-initiated feedback tends to be negatively biased, HEART tracks happiness from surveys displayed to users at predefined intervals.
Common happiness metrics include values such as **Average Star Rating** and **Customer Satisfaction Score**. Send these values to Azure Monitor by using one of the custom ingestion methods described in [Custom sources](../data-sources.md#custom-sources).
To learn more about Logs in Azure Monitor, see [Azure Monitor Logs overview](../
### Can I edit visuals in the workbook?
-Yes. When you select the public template of the workbook, select **Edit** and make your changes.
+Yes. When you select the public template of the workbook:
+1. Select **Edit** and make your changes.
-After you make your changes, select **Done Editing**, and then select the **Save** icon.
+ :::image type="content" source="media/usage-overview/workbook-edit-faq.png" alt-text="Screenshot that shows the Edit button in the upper-left corner of the workbook template.":::
+1. After you make your changes, select **Done Editing**, and then select the **Save** icon.
-To view your saved workbook, under **Monitoring**, go to the **Workbooks** section and then select the **Workbooks** tab. A copy of your customized workbook appears there. You can make any further changes you want in this copy.
+ :::image type="content" source="media/usage-overview/workbook-save-faq.png" alt-text="Screenshot that shows the Save icon at the top of the workbook template that becomes available after you make edits.":::
+1. To view your saved workbook, under **Monitoring**, go to the **Workbooks** section and then select the **Workbooks** tab.
+
+ A copy of your customized workbook appears there. You can make any further changes you want in this copy.
+
+ :::image type="content" source="media/usage-overview/workbook-view-faq.png" alt-text="Screenshot that shows the Workbooks tab next to the Public Templates tab, where the edited copy of the workbook is located.":::
For more on editing workbook templates, see [Azure Workbooks templates](../visualize/workbooks-templates.md).
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
You have now defined a scale condition for a specific day. When CPU usage is gre
### View the history of your resource's scale events
-Whenever your resource is scaled up or down, an event is logged in the activity log. You can view the history of the scale events in the **Run history** tab.
+Whenever a scale event occurs for your resource, it's logged in the activity log. You can view the history of the scale events in the **Run history** tab.
:::image type="content" source="./media/autoscale-get-started/run-history.png" lightbox="./media/autoscale-get-started/run-history.png" alt-text="A screenshot showing the run history tab in autoscale settings.":::
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
This option uses the following defaults:
- Creates or uses existing default log analytics workspace corresponding to the region of the cluster - Auto-upgrade is enabled for the Azure Monitor cluster extension
+>[!NOTE]
+>Managed identity authentication will be the default in k8s-extension version 1.43.0 or higher.
+>
+ ```azurecli az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers ```
To use [managed identity authentication](container-insights-onboard.md#authentic
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true ```
+>[!NOTE]
+>Managed identity authentication is not supported for Arc k8s connected clusters with **ARO**.
+>
+
+To use legacy (non-managed identity) authentication to create an extension instance on **Arc K8s connected clusters with ARO**, you can use the following commands, which don't use managed identity. Non-CLI onboarding isn't supported for Arc K8s connected clusters with **ARO**. Currently, only k8s-extension version 1.3.7 or below is supported.
+
+If you're using a k8s-extension version above 1.3.7, downgrade to version 1.3.7:
+
+```azurecli
+az extension add --name k8s-extension --version 1.3.7
+```
+
+Install the extension with **amalogs.useAADAuth=false**:
+
+```azurecli
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=false
+```
+ ### Option 2 - With existing Azure Log Analytics workspace
az k8s-extension show --name azuremonitor-containers --cluster-name <cluster-nam
## Migrate to managed identity authentication Use the following guidance to migrate an existing extension instance to managed identity authentication.
+>[!NOTE]
+>Managed identity authentication is not supported for Arc k8s connected clusters with **ARO**.
+>
+ ## [CLI](#tab/migrate-cli) First retrieve the Log Analytics workspace configured for Container insights extension.
azure-monitor Collect Custom Metrics Guestos Resource Manager Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vmss.md
Previously updated : 09/09/2019 Last updated : 07/30/2023 # Send guest OS metrics to the Azure Monitor metric store by using an Azure Resource Manager template for a Windows virtual machine scale set [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-By using the Azure Monitor [Windows Azure Diagnostics (WAD) extension](../agents/diagnostics-extension-overview.md), you can collect metrics and logs from the guest operating system (guest OS) that runs as part of a virtual machine, cloud service, or Azure Service Fabric cluster. The extension can send telemetry to many different locations listed in the previously linked article.
+By using the Azure Monitor [Azure Diagnostics extension for Windows (WAD)](../agents/diagnostics-extension-overview.md), you can collect metrics and logs from the guest operating system (guest OS) that runs as part of a virtual machine, cloud service, or Azure Service Fabric cluster. The extension can send telemetry to many different locations listed in the previously linked article.
-This article describes the process to send guest OS performance metrics for a Windows virtual machine scale set to the Azure Monitor data store. Starting with Windows Azure Diagnostics version 1.11, you can write metrics directly to the Azure Monitor metrics store, where standard platform metrics are already collected. By storing them in this location, you can access the same actions that are available for platform metrics. Actions include near real-time alerting, charting, routing, access from the REST API, and more. In the past, the Windows Azure Diagnostics extension wrote to Azure Storage but not the Azure Monitor data store.
+This article describes the process to send guest OS performance metrics for a Windows virtual machine scale set to the Azure Monitor data store. Starting with Microsoft Azure Diagnostics version 1.11, you can write metrics directly to the Azure Monitor metrics store, where standard platform metrics are already collected. By storing them in this location, you can access the same actions that are available for platform metrics. Actions include near real-time alerting, charting, routing, access from the REST API, and more. In the past, the Microsoft Azure Diagnostics extension wrote to Azure Storage but not the Azure Monitor data store.
If you're new to Resource Manager templates, learn about [template deployments](../../azure-resource-manager/management/overview.md) and their structure and syntax.
For this example, you can use a publicly available [sample template](https://git
Download and save both files locally.
-### Modify azuredeploy.parameters.json
+### Modify azuredeploy.parameters.json
+ Open the **azuredeploy.parameters.json** file:
-
+
- Provide a **vmSKU** you want to deploy. We recommend Standard_D2_v3.
-- Specify a **windowsOSVersion** you want for your virtual machine scale set. We recommend 2016-Datacenter.
-- Name the virtual machine scale set resource to be deployed by using a **vmssName** property. An example is **VMSS-WAD-TEST**.
+- Specify a **windowsOSVersion** you want for your virtual machine scale set. We recommend 2016-Datacenter.
+- Name the virtual machine scale set resource to be deployed by using a **vmssName** property. An example is **VMSS-WAD-TEST**.
- Specify the number of VMs you want to run on the virtual machine scale set by using the **instanceCount** property.
-- Enter values for **adminUsername** and **adminPassword** for the virtual machine scale set. These parameters are used for remote access to the VMs in the scale set. To avoid having your VM hijacked, **do not** use the ones in this template. Bots scan the internet for usernames and passwords in public GitHub repositories. They're likely to be testing VMs with these defaults.
+- Enter values for **adminUsername** and **adminPassword** for the virtual machine scale set. These parameters are used for remote access to the VMs in the scale set. To avoid having your VM hijacked, **do not** use the ones in this template. Bots scan the internet for usernames and passwords in public GitHub repositories. They're likely to be testing VMs with these defaults.
+### Modify azuredeploy.json
-### Modify azuredeploy.json
Open the **azuredeploy.json** file. Add a variable to hold the storage account information in the Resource Manager template. Any logs or performance counters specified in the diagnostics config file are written to both the Azure Monitor metric store and the storage account you specify here:
To deploy the Resource Manager template, use Azure PowerShell:
1. On the **Monitor** page, select **Metrics**.
- ![Monitor - Metrics page](media/collect-custom-metrics-guestos-resource-manager-vmss/metrics.png)
+ :::image source="media/collect-custom-metrics-guestos-resource-manager-vmss/metrics.png" alt-text="A screenshot showing the metrics menu item on the Azure Monitor menu page." lightbox="media/collect-custom-metrics-guestos-resource-manager-vmss/metrics.png":::
1. Change the aggregation period to **Last 30 minutes**. 1. In the resource drop-down menu, select the virtual machine scale set you created.
-1. In the namespaces drop-down menu, select **azure.vm.windows.guest**.
+1. In the namespaces drop-down menu, select **Virtual Machine Guest**.
1. In the metrics drop-down menu, select **Memory\%Committed Bytes in Use**.
+ :::image source="media/collect-custom-metrics-guestos-resource-manager-vmss/create-metrics-chart.png" alt-text="A screenshot showing the selection of namespace metric and aggregation for a metrics chart." lightbox="media/collect-custom-metrics-guestos-resource-manager-vmss/create-metrics-chart.png":::
You can then also choose to use the dimensions on this metric to chart it for a particular VM or to plot each VM in the scale set.

## Next steps

- Learn more about [custom metrics](./metrics-custom-overview.md).
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
Title: Collect custom metrics for Linux VM with the InfluxData Telegraf agent description: Instructions on how to deploy the InfluxData Telegraf agent on a Linux VM in Azure and configure the agent to publish metrics to Azure Monitor. ++ Previously updated : 06/16/2022 Last updated : 08/01/2023 # Collect custom metrics for a Linux VM with the InfluxData Telegraf agent
This article explains how to deploy and configure the [InfluxData](https://www.i
## InfluxData Telegraf agent
-[Telegraf](https://docs.influxdata.com/telegraf/) is a plug-in-driven agent that enables the collection of metrics from over 150 different sources. Depending on what workloads run on your VM, you can configure the agent to leverage specialized input plug-ins to collect metrics. Examples are MySQL, NGINX, and Apache. By using output plug-ins, the agent can then write to destinations that you choose. The Telegraf agent has integrated directly with the Azure Monitor custom metrics REST API. It supports an Azure Monitor output plug-in. By using this plug-in, the agent can collect workload-specific metrics on your Linux VM and submit them as custom metrics to Azure Monitor.
+[Telegraf](https://docs.influxdata.com/telegraf/) is a plug-in-driven agent that enables the collection of metrics from over 150 different sources. Depending on what workloads run on your VM, you can configure the agent to use specialized input plug-ins to collect metrics. Examples are MySQL, NGINX, and Apache. By using output plug-ins, the agent can then write to destinations that you choose. The Telegraf agent has integrated directly with the Azure Monitor custom metrics REST API. It supports an Azure Monitor output plug-in. Using this plug-in, the agent can collect workload-specific metrics on your Linux VM and submit them as custom metrics to Azure Monitor.
- ![Telegraph agent overview](./media/collect-custom-metrics-linux-telegraf/telegraf-agent-overview.png)
-> [!NOTE]
-> Custom Metrics are not supported in all regions. Supported regions are listed [here](./metrics-custom-overview.md#supported-regions)
--
-
-## Connect to the VM
+## Connect to the VM
-Create an SSH connection with the VM. Select the **Connect** button on the overview page for your VM.
+Create an SSH connection to the VM where you want to install Telegraf. Select the **Connect** button on the overview page for your virtual machine.
-![Telegraf VM overview page](./media/collect-custom-metrics-linux-telegraf/connect-VM-button2.png)
-In the **Connect to virtual machine** page, keep the default options to connect by DNS name over port 22. In **Login using VM local account**, a connection command is shown. Select the button to copy the command. The following example shows what the SSH connection command looks like:
+In the **Connect to virtual machine** page, keep the default options to connect by DNS name over port 22. In **Login using VM local account**, a connection command is shown. Select the button to copy the command. The following example shows what the SSH connection command looks like:
```cmd ssh azureuser@XXXX.XX.XXX
source /etc/lsb-release
sudo echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list sudo curl -fsSL https://repos.influxdata.com/influxdata-archive_compat.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg add ```
-Instal the package:
+Install the package:
```bash
- apt-get update
- apt-get install telegraf
+ sudo apt-get update
+ sudo apt-get install telegraf
``` # [RHEL, CentOS, Oracle Linux](#tab/redhat)
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdata-archive_compat.key EOF ```
-Instal the package:
+Install the package:
```bash sudo yum -y install telegraf
sudo systemctl stop telegraf
# start and enable the telegraf agent on the VM to ensure it picks up the latest configuration sudo systemctl enable --now telegraf ```
-Now the agent will collect metrics from each of the input plug-ins specified and emit them to Azure Monitor.
+Now the agent collects metrics from each of the input plug-ins specified and emits them to Azure Monitor.
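To confirm that the agent restarted cleanly and is emitting data before you move to the portal, you can check the service state and its most recent log output. This is a quick verification sketch that uses standard systemd tooling rather than anything specific to the Telegraf package:

```bash
# Verify the telegraf service is active.
sudo systemctl status telegraf --no-pager

# Review the last 20 journal entries for errors from the Azure Monitor output plug-in.
sudo journalctl -u telegraf --no-pager -n 20
```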
## Plot your Telegraf metrics in the Azure portal
Now the agent will collect metrics from each of the input plug-ins specified and
1. Navigate to the new **Monitor** tab. Then select **Metrics**.
1. Select your VM in the resource selector.
- ![Metric chart](./media/collect-custom-metrics-linux-telegraf/metric-chart.png)
- 1. Select the **Telegraf/CPU** namespace, and select the **usage_system** metric. You can choose to filter by the dimensions on this metric or split on them.
- ![Select namespace and metric](./media/collect-custom-metrics-linux-telegraf/VM-resource-selector.png)
+ :::image type="content" source="./media/collect-custom-metrics-linux-telegraf/metric-chart.png" alt-text="A screenshot showing a metric chart with telegraph metrics selected." lightbox="./media/collect-custom-metrics-linux-telegraf/metric-chart.png":::
-## Additional configuration
+## Additional configuration
The preceding walkthrough provides information on how to configure the Telegraf agent to collect metrics from a few basic input plug-ins. The Telegraf agent has support for over 150 input plug-ins, with some supporting additional configuration options. InfluxData has published a [list of supported plugins](https://docs.influxdata.com/telegraf/v1.15/plugins/inputs/) and instructions on [how to configure them](https://docs.influxdata.com/telegraf/v1.15/administration/configuration/). Additionally, in this walkthrough, you used the Telegraf agent to emit metrics about the VM the agent is deployed on. The Telegraf agent can also be used as a collector and forwarder of metrics for other resources. To learn how to configure the agent to emit metrics for other Azure resources, see [Azure Monitor Custom Metric Output for Telegraf](https://github.com/influxdat).
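As a rough sketch of how extra input plug-ins can be enabled, the following commands regenerate a configuration that combines a few input plug-ins with the Azure Monitor output plug-in, validate it, and restart the agent. The plug-in names (`cpu`, `mem`, `nginx`) and the configuration path are examples only; substitute the plug-ins and paths that match your workload.

```bash
# Generate a config limited to the chosen input plug-ins plus the Azure Monitor output plug-in.
telegraf --input-filter cpu:mem:nginx --output-filter azure_monitor config > azm-telegraf.conf

# Review and edit azm-telegraf.conf as needed, then replace the active configuration.
sudo cp azm-telegraf.conf /etc/telegraf/telegraf.conf

# Run a one-off test of the configuration, then restart the agent to apply it.
telegraf --config /etc/telegraf/telegraf.conf --test
sudo systemctl restart telegraf
```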
-## Clean up resources
+## Clean up resources
When they're no longer needed, you can delete the resource group, virtual machine, and all related resources. To do so, select the resource group for the virtual machine and select **Delete**. Then confirm the name of the resource group to delete.
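If you prefer the command line, deleting the resource group from the Azure CLI removes the same set of resources. The resource group name below is a placeholder:

```azurecli
# Deletes the resource group and everything it contains; --yes skips the confirmation prompt.
az group delete --name myResourceGroup --yes --no-wait
```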
azure-monitor Metric Chart Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metric-chart-samples.md
Title: Azure Monitor metric chart example
description: Learn about visualizing your Azure Monitor data. Previously updated : 01/29/2019++ Last updated : 08/01/2023 # Metric chart examples
-The Azure platform offers [over a thousand metrics](./metrics-supported.md), many of which have dimensions. By using [dimension filters](./metrics-charts.md), applying [splitting](./metrics-charts.md), controlling chart type, and adjusting chart settings you can create powerful diagnostic views and dashboards that provide insight into the health of your infrastructure and applications. This article shows some examples of the charts that you can build using [Metrics Explorer](./metrics-charts.md) and explains the necessary steps to configure each of these charts.
+The Azure platform offers [over a thousand metrics](/azure/azure-monitor/reference/supported-metrics/metrics-index.md), many of which have dimensions. By using [dimension filters](./metrics-charts.md), applying [splitting](./metrics-charts.md), controlling chart type, and adjusting chart settings you can create powerful diagnostic views and dashboards that provide insight into the health of your infrastructure and applications. This article shows some examples of the charts that you can build using [Metrics Explorer](./metrics-charts.md), and explains the necessary steps to configure each of these charts.
-Want to share your great charts examples with the world? Contribute to this page on GitHub and share your own chart examples here!
## Website CPU utilization by server instances
-This chart shows if CPU for an App Service was within the acceptable range and breaks it down by instance to determine whether the load was properly distributed. You can see from the chart that the app was running on a single server instance before 6 AM, and then scaled up by adding another instance.
+This chart shows if the CPU usage for an App Service Plan was within the acceptable range and breaks it down by instance to determine whether the load was properly distributed.
-![Line chart of average cpu percentage by server instance](./media/metrics-charts/cpu-by-instance.png)
-### How to configure this chart?
-
-Select your App Service resource and find the **CPU Percentage** metric. Then click on **Apply splitting** and select the **Instance** dimension.
+### How to configure this chart
+1. Select **Metrics** from the **Monitoring** section of your App Service plan's menu.
+1. Select the **CPU Percentage** metric.
+1. Select **Apply splitting** and select the **Instance** dimension.
## Application availability by region
-View your application's availability by region to identify which geographic locations are having problems. This chart shows the Application Insights availability metric. You can see that the monitored application has no problem with availability from the East US datacenter, but it is experiencing a partial availability problem from West US, and East Asia.
+View your application's availability by region to identify which geographic locations are having problems. This chart shows the Application Insights availability metric. You can see that the monitored application has no problem with availability from the East US datacenter, but it's experiencing a partial availability problem from West US, and East Asia.
-![Chart of average availability by locations](./media/metrics-charts/availability-by-location.png)
-### How to configure this chart?
+### How to configure this chart
-You first need to turn on [Application Insights availability](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) monitoring for your website. After that, pick your Application Insights resource and select the Availability metric. Apply splitting on the **Run location** dimension.
+1. You must turn on [Application Insights availability](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) monitoring for your website.
+1. Select your Application Insights resource.
+1. Select the **Availability** metric.
+1. Apply splitting on the **Run location** dimension.
## Volume of failed storage account transactions by API name

Your storage account resource is experiencing an excess volume of failed transactions. You can use the transactions metric to identify which API is responsible for the excess failure. Notice that the following chart is configured with the same dimension (API name) in splitting and filtered by failed response type:
-![Bar graph of API transactions](./media/metrics-charts/split-and-filter-example.png)
-### How to configure this chart?
+### How to configure this chart
-In the metric picker, select your storage account and the **Transactions** metric. Switch chart type to **Bar chart**. Click **Apply splitting** and select dimension **API name**. Then click on the **Add filter** and pick the **API name** dimension once again. In the filter dialog, select the APIs that you want to plot on the chart.
+1. In the **Scope** dropdown, select your storage account.
+1. In the metric dropdown, select the **Transactions** metric.
+1. Select **Add filter** and select **Response type** from the **Property** dropdown.
+1. Select **ClientOtherError** from the **Values** dropdown.
+1. Select **Apply splitting** and select **API name** from the **Values** dropdown.
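If you also want to pull the same numbers outside the portal, a rough Azure CLI sketch follows. The resource ID is a placeholder, and the filter assumes the **Response type** (`ResponseType`) and **API name** (`ApiName`) dimensions exposed by the **Transactions** metric:

```azurecli
# List failed (ClientOtherError) transactions over the last hour, split by API name.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --metric Transactions \
  --aggregation Total \
  --interval PT1H \
  --filter "ResponseType eq 'ClientOtherError' and ApiName eq '*'"
```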
## Total requests of Cosmos DB by Database Names and Collection Names

You want to identify which collection in which database of your Cosmos DB instance is receiving the most requests so that you can adjust your Cosmos DB costs.
-![Segmented line chart of Total Requests](./media/metrics-charts/multiple-split-example.png)
-### How to configure this chart?
+### How to configure this chart
-In the metric picker, select your Cosmos DB resource and the **Total Requests** metric. Click **Apply splitting** and select dimensions **DatabaseName** and **CollectionName**.
+1. In the scope dropdown, select your Cosmos DB.
+1. In the metric dropdown, select **Total Requests**.
+1. Select **Apply splitting** and select the **DatabaseName** and **CollectionName** dimensions from the **Values** dropdown.
## Next steps
azure-monitor Metrics Store Custom Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-store-custom-rest-api.md
Save the access token from the response for use in the following HTTP requests.
- **accessToken**: The authorization token acquired from the previous step. ```Shell
- curl -X POST 'https://<location>.monitoring.azure.com<resourceId>/metrics' \
+ curl -X POST 'https://<location>.monitoring.azure.com<resourceId>/metrics' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <accessToken>' \
-d @custommetric.json
azure-monitor Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-troubleshoot.md
Previously updated : 06/09/2022 Last updated : 09/27/2022 # Troubleshooting metrics charts
-Use this article if you run into issues with creating, customizing, or interpreting charts in Azure metrics explorer. If you are new to metrics, learn about [getting started with metrics explorer](metrics-getting-started.md) and [advanced features of metrics explorer](../essentials/metrics-charts.md). You can also see [examples](../essentials/metric-chart-samples.md) of the configured metric charts.
+Use this article if you run into issues with creating, customizing, or interpreting charts in Azure metrics explorer. If you're new to metrics, learn about [getting started with metrics explorer](metrics-getting-started.md) and [advanced features of metrics explorer](../essentials/metrics-charts.md). You can also see [examples](../essentials/metric-chart-samples.md) of the configured metric charts.
## Chart shows no data
Sometimes the charts might show no data after selecting correct resources and me
### Microsoft.Insights resource provider isn't registered for your subscription
-Exploring metrics requires *Microsoft.Insights* resource provider registered in your subscription. In many cases, it is registered automatically (that is, after you configure an alert rule, customize diagnostic settings for any resource, or configure an autoscale rule). If the Microsoft.Insights resource provider is not registered, you must manually register it by following steps described in [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
+Exploring metrics requires the *Microsoft.Insights* resource provider to be registered in your subscription. In many cases, it's registered automatically (that is, after you configure an alert rule, customize diagnostic settings for any resource, or configure an autoscale rule). If the Microsoft.Insights resource provider isn't registered, you must manually register it by following steps described in [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
**Solution:** Open **Subscriptions**, **Resource providers** tab, and verify that *Microsoft.Insights* is registered for your subscription.
Exploring metrics requires *Microsoft.Insights* resource provider registered in
In Azure, access to metrics is controlled by [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). You must be a member of [monitoring reader](../../role-based-access-control/built-in-roles.md#monitoring-reader), [monitoring contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor), or [contributor](../../role-based-access-control/built-in-roles.md#contributor) to explore metrics for any resource.
-**Solution:** Ensure that you have sufficient permissions for the resource from which you are exploring metrics.
+**Solution:** Ensure that you have sufficient permissions for the resource from which you're exploring metrics.
### Your resource didn't emit metrics during the selected time range
-Some resources donΓÇÖt constantly emit their metrics. For example, Azure will not collect metrics for stopped virtual machines. Other resources might emit their metrics only when some condition occurs. For example, a metric showing processing time of a transaction requires at least one transaction. If there were no transactions in the selected time range, the chart will naturally be empty. Additionally, while most of the metrics in Azure are collected every minute, there are some that are collected less frequently. See the metric documentation to get more details about the metric that you are trying to explore.
+Some resources don't constantly emit their metrics. For example, Azure won't collect metrics for stopped virtual machines. Other resources might emit their metrics only when some condition occurs. For example, a metric showing processing time of a transaction requires at least one transaction. If there were no transactions in the selected time range, the chart will naturally be empty. Additionally, while most of the metrics in Azure are collected every minute, there are some that are collected less frequently. See the metric documentation to get more details about the metric that you're trying to explore.
**Solution:** Change the time of the chart to a wider range. You may start from "Last 30 days" using a larger time granularity (or relying on the "Automatic time granularity" option).
By [locking the boundaries of chart y-axis](../essentials/metrics-charts.md#lock
**Solution:** Verify that the y-axis boundaries of the chart aren't locked outside of the range of the metric values. If the y-axis boundaries are locked, you may want to temporarily reset them to ensure that the metric values don't fall outside of the chart range. Locking the y-axis range isn't recommended with automatic granularity for the charts with **sum**, **min**, and **max** aggregation because their values will change with granularity by resizing browser window or going from one screen resolution to another. Switching granularity may leave the display area of your chart empty.
-### You are looking at a Guest (classic) metric but didnΓÇÖt enable Azure Diagnostic Extension
+### You're looking at a Guest (classic) metric but didn't enable Azure Diagnostic Extension
Collection of **Guest (classic)** metrics requires configuring the Azure Diagnostics Extension or enabling it using the **Diagnostic Settings** panel for your resource.
-**Solution:** If Azure Diagnostics Extension is enabled but you are still unable to see your metrics, follow steps outlined in [Azure Diagnostics Extension troubleshooting guide](../agents/diagnostics-extension-troubleshooting.md#metric-data-doesnt-appear-in-the-azure-portal). See also the troubleshooting steps for [Cannot pick Guest (classic) namespace and metrics](#cannot-pick-guest-namespace-and-metrics)
+**Solution:** If Azure Diagnostics Extension is enabled but you're still unable to see your metrics, follow steps outlined in [Azure Diagnostics Extension troubleshooting guide](../agents/diagnostics-extension-troubleshooting.md#metric-data-doesnt-appear-in-the-azure-portal). See also the troubleshooting steps for [Cannot pick Guest (classic) namespace and metrics](#cannot-pick-guest-namespace-and-metrics)
### Chart is segmented by a property that the metric doesn't define
Filters apply to all of the charts on the pane. If you set a filter on another c
## "Error retrieving data" message on dashboard
-This problem may happen when your dashboard was created with a metric that was later deprecated and removed from Azure. To verify that it is the case, open the **Metrics** tab of your resource, and check the available metrics in the metric picker. If the metric is not shown, the metric has been removed from Azure. Usually, when a metric is deprecated, there is a better new metric that provides with a similar perspective on the resource health.
+This problem may happen when your dashboard was created with a metric that was later deprecated and removed from Azure. To verify that this is the case, open the **Metrics** tab of your resource, and check the available metrics in the metric picker. If the metric isn't shown, the metric has been removed from Azure. Usually, when a metric is deprecated, there's a better new metric that provides a similar perspective on the resource health.
**Solution:** Update the failing tile by picking an alternative metric for your chart on the dashboard. You can [review a list of available metrics for Azure services](./metrics-supported.md).

## Chart shows dashed line
-Azure metrics charts use dashed line style to indicate that there is a missing value (also known as ΓÇ£null valueΓÇ¥) between two known time grain data points. For example, if in the time selector you picked ΓÇ£1 minuteΓÇ¥ time granularity but the metric was reported at 07:26, 07:27, 07:29, and 07:30 (note a minute gap between second and third data points), then a dashed line will connect 07:27 and 07:29 and a solid line will connect all other data points. The dashed line drops down to zero when the metric uses **count** and **sum** aggregation. For the **avg**, **min** or **max** aggregations, the dashed line connects two nearest known data points. Also, when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.
+Azure metrics charts use dashed line style to indicate that there's a missing value (also known as "null value") between two known time grain data points. For example, if in the time selector you picked "1 minute" time granularity but the metric was reported at 07:26, 07:27, 07:29, and 07:30 (note a minute gap between second and third data points), then a dashed line will connect 07:27 and 07:29 and a solid line will connect all other data points. The dashed line drops down to zero when the metric uses **count** and **sum** aggregation. For the **avg**, **min** or **max** aggregations, the dashed line connects two nearest known data points. Also, when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.
![Screenshot that shows how when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.](./media/metrics-troubleshoot/dashed-line.png)
-**Solution:** This behavior is by design. It is useful for identifying missing data points. The line chart is a superior choice for visualizing trends of high-density metrics but may be difficult to interpret for the metrics with sparse values, especially when corelating values with time grain is important. The dashed line makes reading of these charts easier but if your chart is still unclear, consider viewing your metrics with a different chart type. For example, a scattered plot chart for the same metric clearly shows each time grain by only visualizing a dot when there is a value and skipping the data point altogether when the value is missing:
+**Solution:** This behavior is by design. It's useful for identifying missing data points. The line chart is a superior choice for visualizing trends of high-density metrics but may be difficult to interpret for the metrics with sparse values, especially when correlating values with time grain is important. The dashed line makes reading of these charts easier but if your chart is still unclear, consider viewing your metrics with a different chart type. For example, a scatter plot chart for the same metric clearly shows each time grain by only visualizing a dot when there's a value and skipping the data point altogether when the value is missing:
![Screenshot that highlights the Scatter chart menu option.](./media/metrics-troubleshoot/scatter-plot.png) > [!NOTE]
Azure metrics charts use dashed line style to indicate that there is a missing v
## Units of measure in metrics charts

Azure Monitor metrics uses SI-based prefixes. Metrics use IEC prefixes only if the resource provider has chosen an appropriate unit for a metric.
-For ex: The resource provider Network interface (resource name: rarana-vm816) has no metric unit defined for "Packets Sent". The prefix used for the metric value here is k representing kilo (1000), a SI prefix.
+For example, the resource provider Network interface (resource name: rarana-vm816) has no metric unit defined for "Packets Sent". The prefix used for the metric value here is k, representing kilo (1000), an SI prefix.
![Screenshot that shows metric value with prefix kilo.](./media/metrics-troubleshoot/prefix-si.png)

The resource provider Storage account (resource name: ibabichvm) has a metric unit defined for "Blob Capacity" as bytes. Hence, the prefix used is mebi (1024^2), an IEC prefix.
IEC uses binary
In many cases, the perceived drop in the metric values is a misunderstanding of the data shown on the chart. You can be misled by a drop in sums or counts when the chart shows the most-recent minutes because the last metric data points haven't been received or processed by Azure yet. Depending on the service, the latency of processing metrics can be within a couple of minutes. For charts showing a recent time range with a 1- or 5-minute granularity, a drop of the value over the last few minutes becomes more noticeable:

![Screenshot that shows a drop of the value over the last few minutes.](./media/metrics-troubleshoot/unexpected-dip.png)
-**Solution:** This behavior is by design. We believe that showing data as soon as we receive it is beneficial even when the data is *partial* or *incomplete*. Doing so allows you to make important conclusion sooner and start investigation right away. For example, for a metric that shows the number of failures, seeing a partial value X tells you that there were at least X failures on a given minute. You can start investigating the problem right away, rather than wait to see the exact count of failures that happened on this minute, which might not be as important. The chart will update once we receive the entire set of data, but at that time it may also show new incomplete data points from more recent minutes.
+**Solution:** This behavior is by design. We believe that showing data as soon as we receive it is beneficial even when the data is *partial* or *incomplete*. Doing so allows you to draw important conclusions sooner and start investigating right away. For example, for a metric that shows the number of failures, seeing a partial value X tells you that there were at least X failures on a given minute. You can start investigating the problem right away, rather than wait to see the exact count of failures that happened on this minute, which might not be as important. The chart will update once we receive the entire set of data, but at that time it may also show new incomplete data points from more recent minutes.
## Cannot pick Guest namespace and metrics Virtual machines and virtual machine scale sets have two categories of metrics: **Virtual Machine Host** metrics that are collected by the Azure hosting environment, and **Guest (classic)** metrics that are collected by the [monitoring agent](../agents/agents-overview.md) running on your virtual machines. You install the monitoring agent by enabling [Azure Diagnostic Extension](../agents/diagnostics-extension-overview.md).
-By default, Guest (classic) metrics are stored in Azure Storage account, which you pick from the **Diagnostic settings** tab of your resource. If Guest metrics aren't collected or metrics explorer cannot access them, you will only see the **Virtual Machine Host** metric namespace:
+By default, Guest (classic) metrics are stored in Azure Storage account, which you pick from the **Diagnostic settings** tab of your resource. If Guest metrics aren't collected or metrics explorer cannot access them, you'll only see the **Virtual Machine Host** metric namespace:
![metric image](./media/metrics-troubleshoot/vm-metrics.png)
azure-monitor Platform Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md
Title: Overview of Azure platform logs | Microsoft Docs description: Overview of logs in Azure Monitor, which provide rich, frequent data about the operation of an Azure resource.-++ Previously updated : 12/19/2019 Last updated : 07/31/2023 # Overview of Azure platform logs
-Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. Although they're automatically generated, you need to configure certain platform logs to be forwarded to one or more destinations to be retained. This article provides an overview of platform logs including what information they provide and how you can configure them for collection and analysis.
+Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. Platform logs are automatically generated. This article provides an overview of platform logs including the information they provide, and how to configure them for collection and analysis.
## Types of platform logs
-The following table lists the specific platform logs that are available at different layers of Azure.
+The following table lists the platform logs that are available at different layers within Azure.
| Log | Layer | Description | |:|:|:|
-| [Resource logs](./resource-logs.md) | Azure Resources | Provide insight into operations that were performed within an Azure resource (the *data plane*). Examples might be getting a secret from a key vault or making a request to a database. The content of resource logs varies by the Azure service and resource type.<br><br>*Resource logs were previously referred to as diagnostic logs.* |
-| [Activity log](../essentials/activity-log.md) | Azure Subscription | Provides insight into the operations on each Azure resource in the subscription from the outside (the *management plane*) in addition to updates on Service Health events. Use the Activity log to determine the _what_, _who_, and _when_ for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. There's a single activity log for each Azure subscription. |
-| [Azure Active Directory (Azure AD) logs](../../active-directory/reports-monitoring/overview-reports.md) | Azure Tenant | Contain the history of sign-in activity and audit trail of changes made in Azure AD for a particular tenant. |
+| [Resource logs](./resource-logs.md) | Azure Resources | Resource logs provide an insight into operations that were performed within an Azure resource. This is known as the *data plane*. Examples include getting a secret from a key vault, or making a request to a database. The contents of resource logs vary according to the Azure service and resource type.<br><br>*Resource logs were previously referred to as diagnostic logs.* |
+| [Activity logs](../essentials/activity-log.md) | Azure Subscription | Activity logs provide an insight into the operations performed *on* each Azure resource in the subscription from the outside, known as the *management plane*, in addition to updates on Service Health events. Use the Activity log to determine *what*, *who*, and *when* for any write operation (PUT, POST, DELETE) executed on the resources in your subscription. There's a single activity log for each Azure subscription. |
+| [Azure Active Directory (Azure AD) logs](../../active-directory/reports-monitoring/overview-reports.md) | Azure Tenant | Azure Active Directory logs contain the history of sign-in activity and an audit trail of changes made in Azure AD for a particular tenant. |
> [!NOTE]
-> The Azure activity log is primarily for activities that occur in Azure Resource Manager. It doesn't track resources by using the classic/RDFE model. Some classic resource types have a proxy resource provider in Resource Manager (for example, Microsoft.ClassicCompute). If you interact with a classic resource type through Resource Manager by using these proxy resource providers, the operations appear in the activity log. If you interact with a classic resource type outside of the Resource Manager proxies, your actions are only recorded in the Operation log. The Operation log can be browsed in a separate section of the portal.
+> The Azure activity log is primarily for activities that occur in Azure Resource Manager. The activity log doesn't track resources by using the classic/RDFE model. Some classic resource types have a proxy resource provider in Resource Manager, for example, Microsoft.ClassicCompute. If you interact with a classic resource type through Resource Manager by using these proxy resource providers, the operations appear in the activity log. If you interact with a classic resource type outside of the Resource Manager proxies, your actions are only recorded in the Operation log. The [Operation log](https://portal.azure.com/?Microsoft_Azure_Monitoring_Log=#view/Microsoft_Azure_Resources/OperationLogsBlade) can be browsed in a separate section of the portal.
-![Diagram that shows a platform logs overview.](media/platform-logs-overview/logs-overview.png)
## View platform logs

There are different options for viewing and analyzing the different Azure platform logs:

-- View the activity log in the Azure portal and access events from PowerShell and the Azure CLI. See [View the activity log](../essentials/activity-log.md#view-the-activity-log) for details.
+- View the activity log using the Azure portal and access events from PowerShell and the Azure CLI. See [View the activity log](../essentials/activity-log.md#view-the-activity-log) for details.
- View Azure AD security and activity reports in the Azure portal. See [What are Azure AD reports?](../../active-directory/reports-monitoring/overview-reports.md) for details.
-- Resource logs are automatically generated by supported Azure resources. They aren't available to be viewed unless you create a [diagnostic setting](#diagnostic-settings).
+- Resource logs are automatically generated by supported Azure resources. You must create a [diagnostic setting](#diagnostic-settings) for the resource to store and view the log.
## Diagnostic settings
-Create a [diagnostic setting](../essentials/diagnostic-settings.md) to send platform logs to one of the following destinations for analysis or other purposes. Resource logs must have a diagnostic setting to be used because they have no other way of being viewed.
+Resource logs must have a diagnostic setting to be viewed. Create a [diagnostic setting](../essentials/diagnostic-settings.md) to send platform logs to one of the following destinations for analysis or other purposes.
| Destination | Description |
|:---|:---|
| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. |
-| Event hub | Send platform log data outside of Azure, for example, to a third-party SIEM or custom telemetry platform. |
-| Azure Storage | Archive the logs for audit or backup. |
-| [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you're already using one of the partners. |
+| Event hub | Send platform log data outside of Azure, for example, to a third-party SIEM or custom telemetry platform, via Event Hubs. |
+| Azure Storage | Archive the logs to Azure storage for audit or backup. |
+| [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Partner integrations are specialized integrations between Azure Monitor and non-Microsoft monitoring platforms. Partner integrations are especially useful when you're already using one of the supported partners. |
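As a minimal sketch of creating a diagnostic setting outside the portal, the following Azure CLI command sends a resource's logs and metrics to a Log Analytics workspace. The resource ID, workspace ID, and setting name are placeholders, and the `allLogs` category group assumes the resource type supports it:

```azurecli
az monitor diagnostic-settings create \
  --name send-to-workspace \
  --resource "<resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"categoryGroup":"allLogs","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```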
- For details on how to create a diagnostic setting for activity logs or resource logs, see [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
- For details on how to create a diagnostic setting for Azure AD logs, see the following articles:
Create a [diagnostic setting](../essentials/diagnostic-settings.md) to send plat
## Pricing model
-Processing data to stream logs is charged for [certain services](resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace. There is a Log Analytics charge for ingesting the data into a workspace.
-
-The charge is based on the number of bytes in the exported JSON-formatted log data, measured in GB (10^9 bytes).
+Processing data to stream logs is charged for [certain services](resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace.
+While there's no direct charge when this data is sent from the resource to a Log Analytics workspace, there's a Log Analytics charge for ingesting the data into a workspace. The charge is based on the number of bytes in the exported JSON-formatted log data, measured in GB (10^9 bytes).
+
Pricing is available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). ## Next steps
azure-monitor Prometheus Metrics Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-disable.md
+
+ Title: Disable collecting Prometheus metrics on an Azure Kubernetes Service cluster
+description: Disable the collection of Prometheus metrics from an Azure Kubernetes Service cluster and remove the agent from the cluster nodes.
++++ Last updated : 07/30/2023+++
+# Disable Prometheus metrics collection from an AKS cluster
+
+Currently, the Azure CLI is the only option to remove the metrics add-on from your AKS cluster, and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
+
+The `az aks update --disable-azure-monitor-metrics` command:
+
++ Removes the agent from the cluster nodes.
++ Deletes the recording rules created for that cluster.
++ Deletes the data collection endpoint (DCE).
++ Deletes the data collection rule (DCR).
++ Deletes the DCRA and recording rules groups created as part of onboarding.
+
+> [!NOTE]
+> This action doesn't remove any existing data stored in your Azure Monitor workspace.
+
+```azurecli
+az aks update --disable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
+```
+
+## Next steps
+
+- [See the default configuration for Prometheus metrics](./prometheus-metrics-scrape-default.md)
+- [Customize Prometheus metric scraping for the cluster](./prometheus-metrics-scrape-configuration.md)
+- [Use Azure Monitor managed service for Prometheus as the data source for Grafana](./prometheus-grafana.md)
+- [Configure self-hosted Grafana to use Azure Monitor managed service for Prometheus](./prometheus-self-managed-grafana-azure-active-directory.md)
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
Previously updated : 01/24/2022 Last updated : 07/30/2023
If you're using an existing Azure Managed Grafana instance that's already linked
] } }
- ````
+ ```
In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON. They're added here to the Azure Resource Manager template (ARM template). If you have no existing Grafana integrations, don't include these entries for `full_resource_id_1` and `full_resource_id_2`.
If you're using an existing Azure Managed Grafana instance that's already linked
] } }
- ````
+ ```
In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON. They're added here to the ARM template. If you have no existing Grafana integrations, don't include these entries for `full_resource_id_1` and `full_resource_id_2`.
Deploy the template with the parameter file by using any valid method for deploy
### Limitations during enablement/deployment -- Ensure that you update the `kube-state metrics` annotations and labels list with proper formatting. There's a limitation in the ARM template deployments that require exact values in the `kube-state` metrics pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature might not as expected.
+- Ensure that you update the `kube-state metrics` annotations and labels list with proper formatting. There's a limitation in the ARM template deployments that require exact values in the `kube-state` metrics pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, the feature might not run as expected.
- A data collection rule and data collection endpoint are created with the name `MSProm-\<short-cluster-region\>-\<cluster-name\>`. Currently, these names can't be modified. - You must get the existing Azure Monitor workspace integrations for a Grafana instance and update the ARM template with it. Otherwise, the ARM deployment gets over-written, which removes existing integrations.
The following table lists the firewall configuration required for Azure monitor
| `*.handler.control.monitor.azure.us` | For querying data collection rules | 443 | ## Uninstall the metrics add-on
-Currently, the Azure CLI is the only option to remove the metrics add-on and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus. Use the following command to remove the agent from the cluster nodes and delete the recording rules created for that cluster. This will also delete the data collection endpoint (DCE), data collection rule (DCR), DCRA and recording rules groups created as part of onboarding. . This action doesn't remove any existing data stored in your Azure Monitor workspace.
-```azurecli
-az aks update --disable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
-```
+To uninstall the metrics add-on, see [Disable Prometheus metrics collection on an AKS cluster.](./prometheus-metrics-disable.md)
## Supported regions
azure-monitor Data Ingestion Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-ingestion-time.md
After the data is available at the data collection endpoint, it takes another 30
After log records are ingested into the Azure Monitor pipeline (as identified in the [_TimeReceived](./log-standard-columns.md#_timereceived) property), they're written to temporary storage to ensure tenant isolation and to make sure that data isn't lost. This process typically adds 5 to 15 seconds.
-Some management solutions implement heavier algorithms to aggregate data and derive insights as data is streaming in. For example, Azure Network Performance Monitoring aggregates incoming data over 3-minute intervals, which effectively adds 3-minute latency.
+Some solutions implement heavier algorithms to aggregate data and derive insights as data is streaming in. For example, Application Insights calculates application map data; Azure Network Performance Monitoring aggregates incoming data over 3-minute intervals, which effectively adds 3-minute latency.
Another process that adds latency is the process that handles custom logs. In some cases, this process might add a few minutes of latency to logs that are collected from files by the agent.
azure-netapp-files Azacsnap Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-tips.md
na Previously updated : 05/04/2023 Last updated : 07/30/2023
AzAcSnap 8 introduced a new global settings file (`.azacsnaprc`) which must be l
Settings, which can be controlled by adding/editing the global settings file are: -- **MAINLOG_LOCATION** which sets the location of the "mainlog" output file, which is called `azacsnap.log` and was introduced in AzAcSnap 8. Values should be absolute paths, for example:
+- **MAINLOG_LOCATION** which sets the location of the "main-log" output file, which is called `azacsnap.log` and was introduced in AzAcSnap 8. Values should be absolute paths, for example:
- `MAINLOG_LOCATION=/home/azacsnap/bin/logs`
-## Mainlog parsing
+## Main-log parsing
-AzAcSnap 8 introduced a new "mainlog" to provide simpler parsing of runs of AzAcSnap. The inspiration for this file is the SAP HANA backup catalog, which shows when AzAcSnap was started, how long it took, and what the snapshot name is. With AzAcSnap, this idea has been taken further to include information for each of the AzAcSnap commands, specifically the `-c` options, and the file has the following headers:
+AzAcSnap 8 introduced a new "main-log" to provide simpler parsing of runs of AzAcSnap. The inspiration for this file is the SAP HANA backup catalog, which shows when AzAcSnap was started, how long it took, and what the snapshot name is. With AzAcSnap, this idea has been taken further to include information for each of the AzAcSnap commands, specifically the `-c` options, and the file has the following headers:
```output DATE_TIME,OPERATION_NAME,STATUS,SID,DATABASE_TYPE,DURATION,SNAPSHOT_NAME,AZACSNAP_VERSION,AZACSNAP_CONFIG_FILE,VOLUME
This format makes the file parse-able with the Linux commands `watch`, `grep`, `
# Monitor execution of AzAcSnap backup commands
#
# These values can be modified as appropriate.
-HEADER_VALUES_TO_EXCLUDE="AZACSNAP_VERSION,VOLUME,AZACSNAP_CONFIG_FILE"
+# Mainlog header fields:
+# 1. DATE_TIME,
+# 2. OPERATION_NAME,
+# 3. STATUS,
+# 4. SID,
+# 5. DATABASE_TYPE,
+# 6. DURATION,
+# 7. SNAPSHOT_NAME,
+# 8. AZACSNAP_VERSION,
+# 9. AZACSNAP_CONFIG_FILE,
+# 10. VOLUME
+FIELDS_TO_INCLUDE="1,2,3,4,5,6,7"
SCREEN_REFRESH_SECS=2
#
# Use AzAcSnap global settings file (.azacsnaprc) if available,
echo "Changing current working directory to ${MAINLOG_LOCATION}"
# Default MAINLOG filename.
MAINLOG_FILENAME="azacsnap.log"
#
+echo "Parsing '${MAINLOG_FILENAME}'"
# High-level explanation of how the commands are used.
# `watch` - continuously monitoring the command output.
# `column` - provide pretty output.
watch -t -n ${SCREEN_REFRESH_SECS} \
echo -n "Monitoring AzAcSnap @ "; \ date ; \ echo ; \
- column -N"$(head -n1 ${MAINLOG_FILENAME})" \
- -d -H "${HEADER_VALUES_TO_EXCLUDE}" \
- -s"," -t ${MAINLOG_FILENAME} \
- | head -n1 ; \
- grep -e "DATE" -e "backup" ${MAINLOG_FILENAME} \
- | column -N"$(head -n1 ${MAINLOG_FILENAME})" \
- -d -H "${HEADER_VALUES_TO_EXCLUDE}" \
- -s"," -t \
- | tail -n +2 \
- | tail -n 12 \
+ cat ${MAINLOG_FILENAME} \
+ | grep -e "DATE" -e ",backup," \
+ | ( sleep 1; head -n1 - ; sleep 1; tail -n+2 - | tail -n20; sleep 1 ) \
+ | cut -f${FIELDS_TO_INCLUDE} -d"," | column -s"," -t
" ```
compress
} ```
-After creating the `logrotate.conf` file, the `logrotate` command should be run regularly to archive AzAcSnap log files accordingly. Automating the `logrotate` command can be done using cron. The following output is one line of the azacsnap user's crontab, this example runs logrotate daily using the configuration file `~/logrotate.conf`.
+After the `logrotate.conf` file has been created, the `logrotate` command should be run regularly to archive AzAcSnap log files accordingly. Automating the `logrotate` command can be done using cron. The following output is one line of the azacsnap user's crontab, this example runs logrotate daily using the configuration file `~/logrotate.conf`.
```output @daily /usr/sbin/logrotate -s ~/logrotate.state ~/logrotate.conf >> ~/logrotate.log
ls -ltra ~/bin/logs
The following conditions should be monitored to ensure a healthy system:

1. Available disk space. Snapshots slowly consume disk space based on the block-level change rate, as older disk blocks are retained in the snapshot.
- 1. To help automate disk space management, use the `--retention` and `--trim` options to automatically cleanup the old snapshots and database log files.
+ 1. To help automate disk space management, use the `--retention` and `--trim` options to automatically clean up the old snapshots and database log files (see the example after this list).
1. Successful execution of the snapshot tools
   1. Check the `*.result` file for the success or failure of the latest run of `azacsnap`.
   1. Check `/var/log/messages` for output from the `azacsnap` command.
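As referenced in the list above, the following is a simple sketch of combining these options in a scheduled backup. The volume type, prefix, and retention count shown are placeholders to adapt to your own configuration:

```bash
# Take a snapshot of the data volumes, keep only the seven most recent snapshots
# that use the 'daily' prefix, and trim older database log files.
azacsnap -c backup --volume data --prefix daily --retention 7 --trim
```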
A 'boot' snapshot can be recovered as follows:
1. The customer needs to shut down the server.
1. After the Server is shut down, the customer will need to open a service request that contains the Machine ID and Snapshot to restore.
   > Customers can open a service request via the [Azure portal](https://portal.azure.com).
-1. Microsoft restores the Operating System LUN using the specified Machine ID and Snapshot, and then boot the Server.
+1. Microsoft restores the Operating System LUN using the specified Machine ID and Snapshot, and then boots the Server.
1. The customer then needs to confirm that the Server is booted and healthy. No other steps need to be performed after the restore.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* Australia Southeast * Brazil South * Canada Central
+* Canada East
* Central India * Central US * East Asia
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Title: Configure customer-managed keys for Azure NetApp Files volume encryption | Microsoft Docs
-description: Describes how to configure customer-managed keys for Azure NetApp Files volume encryption.
+description: Describes how to configure customer-managed keys for Azure NetApp Files volume encryption.
documentationcenter: ''
na -+ Last updated 05/03/2023 # Configure customer-managed keys for Azure NetApp Files volume encryption
-Customer-managed keys in Azure NetApp Files volume encryption enable you to use your own keys rather than a Microsoft-managed key when creating a new volume. With customer-managed keys, you can fully manage the relationship between a key's life cycle, key usage permissions, and auditing operations on keys.
+Customer-managed keys in Azure NetApp Files volume encryption enable you to use your own keys rather than a Microsoft-managed key when creating a new volume. With customer-managed keys, you can fully manage the relationship between a key's life cycle, key usage permissions, and auditing operations on keys.
The following diagram demonstrates how customer-managed keys work with Azure NetApp Files:
The following diagram demonstrates how customer-managed keys work with Azure Net
## Considerations

> [!IMPORTANT]
-> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Customer-managed keys feature is expected to be enabled within a week after you submit the waitlist request. You can check the status of feature registration by using the following command:
+> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Customer-managed keys feature is expected to be enabled within a week after you submit the waitlist request. You can check the status of feature registration by using the following command:
> > ```azurepowershell-interactive
-> Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAzureKeyVaultEncryption
->
-> FeatureName ProviderName RegistrationState
-> -- --
+> Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAzureKeyVaultEncryption
+>
+> FeatureName ProviderName RegistrationState
+> -- --
> ANFAzureKeyVaultEncryption Microsoft.NetApp Registered > ```
-* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption.
+* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption.
* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volumes configured using Basic network features. Follow the instructions in [Set the Network Features option](configure-network-features.md#set-the-network-features-option) on the volume creation page.
* For increased security, you can select the **Disable public access** option within the network settings of your key vault. When selecting this option, you must also select **Allow trusted Microsoft services to bypass this firewall** to permit the Azure NetApp Files service to access your encryption key.
-* MSI Automatic certificate renewal isn't currently supported. It is recommended to set up an Azure monitor alert for when the MSI certificate is going to expire.
+* MSI Automatic certificate renewal isn't currently supported. It is recommended to set up an Azure monitor alert for when the MSI certificate is going to expire.
* The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer valid and the customer-managed key volumes under the NetApp account will go offline.**
- * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message communicates the date of eligibility.
+ * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message communicates the date of eligibility.
* Version 2.42 or later of the Azure CLI supports running the `renewCredentials` operation with the [az netappfiles account command](/cli/azure/netappfiles/account#az-netappfiles-account-renew-credentials). For example:
-
+ `az netappfiles account renew-credentials --account-name myaccount --resource-group myresourcegroup`
* If the account isn't eligible for MSI certificate renewal, an error message communicates the date and time when the account is eligible. It's recommended you run this operation periodically (for example, daily) to prevent the certificate from expiring and the customer-managed key volume from going offline.
-* Applying Azure network security groups on the private link subnet to Azure Key Vault isn't supported for Azure NetApp Files customer-managed keys. Network security groups don't affect connectivity to Private Link unless `Private endpoint network policy` is enabled on the subnet. It's recommended to keep this option disabled.
-* If Azure NetApp Files fails to create a customer-managed key volume, error messages are displayed. Refer to the [Error messages and troubleshooting](#error-messages-and-troubleshooting) section for more information.
+* Applying Azure network security groups on the private link subnet to Azure Key Vault isn't supported for Azure NetApp Files customer-managed keys. Network security groups don't affect connectivity to Private Link unless `Private endpoint network policy` is enabled on the subnet. It's recommended to keep this option disabled.
+* If Azure NetApp Files fails to create a customer-managed key volume, error messages are displayed. Refer to the [Error messages and troubleshooting](#error-messages-and-troubleshooting) section for more information.
* If Azure Key Vault becomes inaccessible, Azure NetApp Files loses its access to the encryption keys and the ability to read or write data to volumes enabled with customer-managed keys. In this situation, create a support ticket to have access manually restored for the affected volumes.
-* Azure NetApp Files supports customer-managed keys on source and data replication volumes with cross-region replication or cross-zone replication relationships.
+* Azure NetApp Files supports customer-managed keys on source and data replication volumes with cross-region replication or cross-zone replication relationships.
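As referenced in the renewal note above, the renewal attempt can be scheduled so it isn't forgotten. The following Azure CLI sketch shows one way to run it daily (for example, from a cron job or an automation runbook); the account and resource group names are placeholders, not values from this article:

```azurecli
# Attempt MSI certificate renewal once per day. If the account isn't yet
# eligible, the command fails with a message that includes the eligibility
# date; that failure is harmless and the next scheduled run tries again.
az netappfiles account renew-credentials \
  --account-name myaccount \
  --resource-group myresourcegroup
```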
-## Supported regions
+## Supported regions
-Azure NetApp Files customer-managed keys is supported for the following regions:
+Azure NetApp Files customer-managed keys is supported for the following regions:
* Australia Central * Australia Central 2 * Australia East * Australia Southeast * Brazil South
-* Canada Central
+* Canada Central
* Central US * East Asia * East US * East US 2
-* France Central
+* France Central
* Germany North * Germany West Central * Japan East
Azure NetApp Files customer-managed keys is supported for the following regions:
## Requirements
-Before creating your first customer-managed key volume, you must have set up:
-* An [Azure Key Vault](../key-vault/general/overview.md), containing at least one key.
- * The key vault must have soft delete and purge protection enabled.
- * The key must be of type RSA.
+Before creating your first customer-managed key volume, you must have set up the following (a CLI sketch of this setup appears after the list):
+* An [Azure Key Vault](../key-vault/general/overview.md), containing at least one key.
+ * The key vault must have soft delete and purge protection enabled.
+ * The key must be of type RSA.
* The key vault must have an [Azure Private Endpoint](../private-link/private-endpoint-overview.md).
- * The private endpoint must reside in a different subnet than the one delegated to Azure NetApp Files. The subnet must be in the same VNet as the one delegated to Azure NetApp.
+    * The private endpoint must reside in a different subnet than the one delegated to Azure NetApp Files. The subnet must be in the same VNet as the one delegated to Azure NetApp Files.
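The following Azure CLI sketch shows one way to satisfy these prerequisites: a key vault with purge protection (soft delete is enabled by default on new vaults), an RSA key, and a private endpoint in a subnet other than the one delegated to Azure NetApp Files. All names, and the virtual network and subnet, are placeholders for illustration only:

```azurecli
# Key vault with purge protection; soft delete is on by default for new vaults.
az keyvault create \
  --name my-anf-kv \
  --resource-group myresourcegroup \
  --enable-purge-protection true

# RSA key used for volume encryption.
az keyvault key create \
  --vault-name my-anf-kv \
  --name my-anf-key \
  --kty RSA

# Private endpoint for the key vault in a subnet that is not delegated to
# Azure NetApp Files, but in the same VNet as the delegated subnet.
az network private-endpoint create \
  --name my-anf-kv-pe \
  --resource-group myresourcegroup \
  --vnet-name my-vnet \
  --subnet pe-subnet \
  --private-connection-resource-id "$(az keyvault show --name my-anf-kv --query id -o tsv)" \
  --group-id vault \
  --connection-name my-anf-kv-pe-connection
```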
For more information about Azure Key Vault and Azure Private Endpoint, refer to:
-* [Quickstart: Create a key vault ](../key-vault/general/quick-create-portal.md)
+* [Quickstart: Create a key vault](../key-vault/general/quick-create-portal.md)
* [Create or import a key into the vault](../key-vault/keys/quick-create-portal.md) * [Create a private endpoint](../private-link/create-private-endpoint-portal.md) * [More about keys and supported key types](../key-vault/keys/about-keys.md)
For more information about Azure Key Vault and Azure Private Endpoint, refer to:
:::image type="content" source="../media/azure-netapp-files/encryption-menu.png" alt-text="Screenshot of the encryption menu." lightbox="../media/azure-netapp-files/encryption-menu.png":::
-1. When you set your NetApp account to use customer-managed key, you have two ways to specify the Key URI:
- * The **Select from key vault** option allows you to select a key vault and a key.
+1. When you set your NetApp account to use customer-managed key, you have two ways to specify the Key URI:
+ * The **Select from key vault** option allows you to select a key vault and a key.
:::image type="content" source="../media/azure-netapp-files/select-key.png" alt-text="Screenshot of the select a key interface." lightbox="../media/azure-netapp-files/select-key.png":::
-
- * The **Enter key URI** option allows you to enter manually the key URI.
+
+    * The **Enter key URI** option allows you to manually enter the key URI.
:::image type="content" source="../media/azure-netapp-files/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="../media/azure-netapp-files/key-enter-uri.png"::: 1. Select the identity type that you want to use for authentication to the Azure Key Vault. If your Azure Key Vault is configured to use Vault access policy as its permission model, both options are available. Otherwise, only the user-assigned option is available.
For more information about Azure Key Vault and Azure Private Endpoint, refer to:
:::image type="content" source="../media/azure-netapp-files/encryption-system-assigned.png" alt-text="Screenshot of the encryption menu with system-assigned options." lightbox="../media/azure-netapp-files/encryption-system-assigned.png":::
- * If you choose **User-assigned**, you must select an identity. Choose **Select an identity** to open a context pane where you select a user-assigned managed identity.
+ * If you choose **User-assigned**, you must select an identity. Choose **Select an identity** to open a context pane where you select a user-assigned managed identity.
:::image type="content" source="../media/azure-netapp-files/encryption-user-assigned.png" alt-text="Screenshot of user-assigned submenu." lightbox="../media/azure-netapp-files/encryption-user-assigned.png":::
-
- If you've configured your Azure Key Vault to use Vault access policy, the Azure portal configures the NetApp account automatically with the following process: The user-assigned identity you select is added to your NetApp account. An access policy is created on your Azure Key Vault with the key permissions Get, Encrypt, Decrypt.
+
+      If you've configured your Azure Key Vault to use Vault access policy, the Azure portal configures the NetApp account automatically: the user-assigned identity you select is added to your NetApp account, and an access policy is created on your Azure Key Vault with the key permissions Get, Encrypt, and Decrypt.
    If you've configured your Azure Key Vault to use Azure role-based access control, you need to make sure the selected user-assigned identity has a role assignment on the key vault with permissions for data actions: * `Microsoft.KeyVault/vaults/keys/read` * `Microsoft.KeyVault/vaults/keys/encrypt/action` * `Microsoft.KeyVault/vaults/keys/decrypt/action`
- The user-assigned identity you select is added to your NetApp account. Due to the customizable nature of role-based access control (RBAC), the Azure portal doesn't configure access to the key vault. See [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../key-vault/general/rbac-guide.md) for details on configuring Azure Key Vault.
+ The user-assigned identity you select is added to your NetApp account. Due to the customizable nature of role-based access control (RBAC), the Azure portal doesn't configure access to the key vault. See [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../key-vault/general/rbac-guide.md) for details on configuring Azure Key Vault.
-1. After selecting **Save** button, you'll receive a notification communicating the status of the operation. If the operation was not successful, an error message displays. Refer to [error messages and troubleshooting](#error-messages-and-troubleshooting) for assistance in resolving the error.
+1. After selecting the **Save** button, you'll receive a notification communicating the status of the operation. If the operation isn't successful, an error message displays. Refer to [error messages and troubleshooting](#error-messages-and-troubleshooting) for assistance in resolving the error.
## Use role-based access control
-You can use an Azure Key Vault that is configured to use Azure role-based access control. To configure customer-managed keys through Azure portal, you need to provide a user-assigned identity.
+You can use an Azure Key Vault that is configured to use Azure role-based access control. To configure customer-managed keys through the Azure portal, you need to provide a user-assigned identity.
1. In your Azure account, navigate to the **Access policies** menu. 1. To create an access policy, under **Permission model**, select **Azure role-based access control**. :::image type="content" source="../media/azure-netapp-files/rbac-permission.png" alt-text="Screenshot of access configuration menu." lightbox="../media/azure-netapp-files/rbac-permission.png":::
-1. When creating the user-assigned role, there are three permissions required for customer-managed keys:
+1. When creating the user-assigned role, there are three permissions required for customer-managed keys:
1. `Microsoft.KeyVault/vaults/keys/read` 1. `Microsoft.KeyVault/vaults/keys/encrypt/action` 1. `Microsoft.KeyVault/vaults/keys/decrypt/action`
You can use an Azure Key Vault that is configured to use Azure role-based access
```json {
- "id": "/subscriptions/<subscription>/Microsoft.Authorization/roleDefinitions/<roleDefinitionsID>",
- "properties": {
- "roleName": "NetApp account",
- "description": "Has the necessary permissions for customer-managed key encryption: get key, encrypt and decrypt",
- "assignableScopes": [
+ "id": "/subscriptions/<subscription>/Microsoft.Authorization/roleDefinitions/<roleDefinitionsID>",
+ "properties": {
+ "roleName": "NetApp account",
+ "description": "Has the necessary permissions for customer-managed key encryption: get key, encrypt and decrypt",
+ "assignableScopes": [
"/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>" ],
- "permissions": [
+ "permissions": [
{ "actions": [], "notActions": [],
You can use an Azure Key Vault that is configured to use Azure role-based access
"Microsoft.KeyVault/vaults/keys/decrypt/action" ], "notDataActions": []
- }
+ }
]
- }
+ }
} ```
-1. Once the custom role is created and available to use with the key vault, you apply it to the user-assigned identity.
+1. Once the custom role is created and available to use with the key vault, you apply it to the user-assigned identity.
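    As an alternative to the portal's **Review + assign** flow shown in the following screenshot, the assignment can also be made with the Azure CLI. This is a sketch only; the identity name, key vault name, and resource group are placeholders, and the role name matches the custom role defined above:

    ```azurecli
    # Look up the principal ID of the user-assigned managed identity (placeholder names).
    principalId=$(az identity show \
      --name my-user-assigned-identity \
      --resource-group myresourcegroup \
      --query principalId -o tsv)

    # Scope the assignment to the key vault used for customer-managed keys.
    keyVaultId=$(az keyvault show --name my-anf-kv --query id -o tsv)

    # Assign the custom role from the definition above to the identity.
    az role assignment create \
      --assignee-object-id "$principalId" \
      --assignee-principal-type ServicePrincipal \
      --role "NetApp account" \
      --scope "$keyVaultId"
    ```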
:::image type="content" source="../media/azure-netapp-files/rbac-review-assign.png" alt-text="Screenshot of RBAC review and assign menu." lightbox="../media/azure-netapp-files/rbac-review-assign.png"::: ## Create an Azure NetApp Files volume using customer-managed keys
-1. From Azure NetApp Files, select **Volumes** and then **+ Add volume**.
+1. From Azure NetApp Files, select **Volumes** and then **+ Add volume**.
1. Follow the instructions in [Configure network features for an Azure NetApp Files volume](configure-network-features.md): * [Set the Network Features option in volume creation page](configure-network-features.md#set-the-network-features-option). * The network security group for the volume's delegated subnet must allow incoming traffic from NetApp's storage VM.
-1. For a NetApp account configured to use a customer-managed key, the Create Volume page includes an option Encryption Key Source.
-
- To encrypt the volume with your key, select **Customer-Managed Key** in the **Encryption Key Source** dropdown menu.
-
- When you create a volume using a customer-managed key, you must also select **Standard** for the **Network features** option. Basic network features are not supported.
+1. For a NetApp account configured to use a customer-managed key, the Create Volume page includes an **Encryption Key Source** option.
+
+ To encrypt the volume with your key, select **Customer-Managed Key** in the **Encryption Key Source** dropdown menu.
+
+ When you create a volume using a customer-managed key, you must also select **Standard** for the **Network features** option. Basic network features are not supported.
You must select a key vault private endpoint as well. The dropdown menu displays private endpoints in the selected virtual network. If there's no private endpoint for your key vault in the selected virtual network, then the dropdown is empty, and you won't be able to proceed. If so, see [Azure Private Endpoint](../private-link/private-endpoint-overview.md). :::image type="content" source="../media/azure-netapp-files/keys-create-volume.png" alt-text="Screenshot of create volume menu." lightbox="../media/azure-netapp-files/keys-create-volume.png":::
-1. Continue to complete the volume creation process. Refer to:
+1. Continue to complete the volume creation process. Refer to:
* [Create an NFS volume](azure-netapp-files-create-volumes.md) * [Create an SMB volume](azure-netapp-files-create-volumes-smb.md) * [Create a dual-protocol volume](create-volumes-dual-protocol.md) ## Rekey all volumes under a NetApp account
-If you have already configured your NetApp account for customer-managed keys and have one or more volumes encrypted with customer-managed keys, you can change the key that is used to encrypt all volumes under the NetApp account. You can select any key that is in the same key vault. Changing key vaults isn't supported.
+If you have already configured your NetApp account for customer-managed keys and have one or more volumes encrypted with customer-managed keys, you can change the key that is used to encrypt all volumes under the NetApp account. You can select any key that is in the same key vault. Changing key vaults isn't supported.
1. Under your NetApp account, navigate to the **Encryption** menu. Under the **Current key** input field, select the **Rekey** link. :::image type="content" source="../media/azure-netapp-files/encryption-current-key.png" alt-text="Screenshot of the encryption key." lightbox="../media/azure-netapp-files/encryption-current-key.png":::
If you have already configured your NetApp account for customer-managed keys and
1. In the **Rekey** menu, select one of the available keys from the dropdown menu. The chosen key must be different from the current key. :::image type="content" source="../media/azure-netapp-files/encryption-rekey.png" alt-text="Screenshot of the rekey menu." lightbox="../media/azure-netapp-files/encryption-rekey.png":::
-1. Select **OK** to save. The rekey operation may take several minutes.
+1. Select **OK** to save. The rekey operation may take several minutes.
## Switch from system-assigned to user-assigned identity
-To switch from system-assigned to user-assigned identity, you must grant the target identity access to the key vault being used with read/get, encrypt, and decrypt permissions.
+To switch from system-assigned to user-assigned identity, you must grant the target identity access to the key vault that's in use, with read/get, encrypt, and decrypt permissions.
1. Update the NetApp account by sending a PATCH request using the `az rest` command: ```azurecli
To switch from system-assigned to user-assigned identity, you must grant the tar
} ``` 1. Confirm the operation completed successfully with the `az netappfiles account show` command. The output includes the following fields:
- ```azurecli
+ ```azurecli
"id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/account", "identity": { "principalId": null,
To switch from system-assigned to user-assigned identity, you must grant the tar
* `encryption.identity.principalId` matches the value in `identity.userAssignedIdentities.principalId` * `encryption.identity.userAssignedIdentity` matches the value in `identity.userAssignedIdentities[]`
- ```azurecli
+ ```json
"encryption": {
- "identity": {
- "principalId": "<principal-id>",
- "userAssignedIdentity": "/subscriptions/<subscriptionId>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity>"
+ "identity": {
+ "principalId": "<principal-id>",
+ "userAssignedIdentity": "/subscriptions/<subscriptionId>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity>"
},
- "KeySource": "Microsoft.KeyVault",
+ "KeySource": "Microsoft.KeyVault",
}, ```
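To pull out just the fields that need to match, you can add a JMESPath query to the same `show` command. A sketch with placeholder account and resource group names:

```azurecli
# Return only the identity fields that need to be compared.
az netappfiles account show \
  --resource-group myresourcegroup \
  --account-name myaccount \
  --query "{encryptionIdentity: encryption.identity, accountIdentity: identity}"
```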
To switch from system-assigned to user-assigned identity, you must grant the tar
This section lists error messages and possible resolutions when Azure NetApp Files fails to configure customer-managed key encryption or create a volume using a customer-managed key.
-### Errors configuring customer-managed key encryption on a NetApp account
+### Errors configuring customer-managed key encryption on a NetApp account
| Error Condition | Resolution | | -- | -- |
This section lists error messages and possible resolutions when Azure NetApp Fil
| `Azure Key Vault key is not enabled` | Ensure that the selected key is enabled. | | `Azure Key Vault key is expired` | Ensure that the selected key is not expired. | | `Azure Key Vault key has not been activated` | Ensure that the selected key is active. |
-| `Key Vault URI is invalid` | When entering key URI manually, ensure that the URI is correct. |
+| `Key Vault URI is invalid` | When entering key URI manually, ensure that the URI is correct. |
| `Azure Key Vault is not recoverable. Make sure that Soft-delete and Purge protection are both enabled on the Azure Key Vault` | Update the key vault recovery level to: <br> `"Recoverable/Recoverable+ProtectedSubscription/CustomizedRecoverable/CustomizedRecoverable+ProtectedSubscription"` | | `Account must be in the same region as the Vault` | Ensure the key vault is in the same region as the NetApp account. |
-### Errors creating a volume encrypted with customer-managed keys
+### Errors creating a volume encrypted with customer-managed keys
| Error Condition | Resolution | | -- | -- |
azure-resource-manager Deployment Stacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md
To delete a managed resource, remove the resource definition from the underlying
## Protect managed resources against deletion
-When creating a deployment stack, it's possible to assign a specific type of permissions to the managed resources, which prevents their deletion by unauthorized security principals. These settings are refereed as deny settings. You want to store the stack at a parent scope.
+When creating a deployment stack, you can assign a specific type of permission to the managed resources, which prevents their deletion by unauthorized security principals. These settings are referred to as deny settings. To apply them, store the stack at a parent scope.
# [PowerShell](#tab/azure-powershell)
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/overview.md
Title: Overview of Azure Managed Applications description: Describes the concepts for Azure Managed Applications that provide cloud solutions that are easy for customers to deploy and operate. Previously updated : 08/19/2022 Last updated : 07/31/2023 # Azure Managed Applications overview
-Azure Managed Applications enable you to offer cloud solutions that are easy for customers to deploy and operate. You implement the infrastructure and provide ongoing support. To make a managed application available to all customers, publish it in Azure Marketplace. To make it available to only users in your organization, publish it to an internal catalog.
+Azure Managed Applications enable you to offer cloud solutions that are easy for customers to deploy and operate. As a publisher, you implement the infrastructure and can provide ongoing support. To make a managed application available to all customers, publish it in Azure Marketplace. To make it available to only users in your organization, publish it to an internal service catalog.
-A managed application is similar to a solution template in Azure Marketplace, with one key difference. In a managed application, the resources are deployed to a resource group that's managed by the publisher of the app. The resource group is present in the customer's subscription, but an identity in the publisher's tenant has access to the resource group. As the publisher, you specify the cost for ongoing support of the solution.
+A managed application is similar to a solution template in Azure Marketplace, with one key difference. In a managed application, the resources are deployed to a managed resource group that's managed by the application's publisher or by the customer. The managed resource group is present in the customer's subscription, but an identity in the publisher's tenant can be given access to the managed resource group. As the publisher, if you manage the application, you specify the cost for ongoing support of the solution.
> [!NOTE] > The documentation for Azure Custom Providers used to be included with Managed Applications. That documentation was moved to [Azure Custom Providers](../custom-providers/overview.md).
+## Publisher and customer permissions
+
+For the managed resource group, the publisher's management access and the customer's deny assignment are optional. There are different permission scenarios available based on publisher and customer needs for a managed application.
+
+- **Publisher managed**: Publisher has management access to resources in the managed resource group in the customer's Azure tenant. Customer access to the managed resource group is restricted by a deny assignment. Publisher managed is the default managed application permission scenario.
+- **Publisher and customer access**: Publisher and customer have full access to the managed resource group. The deny assignment is removed.
+- **Locked mode**: Publisher doesn't have any access to the customer's deployed managed application or managed resource group. Customer access is restricted by a deny assignment.
+- **Customer managed**: Customer has full management access to the managed resource group and the publisher's access is removed. There's no deny assignment. Publisher develops the application and publishes on Azure Marketplace but doesn't manage the application. Publisher licenses the application for billing through Azure Marketplace.
+
+Advantages of using permission scenarios:
+
+- For security reasons, publishers don't want persistent management access to the managed resource group, the customer's tenant, or data in the managed resource group.
+- Publishers want to remove the deny assignment so that customers can manage the application. The publisher doesn't need to manage the deny assignment to enable or disable actions for the customer, for example, an action like rebooting a virtual machine in the managed application.
+- Provide customers with full control to manage the application so that publishers don't have to be a service provider to manage the application.
+ ## Advantages of managed applications
-Managed applications reduce barriers to customers using your solutions. They don't need expertise in cloud infrastructure to use your solution. Customers have limited access to the critical resources and don't need to worry about making a mistake when managing it.
+Managed applications reduce barriers to customers using your solutions. They don't need expertise in cloud infrastructure to use your solution. Depending on the permissions configured by the publisher, customers might have limited access to the critical resources and don't need to worry about making a mistake when managing them.
Managed applications enable you to establish an ongoing relationship with your customers. You define terms for managing the application and all charges are handled through Azure billing.
-Although customers deploy managed applications in their subscriptions, they don't have to maintain, update, or service them. You can make sure that all customers are using approved versions. Customers don't have to develop application-specific domain knowledge to manage these applications. Customers automatically acquire application updates without the need to worry about troubleshooting and diagnosing issues with the applications.
+Although customers deploy managed applications in their subscriptions, they don't have to maintain, update, or service them. However, permission options are available that give the customer full access to resources in the managed resource group. You can make sure that all customers are using approved versions. Customers don't have to develop application-specific domain knowledge to manage these applications. Customers automatically acquire application updates without the need to worry about troubleshooting and diagnosing issues with the applications.
-For IT teams, managed applications enable you to offer pre-approved solutions to users in the organization. You know these solutions are compliant with organizational standards.
+For IT teams, managed applications enable you to offer preapproved solutions to users in the organization. You know these solutions are compliant with organizational standards.
Managed applications support [managed identities for Azure resources](./publish-managed-identity.md).
You can publish your managed application either internally in the service catalo
### Service catalog
-The service catalog is an internal catalog of approved solutions for users in an organization. You use the catalog to meet organizational standards and offer solutions for the organization. Employees use the catalog to find applications that are recommended and approved by their IT departments. They see the managed applications that other people in their organization share with them.
+The service catalog is an internal catalog of approved solutions for users in an organization. You use the catalog to meet organizational standards and offer solutions for the organization. Employees use the service catalog to find applications that are recommended and approved by their IT departments. They can access the managed applications that other people in their organization share with them.
For information about publishing a managed application to a service catalog, see [Quickstart: Create and publish a managed application definition](publish-service-catalog-app.md).
For information about publishing a managed application to Azure Marketplace, see
## Resource groups for managed applications
-Typically, the resources for a managed application are in two resource groups. The customer manages one resource group, and the publisher manages the other resource group. When the managed application is defined, the publisher specifies the levels of access. The publisher can request either a permanent role assignment, or [just-in-time access](request-just-in-time-access.md) for an assignment that's constrained to a time period.
+Typically, the resources for a managed application are in two resource groups. The customer manages one resource group, and the publisher manages the other resource group. When the managed application is defined, the publisher specifies the levels of access. The publisher can request either a permanent role assignment, or [just-in-time access](request-just-in-time-access.md) for an assignment that's constrained to a time period. Publishers can also configure the managed application so that there's no publisher access.
Restricting access for [data operations](../../role-based-access-control/role-definitions.md) is currently not supported for all data providers in Azure.
-The following image shows the relationship between the customer's Azure subscription and the publisher's Azure subscription. The managed application and managed resource group are in the customer's subscription. The publisher has management access to the managed resource group to maintain the managed application's resources. The publisher places a read-only lock on the managed resource group that limits the customer's access to manage resources. The publisher's identities that have access to the managed resource group are exempt from the lock.
+The following image shows the relationship between the customer's Azure subscription and the publisher's Azure subscription in the default _publisher managed_ permission scenario. The managed application and managed resource group are in the customer's subscription. The publisher has management access to the managed resource group to maintain the managed application's resources. The publisher places a read-only lock (deny assignment) on the managed resource group that limits the customer's access to manage resources. The publisher's identities that have access to the managed resource group are exempt from the lock.
:::image type="content" source="./media/overview/managed-apps-resource-group.png" alt-text="Diagram that shows the relationship between customer and publisher Azure subscriptions for a managed resource group.":::
+The management access shown in the image can be changed: the customer can be given full access to the managed resource group, and the publisher's access to the managed resource group can be removed.
+ ### Application resource group This resource group holds the managed application instance. This resource group may only contain one resource. The resource type of the managed application is [Microsoft.Solutions/applications](#resource-provider).
The customer has full access to the resource group and uses it to manage the lif
### Managed resource group
-This resource group holds all the resources that are required by the managed application. For example, this resource group contains the virtual machines, storage accounts, and virtual networks for the solution. The customer has limited access to this resource group because the customer doesn't manage the individual resources for the managed application. The publisher's access to this resource group corresponds to the role specified in the managed application definition. For example, the publisher might request the Owner or Contributor role for this resource group. The access is either permanent or limited to a specific time.
+This resource group holds all the resources that are required by the managed application, for example, an application's virtual machines, storage accounts, and virtual networks. The customer might have limited access to this resource group because, unless the permission options are changed, the customer doesn't manage the individual resources for the managed application. The publisher's access to this resource group corresponds to the role specified in the managed application definition. For example, the publisher might request the Owner or Contributor role for this resource group. The access is either permanent or limited to a specific time. The publisher can also choose to have no access to the managed resource group.
-When the [managed application is published to the marketplace](../../marketplace/azure-app-offer-setup.md), the publisher can grant customers the ability to perform specific actions on resources in the managed resource group. For example, the publisher can specify that customers can restart virtual machines. All other actions beyond read actions are still denied. Changes to resources in a managed resource group by a customer with granted actions are subject to the [Azure Policy](../../governance/policy/overview.md) assignments within the customer's tenant scoped to include the managed resource group.
+When the [managed application is published to the marketplace](../../marketplace/azure-app-offer-setup.md), the publisher can grant customers the ability to perform specific actions on resources in the managed resource group, or can give them full access. For example, the publisher can specify that customers can restart virtual machines. All other actions beyond read actions are still denied. Changes to resources in a managed resource group by a customer with granted actions are subject to the [Azure Policy](../../governance/policy/overview.md) assignments within the customer's tenant scoped to include the managed resource group.
When the customer deletes the managed application, the managed resource group is also deleted.
azure-resource-manager Quickstart Create Templates Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md
Title: Create template - Visual Studio Code description: Use Visual Studio Code and the Azure Resource Manager tools extension to work on Azure Resource Manager templates (ARM templates).- Previously updated : 06/27/2022 Last updated : 07/28/2023 - #Customer intent: As a developer new to Azure deployment, I want to learn how to use Visual Studio Code to create and edit Resource Manager templates, so I can use the templates to deploy Azure resources.
-# Quickstart: Create ARM templates with Visual Studio Code
+# Quickstart: Create ARM templates with Visual Studio Code
-The Azure Resource Manager Tools for Visual Studio Code provide language support, resource snippets, and resource autocompletion. These tools help create and validate Azure Resource Manager templates (ARM templates), and are therefore the recommended method of ARM template creation and configuration. In this quickstart, you use the extension to create an ARM template from scratch. While doing so you experience the extensions capabilities such as ARM template snippets, validation, completions, and parameter file support.
+The Azure Resource Manager Tools for Visual Studio Code provide language support, resource snippets, and resource autocompletion. These tools help create and validate Azure Resource Manager templates (ARM templates), and are therefore the recommended method of ARM template creation and configuration. In this quickstart, you use the extension to create an ARM template from scratch. While doing so, you experience the extension's capabilities, such as ARM template snippets, validation, completions, and parameter file support.
To complete this quickstart, you need [Visual Studio Code](https://code.visualstudio.com/), with the [Azure Resource Manager tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools) installed. You also need either the [Azure CLI](/cli/azure/) or the [Azure PowerShell module](/powershell/azure/new-azureps-module-az) installed and authenticated.
Create and open with Visual Studio Code a new file named *azuredeploy.json*. Ent
Select `arm!` to create a template scoped for an Azure resource group deployment.
-![Image showing Azure Resource Manager scaffolding snippets](./media/quickstart-create-templates-use-visual-studio-code/1.png)
This snippet creates the basic building blocks for an ARM template.
-![Image showing a fully scaffolded ARM template](./media/quickstart-create-templates-use-visual-studio-code/2.png)
Notice that the Visual Studio Code language mode has changed from *JSON* to *Azure Resource Manager Template*. The extension includes a language server specific to ARM templates that provides ARM template-specific validation, completion, and other language services.
-![Image showing Azure Resource Manager as the Visual Studio Code language mode](./media/quickstart-create-templates-use-visual-studio-code/3.png)
## Add an Azure resource
The extension includes snippets for many Azure resources. These snippets can be
Place the cursor in the template **resources** block, type in `storage`, and select the *arm-storage* snippet.
-![Image showing a resource being added to the ARM template](./media/quickstart-create-templates-use-visual-studio-code/4.png)
This action adds a storage resource to the template.
-![Image showing an Azure Storage resource in an ARM template](./media/quickstart-create-templates-use-visual-studio-code/5.png)
The **tab** key can be used to tab through configurable properties on the storage account.
-![Image showing how the tab key can be used to navigate through resource configuration](./media/quickstart-create-templates-use-visual-studio-code/6.png)
## Completion and validation
One of the most powerful capabilities of the extension is its integration with A
First, update the storage account kind to an invalid value such as `megaStorage`. Notice that this action produces a warning indicating that `megaStorage` isn't a valid value.
-![Image showing an invalid storage configuration](./media/quickstart-create-templates-use-visual-studio-code/7.png)
To use the completion capabilities, remove `megaStorage`, place the cursor inside of the double quotes, and press `ctrl` + `space`. This action presents a completion list of valid values.
-![Image showing extension auto-completion](./media/quickstart-create-templates-use-visual-studio-code/8.png)
## Add template parameters
Now create and use a parameter to specify the storage account name.
Place your cursor in the parameters block, add a carriage return, type `"`, and then select the `new-parameter` snippet. This action adds a generic parameter to the template.
-![Image showing a parameter being added to the ARM template](./media/quickstart-create-templates-use-visual-studio-code/9.png)
Update the name of the parameter to `storageAccountName` and the description to `Storage Account Name`.
-![Image showing the completed parameter in an ARM template](./media/quickstart-create-templates-use-visual-studio-code/10.png)
Azure storage account names have a minimum length of 3 characters and a maximum of 24. Add both `minLength` and `maxLength` to the parameter and provide appropriate values.
-![Image showing minLength and maxLength being added to an ARM template parameter](./media/quickstart-create-templates-use-visual-studio-code/11.png)
Now, on the storage resource, update the name property to use the parameter. To do so, remove the current name. Enter a double quote and an opening square bracket `[`, which produces a list of ARM template functions. Select *parameters* from the list.
-![Image showing auto-completion when using parameters in ARM template resources](./media/quickstart-create-templates-use-visual-studio-code/12.png)
Entering a single quote `'` inside of the round brackets produces a list of all parameters defined in the template, in this case, *storageAccountName*. Select the parameter.
-![Image showing completed parameter in an ARM template resource](./media/quickstart-create-templates-use-visual-studio-code/13.png)
## Create a parameter file
An ARM template parameter file allows you to store environment-specific paramete
The extension makes it easy to create a parameter file from your existing templates. To do so, right-click on the template in the code editor and select `Select/Create Parameter File`.
-![Image showing the right-click process for creating a parameter file from an ARM template](./media/quickstart-create-templates-use-visual-studio-code/14.png)
Select `New` > `All Parameters` > Select a name and location for the parameter file.
-![Image showing the name and save file dialog when creating a parameters file from an ARM template](./media/quickstart-create-templates-use-visual-studio-code/15.png)
- This action creates a new parameter file and maps it with the template from which it was created. You can see and modify the current template/parameter file mapping in the Visual Studio Code status bar while the template is selected.
-![Image showing the template/parameter file mapping in the Visual Studio Code status bar.](./media/quickstart-create-templates-use-visual-studio-code/16.png)
Now that the parameter file has been mapped to the template, the extension validates both the template and parameter file together. To see this validation in practice, add a two-character value to the `storageAccountName` parameter in the parameter file and save the file.
-![Image showing an invalidated template due to parameter file issue](./media/quickstart-create-templates-use-visual-studio-code/17.png)
Navigate back to the ARM template and notice that an error has been raised indicating that the value doesn't meet the parameter criteria.
-![Image showing a valid ARM template](./media/quickstart-create-templates-use-visual-studio-code/18.png)
Update the value to something appropriate, save the file, and navigate back to the template. Notice that the error on the parameter has been resolved.
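With the template and parameter file validating cleanly, they can be deployed with the Azure CLI mentioned in the prerequisites. A minimal sketch, assuming the files are named *azuredeploy.json* and *azuredeploy.parameters.json* and that the target resource group already exists:

```azurecli
# Deploy the template together with its mapped parameter file.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters "@azuredeploy.parameters.json"
```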
azure-resource-manager Template Tutorial Add Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-functions.md
Title: Tutorial - add template functions description: Add template functions to your Azure Resource Manager template (ARM template) to construct values.- Previously updated : 06/17/2022 Last updated : 07/28/2023 -
At the end of the previous tutorial, your template had the following JSON file:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-sku/azuredeploy.json":::
-Suppose you hard-coded the location of the [Azure storage account](../../storage/common/storage-account-create.md) to **eastus**, but you need to deploy it to another region. You need to add a parameter to add flexibility to your template and allow it to have a different location.
+Suppose you hard-coded the location of the [Azure storage account](../../storage/common/storage-account-create.md) to **eastus**, but you need to deploy it to another region. You need to add a parameter to add flexibility to your template and allow it to have a different location.
## Use function
If you completed the [parameters tutorial](./template-tutorial-add-parameters.md
Functions add flexibility to your template by dynamically getting values during deployment. In this tutorial, you use a function to get the resource group deployment location.
-The following example highlights the changes to add a parameter called `location`. The parameter default value calls the [resourceGroup](template-functions-resource.md#resourcegroup) function. This function returns an object with information about the deployed resource group. One of the object properties is a location property. When you use the default value, the storage account and the resource group have the same location. The resources inside a group have different locations.
+The following example highlights the changes to add a parameter called `location`. The parameter default value calls the [resourceGroup](template-functions-resource.md#resourcegroup) function. This function returns an object with information about the deployed resource group. One of the object properties is a location property. When you use the default value, the storage account and the resource group have the same location. The resources inside a group can have different locations.
Copy the whole file and replace your template with its contents.
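Once the template includes the `location` parameter, you can either rely on its default (the resource group's location) or override it at deployment time. A minimal Azure CLI sketch with placeholder names; add any other parameters that your template requires, such as the storage account name:

```azurecli
# Use the parameter's default value, which is the resource group's location.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json

# Override the location parameter to place the storage account in another region.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters location=westus2
```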
You can verify the deployment by exploring the resource group from the Azure por
1. Sign in to the [Azure portal](https://portal.azure.com). 1. From the left menu, select **Resource groups**. 1. Check the box to the left of **myResourceGroup** and select **myResourceGroup**.
-1. Select the resource group you created. The default name is **myResourceGroup**.
+1. Select the resource group you created. The default name is **myResourceGroup**.
1. Notice your deployed storage account and your resource group have the same location.
azure-resource-manager Template Tutorial Add Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-outputs.md
Title: Tutorial - add outputs to template description: Add outputs to your Azure Resource Manager template (ARM template) to simplify the syntax.- Previously updated : 08/17/2022 Last updated : 07/28/2023 -
azure-resource-manager Template Tutorial Add Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-parameters.md
Title: Tutorial - add parameters to template description: Add parameters to your Azure Resource Manager template (ARM template) to make it reusable.- Previously updated : 06/15/2022 Last updated : 07/28/2023 - # Tutorial: Add parameters to your ARM template
azure-resource-manager Template Tutorial Add Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-resource.md
Title: Tutorial - Add resource to template description: Describes the steps to create your first Azure Resource Manager template (ARM template). You learn about the template file syntax and how to deploy a storage account.- Previously updated : 06/14/2022 Last updated : 07/28/2023 -
azure-resource-manager Template Tutorial Add Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-tags.md
Title: Tutorial - add tags to resources in template description: Add tags to resources that you deploy in your Azure Resource Manager template (ARM template). Tags let you logically organize resources.- Previously updated : 08/22/2022 Last updated : 07/28/2023 -
azure-resource-manager Template Tutorial Add Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-variables.md
Title: Tutorial - add variable to template description: Add variables to your Azure Resource Manager template (ARM template) to simplify the syntax.- Previously updated : 06/17/2022 Last updated : 07/28/2023 -
azure-resource-manager Template Tutorial Create First Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-create-first-template.md
Title: Tutorial - Create and deploy template description: Create your first Azure Resource Manager template (ARM template). In the tutorial, you learn about the template file syntax and how to deploy a storage account.- Previously updated : 06/15/2022 Last updated : 07/28/2023 - #Customer intent: As a developer new to Azure deployment, I want to learn how to use Visual Studio Code to create and edit Azure Resource Manager templates, so I can use them to deploy Azure resources.
New-AzResourceGroup `
```azurecli az group create \ --name myResourceGroup \
- --location "Central US"
+ --location 'Central US'
```
azure-resource-manager Template Tutorial Deploy Sql Extensions Bacpac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-deploy-sql-extensions-bacpac.md
The BACPAC file must be stored in an Azure Storage account before it can be impo
-Blob $bacpacFileName ` -Context $storageAccount.Context
- Write-Host "The project name: $projectName`
- The location: $location`
- The storage account key: $storageAccountKey`
- The BACPAC file URL: https://$storageAccountName.blob.core.windows.net/$containerName/$bacpacFileName`
- "
+ Write-Host "The project name: $projectName `
+ The location: $location `
+ The storage account key: $storageAccountKey `
+ The BACPAC file URL: https://$storageAccountName.blob.core.windows.net/$containerName/$bacpacFileName `
+ "
+ Write-Host "Press [ENTER] to continue ..." ```
azure-resource-manager Template Tutorial Export Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-export-template.md
Title: Tutorial - Export template from the Azure portal description: Learn how to use an exported template to complete your template development.- Previously updated : 08/17/2022 Last updated : 07/28/2023 -
This template works well for deploying storage accounts, but you might want to a
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Create a resource**.
-1. In **Search services and Marketplace**, enter **App Service Plan**, and then select **App Service Plan**.
+1. In **Search services and Marketplace**, enter **App Service Plan**, and then select **App Service Plan**.
1. Select **Create**. 1. On the **Create App Service Plan** page, enter the following:
azure-resource-manager Template Tutorial Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-quickstart-template.md
Title: Tutorial - Use quickstart templates description: Learn how to use Azure Quickstart Templates to complete your template development.- Previously updated : 08/17/2022 Last updated : 07/28/2023 --+ # Tutorial: Use Azure Quickstart Templates
azure-resource-manager Template Tutorial Use Parameter File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-parameter-file.md
Title: Tutorial - use parameter file to deploy template description: Use parameter files that contain the values to use for deploying your Azure Resource Manager template (ARM template).- Previously updated : 08/22/2022 Last updated : 07/28/2023 - # Tutorial: Use parameter files to deploy your ARM template
templateFile="{path-to-the-template-file}"
devParameterFile="{path-to-azuredeploy.parameters.dev.json}" az group create \ --name myResourceGroupDev \
- --location "East US"
+ --location 'East US'
az deployment group create \ --name devenvironment \ --resource-group myResourceGroupDev \
You can verify the deployment by exploring the resource groups from the Azure po
1. From the Azure portal, select **Resource groups** from the left menu. 1. Select the hyperlinked resource group name next to the check box. If you complete this series, you have three resource groups to delete - **myResourceGroup**, **myResourceGroupDev**, and **myResourceGroupProd**. 1. Select the **Delete resource group** icon from the top menu.
-
+ > [!CAUTION] > Deleting a resource group is irreversible.
azure-video-indexer Observed Matched People https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-matched-people.md
Title: Azure AI Video Indexer observed people tracking & matched faces overview -
+ Title: Azure AI Video Indexer observed people tracking & matched faces overview
+ description: An introduction to using the Azure AI Video Indexer observed people tracking & matched faces component responsibly.
Last updated 04/06/2023
-# Observed people tracking & matched faces
+# Observed people tracking and matched faces
> [!IMPORTANT] > Face identification, customization and celebrity recognition features access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face identification, customization and celebrity recognition features are only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access.
-Observed people tracking and matched faces are Azure AI Video Indexer AI features that automatically detect and match people in media files. Observed people tracking and matched faces can be set to display insights on people, their clothing, and the exact timeframe of their appearance.
+Observed people tracking and matched faces are Azure AI Video Indexer AI features that automatically detect and match people in media files. Observed people tracking and matched faces can be set to display insights on people, their clothing, and the exact timeframe of their appearance.
-The resulting insights are displayed in a categorized list in the Insights tab, the tab includes a thumbnail of each person and their ID. Clicking the thumbnail of a person displays the matched person (the corresponding face in the People insight). Insights are also generated in a categorized list in a JSON file that includes the thumbnail ID of the person, the percentage of time appearing in the file, Wiki link (if they're a celebrity) and confidence level.
+The resulting insights are displayed in a categorized list in the Insights tab. The tab includes a thumbnail of each person and their ID. Clicking the thumbnail of a person displays the matched person (the corresponding face in the People insight). Insights are also generated in a categorized list in a JSON file that includes the thumbnail ID of the person, the percentage of time they appear in the file, a Wiki link (if they're a celebrity), and the confidence level.
-## Prerequisites
+## Prerequisites
Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
-## General principles
+## General principles
This article discusses observed people tracking and matched faces and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
This article discusses observed people tracking and matched faces and the key co
When uploading the media file, go to Video + Audio Indexing and select Advanced.
-To display observed people tracking and matched faces insight on the website, do the following:
+To display observed people tracking and matched faces insights on the website, do the following:
1. After the file has been indexed, go to Insights and then scroll to observed people.
-To see the insights in a JSON file, do the following:
+To see the insights in a JSON file, do the following:
-1. Click Download and then Insights (JSON).
+1. Click Download and then Insights (JSON).
1. Copy the `observedPeople` text and paste it into your JSON viewer.
- The following section shows observed people and clothing. For the person with id 4 (`"id": 4`) there's also a matching face.
-
- ```json
+ The following section shows observed people and clothing. For the person with id 4 (`"id": 4`) there's also a matching face.
+
+ ```json
"observedPeople": [
- {
- "id": 1,
- "thumbnailId": "4addcebf-6c51-42cd-b8e0-aedefc9d8f6b",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "long"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "long"
- }
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:00.0667333",
- "adjustedEnd": "0:00:12.012",
- "start": "0:00:00.0667333",
- "end": "0:00:12.012"
- }
- ]
- },
- {
- "id": 2,
- "thumbnailId": "858903a7-254a-438e-92fd-69f8bdb2ac88",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:23.2565666",
- "adjustedEnd": "0:00:25.4921333",
- "start": "0:00:23.2565666",
- "end": "0:00:25.4921333"
- },
- {
- "adjustedStart": "0:00:25.8925333",
- "adjustedEnd": "0:00:25.9926333",
- "start": "0:00:25.8925333",
- "end": "0:00:25.9926333"
- },
- {
- "adjustedStart": "0:00:26.3930333",
- "adjustedEnd": "0:00:28.5618666",
- "start": "0:00:26.3930333",
- "end": "0:00:28.5618666"
- }
- ]
- },
- {
- "id": 3,
- "thumbnailId": "1406252d-e7f5-43dc-852d-853f652b39b6",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "long"
- }
- },
- {
- "id": 3,
- "type": "skirtAndDress"
- }
- ],
- "instances": [
- {
- "adjustedStart": "0:00:31.9652666",
- "adjustedEnd": "0:00:34.4010333",
- "start": "0:00:31.9652666",
- "end": "0:00:34.4010333"
- }
- ]
- },
- {
- "id": 4,
- "thumbnailId": "d09ad62e-e0a4-42e5-8ca9-9a640c686596",
- "clothing": [
- {
- "id": 1,
- "type": "sleeve",
- "properties": {
- "length": "short"
- }
- },
- {
- "id": 2,
- "type": "pants",
- "properties": {
- "length": "short"
- }
- }
- ],
- "matchingFace": {
- "id": 1310,
- "confidence": 0.3819
- },
- "instances": [
- {
- "adjustedStart": "0:00:34.8681666",
- "adjustedEnd": "0:00:36.0026333",
- "start": "0:00:34.8681666",
- "end": "0:00:36.0026333"
- },
- {
- "adjustedStart": "0:00:36.6699666",
- "adjustedEnd": "0:00:36.7367",
- "start": "0:00:36.6699666",
- "end": "0:00:36.7367"
- },
- {
- "adjustedStart": "0:00:37.2038333",
- "adjustedEnd": "0:00:39.6729666",
- "start": "0:00:37.2038333",
- "end": "0:00:39.6729666"
- }
- ]
- },
- ```
-
-To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
-
-## Observed people tracking and matched faces components
+ {
+ "id": 1,
+ "thumbnailId": "4addcebf-6c51-42cd-b8e0-aedefc9d8f6b",
+ "clothing": [
+ {
+ "id": 1,
+ "type": "sleeve",
+ "properties": {
+ "length": "long"
+ }
+ },
+ {
+ "id": 2,
+ "type": "pants",
+ "properties": {
+ "length": "long"
+ }
+ }
+ ],
+ "instances": [
+ {
+ "adjustedStart": "0:00:00.0667333",
+ "adjustedEnd": "0:00:12.012",
+ "start": "0:00:00.0667333",
+ "end": "0:00:12.012"
+ }
+ ]
+ },
+ {
+ "id": 2,
+ "thumbnailId": "858903a7-254a-438e-92fd-69f8bdb2ac88",
+ "clothing": [
+ {
+ "id": 1,
+ "type": "sleeve",
+ "properties": {
+ "length": "short"
+ }
+ }
+ ],
+ "instances": [
+ {
+ "adjustedStart": "0:00:23.2565666",
+ "adjustedEnd": "0:00:25.4921333",
+ "start": "0:00:23.2565666",
+ "end": "0:00:25.4921333"
+ },
+ {
+ "adjustedStart": "0:00:25.8925333",
+ "adjustedEnd": "0:00:25.9926333",
+ "start": "0:00:25.8925333",
+ "end": "0:00:25.9926333"
+ },
+ {
+ "adjustedStart": "0:00:26.3930333",
+ "adjustedEnd": "0:00:28.5618666",
+ "start": "0:00:26.3930333",
+ "end": "0:00:28.5618666"
+ }
+ ]
+ },
+ {
+ "id": 3,
+ "thumbnailId": "1406252d-e7f5-43dc-852d-853f652b39b6",
+ "clothing": [
+ {
+ "id": 1,
+ "type": "sleeve",
+ "properties": {
+ "length": "short"
+ }
+ },
+ {
+ "id": 2,
+ "type": "pants",
+ "properties": {
+ "length": "long"
+ }
+ },
+ {
+ "id": 3,
+ "type": "skirtAndDress"
+ }
+ ],
+ "instances": [
+ {
+ "adjustedStart": "0:00:31.9652666",
+ "adjustedEnd": "0:00:34.4010333",
+ "start": "0:00:31.9652666",
+ "end": "0:00:34.4010333"
+ }
+ ]
+ },
+ {
+ "id": 4,
+ "thumbnailId": "d09ad62e-e0a4-42e5-8ca9-9a640c686596",
+ "clothing": [
+ {
+ "id": 1,
+ "type": "sleeve",
+ "properties": {
+ "length": "short"
+ }
+ },
+ {
+ "id": 2,
+ "type": "pants",
+ "properties": {
+ "length": "short"
+ }
+ }
+ ],
+ "matchingFace": {
+ "id": 1310,
+ "confidence": 0.3819
+ },
+ "instances": [
+ {
+ "adjustedStart": "0:00:34.8681666",
+ "adjustedEnd": "0:00:36.0026333",
+ "start": "0:00:34.8681666",
+ "end": "0:00:36.0026333"
+ },
+ {
+ "adjustedStart": "0:00:36.6699666",
+ "adjustedEnd": "0:00:36.7367",
+ "start": "0:00:36.6699666",
+ "end": "0:00:36.7367"
+ },
+ {
+ "adjustedStart": "0:00:37.2038333",
+ "adjustedEnd": "0:00:39.6729666",
+ "start": "0:00:37.2038333",
+ "end": "0:00:39.6729666"
+ }
+ ]
+ }
+ ]
+ ```
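As a quick illustration of how to consume the insight JSON shown above (a minimal sketch; the `observedPeople` variable is assumed to hold the parsed array from the example), the following walks each detected person and prints their clothing, appearance intervals, and any matched face:

```javascript
// Assumes `observedPeople` holds the parsed array from the JSON example above.
for (const person of observedPeople) {
  const clothing = (person.clothing ?? [])
    .map((item) => (item.properties?.length ? `${item.type} (${item.properties.length})` : item.type))
    .join(", ");
  const appearances = (person.instances ?? [])
    .map((i) => `${i.start}-${i.end}`)
    .join("; ");
  const face = person.matchingFace
    ? `, matched face ${person.matchingFace.id} (confidence ${person.matchingFace.confidence})`
    : "";
  console.log(`Person ${person.id}: ${clothing}; appears at ${appearances}${face}`);
}
```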
+
+To download the JSON file via the API, use the [Azure AI Video Indexer developer portal](https://api-portal.videoindexer.ai/).
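If you prefer to retrieve the index programmatically instead of through the portal, the following is a minimal sketch of calling the Get Video Index request over REST from Node.js 18+ (which has a built-in `fetch`). The location, account ID, video ID, and access token values are placeholders, and the exact location of the observed people section in the returned JSON may vary by API version:

```javascript
// Minimal sketch: fetch a video's index (which includes the observed people insights) as JSON.
const location = "trial";              // or your Azure region, for example "eastus"
const accountId = "<account-id>";      // placeholder
const videoId = "<video-id>";          // placeholder
const accessToken = "<access-token>";  // placeholder; obtain one from the developer portal

async function getVideoIndex() {
  const url = `https://api.videoindexer.ai/${location}/Accounts/${accountId}/Videos/${videoId}/Index?accessToken=${accessToken}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const index = await response.json();
  // The observed people section typically appears under the first video's insights.
  console.log(JSON.stringify(index.videos?.[0]?.insights?.observedPeople ?? index, null, 2));
}

getVideoIndex().catch(console.error);
```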
+
+## Observed people tracking and matched faces components
During the observed people tracking and matched faces procedure, images in a media file are processed, as follows: |Component|Definition| |||
-|Source file | The user uploads the source file for indexing. |
-|Detection | The media file is tracked to detect observed people and their clothing. For example, shirt with long sleeves, dress or long pants. Note that to be detected, the full upper body of the person must appear in the media.|
-|Local grouping |The identified observed faces are filtered into local groups. If a person is detected more than once, additional observed faces instances are created for this person. |
-|Matching and Classification |The observed people instances are matched to faces. If there is a known celebrity, the observed person will be given their name. Any number of observed people instances can be matched to the same face. |
-|Confidence value| The estimated confidence level of each observed person is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
+|Source file | The user uploads the source file for indexing. |
+|Detection | The media file is tracked to detect observed people and their clothing. For example, a shirt with long sleeves, a dress, or long pants. Note that to be detected, the full upper body of the person must appear in the media.|
+|Local grouping |The identified observed faces are filtered into local groups. If a person is detected more than once, additional observed face instances are created for this person. |
+|Matching and Classification |The observed people instances are matched to faces. If there is a known celebrity, the observed person will be given their name. Any number of observed people instances can be matched to the same face. |
+|Confidence value| The estimated confidence level of each observed person is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
-## Example use cases
+## Example use cases
- Tracking a person's movement, for example, in law enforcement for more efficiency when analyzing an accident or crime.-- Improving efficiency by deep searching for matched people in organizational archives for insight on specific celebrities, for example when creating promos and trailers.
+- Improving efficiency by deep searching for matched people in organizational archives for insight on specific celebrities, for example when creating promos and trailers.
- Improved efficiency when creating feature stories, for example, searching for people wearing a red shirt in the archives of a football game at a News or Sports agency.
-## Considerations and limitations when choosing a use case
+## Considerations and limitations when choosing a use case
-Below are some considerations to keep in mind when using observed people and matched faces.
+Below are some considerations to keep in mind when using observed people and matched faces.
### Limitations of observed people tracing
It's important to note the limitations of observed people tracing, to avoid or m
* People are generally not detected if they appear small (minimum person height is 100 pixels). * Maximum frame size is FHD
-* Low quality video (for example, dark lighting conditions) may impact the detection results.
-* The recommended frame rate at least 30 FPS.
-* Recommended video input should contain up to 10 people in a single frame. The feature could work with more people in a single frame, but the detection result retrieves up to 10 people in a frame with the detection highest confidence.
-* People with similar clothes: (for example, people wear uniforms, players in sport games) could be detected as the same person with the same ID number.
+* Low quality video (for example, dark lighting conditions) may impact the detection results.
+* The recommended frame rate is at least 30 FPS.
+* Recommended video input should contain up to 10 people in a single frame. The feature can work with more people in a single frame, but the detection result retrieves only the 10 people in a frame with the highest detection confidence.
+* People with similar clothes (for example, people wearing uniforms or players in sports games) could be detected as the same person with the same ID number.
* Obstruction: there may be errors where there are obstructions (scene/self or obstructions by other people).
-* Pose: The tracks may be split due to different poses (back/front)
+* Pose: The tracks may be split due to different poses (back/front)
### Other considerations
-When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+When used responsibly and carefully, Azure AI Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
-- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes. -- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom. -- Commit to respecting and promoting human rights in the design and deployment of your analyzed media. -- When using 3rd party materials, be aware of any existing copyrights or permissions required before distributing content derived from them. -- Always seek legal advice when using media from unknown sources. -- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access. -- Provide a feedback channel that allows users and individuals to report issues with the service. -- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people. -- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making. -- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
+- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
+- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
+- When using 3rd party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+- Always seek legal advice when using media from unknown sources.
+- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
+- Provide a feedback channel that allows users and individuals to report issues with the service.
+- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
+- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
+- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
## Next steps ### Learn More about Responsible AI -- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources) - [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)-- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
### Contact us
-`visupport@microsoft.com`
+`visupport@microsoft.com`
## Azure AI Video Indexer insights
When used responsibly and carefully, Azure AI Video Indexer is a valuable tool f
- [Face detection](face-detection.md) - [Keywords extraction](keywords.md) - [Transcription, translation & language identification](transcription-translation-lid.md)-- [Labels identification](labels-identification.md)
+- [Labels identification](labels-identification.md)
- [Named entities](named-entities.md) - [Topics inference](topics-inference.md)
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md
You can use this library in your app server side to manage the WebSocket client
#### Currently supported environments -- [LTS versions of Node.js](https://nodejs.org/about/releases/)
+- [LTS versions of Node.js](https://nodejs.dev/)
#### Prerequisites
When a WebSocket connection connects, the Web PubSub service transforms the conn
#### Currently supported environments -- [LTS versions of Node.js](https://nodejs.org/about/releases/)
+- [LTS versions of Node.js](https://nodejs.dev/)
- [Express](https://expressjs.com/) version 4.x.x or higher #### Prerequisites
const express = require("express");
const { WebPubSubEventHandler } = require("@azure/web-pubsub-express"); const handler = new WebPubSubEventHandler("chat", {
- path: "customPath1"
+ path: "/customPath1"
}); const app = express();
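To complete the wiring, the event handler's middleware still has to be registered with the Express app; here's a minimal sketch of that step (the port number and log message are illustrative):

```javascript
// Register the Web PubSub event handler middleware so events are delivered to the custom path.
app.use(handler.getMiddleware());

app.listen(3000, () => {
  // handler.path reflects the event handler endpoint the middleware listens on.
  console.log(`Web PubSub event handler listening at http://localhost:3000${handler.path}`);
});
```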
backup Backup Azure Monitoring Use Azuremonitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-use-azuremonitor.md
Recovery Services vaults and Backup vaults send data to a common set of tables t
AddonAzureBackupJobs | where JobOperation == "Restore" | summarize arg_max(TimeGenerated,*) by JobUniqueId
- | where DatasourceType == "Microsoft.Compute/disks"
- | where JobStatus=="Completed"
+ | where DatasourceType == "Microsoft.Compute/disks"
+ | where JobStatus=="Completed"
```` - Backup Storage Consumed per Backup Item ````Kusto CoreAzureBackup
- | where OperationName == "BackupItem"
- | summarize arg_max(TimeGenerated, *) by BackupItemUniqueId
- | project BackupItemUniqueId, BackupItemFriendlyName, StorageConsumedInMBs
- ````
+ | where OperationName == "BackupItem"
+ | summarize arg_max(TimeGenerated, *) by BackupItemUniqueId
+ | project BackupItemUniqueId, BackupItemFriendlyName, StorageConsumedInMBs
+ ````
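If you want to run these Kusto queries programmatically rather than in the portal, the following is a minimal sketch using the `@azure/monitor-query` and `@azure/identity` packages; the workspace ID is a placeholder, and the time range and output handling are illustrative only:

```javascript
// Minimal sketch: run the "Backup Storage Consumed per Backup Item" query against a Log Analytics workspace.
const { LogsQueryClient, Durations } = require("@azure/monitor-query");
const { DefaultAzureCredential } = require("@azure/identity");

const workspaceId = "<log-analytics-workspace-id>"; // placeholder
const client = new LogsQueryClient(new DefaultAzureCredential());

const query = `
CoreAzureBackup
| where OperationName == "BackupItem"
| summarize arg_max(TimeGenerated, *) by BackupItemUniqueId
| project BackupItemUniqueId, BackupItemFriendlyName, StorageConsumedInMBs`;

async function run() {
  const result = await client.queryWorkspace(workspaceId, query, { duration: Durations.sevenDays });
  // On success the result exposes a `tables` array with the query rows.
  for (const table of result.tables ?? []) {
    console.table(table.rows);
  }
}

run().catch(console.error);
```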
### Diagnostic data update frequency
backup Sap Hana Database Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-restore.md
Title: Restore SAP HANA databases on Azure VMs description: In this article, you'll learn how to restore SAP HANA databases that are running on Azure virtual machines. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 07/18/2023 Last updated : 07/31/2023
This article describes how to restore SAP HANA databases that are running on Azu
Azure Backup now supports backup and restore of SAP HANA System Replication (HSR) instance. >[!Note]
->- The restore process for HANA databases with HSR is the same as the restore process for HANA databases without HSR. As per SAP advisories, you can restore databases with HSR mode as *standalone* databases. If the target system has the HSR mode enabled, first disable the mode, and then restore the database.
+>- The restore process for HANA databases with HSR is the same as the restore process for HANA databases without HSR. As per SAP advisories, you can restore databases with HSR mode as *standalone* databases. If the target system has the HSR mode enabled, first disable the mode, and then restore the database. However, if you're restoring as files, you don't need to disable the HSR mode (break the HSR).
>- Original Location Recovery (OLR) is currently not supported for HSR. Alternatively, select **Alternate location** restore, and then select the source VM as your *Host* from the list. >- Restore to HSR instance isn't supported. However, restore only to HANA instance is supported.
backup Delete Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/delete-recovery-services-vault.md
$NWversion = $NWmodule.Version.ToString()
if($RSversion -lt "5.3.0") {
- Uninstall-Module -Name Az.RecoveryServices
- Set-ExecutionPolicy -ExecutionPolicy Unrestricted
- Install-Module -Name Az.RecoveryServices -Repository PSGallery -Force -AllowClobber
+ Uninstall-Module -Name Az.RecoveryServices
+ Set-ExecutionPolicy -ExecutionPolicy Unrestricted
+ Install-Module -Name Az.RecoveryServices -Repository PSGallery -Force -AllowClobber
} if($NWversion -lt "4.15.0") {
- Uninstall-Module -Name Az.Network
- Set-ExecutionPolicy -ExecutionPolicy Unrestricted
- Install-Module -Name Az.Network -Repository PSGallery -Force -AllowClobber
+ Uninstall-Module -Name Az.Network
+ Set-ExecutionPolicy -ExecutionPolicy Unrestricted
+ Install-Module -Name Az.Network -Repository PSGallery -Force -AllowClobber
} Connect-AzAccount
foreach($item in $backupItemsVM)
} Write-Host "Disabled and deleted Azure VM backup items"
-foreach($item in $backupItemsSQL)
+foreach($item in $backupItemsSQL)
{ Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete SQL Server in Azure VM backup items }
foreach($item in $backupContainersSQL)
{ Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister SQL Server in Azure VM protected server }
-Write-Host "Deleted SQL Servers in Azure VM containers"
+Write-Host "Deleted SQL Servers in Azure VM containers"
-foreach($item in $backupItemsSAP)
+foreach($item in $backupItemsSAP)
{ Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $VaultToDelete.ID -RemoveRecoveryPoints -Force #stop backup and delete SAP HANA in Azure VM backup items }
foreach($item in $backupItemsAFS)
Write-Host "Disabled and deleted Azure File Share backups" foreach($item in $StorageAccounts)
- {
+ {
Unregister-AzRecoveryServicesBackupContainer -container $item -Force -VaultId $VaultToDelete.ID #unregister storage accounts } Write-Host "Unregistered Storage Accounts"
-foreach($item in $backupServersMARS)
+foreach($item in $backupServersMARS)
{
- Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister MARS servers and delete corresponding backup items
+ Unregister-AzRecoveryServicesBackupContainer -Container $item -Force -VaultId $VaultToDelete.ID #unregister MARS servers and delete corresponding backup items
} Write-Host "Deleted MARS Servers" foreach($item in $backupServersMABS)
- {
- Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister MABS servers and delete corresponding backup items
+ {
+ Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister MABS servers and delete corresponding backup items
} Write-Host "Deleted MAB Servers"
-foreach($item in $backupServersDPM)
+foreach($item in $backupServersDPM)
{
- Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister DPM servers and delete corresponding backup items
+ Unregister-AzRecoveryServicesBackupManagementServer -AzureRmBackupManagementServer $item -VaultId $VaultToDelete.ID #unregister DPM servers and delete corresponding backup items
} Write-Host "Deleted DPM Servers"
Write-Host "Deleted DPM Servers"
$fabricObjects = Get-AzRecoveryServicesAsrFabric if ($null -ne $fabricObjects) {
- # First DisableDR all VMs.
- foreach ($fabricObject in $fabricObjects) {
- $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
- foreach ($containerObject in $containerObjects) {
- $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
- # DisableDR all protected items
- foreach ($protectedItem in $protectedItems) {
- Write-Host "Triggering DisableDR(Purge) for item:" $protectedItem.Name
- Remove-AzRecoveryServicesAsrReplicationProtectedItem -InputObject $protectedItem -Force
- Write-Host "DisableDR(Purge) completed"
- }
-
- $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
- -ProtectionContainer $containerObject
- # Remove all Container Mappings
- foreach ($containerMapping in $containerMappings) {
- Write-Host "Triggering Remove Container Mapping: " $containerMapping.Name
- Remove-AzRecoveryServicesAsrProtectionContainerMapping -ProtectionContainerMapping $containerMapping -Force
- Write-Host "Removed Container Mapping."
- }
- }
- $NetworkObjects = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject
- foreach ($networkObject in $NetworkObjects)
- {
- #Get the PrimaryNetwork
- $PrimaryNetwork = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject -FriendlyName $networkObject
- $NetworkMappings = Get-AzRecoveryServicesAsrNetworkMapping -Network $PrimaryNetwork
- foreach ($networkMappingObject in $NetworkMappings)
- {
- #Get the Neetwork Mappings
- $NetworkMapping = Get-AzRecoveryServicesAsrNetworkMapping -Name $networkMappingObject.Name -Network $PrimaryNetwork
- Remove-AzRecoveryServicesAsrNetworkMapping -InputObject $NetworkMapping
- }
- }
- # Remove Fabric
- Write-Host "Triggering Remove Fabric:" $fabricObject.FriendlyName
- Remove-AzRecoveryServicesAsrFabric -InputObject $fabricObject -Force
- Write-Host "Removed Fabric."
- }
+ # First DisableDR all VMs.
+ foreach ($fabricObject in $fabricObjects) {
+ $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
+ foreach ($containerObject in $containerObjects) {
+ $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
+ # DisableDR all protected items
+ foreach ($protectedItem in $protectedItems) {
+ Write-Host "Triggering DisableDR(Purge) for item:" $protectedItem.Name
+ Remove-AzRecoveryServicesAsrReplicationProtectedItem -InputObject $protectedItem -Force
+ Write-Host "DisableDR(Purge) completed"
+ }
+
+ $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
+ -ProtectionContainer $containerObject
+ # Remove all Container Mappings
+ foreach ($containerMapping in $containerMappings) {
+ Write-Host "Triggering Remove Container Mapping: " $containerMapping.Name
+ Remove-AzRecoveryServicesAsrProtectionContainerMapping -ProtectionContainerMapping $containerMapping -Force
+ Write-Host "Removed Container Mapping."
+ }
+ }
+ $NetworkObjects = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject
+ foreach ($networkObject in $NetworkObjects)
+ {
+ #Get the PrimaryNetwork
+ $PrimaryNetwork = Get-AzRecoveryServicesAsrNetwork -Fabric $fabricObject -FriendlyName $networkObject
+ $NetworkMappings = Get-AzRecoveryServicesAsrNetworkMapping -Network $PrimaryNetwork
+ foreach ($networkMappingObject in $NetworkMappings)
+ {
+                #Get the Network Mappings
+ $NetworkMapping = Get-AzRecoveryServicesAsrNetworkMapping -Name $networkMappingObject.Name -Network $PrimaryNetwork
+ Remove-AzRecoveryServicesAsrNetworkMapping -InputObject $NetworkMapping
+ }
+ }
+ # Remove Fabric
+ Write-Host "Triggering Remove Fabric:" $fabricObject.FriendlyName
+ Remove-AzRecoveryServicesAsrFabric -InputObject $fabricObject -Force
+ Write-Host "Removed Fabric."
+ }
} foreach($item in $pvtendpoints)
- {
- $penamesplit = $item.Name.Split(".")
- $pename = $penamesplit[0]
- Remove-AzPrivateEndpointConnection -ResourceId $item.Id -Force #remove private endpoint connections
- Remove-AzPrivateEndpoint -Name $pename -ResourceGroupName $ResourceGroup -Force #remove private endpoints
- }
+ {
+ $penamesplit = $item.Name.Split(".")
+ $pename = $penamesplit[0]
+ Remove-AzPrivateEndpointConnection -ResourceId $item.Id -Force #remove private endpoint connections
+ Remove-AzPrivateEndpoint -Name $pename -ResourceGroupName $ResourceGroup -Force #remove private endpoints
+ }
Write-Host "Removed Private Endpoints" #Recheck ASR items in vault
$ASRProtectedItems = 0
$ASRPolicyMappings = 0 $fabricObjects = Get-AzRecoveryServicesAsrFabric if ($null -ne $fabricObjects) {
- foreach ($fabricObject in $fabricObjects) {
- $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
- foreach ($containerObject in $containerObjects) {
- $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
- foreach ($protectedItem in $protectedItems) {
- $ASRProtectedItems++
- }
- $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
- -ProtectionContainer $containerObject
- foreach ($containerMapping in $containerMappings) {
- $ASRPolicyMappings++
- }
- }
- $fabricCount++
- }
+ foreach ($fabricObject in $fabricObjects) {
+ $containerObjects = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabricObject
+ foreach ($containerObject in $containerObjects) {
+ $protectedItems = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $containerObject
+ foreach ($protectedItem in $protectedItems) {
+ $ASRProtectedItems++
+ }
+ $containerMappings = Get-AzRecoveryServicesAsrProtectionContainerMapping `
+ -ProtectionContainer $containerObject
+ foreach ($containerMapping in $containerMappings) {
+ $ASRPolicyMappings++
+ }
+ }
+ $fabricCount++
+ }
} #Recheck presence of backup items in vault $backupItemsVMFin = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID
batch Batch Pool Create Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-create-event.md
Last updated 12/13/2020
The following example shows the body of a pool create event.
-```
+```json
{
- "id": "myPool1",
- "displayName": "Production Pool",
- "vmSize": "Standard_F1s",
- "imageType": "VirtualMachineConfiguration",
- "cloudServiceConfiguration": {
- "osFamily": "3",
- "targetOsVersion": "*"
- },
- "networkConfiguration": {
- "subnetId": " "
- },
- "virtualMachineConfiguration": {
+ "id": "myPool1",
+ "displayName": "Production Pool",
+ "vmSize": "Standard_F1s",
+ "imageType": "VirtualMachineConfiguration",
+ "cloudServiceConfiguration": {
+ "osFamily": "3",
+ "targetOsVersion": "*"
+ },
+ "networkConfiguration": {
+ "subnetId": " "
+ },
+ "virtualMachineConfiguration": {
"imageReference": { "publisher": " ", "offer": " ",
Last updated 12/13/2020
"version": " " }, "nodeAgentId": " "
- },
- "resizeTimeout": "300000",
- "targetDedicatedNodes": 2,
- "targetLowPriorityNodes": 2,
- "taskSlotsPerNode": 1,
- "vmFillType": "Spread",
- "enableAutoScale": false,
- "enableInterNodeCommunication": false,
- "isAutoPool": false
+ },
+ "resizeTimeout": "300000",
+ "targetDedicatedNodes": 2,
+ "targetLowPriorityNodes": 2,
+ "taskSlotsPerNode": 1,
+ "vmFillType": "Spread",
+ "enableAutoScale": false,
+ "enableInterNodeCommunication": false,
+ "isAutoPool": false
} ```
batch Create Pool Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-extensions.md
The following extensions can currently be installed when creating a Batch pool:
- [HPC GPU driver extension for Linux on NVIDIA](../virtual-machines/extensions/hpccompute-gpu-linux.md) - [Microsoft Antimalware extension for Windows](../virtual-machines/extensions/iaas-antimalware-windows.md) - [Azure Monitor agent for Linux](../azure-monitor/agents/azure-monitor-agent-manage.md)
+- [Azure Monitor agent for Windows](../azure-monitor/agents/azure-monitor-agent-manage.md)
You can request support for additional publishers and/or extension types by opening a support request.
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/java.md
Although this application is written in Java, the API is a RESTful Web service c
public class EntitySearch { static String subscriptionKey = "ENTER KEY HERE";
-
- static String host = "https://api.bing.microsoft.com";
- static String path = "/v7.0/search";
-
- static String mkt = "en-US";
- static String query = "italian restaurant near me";
+
+ static String host = "https://api.bing.microsoft.com";
+ static String path = "/v7.0/search";
+
+ static String mkt = "en-US";
+ static String query = "italian restaurant near me";
//...
-
+ ``` ## Construct a search request string 1. Create a function called `search()` that returns a JSON `String`. url-encode your search query, and add it to a parameters string with `&q=`. Add your market to the parameter string with `?mkt=`.
-
+ 2. Create a URL object with your host, path, and parameters strings.
-
+ ```java //... public static String search () throws Exception {
Although this application is written in Java, the API is a RESTful Web service c
URL url = new URL (host + path + params); //... ```
-
+ ## Send a search request and receive a response 1. In the `search()` function created above, create a new `HttpsURLConnection` object with `url.openConnection()`. Set the request method to `GET`, and add your subscription key to the `Ocp-Apim-Subscription-Key` header.
Although this application is written in Java, the API is a RESTful Web service c
//... ```
-2. Create a new `StringBuilder`. Use a new `InputStreamReader` as a parameter when instantiating `BufferedReader` to read the API response.
-
+2. Create a new `StringBuilder`. Use a new `InputStreamReader` as a parameter when instantiating `BufferedReader` to read the API response.
+ ```java //... StringBuilder response = new StringBuilder ();
Although this application is written in Java, the API is a RESTful Web service c
//... ```
-3. Create a `String` object to store the response from the `BufferedReader`. Iterate through it, and append each line to the string. Then, close the reader and return the response.
-
+3. Create a `String` object to store the response from the `BufferedReader`. Iterate through it, and append each line to the string. Then, close the reader and return the response.
+ ```java String line;
-
+ while ((line = in.readLine()) != null) { response.append(line); } in.close();
-
+ return response.toString(); ``` ## Format the JSON response
-1. Create a new function called `prettify` to format the JSON response. Create a new `JsonParser`, call `parse()` on the JSON text, and then store it as a JSON object.
+1. Create a new function called `prettify` to format the JSON response. Create a new `JsonParser`, call `parse()` on the JSON text, and then store it as a JSON object.
+
+2. Use the Gson library to create a new `GsonBuilder()`, use `setPrettyPrinting().create()` to format the JSON, and then return it.
-2. Use the Gson library to create a new `GsonBuilder()`, use `setPrettyPrinting().create()` to format the JSON, and then return it.
-
```java //... public static String prettify (String json_text) {
Although this application is written in Java, the API is a RESTful Web service c
## Call the search function - From the main method of your project, call `search()`, and use `prettify()` to format the text.
-
+ ```java
- public static void main(String[] args) {
- try {
- String response = search ();
- System.out.println (prettify (response));
- }
- catch (Exception e) {
- System.out.println (e);
- }
- }
+ public static void main(String[] args) {
+ try {
+ String response = search ();
+ System.out.println (prettify (response));
+ }
+ catch (Exception e) {
+ System.out.println (e);
+ }
+ }
``` ## Example JSON response
-A successful response is returned in JSON, as shown in the following example:
+A successful response is returned in JSON, as shown in the following example:
```json {
A successful response is returned in JSON, as shown in the following example:
}, "telephone": "(800) 555-1212" },
-
+ . . . ] }
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/java.md
[!INCLUDE [Bing move notice](../../bing-web-search/includes/bing-move-notice.md)]
-Use this quickstart to make your first call to the Bing Spell Check REST API. This simple Java application sends a request to the API and returns a list of suggested corrections.
+Use this quickstart to make your first call to the Bing Spell Check REST API. This simple Java application sends a request to the API and returns a list of suggested corrections.
Although this application is written in Java, the API is a RESTful web service compatible with most programming languages. The source code for this application is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/java/Search/BingSpellCheck.java).
Although this application is written in Java, the API is a RESTful web service c
1. Create a function called `check()` to create and send the API request. Within this function, add the code specified in the next steps. Create a string for the request parameters:
- 1. Assign your market code to the `mkt` parameter with the `=` operator.
+ 1. Assign your market code to the `mkt` parameter with the `=` operator.
- 1. Add the `mode` parameter with the `&` operator, and then assign the spell-check mode.
+ 1. Add the `mode` parameter with the `&` operator, and then assign the spell-check mode.
```java public static void check () throws Exception {
- String params = "?mkt=" + mkt + "&mode=" + mode;
- // add the rest of the code snippets here (except prettify() and main())...
+ String params = "?mkt=" + mkt + "&mode=" + mode;
+ // add the rest of the code snippets here (except prettify() and main())...
} ``` 2. Create a URL by combining the endpoint host, path, and parameters string. Create a new `HttpsURLConnection` object.
- ```java
- URL url = new URL(host + path + params);
- HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
- ```
+ ```java
+ URL url = new URL(host + path + params);
+ HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
+ ```
3. Open a connection to the URL. Set the request method to `POST` and add your request parameters. Be sure to add your subscription key to the `Ocp-Apim-Subscription-Key` header.
- ```java
- connection.setRequestMethod("POST");
- connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
- connection.setRequestProperty("Ocp-Apim-Subscription-Key", key);
- connection.setDoOutput(true);
- ```
+ ```java
+ connection.setRequestMethod("POST");
+ connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
+ connection.setRequestProperty("Ocp-Apim-Subscription-Key", key);
+ connection.setDoOutput(true);
+ ```
4. Create a new `DataOutputStream` object and send the request to the API.
- ```java
- DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
- wr.writeBytes("text=" + text);
- wr.flush();
- wr.close();
- ```
+ ```java
+ DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
+ wr.writeBytes("text=" + text);
+ wr.flush();
+ wr.close();
+ ```
## Format and read the API response 1. Add the `prettify()` method to your class, which formats the JSON for a more readable output.
- ``` java
- // This function prettifies the json response.
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonElement json = parser.parse(json_text);
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
- ```
+ ``` java
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+ JsonParser parser = new JsonParser();
+ JsonElement json = parser.parse(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+ ```
1. Create a `BufferedReader` and read the response from the API. Print it to the console.
-
+ ```java
- BufferedReader in = new BufferedReader(
- new InputStreamReader(connection.getInputStream()));
- String line;
- while ((line = in.readLine()) != null) {
- System.out.println(prettify(line));
- }
- in.close();
+ BufferedReader in = new BufferedReader(
+ new InputStreamReader(connection.getInputStream()));
+ String line;
+ while ((line = in.readLine()) != null) {
+ System.out.println(prettify(line));
+ }
+ in.close();
``` ## Call the API In the main function of your application, call your `check()` method created previously. ```java
- public static void main(String[] args) {
- try {
- check();
- }
- catch (Exception e) {
- System.out.println (e);
- }
- }
+public static void main(String[] args) {
+ try {
+ check();
+ }
+ catch (Exception e) {
+ System.out.println (e);
+ }
+}
``` ## Run the application
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Spell-Check/quickstarts/php.md
[!INCLUDE [Bing move notice](../../bing-web-search/includes/bing-move-notice.md)]
-Use this quickstart to make your first call to the Bing Spell Check REST API. This simple PHP application sends a request to the API and returns a list of suggested corrections.
+Use this quickstart to make your first call to the Bing Spell Check REST API. This simple PHP application sends a request to the API and returns a list of suggested corrections.
Although this application is written in PHP, the API is a RESTful Web service compatible with most programming languages.
Although this application is written in PHP, the API is a RESTful Web service co
3. Replace the `subscriptionKey` value with an access key valid for your subscription. 4. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource. 5. Run the program.
-
+ ```php <?php
-
+ // NOTE: Be sure to uncomment the following line in your php.ini file. // ;extension=php_openssl.dll
-
+ // These properties are used for optional headers (see below). // define("CLIENT_ID", "<Client ID from Previous Response Goes Here>"); // define("CLIENT_IP", "999.999.999.999"); // define("CLIENT_LOCATION", "+90.0000000000000;long: 00.0000000000000;re:100.000000000000");
-
+ $host = 'https://api.cognitive.microsoft.com'; $path = '/bing/v7.0/spellcheck?'; $params = 'mkt=en-us&mode=proof';
-
+ $input = "Hollo, wrld!";
-
+ $data = array (
- 'text' => urlencode ($input)
+ 'text' => urlencode ($input)
);
-
+ // NOTE: Replace this example key with a valid subscription key. $key = 'ENTER KEY HERE';
-
+ // The following headers are optional, but it is recommended // that they are treated as required. These headers will assist the service // with returning more accurate results. //'X-Search-Location' => CLIENT_LOCATION //'X-MSEdge-ClientID' => CLIENT_ID //'X-MSEdge-ClientIP' => CLIENT_IP
-
+ $headers = "Content-type: application/x-www-form-urlencoded\r\n" .
- "Ocp-Apim-Subscription-Key: $key\r\n";
-
+ "Ocp-Apim-Subscription-Key: $key\r\n";
+ // NOTE: Use the key 'http' even if you are making an HTTPS request. See: // https://php.net/manual/en/function.stream-context-create.php $options = array (
Although this application is written in PHP, the API is a RESTful Web service co
); $context = stream_context_create ($options); $result = file_get_contents ($host . $path . $params, false, $context);
-
+ if ($result === FALSE) {
- /* Handle error */
+ /* Handle error */
}
-
+ $json = json_encode(json_decode($result), JSON_UNESCAPED_UNICODE | JSON_PRETTY_PRINT); echo $json; ?>
Run your application by starting a web server and navigating to your file.
## Example JSON response
-A successful response is returned in JSON, as shown in the following example:
+A successful response is returned in JSON, as shown in the following example:
```json {
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
# Playing audio in call
-The play action provided through the call automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. The play action allows you to provide ACS access to your pre-recorded audio files with support for authentication.
+The play action provided through the call automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. The play action allows you to provide Azure Communication Services access to your pre-recorded audio files with support for authentication.
> [!NOTE]
-> ACS currently only supports WAV files formatted as mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../ai-services/Speech-Service/how-to-audio-content-creation.md).
+> Azure Communication Services currently only supports WAV files formatted as mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../ai-services/Speech-Service/how-to-audio-content-creation.md).
-The Play action allows you to provide access to a pre-recorded audio file of WAV format that ACS can access with support for authentication.
+The Play action allows you to provide access to a pre-recorded audio file of WAV format that Azure Communication Services can access with support for authentication.
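As an illustrative sketch (assuming the JavaScript Call Automation SDK, `@azure/communication-call-automation`), the following shows one way to play a WAV prompt to all participants of an established call; the connection string, call connection ID, and file URL are placeholders, and exact method signatures can vary by SDK version:

```javascript
// Illustrative sketch: play a pre-recorded WAV prompt to everyone in an established call.
const { CallAutomationClient } = require("@azure/communication-call-automation");

const client = new CallAutomationClient("<acs-connection-string>"); // placeholder

async function playPrompt(callConnectionId) {
  const callMedia = client.getCallConnection(callConnectionId).getCallMedia();
  // The file must be a 16 KHz, mono WAV file hosted at a location the service can reach.
  const playSource = { kind: "fileSource", url: "https://contoso.com/prompts/welcome.wav" }; // placeholder URL
  await callMedia.playToAll([playSource]);
}
```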
## Common use cases
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
Sending a high volume of messages has a set of limitations on the number of emai
|Total email request size (including attachments) |10 MB | ### Action to take
-This sandbox setup is to help developers start building the application. You can gradually request to increase the sending volume once the application is ready to go live. Submit a support request to raise your desired sending limit if you require sending a volume of messages exceeding the rate limits.
+This sandbox setup is designed to help developers begin building the application. Once the application is ready for production, you can gradually request to increase the sending volume. If you need to send more messages than the rate limits allow, submit a support request to raise your desired email sending limit. The reviewing team will consider your overall sender reputation, which includes factors such as your email delivery failure rates, your domain reputation, and reports of spam and abuse, when determining approval status.
## Chat
communication-services End Of Call Survey Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/end-of-call-survey-tutorial.md
# Use the End of Call Survey to collect user feedback
-> [!NOTE]
+> [!NOTE]
> End of Call Survey is currently supported only for our JavaScript / Web SDK. This tutorial shows you how to use the Azure Communication Services End of Call Survey for JavaScript / Web SDK.
This tutorial shows you how to use the Azure Communication Services End of Call
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- [Node.js](https://nodejs.org/) active Long Term Support(LTS) versions are recommended.
+- [Node.js](https://nodejs.org/) active Long Term Support(LTS) versions are recommended.
-- An active Communication Services resource. [Create a Communication Services resource](../quickstarts/create-communication-resource.md). Survey results are tied to single Communication Services resources.-- An active Log Analytics Workspace, also known as Azure Monitor Logs. See [End of Call Survey Logs](../concepts/analytics/logs/end-of-call-survey-logs.md).-- To conduct a survey with custom questions using free form text, you need an [App Insight resource](../../azure-monitor/app/create-workspace-resource.md#create-a-workspace-based-resource).
+- An active Communication Services resource. [Create a Communication Services resource](../quickstarts/create-communication-resource.md). Survey results are tied to single Communication Services resources.
+- An active Log Analytics Workspace, also known as Azure Monitor Logs. See [End of Call Survey Logs](../concepts/analytics/logs/end-of-call-survey-logs.md).
+- To conduct a survey with custom questions using free form text, you need an [App Insight resource](../../azure-monitor/app/create-workspace-resource.md#create-a-workspace-based-resource).
> [!IMPORTANT]
The End of Call Survey feature should be used after the call ends. Users can rat
The following code snips show an example of one-to-one call. After the end of the call, your application can show a survey UI and once the user chooses a rating, your application should call the feature API to submit the survey with the user choices.
-We encourage you to use the default rating scale. However, you can submit a survey with custom rating scale. You can check out the [sample application](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/blob/main/Project/src/MakeCall/CallSurvey.js) for the sample API usage.
+We encourage you to use the default rating scale. However, you can submit a survey with a custom rating scale. You can check out the [sample application](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/blob/main/Project/src/MakeCall/CallSurvey.js) for sample API usage.
### Rate call only - no custom scale
call.feature(Features.CallSurvey).submitSurvey({
``` ### Handle errors the SDK can send
- ``` javascript
+ ``` javascript
call.feature(Features.CallSurvey).submitSurvey({ overallRating: { score: 3 } }).catch((e) => console.log('error when submitting survey: ' + e))
call.feature(Features.CallSurvey).submitSurvey({
## Find different types of errors
-
+ ### Failures while submitting survey:
The API will return the following error messages if data validation fails or the
``` - One 408 (timeout) when event discarded:
-
+ ``` { message: "Please try again.", code: 408 } ```
The API will return the following error messages if data validation fails or the
### Default survey API configuration | API Rating Categories | Cutoff Value* | Input Range | Comments |
-| -- | -- | -- | -- |
+| -- | -- | -- | -- |
| Overall Call | 2 | 1 - 5 | Surveys a calling participantΓÇÖs overall quality experience on a scale of 1-5. A response of 1 indicates an imperfect call experience and 5 indicates a perfect call. The cutoff value of 2 means that a customer response of 1 or 2 indicates a less than perfect call experience. | | Audio | 2 | 1 - 5 | A response of 1 indicates an imperfect audio experience and 5 indicates no audio issues were experienced. | | Video | 2 | 1 - 5 | A response of 1 indicates an imperfect video experience and 5 indicates no video issues were experienced. |
The API will return the following error messages if data validation fails or the
-> [!NOTE]
+> [!NOTE]
>A question's indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
about their audio, video, and screen share experience. You can also
customize input ranges to suit your needs. The default input range is 1 to 5 for Overall Call, Audio, Video, and Screenshare. However, each API value can be customized from a minimum of
-0 to maximum of 100.
+0 to a maximum of 100.
### Customization examples | API Rating Categories | Cutoff Value* | Input Range |
-| -- | -- | -- |
-| Overall Call | 0 - 100 | 0 - 100 |
-| Audio | 0 - 100 | 0 - 100 |
-| Video | 0 - 100 | 0 - 100 |
-| Screenshare | 0 - 100 | 0 - 100 |
+| -- | -- | -- |
+| Overall Call | 0 - 100 | 0 - 100 |
+| Audio | 0 - 100 | 0 - 100 |
+| Video | 0 - 100 | 0 - 100 |
+| Screenshare | 0 - 100 | 0 - 100 |
> [!NOTE] > A question's indicated cutoff value in the API is the threshold that Microsoft uses when analyzing your survey data. When you customize the cutoff value or Input Range, Microsoft analyzes your survey data according to your customization.
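For example, a survey submission that uses a customized 0-100 scale might look like the following sketch; the `scale` property names (`lowerBound`, `upperBound`, `lowScoreThreshold`) are assumed from the Calling SDK's survey types, so verify them against the SDK version you use:

```javascript
// Illustrative sketch: submit an overall rating on a customized 0-100 scale.
call.feature(Features.CallSurvey).submitSurvey({
  overallRating: {
    score: 85,
    scale: {
      lowerBound: 0,         // minimum value a user can submit (assumed property name)
      upperBound: 100,       // maximum value a user can submit (assumed property name)
      lowScoreThreshold: 10  // scores at or below this value count as a poor experience (assumed property name)
    }
  }
}).catch((e) => console.log('error when submitting survey: ' + e));
```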
In addition to using the End of Call Survey API you can create your own survey q
- Build a UI in your application that will serve custom questions to the user and gather their input. Let's assume that your application gathered responses as a string in the `improvementSuggestion` variable. - Submit survey results to ACS and send the user response using App Insights:
- ``` javascript
- currentCall.feature(SDK.Features.CallSurvey).submitSurvey(survey).then(res => {
- // `improvementSuggesstion` contains custom, user response
+ ``` javascript
+ currentCall.feature(SDK.Features.CallSurvey).submitSurvey(survey).then(res => {
+    // `improvementSuggestion` contains the custom user response
if (improvementSuggestion !== '') {
- appInsights.trackEvent({
+ appInsights.trackEvent({
name: "CallSurvey", properties: { // Survey ID to correlate the survey id: res.id,
In addition to using the End of Call Survey API you can create your own survey q
} }); }
- });
- appInsights.flush();
- ```
+ });
+ appInsights.flush();
+ ```
User responses that were sent using AppInsights are available under your App Insights workspace. You can use [Workbooks](../../update-center/workbooks.md) to query between multiple resources, correlate call ratings and custom survey data. Steps to correlate the call ratings and custom survey data: - Create new [Workbooks](../../update-center/workbooks.md) (Your ACS Resource -> Monitoring -> Workbooks -> New) and query Call Survey data from your ACS resource. - Add new query (+Add -> Add query)
User responses that were sent using AppInsights are available under your App Ins
> [!IMPORTANT] > You must enable a Diagnostic Setting in Azure Monitor to send the log data of your surveys to a Log Analytics workspace, Event Hubs, or an Azure storage account to receive and analyze your survey data. If you do not send survey data to one of these options your survey data will not be stored and will be lost. To enable these logs for your Communications Services, see: [End of Call Survey Logs](../concepts/analytics/logs/end-of-call-survey-logs.md)
-
### View survey data with a Log Analytics workspace
-You need to enable a Log Analytics Workspace to both store the log data of your surveys and access survey results. To enable these logs for your Communications Service, see: [End of Call Survey Logs](../concepts/analytics/logs/end-of-call-survey-logs.md).
+You need to enable a Log Analytics Workspace to both store the log data of your surveys and access survey results. To enable these logs for your Communications Service, see: [End of Call Survey Logs](../concepts/analytics/logs/end-of-call-survey-logs.md).
-- You can also integrate your Log Analytics workspace with Power BI, see: [Integrate Log Analytics with Power BI](../../../articles/azure-monitor/logs/log-powerbi.md).
+- You can also integrate your Log Analytics workspace with Power BI, see: [Integrate Log Analytics with Power BI](../../../articles/azure-monitor/logs/log-powerbi.md).
## Best practices Here are our recommended survey flows and suggested question prompts for consideration. Your development can use our recommendation or use customized question prompts and flows for your visual interface. **Question 1:** How did the users perceive their overall call quality experience?
-We recommend you start the survey by only asking about the participants' overall quality. If you separate the first and second questions, it helps to only collect responses to Audio, Video, and Screen Share issues if a survey participant indicates they experienced call quality issues.
+We recommend you start the survey by only asking about the participants' overall quality. If you separate the first and second questions, it helps to only collect responses to Audio, Video, and Screen Share issues if a survey participant indicates they experienced call quality issues.
-- Suggested prompt: "How was the call quality?" -- API Question Values: Overall Call
+- Suggested prompt: "How was the call quality?"
+- API Question Values: Overall Call
**Question 2:** Did the user perceive any Audio, Video, or Screen Sharing issues in the call? If a survey participant responded to Question 1 with a score at or below the cutoff value for the overall call, then present the second question. -- Suggested prompt: "What could have been better?" -- API Question Values: Audio, Video, and Screenshare
+- Suggested prompt: "What could have been better?"
+- API Question Values: Audio, Video, and Screenshare
### Surveying Guidelines-- Avoid survey burnout, don't survey all call participants.-- The order of your questions matters. We recommend you randomize the sequence of optional tags in Question 2 in case respondents focus most of their feedback on the first prompt they visually see.-- Consider using surveys for separate Azure Communication Services Resources in controlled experiments to identify release impacts.
+- Avoid survey burnout, don't survey all call participants.
+- The order of your questions matters. We recommend you randomize the sequence of optional tags in Question 2 in case respondents focus most of their feedback on the first prompt they visually see.
+- Consider using surveys for separate Azure Communication Services Resources in controlled experiments to identify release impacts.
## Next steps
If a survey participant responded to Question 1 with a score at or below the cut
- Learn more about the End of Call Survey, see: [End of Call Survey overview](../concepts/voice-video-calling/end-of-call-survey-concept.md) -- Learn how to use the Log Analytics workspace, see: [Log Analytics Tutorial](../../../articles/azure-monitor/logs/log-analytics-tutorial.md)
+- Learn how to use the Log Analytics workspace, see: [Log Analytics Tutorial](../../../articles/azure-monitor/logs/log-analytics-tutorial.md)
-- Create your own queries in Log Analytics, see: [Get Started Queries](../../../articles/azure-monitor/logs/get-started-queries.md)
+- Create your own queries in Log Analytics, see: [Get Started Queries](../../../articles/azure-monitor/logs/get-started-queries.md)
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
The latest management API versions for Azure Container Apps are:
- [`2022-10-01`](/rest/api/containerapps/stable/container-apps) (stable) - [`2023-04-01-preview`](/rest/api/containerapps/preview/container-apps) (preview)
+To learn more about the differences between API versions, see [Microsoft.App change log](/azure/templates/microsoft.app/change-log/summary).
+ ### Updating API versions To use a specific API version in ARM or Bicep, update the version referenced in your templates. To use the latest API version in the Azure CLI, update the Azure Container Apps extension by running the following command:
cosmos-db Troubleshoot Nohostavailable Exception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/troubleshoot-nohostavailable-exception.md
# Troubleshoot NoHostAvailableException and NoNodeAvailableException
-NoHostAvailableException is a top-level wrapper exception with many possible causes and inner exceptions, many of which can be client-related. This exception tends to occur if there are some issues with the cluster or connection settings, or if one or more Cassandra nodes are unavailable.
+NoHostAvailableException is a top-level wrapper exception with many possible causes and inner exceptions, many of which can be client-related. This exception tends to occur if there are some issues with the cluster or connection settings, or if one or more Cassandra nodes are unavailable.
This article explores possible reasons for this exception, and it discusses specific details about the client driver that's being used.
This article explores possible reasons for this exception, and it discusses spec
One of the most common causes of NoHostAvailableException is the default driver settings. We recommend that you use the [settings](#code-sample) listed at the end of this article. Here is some explanatory information: - The default value of the connections per host is 1, which we don't recommend for Azure Cosmos DB. We do recommend a minimum value of 10. Although more aggregated Request Units (RU) are provided, increase the connection count. The general guideline is 10 connections per 200,000 RU.-- Use the Azure Cosmos DB retry policy to handle intermittent throttling responses. For more information, see the Azure Cosmos DB extension libraries:
+- Use the Azure Cosmos DB retry policy to handle intermittent throttling responses. For more information, see the Azure Cosmos DB extension libraries:
- [Driver 3 extension library](https://github.com/Azure/azure-cosmos-cassandra-extensions) - [Driver 4 extension library](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1) - For multi-region accounts, use the Azure Cosmos DB load-balancing policy in the extension.
Apply one of the following options:
### All hosts tried for query failed When the client is set to connect to a region other than the primary contact point region, during the initial few seconds at startup, you'll get one of the following exception messages:
-
+ - For Java driver 3: `Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)at cassandra.driver.core@3.10.2/com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:83)` - For Java driver 4: `No node was available to execute the query`
Use CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-
// https://docs.datastax.com/en/developer/java-driver/3.6/manual/socket_options/ SocketOptions socketOptions = new SocketOptions() .setReadTimeoutMillis(90000); // default 12000
-
+ // connection pooling options (default values are 1s) // https://docs.datastax.com/en/developer/java-driver/3.6/manual/pooling/ PoolingOptions poolingOptions = new PoolingOptions()
Use CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-
    .setMaxConnectionsPerHost(HostDistance.LOCAL, 10)   // default 1
    .setCoreConnectionsPerHost(HostDistance.REMOTE, 10) // default 1
    .setMaxConnectionsPerHost(HostDistance.REMOTE, 10); // default 1

// Azure Cosmos DB load balancing policy
String Region = "West US";
CosmosLoadBalancingPolicy cosmosLoadBalancingPolicy = CosmosLoadBalancingPolicy.builder()
    .withWriteDC(Region)
    .withReadDC(Region)
    .build();

// Azure Cosmos DB retry policy
CosmosRetryPolicy retryPolicy = CosmosRetryPolicy.builder()
    .withFixedBackOffTimeInMillis(5000)
    .withGrowingBackOffTimeInMillis(1000)
    .withMaxRetryCount(5)
    .build();

Cluster cluster = Cluster.builder()
    .addContactPoint(EndPoint).withPort(10350)
    .withCredentials(UserName, Password)
Use CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-
// driver configurations
// https://docs.datastax.com/en/developer/java-driver/4.6/manual/core/configuration/
ProgrammaticDriverConfigLoaderBuilder configBuilder = DriverConfigLoader.programmaticBuilder();

// connection settings
// https://docs.datastax.com/en/developer/java-driver/4.6/manual/core/pooling/
configBuilder
Use CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-
    .withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofSeconds(90)) // default 2
    .withClass(DefaultDriverOption.RECONNECTION_POLICY_CLASS, ConstantReconnectionPolicy.class) // default ExponentialReconnectionPolicy
    .withBoolean(DefaultDriverOption.METADATA_TOKEN_MAP_ENABLED, false); // default true

// load balancing settings
// https://docs.datastax.com/en/developer/java-driver/4.6/manual/core/load_balancing/
String Region = "West US";
Use CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-
// retry policy
// https://docs.datastax.com/en/developer/java-driver/4.6/manual/core/retries/
configBuilder
    .withClass(DefaultDriverOption.RETRY_POLICY_CLASS, CosmosRetryPolicy.class)
    .withInt(CosmosRetryPolicyOption.FIXED_BACKOFF_TIME, 5000)
    .withInt(CosmosRetryPolicyOption.GROWING_BACKOFF_TIME, 1000)
    .withInt(CosmosRetryPolicyOption.MAX_RETRIES, 5);
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md
CosmosPagedFlux<MyItem> filteredItems =
Pre-fetching works the same way regardless of the degree of parallelism, and there's a single buffer for the data from all partitions.
+## Optimizing single partition queries with Optimistic Direct Execution
+
+Azure Cosmos DB NoSQL has an optimization called Optimistic Direct Execution (ODE), which can improve the efficiency of certain NoSQL queries. Specifically, queries that don't require distribution include those that can be executed on a single physical partition or that have responses that don't require [pagination](query/pagination.md). Queries that don't require distribution can confidently skip some processes, such as client-side query plan generation and query rewrite, thereby reducing query latency and RU cost. If you specify the partition key in the request or query itself (or have only one physical partition), and the results of your query don't require pagination, then ODE can improve your queries.
+
+Single partition queries that feature GROUP BY, ORDER BY, DISTINCT, and aggregation functions (like sum, mean, min, and max) can significantly benefit from using ODE. However, in scenarios where the query is targeting multiple partitions or still requires pagination, the latency of the query response and RU cost might be higher than without using ODE. Therefore, when using ODE, we recommend that you:
+- Specify the partition key in the call or query itself.
+- Ensure that your data size hasn't grown and caused the partition to split.
+- Ensure that your query results don't require pagination to get the full benefit of ODE.
+
+Here are a few examples of simple single partition queries which can benefit from ODE:
+```
+- SELECT * FROM r
+- SELECT VALUE r.id FROM r
+- SELECT * FROM r WHERE r.id > 5
+- SELECT r.id FROM r JOIN id IN r.id
+- SELECT TOP 5 r.id FROM r ORDER BY r.id
+- SELECT * FROM r WHERE r.id > 5 OFFSET 5 LIMIT 3
+```
+There can be cases where single partition queries may still require distribution if the number of data items increases over time and your Azure Cosmos DB database [splits the partition](../partitioning-overview.md#physical-partitions). Examples of queries where this could occur include:
+```
+- SELECT Count(r.id) AS count_a FROM r
+- SELECT DISTINCT r.id FROM r
+- SELECT Max(r.a) as min_a FROM r
+- SELECT Avg(r.a) as min_a FROM r
+- SELECT Sum(r.a) as sum_a FROM r WHERE r.a > 0
+```
+Some complex queries can always require distribution, even if targeting a single partition. Examples of such queries include:
+```
+- SELECT Sum(id) as sum_id FROM r JOIN id IN r.id
+- SELECT DISTINCT r.id FROM r GROUP BY r.id
+- SELECT DISTINCT r.id, Sum(r.id) as sum_a FROM r GROUP BY r.id
+- SELECT Count(1) FROM (SELECT DISTINCT r.id FROM root r)
+- SELECT Avg(1) AS avg FROM root r
+```
+
+It's important to note that ODE might not always retrieve the query plan and, as a result, might not be able to disallow or turn off ODE for unsupported queries. For example, after a partition split, such queries are no longer eligible for ODE and, therefore, won't run, because client-side query plan evaluation blocks them. To ensure compatibility and service continuity, it's critical to ensure that only queries that are fully supported in scenarios without ODE (that is, they execute and produce the correct result in the general multi-partition case) are used with ODE.
+
+### Using ODE via the SDKs
+ODE is now available and enabled by default in the C# Preview SDK for versions 3.35.0 and later. When you execute a query and specify a partition key in the request or query itself, or your database has only one physical partition, your query execution can leverage the benefits of ODE.
+
+To disable ODE, set the flag `EnableOptimisticDirectExecution` to false in your QueryRequestOptions object.
## Next steps

To learn more about performance using the Java SDK:
cosmos-db Computed Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/computed-properties.md
There are a few considerations for indexing computed properties, including:
- Wildcard paths under the computed property path work like they do for regular properties. -- If you're removing a computed property that has been indexed, all indexes on that property must also be dropped.
+- If you're creating, updating, or removing a computed property, all indexes on that property name must be dropped first.
> [!NOTE]
> All computed properties are defined at the top level of the item. The path is always `/<computed property name>`.
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 07/09/2023 Last updated : 07/27/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that don't directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters.
-### June 2023
+### July 2023
* General availability: Terraform support is now available for all cluster management operations. See the following pages for details: * [Cluster management](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_postgresql_cluster) * [Worker node configuration](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_postgresql_node_configuration)
Updates that change cluster internals, such as installing a [new minor PostgreSQ
* [Private access: Private Link service management](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/private_link_service) * General availability: 99.99% monthly availability [Service Level Agreement (SLA)](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services).
### June 2023
* General availability: Customer-defined database name is now available in [all regions](./resources-regions.md) at [cluster provisioning](./quickstart-create-portal.md) time. * If the database name isn't specified, the default `citus` name is used. * General availability: [Managed PgBouncer settings](./reference-parameters.md#managed-pgbouncer-parameters) are now configurable on all clusters.
Updates that change cluster internals, such as installing a [new minor PostgreSQ
* Preview: Audit logging of database activities in Azure Cosmos DB for PostgreSQL is available through the PostgreSQL pgAudit extension. * See [details](./how-to-enable-audit.md).
### May 2023
* General availability: [Pgvector extension](howto-use-pgvector.md) enabling vector storage is now fully supported on Azure Cosmos DB for Postgres. * General availability: [The latest minor PostgreSQL version updates](reference-versions.md#postgresql-versions) (11.20, 12.15, 13.11, 14.8, and 15.3) are now available in all supported regions.
Updates that change cluster internals, such as installing a [new minor PostgreSQ
* See [this page](./reference-extensions.md#citus-extension) for the latest supported Citus versions. * See [this page](./concepts-upgrade.md) for information on PostgreSQL and Citus version in-place upgrade.
### April 2023
* General availability: [Representational State Transfer (REST) APIs](/rest/api/postgresqlhsc/) are now fully supported for all cluster management operations. * General availability: [Bicep](/azure/templates/microsoft.dbforpostgresql/servergroupsv2?pivots=deployment-language-bicep) and [ARM templates](/azure/templates/microsoft.dbforpostgresql/servergroupsv2?pivots=deployment-language-arm-template) for Azure Cosmos DB for PostgreSQL's serverGroupsv2 resource type.
cosmos-db Reference Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-metadata.md
function.
- Setting a node capacity exception by hostname pattern:
  ```postgresql
CREATE FUNCTION v2_node_double_capacity(nodeidarg int) RETURNS boolean AS $$ SELECT
cosmos-db How To Use Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-go.md
To follow along with this tutorial you'll need an Azure resource group, a storag
1. Create an Azure resource group.
    ```azurecli
    az group create --name myResourceGroup --location eastus
    ```
2. Next create an Azure storage account for your new Azure Table.
    ```azurecli
    az storage account create --name <storageAccountName> --resource-group myResourceGroup --location eastus --sku Standard_LRS
    ```
3. Create a table resource.
    ```azurecli
    az storage table create --account-name <storageAccountName> --account-key 'storageKey' --name mytable
    ```
### Install packages
Next, create a file called `main.go`, then copy below into it:
```go
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "os"

    "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/data/aztables"
)

type InventoryEntity struct {
    aztables.Entity
    Price       float32
    Inventory   int32
    ProductName string
    OnSale      bool
}

type PurchasedEntity struct {
    aztables.Entity
    Price       float32
    ProductName string
    OnSale      bool
}

func getClient() *aztables.Client {
    accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT")
    if !ok {
        panic("AZURE_STORAGE_ACCOUNT environment variable not found")
    }

    tableName, ok := os.LookupEnv("AZURE_TABLE_NAME")
    if !ok {
        panic("AZURE_TABLE_NAME environment variable not found")
    }

    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        panic(err)
    }
    serviceURL := fmt.Sprintf("https://%s.table.core.windows.net/%s", accountName, tableName)
    client, err := aztables.NewClient(serviceURL, cred, nil)
    if err != nil {
        panic(err)
    }
    return client
}

func createTable(client *aztables.Client) {
    //TODO: Check access policy, Storage Blob Data Contributor role needed
    _, err := client.Create(context.TODO(), nil)
    if err != nil {
        panic(err)
    }
}

func addEntity(client *aztables.Client) {
    myEntity := InventoryEntity{
        Entity: aztables.Entity{
            PartitionKey: "pk001",
            RowKey:       "rk001",
        },
        Price:       3.99,
        Inventory:   20,
        ProductName: "Markers",
        OnSale:      false,
    }

    marshalled, err := json.Marshal(myEntity)
    if err != nil {
        panic(err)
    }

    _, err = client.AddEntity(context.TODO(), marshalled, nil) // TODO: Check access policy, need Storage Table Data Contributor role
    if err != nil {
        panic(err)
    }
}

func listEntities(client *aztables.Client) {
    listPager := client.List(nil)
    pageCount := 0
    for listPager.More() {
        response, err := listPager.NextPage(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("There are %d entities in page #%d\n", len(response.Entities), pageCount)
        pageCount += 1
    }
}

func queryEntity(client *aztables.Client) {
    filter := fmt.Sprintf("PartitionKey eq '%v' or RowKey eq '%v'", "pk001", "rk001")
    options := &aztables.ListEntitiesOptions{
        Filter: &filter,
        Select: to.StringPtr("RowKey,Price,Inventory,ProductName,OnSale"),
        Top:    to.Int32Ptr(15),
    }

    pager := client.List(options)
    for pager.More() {
        resp, err := pager.NextPage(context.Background())
        if err != nil {
            panic(err)
        }
        for _, entity := range resp.Entities {
            var myEntity PurchasedEntity
            err = json.Unmarshal(entity, &myEntity)
            if err != nil {
                panic(err)
            }
            fmt.Println("Return custom type [PurchasedEntity]")
            fmt.Printf("Price: %v; ProductName: %v; OnSale: %v\n", myEntity.Price, myEntity.ProductName, myEntity.OnSale)
        }
    }
}

func deleteEntity(client *aztables.Client) {
    _, err := client.DeleteEntity(context.TODO(), "pk001", "rk001", nil)
    if err != nil {
        panic(err)
    }
}

func deleteTable(client *aztables.Client) {
    _, err := client.Delete(context.TODO(), nil)
    if err != nil {
        panic(err)
    }
}

func main() {
    fmt.Println("Authenticating...")
    client := getClient()

    fmt.Println("Creating a table...")
    createTable(client)

    fmt.Println("Adding an entity to the table...")
    addEntity(client)

    fmt.Println("Calculating all entities in the table...")
    listEntities(client)

    fmt.Println("Querying a specific entity...")
    queryEntity(client)

    fmt.Println("Deleting an entity...")
    deleteEntity(client)

    fmt.Println("Deleting a table...")
    deleteTable(client)
}
```
if err != nil {
```go
// Define the table entity as a custom type
type InventoryEntity struct {
    aztables.Entity
    Price       float32
    Inventory   int32
    ProductName string
    OnSale      bool
}

// Define the entity values
myEntity := InventoryEntity{
    Entity: aztables.Entity{
        PartitionKey: "pk001",
        RowKey:       "rk001",
    },
    Price:       3.99,
    Inventory:   20,
    ProductName: "Markers",
    OnSale:      false,
}

// Marshal the entity to JSON
marshalled, err := json.Marshal(myEntity)
if err != nil {
    panic(err)
}

// Add the entity to the table
_, err = client.AddEntity(context.TODO(), marshalled, nil) // needs Storage Table Data Contributor role
if err != nil {
    panic(err)
}
```
if err != nil {
```go
// Define the new custom type
type PurchasedEntity struct {
    aztables.Entity
    Price       float32
    ProductName string
    OnSale      bool
}

// Define the query filter and options
filter := fmt.Sprintf("PartitionKey eq '%v' or RowKey eq '%v'", "pk001", "rk001")
options := &aztables.ListEntitiesOptions{
    Filter: &filter,
    Select: to.StringPtr("RowKey,Price,Inventory,ProductName,OnSale"),
    Top:    to.Int32Ptr(15),
}

// Query the table for the entity
pager := client.List(options)
for pager.More() {
    resp, err := pager.NextPage(context.Background())
    if err != nil {
        panic(err)
    }
    for _, entity := range resp.Entities {
        var myEntity PurchasedEntity
        err = json.Unmarshal(entity, &myEntity)
        if err != nil {
            panic(err)
        }
        fmt.Println("Return custom type [PurchasedEntity]")
        fmt.Printf("Price: %v; ProductName: %v; OnSale: %v\n", myEntity.Price, myEntity.ProductName, myEntity.OnSale)
    }
}
```
cost-management-billing Ea Portal Enrollment Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-enrollment-invoices.md
Title: Azure Enterprise enrollment invoices
description: This article explains how to manage and act on your Azure Enterprise invoice. Previously updated : 12/16/2022 Last updated : 07/29/2023
A customer's billing frequency is annual, quarterly, or monthly. The billing cyc
The change becomes effective at the end of the current billing quarter.
-If an Amendment M503 is signed, you can move any agreement from any frequency to monthly billing.

### Request an invoice copy

If you're an indirect enterprise agreement customer, contact your partner to request a copy of your invoice.
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-integration-runtime.md
The Integration Runtime (IR) is the compute infrastructure used by Azure Data Factory and Synapse pipelines to provide data integration capabilities across different network environments. For more information about IR, see [Integration runtime](concepts-integration-runtime.md).
Azure IR provides a fully managed compute to natively perform data movement and dispatch data transformation activities to compute services like HDInsight. It's hosted in the Azure environment and supports connecting to resources in a public network environment with publicly accessible endpoints.
This document introduces how you can create and configure Azure Integration Runtime. [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Default Azure IR
By default, each data factory or Synapse workspace has an Azure IR in the backend that supports operations on cloud data stores and compute services in public network. The location of that Azure IR is autoresolve. If the **connectVia** property isn't specified in the linked service definition, the default Azure IR is used. You only need to explicitly create an Azure IR when you would like to explicitly define the location of the IR, or if you would like to virtually group the activity executions on different IRs for management purposes.
## Create Azure IR To create and set up an Azure IR, you can use the following procedures. ### Create an Azure IR via Azure PowerShell
-Integration Runtime can be created using the **Set-AzDataFactoryV2IntegrationRuntime** PowerShell cmdlet. To create an Azure IR, you specify the name, location, and type to the command. Here is a sample command to create an Azure IR with location set to "West Europe":
+Integration Runtime can be created using the **Set-AzDataFactoryV2IntegrationRuntime** PowerShell cmdlet. To create an Azure IR, you specify the name, location, and type to the command. Here's a sample command to create an Azure IR with location set to "West Europe":
```powershell Set-AzDataFactoryV2IntegrationRuntime -DataFactoryName "SampleV2DataFactory1" -Name "MySampleAzureIR" -ResourceGroupName "ADFV2SampleRG" -Type Managed -Location "West Europe" ```
-For Azure IR, the type must be set to **Managed**. You do not need to specify compute details because it is fully managed elastically in cloud. Specify compute details like node size and node count when you would like to create Azure-SSIS IR. For more information, see [Create and Configure Azure-SSIS IR](create-azure-ssis-integration-runtime.md).
+For Azure IR, the type must be set to **Managed**. You don't need to specify compute details because it's fully managed elastically in cloud. Specify compute details like node size and node count when you would like to create Azure-SSIS IR. For more information, see [Create and Configure Azure-SSIS IR](create-azure-ssis-integration-runtime.md).
You can configure an existing Azure IR to change its location using the Set-AzDataFactoryV2IntegrationRuntime PowerShell cmdlet. For more information about the location of an Azure IR, see [Introduction to integration runtime](concepts-integration-runtime.md).
Use the following steps to create an Azure IR using UI.
:::image type="content" source="media/create-azure-integration-runtime/new-azure-integration-runtime.png" alt-text="Screenshot that shows create an Azure integration runtime."::: 1. Enter a name for your Azure IR, and select **Create**. :::image type="content" source="media/create-azure-integration-runtime/create-azure-integration-runtime.png" alt-text="Screenshot that shows the final step to create the Azure integration runtime.":::
-1. You'll see a pop-up notification when the creation completes. On the **Integration runtimes** page, make sure that you see the newly created IR in the list.
+1. You see a pop-up notification when the creation completes. On the **Integration runtimes** page, make sure that you see the newly created IR in the list.
:::image type="content" source="media/create-azure-integration-runtime/integration-runtime-in-the-list.png" alt-text="Screenshot showing the Azure integration runtime in the list.":::
-
+1. You can repair the Azure integration runtime by selecting the **Repair** button if its status is shown as **Limited**.
+ > [!NOTE]
+ > If you want to enable managed virtual network on Azure IR, please see [How to enable managed virtual network](managed-virtual-network-private-endpoint.md)
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
This article describes how you can create and configure a self-hosted IR.
## Considerations for using a self-hosted IR

- You can use a single self-hosted integration runtime for multiple on-premises data sources. You can also share it with another data factory within the same Azure Active Directory (Azure AD) tenant. For more information, see [Sharing a self-hosted integration runtime](./create-shared-self-hosted-integration-runtime-powershell.md).
- You can install only one instance of a self-hosted integration runtime on any single machine. If you have two data factories that need to access on-premises data sources, either use the [self-hosted IR sharing feature](./create-shared-self-hosted-integration-runtime-powershell.md) to share the self-hosted IR, or install the self-hosted IR on two on-premises computers, one for each data factory or Synapse workspace. Synapse workspace doesn't support Integration Runtime Sharing.
- The self-hosted integration runtime doesn't need to be on the same machine as the data source. However, having the self-hosted integration runtime close to the data source reduces the time for the self-hosted integration runtime to connect to the data source. We recommend that you install the self-hosted integration runtime on a machine that differs from the one that hosts the on-premises data source. When the self-hosted integration runtime and data source are on different machines, the self-hosted integration runtime doesn't compete with the data source for resources.
- You can have multiple self-hosted integration runtimes on different machines that connect to the same on-premises data source. For example, if you have two self-hosted integration runtimes that serve two data factories, the same on-premises data source can be registered with both data factories.
- Use a self-hosted integration runtime to support data integration within an Azure virtual network.
This article describes how you can create and configure a self-hosted IR.
When you move data between on-premises and the cloud, the activity uses a self-hosted integration runtime to transfer the data between an on-premises data source and the cloud.
-Here is a high-level summary of the data-flow steps for copying with a self-hosted IR:
+Here's a high-level summary of the data-flow steps for copying with a self-hosted IR:
:::image type="content" source="media/create-self-hosted-integration-runtime/high-level-overview.png" alt-text="The high-level overview of data flow":::
For some cloud databases, such as Azure SQL Database and Azure Data Lake, you mi
> [!NOTE]
> Don't install both the Integration Runtime and the Power BI gateway on the same machine, because the Integration Runtime uses port 443, which is one of the main ports also used by the Power BI gateway.
+### Self-contained interactive authoring (preview)
+In order to perform interactive authoring actions such as data preview and connection testing, the self-hosted integration runtime requires a connection to Azure Relay. If the connection isn't established, there are two possible solutions to ensure uninterrupted functionality. The first option is to add the Azure Relay endpoints to your firewall's allowlist (see [Get URL of Azure Relay](#get-url-of-azure-relay)). Alternatively, you can enable self-contained interactive authoring.
+
+> [!NOTE]
+> If the self-hosted integration runtime fails to establish a connection to Azure Relay, its status will be marked as "limited".
+
+ :::image type="content" source="media/create-self-hosted-integration-runtime/self-contained-interactive-authoring.png" alt-text="Screenshot of self-contained interactive authoring.":::
+
+> [!NOTE]
+> While self-contained interactive authoring is enabled, all interactive authoring traffic will be routed exclusively through this functionality, bypassing Azure Relay. The traffic will only be redirected back to Azure Relay once you choose to disable this feature.
+
+> [!NOTE]
+> Both "Get IP" and "Send log" are not supported when self-contained interactive authoring is enabled.
### Get URL of Azure Relay

One required domain and port that need to be put in the allowlist of your firewall is for the communication to Azure Relay. The self-hosted integration runtime uses it for interactive authoring such as test connection, browse folder list and table list, get schema, and preview data. If you don't want to allow **.servicebus.windows.net** and would like to have more specific URLs, then you can see all the FQDNs that are required by your self-hosted integration runtime from the service portal. Follow these steps:
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
Enabling Private Link for each of the preceding communication channels offers th
- The command communications between the self-hosted IR and Data Factory can be performed securely in a private network environment. The traffic between the self-hosted IR and Data Factory goes through Private Link.
- **Not currently supported**:
  - Interactive authoring that uses a self-hosted IR, such as test connection, browse folder list and table list, get schema, and preview data, goes through Private Link.

    Note that the traffic goes through Private Link if self-contained interactive authoring is enabled. See [Self-contained Interactive Authoring](create-self-hosted-integration-runtime.md#self-contained-interactive-authoring-preview).
+
+ > [!NOTE]
+ > Both "Get IP" and "Send log" are not supported when self-contained interactive authoring is enabled.
+ - The new version of the self-hosted IR that can be automatically downloaded from Microsoft Download Center if you enable auto-update isn't supported at this time. For functionality that isn't currently supported, you need to configure the previously mentioned domain and port in the virtual network or your corporate firewall.
If you don't have an existing virtual network to use with your private endpoint
| Resource group | Select a resource group for your virtual network. | | **Instance details** | | | Name | Enter a name for your virtual network. |
- | Region | *Important:* Select the same region your private endpoint will use. |
+ | Region | *Important:* Select the same region your private endpoint uses. |
1. Select the **IP Addresses** tab or select **Next: IP Addresses** at the bottom of the page.
data-factory Monitor Managed Virtual Network Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-managed-virtual-network-integration-runtime.md
In this scenario, we recommend that you increase the allocated compute resources
### Intermittent activity execution
-If you notice that the available capacity percentage fluctuates between low and high within a specific time period, it's likely due to the intermittent execution of your activities. That is, the TTL period that you configured is shorter than the interval between your activities. This problem can have a significant impact on the performance of your workflow and can increase costs, because we charge for the warm-up time of the compute for up to 2 minutes.
-
-To address this problem, there are two possible solutions:
--- Queue more activities to maintain a consistent workload and utilize the available compute resources more effectively. By keeping the compute continuously engaged, you can avoid the warm-up time and achieve better performance.-- Consider enlarging the TTL period to align with the interval between your activities. This approach keeps the compute resources available for a longer duration, which reduces the frequency of warm-up periods and optimizes cost efficiency.
+If you notice that the Available Capacity Percentage fluctuates between low and high within a specific time period, it's likely due to the intermittent execution of your activities, where the Time-To-Live (TTL) period you have configured is shorter than the interval between your activities. This can have a significant impact on the performance of your workflow.
+To address this issue, there are two possible solutions. First, you can queue more activities to maintain a consistent workload and utilize the available compute resources more effectively. By keeping the compute continuously engaged, you can avoid the warm-up time and achieve better performance.
+Alternatively, you can consider enlarging the TTL period to align with the interval between your activities. This ensures that the compute resources remain available for a longer duration, reducing the frequency of warm-up periods and optimizing cost-efficiency.
By implementing either of these solutions, you can enhance the performance of your workflow, minimize cost implications, and ensure a smoother execution of your intermittent activities.
data-factory Solution Template Migration S3 Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-migration-s3-azure.md
Last updated 04/12/2023
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
Use the templates to migrate petabytes of data consisting of hundreds of millions of files from Amazon S3 to Azure Data Lake Storage Gen2.
> [!NOTE] > If you want to copy small data volume from AWS S3 to Azure (for example, less than 10 TB), it's more efficient and easy to use the [Azure Data Factory Copy Data tool](copy-data-tool.md). The template that's described in this article is more than what you need.
The template contains two parameters:
### For the template to copy changed files only from Amazon S3 to Azure Data Lake Storage Gen2
This template (*template name: copy delta data from AWS S3 to Azure Data Lake Storage Gen2*) uses LastModifiedTime of each file to copy the new or updated files only from AWS S3 to Azure. Be aware that if your files or folders have already been time partitioned with timeslice information as part of the file or folder name on AWS S3 (for example, /yyyy/mm/dd/file.csv), you can go to this [tutorial](tutorial-incremental-copy-partitioned-file-name-copy-data-tool.md) to get the more performant approach for incrementally loading new files.

This template assumes that you have written a partition list in an external control table in Azure SQL Database. It uses a *Lookup* activity to retrieve the partition list from the external control table, iterates over each partition, and makes each ADF copy job copy one partition at a time. When each copy job starts to copy the files from AWS S3, it relies on the LastModifiedTime property to identify and copy the new or updated files only. Once any copy job is completed, it uses a *Stored Procedure* activity to update the status of copying each partition in the control table. The template contains seven activities:
The template contains two parameters:
## How to use these two solution templates
### For the template to migrate historical data from Amazon S3 to Azure Data Lake Storage Gen2

1. Create a control table in Azure SQL Database to store the partition list of AWS S3.
   > [!NOTE]
   > The table name is s3_partition_control_table.
The template contains two parameters:
```sql
CREATE TABLE [dbo].[s3_partition_control_table](
    [PartitionPrefix] [varchar](255) NULL,
    [SuccessOrFailure] [bit] NULL
)

INSERT INTO s3_partition_control_table (PartitionPrefix, SuccessOrFailure)
The template contains two parameters:
('e', 0); ```
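The *Lookup* activity described above retrieves the partition list from this control table. The following is a minimal SQL sketch of the kind of query it might use, assuming that rows with `SuccessOrFailure = 0` mark partitions that still need to be copied:

```sql
-- Hedged sketch: select the partitions that haven't been copied successfully yet,
-- so each one can be handed to a copy job.
SELECT PartitionPrefix
FROM s3_partition_control_table
WHERE SuccessOrFailure = 0;
```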
2. Create a Stored Procedure on the same Azure SQL Database for the control table.
   > [!NOTE]
   > The name of the Stored Procedure is sp_update_partition_success. It will be invoked by SqlServerStoredProcedure activity in your ADF pipeline.
The template contains two parameters:
CREATE PROCEDURE [dbo].[sp_update_partition_success] @PartPrefix varchar(255)
AS
BEGIN
    UPDATE s3_partition_control_table
    SET [SuccessOrFailure] = 1 WHERE [PartitionPrefix] = @PartPrefix
END
GO
```
The template contains two parameters:
4. Select **Use this template**. :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure-2.png" alt-text="Screenshot that highlights the Use this template button.":::
5. You'll see that the two pipelines and three datasets were created, as shown in the following example:

   :::image type="content" source="media/solution-template-migration-s3-azure/historical-migration-s3-azure-3.png" alt-text="Screenshot that shows the two pipelines and three datasets that were created by using the template.":::
The template contains two parameters:
### For the template to copy changed files only from Amazon S3 to Azure Data Lake Storage Gen2
1. Create a control table in Azure SQL Database to store the partition list of AWS S3.
   > [!NOTE]
   > The table name is s3_partition_delta_control_table.
The template contains two parameters:
```sql
CREATE TABLE [dbo].[s3_partition_delta_control_table](
    [PartitionPrefix] [varchar](255) NULL,
    [JobRunTime] [datetime] NULL,
    [SuccessOrFailure] [bit] NULL
)
INSERT INTO s3_partition_delta_control_table (PartitionPrefix, JobRunTime, SuccessOrFailure) VALUES
The template contains two parameters:
('e','1/1/2019 12:00:00 AM',1); ```
2. Create a Stored Procedure on the same Azure SQL Database for the control table.
   > [!NOTE]
   > The name of the Stored Procedure is sp_insert_partition_JobRunTime_success. It will be invoked by SqlServerStoredProcedure activity in your ADF pipeline.

   ```sql
   CREATE PROCEDURE [dbo].[sp_insert_partition_JobRunTime_success] @PartPrefix varchar(255), @JobRunTime datetime, @SuccessOrFailure bit
   AS
   BEGIN
       INSERT INTO s3_partition_delta_control_table (PartitionPrefix, JobRunTime, SuccessOrFailure)
       VALUES
       (@PartPrefix,@JobRunTime,@SuccessOrFailure)
   END
   GO
   ```

3. Go to the **Copy delta data from AWS S3 to Azure Data Lake Storage Gen2** template. Input the connections to your external control table, AWS S3 as the data source store, and Azure Data Lake Storage Gen2 as the destination store. Be aware that the external control table and the stored procedure reference the same connection.

   :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-1.png" alt-text="Create a new connection":::
The template contains two parameters:
4. Select **Use this template**. :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-2.png" alt-text="Use this template":::
5. You'll see that the two pipelines and three datasets were created, as shown in the following example:

   :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-3.png" alt-text="Review the pipeline":::
6. Go to the "DeltaCopyFromS3" pipeline and select **Debug**, and enter the **Parameters**. Then, select **Finish**.
:::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-4.png" alt-text="Click **Debug**":::
The template contains two parameters:
8. You can also check the results from the control table by a query *"select * from s3_partition_delta_control_table"*, you will see the output similar to the following example: :::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-6.png" alt-text="Screenshot that shows the results from the control table after you run the query.":::
-
+ ## Next steps - [Copy files from multiple containers](solution-template-copy-files-multiple-containers.md)
data-factory Tutorial Incremental Copy Multiple Tables Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-multiple-tables-powershell.md
Last updated 09/26/2022
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
In this tutorial, you create an Azure Data Factory with a pipeline that loads delta data from multiple tables in a SQL Server database to Azure SQL Database.
You perform the following steps in this tutorial:
You perform the following steps in this tutorial:
> * Prepare source and destination data stores. > * Create a data factory. > * Create a self-hosted integration runtime.
-> * Install the integration runtime.
-> * Create linked services.
+> * Install the integration runtime.
+> * Create linked services.
> * Create source, sink, and watermark datasets. > * Create, run, and monitor a pipeline. > * Review the results.
You perform the following steps in this tutorial:
> * Review the final results. ## Overview
-Here are the important steps to create this solution:
+Here are the important steps to create this solution:
1. **Select the watermark column**.
- Select one column for each table in the source data store, which you can identify the new or updated records for every run. Normally, the data in this selected column (for example, last_modify_time or ID) keeps increasing when rows are created or updated. The maximum value in this column is used as a watermark.
+ Select one column for each table in the source data store, which you can identify the new or updated records for every run. Normally, the data in this selected column (for example, last_modify_time or ID) keeps increasing when rows are created or updated. The maximum value in this column is used as a watermark.
2. **Prepare a data store to store the watermark value**.
- In this tutorial, you store the watermark value in a SQL database.
+ In this tutorial, you store the watermark value in a SQL database.
3. **Create a pipeline with the following activities**:
-
- a. Create a ForEach activity that iterates through a list of source table names that is passed as a parameter to the pipeline. For each source table, it invokes the following activities to perform delta loading for that table.
- b. Create two lookup activities. Use the first Lookup activity to retrieve the last watermark value. Use the second Lookup activity to retrieve the new watermark value. These watermark values are passed to the Copy activity.
+ 1. Create a ForEach activity that iterates through a list of source table names that is passed as a parameter to the pipeline. For each source table, it invokes the following activities to perform delta loading for that table.
+
+ 1. Create two lookup activities. Use the first Lookup activity to retrieve the last watermark value. Use the second Lookup activity to retrieve the new watermark value. These watermark values are passed to the Copy activity.
- c. Create a Copy activity that copies rows from the source data store with the value of the watermark column greater than the old watermark value and less than or equal to the new watermark value. Then, it copies the delta data from the source data store to Azure Blob storage as a new file.
+ 1. Create a Copy activity that copies rows from the source data store with the value of the watermark column greater than the old watermark value and less than or equal to the new watermark value. Then, it copies the delta data from the source data store to Azure Blob storage as a new file.
- d. Create a StoredProcedure activity that updates the watermark value for the pipeline that runs next time.
+ 1. Create a StoredProcedure activity that updates the watermark value for the pipeline that runs next time.
- Here is the high-level solution diagram:
+ Here is the high-level solution diagram:
:::image type="content" source="media/tutorial-incremental-copy-multiple-tables-powershell/high-level-solution-diagram.png" alt-text="Incrementally load data":::
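To make the per-table delta load in step 3 concrete, here is a minimal SQL sketch of what the two Lookup activities, the Copy activity's source query, and the StoredProcedure activity do for one source table. The table, watermark table, and stored procedure names are the ones you create later in this tutorial; the literal date values are examples taken from the sample data, and at run time the pipeline passes the two lookup outputs into the copy query instead.

```sql
-- Lookup activity 1: get the old (last stored) watermark value for this table.
SELECT WatermarkValue
FROM watermarktable
WHERE TableName = 'customer_table';

-- Lookup activity 2: get the new watermark value from the source table.
SELECT MAX(LastModifytime) AS NewWatermarkValue
FROM customer_table;

-- Copy activity source query: copy only rows changed between the two watermarks.
SELECT *
FROM customer_table
WHERE LastModifytime > '1/1/2010 12:00:00 AM'
  AND LastModifytime <= '9/5/2017 8:06:00 AM';

-- StoredProcedure activity: persist the new watermark for the next run.
EXEC usp_write_watermark '9/5/2017 8:06:00 AM', 'customer_table';
```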
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Prerequisites
* **SQL Server**. You use a SQL Server database as the source data store in this tutorial.
* **Azure SQL Database**. You use a database in Azure SQL Database as the sink data store. If you don't have a SQL database, see [Create a database in Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) for steps to create one.
### Create source tables in your SQL Server database
If you don't have an Azure subscription, create a [free](https://azure.microsoft
3. Run the following SQL command against your database to create tables named `customer_table` and `project_table`:
- ```sql
+ ```sql
create table customer_table ( PersonID int, Name varchar(255), LastModifytime datetime );
-
+ create table project_table ( Project varchar(255), Creationtime datetime );
-
+ INSERT INTO customer_table (PersonID, Name, LastModifytime) VALUES
If you don't have an Azure subscription, create a [free](https://azure.microsoft
(3, 'Alice','9/3/2017 2:36:00 AM'), (4, 'Andy','9/4/2017 3:21:00 AM'), (5, 'Anny','9/5/2017 8:06:00 AM');
-
+ INSERT INTO project_table (Project, Creationtime) VALUES ('project1','1/1/2015 0:00:00 AM'), ('project2','2/2/2016 1:23:00 AM'), ('project3','3/4/2017 5:16:00 AM');
- ```
+ ```
### Create destination tables in your Azure SQL Database
If you don't have an Azure subscription, create a [free](https://azure.microsoft
2. In **Server Explorer (SSMS)** or in the **Connections pane (Azure Data Studio)**, right-click the database and choose **New Query**.
-3. Run the following SQL command against your database to create tables named `customer_table` and `project_table`:
+3. Run the following SQL command against your database to create tables named `customer_table` and `project_table`:
- ```sql
+ ```sql
create table customer_table ( PersonID int, Name varchar(255), LastModifytime datetime );
-
+ create table project_table ( Project varchar(255), Creationtime datetime );
- ```
+ ```
### Create another table in Azure SQL Database to store the high watermark value
-1. Run the following SQL command against your database to create a table named `watermarktable` to store the watermark value:
-
- ```sql
+1. Run the following SQL command against your database to create a table named `watermarktable` to store the watermark value:
+
+ ```sql
create table watermarktable (
-
+ TableName varchar(255), WatermarkValue datetime, );
- ```
-2. Insert initial watermark values for both source tables into the watermark table.
+ ```
- ```sql
+2. Insert initial watermark values for both source tables into the watermark table.
+ ```sql
INSERT INTO watermarktable
- VALUES
+ VALUES
('customer_table','1/1/2010 12:00:00 AM'), ('project_table','1/1/2010 12:00:00 AM');
-
- ```
+ ```
-### Create a stored procedure in the Azure SQL Database
+### Create a stored procedure in the Azure SQL Database
-Run the following command to create a stored procedure in your database. This stored procedure updates the watermark value after every pipeline run.
+Run the following command to create a stored procedure in your database. This stored procedure updates the watermark value after every pipeline run.
```sql CREATE PROCEDURE usp_write_watermark @LastModifiedtime datetime, @TableName varchar(50)
AS
BEGIN UPDATE watermarktable
-SET [WatermarkValue] = @LastModifiedtime
+SET [WatermarkValue] = @LastModifiedtime
WHERE [TableName] = @TableName END
END
### Create data types and additional stored procedures in Azure SQL Database
-Run the following query to create two stored procedures and two data types in your database.
-They're used to merge the data from source tables into destination tables.
+Run the following query to create two stored procedures and two data types in your database.
+They're used to merge the data from source tables into destination tables.
In order to make the journey easy to start with, we directly use these stored procedures, passing the delta data in via a table variable and then merging it into the destination store. Be cautious: they aren't designed for a "large" number of delta rows (more than 100) to be stored in the table variable.

If you do need to merge a large number of delta rows into the destination store, we suggest that you use a copy activity to copy all the delta data into a temporary "staging" table in the destination store first, and then build your own stored procedure without a table variable to merge them from the "staging" table to the "final" table.
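For example, such a staging-table merge could look like the following minimal sketch. The staging table name `stage_customer_table` is hypothetical and assumed to have the same schema as the destination `customer_table` created earlier in this tutorial:

```sql
-- Hedged sketch: merge delta rows from a hypothetical staging table into the
-- destination table, then clear the staging table for the next pipeline run.
MERGE customer_table AS target
USING stage_customer_table AS source
ON target.PersonID = source.PersonID
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name,
               target.LastModifytime = source.LastModifytime
WHEN NOT MATCHED THEN
    INSERT (PersonID, Name, LastModifytime)
    VALUES (source.PersonID, source.Name, source.LastModifytime);

TRUNCATE TABLE stage_customer_table;
```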
```sql
Install the latest Azure PowerShell modules by following the instructions in [In
1. Define a variable for the resource group name that you use in PowerShell commands later. Copy the following command text to PowerShell, specify a name for the [Azure resource group](../azure-resource-manager/management/overview.md) in double quotation marks, and then run the command. An example is `"adfrg"`.
- ```powershell
- $resourceGroupName = "ADFTutorialResourceGroup";
- ```
+ ```powershell
+ $resourceGroupName = "ADFTutorialResourceGroup";
+ ```
If the resource group already exists, you might not want to overwrite it. Assign a different value to the `$resourceGroupName` variable, and run the command again.
-2. Define a variable for the location of the data factory.
+2. Define a variable for the location of the data factory.
- ```powershell
- $location = "East US"
- ```
-3. To create the Azure resource group, run the following command:
+ ```powershell
+ $location = "East US"
+ ```
- ```powershell
- New-AzResourceGroup $resourceGroupName $location
- ```
- If the resource group already exists, you might not want to overwrite it. Assign a different value to the `$resourceGroupName` variable, and run the command again.
+3. To create the Azure resource group, run the following command:
-4. Define a variable for the data factory name.
+ ```powershell
+ New-AzResourceGroup $resourceGroupName $location
+ ```
- > [!IMPORTANT]
- > Update the data factory name to make it globally unique. An example is ADFIncMultiCopyTutorialFactorySP1127.
+ If the resource group already exists, you might not want to overwrite it. Assign a different value to the `$resourceGroupName` variable, and run the command again.
+
+4. Define a variable for the data factory name.
- ```powershell
- $dataFactoryName = "ADFIncMultiCopyTutorialFactory";
- ```
-5. To create the data factory, run the following **Set-AzDataFactoryV2** cmdlet:
-
- ```powershell
- Set-AzDataFactoryV2 -ResourceGroupName $resourceGroupName -Location $location -Name $dataFactoryName
- ```
+ > [!IMPORTANT]
+ > Update the data factory name to make it globally unique. An example is ADFIncMultiCopyTutorialFactorySP1127.
+
+ ```powershell
+ $dataFactoryName = "ADFIncMultiCopyTutorialFactory";
+ ```
+
+5. To create the data factory, run the following **Set-AzDataFactoryV2** cmdlet:
+
+ ```powershell
+ Set-AzDataFactoryV2 -ResourceGroupName $resourceGroupName -Location $location -Name $dataFactoryName
+ ```
Note the following points: * The name of the data factory must be globally unique. If you receive the following error, change the name and try again:
- ```powershell
- Set-AzDataFactoryV2 : HTTP Status Code: Conflict
- Error Code: DataFactoryNameInUse
- Error Message: The specified resource name 'ADFIncMultiCopyTutorialFactory' is already in use. Resource names must be globally unique.
- ```
+ ```powershell
+ Set-AzDataFactoryV2 : HTTP Status Code: Conflict
+ Error Code: DataFactoryNameInUse
+ Error Message: The specified resource name 'ADFIncMultiCopyTutorialFactory' is already in use. Resource names must be globally unique.
+ ```
* To create Data Factory instances, the user account you use to sign in to Azure must be a member of the contributor or owner role, or an administrator of the Azure subscription. A quick way to check your role assignments is sketched after this list.
Note the following points:
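If you're not sure whether your account has one of those roles, the following sketch (using the Az.Resources module; the sign-in name is a placeholder) lists your role assignments:

```powershell
# Sketch: list role assignments for your account and look for Owner or Contributor.
# Replace the sign-in name with the account you use to sign in to Azure.
Get-AzRoleAssignment -SignInName "user@contoso.com" |
    Where-Object { $_.RoleDefinitionName -in @("Owner", "Contributor") } |
    Select-Object RoleDefinitionName, Scope
```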
## Create linked services
-You create linked services in a data factory to link your data stores and compute services to the data factory. In this section, you create linked services to your SQL Server database and your database in Azure SQL Database.
+You create linked services in a data factory to link your data stores and compute services to the data factory. In this section, you create linked services to your SQL Server database and your database in Azure SQL Database.
### Create the SQL Server linked service

In this step, you link your SQL Server database to the data factory.
-1. Create a JSON file named **SqlServerLinkedService.json** in the C:\ADFTutorials\IncCopyMultiTableTutorial folder (create the local folders if they don't already exist) with the following content. Select the right section based on the authentication you use to connect to SQL Server.
+1. Create a JSON file named **SqlServerLinkedService.json** in the C:\ADFTutorials\IncCopyMultiTableTutorial folder (create the local folders if they don't already exist) with the following content. Select the right section based on the authentication you use to connect to SQL Server.
- > [!IMPORTANT]
- > Select the right section based on the authentication you use to connect to SQL Server.
+ > [!IMPORTANT]
+ > Select the right section based on the authentication you use to connect to SQL Server.
- If you use SQL authentication, copy the following JSON definition:
+ If you use SQL authentication, copy the following JSON definition:
- ```json
- {
+ ```json
+ {
"name":"SqlServerLinkedService",
- "properties":{
- "annotations":[
-
+ "properties":{
+ "annotations":[
+ ], "type":"SqlServer",
- "typeProperties":{
+ "typeProperties":{
"connectionString":"integrated security=False;data source=<servername>;initial catalog=<database name>;user id=<username>;Password=<password>" },
- "connectVia":{
+ "connectVia":{
"referenceName":"<integration runtime name>", "type":"IntegrationRuntimeReference" } }
- }
- ```
+ }
+ ```
+ If you use Windows authentication, copy the following JSON definition:
- ```json
- {
+ ```json
+ {
"name":"SqlServerLinkedService",
- "properties":{
- "annotations":[
-
+ "properties":{
+ "annotations":[
+ ], "type":"SqlServer",
- "typeProperties":{
+ "typeProperties":{
"connectionString":"integrated security=True;data source=<servername>;initial catalog=<database name>", "userName":"<username> or <domain>\\<username>",
- "password":{
+ "password":{
"type":"SecureString", "value":"<password>" } },
- "connectVia":{
+ "connectVia":{
"referenceName":"<integration runtime name>", "type":"IntegrationRuntimeReference" } } }
- ```
+ ```
+ > [!IMPORTANT] > - Select the right section based on the authentication you use to connect to SQL Server. > - Replace &lt;integration runtime name> with the name of your integration runtime.
In this step, you link your SQL Server database to the data factory.
2. In PowerShell, run the following cmdlet to switch to the C:\ADFTutorials\IncCopyMultiTableTutorial folder.
- ```powershell
- Set-Location 'C:\ADFTutorials\IncCopyMultiTableTutorial'
- ```
+ ```powershell
+ Set-Location 'C:\ADFTutorials\IncCopyMultiTableTutorial'
+ ```
-3. Run the **Set-AzDataFactoryV2LinkedService** cmdlet to create the linked service AzureStorageLinkedService. In the following example, you pass values for the *ResourceGroupName* and *DataFactoryName* parameters:
+3. Run the **Set-AzDataFactoryV2LinkedService** cmdlet to create the linked service AzureStorageLinkedService. In the following example, you pass values for the *ResourceGroupName* and *DataFactoryName* parameters:
- ```powershell
- Set-AzDataFactoryV2LinkedService -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "SqlServerLinkedService" -File ".\SqlServerLinkedService.json"
- ```
+ ```powershell
+ Set-AzDataFactoryV2LinkedService -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "SqlServerLinkedService" -File ".\SqlServerLinkedService.json"
+ ```
- Here is the sample output:
+ Here is the sample output:
- ```console
- LinkedServiceName : SqlServerLinkedService
- ResourceGroupName : <ResourceGroupName>
- DataFactoryName : <DataFactoryName>
- Properties : Microsoft.Azure.Management.DataFactory.Models.SqlServerLinkedService
- ```
+ ```console
+ LinkedServiceName : SqlServerLinkedService
+ ResourceGroupName : <ResourceGroupName>
+ DataFactoryName : <DataFactoryName>
+ Properties : Microsoft.Azure.Management.DataFactory.Models.SqlServerLinkedService
+ ```
### Create the SQL Database linked service
-1. Create a JSON file named **AzureSQLDatabaseLinkedService.json** in C:\ADFTutorials\IncCopyMultiTableTutorial folder with the following content. (Create the folder ADF if it doesn't already exist.) Replace &lt;servername&gt;, &lt;database name&gt;, &lt;user name&gt;, and &lt;password&gt; with the name of your SQL Server database, name of your database, user name, and password before you save the file.
+1. Create a JSON file named **AzureSQLDatabaseLinkedService.json** in the C:\ADFTutorials\IncCopyMultiTableTutorial folder with the following content. (Create the folder if it doesn't already exist.) Replace &lt;servername&gt;, &lt;database name&gt;, &lt;user name&gt;, and &lt;password&gt; with your server name, database name, user name, and password before you save the file.
- ```json
- {
+ ```json
+ {
"name":"AzureSQLDatabaseLinkedService",
- "properties":{
- "annotations":[
-
+ "properties":{
+ "annotations":[
+ ], "type":"AzureSqlDatabase",
- "typeProperties":{
+ "typeProperties":{
"connectionString":"integrated security=False;encrypt=True;connection timeout=30;data source=<servername>.database.windows.net;initial catalog=<database name>;user id=<user name>;Password=<password>;" } } }
- ```
-2. In PowerShell, run the **Set-AzDataFactoryV2LinkedService** cmdlet to create the linked service AzureSQLDatabaseLinkedService.
+ ```
- ```powershell
- Set-AzDataFactoryV2LinkedService -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "AzureSQLDatabaseLinkedService" -File ".\AzureSQLDatabaseLinkedService.json"
- ```
+2. In PowerShell, run the **Set-AzDataFactoryV2LinkedService** cmdlet to create the linked service AzureSQLDatabaseLinkedService.
- Here is the sample output:
+ ```powershell
+ Set-AzDataFactoryV2LinkedService -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "AzureSQLDatabaseLinkedService" -File ".\AzureSQLDatabaseLinkedService.json"
+ ```
- ```console
- LinkedServiceName : AzureSQLDatabaseLinkedService
- ResourceGroupName : <ResourceGroupName>
- DataFactoryName : <DataFactoryName>
- Properties : Microsoft.Azure.Management.DataFactory.Models.AzureSqlDatabaseLinkedService
- ```
+ Here is the sample output:
+
+ ```console
+ LinkedServiceName : AzureSQLDatabaseLinkedService
+ ResourceGroupName : <ResourceGroupName>
+ DataFactoryName : <DataFactoryName>
+ Properties : Microsoft.Azure.Management.DataFactory.Models.AzureSqlDatabaseLinkedService
+ ```
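Optionally, confirm that both linked services now exist in the data factory; a minimal sketch:

```powershell
# Sketch: list the linked services in the data factory to confirm both were created.
Get-AzDataFactoryV2LinkedService -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName |
    Select-Object LinkedServiceName
```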
## Create datasets
In this step, you create datasets to represent the data source, the data destina
### Create a source dataset
-1. Create a JSON file named **SourceDataset.json** in the same folder with the following content:
+1. Create a JSON file named **SourceDataset.json** in the same folder with the following content:
- ```json
- {
+ ```json
+ {
"name":"SourceDataset",
- "properties":{
- "linkedServiceName":{
+ "properties":{
+ "linkedServiceName":{
"referenceName":"SqlServerLinkedService", "type":"LinkedServiceReference" },
- "annotations":[
-
+ "annotations":[
+ ], "type":"SqlServerTable",
- "schema":[
-
+ "schema":[
+ ] }
- }
-
- ```
+ }
+ ```
- The Copy activity in the pipeline uses a SQL query to load the data rather than load the entire table.
+    The Copy activity in the pipeline uses a SQL query to load the data rather than loading the entire table, as sketched after these steps.
2. Run the **Set-AzDataFactoryV2Dataset** cmdlet to create the dataset SourceDataset.
-
- ```powershell
- Set-AzDataFactoryV2Dataset -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "SourceDataset" -File ".\SourceDataset.json"
- ```
- Here is the sample output of the cmdlet:
-
- ```json
- DatasetName : SourceDataset
- ResourceGroupName : <ResourceGroupName>
- DataFactoryName : <DataFactoryName>
- Structure :
- Properties : Microsoft.Azure.Management.DataFactory.Models.SqlServerTableDataset
- ```
+ ```powershell
+ Set-AzDataFactoryV2Dataset -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "SourceDataset" -File ".\SourceDataset.json"
+ ```
+
+ Here is the sample output of the cmdlet:
+
+ ```output
+ DatasetName : SourceDataset
+ ResourceGroupName : <ResourceGroupName>
+ DataFactoryName : <DataFactoryName>
+ Structure :
+ Properties : Microsoft.Azure.Management.DataFactory.Models.SqlServerTableDataset
+ ```
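As noted in step 1, the Copy activity loads only the rows between two watermark values. The following sketch shows the shape of the per-table query, using customer_table and its LastModifytime column from this tutorial as an example; the two watermark values are placeholders that the Lookup activities supply at run time:

```sql
-- Sketch of the per-table query the Copy activity issues. The two watermark values
-- are placeholders filled in from the Lookup activities at run time.
SELECT *
FROM customer_table
WHERE LastModifytime > '<old watermark value>'
  AND LastModifytime <= '<new watermark value>';
```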
### Create a sink dataset
-1. Create a JSON file named **SinkDataset.json** in the same folder with the following content. The tableName element is set by the pipeline dynamically at runtime. The ForEach activity in the pipeline iterates through a list of table names and passes the table name to this dataset in each iteration.
+1. Create a JSON file named **SinkDataset.json** in the same folder with the following content. The tableName element is set by the pipeline dynamically at runtime. The ForEach activity in the pipeline iterates through a list of table names and passes the table name to this dataset in each iteration.
- ```json
- {
+ ```json
+ {
"name":"SinkDataset",
- "properties":{
- "linkedServiceName":{
+ "properties":{
+ "linkedServiceName":{
"referenceName":"AzureSQLDatabaseLinkedService", "type":"LinkedServiceReference" },
- "parameters":{
- "SinkTableName":{
+ "parameters":{
+ "SinkTableName":{
"type":"String" } },
- "annotations":[
-
+ "annotations":[
+ ], "type":"AzureSqlTable",
- "typeProperties":{
- "tableName":{
+ "typeProperties":{
+ "tableName":{
"value":"@dataset().SinkTableName", "type":"Expression" } } } }
- ```
+ ```
2. Run the **Set-AzDataFactoryV2Dataset** cmdlet to create the dataset SinkDataset.
-
- ```powershell
- Set-AzDataFactoryV2Dataset -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "SinkDataset" -File ".\SinkDataset.json"
- ```
+
+ ```powershell
+ Set-AzDataFactoryV2Dataset -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "SinkDataset" -File ".\SinkDataset.json"
+ ```
Here is the sample output of the cmdlet:
-
- ```json
- DatasetName : SinkDataset
- ResourceGroupName : <ResourceGroupName>
- DataFactoryName : <DataFactoryName>
- Structure :
- Properties : Microsoft.Azure.Management.DataFactory.Models.AzureSqlTableDataset
- ```
+
+ ```output
+ DatasetName : SinkDataset
+ ResourceGroupName : <ResourceGroupName>
+ DataFactoryName : <DataFactoryName>
+ Structure :
+ Properties : Microsoft.Azure.Management.DataFactory.Models.AzureSqlTableDataset
+ ```
### Create a dataset for a watermark
-In this step, you create a dataset for storing a high watermark value.
+In this step, you create a dataset for storing a high watermark value.
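The dataset points to a table named watermarktable in the destination database. If you didn't already create it as part of the prerequisites, a minimal definition consistent with the query output shown later in this tutorial looks like the following sketch (the column types and sizes are assumptions):

```sql
-- Sketch: a minimal watermark table. Column names match the query output shown later
-- in this tutorial; the exact types and sizes are assumptions.
CREATE TABLE watermarktable
(
    TableName varchar(255),
    WatermarkValue datetime
);
```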
-1. Create a JSON file named **WatermarkDataset.json** in the same folder with the following content:
+1. Create a JSON file named **WatermarkDataset.json** in the same folder with the following content:
- ```json
+ ```json
{
- "name": " WatermarkDataset ",
- "properties": {
- "type": "AzureSqlTable",
- "typeProperties": {
- "tableName": "watermarktable"
- },
- "linkedServiceName": {
- "referenceName": "AzureSQLDatabaseLinkedService",
- "type": "LinkedServiceReference"
- }
- }
- }
- ```
+        "name": "WatermarkDataset",
+ "properties": {
+ "type": "AzureSqlTable",
+ "typeProperties": {
+ "tableName": "watermarktable"
+ },
+ "linkedServiceName": {
+ "referenceName": "AzureSQLDatabaseLinkedService",
+ "type": "LinkedServiceReference"
+ }
+ }
+ }
+ ```
+ 2. Run the **Set-AzDataFactoryV2Dataset** cmdlet to create the dataset WatermarkDataset.
-
- ```powershell
- Set-AzDataFactoryV2Dataset -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "WatermarkDataset" -File ".\WatermarkDataset.json"
- ```
- Here is the sample output of the cmdlet:
-
- ```json
- DatasetName : WatermarkDataset
- ResourceGroupName : <ResourceGroupName>
- DataFactoryName : <DataFactoryName>
- Structure :
- Properties : Microsoft.Azure.Management.DataFactory.Models.AzureSqlTableDataset
- ```
+ ```powershell
+ Set-AzDataFactoryV2Dataset -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "WatermarkDataset" -File ".\WatermarkDataset.json"
+ ```
+
+ Here is the sample output of the cmdlet:
+
+ ```output
+ DatasetName : WatermarkDataset
+ ResourceGroupName : <ResourceGroupName>
+ DataFactoryName : <DataFactoryName>
+ Structure :
+ Properties : Microsoft.Azure.Management.DataFactory.Models.AzureSqlTableDataset
+ ```
## Create a pipeline
-The pipeline takes a list of table names as a parameter. The **ForEach activity** iterates through the list of table names and performs the following operations:
+The pipeline takes a list of table names as a parameter. The **ForEach activity** iterates through the list of table names and performs the following operations:
1. Use the **Lookup activity** to retrieve the old watermark value (the initial value or the one that was used in the last iteration).
The pipeline takes a list of table names as a parameter. The **ForEach activity*
3. Use the **Copy activity** to copy data between these two watermark values from the source database to the destination database.
-4. Use the **StoredProcedure activity** to update the old watermark value to be used in the first step of the next iteration.
+4. Use the **StoredProcedure activity** to update the old watermark value to be used in the first step of the next iteration.
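The stored procedure in the last step, usp_write_watermark, records the new watermark for the table that was just copied. If you didn't create it as part of the prerequisites, a minimal sketch that matches the parameter names passed by the pipeline looks like this:

```sql
-- Sketch: update the stored watermark for a table after a successful copy.
-- Parameter names match those passed by the StoredProcedure activity in the pipeline.
CREATE PROCEDURE usp_write_watermark @LastModifiedtime datetime, @TableName varchar(50)
AS
BEGIN
    UPDATE watermarktable
    SET WatermarkValue = @LastModifiedtime
    WHERE TableName = @TableName;
END
```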
### Create the pipeline
-1. Create a JSON file named **IncrementalCopyPipeline.json** in the same folder with the following content:
+1. Create a JSON file named **IncrementalCopyPipeline.json** in the same folder with the following content:
- ```json
- {
+ ```json
+ {
"name":"IncrementalCopyPipeline",
- "properties":{
- "activities":[
- {
+ "properties":{
+ "activities":[
+ {
"name":"IterateSQLTables", "type":"ForEach",
- "dependsOn":[
-
+ "dependsOn":[
+ ],
- "userProperties":[
-
+ "userProperties":[
+ ],
- "typeProperties":{
- "items":{
+ "typeProperties":{
+ "items":{
"value":"@pipeline().parameters.tableList", "type":"Expression" }, "isSequential":false,
- "activities":[
- {
+ "activities":[
+ {
"name":"LookupOldWaterMarkActivity", "type":"Lookup",
- "dependsOn":[
-
+ "dependsOn":[
+ ],
- "policy":{
+ "policy":{
"timeout":"7.00:00:00", "retry":0, "retryIntervalInSeconds":30, "secureOutput":false, "secureInput":false },
- "userProperties":[
-
+ "userProperties":[
+ ],
- "typeProperties":{
- "source":{
+ "typeProperties":{
+ "source":{
"type":"AzureSqlSource",
- "sqlReaderQuery":{
+ "sqlReaderQuery":{
"value":"select * from watermarktable where TableName = '@{item().TABLE_NAME}'", "type":"Expression" } },
- "dataset":{
+ "dataset":{
"referenceName":"WatermarkDataset", "type":"DatasetReference" } } },
- {
+ {
"name":"LookupNewWaterMarkActivity", "type":"Lookup",
- "dependsOn":[
-
+ "dependsOn":[
+ ],
- "policy":{
+ "policy":{
"timeout":"7.00:00:00", "retry":0, "retryIntervalInSeconds":30, "secureOutput":false, "secureInput":false },
- "userProperties":[
-
+ "userProperties":[
+ ],
- "typeProperties":{
- "source":{
+ "typeProperties":{
+ "source":{
"type":"SqlServerSource",
- "sqlReaderQuery":{
+ "sqlReaderQuery":{
"value":"select MAX(@{item().WaterMark_Column}) as NewWatermarkvalue from @{item().TABLE_NAME}", "type":"Expression" } },
- "dataset":{
+ "dataset":{
"referenceName":"SourceDataset", "type":"DatasetReference" }, "firstRowOnly":true } },
- {
+ {
"name":"IncrementalCopyActivity", "type":"Copy",
- "dependsOn":[
- {
+ "dependsOn":[
+ {
"activity":"LookupOldWaterMarkActivity",
- "dependencyConditions":[
+ "dependencyConditions":[
"Succeeded" ] },
- {
+ {
"activity":"LookupNewWaterMarkActivity",
- "dependencyConditions":[
+ "dependencyConditions":[
"Succeeded" ] } ],
- "policy":{
+ "policy":{
"timeout":"7.00:00:00", "retry":0, "retryIntervalInSeconds":30, "secureOutput":false, "secureInput":false },
- "userProperties":[
-
+ "userProperties":[
+ ],
- "typeProperties":{
- "source":{
+ "typeProperties":{
+ "source":{
"type":"SqlServerSource",
- "sqlReaderQuery":{
+ "sqlReaderQuery":{
"value":"select * from @{item().TABLE_NAME} where @{item().WaterMark_Column} > '@{activity('LookupOldWaterMarkActivity').output.firstRow.WatermarkValue}' and @{item().WaterMark_Column} <= '@{activity('LookupNewWaterMarkActivity').output.firstRow.NewWatermarkvalue}'", "type":"Expression" } },
- "sink":{
+ "sink":{
"type":"AzureSqlSink",
- "sqlWriterStoredProcedureName":{
+ "sqlWriterStoredProcedureName":{
"value":"@{item().StoredProcedureNameForMergeOperation}", "type":"Expression" },
- "sqlWriterTableType":{
+ "sqlWriterTableType":{
"value":"@{item().TableType}", "type":"Expression" },
- "storedProcedureTableTypeParameterName":{
+ "storedProcedureTableTypeParameterName":{
"value":"@{item().TABLE_NAME}", "type":"Expression" },
The pipeline takes a list of table names as a parameter. The **ForEach activity*
}, "enableStaging":false },
- "inputs":[
- {
+ "inputs":[
+ {
"referenceName":"SourceDataset", "type":"DatasetReference" } ],
- "outputs":[
- {
+ "outputs":[
+ {
"referenceName":"SinkDataset", "type":"DatasetReference",
- "parameters":{
- "SinkTableName":{
+ "parameters":{
+ "SinkTableName":{
"value":"@{item().TABLE_NAME}", "type":"Expression" }
The pipeline takes a list of table names as a parameter. The **ForEach activity*
} ] },
- {
+ {
"name":"StoredProceduretoWriteWatermarkActivity", "type":"SqlServerStoredProcedure",
- "dependsOn":[
- {
+ "dependsOn":[
+ {
"activity":"IncrementalCopyActivity",
- "dependencyConditions":[
+ "dependencyConditions":[
"Succeeded" ] } ],
- "policy":{
+ "policy":{
"timeout":"7.00:00:00", "retry":0, "retryIntervalInSeconds":30, "secureOutput":false, "secureInput":false },
- "userProperties":[
-
+ "userProperties":[
+ ],
- "typeProperties":{
+ "typeProperties":{
"storedProcedureName":"[dbo].[usp_write_watermark]",
- "storedProcedureParameters":{
- "LastModifiedtime":{
- "value":{
+ "storedProcedureParameters":{
+ "LastModifiedtime":{
+ "value":{
"value":"@{activity('LookupNewWaterMarkActivity').output.firstRow.NewWatermarkvalue}", "type":"Expression" }, "type":"DateTime" },
- "TableName":{
- "value":{
+ "TableName":{
+ "value":{
"value":"@{activity('LookupOldWaterMarkActivity').output.firstRow.TableName}", "type":"Expression" },
The pipeline takes a list of table names as a parameter. The **ForEach activity*
} } },
- "linkedServiceName":{
+ "linkedServiceName":{
"referenceName":"AzureSQLDatabaseLinkedService", "type":"LinkedServiceReference" }
The pipeline takes a list of table names as a parameter. The **ForEach activity*
} } ],
- "parameters":{
- "tableList":{
+ "parameters":{
+ "tableList":{
"type":"array" } },
- "annotations":[
-
+ "annotations":[
+ ] } }
- ```
+ ```
+ 2. Run the **Set-AzDataFactoryV2Pipeline** cmdlet to create the pipeline IncrementalCopyPipeline.
-
+    ```powershell
+    Set-AzDataFactoryV2Pipeline -DataFactoryName $dataFactoryName -ResourceGroupName $resourceGroupName -Name "IncrementalCopyPipeline" -File ".\IncrementalCopyPipeline.json"
- ```
+ ```
- Here is the sample output:
+ Here is the sample output:
- ```console
+ ```output
    PipelineName      : IncrementalCopyPipeline
    ResourceGroupName : <ResourceGroupName>
    DataFactoryName   : <DataFactoryName>
    Activities        : {IterateSQLTables}
    Parameters        : {[tableList, Microsoft.Azure.Management.DataFactory.Models.ParameterSpecification]}
    ```
-
+
## Run the pipeline

1. Create a parameter file named **Parameters.json** in the same folder with the following content:
- ```json
+ ```json
{
- "tableList":
+ "tableList":
[ {
- "TABLE_NAME": "customer_table",
- "WaterMark_Column": "LastModifytime",
- "TableType": "DataTypeforCustomerTable",
- "StoredProcedureNameForMergeOperation": "usp_upsert_customer_table"
- },
- {
- "TABLE_NAME": "project_table",
- "WaterMark_Column": "Creationtime",
- "TableType": "DataTypeforProjectTable",
- "StoredProcedureNameForMergeOperation": "usp_upsert_project_table"
- }
- ]
+ "TABLE_NAME": "customer_table",
+ "WaterMark_Column": "LastModifytime",
+ "TableType": "DataTypeforCustomerTable",
+ "StoredProcedureNameForMergeOperation": "usp_upsert_customer_table"
+ },
+ {
+ "TABLE_NAME": "project_table",
+ "WaterMark_Column": "Creationtime",
+ "TableType": "DataTypeforProjectTable",
+ "StoredProcedureNameForMergeOperation": "usp_upsert_project_table"
+ }
+ ]
}
- ```
+ ```
+ 2. Run the pipeline IncrementalCopyPipeline by using the **Invoke-AzDataFactoryV2Pipeline** cmdlet. Replace placeholders with your own resource group and data factory name.
- ```powershell
- $RunId = Invoke-AzDataFactoryV2Pipeline -PipelineName "IncrementalCopyPipeline" -ResourceGroup $resourceGroupName -dataFactoryName $dataFactoryName -ParameterFile ".\Parameters.json"
- ```
+ ```powershell
+ $RunId = Invoke-AzDataFactoryV2Pipeline -PipelineName "IncrementalCopyPipeline" -ResourceGroup $resourceGroupName -dataFactoryName $dataFactoryName -ParameterFile ".\Parameters.json"
+ ```
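If you want to check the run from PowerShell before switching to the portal, you can query it by the run ID captured above; a minimal sketch:

```powershell
# Sketch: check the status of the pipeline run started above by using its run ID.
Get-AzDataFactoryV2PipelineRun -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName -PipelineRunId $RunId
```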
## Monitor the pipeline

1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **All services**, search with the keyword *Data factories*, and select **Data factories**.
+2. Select **All services**, search with the keyword *Data factories*, and select **Data factories**.
-3. Search for your data factory in the list of data factories, and select it to open the **Data factory** page.
+3. Search for your data factory in the list of data factories, and select it to open the **Data factory** page.
4. On the **Data factory** page, select **Open** on the **Open Azure Data Factory Studio** tile to launch Azure Data Factory in a separate tab.
-5. On the Azure Data Factory home page, select **Monitor** on the left side.
+5. On the Azure Data Factory home page, select **Monitor** on the left side.
- :::image type="content" source="media/doc-common-process/get-started-page-monitor-button.png" alt-text="Screenshot shows the home page for Azure Data Factory.":::
+ :::image type="content" source="media/doc-common-process/get-started-page-monitor-button.png" alt-text="Screenshot shows the home page for Azure Data Factory.":::
6. You can see all the pipeline runs and their status. Notice that in the following example, the status of the pipeline run is **Succeeded**. To check parameters passed to the pipeline, select the link in the **Parameters** column. If an error occurred, you see a link in the **Error** column.
- :::image type="content" source="media/tutorial-incremental-copy-multiple-tables-powershell/monitor-pipeline-runs-4.png" alt-text="Screenshot shows pipeline runs for a data factory including your pipeline.":::
-7. When you select the link in the **Actions** column, you see all the activity runs for the pipeline.
+ :::image type="content" source="media/tutorial-incremental-copy-multiple-tables-powershell/monitor-pipeline-runs-4.png" alt-text="Screenshot shows pipeline runs for a data factory including your pipeline.":::
+7. When you select the link in the **Actions** column, you see all the activity runs for the pipeline.
-8. To go back to the **Pipeline Runs** view, select **All Pipeline Runs**.
+8. To go back to the **Pipeline Runs** view, select **All Pipeline Runs**.
## Review the results
-In SQL Server Management Studio, run the following queries against the target SQL database to verify that the data was copied from source tables to destination tables:
+In SQL Server Management Studio, run the following queries against the target SQL database to verify that the data was copied from source tables to destination tables:
+
+**Query**
-**Query**
```sql
select * from customer_table
```

**Output**
-```
+```output
===========================================
-PersonID Name LastModifytime
+PersonID Name LastModifytime
===========================================
-1 John 2017-09-01 00:56:00.000
-2 Mike 2017-09-02 05:23:00.000
-3 Alice 2017-09-03 02:36:00.000
-4 Andy 2017-09-04 03:21:00.000
-5 Anny 2017-09-05 08:06:00.000
+1 John 2017-09-01 00:56:00.000
+2 Mike 2017-09-02 05:23:00.000
+3 Alice 2017-09-03 02:36:00.000
+4 Andy 2017-09-04 03:21:00.000
+5 Anny 2017-09-05 08:06:00.000
```

**Query**

```sql
select * from project_table
```

**Output**
-```
+```output
===================================
-Project Creationtime
+Project Creationtime
===================================
-project1 2015-01-01 00:00:00.000
-project2 2016-02-02 01:23:00.000
-project3 2017-03-04 05:16:00.000
+project1 2015-01-01 00:00:00.000
+project2 2016-02-02 01:23:00.000
+project3 2017-03-04 05:16:00.000
```

**Query**

```sql
select * from watermarktable
```

**Output**
-```
+```output
======================================
-TableName WatermarkValue
+TableName WatermarkValue
======================================
-customer_table 2017-09-05 08:06:00.000
-project_table 2017-03-04 05:16:00.000
+customer_table 2017-09-05 08:06:00.000
+project_table 2017-03-04 05:16:00.000
```
-Notice that the watermark values for both tables were updated.
+Notice that the watermark values for both tables were updated.
## Add more data to the source tables
-Run the following query against the source SQL Server database to update an existing row in customer_table. Insert a new row into project_table.
+Run the following queries against the source SQL Server database to update an existing row in customer_table and insert a new row into project_table.
```sql UPDATE customer_table
INSERT INTO project_table
(Project, Creationtime) VALUES ('NewProject','10/1/2017 0:00:00 AM');
-```
+```
## Rerun the pipeline

1. Now, rerun the pipeline by executing the following PowerShell command:
- ```powershell
- $RunId = Invoke-AzDataFactoryV2Pipeline -PipelineName "IncrementalCopyPipeline" -ResourceGroup $resourceGroupname -dataFactoryName $dataFactoryName -ParameterFile ".\Parameters.json"
- ```
-2. Monitor the pipeline runs by following the instructions in the [Monitor the pipeline](#monitor-the-pipeline) section. When the pipeline status is **In Progress**, you see another action link under **Actions** to cancel the pipeline run.
+ ```powershell
+ $RunId = Invoke-AzDataFactoryV2Pipeline -PipelineName "IncrementalCopyPipeline" -ResourceGroup $resourceGroupname -dataFactoryName $dataFactoryName -ParameterFile ".\Parameters.json"
+ ```
+
+2. Monitor the pipeline runs by following the instructions in the [Monitor the pipeline](#monitor-the-pipeline) section. When the pipeline status is **In Progress**, you see another action link under **Actions** to cancel the pipeline run.
-3. Select **Refresh** to refresh the list until the pipeline run succeeds.
+3. Select **Refresh** to refresh the list until the pipeline run succeeds.
-4. Optionally, select the **View Activity Runs** link under **Actions** to see all the activity runs associated with this pipeline run.
+4. Optionally, select the **View Activity Runs** link under **Actions** to see all the activity runs associated with this pipeline run.
## Review the final results
-In SQL Server Management Studio, run the following queries against the target database to verify that the updated/new data was copied from source tables to destination tables.
+In SQL Server Management Studio, run the following queries against the target database to verify that the updated/new data was copied from source tables to destination tables.
+
+**Query**
-**Query**
```sql
select * from customer_table
```

**Output**
-```
+
+```output
===========================================
-PersonID Name LastModifytime
+PersonID Name LastModifytime
===========================================
-1 John 2017-09-01 00:56:00.000
-2 Mike 2017-09-02 05:23:00.000
-3 NewName 2017-09-08 00:00:00.000
-4 Andy 2017-09-04 03:21:00.000
-5 Anny 2017-09-05 08:06:00.000
+1 John 2017-09-01 00:56:00.000
+2 Mike 2017-09-02 05:23:00.000
+3 NewName 2017-09-08 00:00:00.000
+4 Andy 2017-09-04 03:21:00.000
+5 Anny 2017-09-05 08:06:00.000
```
-Notice the new values of **Name** and **LastModifytime** for the **PersonID** for number 3.
+Notice the new values of **Name** and **LastModifytime** for **PersonID** number 3.
**Query**
```sql
select * from project_table
```
**Output**
-```
+```output
===================================
-Project Creationtime
+Project Creationtime
===================================
-project1 2015-01-01 00:00:00.000
-project2 2016-02-02 01:23:00.000
-project3 2017-03-04 05:16:00.000
-NewProject 2017-10-01 00:00:00.000
+project1 2015-01-01 00:00:00.000
+project2 2016-02-02 01:23:00.000
+project3 2017-03-04 05:16:00.000
+NewProject 2017-10-01 00:00:00.000
```
-Notice that the **NewProject** entry was added to project_table.
+Notice that the **NewProject** entry was added to project_table.
**Query**
```sql
select * from watermarktable
```
**Output**
-```
+```output
======================================
-TableName WatermarkValue
+TableName WatermarkValue
======================================
-customer_table 2017-09-08 00:00:00.000
-project_table 2017-10-01 00:00:00.000
+customer_table 2017-09-08 00:00:00.000
+project_table 2017-10-01 00:00:00.000
```

Notice that the watermark values for both tables were updated.

## Next steps
-You performed the following steps in this tutorial:
+You performed the following steps in this tutorial:
> [!div class="checklist"]
> * Prepare source and destination data stores.
> * Create a data factory.
> * Create a self-hosted integration runtime (IR).
> * Install the integration runtime.
-> * Create linked services.
+> * Create linked services.
> * Create source, sink, and watermark datasets.
> * Create, run, and monitor a pipeline.
> * Review the results.
data-factory Data Factory Build Your First Pipeline Using Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-arm.md
Title: Build your first data factory (Resource Manager template)
+ Title: Build your first data factory (Resource Manager template)
description: In this tutorial, you create a sample Azure Data Factory pipeline using an Azure Resource Manager template.
Last updated 04/12/2023
> * [PowerShell](data-factory-build-your-first-pipeline-using-powershell.md)
> * [Resource Manager Template](data-factory-build-your-first-pipeline-using-arm.md)
> * [REST API](data-factory-build-your-first-pipeline-using-rest-api.md)
->
-
+>
> [!NOTE]
> This article applies to version 1 of Data Factory. If you are using the current version of the Data Factory service, see [Quickstart: Create a data factory using Azure Data Factory](../quickstart-create-data-factory-dot-net.md).

In this article, you use an Azure Resource Manager template to create your first Azure data factory. To do the tutorial using other tools/SDKs, select one of the options from the drop-down list.
-The pipeline in this tutorial has one activity: **HDInsight Hive activity**. This activity runs a hive script on an Azure HDInsight cluster that transforms input data to produce output data. The pipeline is scheduled to run once a month between the specified start and end times.
+The pipeline in this tutorial has one activity: **HDInsight Hive activity**. This activity runs a hive script on an Azure HDInsight cluster that transforms input data to produce output data. The pipeline is scheduled to run once a month between the specified start and end times.
> [!NOTE]
> The data pipeline in this tutorial transforms input data to produce output data. For a tutorial on how to copy data using Azure Data Factory, see [Tutorial: Copy data from Blob Storage to SQL Database](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md).
->
-> The pipeline in this tutorial has only one activity of type: HDInsightHive. A pipeline can have more than one activity. And, you can chain two activities (run one activity after another) by setting the output dataset of one activity as the input dataset of the other activity. For more information, see [scheduling and execution in Data Factory](data-factory-scheduling-and-execution.md#multiple-activities-in-a-pipeline).
+>
+> The pipeline in this tutorial has only one activity of type: HDInsightHive. A pipeline can have more than one activity. And, you can chain two activities (run one activity after another) by setting the output dataset of one activity as the input dataset of the other activity. For more information, see [scheduling and execution in Data Factory](data-factory-scheduling-and-execution.md#multiple-activities-in-a-pipeline).
## Prerequisites
The pipeline in this tutorial has one activity: **HDInsight Hive activity**. Thi
* Read through the [Tutorial Overview](data-factory-build-your-first-pipeline.md) article and complete the **prerequisite** steps.
* Follow the instructions in the [How to install and configure Azure PowerShell](/powershell/azure/) article to install the latest version of Azure PowerShell on your computer.
-* See [Authoring Azure Resource Manager Templates](../../azure-resource-manager/templates/syntax.md) to learn about Azure Resource Manager templates.
+* See [Authoring Azure Resource Manager Templates](../../azure-resource-manager/templates/syntax.md) to learn about Azure Resource Manager templates.
## In this tutorial
A data factory can have one or more pipelines. A pipeline can have one or more a
The following section provides the complete Resource Manager template for defining Data Factory entities so that you can quickly run through the tutorial and test the template. To understand how each Data Factory entity is defined, see the [Data Factory entities in the template](#data-factory-entities-in-the-template) section. To learn about the JSON syntax and properties for Data Factory resources in a template, see [Microsoft.DataFactory resource types](/azure/templates/microsoft.datafactory/allversions).

## Data Factory JSON template
-The top-level Resource Manager template for defining a data factory is:
+The top-level Resource Manager template for defining a data factory is:
```json {
Create a JSON file named **ADFTutorialARM.json** in **C:\ADFGetStarted** folder
```

> [!NOTE]
-> You can find another example of Resource Manager template for creating an Azure data factory on [Tutorial: Create a pipeline with Copy Activity using an Azure Resource Manager template](data-factory-copy-activity-tutorial-using-azure-resource-manager-template.md).
->
->
+> You can find another example of Resource Manager template for creating an Azure data factory on [Tutorial: Create a pipeline with Copy Activity using an Azure Resource Manager template](data-factory-copy-activity-tutorial-using-azure-resource-manager-template.md).
+>
+>
## Parameters JSON
-Create a JSON file named **ADFTutorialARM-Parameters.json** that contains parameters for the Azure Resource Manager template.
+Create a JSON file named **ADFTutorialARM-Parameters.json** that contains parameters for the Azure Resource Manager template.
> [!IMPORTANT]
-> Specify the name and key of your Azure Storage account for the **storageAccountName** and **storageAccountKey** parameters in this parameter file.
->
->
+> Specify the name and key of your Azure Storage account for the **storageAccountName** and **storageAccountKey** parameters in this parameter file.
+>
+>
```json {
Create a JSON file named **ADFTutorialARM-Parameters.json** that contains parame
Get-AzSubscription -SubscriptionName <SUBSCRIPTION NAME> | Set-AzContext ```
-2. Run the following command to deploy Data Factory entities using the Resource Manager template you created in Step 1.
+2. Run the following command to deploy Data Factory entities using the Resource Manager template you created in Step 1.
```powershell New-AzResourceGroupDeployment -Name MyARMDeployment -ResourceGroupName ADFTutorialResourceGroup -TemplateFile C:\ADFGetStarted\ADFTutorialARM.json -TemplateParameterFile C:\ADFGetStarted\ADFTutorialARM-Parameters.json
Create a JSON file named **ADFTutorialARM-Parameters.json** that contains parame
1. After logging in to the [Azure portal](https://portal.azure.com/), click **Browse** and select **Data factories**.

   :::image type="content" source="./media/data-factory-build-your-first-pipeline-using-arm/BrowseDataFactories.png" alt-text="Browse->Data factories":::
-2. In the **Data Factories** blade, click the data factory (**TutorialFactoryARM**) you created.
+2. In the **Data Factories** blade, click the data factory (**TutorialFactoryARM**) you created.
3. In the **Data Factory** blade for your data factory, click **Diagram**.

   :::image type="content" source="./media/data-factory-build-your-first-pipeline-using-arm/DiagramTile.png" alt-text="Diagram Tile":::
4. In the **Diagram View**, you see an overview of the pipelines and datasets used in this tutorial.
-
- :::image type="content" source="./media/data-factory-build-your-first-pipeline-using-arm/DiagramView.png" alt-text="Diagram View":::
+
+ :::image type="content" source="./media/data-factory-build-your-first-pipeline-using-arm/DiagramView.png" alt-text="Diagram View":::
5. In the Diagram View, double-click the dataset **AzureBlobOutput**. You see the slice that is currently being processed.
-
   :::image type="content" source="./media/data-factory-build-your-first-pipeline-using-arm/AzureBlobOutput.png" alt-text="Screenshot that shows the AzureBlobOutput dataset.":::
6. When processing is done, you see the slice in **Ready** state. Creation of an on-demand HDInsight cluster usually takes some time (approximately 20 minutes). Therefore, expect the pipeline to take **approximately 30 minutes** to process the slice.
-
- :::image type="content" source="./media/data-factory-build-your-first-pipeline-using-arm/SliceReady.png" alt-text="Dataset":::
-7. When the slice is in **Ready** state, check the **partitioneddata** folder in the **adfgetstarted** container in your blob storage for the output data.
+
+ :::image type="content" source="./media/data-factory-build-your-first-pipeline-using-arm/SliceReady.png" alt-text="Dataset":::
+7. When the slice is in **Ready** state, check the **partitioneddata** folder in the **adfgetstarted** container in your blob storage for the output data.
See [Monitor datasets and pipeline](data-factory-monitor-manage-pipelines.md) for instructions on how to use the Azure portal blades to monitor the pipeline and datasets you have created in this tutorial.
-You can also use Monitor and Manage App to monitor your data pipelines. See [Monitor and manage Azure Data Factory pipelines using Monitoring App](data-factory-monitor-manage-app.md) for details about using the application.
+You can also use Monitor and Manage App to monitor your data pipelines. See [Monitor and manage Azure Data Factory pipelines using Monitoring App](data-factory-monitor-manage-app.md) for details about using the application.
> [!IMPORTANT] > The input file gets deleted when the slice is processed successfully. Therefore, if you want to rerun the slice or do the tutorial again, upload the input file (input.log) to the inputdata folder of the adfgetstarted container.
->
->
+>
+>
## Data Factory entities in the template

### Define data factory
-You define a data factory in the Resource Manager template as shown in the following sample:
+You define a data factory in the Resource Manager template as shown in the following sample:
```json
"resources": [
-{
+ {
"name": "[variables('dataFactoryName')]", "apiVersion": "2015-10-01", "type": "Microsoft.DataFactory/factories", "location": "West US"
-}
+ }
+]
```
-The dataFactoryName is defined as:
+
+The dataFactoryName is defined as:
```json
"dataFactoryName": "[concat('HiveTransformDF', uniqueString(resourceGroup().id))]",
```
-It is a unique string based on the resource group ID.
+It is a unique string based on the resource group ID.
### Defining Data Factory entities
-The following Data Factory entities are defined in the JSON template:
+The following Data Factory entities are defined in the JSON template:
* [Azure Storage linked service](#azure-storage-linked-service)
* [HDInsight on-demand linked service](#hdinsight-on-demand-linked-service)
The following Data Factory entities are defined in the JSON template:
* [Data pipeline with a copy activity](#data-pipeline)

#### Azure Storage linked service
-You specify the name and key of your Azure storage account in this section. See [Azure Storage linked service](data-factory-azure-blob-connector.md#azure-storage-linked-service) for details about JSON properties used to define an Azure Storage linked service.
+You specify the name and key of your Azure storage account in this section. See [Azure Storage linked service](data-factory-azure-blob-connector.md#azure-storage-linked-service) for details about JSON properties used to define an Azure Storage linked service.
```json {
- "type": "linkedservices",
- "name": "[variables('azureStorageLinkedServiceName')]",
- "dependsOn": [
- "[variables('dataFactoryName')]"
- ],
- "apiVersion": "2015-10-01",
- "properties": {
- "type": "AzureStorage",
- "description": "Azure Storage linked service",
- "typeProperties": {
- "connectionString": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageAccountName'),';AccountKey=',parameters('storageAccountKey'))]"
- }
- }
+ "type": "linkedservices",
+ "name": "[variables('azureStorageLinkedServiceName')]",
+ "dependsOn": [
+ "[variables('dataFactoryName')]"
+ ],
+ "apiVersion": "2015-10-01",
+ "properties": {
+ "type": "AzureStorage",
+ "description": "Azure Storage linked service",
+ "typeProperties": {
+ "connectionString": "[concat('DefaultEndpointsProtocol=https;AccountName=',parameters('storageAccountName'),';AccountKey=',parameters('storageAccountKey'))]"
+ }
+ }
} ```
-The **connectionString** uses the storageAccountName and storageAccountKey parameters. The values for these parameters passed by using a configuration file. The definition also uses variables: azureStorageLinkedService and dataFactoryName defined in the template.
+The **connectionString** uses the storageAccountName and storageAccountKey parameters. The values for these parameters are passed in by using a parameter file. The definition also uses the azureStorageLinkedServiceName and dataFactoryName variables defined in the template.
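For reference, a parameter file that supplies these two values follows the standard Resource Manager parameter-file shape. The following is only a sketch; your ADFTutorialARM-Parameters.json also contains the other parameters that the template expects:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "value": "<your storage account name>" },
    "storageAccountKey": { "value": "<your storage account key>" }
  }
}
```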
#### HDInsight on-demand linked service
-See [Compute linked services](data-factory-compute-linked-services.md#azure-hdinsight-on-demand-linked-service) article for details about JSON properties used to define an HDInsight on-demand linked service.
+See [Compute linked services](data-factory-compute-linked-services.md#azure-hdinsight-on-demand-linked-service) article for details about JSON properties used to define an HDInsight on-demand linked service.
```json {
- "type": "linkedservices",
- "name": "[variables('hdInsightOnDemandLinkedServiceName')]",
- "dependsOn": [
- "[variables('dataFactoryName')]"
- ],
- "apiVersion": "2015-10-01",
- "properties": {
- "type": "HDInsightOnDemand",
- "typeProperties": {
+ "type": "linkedservices",
+ "name": "[variables('hdInsightOnDemandLinkedServiceName')]",
+ "dependsOn": [
+ "[variables('dataFactoryName')]"
+ ],
+ "apiVersion": "2015-10-01",
+ "properties": {
+ "type": "HDInsightOnDemand",
+ "typeProperties": {
"version": "3.5", "clusterSize": 1, "timeToLive": "00:05:00", "osType": "Linux",
- "linkedServiceName": "[variables('azureStorageLinkedServiceName')]"
- }
- }
+ "linkedServiceName": "[variables('azureStorageLinkedServiceName')]"
+ }
+ }
} ```
-Note the following points:
+Note the following points:
-* The Data Factory creates a **Linux-based** HDInsight cluster for you with the above JSON. See [On-demand HDInsight Linked Service](data-factory-compute-linked-services.md#azure-hdinsight-on-demand-linked-service) for details.
+* The Data Factory creates a **Linux-based** HDInsight cluster for you with the above JSON. See [On-demand HDInsight Linked Service](data-factory-compute-linked-services.md#azure-hdinsight-on-demand-linked-service) for details.
* You could use **your own HDInsight cluster** instead of using an on-demand HDInsight cluster. See [HDInsight Linked Service](data-factory-compute-linked-services.md#azure-hdinsight-linked-service) for details. * The HDInsight cluster creates a **default container** in the blob storage you specified in the JSON (**linkedServiceName**). HDInsight does not delete this container when the cluster is deleted. This behavior is by design. With on-demand HDInsight linked service, a HDInsight cluster is created every time a slice needs to be processed unless there is an existing live cluster (**timeToLive**) and is deleted when the processing is done.
-
+ As more slices are processed, you see many containers in your Azure blob storage. If you do not need them for troubleshooting of the jobs, you may want to delete them to reduce the storage cost. The names of these containers follow a pattern: "adf**yourdatafactoryname**-**linkedservicename**-datetimestamp". Use tools such as [Microsoft Azure Storage Explorer](https://storageexplorer.com/) to delete containers in your Azure blob storage. See [On-demand HDInsight Linked Service](data-factory-compute-linked-services.md#azure-hdinsight-on-demand-linked-service) for details. #### Azure blob input dataset
-You specify the names of blob container, folder, and file that contains the input data. See [Azure Blob dataset properties](data-factory-azure-blob-connector.md#dataset-properties) for details about JSON properties used to define an Azure Blob dataset.
+You specify the names of blob container, folder, and file that contains the input data. See [Azure Blob dataset properties](data-factory-azure-blob-connector.md#dataset-properties) for details about JSON properties used to define an Azure Blob dataset.
```json {
- "type": "datasets",
- "name": "[variables('blobInputDatasetName')]",
- "dependsOn": [
- "[variables('dataFactoryName')]",
- "[variables('azureStorageLinkedServiceName')]"
- ],
- "apiVersion": "2015-10-01",
- "properties": {
- "type": "AzureBlob",
- "linkedServiceName": "[variables('azureStorageLinkedServiceName')]",
- "typeProperties": {
- "fileName": "[parameters('inputBlobName')]",
- "folderPath": "[concat(parameters('blobContainer'), '/', parameters('inputBlobFolder'))]",
- "format": {
- "type": "TextFormat",
- "columnDelimiter": ","
- }
- },
- "availability": {
- "frequency": "Month",
- "interval": 1
- },
- "external": true
- }
+ "type": "datasets",
+ "name": "[variables('blobInputDatasetName')]",
+ "dependsOn": [
+ "[variables('dataFactoryName')]",
+ "[variables('azureStorageLinkedServiceName')]"
+ ],
+ "apiVersion": "2015-10-01",
+ "properties": {
+ "type": "AzureBlob",
+ "linkedServiceName": "[variables('azureStorageLinkedServiceName')]",
+ "typeProperties": {
+ "fileName": "[parameters('inputBlobName')]",
+ "folderPath": "[concat(parameters('blobContainer'), '/', parameters('inputBlobFolder'))]",
+ "format": {
+ "type": "TextFormat",
+ "columnDelimiter": ","
+ }
+ },
+ "availability": {
+ "frequency": "Month",
+ "interval": 1
+ },
+ "external": true
+ }
} ```
-This definition uses the following parameters defined in parameter template: blobContainer, inputBlobFolder, and inputBlobName.
+This definition uses the following parameters defined in the parameter template: blobContainer, inputBlobFolder, and inputBlobName.
#### Azure Blob output dataset
-You specify the names of blob container and folder that holds the output data. See [Azure Blob dataset properties](data-factory-azure-blob-connector.md#dataset-properties) for details about JSON properties used to define an Azure Blob dataset.
+You specify the names of blob container and folder that holds the output data. See [Azure Blob dataset properties](data-factory-azure-blob-connector.md#dataset-properties) for details about JSON properties used to define an Azure Blob dataset.
```json {
- "type": "datasets",
- "name": "[variables('blobOutputDatasetName')]",
- "dependsOn": [
- "[variables('dataFactoryName')]",
- "[variables('azureStorageLinkedServiceName')]"
- ],
- "apiVersion": "2015-10-01",
- "properties": {
- "type": "AzureBlob",
- "linkedServiceName": "[variables('azureStorageLinkedServiceName')]",
- "typeProperties": {
- "folderPath": "[concat(parameters('blobContainer'), '/', parameters('outputBlobFolder'))]",
- "format": {
- "type": "TextFormat",
- "columnDelimiter": ","
- }
- },
- "availability": {
- "frequency": "Month",
- "interval": 1
- }
- }
+ "type": "datasets",
+ "name": "[variables('blobOutputDatasetName')]",
+ "dependsOn": [
+ "[variables('dataFactoryName')]",
+ "[variables('azureStorageLinkedServiceName')]"
+ ],
+ "apiVersion": "2015-10-01",
+ "properties": {
+ "type": "AzureBlob",
+ "linkedServiceName": "[variables('azureStorageLinkedServiceName')]",
+ "typeProperties": {
+ "folderPath": "[concat(parameters('blobContainer'), '/', parameters('outputBlobFolder'))]",
+ "format": {
+ "type": "TextFormat",
+ "columnDelimiter": ","
+ }
+ },
+ "availability": {
+ "frequency": "Month",
+ "interval": 1
+ }
+ }
} ```
-This definition uses the following parameters defined in the parameter template: blobContainer and outputBlobFolder.
+This definition uses the following parameters defined in the parameter template: blobContainer and outputBlobFolder.
#### Data pipeline
-You define a pipeline that transform data by running Hive script on an on-demand Azure HDInsight cluster. See [Pipeline JSON](data-factory-create-pipelines.md#pipeline-json) for descriptions of JSON elements used to define a pipeline in this example.
+You define a pipeline that transforms data by running a Hive script on an on-demand Azure HDInsight cluster. See [Pipeline JSON](data-factory-create-pipelines.md#pipeline-json) for descriptions of the JSON elements used to define a pipeline in this example.
```json {
New-AzResourceGroupDeployment -Name MyARMDeployment -ResourceGroupName ADFTutori
New-AzResourceGroupDeployment -Name MyARMDeployment -ResourceGroupName ADFTutorialResourceGroup -TemplateFile ADFTutorialARM.json -TemplateParameterFile ADFTutorialARM-Parameters-Production.json ```
-Notice that the first command uses parameter file for the development environment, second one for the test environment, and the third one for the production environment.
+Notice that the first command uses the parameter file for the development environment, the second one for the test environment, and the third one for the production environment.
-You can also reuse the template to perform repeated tasks. For example, you need to create many data factories with one or more pipelines that implement the same logic but each data factory uses different Azure storage and Azure SQL Database accounts. In this scenario, you use the same template in the same environment (dev, test, or production) with different parameter files to create data factories.
+You can also reuse the template to perform repeated tasks. For example, you might need to create many data factories with one or more pipelines that implement the same logic, where each data factory uses different Azure Storage and Azure SQL Database accounts. In this scenario, you use the same template in the same environment (dev, test, or production) with different parameter files to create the data factories.
## Resource Manager template for creating a gateway

Here is a sample Resource Manager template for creating a logical gateway in the back end. Install a gateway on your on-premises computer or Azure IaaS VM and register the gateway with the Data Factory service by using a key. See [Move data between on-premises and cloud](data-factory-move-data-between-onprem-and-cloud.md) for details.
Here is a sample Resource Manager template for creating a logical gateway in the
"properties": { "description": "my gateway" }
- }
+ }
] } ] } ```
-This template creates a data factory named GatewayUsingArmDF with a gateway named: GatewayUsingARM.
+This template creates a data factory named GatewayUsingArmDF with a gateway named GatewayUsingARM.
## See Also
data-factory Data Factory Build Your First Pipeline Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-powershell.md
In this step, you use Azure PowerShell to create an Azure Data Factory named **F
``` 2. Create an Azure resource group named **ADFTutorialResourceGroup** by running the following command:
-
+ ```powershell New-AzResourceGroup -Name ADFTutorialResourceGroup -Location "West US" ```
In this step, you create datasets to represent the input and output data for Hiv
### Create input dataset 1. Create a JSON file named **InputTable.json** in the **C:\ADFGetStarted** folder with the following content:
- ```json
- {
- "name": "AzureBlobInput",
- "properties": {
- "type": "AzureBlob",
- "linkedServiceName": "StorageLinkedService",
- "typeProperties": {
- "fileName": "input.log",
- "folderPath": "adfgetstarted/inputdata",
- "format": {
- "type": "TextFormat",
- "columnDelimiter": ","
- }
- },
- "availability": {
- "frequency": "Month",
- "interval": 1
- },
- "external": true,
- "policy": {}
- }
- }
+ ```json
+ {
+ "name": "AzureBlobInput",
+ "properties": {
+ "type": "AzureBlob",
+ "linkedServiceName": "StorageLinkedService",
+ "typeProperties": {
+ "fileName": "input.log",
+ "folderPath": "adfgetstarted/inputdata",
+ "format": {
+ "type": "TextFormat",
+ "columnDelimiter": ","
+ }
+ },
+ "availability": {
+ "frequency": "Month",
+ "interval": 1
+ },
+ "external": true,
+ "policy": {}
+ }
+ }
``` The JSON defines a dataset named **AzureBlobInput**, which represents input data for an activity in the pipeline. In addition, it specifies that the input data is located in the blob container called **adfgetstarted** and the folder called **inputdata**.
In this step, you create your first pipeline with a **HDInsightHive** activity.
> >
- ```json
- {
- "name": "MyFirstPipeline",
- "properties": {
- "description": "My first Azure Data Factory pipeline",
- "activities": [
- {
- "type": "HDInsightHive",
- "typeProperties": {
- "scriptPath": "adfgetstarted/script/partitionweblogs.hql",
- "scriptLinkedService": "StorageLinkedService",
- "defines": {
- "inputtable": "wasb://adfgetstarted@<storageaccountname>.blob.core.windows.net/inputdata",
- "partitionedtable": "wasb://adfgetstarted@<storageaccountname>.blob.core.windows.net/partitioneddata"
- }
- },
- "inputs": [
- {
- "name": "AzureBlobInput"
- }
- ],
- "outputs": [
- {
- "name": "AzureBlobOutput"
- }
- ],
- "policy": {
- "concurrency": 1,
- "retry": 3
- },
- "scheduler": {
- "frequency": "Month",
- "interval": 1
- },
- "name": "RunSampleHiveActivity",
- "linkedServiceName": "HDInsightOnDemandLinkedService"
- }
- ],
- "start": "2017-07-01T00:00:00Z",
- "end": "2017-07-02T00:00:00Z",
- "isPaused": false
- }
- }
+ ```json
+ {
+ "name": "MyFirstPipeline",
+ "properties": {
+ "description": "My first Azure Data Factory pipeline",
+ "activities": [
+ {
+ "type": "HDInsightHive",
+ "typeProperties": {
+ "scriptPath": "adfgetstarted/script/partitionweblogs.hql",
+ "scriptLinkedService": "StorageLinkedService",
+ "defines": {
+ "inputtable": "wasb://adfgetstarted@<storageaccountname>.blob.core.windows.net/inputdata",
+ "partitionedtable": "wasb://adfgetstarted@<storageaccountname>.blob.core.windows.net/partitioneddata"
+ }
+ },
+ "inputs": [
+ {
+ "name": "AzureBlobInput"
+ }
+ ],
+ "outputs": [
+ {
+ "name": "AzureBlobOutput"
+ }
+ ],
+ "policy": {
+ "concurrency": 1,
+ "retry": 3
+ },
+ "scheduler": {
+ "frequency": "Month",
+ "interval": 1
+ },
+ "name": "RunSampleHiveActivity",
+ "linkedServiceName": "HDInsightOnDemandLinkedService"
+ }
+ ],
+ "start": "2017-07-01T00:00:00Z",
+ "end": "2017-07-02T00:00:00Z",
+ "isPaused": false
+ }
+ }
``` In the JSON snippet, you are creating a pipeline that consists of a single activity that uses Hive to process Data on an HDInsight cluster.
data-factory Data Factory Build Your First Pipeline Using Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-rest-api.md
The pipeline in this tutorial has one activity: **HDInsight Hive activity**. Thi
3. Run **Get-AzSubscription -SubscriptionName NameOfAzureSubscription | Set-AzContext** to select the subscription that you want to work with. Replace **NameOfAzureSubscription** with the name of your Azure subscription.
* Create an Azure resource group named **ADFTutorialResourceGroup** by running the following command in PowerShell:
- ```powershell
- New-AzResourceGroup -Name ADFTutorialResourceGroup -Location "West US"
- ```
+ ```powershell
+ New-AzResourceGroup -Name ADFTutorialResourceGroup -Location "West US"
+ ```
Some of the steps in this tutorial assume that you use the resource group named ADFTutorialResourceGroup. If you use a different resource group, you need to use the name of your resource group in place of ADFTutorialResourceGroup in this tutorial.
In this step, you create an Azure Data Factory named **FirstDataFactoryREST**. A
```powershell $cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@datafactory.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/FirstDataFactoryREST?api-version=2015-10-01};
- ```
+ ```
2. Run the command by using **Invoke-Command**.
- ```powershell
- $results = Invoke-Command -scriptblock $cmd;
- ```
+ ```powershell
+ $results = Invoke-Command -scriptblock $cmd;
+ ```
3. View the results. If the data factory has been successfully created, you see the JSON for the data factory in the **results**; otherwise, you see an error message.
- ```powershell
- Write-Host $results
- ```
+ ```powershell
+ Write-Host $results
+ ```
Note the following points:
In this step, you create the input dataset to represent input data stored in the
1. Assign the command to a variable named **cmd**.
- ```powershell
- $cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@inputdataset.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/datasets/AzureBlobInput?api-version=2015-10-01};
- ```
+ ```powershell
+ $cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@inputdataset.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/datasets/AzureBlobInput?api-version=2015-10-01};
+ ```
2. Run the command by using **Invoke-Command**.
- ```powershell
- $results = Invoke-Command -scriptblock $cmd;
- ```
+ ```powershell
+ $results = Invoke-Command -scriptblock $cmd;
+ ```
3. View the results. If the dataset has been successfully created, you see the JSON for the dataset in the **results**; otherwise, you see an error message.
- ```powershell
- Write-Host $results
- ```
+ ```powershell
+ Write-Host $results
+ ```
### Create output dataset In this step, you create the output dataset to represent output data stored in Azure Blob storage. 1. Assign the command to a variable named **cmd**.
- ```powershell
- $cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@outputdataset.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/datasets/AzureBlobOutput?api-version=2015-10-01};
- ```
+ ```powershell
+ $cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@outputdataset.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/datasets/AzureBlobOutput?api-version=2015-10-01};
+ ```
2. Run the command by using **Invoke-Command**.
- ```powershell
- $results = Invoke-Command -scriptblock $cmd;
- ```
+ ```powershell
+ $results = Invoke-Command -scriptblock $cmd;
+ ```
3. View the results. If the dataset has been successfully created, you see the JSON for the dataset in the **results**; otherwise, you see an error message.
- ```powershell
- Write-Host $results
- ```
+ ```powershell
+ Write-Host $results
+ ```
## Create pipeline In this step, you create your first pipeline with an **HDInsightHive** activity. Input slice is available monthly (frequency: Month, interval: 1), output slice is produced monthly, and the scheduler property for the activity is also set to monthly. The settings for the output dataset and the activity scheduler must match. Currently, the output dataset is what drives the schedule, so you must create an output dataset even if the activity does not produce any output. If the activity doesn't take any input, you can skip creating the input dataset.
Confirm that you see the **input.log** file in the **adfgetstarted/inputdata** f
1. Assign the command to a variable named **cmd**.
- ```powershell
- $cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@pipeline.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/datapipelines/MyFirstPipeline?api-version=2015-10-01};
- ```
+ ```powershell
+ $cmd = {.\curl.exe -X PUT -H "Authorization: Bearer $accessToken" -H "Content-Type: application/json" --data "@pipeline.json" https://management.azure.com/subscriptions/$subscription_id/resourcegroups/$rg/providers/Microsoft.DataFactory/datafactories/$adf/datapipelines/MyFirstPipeline?api-version=2015-10-01};
+ ```
2. Run the command by using **Invoke-Command**.
- ```powershell
- $results = Invoke-Command -scriptblock $cmd;
- ```
+ ```powershell
+ $results = Invoke-Command -scriptblock $cmd;
+ ```
3. View the results. If the pipeline has been successfully created, you see the JSON for the pipeline in the **results**; otherwise, you see an error message.
- ```powershell
- Write-Host $results
- ```
+ ```powershell
+ Write-Host $results
+ ```
4. Congratulations, you have successfully created your first pipeline using Azure PowerShell! ## Monitor pipeline
data-factory Data Factory Monitor Manage Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-monitor-manage-pipelines.md
If the activity run fails in a pipeline, the dataset that is produced by the pip
3. Now, run the **Get-AzDataFactoryRun** cmdlet to get details about the activity run for the slice.
- ```powershell
- Get-AzDataFactoryRun [-ResourceGroupName] <String> [-DataFactoryName] <String> [-DatasetName] <String> [-StartDateTime]
- <DateTime> [-Profile <AzureProfile> ] [ <CommonParameters>]
- ```
+ ```powershell
+ Get-AzDataFactoryRun [-ResourceGroupName] <String> [-DataFactoryName] <String> [-DatasetName] <String> [-StartDateTime]
+ <DateTime> [-Profile <AzureProfile> ] [ <CommonParameters>]
+ ```
For example:
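A minimal sketch of such an invocation follows; the resource group, data factory, dataset, and slice start time below are placeholders for illustration only, reusing names that appear earlier in this document:

```powershell
# Hypothetical example: get activity run details for one slice of a dataset.
# All names below are placeholders; substitute the values from your own deployment.
Get-AzDataFactoryRun -ResourceGroupName "ADFTutorialResourceGroup" `
    -DataFactoryName "FirstDataFactoryREST" `
    -DatasetName "AzureBlobOutput" `
    -StartDateTime "2017-07-01T00:00:00Z"
```

The cmdlet returns the activity runs that produced the data slice starting at the specified time, which you can inspect to diagnose a failed slice.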
data-lake-analytics Data Lake Analytics U Sql Programmability Guide UDT AGG https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-UDT-AGG.md
If we try to use UDT in EXTRACTOR or OUTPUTTER (out of previous SELECT), as show
```usql @rs1 =
- SELECT
- MyNameSpace.Myfunction_Returning_UDT(filed1) AS myfield
+ SELECT
+ MyNameSpace.Myfunction_Returning_UDT(filed1) AS myfield
FROM @rs0;
-OUTPUT @rs1
- TO @output_file
+OUTPUT @rs1
+ TO @output_file
USING Outputters.Text(); ```
using System.IO;
SqlUserDefinedType is a required attribute for UDT definition.
-The constructor of the class:
+The constructor of the class:
* SqlUserDefinedTypeAttribute (type formatter)
The `IFormatter` interface serializes and de-serializes an object graph with the
* **Serialize**: Serializes an object, or graph of objects, with the given root to the provided stream.
-`MyType` instance: Instance of the type.
-`IColumnWriter` writer / `IColumnReader` reader: The underlying column stream.
+`MyType` instance: Instance of the type.
+`IColumnWriter` writer / `IColumnReader` reader: The underlying column stream.
`ISerializationContext` context: Enum that defines a set of flags that specifies the source or destination context for the stream during serialization. * **Intermediate**: Specifies that the source or destination context isn't a persisted store.
public struct FiscalPeriod
public FiscalPeriod(int quarter, int month):this() {
- this.Quarter = quarter;
- this.Month = month;
+ this.Quarter = quarter;
+ this.Month = month;
} public override bool Equals(object obj) {
- if (ReferenceEquals(null, obj))
- {
- return false;
- }
+ if (ReferenceEquals(null, obj))
+ {
+ return false;
+ }
- return obj is FiscalPeriod && Equals((FiscalPeriod)obj);
+ return obj is FiscalPeriod && Equals((FiscalPeriod)obj);
} public bool Equals(FiscalPeriod other)
return this.Quarter.CompareTo(other.Quarter) < 0 || this.Month.CompareTo(other.M
public override int GetHashCode() {
- unchecked
- {
- return (this.Quarter.GetHashCode() * 397) ^ this.Month.GetHashCode();
- }
+ unchecked
+ {
+ return (this.Quarter.GetHashCode() * 397) ^ this.Month.GetHashCode();
+ }
} public static FiscalPeriod operator +(FiscalPeriod c1, FiscalPeriod c2)
return new FiscalPeriod((c1.Quarter + c2.Quarter) > 4 ? (c1.Quarter + c2.Quarter
public static bool operator ==(FiscalPeriod c1, FiscalPeriod c2) {
- return c1.Equals(c2);
+ return c1.Equals(c2);
} public static bool operator !=(FiscalPeriod c1, FiscalPeriod c2) {
- return !c1.Equals(c2);
+ return !c1.Equals(c2);
} public static bool operator >(FiscalPeriod c1, FiscalPeriod c2) {
- return c1.GreaterThan(c2);
+ return c1.GreaterThan(c2);
} public static bool operator <(FiscalPeriod c1, FiscalPeriod c2) {
- return c1.LessThan(c2);
+ return c1.LessThan(c2);
} public override string ToString() {
- return (String.Format("Q{0}:P{1}", this.Quarter, this.Month));
+ return (String.Format("Q{0}:P{1}", this.Quarter, this.Month));
} }
public class FiscalPeriodFormatter : IFormatter<FiscalPeriod>
{ public void Serialize(FiscalPeriod instance, IColumnWriter writer, ISerializationContext context) {
- using (var binaryWriter = new BinaryWriter(writer.BaseStream))
- {
- binaryWriter.Write(instance.Quarter);
- binaryWriter.Write(instance.Month);
- binaryWriter.Flush();
- }
+ using (var binaryWriter = new BinaryWriter(writer.BaseStream))
+ {
+ binaryWriter.Write(instance.Quarter);
+ binaryWriter.Write(instance.Month);
+ binaryWriter.Flush();
+ }
} public FiscalPeriod Deserialize(IColumnReader reader, ISerializationContext context) {
- using (var binaryReader = new BinaryReader(reader.BaseStream))
- {
+ using (var binaryReader = new BinaryReader(reader.BaseStream))
+ {
var result = new FiscalPeriod(binaryReader.ReadInt16(), binaryReader.ReadInt16());
- return result;
- }
+ return result;
+ }
} } ```
public static FiscalPeriod GetFiscalPeriodWithCustomType(DateTime dt)
int FiscalMonth = 0; if (dt.Month < 7) {
- FiscalMonth = dt.Month + 6;
+ FiscalMonth = dt.Month + 6;
} else {
- FiscalMonth = dt.Month - 6;
+ FiscalMonth = dt.Month - 6;
} int FiscalQuarter = 0; if (FiscalMonth >= 1 && FiscalMonth <= 3) {
- FiscalQuarter = 1;
+ FiscalQuarter = 1;
} if (FiscalMonth >= 4 && FiscalMonth <= 6) {
- FiscalQuarter = 2;
+ FiscalQuarter = 2;
} if (FiscalMonth >= 7 && FiscalMonth <= 9) {
- FiscalQuarter = 3;
+ FiscalQuarter = 3;
} if (FiscalMonth >= 10 && FiscalMonth <= 12) {
- FiscalQuarter = 4;
+ FiscalQuarter = 4;
} return new FiscalPeriod(FiscalQuarter, FiscalMonth);
DECLARE @input_file string = @"c:\work\cosmos\usql-programmability\input_file.ts
DECLARE @output_file string = @"c:\work\cosmos\usql-programmability\output_file.tsv"; @rs0 =
- EXTRACT
- guid string,
- dt DateTime,
- user String,
- des String
- FROM @input_file USING Extractors.Tsv();
+ EXTRACT
+ guid string,
+ dt DateTime,
+ user String,
+ des String
+ FROM @input_file USING Extractors.Tsv();
@rs1 =
- SELECT
- guid AS start_id,
+ SELECT
+ guid AS start_id,
dt, DateTime.Now.ToString("M/d/yyyy") AS Nowdate, USQL_Programmability.CustomFunctions.GetFiscalPeriodWithCustomType(dt).Quarter AS fiscalquarter,
DECLARE @output_file string = @"c:\work\cosmos\usql-programmability\output_file.
FROM @rs0; @rs2 =
- SELECT
+ SELECT
start_id, dt, DateTime.Now.ToString("M/d/yyyy") AS Nowdate,
DECLARE @output_file string = @"c:\work\cosmos\usql-programmability\output_file.
fiscalmonth, USQL_Programmability.CustomFunctions.GetFiscalPeriodWithCustomType(dt).ToString() AS fiscalperiod,
- // This user-defined type was created in the prior SELECT. Passing the UDT to this subsequent SELECT would have failed if the UDT was not annotated with an IFormatter.
+ // This user-defined type was created in the prior SELECT. Passing the UDT to this subsequent SELECT would have failed if the UDT was not annotated with an IFormatter.
fiscalperiod_adjusted.ToString() AS fiscalperiod_adjusted, user, des FROM @rs1;
-OUTPUT @rs2
- TO @output_file
- USING Outputters.Text();
+OUTPUT @rs2
+ TO @output_file
+ USING Outputters.Text();
``` Here's an example of a full code-behind section:
The base class allows you to pass three abstract parameters: two as input parame
```csharp public class GuidAggregate : IAggregate<string, string, string> {
- string guid_agg;
+ string guid_agg;
- public override void Init()
- { … }
+ public override void Init()
+ { … }
- public override void Accumulate(string guid, string user)
- { … }
+ public override void Accumulate(string guid, string user)
+ { … }
- public override string Terminate()
- { … }
+ public override string Terminate()
+ { … }
} ```
-* **Init** invokes once for each group during computation. It provides an initialization routine for each aggregation group.
+* **Init** is invoked once for each group during computation. It provides an initialization routine for each aggregation group.
* **Accumulate** is executed once for each value. It provides the main functionality for the aggregation algorithm. It can be used to aggregate values with various data types that are defined during class inheritance. It can accept two parameters of variable data types. * **Terminate** is executed once per aggregation group at the end of processing to output the result for each group.
Here's an example of UDAGG:
```csharp public class GuidAggregate : IAggregate<string, string, string> {
- string guid_agg;
-
- public override void Init()
- {
- guid_agg = "";
- }
-
- public override void Accumulate(string guid, string user)
- {
- if (user.ToUpper()== "USER1")
- {
- guid_agg += "{" + guid + "}";
- }
- }
-
- public override string Terminate()
- {
- return guid_agg;
- }
+ string guid_agg;
+
+ public override void Init()
+ {
+ guid_agg = "";
+ }
+
+ public override void Accumulate(string guid, string user)
+ {
+ if (user.ToUpper()== "USER1")
+ {
+ guid_agg += "{" + guid + "}";
+ }
+ }
+
+ public override string Terminate()
+ {
+ return guid_agg;
+ }
} ```
DECLARE @input_file string = @"\usql-programmability\input_file.tsv";
DECLARE @output_file string = @" \usql-programmability\output_file.tsv"; @rs0 =
- EXTRACT
+ EXTRACT
guid string,
- dt DateTime,
+ dt DateTime,
user String, des String
- FROM @input_file
- USING Extractors.Tsv();
+ FROM @input_file
+ USING Extractors.Tsv();
@rs1 = SELECT
data-lake-analytics Data Lake Analytics U Sql Programmability Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide.md
U-SQL is a query language that's designed for big data type of workloads. One of
Download and install [Azure Data Lake Tools for Visual Studio](https://www.microsoft.com/download/details.aspx?id=49504).
-## Get started with U-SQL
+## Get started with U-SQL
Look at the following U-SQL script: ```usql
-@a =
- SELECT * FROM
+@a =
+ SELECT * FROM
(VALUES ("Contoso", 1500.0, "2017-03-39"), ("Woodgrove", 2700.0, "2017-04-10")
Look at the following U-SQL script:
customer, amount, date
- FROM @a;
+ FROM @a;
``` This script defines two RowSets: `@a` and `@results`. RowSet `@results` is defined from `@a`.
A U-SQL Expression is a C# expression combined with U-SQL logical operations suc
customer, amount, DateTime.Parse(date) AS date
- FROM @a;
+ FROM @a;
``` The following snippet parses a string as DateTime value in a DECLARE statement.
The following example demonstrates how you can do a datetime data conversion by
DECLARE @dt = "2016-07-06 10:23:15"; @rs1 =
- SELECT
+ SELECT
Convert.ToDateTime(Convert.ToDateTime(@dt).ToString("yyyy-MM-dd")) AS dt, dt AS olddt FROM @rs0;
-OUTPUT @rs1
- TO @output_file
+OUTPUT @rs1
+ TO @output_file
USING Outputters.Text(); ```
Here's an example of how to use this expression in a script:
``` ## Using .NET assemblies
-U-SQL's extensibility model relies heavily on the ability to add custom code from .NET assemblies.
+U-SQL's extensibility model relies heavily on the ability to add custom code from .NET assemblies.
### Register a .NET assembly
-Use the `CREATE ASSEMBLY` statement to place a .NET assembly into a U-SQL Database. Afterwards, U-SQL scripts can use those assemblies by using the `REFERENCE ASSEMBLY` statement.
+Use the `CREATE ASSEMBLY` statement to place a .NET assembly into a U-SQL Database. Afterwards, U-SQL scripts can use those assemblies by using the `REFERENCE ASSEMBLY` statement.
The following code shows how to register an assembly:
public static string GetFiscalPeriod(DateTime dt)
int FiscalMonth=0; if (dt.Month < 7) {
- FiscalMonth = dt.Month + 6;
+ FiscalMonth = dt.Month + 6;
} else {
- FiscalMonth = dt.Month - 6;
+ FiscalMonth = dt.Month - 6;
} int FiscalQuarter=0; if (FiscalMonth >=1 && FiscalMonth<=3) {
- FiscalQuarter = 1;
+ FiscalQuarter = 1;
} if (FiscalMonth >= 4 && FiscalMonth <= 6) {
- FiscalQuarter = 2;
+ FiscalQuarter = 2;
} if (FiscalMonth >= 7 && FiscalMonth <= 9) {
- FiscalQuarter = 3;
+ FiscalQuarter = 3;
} if (FiscalMonth >= 10 && FiscalMonth <= 12) {
- FiscalQuarter = 4;
+ FiscalQuarter = 4;
} return "Q" + FiscalQuarter.ToString() + ":P" + FiscalMonth.ToString();
DECLARE @input_file string = @"\usql-programmability\input_file.tsv";
DECLARE @output_file string = @"\usql-programmability\output_file.tsv"; @rs0 =
- EXTRACT
- guid Guid,
- dt DateTime,
- user String,
- des String
- FROM @input_file USING Extractors.Tsv();
+ EXTRACT
+ guid Guid,
+ dt DateTime,
+ user String,
+ des String
+ FROM @input_file USING Extractors.Tsv();
DECLARE @default_dt DateTime = Convert.ToDateTime("06/01/2016"); @rs1 = SELECT MAX(guid) AS start_id,
- MIN(dt) AS start_time,
+ MIN(dt) AS start_time,
MIN(Convert.ToDateTime(Convert.ToDateTime(dt<@default_dt?@default_dt:dt).ToString("yyyy-MM-dd"))) AS start_zero_time, MIN(USQL_Programmability.CustomFunctions.GetFiscalPeriod(dt)) AS start_fiscalperiod, user,
DECLARE @default_dt DateTime = Convert.ToDateTime("06/01/2016");
FROM @rs0 GROUP BY user, des;
-OUTPUT @rs1
- TO @output_file
+OUTPUT @rs1
+ TO @output_file
USING Outputters.Text(); ```
DECLARE @out3 string = @"\UserSession\Out3.csv";
@records = EXTRACT DataId string,
- EventDateTime string,
+ EventDateTime string,
UserName string, UserSessionTimestamp string
DECLARE @out3 string = @"\UserSession\Out3.csv";
USING Extractors.Tsv(); @rs1 =
- SELECT
+ SELECT
EventDateTime, UserName,
- LAG(EventDateTime, 1)
- OVER(PARTITION BY UserName ORDER BY EventDateTime ASC) AS prevDateTime,
- string.IsNullOrEmpty(LAG(EventDateTime, 1)
- OVER(PARTITION BY UserName ORDER BY EventDateTime ASC)) AS Flag,
+ LAG(EventDateTime, 1)
+ OVER(PARTITION BY UserName ORDER BY EventDateTime ASC) AS prevDateTime,
+ string.IsNullOrEmpty(LAG(EventDateTime, 1)
+ OVER(PARTITION BY UserName ORDER BY EventDateTime ASC)) AS Flag,
USQLApplication21.UserSession.StampUserSession (
- EventDateTime,
- LAG(EventDateTime, 1) OVER(PARTITION BY UserName ORDER BY EventDateTime ASC),
- LAG(UserSessionTimestamp, 1) OVER(PARTITION BY UserName ORDER BY EventDateTime ASC)
+ EventDateTime,
+ LAG(EventDateTime, 1) OVER(PARTITION BY UserName ORDER BY EventDateTime ASC),
+ LAG(UserSessionTimestamp, 1) OVER(PARTITION BY UserName ORDER BY EventDateTime ASC)
) AS UserSessionTimestamp FROM @records; @rs2 =
- SELECT
- EventDateTime,
+ SELECT
+ EventDateTime,
UserName,
- LAG(EventDateTime, 1)
- OVER(PARTITION BY UserName ORDER BY EventDateTime ASC) AS prevDateTime,
+ LAG(EventDateTime, 1)
+ OVER(PARTITION BY UserName ORDER BY EventDateTime ASC) AS prevDateTime,
string.IsNullOrEmpty( LAG(EventDateTime, 1) OVER(PARTITION BY UserName ORDER BY EventDateTime ASC)) AS Flag, USQLApplication21.UserSession.getStampUserSession(UserSessionTimestamp) AS UserSessionTimestamp FROM @rs1
data-lake-store Data Lake Store Get Started Cli 2.0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-cli-2.0.md
This article uses a simpler authentication approach with Data Lake Storage Gen1
1. Log into your Azure subscription.
- ```azurecli
- az login
- ```
+ ```azurecli
+ az login
+ ```
- You get a code to use in the next step. Use a web browser to open the page https://aka.ms/devicelogin and enter the code to authenticate. You are prompted to log in using your credentials.
+ You get a code to use in the next step. Use a web browser to open the page https://aka.ms/devicelogin and enter the code to authenticate. You are prompted to log in using your credentials.
2. Once you log in, the window lists all the Azure subscriptions that are associated with your account. Use the following command to use a specific subscription.
- ```azurecli
- az account set --subscription <subscription id>
- ```
+ ```azurecli
+ az account set --subscription <subscription id>
+ ```
## Create an Azure Data Lake Storage Gen1 account 1. Create a new resource group. In the following command, provide the parameter values you want to use. If the location name contains spaces, put it in quotes. For example "East US 2".
- ```azurecli
- az group create --location "East US 2" --name myresourcegroup
- ```
+ ```azurecli
+ az group create --location "East US 2" --name myresourcegroup
+ ```
2. Create the Data Lake Storage Gen1 account.
- ```azurecli
- az dls account create --account mydatalakestoragegen1 --resource-group myresourcegroup
- ```
+ ```azurecli
+ az dls account create --account mydatalakestoragegen1 --resource-group myresourcegroup
+ ```
## Create folders in a Data Lake Storage Gen1 account
az dls fs list --account mydatalakestoragegen1 --path /mynewfolder
The output of this should be similar to the following:
-```output
+```json
[
- {
- "accessTime": 1491323529542,
- "aclBit": false,
- "blockSize": 268435456,
- "group": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
- "length": 1589881,
- "modificationTime": 1491323531638,
- "msExpirationTime": 0,
- "name": "mynewfolder/vehicle1_09142014.csv",
- "owner": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
- "pathSuffix": "vehicle1_09142014.csv",
- "permission": "770",
- "replication": 1,
- "type": "FILE"
- }
+ {
+ "accessTime": 1491323529542,
+ "aclBit": false,
+ "blockSize": 268435456,
+ "group": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
+ "length": 1589881,
+ "modificationTime": 1491323531638,
+ "msExpirationTime": 0,
+ "name": "mynewfolder/vehicle1_09142014.csv",
+ "owner": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
+ "pathSuffix": "vehicle1_09142014.csv",
+ "permission": "770",
+ "replication": 1,
+ "type": "FILE"
+ }
] ```
The output of this should be similar to the following:
* **To rename a file**, use the following command:
- ```azurecli
- az dls fs move --account mydatalakestoragegen1 --source-path /mynewfolder/vehicle1_09142014.csv --destination-path /mynewfolder/vehicle1_09142014_copy.csv
- ```
+ ```azurecli
+ az dls fs move --account mydatalakestoragegen1 --source-path /mynewfolder/vehicle1_09142014.csv --destination-path /mynewfolder/vehicle1_09142014_copy.csv
+ ```
* **To download a file**, use the following command. Make sure the destination path you specify already exists.
- ```azurecli
- az dls fs download --account mydatalakestoragegen1 --source-path /mynewfolder/vehicle1_09142014_copy.csv --destination-path "C:\mysampledata\vehicle1_09142014_copy.csv"
- ```
+ ```azurecli
+ az dls fs download --account mydatalakestoragegen1 --source-path /mynewfolder/vehicle1_09142014_copy.csv --destination-path "C:\mysampledata\vehicle1_09142014_copy.csv"
+ ```
- > [!NOTE]
- > The command creates the destination folder if it does not exist.
- >
- >
+ > [!NOTE]
+ > The command creates the destination folder if it does not exist.
+ >
+ >
* **To delete a file**, use the following command:
- ```azurecli
- az dls fs delete --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014_copy.csv
- ```
+ ```azurecli
+ az dls fs delete --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014_copy.csv
+ ```
- If you want to delete the folder **mynewfolder** and the file **vehicle1_09142014_copy.csv** together in one command, use the --recurse parameter
+ If you want to delete the folder **mynewfolder** and the file **vehicle1_09142014_copy.csv** together in one command, use the --recurse parameter
- ```azurecli
- az dls fs delete --account mydatalakestoragegen1 --path /mynewfolder --recurse
- ```
+ ```azurecli
+ az dls fs delete --account mydatalakestoragegen1 --path /mynewfolder --recurse
+ ```
## Work with permissions and ACLs for a Data Lake Storage Gen1 account
In this section you learn about how to manage ACLs and permissions using the Azu
* **To update the owner of a file/folder**, use the following command:
- ```azurecli
- az dls fs access set-owner --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv --group 80a3ed5f-959e-4696-ba3c-d3c8b2db6766 --owner 6361e05d-c381-4275-a932-5535806bb323
- ```
+ ```azurecli
+ az dls fs access set-owner --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv --group 80a3ed5f-959e-4696-ba3c-d3c8b2db6766 --owner 6361e05d-c381-4275-a932-5535806bb323
+ ```
* **To update the permissions for a file/folder**, use the following command:
- ```azurecli
- az dls fs access set-permission --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv --permission 777
- ```
-
+ ```azurecli
+ az dls fs access set-permission --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv --permission 777
+ ```
+
* **To get the ACLs for a given path**, use the following command:
- ```azurecli
- az dls fs access show --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv
- ```
+ ```azurecli
+ az dls fs access show --account mydatalakestoragegen1 --path /mynewfolder/vehicle1_09142014.csv
+ ```
- The output should be similar to the following:
+ The output should be similar to the following:
```output
- {
- "entries": [
- "user::rwx",
- "group::rwx",
- "other::"
- ],
- "group": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
- "owner": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
- "permission": "770",
- "stickyBit": false
- }
+ {
+ "entries": [
+ "user::rwx",
+ "group::rwx",
+ "other::"
+ ],
+ "group": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
+ "owner": "1808bd5f-62af-45f4-89d8-03c5e81bac20",
+ "permission": "770",
+ "stickyBit": false
+ }
``` * **To set an entry for an ACL**, use the following command:
- ```azurecli
- az dls fs access set-entry --account mydatalakestoragegen1 --path /mynewfolder --acl-spec user:6360e05d-c381-4275-a932-5535806bb323:-w-
- ```
+ ```azurecli
+ az dls fs access set-entry --account mydatalakestoragegen1 --path /mynewfolder --acl-spec user:6360e05d-c381-4275-a932-5535806bb323:-w-
+ ```
* **To remove an entry for an ACL**, use the following command:
- ```azurecli
- az dls fs access remove-entry --account mydatalakestoragegen1 --path /mynewfolder --acl-spec user:6360e05d-c381-4275-a932-5535806bb323
- ```
+ ```azurecli
+ az dls fs access remove-entry --account mydatalakestoragegen1 --path /mynewfolder --acl-spec user:6360e05d-c381-4275-a932-5535806bb323
+ ```
* **To remove an entire default ACL**, use the following command:
- ```azurecli
- az dls fs access remove-all --account mydatalakestoragegen1 --path /mynewfolder --default-acl
- ```
+ ```azurecli
+ az dls fs access remove-all --account mydatalakestoragegen1 --path /mynewfolder --default-acl
+ ```
* **To remove an entire non-default ACL**, use the following command:
- ```azurecli
- az dls fs access remove-all --account mydatalakestoragegen1 --path /mynewfolder
- ```
+ ```azurecli
+ az dls fs access remove-all --account mydatalakestoragegen1 --path /mynewfolder
+ ```
## Delete a Data Lake Storage Gen1 account Use the following command to delete a Data Lake Storage Gen1 account.
data-lake-store Data Lake Store Get Started Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-python.md
pip install azure-datalake-store
1. In the IDE of your choice create a new Python application, for example, **mysample.py**.
-2. Add the following snippet to import the required modules
+2. Add the following snippet to import the required modules:
- ```python
- # Acquire a credential object for the app identity. When running in the cloud,
- # DefaultAzureCredential uses the app's managed identity (MSI) or user-assigned service principal.
- # When run locally, DefaultAzureCredential relies on environment variables named
- # AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID.
- from azure.identity import DefaultAzureCredential
+ ```python
+ # Acquire a credential object for the app identity. When running in the cloud,
+ # DefaultAzureCredential uses the app's managed identity (MSI) or user-assigned service principal.
+ # When run locally, DefaultAzureCredential relies on environment variables named
+ # AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID.
+ from azure.identity import DefaultAzureCredential
- ## Required for Data Lake Storage Gen1 account management
- from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient
- from azure.mgmt.datalake.store.models import CreateDataLakeStoreAccountParameters
+ ## Required for Data Lake Storage Gen1 account management
+ from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient
+ from azure.mgmt.datalake.store.models import CreateDataLakeStoreAccountParameters
- ## Required for Data Lake Storage Gen1 filesystem management
- from azure.datalake.store import core, lib, multithread
+ ## Required for Data Lake Storage Gen1 filesystem management
+ from azure.datalake.store import core, lib, multithread
- # Common Azure imports
- import adal
- from azure.mgmt.resource.resources import ResourceManagementClient
- from azure.mgmt.resource.resources.models import ResourceGroup
+ # Common Azure imports
+ import adal
+ from azure.mgmt.resource.resources import ResourceManagementClient
+ from azure.mgmt.resource.resources.models import ResourceGroup
- # Use these as needed for your application
- import logging, getpass, pprint, uuid, time
- ```
+ # Use these as needed for your application
+ import logging, getpass, pprint, uuid, time
+ ```
3. Save changes to mysample.py.
adlsAcctClient = DataLakeStoreAccountManagementClient(credential, subscription_i
## Create a Data Lake Storage Gen1 account adlsAcctResult = adlsAcctClient.accounts.begin_create(
- resourceGroup,
- adlsAccountName,
- CreateDataLakeStoreAccountParameters(
- location=location
- )
+ resourceGroup,
+ adlsAccountName,
+ CreateDataLakeStoreAccountParameters(
+ location=location
+ )
) ```
-
+
## List the Data Lake Storage Gen1 accounts ```python
for items in result_list:
## Delete an existing Data Lake Storage Gen1 account adlsAcctClient.accounts.begin_delete(resourceGroup, adlsAccountName) ```
-
+
## Next steps * [Filesystem operations on Data Lake Storage Gen1 using Python](data-lake-store-data-operations-python.md).
data-lake-store Data Lake Store Service To Service Authenticate Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-service-to-service-authenticate-python.md
pip install azure-datalake-store
1. In the IDE of your choice create a new Python application, for example, **mysample.py**.
-2. Add the following snippet to import the required modules
+2. Add the following snippet to import the required modules:
- ```
- ## Use this for Azure AD authentication
- from msrestazure.azure_active_directory import AADTokenCredentials
+ ```
+ ## Use this for Azure AD authentication
+ from msrestazure.azure_active_directory import AADTokenCredentials
## Required for Data Lake Storage Gen1 account management
- from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient
- from azure.mgmt.datalake.store.models import DataLakeStoreAccount
+ from azure.mgmt.datalake.store import DataLakeStoreAccountManagementClient
+ from azure.mgmt.datalake.store.models import DataLakeStoreAccount
- ## Required for Data Lake Storage Gen1 filesystem management
- from azure.datalake.store import core, lib, multithread
+ ## Required for Data Lake Storage Gen1 filesystem management
+ from azure.datalake.store import core, lib, multithread
- # Common Azure imports
+ # Common Azure imports
import adal
- from azure.mgmt.resource.resources import ResourceManagementClient
- from azure.mgmt.resource.resources.models import ResourceGroup
+ from azure.mgmt.resource.resources import ResourceManagementClient
+ from azure.mgmt.resource.resources.models import ResourceGroup
- ## Use these as needed for your application
- import logging, getpass, pprint, uuid, time
- ```
+ ## Use these as needed for your application
+ import logging, getpass, pprint, uuid, time
+ ```
3. Save changes to mysample.py.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md
Follow these steps to verify the driver installation:
See "man sudo_root" for details. Administrator@VM1:~$
- ```
+ ```
+ 2. Run the nvidia-smi command-line utility installed with the driver. If the driver is successfully installed, you'll be able to run the utility and see the following output: ```powershell
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-templates.md
The file `CreateImage.parameters.json` takes the following parameters:
```json "parameters": {
- "osType": {
- "value": "<Operating system corresponding to the VHD you upload can be Windows or Linux>"
- },
- "imageName": {
- "value": "<Name for the VM image>"
- },
- "imageUri": {
- "value": "<Path to the VHD that you uploaded in the Storage account>"
- },
- "hyperVGeneration": {
- "type": "string",
- "value": "<Generation of the VM, V1 or V2>
- },
- }
+ "osType": {
+ "value": "<Operating system corresponding to the VHD you upload can be Windows or Linux>"
+ },
+ "imageName": {
+ "value": "<Name for the VM image>"
+ },
+ "imageUri": {
+ "value": "<Path to the VHD that you uploaded in the Storage account>"
+ },
+ "hyperVGeneration": {
+ "type": "string",
+ "value": "<Generation of the VM, V1 or V2>"
+ },
+}
``` Edit the file `CreateImage.parameters.json` to include the following values for your Azure Stack Edge Pro device:
Deploy the VM creation template `CreateVM.json`. This template creates a network
You can also run the `New-AzureRmResourceGroupDeployment` command asynchronously with the `-AsJob` parameter. Here's a sample output when the cmdlet runs in the background. You can then query the status of the job that is created using the `Get-Job` cmdlet.
- ```powershell
+ ```powershell
PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment `
- >> -ResourceGroupName $RGName `
- >> -TemplateFile $templateFile `
- >> -TemplateParameterFile $templateParameterFile `
- >> -Name "Deployment2" `
- >> -AsJob
-
- Id Name PSJobTypeName State HasMoreData Location Command
- -- - - -- -- -- -
- 2 Long Running... AzureLongRun... Running True localhost New-AzureRmResourceGro...
-
- PS C:\WINDOWS\system32> Get-Job -Id 2
-
- Id Name PSJobTypeName State HasMoreData Location Command
- -- - - -- -- -- -
- ```
+ >> -ResourceGroupName $RGName `
+ >> -TemplateFile $templateFile `
+ >> -TemplateParameterFile $templateParameterFile `
+ >> -Name "Deployment2" `
+ >> -AsJob
+
+ Id Name PSJobTypeName State HasMoreData Location Command
+ -- - - -- -- -- -
+ 2 Long Running... AzureLongRun... Running True localhost New-AzureRmResourceGro...
+
+ PS C:\WINDOWS\system32> Get-Job -Id 2
+
+ Id Name PSJobTypeName State HasMoreData Location Command
+ -- - - -- -- -- -
+ ```
1. Check if the VM is successfully provisioned. Run the following command:
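   A plausible check using the AzureRM cmdlets shown above is sketched here; `$RGName` is the variable from the earlier snippet and the VM name is a placeholder, not a value from this article:

   ```powershell
   # Sketch only: retrieve the VM and inspect its ProvisioningState property.
   # Replace "myasevm1" with the VM name defined in your template parameters.
   Get-AzureRmVM -ResourceGroupName $RGName -Name "myasevm1"
   ```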
databox-online Azure Stack Edge Gpu Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-sharing.md
Graphics processing unit (GPU) is a specialized processor designed to accelerate
## About GPU sharing
-Many machine learning or other compute workloads may not need a dedicated GPU. GPUs can be shared and sharing GPUs among containerized or VM workloads helps increase the GPU utilization without significantly affecting the performance benefits of GPU.
+Many machine learning or other compute workloads may not need a dedicated GPU. GPUs can be shared, and sharing them among containerized or VM workloads helps increase GPU utilization without significantly affecting the performance benefits of the GPU.
## Using GPU with VMs
On your Azure Stack Edge Pro device, a GPU can't be shared when deploying VM wor
## Using GPU with containers
-If you are deploying containerized workloads, a GPU can be shared in more than one ways at the hardware and software layer. With the Tesla T4 GPU on your Azure Stack Edge Pro device, we are limited to software sharing. On your device, the following two approaches for software sharing of GPU are used:
+If you are deploying containerized workloads, a GPU can be shared in more than one way at the hardware and software layers. With the Tesla T4 GPU on your Azure Stack Edge Pro device, we are limited to software sharing. On your device, the following two approaches for software sharing of GPU are used:
- The first approach involves using environment variables to specify the number of GPUs that can be time shared. Consider the following caveats when using this approach: - You can specify one or both or no GPUs with this method. It is not possible to specify fractional usage. - Multiple modules can map to one GPU but the same module cannot be mapped to more than one GPU. - With the Nvidia SMI output, you can see the overall GPU utilization including the memory utilization.
-
+ For more information, see how to [Deploy an IoT Edge module that uses GPU](azure-stack-edge-gpu-configure-gpu-modules.md) on your device. - The second approach requires you to enable the Multi-Process Service on your Nvidia GPUs. MPS is a runtime service that lets multiple processes that use CUDA run concurrently on a single shared GPU. MPS allows overlapping of kernel and memcopy operations from different processes on the GPU to achieve maximum utilization. For more information, see [Multi-Process Service](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf). Consider the following caveats when using this approach:
-
+ - MPS allows you to specify more flags in GPU deployment.
- - You can specify fractional usage via MPS thereby limiting the usage of each application deployed on the device. You can specify the GPU percentage to use for each app under the `env` section of the `deployment.yaml` by adding the following parameter:
+ - You can specify fractional usage via MPS thereby limiting the usage of each application deployed on the device. You can specify the GPU percentage to use for each app under the `env` section of the `deployment.yaml` by adding the following parameter:
```yml // Example: application wants to limit gpu percentage to 20%
-
- env:
- - name: CUDA_MPS_ACTIVE_THREAD_PERCENTAGE
- value: "20"
+
+ env:
+ - name: CUDA_MPS_ACTIVE_THREAD_PERCENTAGE
+ value: "20"
``` ## GPU utilization
-
+ When you share a GPU across containerized workloads deployed on your device, you can use the Nvidia System Management Interface (nvidia-smi). Nvidia-smi is a command-line utility that helps you manage and monitor Nvidia GPU devices. For more information, see [Nvidia System Management Interface](https://developer.nvidia.com/nvidia-system-management-interface). To view GPU usage, first connect to the PowerShell interface of the device. Run the `Get-HcsNvidiaSmi` command and view the Nvidia SMI output. You can also view how the GPU utilization changes by enabling MPS and then deploying multiple workloads on the device. For more information, see [Enable Multi-Process Service](azure-stack-edge-gpu-connect-powershell-interface.md#enable-multi-process-service-mps).
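As a small illustration of that workflow, the sketch below simply runs the `Get-HcsNvidiaSmi` cmdlet named above from a remote PowerShell session to the device (opening that session is covered in the linked article); rerun it after enabling MPS and deploying workloads to compare utilization:

```powershell
# Sketch only: run inside the device's PowerShell interface.
# Dumps the Nvidia SMI output so you can inspect GPU and memory utilization.
Get-HcsNvidiaSmi
```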
defender-for-cloud Agentless Container Registry Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-container-registry-vulnerability-assessment.md
+
+ Title: Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management
+description: Learn about vulnerability assessments for Azure with Microsoft Defender Vulnerability Management.
++ Last updated : 07/11/2023+++
+# Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management
+
+Vulnerability assessment for Azure, powered by Microsoft Defender Vulnerability Management (MDVM), is an out-of-box solution that empowers security teams to easily discover and remediate vulnerabilities in Linux container images, with zero configuration for onboarding, and without deployment of any agents.
+
+> [!NOTE]
+> This feature supports scanning of images in the Azure Container Registry (ACR) only. Images that are stored in other container registries should be imported into ACR for coverage. Learn how to [import container images to a container registry](/azure/container-registry/container-registry-import-images).
+
+In every subscription where this capability is enabled, all images stored in ACR (existing and new) are automatically scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ACR as well as images that are currently running in AKS that were pulled from an ACR registry. Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every 24 hours.
+
+Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerability Management) has the following capabilities:
+
+- **Scanning OS packages** - container vulnerability assessment can scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-imagespowered-by-mdvm).
+- **Language specific packages** - support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [complete list of supported languages](support-matrix-defender-for-containers.md#registries-and-imagespowered-by-mdvm).
+- **Image scanning in Azure Private Link** - Azure container vulnerability assessment can scan images in container registries that are accessible via Azure Private Link. This capability requires access to trusted services and authentication with the registry. Learn how to [allow access by trusted services](/azure/container-registry/allow-access-trusted-services).
+- **Exploitability information** - Each vulnerability report is checked against exploitability databases to help customers determine the actual risk associated with each reported vulnerability.
+- **Reporting** - Container Vulnerability Assessment for Azure powered by Microsoft Defender Vulnerability Management (MDVM) provides vulnerability reports using the following recommendations:
+
+ | Recommendation | Description | Assessment Key
+ |--|--|--|
+ | [Container registry images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)-Preview](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/PhoenixContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
+ | [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |
+
+- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg). A minimal query sketch follows this list.
+- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get?tabs=HTTP).
+- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md).
+- **Support for disabling vulnerabilities** - Learn how to [disable vulnerabilities on images](disable-vulnerability-findings-containers.md).
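As an illustration of the Azure Resource Graph option in the list above, the following sketch uses the `Search-AzGraph` cmdlet from the Az.ResourceGraph module to pull container vulnerability sub-assessments. The query shape is an assumption for illustration; only the assessment key comes from the recommendations table above.

```powershell
# Sketch only: query MDVM container vulnerability findings through Azure Resource Graph.
# Requires the Az.ResourceGraph module; the filter below is illustrative, not a documented schema.
$query = @"
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| where id contains 'c0b7cfc6-3172-465a-b378-53c7ff2cc0d5'
| project id, name, properties
"@
Search-AzGraph -Query $query -First 50
```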
+
+## Scan triggers
+
+The triggers for an image scan are:
+
+- **One-time triggering** - each image pushed or imported to a container registry is scanned shortly after being pushed or imported. In most cases, the scan is completed within a few minutes, but sometimes it may take up to an hour.
+
+ > [!NOTE]
+ > While Container vulnerability assessment powered by MDVM is generally available for Defender CSPM, scan-on-push is currently in public preview.
+
+- **Continuous rescan triggering** - Continuous rescanning ensures that images previously scanned for vulnerabilities are rescanned, so that their vulnerability reports are updated when new vulnerabilities are published.
+ - **Re-scan** is performed once a day for:
+ - images pushed in the last 90 days.
+ - images currently running on the Kubernetes clusters monitored by Defender for Cloud (either via [agentless discovery and visibility for Kubernetes](how-to-enable-agentless-containers.md) or the [Defender for Containers agent](tutorial-enable-containers-azure.md#deploy-the-defender-profile-in-azure)).
+
+## How does image scanning work?
+
+A detailed description of the scan process follows:
+
+- When you enable the [container vulnerability assessment for Azure powered by MDVM](enable-vulnerability-assessment.md), you authorize Defender for Cloud to scan container images in your Azure Container registries.
+- Defender for Cloud automatically discovers all container registries, repositories, and images (created before or after enabling this capability).
+- Defender for Cloud receives notifications whenever a new image is pushed to an Azure Container Registry. The new image is then added to the catalog of images that Defender for Cloud maintains, and an action is queued to scan the image immediately.
+- Once a day, or when an image is pushed to a registry:
+
+  - All newly discovered images are pulled, and an inventory is created for each image. Image inventory is kept to avoid further image pulls, unless required by new scanner capabilities.
+  - Using the inventory, vulnerability reports are generated for new images, and updated for previously scanned images that were either pushed to a registry in the last 90 days or are currently running. To determine if an image is currently running, Defender for Cloud uses both [agentless discovery and visibility within Kubernetes components](/azure/defender-for-cloud/concept-agentless-containers) and [inventory collected via the Defender agents running on AKS nodes](defender-for-containers-enable.md#deploy-the-defender-profile).
+ - Vulnerability reports for container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/PhoenixContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
+- For customers using either [agentless discovery and visibility within Kubernetes components](concept-agentless-containers.md) or [inventory collected via the Defender agents running on AKS nodes](defender-for-containers-enable.md#deploy-the-defender-profile), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) for remediating vulnerabilities for vulnerable images running on an AKS cluster.
+
+> [!NOTE]
+> For Defender for Container Registries (deprecated), images are scanned once on push, and rescanned only once a week.
+
+## If I remove an image from my registry, how long before vulnerability reports on that image are removed?
+
+Azure Container Registry notifies Defender for Cloud when images are deleted, and Defender for Cloud removes the vulnerability assessment for deleted images within one hour. In some rare cases, Defender for Cloud may not be notified of the deletion, and deletion of the associated vulnerabilities in such cases may take up to three days.
+
+## Next steps
+
+- Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
+- Check out [common questions](faq-defender-for-containers.yml) about Defender for Containers.
defender-for-cloud Concept Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md
Title: Agentless Container Posture
-description: Learn how Agentless Container Posture offers discovery, visibility, and vulnerability assessment for Containers without installing an agent on your machines.
+ Title: Agentless container posture for Microsoft Defender for Cloud
+description: Learn how agentless container posture offers discovery, visibility, and vulnerability assessment for Containers without installing an agent on your machines.
Last updated 07/03/2023
-# Agentless Container Posture (Preview)
+# Agentless container posture
-Agentless Container Posture provides a holistic approach to improving your container posture within Defender CSPM (Cloud Security Posture Management). You can visualize and hunt for risks and threats to Kubernetes environments with attack path analysis and the cloud security explorer, and leverage agentless discovery and visibility within Kubernetes components.
+Agentless container posture provides a holistic approach to improving your container posture within Defender CSPM (Cloud Security Posture Management). You can visualize and hunt for risks and threats to Kubernetes environments with attack path analysis and the cloud security explorer, and leverage agentless discovery and visibility within Kubernetes components.
Learn more about [CSPM](concept-cloud-security-posture-management.md). ## Capabilities
-Agentless Container Posture provides the following capabilities:
+For support and prerequisites for agentless containers posture, see [Support and prerequisites for agentless containers posture](support-agentless-containers-posture.md).
+
+Agentless container posture provides the following capabilities:
-- Using Kubernetes [attack path analysis](concept-attack-path.md) to visualize risks and threats to Kubernetes environments.
-- Using [cloud security explorer](how-to-manage-cloud-security-explorer.md) for risk hunting by querying various risk scenarios.
-- Viewing security insights, such as internet exposure, and other predefined security scenarios. For more information, search for `Kubernetes` in the [list of Insights](attack-path-reference.md#insights).
- [Agentless discovery and visibility](#agentless-discovery-and-visibility-within-kubernetes-components) within Kubernetes components.
-- [Agentless container registry vulnerability assessment](#agentless-container-registry-vulnerability-assessment), using the image scanning results of your Azure Container Registry (ACR) with cloud security explorer.
+- [Container registry vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) provides vulnerability assessment for all container images, with near real-time scan of new images and daily refresh of results for maximum visibility to current and emerging vulnerabilities, enriched with exploitability insights, and added to Defender CSPM security graph for contextual risk assessment and calculation of attack paths.
+- Using Kubernetes [attack path analysis](concept-attack-path.md) to visualize risks and threats to Kubernetes environments.
+- Using [cloud security explorer](how-to-manage-cloud-security-explorer.md) for risk hunting by querying various risk scenarios, including viewing security insights, such as internet exposure, and other predefined security scenarios. For more information, search for `Kubernetes` in the [list of Insights](attack-path-reference.md#insights).
-All of these capabilities are available as part of the [Defender Cloud Security Posture Management](concept-cloud-security-posture-management.md) plan.
+All of these capabilities are available as part of the [Defender Cloud Security Posture Management](concept-cloud-security-posture-management.md) plan.
## Agentless discovery and visibility within Kubernetes components Agentless discovery for Kubernetes provides API-based discovery of information about Kubernetes cluster architecture, workload objects, and setup.
-### How does Agentless Discovery for Kubernetes work?
+### How does agentless discovery for Kubernetes work?
The discovery process is based on snapshots taken at intervals: :::image type="content" source="media/concept-agentless-containers/diagram-permissions-architecture.png" alt-text="Diagram of the permissions architecture." lightbox="media/concept-agentless-containers/diagram-permissions-architecture.png":::
-When you enable the Agentless discovery for Kubernetes extension, the following process occurs:
+When you enable the agentless discovery for Kubernetes extension, the following process occurs:
- **Create**: MDC (Microsoft Defender for Cloud) creates an identity in customer environments called CloudPosture/securityOperator/DefenderCSPMSecurityOperator. - **Assign**: MDC assigns 1 built-in role called **Kubernetes Agentless Operator** to that identity on subscription scope. The role contains the following permissions:
- - AKS read (Microsoft.ContainerService/managedClusters/read)
- - AKS Trusted Access with the following permissions:
+ - AKS read (Microsoft.ContainerService/managedClusters/read)
+ - AKS Trusted Access with the following permissions:
- Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/write - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/read - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/delete
- Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).
+ Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).
-- **Discover**: Using the system assigned identity, MDC performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.
+- **Discover**: Using the system assigned identity, Defender for Cloud performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.
- **Bind**: Upon discovery of an AKS cluster, MDC performs an AKS bind operation between the created identity and the Kubernetes role "Microsoft.Security/pricings/microsoft-defender-operator". The role is visible via API and gives MDC data plane read permission inside the cluster.
When you enable the Agentless discovery for Kubernetes extension, the following
Agentless information in Defender CSPM is updated through a snapshot mechanism. It can take up to **24 hours** to see results in Cloud Security Explorer and Attack Path.
-## Agentless Container registry vulnerability assessment
-
-> [!NOTE]
-> This feature supports scanning of images in the Azure Container Registry (ACR) only. If you want to find vulnerabilities stored in other container registries, you can import the images into ACR, after which the imported images are scanned by the built-in vulnerability assessment solution. Learn how to [import container images to a container registry](/azure/container-registry/container-registry-import-images).
-
-- Container registry vulnerability assessment scans images in your Azure Container Registry (ACR) to provide recommendations for improving your posture by remediating vulnerabilities.
-
-- Vulnerability assessment for Containers in Defender Cloud Security Posture Management (CSPM) gives you frictionless, wide, and instant visibility on actionable posture issues without the need for installed agents, network connectivity requirements, or container performance impact.
-
-Container registries vulnerability assessment, powered by Microsoft Defender Vulnerability Management (MDVM), is an out-of-box solution that empowers security teams to discover vulnerabilities in your Azure Container images by providing frictionless native coverage in Azure for vulnerability scanning of container images.
-
-Azure Container Vulnerability Assessment provides automatic coverage for all registries and images in Azure, for each subscription where the CSPM plan is enabled, without any extra configuration of users or registries. New images are automatically scanned once a day and vulnerability reports for previously scanned images are refreshed daily.
-
-Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerability Management) has the following capabilities:
-
-- **Scanning OS packages** - container vulnerability assessment has the ability to scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-agentless-containers-posture.md#registries-and-images).
-- **Language specific packages** - support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [complete list of supported languages](support-agentless-containers-posture.md#registries-and-images).
-- **Image scanning with Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. Learn how to [allow access by trusted services](/azure/container-registry/container-registry-private-link#container-registry/allow-access-trusted-services).
-- **Exploitability information** - Each vulnerability report is searched through exploitability databases to assist our customers with determining actual risk associated with each reported vulnerability.
-- **Reporting** - Defender for Containers powered by Microsoft Defender Vulnerability Management (MDVM) reports the vulnerabilities as the following recommendations:
-
- | Recommendation | Description | Assessment Key
- |--|--|--|
- | Container registry images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)-Preview | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
 | Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |
-
-- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via the ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg).
-- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get?tabs=HTTP).
-- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](how-to-enable-agentless-containers.md#support-for-exemptions).
-- **Support for disabling vulnerability findings** - Learn how to [disable vulnerability assessment findings on Container registry images](disable-vulnerability-findings-containers.md).
-
-### Scan Triggers
-
-The triggers for an image scan are:
-
-- **One-time triggering** - each image added to a container registry is scanned within 24 hours.
-- **Continuous rescan triggering** - Continuous rescan is required to ensure images that have been previously scanned for vulnerabilities are rescanned to update their vulnerability reports in case a new vulnerability is published.
- - **Re-scan** is performed once a day for:
- - images pushed in the last 90 days.
- - images currently running on the Kubernetes clusters monitored by Defender for Cloud (either via agentless discovery and visibility for Kubernetes or Defender for Containers agent).
-
-### How does image scanning work?
-
-Container registry vulnerability assessment scans container images stored in your Azure Container Registry (ACR) as part of the protections provided within Microsoft Defender CSPM. A detailed description of the process is as follows:
-
-1. When you enable the vulnerability assessment extension in Defender CSPM, you authorize Defender CSPM to scan container images in your Azure Container registries.
-1. Defender CSPM automatically discovers all containers registries, repositories and images (created before or after enabling the plan).
-1. Once a day:
-
- 1. All newly discovered images are pulled, and an inventory is created for each image. Image inventory is kept to avoid further image pulls, unless required by new scanner capabilities.​
-
- 1. Using the inventory, vulnerability reports are generated for new images, and updated for images previously scanned which were either pushed in the last 90 days to a registry, or are currently running.
-
-> [!NOTE]
-> To determine if an image is currently running, Agentless Vulnerability Assessment uses [Agentless Discovery and Visibility within Kubernetes components](/azure/defender-for-cloud/concept-agentless-containers).
-### If I remove an image from my registry, how long before vulnerabilities reports on that image would be removed?
-
-It currently takes 3 days to remove findings for a deleted image. We are working on providing quicker deletion for removed images.
## Next steps

- Learn about [support and prerequisites for agentless containers posture](support-agentless-containers-posture.md)
- Learn how to [enable agentless containers](how-to-enable-agentless-containers.md)
-
-
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
# Introduction to Microsoft Defender for container registries (deprecated)
+> [!IMPORTANT]
+> We have started a public preview of Azure Vulnerability Assessment powered by MDVM. For more information, see [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-container-registry-vulnerability-assessment.md).
+ Azure Container Registry (ACR) is a managed, private Docker registry service that stores and manages your container images for Azure deployments in a central registry. It's based on the open-source Docker Registry 2.0. To protect the Azure Resource Manager based registries in your subscription, enable **Microsoft Defender for container registries** at the subscription level. Defender for Cloud will then scan all images when they're pushed to the registry, imported into the registry, or pulled within the last 30 days. You'll be charged for every image that gets scanned - once per image.
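If you need to check whether this legacy plan is still turned on for a subscription (or enable it while you plan a migration), the Azure CLI can help. This is a minimal sketch; the pricing name `ContainerRegistry` is my assumption for the legacy plan's identifier, so verify it in your environment:

```azurecli
# Show the current tier of the legacy container registries plan
# (the pricing name "ContainerRegistry" is an assumption - verify with `az security pricing list`)
az security pricing show --name ContainerRegistry

# Enable it at subscription scope, if you still rely on the legacy plan
az security pricing create --name ContainerRegistry --tier Standard
```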
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
A full list of supported alerts is available in the [reference table of all Defe
[!INCLUDE [Remove the profile](./includes/defender-for-containers-remove-profile.md)]

::: zone-end
-## Learn More
+## Learn more
You can check out the following blogs:
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Previously updated : 06/14/2023 Last updated : 07/25/2023 # Overview of Microsoft Defender for Containers
You can learn more about [Kubernetes data plane hardening](kubernetes-workload-p
## Vulnerability assessment
-Defender for Containers scans the container images in Azure Container Registry (ACR) and Amazon AWS Elastic Container Registry (ECR) to notify you if there are known vulnerabilities in your images. When the scan completes, Defender for Containers provides details for each vulnerability detected, a security classification for each vulnerability detected, and guidance on how to remediate issues and protect vulnerable attack surfaces.
+Defender for Containers scans the container images in Azure Container Registry (ACR) and Amazon AWS Elastic Container Registry (ECR) to provide vulnerability reports for your container images, with details for each vulnerability detected, remediation guidance, real-world exploit insights, and more.
+
+There are two solutions for vulnerability assessment in Azure, one powered by Microsoft Defender Vulnerability Management and one powered by Qualys.
Learn more about:

-- [Vulnerability assessment for Azure Container Registry (ACR)](defender-for-containers-vulnerability-assessment-azure.md)
+- [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-container-registry-vulnerability-assessment.md)
+- [Vulnerability assessment for Azure powered by Qualys](defender-for-containers-vulnerability-assessment-azure.md)
- [Vulnerability assessment for Amazon AWS Elastic Container Registry (ECR)](defender-for-containers-vulnerability-assessment-elastic.md)

## Run-time protection for Kubernetes nodes and clusters
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Title: Identify vulnerabilities in Azure Container Registry
+ Title: Vulnerability assessment for Azure powered by Qualys
description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities. Previously updated : 06/21/2023 Last updated : 07/30/2023
-# Scan your Azure Container Registry images for vulnerabilities
+# Vulnerability assessment for Azure powered by Qualys
-As part of the protections provided within Microsoft Defender for Cloud, you can scan the container images that are stored in your Azure Resource Manager-based Azure Container Registry.
+Vulnerability assessment for Azure, powered by Qualys, is an out-of-box solution that empowers security teams to easily discover and remediate vulnerabilities in Linux container images, with zero configuration for onboarding, and without deployment of any agents.
-When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
-
-Defender for Cloud filters and classifies findings from the scanner. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
-
-The triggers for an image scan are:
-
-- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.
-
-- **On import** - Azure Container Registry has import tools to bring images to your registry from an existing registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
+> [!NOTE]
+> This feature supports scanning of images in the Azure Container Registry (ACR) only. If you want to find vulnerabilities stored in other container registries, you can import the images into ACR, after which the imported images are scanned by the built-in vulnerability assessment solution. Learn how to [import container images to a container registry](/azure/container-registry/container-registry-import-images).
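As an example of the import path described in the note above, here's a minimal Azure CLI sketch for bringing a public image into ACR so the built-in scanner can pick it up; the registry name is a placeholder:

```azurecli
# Import a public image into your Azure Container Registry (the registry name is a placeholder)
az acr import \
  --name <your-registry-name> \
  --source mcr.microsoft.com/hello-world:latest \
  --image hello-world:latest
```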
-- **Continuous scan**- This trigger has two modes:
+In every subscription where this capability is enabled, all images stored in ACR (existing and new) are automatically scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ACR as well as images that are currently running in AKS that were pulled from an ACR registry. Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every week.
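To get a sense of what falls in scope for scanning, you can enumerate the registries and repositories in the subscription. A minimal Azure CLI sketch; the registry name is a placeholder:

```azurecli
# Registries in the current subscription
az acr list --output table

# Repositories (image names) in one registry (the registry name is a placeholder)
az acr repository list --name <your-registry-name> --output table
```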
- - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
+Container vulnerability assessment powered by Qualys has the following capabilities:
- - Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
+- **Scanning OS packages** - container vulnerability assessment can scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-images-support-for-akspowered-by-qualys).
-Once a scan is triggered, scan results will typically appear in the Defender for Cloud recommendations after a few minutes, but in some cases it may take up to an hour.
+- **Language specific packages** - support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [full list of supported languages](support-matrix-defender-for-containers.md#registries-and-images-support-for-akspowered-by-qualys).
-## Prerequisites
+- **Image scanning in Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. Learn how to [allow access by trusted services](/azure/container-registry/allow-access-trusted-services).
-Before you can scan your ACR images:
+- **Reporting** - Container Vulnerability Assessment for Azure powered by Qualys provides vulnerability reports using the following recommendations:
-- You must enable one of the following plans on your subscription:
+ | Recommendation | Description | Assessment Key
+ |--|--|--|
+ | [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainerRegistryRecommendationDetailsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648)| Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | dbd0cb49-b563-45e7-9724-889e799fa648 |
+ | [Running container images should have vulnerability findings resolved (powered by Qualys)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c) | Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | 41503391-efa5-47ee-9282-4eff6131462c |
- - [Defender CSPM](concept-cloud-security-posture-management.md). When you enable this plan, ensure you enable the **Container registries vulnerability assessments (preview)** extension.
- - [Defender for Containers](defender-for-containers-enable.md).
+- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via the ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg). A sample query is sketched after this list.
- >[!NOTE]
- > This feature is charged per image. Learn more about the [pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get).
+- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md).
+- **Support for disabling vulnerability findings** - Learn how to [disable vulnerability assessment findings on Container registry images](defender-for-containers-vulnerability-assessment-azure.md#disable-specific-findings).
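To illustrate the Resource Graph and sub-assessment options above, here's a minimal Azure CLI sketch that lists container image vulnerability sub-assessments. It assumes the `resource-graph` CLI extension is available and that the `assessedResourceType` values shown below match your findings, so treat the filter as a starting point:

```azurecli
# One-time setup: the Resource Graph commands live in a CLI extension
az extension add --name resource-graph

# List container image vulnerability sub-assessments
# (the assessedResourceType values are assumptions - adjust them to what you see in your tenant)
az graph query -q "
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| where tostring(properties.additionalData.assessedResourceType) in ('ContainerRegistryVulnerability', 'AzureContainerRegistryVulnerability')
| project id, displayName = tostring(properties.displayName), severity = tostring(properties.status.severity)
| limit 20" --output table
```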
-To find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
+## Scan triggers
-Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
+- **One-time triggering**
+ - Each image pushed/imported to a container registry is scanned shortly after being pushed to a registry. In most cases, the scan is completed within a few minutes, but sometimes it may take up to an hour.
+ - Each image pulled from a container registry is scanned if it wasn't scanned in the last seven days.
+- **Continuous rescan triggering** - Continuous rescan is required to ensure images that have been previously scanned for vulnerabilities are rescanned to update their vulnerability reports in case a new vulnerability is published.
+ - **Rescan** is performed once every 7 days for:
+ - images pulled in the last 30 days
+ - images currently running on the Kubernetes clusters monitored by the Defender for Containers agent
-Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md)
+## Prerequisites
-You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-vulnerability-assessment-elastic.md) directly from the Azure portal.
+Before you can scan your ACR images, you must enable the [Defender for Containers](defender-for-containers-enable.md) plan on your subscription.
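If you prefer the CLI to the portal for this prerequisite, a minimal sketch for enabling the plan at subscription scope looks like the following; confirm that the pricing name `Containers` matches what `az security pricing list` reports for your subscription:

```azurecli
# Enable Microsoft Defender for Containers on the active subscription
az security pricing create --name Containers --tier Standard

# Verify the plan tier
az security pricing show --name Containers
```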
For a list of the types of images and container registries supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#registries-and-images).

## View and remediate findings
-1. To view the findings, open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved-(powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+1. To view the findings, open the **Recommendations** page. If issues are found, you'll see the recommendation [Container registry images should have vulnerability findings resolved-(powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
:::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/container-registry-images-name-line.png" alt-text="Screenshot showing the recommendation line." lightbox="media/defender-for-containers-vulnerability-assessment-azure/container-registry-images-name-line.png":::
For a list of the types of images and container registries supported by Microsof
If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
-When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
+When a finding matches the criteria you've defined in your disable rules, it doesn't appear in the list of findings. Typical scenarios include:
- Disable findings with severity below medium
-- Disable findings that are non-patchable
+- Disable findings that are nonpatchable
- Disable findings with CVSS score below 6.5
- Disable findings with specific text in the security check or category (for example, "RedHat", "CentOS Security Update for sudo")
When a finding matches the criteria you've defined in your disable rules, it won
You can use any of the following criteria:

- Finding ID
+- CVE
- Category
- Security check
- CVSS v3 scores
defender-for-cloud Defender For Sql On Machines Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-on-machines-vulnerability-assessment.md
To create a rule:
:::image type="content" source="./media/defender-for-sql-on-machines-vulnerability-assessment/disable-rule-vulnerability-findings-sql.png" alt-text="Create a disable rule for VA findings on SQL servers on machines.":::
-1. Select **Apply rule**. Changes might take up to 24 hrs to take effect.
+1. Select **Apply rule**. Changes might take up to 24 hours to take effect.
1. To view, override, or delete a rule:
defender-for-cloud Defender For Storage Classic Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-migrate.md
Title: Migrate from Defender for Storage (classic) description: Learn about how to migrate from Defender for Storage (classic) to the new Defender for Storage plan to take advantage of its enhanced capabilities and pricing. Previously updated : 03/16/2023 Last updated : 07/31/2023
# Migrate from Defender for Storage (classic) to the new plan
-The new Defender for Storage plan was launched on March 28, 2023. If you're currently using Microsoft Defender for Storage (classic) with the per-transaction or the per-storage account pricing plan, consider upgrading to the new Defender for Storage plan, which offers several new benefits that aren't included in the classic plan. The new plan includes advanced security capabilities to help protect against malicious file uploads, sensitive data exfiltration, and data corruption. It also provides a more predictable and flexible pricing structure for better control over coverage and costs.
+The new Defender for Storage plan was launched on March 28, 2023. If you're currently using Microsoft Defender for Storage (classic) with the per-transaction or the per-storage account pricing plan, consider upgrading to the new [Defender for Storage](defender-for-storage-introduction.md) plan, which offers several new benefits that aren't included in the classic plan.
## Why move to the new plan?
-The new plan includes more advanced capabilities that can help improve the security of your data and help prevent malicious file uploads, sensitive data exfiltration, and data corruption:
+The new plan includes advanced security capabilities to help protect against malicious file uploads, sensitive data exfiltration, and data corruption.
-### Malware Scanning
+The new plan also provides a more predictable and flexible pricing structure for better control over coverage and costs.
-Malware Scanning in Defender for Storage helps protect storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, using Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements to handle untrusted content. Every file type is scanned, and scan results are returned for every file.
-The Malware Scanning capability is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale.
-Learn more about [Malware Scanning](defender-for-storage-malware-scan.md).
-Learn more about [Malware Scanning](defender-for-storage-malware-scan.md).
-
-### Sensitive data threat detection
-
-The 'sensitive data threat detection' capability enables security teams to efficiently prioritize and examine security alerts by considering the sensitivity of the data that could be at risk, leading to better detection and preventing data breaches.
-'Sensitive data threat detection' is powered by the "Sensitive Data Discovery" engine, an agentless engine that uses a smart sampling method to find resources with sensitive data.
-The service is easily integrated with Microsoft Purview's sensitive information types (SITs) and classification labels, allowing seamless inheritance of your organization's sensitivity settings.
-This is a configurable feature in the new Defender for Storage plan. You can choose to enable or disable it with no extra cost.
-
-Learn more about [sensitive data threat detection](defender-for-storage-data-sensitivity.md).
-
-### Detection of entities without identities
-
-This expansion of the security alerts suite helps identify suspicious activities generated by entities without identities, such as those using misconfigured or overly permissive Shared Access Signatures (SAS tokens) that may have leaked or been compromised. By detecting and addressing these issues, you can improve the security of your storage accounts and reduce the risk of unauthorized access.
-
-The new plan also includes a pricing plan that charges based on the number of storage accounts you protect, which simplifies cost calculations and allows for easy scaling as your needs change. You can enable it at the subscription or resource level and can also exclude specific storage accounts from protected subscriptions, providing more granular control over your security coverage. Extra charges may apply to storage accounts with high-volume transactions that exceed a high monthly threshold.
+The new pricing plan charges based on the number of storage accounts you protect, which simplifies cost calculations and allows for easy scaling as your needs change. You can enable it at the subscription or resource level and can also exclude specific storage accounts from protected subscriptions, providing more granular control over your security coverage. Extra charges may apply to storage accounts with high-volume transactions that exceed a high monthly threshold.
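For reference, switching a subscription to the new plan can be done with a single Azure CLI call. This is a minimal sketch; the `DefenderForStorageV2` subplan name is an assumption based on current CLI naming, so verify it before running:

```azurecli
# Enable the new Defender for Storage plan at subscription scope
# (the subplan name is an assumption - confirm with `az security pricing show --name StorageAccounts`)
az security pricing create --name StorageAccounts --tier Standard --subplan DefenderForStorageV2
```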
## Deprecation of Defender for Storage (classic)
The classic plan will be deprecated in the future, and the deprecation will be a
## Migration scenarios
-Migrating from the classic Defender for Storage plan to the new Defender for Storage plan is a straightforward process, and there are several ways to do it. You'll need to proactively enable the new plan to access its enhanced capabilities and pricing.
+Migrating from the classic Defender for Storage plan to the new Defender for Storage plan is a straightforward process, and there are several ways to do it. You'll need to proactively [enable the new plan](../storage/common/azure-defender-storage-configure.md) to access its enhanced capabilities and pricing.
>[!NOTE]
> To enable the new plan, make sure to disable the old Defender for Storage policies. Look for and disable policies named "Configure Azure Defender for Storage to be enabled", "Azure Defender for Storage should be enabled", or "Configure Microsoft Defender for Storage to be enabled (per-storage account plan)".

### Migrating from the classic Defender for Storage plan enabled with per-transaction pricing
-If the classic Defender for Storage plan is enabled with per-transaction pricing, you can switch to the new plan at either the subscription or resource level. You can also [exclude specific storage accounts](../storage/common/azure-defender-storage-configure.md#) from protected subscriptions.
+If the classic Defender for Storage plan is enabled with per-transaction pricing, you can switch to the new plan at either the subscription or resource level. You can also [exclude specific storage accounts](../storage/common/azure-defender-storage-configure.md#override-defender-for-storage-subscription-level-settings) from protected subscriptions.
Storage accounts that were previously excluded from protected subscriptions in the per-transaction plan will not remain excluded when you switch to the new plan. However, the exclusion tags will remain on the resource and can be removed. In most cases, storage accounts that were previously excluded from protected subscriptions will benefit the most from the new pricing plan.
Learn more about how to [enable and configure Defender for Storage](../storage/c
## Next steps
-In this article, you learned about Microsoft Defender for Storage.
+In this article, you learned about migrating to the new Microsoft Defender for Storage plan.
> [!div class="nextstepaction"] > [Enable Defender for Storage](enable-enhanced-security.md)
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
# Malware Scanning in Defender for Storage
-> [!NOTE]
-> Malware Scanning is offered for free during public preview. **Billing will begin when generally available (GA) on September 1, 2023 and priced at $0.15 (USD)/GB of data scanned.** You are encouraged to use the "Monthly capping" feature to define the cap on GB scanned per storage account per month and control costs.
- Malware Scanning in Defender for Storage helps protect your storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, using Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements for handling untrusted content. The Malware Scanning capability is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale.
Malware Scanning doesn't block access or change permissions to the uploaded blob
Upon uploading a blob to the storage account, the Malware Scanning will initiate an additional read operation and update the index tag. In most cases, these operations do not generate significant load.
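Because the scan result is written to the blob's index tags, you can inspect it from the Azure CLI. A minimal sketch follows; the account, container, and blob names are placeholders, and the exact tag key written by Malware Scanning may differ in your environment:

```azurecli
# Inspect the index tags of an uploaded blob (names are placeholders)
az storage blob tag list \
  --account-name <storage-account> \
  --container-name <container> \
  --name <blob-name> \
  --auth-mode login
```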
-### Capping mechanism
-
-The "capping" mechanism, which would allow you to set limitations on the scanning process to manage cost, is currently not functional (Malware Scanning is free during preview). However, we encourage you to set the desired limitations now, and these will be automatically implemented when the "capping" feature becomes functional.
### Impact on access and storage IOPS

Despite the scanning process, access to uploaded data remains unaffected, and the impact on storage Input/Output Operations Per Second (IOPS) is minimal.
defender-for-cloud Disable Vulnerability Findings Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/disable-vulnerability-findings-containers.md
Title: Disable vulnerability assessment findings on Container registry images and running images
-description: Microsoft Defender for Cloud includes a fully integrated agentless vulnerability assessment solution powered by MDVM (Microsoft Defender Vulnerability Management).
+ Title: Creating exemptions and disabling vulnerabilities
+description: Learn how to create exemptions and disable vulnerabilities
Last updated 07/09/2023
-# Disable vulnerability assessment findings on container registry images
+# Create exemptions and disable vulnerability assessment findings on Container registry images and running images
-If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
+>[!NOTE]
+>You can customize your vulnerability assessment experience by exempting management groups, subscriptions, or specific resources from your secure score. Learn how to [create an exemption](exempt-resource.md) for a resource or subscription.
-When a finding matches the criteria you've defined in your disable rules, it doesn't appear in the list of findings. Typical scenario examples include:
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
-- Disable findings with severity below medium
-- Disable findings for images that the vendor will not fix
+When a finding matches the criteria you've defined in your disable rules, it doesn't appear in the list of findings. Typical scenario examples include:
-> [!IMPORTANT]
-> To create a rule, you need permissions to edit a policy in Azure Policy.
-> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
+- Disable findings with severity below medium
+- Disable findings for images that the vendor will not fix
+> [!IMPORTANT]
+> To create a rule, you need permissions to edit a policy in Azure Policy.
+> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
-You can use a combination of any of the following criteria:
+You can use a combination of any of the following criteria:
- **CVE** - Enter the CVEs of the findings you want to exclude. Ensure the CVEs are valid. Separate multiple CVEs with a semicolon. For example, CVE-2020-1347; CVE-2020-1346.
-- **Image digest** - Specify images for which vulnerabilities should be excluded based on the image digest. Separate multiple digests with a semicolon, for example: sha256:9b920e938111710c2768b31699aac9d1ae80ab6284454e8a9ff42e887fa1db31;sha256:ab0ab32f75988da9b146de7a3589c47e919393ae51bbf2d8a0d55dd92542451c
-- **OS version** - Specify images for which vulnerabilities should be excluded based on the image OS. Separate multiple versions with a semicolon, for example: ubuntu_linux_20.04;alpine_3.17
+- **Image digest** - Specify images for which vulnerabilities should be excluded based on the image digest. Separate multiple digests with a semicolon, for example: `sha256:9b920e938111710c2768b31699aac9d1ae80ab6284454e8a9ff42e887fa1db31;sha256:ab0ab32f75988da9b146de7a3589c47e919393ae51bbf2d8a0d55dd92542451c`
+- **OS version** - Specify images for which vulnerabilities should be excluded based on the image OS. Separate multiple versions with a semicolon, for example: ubuntu_linux_20.04;alpine_3.17
- **Minimum Severity** - Select low, medium, high, or critical to exclude vulnerabilities less than and equal to the specified severity level.
-- **Fix status** - Select the option to exclude vulnerabilities based on their fix status.
-
+- **Fix status** - Select the option to exclude vulnerabilities based on their fix status.
Disable rules apply per recommendation, for example, to disable [CVE-2017-17512](https://github.com/advisories/GHSA-fc69-2v7r-7r95) both on the registry images and runtime images, the disable rule has to be configured in both places.
-> [!NOTE]
+> [!NOTE]
> The [Azure Preview Supplemental Terms](//azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- To create a rule:
+ To create a rule:
1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved powered by Microsoft Defender Vulnerability Management](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) or [Running container images should have vulnerability findings resolved powered by Microsoft Defender Vulnerability Management](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5), select **Disable rule**.

1. Select the relevant scope.
-1. Define your criteria. You can use any of the following criteria:
-
+1. Define your criteria. You can use any of the following criteria:
+
 - **CVE** - Enter the CVEs of the findings you want to exclude. Ensure the CVEs are valid. Separate multiple CVEs with a semicolon. For example, CVE-2020-1347; CVE-2020-1346.
 - **Image digest** - Specify images for which vulnerabilities should be excluded based on the image digest. Separate multiple digests with a semicolon, for example: sha256:9b920e938111710c2768b31699aac9d1ae80ab6284454e8a9ff42e887fa1db31;sha256:ab0ab32f75988da9b146de7a3589c47e919393ae51bbf2d8a0d55dd92542451c
- - **OS version** - Specify images for which vulnerabilities should be excluded based on the image OS. Separate multiple versions with a semicolon, for example: ubuntu_linux_20.04;alpine_3.17
+ - **OS version** - Specify images for which vulnerabilities should be excluded based on the image OS. Separate multiple versions with a semicolon, for example: ubuntu_linux_20.04;alpine_3.17
- **Minimum Severity** - Select low, medium, high, or critical to exclude vulnerabilities less than and equal to the specified severity level.
- - **Fix status** - Select the option to exclude vulnerabilities based on their fix status.
+ - **Fix status** - Select the option to exclude vulnerabilities based on their fix status.
1. In the justification text box, add your justification for why a specific vulnerability was disabled. This provides clarity and understanding for anyone reviewing the rule.
-
+ 1. Select **Apply rule**.

:::image type="content" source="./media/disable-vulnerability-findings-containers/disable-rules.png" alt-text="Screenshot showing where to create a disable rule for vulnerability findings on registry images." lightbox="media/disable-vulnerability-findings-containers/disable-rules.png":::

> [!IMPORTANT]
- > Changes might take up to 24hrs to take effect.
+ > Changes might take up to 24 hours to take effect.
**To view, override, or delete a rule:**
Disable rules apply per recommendation, for example, to disable [CVE-2017-17512]
1. To view or delete the rule, select the ellipsis menu ("...").
1. Do one of the following:
    - To view or override a disable rule - select **View rule**, make any changes you want, and select **Override rule**.
- - To delete a disable rule - select **Delete rule**.
+ - To delete a disable rule - select **Delete rule**.
:::image type="content" source="./media/disable-vulnerability-findings-containers/override-rules.png" alt-text="Screenshot showing where to view, delete or override a rule for vulnerability findings on registry images." lightbox="media/disable-vulnerability-findings-containers/override-rules.png":::

-
## Next steps

- Learn how to [view and remediate vulnerability assessment findings for registry images](view-and-remediate-vulnerability-assessment-findings.md).
- Learn about [agentless container posture](concept-agentless-containers.md).
-
defender-for-cloud Enable Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-vulnerability-assessment.md
+
+ Title: Enable vulnerability assessment in Azure powered by MDVM
+description: Learn how to enable vulnerability assessment in Azure powered by Microsoft Defender Vulnerability Management (MDVM)
++ Last updated : 07/20/2023++
+# Enable vulnerability assessment in Azure powered by MDVM
+
+Vulnerability assessment for Azure, powered by Microsoft Defender Vulnerability Management (MDVM), is an out-of-box solution that empowers security teams to easily discover and remediate vulnerabilities in Linux container images, with zero configuration for onboarding, and without deployment of any agents.
+
+## How to enable vulnerability assessment in Azure powered by MDVM
+
+1. Before starting, verify that the subscription is [onboarded to Defender CSPM](tutorial-enable-cspm-plan.md), [Defender for Containers](tutorial-enable-containers-azure.md), or [Defender for Container Registries](defender-for-container-registries-introduction.md). A CLI check for this prerequisite is sketched after these steps.
+1. In the Azure portal, navigate to Defender for Cloud's **Environment Settings** page.
+
+1. Select the subscription that's onboarded to one of the above plans. Then select **Settings**.
+
+1. Ensure the **Container registries vulnerability assessments** extension is toggled to **On**.
+
+1. Select **Continue**.
+
+ :::image type="content" source="media/concept-agentless-containers/select-container-registries-vulnerability-assessments.png" alt-text="Screenshot of selecting agentless discovery for Kubernetes and Container registries vulnerability assessments." lightbox="media/concept-agentless-containers/select-container-registries-vulnerability-assessments.png":::
+
+1. Select **Save**.
+
+A notification message pops up in the top right corner, confirming that the settings were saved successfully.
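For the prerequisite check in step 1, you can confirm which of these plans is already enabled on the subscription from the CLI. A minimal sketch; the pricing names below are assumptions for the respective plans, so verify them with `az security pricing list`:

```azurecli
# Check which relevant Defender plans are enabled on the active subscription
# (pricing names are assumptions - verify with `az security pricing list`)
az security pricing show --name CloudPosture        # Defender CSPM
az security pricing show --name Containers          # Defender for Containers
az security pricing show --name ContainerRegistry   # Defender for Container Registries (legacy)
```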
+
+## How to enable runtime coverage
+
+- For Defender CSPM, use agentless discovery for Kubernetes. For more information, see [Onboard agentless container posture in Defender CSPM](how-to-enable-agentless-containers.md).
+- For Defender for Containers, use the Defender for Containers agent. For more information, see [Deploy the Defender profile in Azure](tutorial-enable-containers-azure.md#deploy-the-defender-profile-in-azure).
+- For Defender for Container Registries, there is no runtime coverage.
+
+## Next steps
+
+- Learn more about [Trusted Access](/azure/aks/trusted-access-feature).
+- Learn how to [view and remediate vulnerability assessment findings for registry images and running images](view-and-remediate-vulnerability-assessment-findings.md).
+- Learn how to [create an exemption](exempt-resource.md) for a resource or subscription.
+- Learn more about [Cloud Security Posture Management](concept-cloud-security-posture-management.md).
defender-for-cloud Episode Thirty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-three.md
Title: Agentless Container Posture Management | Defender for Cloud in the field -
-description: Learn about Agentless Container Posture Management
+ Title: Agentless container posture management | Defender for Cloud in the field
+description: Learn about agentless container posture management
Last updated 06/13/2023
-# Agentless Container Posture Management
+# Agentless container posture management
**Episode description**: In this episode of Defender for Cloud in the Field, Shani Freund Menscher joins Yuri Diogenes to talk about a new capability in Defender CSPM called Agentless Container Posture Management. Shani explains how Agentless Container Posture Management works, how to onboard, and how to leverage this feature to obtain more insights into the container's security. Shani also demonstrates how to visualize this information using Attack Path and Cloud Security Explorer.
Last updated 06/13/2023
## Recommended resources

-- Learn more about [Agentless Container Posture](concept-agentless-containers.md)
+- Learn more about [agentless container posture](concept-agentless-containers.md)
- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) - Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
defender-for-cloud How To Enable Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-enable-agentless-containers.md
Title: How-to enable Agentless Container posture in Microsoft Defender CSPM
-description: Learn how to onboard Agentless Containers
+ Title: How-to enable agentless container posture in Microsoft Defender CSPM
+description: Learn how to onboard agentless containers
Previously updated : 06/13/2023 Last updated : 07/31/2023
-# Onboard Agentless Container posture in Defender CSPM
+# Onboard agentless container posture in Defender CSPM
-Onboarding Agentless Container posture in Defender CSPM will allow you to gain all its [capabilities](concept-agentless-containers.md#capabilities).
+Onboarding agentless container posture in Defender CSPM will allow you to gain all its [capabilities](concept-agentless-containers.md#capabilities).
-Defender CSPM includes [two extensions](#what-are-the-extensions-for-agentless-container-posture-management) that allow for agentless visibility into Kubernetes and containers registries across your organization's SDLC and runtime.
+Defender CSPM includes [two extensions](#what-are-the-extensions-for-agentless-container-posture-management) that allow for agentless visibility into Kubernetes and containers registries across your organization's software development lifecycle.
-**To onboard Agentless Container posture in Defender CSPM:**
+**To onboard agentless container posture in Defender CSPM:**
1. Before starting, verify that the subscription is [onboarded to Defender CSPM](enable-enhanced-security.md).
Defender CSPM includes [two extensions](#what-are-the-extensions-for-agentless-c
1. Select **Continue**.
- :::image type="content" source="media/concept-agentless-containers/settings-continue.png" alt-text="Screenshot of selecting agentless discovery for Kubernetes and Container registries vulnerability assessments." lightbox="media/concept-agentless-containers/settings-continue.png":::
+ :::image type="content" source="media/concept-agentless-containers/select-components.png" alt-text="Screenshot of selecting components." lightbox="media/concept-agentless-containers/select-components.png":::
1. Select **Save**.

A notification message pops up in the top right corner, confirming that the settings were saved successfully.
-## What are the extensions for Agentless Container Posture management?
+## What are the extensions for agentless container posture management?
There are two extensions that provide agentless CSPM functionality:

-- **Container registries vulnerability assessments**: Provides agentless containers registries vulnerability assessments. Recommendations are available based on the vulnerability assessment timeline. Learn more about [image scanning](concept-agentless-containers.md#agentless-container-registry-vulnerability-assessment).
+- **Container registries vulnerability assessments**: Provides agentless containers registries vulnerability assessments. Recommendations are available based on the vulnerability assessment timeline. Learn more about [image scanning](agentless-container-registry-vulnerability-assessment.md).
- **Agentless discovery for Kubernetes**: Provides API-based discovery of information about Kubernetes cluster architecture, workload objects, and setup.

## How can I onboard multiple subscriptions at once?

To onboard multiple subscriptions at once, you can use this [script](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Powershell%20scripts/Agentless%20Container%20Posture).

-
## Why don't I see results from my clusters?

If you don't see results from your clusters, check the following:
If you don't see results from your clusters, check the following:
## What can I do if I have stopped clusters?
-We do not support or charge stopped clusters. To get the value of agentless capabilities on a stopped cluster, you can rerun the cluster.
-
-## What do I do if I have locked resource groups, subscriptions, or clusters?
+We do not support or charge stopped clusters. To get the value of agentless capabilities on a stopped cluster, you can rerun the cluster.
-We suggest that you unlock the locked resource group/subscription/cluster, make the relevant requests manually, and then re-lock the resource group/subscription/cluster by doing the following:
+## What do I do if I have locked resource groups, subscriptions, or clusters?
-1. Enable the feature flag manually via CLI by using [Trusted Access](/azure/aks/trusted-access-feature).
+We suggest that you unlock the locked resource group/subscription/cluster, make the relevant requests manually, and then re-lock the resource group/subscription/cluster by doing the following:
+1. Enable the feature flag manually via CLI by using [Trusted Access](/azure/aks/trusted-access-feature).
- ``` CLI
+ ``` CLI
az feature register --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
- ```
+ ```
-2. Perform the bind operation in the CLI:
+2. Perform the bind operation in the CLI:
- ``` CLI
+ ``` CLI
az account set -s <SubscriptionId>
We suggest that you unlock the locked resource group/subscription/cluster, make
az aks trustedaccess rolebinding create --resource-group <cluster resource group> --cluster-name <cluster name> --name defender-cloudposture --source-resource-id /subscriptions/<SubscriptionId>/providers/Microsoft.Security/pricings/CloudPosture/securityOperators/DefenderCSPMSecurityOperator --roles "Microsoft.Security/pricings/microsoft-defender-operator"
- ```
+ ```
-For locked clusters, you can also do one of the following:
+For locked clusters, you can also do one of the following:
-- Remove the lock. -- Perform the bind operation manually by making an API request.
+- Remove the lock.
+- Perform the bind operation manually by making an API request.
Learn more about [locked resources](/azure/azure-resource-manager/management/lock-resources?tabs=json).
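If you temporarily remove a lock to perform the steps above, you can restore it afterward from the CLI as well. A minimal sketch; the lock and resource group names are placeholders:

```azurecli
# Remove an existing lock before making the requests (names are placeholders)
az lock delete --name <lock-name> --resource-group <cluster-resource-group>

# ...run the feature registration and bind operations shown above...

# Restore the lock when you're done
az lock create --name <lock-name> --resource-group <cluster-resource-group> --lock-type CanNotDelete
```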
Learn more about [locked resources](/azure/azure-resource-manager/management/loc
Learn more about [supported Kubernetes versions in Azure Kubernetes Service (AKS)](/azure/aks/supported-kubernetes-versions?tabs=azure-cli).
- ## Support for exemptions
-
-You can customize your vulnerability assessment experience by exempting management groups, subscriptions, or specific resources from your secure score. Learn how to [create an exemption](exempt-resource.md) for a resource or subscription.
## Next Steps

-- Learn more about [Trusted Access](/azure/aks/trusted-access-feature).
+- Learn more about [Trusted Access](/azure/aks/trusted-access-feature).
- Learn how to [view and remediate vulnerability assessment findings for registry images](view-and-remediate-vulnerability-assessment-findings.md).
+- Learn how to [view and remediate vulnerabilities for images running on your AKS clusters](view-and-remediate-vulnerabilities-for-images-running-on-aks.md).
- Learn how to [Test the Attack Path and Security Explorer using a vulnerable container image](how-to-test-attack-path-and-security-explorer-with-vulnerable-container-image.md)
- Learn how to [create an exemption](exempt-resource.md) for a resource or subscription.
- Learn more about [Cloud Security Posture Management](concept-cloud-security-posture-management.md).
-
-
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
Learn more about [the cloud security graph, attack path analysis, and the cloud
## Prerequisites

- You must [enable Defender CSPM](enable-enhanced-security.md).
- - For Agentless Container Posture, you must enable the following extensions:
- - Agentless discovery for Kubernetes (preview)
- - Container registries vulnerability assessments (preview)
+ - For agentless container posture, you must enable the following extensions:
+ - Agentless discovery for Kubernetes (preview)
+ - Container registries vulnerability assessments (preview)
- You must [enable agentless scanning](enable-vulnerability-assessment-agentless.md).

-- Required roles and permissions:
- - Security Reader
- - Security Admin
- - Reader
- - Contributor
- - Owner
+- Required roles and permissions:
+ - Security Reader
+ - Security Admin
+ - Reader
+ - Contributor
+ - Owner
Check the [cloud availability tables](supported-machines-endpoint-solutions-clouds-servers.md) to see which government and cloud environments are supported.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
You can configure the Microsoft Security DevOps tools on Azure Pipelines and Git
The following new recommendations are now available for DevOps:
-| Recommendation | Description | Severity |
+| Recommendation | Description | Severity |
|--|--|--|
-| (Preview) [Code repositories should have code scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/c68a8c2a-6ed4-454b-9e37-4b7654f2165f/showSecurityCenterCommandBar~/false) | Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it's highly recommended to remediate these vulnerabilities. (No related policy) | Medium |
-| (Preview) [Code repositories should have secret scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/4e07c7d0-e06c-47d7-a4a9-8c7b748d1b27/showSecurityCenterCommandBar~/false) | Defender for DevOps has found a secret in code repositories.  This should be remediated immediately to prevent a security breach.  Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. For Azure DevOps, the Microsoft Security DevOps CredScan tool only scans builds on which it has been configured to run. Therefore, results may not reflect the complete status of secrets in your repositories. (No related policy) | High |
-| (Preview) [Code repositories should have Dependabot scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/822425e3-827f-4f35-bc33-33749257f851/showSecurityCenterCommandBar~/false) | Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it's highly recommended to remediate these vulnerabilities. (No related policy) | Medium |
-| (Preview) [Code repositories should have infrastructure as code scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/2ebc815f-7bc7-4573-994d-e1cc46fb4a35/showSecurityCenterCommandBar~/false) | (Preview) Code repositories should have infrastructure as code scanning findings resolved | Medium |
-| (Preview) [GitHub repositories should have code scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6672df26-ff2e-4282-83c3-e2f20571bd11/showSecurityCenterCommandBar~/false) | GitHub uses code scanning to analyze code in order to find security vulnerabilities and errors in code. Code scanning can be used to find, triage, and prioritize fixes for existing problems in your code. Code scanning can also prevent developers from introducing new problems. Scans can be scheduled for specific days and times, or scans can be triggered when a specific event occurs in the repository, such as a push. If code scanning finds a potential vulnerability or error in code, GitHub displays an alert in the repository. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project. (No related policy) | Medium |
-| (Preview) [GitHub repositories should have secret scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/1a600c61-6443-4ab4-bd28-7a6b6fb4691d/showSecurityCenterCommandBar~/false) | GitHub scans repositories for known types of secrets, to prevent fraudulent use of secrets that were accidentally committed to repositories. Secret scanning will scan the entire Git history on all branches present in the GitHub repository for any secrets. Examples of secrets are tokens and private keys that a service provider can issue for authentication. If a secret is checked into a repository, anyone who has read access to the repository can use the secret to access the external service with those privileges. Secrets should be stored in a dedicated, secure location outside the repository for the project. (No related policy) | High |
-| (Preview) [GitHub repositories should have Dependabot scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/92643c1f-1a95-4b68-bbd2-5117f92d6e35/showSecurityCenterCommandBar~/false) | GitHub sends Dependabot alerts when it detects vulnerabilities in code dependencies that affect repositories. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project or other projects that use its code. Vulnerabilities vary in type, severity, and method of attack. When code depends on a package that has a security vulnerability, this vulnerable dependency can cause a range of problems. (No related policy) | Medium |
+| (Preview) [Code repositories should have code scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/c68a8c2a-6ed4-454b-9e37-4b7654f2165f/showSecurityCenterCommandBar~/false) | Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it's highly recommended to remediate these vulnerabilities. (No related policy) | Medium |
+| (Preview) [Code repositories should have secret scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/4e07c7d0-e06c-47d7-a4a9-8c7b748d1b27/showSecurityCenterCommandBar~/false) | Defender for DevOps has found a secret in code repositories. This should be remediated immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. For Azure DevOps, the Microsoft Security DevOps CredScan tool only scans builds on which it has been configured to run. Therefore, results may not reflect the complete status of secrets in your repositories. (No related policy) | High |
+| (Preview) [Code repositories should have Dependabot scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/822425e3-827f-4f35-bc33-33749257f851/showSecurityCenterCommandBar~/false) | Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it's highly recommended to remediate these vulnerabilities. (No related policy) | Medium |
+| (Preview) [Code repositories should have infrastructure as code scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/2ebc815f-7bc7-4573-994d-e1cc46fb4a35/showSecurityCenterCommandBar~/false) | (Preview) Code repositories should have infrastructure as code scanning findings resolved | Medium |
+| (Preview) [GitHub repositories should have code scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6672df26-ff2e-4282-83c3-e2f20571bd11/showSecurityCenterCommandBar~/false) | GitHub uses code scanning to analyze code in order to find security vulnerabilities and errors in code. Code scanning can be used to find, triage, and prioritize fixes for existing problems in your code. Code scanning can also prevent developers from introducing new problems. Scans can be scheduled for specific days and times, or scans can be triggered when a specific event occurs in the repository, such as a push. If code scanning finds a potential vulnerability or error in code, GitHub displays an alert in the repository. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project. (No related policy) | Medium |
+| (Preview) [GitHub repositories should have secret scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/1a600c61-6443-4ab4-bd28-7a6b6fb4691d/showSecurityCenterCommandBar~/false) | GitHub scans repositories for known types of secrets, to prevent fraudulent use of secrets that were accidentally committed to repositories. Secret scanning will scan the entire Git history on all branches present in the GitHub repository for any secrets. Examples of secrets are tokens and private keys that a service provider can issue for authentication. If a secret is checked into a repository, anyone who has read access to the repository can use the secret to access the external service with those privileges. Secrets should be stored in a dedicated, secure location outside the repository for the project. (No related policy) | High |
+| (Preview) [GitHub repositories should have Dependabot scanning enabled](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/92643c1f-1a95-4b68-bbd2-5117f92d6e35/showSecurityCenterCommandBar~/false) | GitHub sends Dependabot alerts when it detects vulnerabilities in code dependencies that affect repositories. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project or other projects that use its code. Vulnerabilities vary in type, severity, and method of attack. When code depends on a package that has a security vulnerability, this vulnerable dependency can cause a range of problems. (No related policy) | Medium |
The Defender for DevOps recommendations replaced the deprecated vulnerability scanner for CI/CD workflows that was included in Defender for Containers.
We've renamed the Auto-provisioning page to **Settings & monitoring**.
Auto-provisioning was meant to allow at-scale enablement of prerequisites, which are needed by Defender for Cloud's advanced features and capabilities. To better support our expanded capabilities, we're launching a new experience with the following changes:

**The Defender for Cloud's plans page now includes**:

- When you enable a Defender plan that requires monitoring components, those components are enabled for automatic provisioning with default settings. These settings can optionally be edited at any time.
- You can access the monitoring component settings for each Defender plan from the Defender plan page.
- The Defender plans page clearly indicates whether all the monitoring components are in place for each Defender plan, or if your monitoring coverage is incomplete.

**The Settings & monitoring page**:

- Each monitoring component indicates the Defender plans to which it's related.

Learn more about [managing your monitoring settings](monitoring-components.md).
Learn more about [vulnerability assessment for Amazon ECR images](defender-for-c
Updates in September include: - [Suppress alerts based on Container and Kubernetes entities](#suppress-alerts-based-on-container-and-kubernetes-entities)-- [Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent](#defender-for-servers-supports-file-integrity-monitoring-with-azure-monitor-agent)
+- [Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent](#defender-for-servers-supports-file-integrity-monitoring-with-azure-monitor-agent)
- [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation) - [Extra recommendations added to identity](#extra-recommendations-added-to-identity) - [Removed security alerts for machines reporting to cross-tenant Log Analytics workspaces](#removed-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces)
Learn more about [alert suppression rules](alerts-suppression-rules.md).
### Defender for Servers supports File Integrity Monitoring with Azure Monitor Agent
-File integrity monitoring (FIM) examines operating system files and registries for changes that might indicate an attack.
+File integrity monitoring (FIM) examines operating system files and registries for changes that might indicate an attack.
FIM is now available in a new version based on Azure Monitor Agent (AMA), which you can [deploy through Defender for Cloud](auto-deploy-azure-monitoring-agent.md).
Updates in August include:
- [Azure Monitor Agent integration now in preview](#azure-monitor-agent-integration-now-in-preview) - [Deprecated VM alerts regarding suspicious activity related to a Kubernetes cluster](#deprecated-vm-alerts-regarding-suspicious-activity-related-to-a-kubernetes-cluster)
-### Vulnerabilities for running images are now visible with Defender for Containers on your Windows containers
+### Vulnerabilities for running images are now visible with Defender for Containers on your Windows containers
Defender for Containers now shows vulnerabilities for running Windows containers.
When vulnerabilities are detected, Defender for Cloud generates the following se
Learn more about [viewing vulnerabilities for running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters). ### Azure Monitor Agent integration now in preview
-
+ Defender for Cloud now includes preview support for the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA). AMA is intended to replace the legacy Log Analytics agent (also referred to as the Microsoft Monitoring Agent (MMA)), which is on a path to deprecation. AMA [provides many benefits](../azure-monitor/agents/azure-monitor-agent-migration.md#benefits) over legacy agents. In Defender for Cloud, when you [enable auto provisioning for AMA](auto-deploy-azure-monitoring-agent.md), the agent is deployed on **existing and new** VMs and Azure Arc-enabled machines that are detected in your subscriptions. If Defender for Cloud plans are enabled, AMA collects configuration information and event logs from Azure VMs and Azure Arc machines. The AMA integration is in preview, so we recommend using it in test environments, rather than in production environments.
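If you want to check which machines already report the AMA extension before relying on auto provisioning, one option is to query Azure Resource Graph. The following Python sketch is illustrative only and isn't part of the Defender for Cloud feature itself; it assumes the `azure-identity` and `azure-mgmt-resourcegraph` packages, a placeholder subscription ID, and that AMA surfaces as the `AzureMonitorWindowsAgent`/`AzureMonitorLinuxAgent` VM extensions.

```python
# Illustrative sketch: list VMs that report an Azure Monitor Agent extension.
# Assumes `pip install azure-identity azure-mgmt-resourcegraph` and an existing
# sign-in (for example, `az login`). Replace the subscription ID placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

query = """
resources
| where type == 'microsoft.compute/virtualmachines/extensions'
| where tostring(properties.type) in ('AzureMonitorWindowsAgent', 'AzureMonitorLinuxAgent')
| project vmId = tostring(split(id, '/extensions/')[0]), extensionName = name,
          provisioningState = tostring(properties.provisioningState)
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(
    subscriptions=[SUBSCRIPTION_ID],
    query=query,
    options=QueryRequestOptions(result_format="objectArray"),
))
for row in result.data:
    print(row["vmId"], row["extensionName"], row["provisioningState"])
```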
We deprecated the following policies to corresponding policies that already exis
| To be deprecated | Changing to | |--|--|
-|`Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'` | `App Service apps should have 'Client Certificates (Incoming client certificates)' enabled` |
+|`Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'` | `App Service apps should have 'Client Certificates (Incoming client certificates)' enabled` |
| `Ensure that 'Python version' is the latest, if used as a part of the API app` | `App Service apps that use Python should use the latest Python version'` | | `CORS should not allow every resource to access your API App` | `App Service apps should not have CORS configured to allow every resource to access your apps` | | `Managed identity should be used in your API App` | `App Service apps should use managed identity` |
Updates in June include:
### General availability (GA) for Microsoft Defender for Azure Cosmos DB
-Microsoft Defender for Azure Cosmos DB is now generally available (GA) and supports SQL (core) API account types.
+Microsoft Defender for Azure Cosmos DB is now generally available (GA) and supports SQL (core) API account types.
This new release to GA is a part of the Microsoft Defender for Cloud database protection suite, which includes different types of SQL databases, and MariaDB. Microsoft Defender for Azure Cosmos DB is an Azure native layer of security that detects attempts to exploit databases in your Azure Cosmos DB accounts.
Microsoft Defender for Azure Cosmos DB continuously analyzes the telemetry strea
Learn more about [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md).
-With the addition of support for Azure Cosmos DB, Defender for Cloud now provides one of the most comprehensive workload protection offerings for cloud-based databases. Security teams and database owners can now have a centralized experience to manage their database security of their environments.
+With the addition of support for Azure Cosmos DB, Defender for Cloud now provides one of the most comprehensive workload protection offerings for cloud-based databases. Security teams and database owners can now have a centralized experience to manage their database security of their environments.
Learn how to [enable protections](enable-enhanced-security.md) for your databases.
In many cases of attacks, you want to track alerts based on the IP address of th
### Alerts by resource group
-The ability to filter, sort and group by resource group has been added to the Security alerts page.
+The ability to filter, sort and group by resource group has been added to the Security alerts page.
A resource group column has been added to the alerts grid.
A new filter has been added which allows you to view all of the alerts for speci
:::image type="content" source="media/release-notes/filter-by-resource-group.png" alt-text="Screenshot that shows the new resource group filter." lightbox="media/release-notes/filter-by-resource-group.png":::
-You can now also group your alerts by resource group to view all of your alerts for each of your resource groups.
+You can now also group your alerts by resource group to view all of your alerts for each of your resource groups.
:::image type="content" source="media/release-notes/group-by-resource.png" alt-text="Screenshot that shows how to view your alerts when they're grouped by resource group." lightbox="media/release-notes/group-by-resource.png":::
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
If you're looking for items older than six months, you can find them in the [Arc
Updates in July include:

|Date |Update |
-|||
+|-|-|
+| July 31 | [Preview release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries](#preview-release-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-for-containers-and-defender-for-container-registries) |
+| July 30 | [Agentless container posture in Defender CSPM is now Generally Available](#agentless-container-posture-in-defender-cspm-is-now-generally-available) |
| July 20 | [Management of automatic updates to Defender for Endpoint for Linux](#management-of-automatic-updates-to-defender-for-endpoint-for-linux) |
| July 18 | [Agentless secret scanning for virtual machines in Defender for servers P2 & DCSPM](#agentless-secret-scanning-for-virtual-machines-in-defender-for-servers-p2--dcspm) |
| July 12 | [New Security alert in Defender for Servers plan 2: Detecting Potential Attacks leveraging Azure VM GPU driver extensions](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions) |
| July 9 | [Support for disabling specific vulnerability findings](#support-for-disabling-specific-vulnerability-findings) |
| July 1 | [Data Aware Security Posture is now Generally Available](#data-aware-security-posture-is-now-generally-available) |
+### Preview release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries
+
+July 31, 2023
+
+We're announcing the release of Vulnerability Assessment (VA) for Linux container images in Azure container registries powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries. The new container VA offering will be provided alongside our existing Container VA offering powered by Qualys in both Defender for Containers and Defender for Container Registries, and includes daily rescans of container images, exploitability information, support for OS packages and programming languages (SCA), and more.
+
+This new offering will start rolling out today, and is expected to be available to all customers by August 7.
+
+For more information, see [Container Vulnerability Assessment powered by MDVM](agentless-container-registry-vulnerability-assessment.md) and [Microsoft Defender Vulnerability Management (MDVM)](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management).
+
+### Agentless container posture in Defender CSPM is now Generally Available
+
+July 30, 2023
+
+Agentless container posture capabilities are now Generally Available (GA) as part of the Defender CSPM (Cloud Security Posture Management) plan.
+
+Learn more about [agentless container posture in Defender CSPM](concept-agentless-containers.md).
### Management of automatic updates to Defender for Endpoint for Linux

July 20, 2023
-By default, Defender for Cloud attempts to update your Defender for Endpoint for Linux agents onboarded with the `MDE.Linux` extension. With this release, you can manage this setting and opt-out from the default configuration to manage your update cycles manually.
+By default, Defender for Cloud attempts to update your Defender for Endpoint for Linux agents onboarded with the `MDE.Linux` extension. With this release, you can manage this setting and opt-out from the default configuration to manage your update cycles manually.
Learn how to [manage automatic updates configuration for Linux](integration-defender-for-endpoint.md#manage-automatic-updates-configuration-for-linux).
For a complete list of alerts, see the [reference table for all security alerts
July 9, 2023
-Release of support for disabling vulnerability findings for your container registry images or running images as part of agentless container posture. If you have an organizational need to ignore a vulnerability finding on your container registry image, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
+Release of support for disabling vulnerability findings for your container registry images or running images as part of agentless container posture. If you have an organizational need to ignore a vulnerability finding on your container registry image, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
-Learn how to [disable vulnerability assessment findings on Container registry images](disable-vulnerability-findings-containers.md).
+Learn how to [disable vulnerability assessment findings on Container registry images](disable-vulnerability-findings-containers.md).
### Data Aware Security Posture is now Generally Available
Defender for Cloud has improved the onboarding experience to include a new strea
For organizations that have adopted Hashicorp Terraform for automation, Defender for Cloud now includes the ability to use Terraform as the deployment method alongside AWS CloudFormation or GCP Cloud Shell. You can now customize the required role names when creating the integration. You can also select between: -- **Default access** - Allows Defender for Cloud to scan your resources and automatically include future capabilities.
+- **Default access** - Allows Defender for Cloud to scan your resources and automatically include future capabilities.
-- **Least privileged access** -Grants Defender for Cloud access only to the current permissions needed for the selected plans.
+- **Least privileged access** - Grants Defender for Cloud access only to the current permissions needed for the selected plans.
If you select the least privileged permissions, you'll only receive notifications on any new roles and permissions that are required to get full functionality on the connector health. Defender for Cloud allows you to distinguish between your cloud accounts by their native names from the cloud vendors. For example, AWS account aliases and GCP project names.
-### Private Endpoint support for Malware Scanning in Defender for Storage
+### Private Endpoint support for Malware Scanning in Defender for Storage
June 25, 2023
June 21, 2023
A new container recommendation in Defender CSPM powered by MDVM is released for preview: |Recommendation | Description | Assessment Key|
-|--|--|--|
+|--|--|--|
| Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) (Preview) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |

This new recommendation replaces the current recommendation of the same name, powered by Qualys, only in Defender CSPM (replacing assessment key 41503391-efa5-47ee-9282-4eff6131462c).
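If you prefer to pull these findings programmatically rather than through the portal, sub-assessments are exposed through Azure Resource Graph. The sketch below is a hedged example rather than an official sample: it filters the `securityresources` sub-assessment records on the assessment key from the table above, and assumes the `azure-identity`/`azure-mgmt-resourcegraph` packages and a placeholder subscription ID. The projected property paths follow the documented sub-assessment schema and may need adjusting for your environment.

```python
# Hedged sketch: query Azure Resource Graph for sub-assessments of the
# MDVM-powered "running container images" recommendation, using the assessment
# key shown in the table above.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
ASSESSMENT_KEY = "c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5"

query = f"""
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| where id contains '{ASSESSMENT_KEY}'
| project finding = tostring(properties.displayName),
          severity = tostring(properties.status.severity),
          resource = tostring(properties.resourceDetails.id)
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(
    subscriptions=[SUBSCRIPTION_ID],
    query=query,
    options=QueryRequestOptions(result_format="objectArray"),
))
for row in result.data:
    print(row["severity"], row["finding"], row["resource"])
```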
This new recommendation replaces the current recommendation of the same name, po
June 15, 2023
-The NIST 800-53 standards (both R4 and R5) have recently been updated with control changes in Microsoft Defender for Cloud regulatory compliance. The Microsoft-managed controls have been removed from the standard, and the information on the Microsoft responsibility implementation (as part of the cloud shared responsibility model) is now available only in the control details pane under **Microsoft Actions**.
+The NIST 800-53 standards (both R4 and R5) have recently been updated with control changes in Microsoft Defender for Cloud regulatory compliance. The Microsoft-managed controls have been removed from the standard, and the information on the Microsoft responsibility implementation (as part of the cloud shared responsibility model) is now available only in the control details pane under **Microsoft Actions**.
These controls were previously calculated as passed controls, so you may see a significant dip in your compliance score for NIST standards between April 2023 and May 2023.
Check out this [blog](https://techcommunity.microsoft.com/t5/microsoft-defender-
You can learn the differences between [express and classic configuration](sql-azure-vulnerability-assessment-overview.md#what-are-the-express-and-classic-configurations).
-### More scopes added to existing Azure DevOps Connectors
+### More scopes added to existing Azure DevOps Connectors
June 6, 2023 Defender for DevOps added the following extra scopes to the Azure DevOps (ADO) application: -- **Advance Security management**: `vso.advsec_manage`. Which is needed in order to allow you to enable, disable and manage GitHub Advanced Security for ADO.
+- **Advance Security management**: `vso.advsec_manage`, which is needed to allow you to enable, disable, and manage GitHub Advanced Security for ADO.
- **Container Mapping**: `vso.extension_manage`, `vso.gallery_manager`, which are needed to allow you to share the decorator extension with the ADO organization.
Updates in May include:
### New alert in Defender for Key Vault

| Alert (alert type) | Description | MITRE tactics | Severity |
|--|--|:-:|--|
| **Unusual access to the key vault from a suspicious IP (Non-Microsoft or External)**<br>(KV_UnusualAccessSuspiciousIP) | A user or service principal has attempted anomalous access to key vaults from a non-Microsoft IP in the last 24 hours. This anomalous access pattern may be legitimate activity. It could be an indication of a possible attempt to gain access to the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
Learn more about [agentless scanning](concept-agentless-data-collection.md) and
### Revised JIT (Just-In-Time) rule naming conventions in Defender for Cloud
-We revised the JIT (Just-In-Time) rules to align with the Microsoft Defender for Cloud brand. We changed the naming conventions for Azure Firewall and NSG (Network Security Group) rules.
+We revised the JIT (Just-In-Time) rules to align with the Microsoft Defender for Cloud brand. We changed the naming conventions for Azure Firewall and NSG (Network Security Group) rules.
The changes are listed as follows:
The following recommendations are now released as General Availability (GA) and
#### General Availability (GA) release of identity recommendations V2 The V2 release of identity recommendations introduces the following enhancements:+ - The scope of the scan has been expanded to include all Azure resources, not just subscriptions. Which enables security administrators to view role assignments per account. - Specific accounts can now be exempted from evaluation. Accounts such as break glass or service accounts can be excluded by security administrators.-- The scan frequency has been increased from 24 hours to 12 hours, thereby ensuring that the identity recommendations are more up-to-date and accurate.
+- The scan frequency has been increased from 24 hours to 12 hours, thereby ensuring that the identity recommendations are more up-to-date and accurate.
The following security recommendations are available in GA and replace the V1 recommendations:
Defender for DevOps Code and IaC has expanded its recommendation coverage in Mic
- `Code repositories should have infrastructure as code scanning findings resolved`
-Previously, coverage for Azure DevOps security scanning only included the secrets recommendation.
+Previously, coverage for Azure DevOps security scanning only included the secrets recommendation.
Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
Permissions can be granted in two different ways:
- In your organization, select **GitHub Apps**. Locate Your organization, and select **Review request**. -- You'll get an automated email from GitHub Support. In the email, select **Review permission request to accept or reject this change**.
+- You'll get an automated email from GitHub Support. In the email, select **Review permission request to accept or reject this change**.
After you have followed either of these options, you'll be navigated to the review screen where you should review the request. Select **Accept new permissions** to approve the request.
If a subscription has a VA solution enabled on any of its VMs, no changes are ma
Learn how to [Find vulnerabilities and collect software inventory with agentless scanning (Preview)](enable-vulnerability-assessment-agentless.md).
-### Defender for DevOps Pull Request annotations in Azure DevOps repositories now includes Infrastructure as Code misconfigurations
+### Defender for DevOps Pull Request annotations in Azure DevOps repositories now includes Infrastructure as Code misconfigurations
-Defender for DevOps has expanded its Pull Request (PR) annotation coverage in Azure DevOps to include Infrastructure as Code (IaC) misconfigurations that are detected in Azure Resource Manager and Bicep templates.
+Defender for DevOps has expanded its Pull Request (PR) annotation coverage in Azure DevOps to include Infrastructure as Code (IaC) misconfigurations that are detected in Azure Resource Manager and Bicep templates.
-Developers can now see annotations for IaC misconfigurations directly in their PRs. Developers can also remediate critical security issues before the infrastructure is provisioned into cloud workloads. To simplify remediation, developers are provided with a severity level, misconfiguration description, and remediation instructions within each annotation.
+Developers can now see annotations for IaC misconfigurations directly in their PRs. Developers can also remediate critical security issues before the infrastructure is provisioned into cloud workloads. To simplify remediation, developers are provided with a severity level, misconfiguration description, and remediation instructions within each annotation.
-Previously, coverage for Defender for DevOps PR annotations in Azure DevOps only included secrets.
+Previously, coverage for Defender for DevOps PR annotations in Azure DevOps only included secrets.
Learn more about [Defender for DevOps](defender-for-devops-introduction.md) and [Pull Request annotations](enable-pull-request-annotations.md).

## April 2023

Updates in April include:

- [Agentless Container Posture in Defender CSPM (Preview)](#agentless-container-posture-in-defender-cspm-preview)
Updates in April include:
The new Agentless Container Posture (Preview) capabilities are available as part of the Defender CSPM (Cloud Security Posture Management) plan.
-Agentless Container Posture allows security teams to identify security risks in containers and Kubernetes realms. An agentless approach allows security teams to gain visibility into their Kubernetes and containers registries across SDLC and runtime, removing friction and footprint from the workloads.
+Agentless Container Posture allows security teams to identify security risks in containers and Kubernetes realms. An agentless approach allows security teams to gain visibility into their Kubernetes and containers registries across SDLC and runtime, removing friction and footprint from the workloads.
Agentless Container Posture offers container vulnerability assessments that, combined with attack path analysis, enable security teams to prioritize and zoom into specific container vulnerabilities. You can also use cloud security explorer to uncover risks and hunt for container posture insights, such as discovery of applications running vulnerable images or exposed to the internet.
Learn more at [Agentless Container Posture (Preview)](concept-agentless-containe
### Unified Disk Encryption recommendation (preview)
-We have introduced a unified disk encryption recommendation in public preview, `Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost` and `Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost`.
+We have introduced a unified disk encryption recommendation in public preview, `Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost` and `Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost`.
-These recommendations replace `Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources`, which detected Azure Disk Encryption and the policy `Virtual machines and virtual machine scale sets should have encryption at host enabled`, which detected EncryptionAtHost. ADE and EncryptionAtHost provide comparable encryption at rest coverage, and we recommend enabling one of them on every virtual machine. The new recommendations detect whether either ADE or EncryptionAtHost are enabled and only warn if neither are enabled. We also warn if ADE is enabled on some, but not all disks of a VM (this condition isn't applicable to EncryptionAtHost).
+These recommendations replace `Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources`, which detected Azure Disk Encryption and the policy `Virtual machines and virtual machine scale sets should have encryption at host enabled`, which detected EncryptionAtHost. ADE and EncryptionAtHost provide comparable encryption at rest coverage, and we recommend enabling one of them on every virtual machine. The new recommendations detect whether either ADE or EncryptionAtHost are enabled and only warn if neither are enabled. We also warn if ADE is enabled on some, but not all disks of a VM (this condition isn't applicable to EncryptionAtHost).
The new recommendations require [Azure Automanage Machine Configuration](https://aka.ms/gcpol).
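For a quick inventory of VMs that would be flagged because encryption at host isn't turned on, you could run an Azure Resource Graph query yourself. The following sketch is only a rough illustration under assumptions (placeholder subscription ID, the `azure-identity`/`azure-mgmt-resourcegraph` packages): it inspects the `securityProfile.encryptionAtHost` property and intentionally does not reproduce the recommendation's full logic, since it ignores Azure Disk Encryption.

```python
# Rough sketch: list VMs where encryption at host isn't enabled. This is only a
# quick inventory check, not the recommendation's evaluation logic (it does not
# consider Azure Disk Encryption).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

query = """
resources
| where type == 'microsoft.compute/virtualmachines'
| extend encryptionAtHost = tobool(properties.securityProfile.encryptionAtHost)
| where coalesce(encryptionAtHost, false) == false
| project name, resourceGroup, encryptionAtHost
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(
    subscriptions=[SUBSCRIPTION_ID],
    query=query,
    options=QueryRequestOptions(result_format="objectArray"),
))
for vm in result.data:
    print(vm["name"], vm["resourceGroup"], vm["encryptionAtHost"])
```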
The following App Service language monitoring policies have been deprecated due
| [Function apps that use Python should use the latest 'Python version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) | 7238174a-fd10-4ef0-817e-fc820a951d73 | | [App Service apps that use PHP should use the latest 'PHP version'](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3)| 7261b898-8a84-4db8-9e04-18527132abb3 |
-Customers can use alternative built-in policies to monitor any specified language version for their App Services.
+Customers can use alternative built-in policies to monitor any specified language version for their App Services.
These policies are no longer available in Defender for Cloud's built-in recommendations. You can [add them as custom recommendations](create-custom-recommendations.md) to have Defender for Cloud monitor them.
The following three alerts for the Defender for Resource Manager plan have been
In a scenario where activity from a suspicious IP address is detected, one of the following Defender for Resource Manager plan alerts, `Azure Resource Manager operation from suspicious IP address` or `Azure Resource Manager operation from suspicious proxy IP address`, will be present.

### Alerts automatic export to Log Analytics workspace has been deprecated

Defender for Cloud security alerts are automatically exported to a default Log Analytics workspace on the resource level. This causes nondeterministic behavior, and therefore we have deprecated this feature.
The security alert quality improvement process for Defender for Servers includes
If you already have the Defender for Endpoint integration enabled, no further action is required. You may experience a decrease in your alerts volume in April 2023.
-If you don't have the Defender for Endpoint integration enabled in Defender for Servers, you'll need to enable the Defender for Endpoint integration to maintain and improve your alert coverage.
+If you don't have the Defender for Endpoint integration enabled in Defender for Servers, you'll need to enable the Defender for Endpoint integration to maintain and improve your alert coverage.
All Defender for Servers customers have full access to the Defender for Endpoint integration as a part of the [Defender for Servers plan](plan-defender-for-servers-select-plan.md#plan-features).
The recommendations `System updates should be installed on your machines (powere
To use the new recommendation, you need to: - Connect your non-Azure machines to Arc.-- [Enable the periodic assessment property](../update-center/assessment-options.md#periodic-assessment). You can use the [Fix button](implement-security-recommendations.md).
+- [Enable the periodic assessment property](../update-center/assessment-options.md#periodic-assessment). You can use the [Fix button](implement-security-recommendations.md).
in the new recommendation, `Machines should be configured to periodically check for missing system updates` to fix the recommendation. After completing these steps, you can remove the old recommendation `System updates should be installed on your machines`, by disabling it from Defender for Cloud's built-in initiative in Azure policy. The two versions of the recommendations: -- [`System updates should be installed on your machines`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesRecommendationDetailsWithRulesBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-18
96-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)
+- [`System updates should be installed on your machines`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesRecommendationDetailsWithRulesBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d557e825-27b1-4819-8af5-dc242
9af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)
- [`System updates should be installed on your machines (powered by Update management center)`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesV2RecommendationDetailsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%2
2d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)
-
+ will both be available until the [Log Analytics agent is deprecated on August 31, 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), which is when the older version (`System updates should be installed on your machines`) of the recommendation will be deprecated as well. Both recommendations return the same results and are available under the same control `Apply system updates`.
-The new recommendation `System updates should be installed on your machines (powered by Update management center)`, has a remediation flow available through the Fix button, which can be used to remediate any results through the Update Management Center (Preview). This remediation process is still in Preview.
+The new recommendation, `System updates should be installed on your machines (powered by Update management center)`, has a remediation flow available through the Fix button, which can be used to remediate any results through the Update Management Center (Preview). This remediation process is still in Preview.
The new recommendation, `System updates should be installed on your machines (powered by Update management center)`, isn't expected to affect your Secure Score, as it has the same results as the old recommendation `System updates should be installed on your machines`.
-The prerequisite recommendation ([Enable the periodic assessment property](../update-center/assessment-options.md#periodic-assessment)) has a negative effect on your Secure Score. You can remediate the negative effect with the available [Fix button](implement-security-recommendations.md).
+The prerequisite recommendation ([Enable the periodic assessment property](../update-center/assessment-options.md#periodic-assessment)) has a negative effect on your Secure Score. You can remediate the negative effect with the available [Fix button](implement-security-recommendations.md).
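If you'd rather check the prerequisite from code than from the portal, the sketch below reads a VM's patch settings with the Azure SDK for Python. Treat it as an assumption-laden example: it assumes the `azure-identity` and `azure-mgmt-compute` packages, placeholder resource names, and that periodic assessment corresponds to `assessmentMode` being set to `AutomaticByPlatform` on the VM's OS profile. Enabling the property itself can still be done through the recommendation's Fix button as described above.

```python
# Hedged sketch: read a VM's patch assessment mode. 'AutomaticByPlatform' is the
# value we assume corresponds to periodic assessment being enabled; 'ImageDefault'
# or None suggests the prerequisite still needs to be configured (for example,
# through the recommendation's Fix button).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
VM_NAME = "<vm-name>"                   # placeholder

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
vm = compute.virtual_machines.get(RESOURCE_GROUP, VM_NAME)

patch_settings = None
os_profile = vm.os_profile
if os_profile and os_profile.windows_configuration:
    patch_settings = os_profile.windows_configuration.patch_settings
elif os_profile and os_profile.linux_configuration:
    patch_settings = os_profile.linux_configuration.patch_settings

mode = getattr(patch_settings, "assessment_mode", None)
print(f"{VM_NAME}: assessment mode = {mode}")
```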
### Defender for APIs (Preview)
The new plan has new capabilities now in public preview:
These capabilities enhance the existing Activity Monitoring capability, based on control and data plane log analysis and behavioral modeling to identify early signs of breach.
-All these capabilities are available in a new predictable and flexible pricing plan that provides granular control over data protection at both the subscription and resource levels.
+All these capabilities are available in a new predictable and flexible pricing plan that provides granular control over data protection at both the subscription and resource levels.
Learn more at [Overview of Microsoft Defender for Storage](defender-for-storage-introduction.md).
We introduce an improved Azure security policy management experience for built-i
- A single view of all built-in security recommendations offered by the Microsoft cloud security benchmark (formerly the Azure security benchmark). Recommendations are organized into logical groups, making it easier to understand the types of resources covered, and the relationship between parameters and recommendations. - New features such as filters and search have been added.
-Learn how to [manage security policies](tutorial-security-policy.md).
+Learn how to [manage security policies](tutorial-security-policy.md).
Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/improved-experience-for-managing-the-default-azure-security/ba-p/3776522).
Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com
We're announcing that Defender CSPM is now Generally Available (GA). Defender CSPM offers all of the services available under the Foundational CSPM capabilities and adds the following benefits: - **Attack path analysis and ARG API** - Attack path analysis uses a graph-based algorithm that scans the cloud security graph to expose attack paths and suggests recommendations as to how best remediate issues that break the attack path and prevent successful breach. You can also consume attack paths programmatically by querying Azure Resource Graph (ARG) API. Learn how to use [attack path analysis](how-to-manage-attack-path.md)-- **Cloud Security explorer** - Use the Cloud Security Explorer to run graph-based queries on the cloud security graph, to proactively identify security risks in your multicloud environments. Learn more about [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer).
+- **Cloud Security explorer** - Use the Cloud Security Explorer to run graph-based queries on the cloud security graph, to proactively identify security risks in your multicloud environments. Learn more about [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer).
Learn more about [Defender CSPM](overview-page.md).
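Because attack paths can also be consumed programmatically through the Azure Resource Graph API, here's a minimal sketch of what such a query might look like. The `microsoft.security/attackpaths` type name and the raw `properties` projection are assumptions rather than a documented contract in this article, so verify the schema in your tenant before relying on it.

```python
# Minimal sketch (assumptions noted above): pull attack path records from
# Azure Resource Graph and print their raw properties for inspection.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

query = """
securityresources
| where type =~ 'microsoft.security/attackpaths'
| project id, subscriptionId, properties
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(
    subscriptions=[SUBSCRIPTION_ID],
    query=query,
    options=QueryRequestOptions(result_format="objectArray"),
))
for path in result.data:
    print(path["id"])
    print(path["properties"])
```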
This feature is part of the Defender CSPM (Cloud Security Posture Management) pl
### Microsoft cloud security benchmark (MCSB) version 1.0 is now Generally Available (GA)
-Microsoft Defender for Cloud is announcing that the Microsoft cloud security benchmark (MCSB) version 1.0 is now Generally Available (GA).
+Microsoft Defender for Cloud is announcing that the Microsoft cloud security benchmark (MCSB) version 1.0 is now Generally Available (GA).
MCSB version 1.0 replaces the Azure Security Benchmark (ASB) version 3 as Microsoft Defender for Cloud's default security policy for identifying security vulnerabilities in your cloud environments according to common security frameworks and best practices. MCSB version 1.0 appears as the default compliance standard in the compliance dashboard and is enabled by default for all Defender for Cloud customers.
Learn more about [MCSB](https://aka.ms/mcsb).
We're announcing that the following regulatory standards are being updated with the latest version and are available for customers in Azure Government and Azure China 21Vianet.

**Azure Government**:

- [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss)
- [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2)
- [ISO 27001:2013](/azure/compliance/offerings/offering-iso-27001)

**Azure China 21Vianet**:

- [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2)
- [ISO 27001:2013](/azure/compliance/offerings/offering-iso-27001)
Learn how to [Manage AWS assessments and standards](how-to-manage-aws-assessment
### Microsoft Defender for DevOps (preview) is now available in other regions
-Microsoft Defender for DevOps has expanded its preview and is now available in the West Europe and East Australia regions, when you onboard your Azure DevOps and GitHub resources.
+Microsoft Defender for DevOps has expanded its preview and is now available in the West Europe and East Australia regions, when you onboard your Azure DevOps and GitHub resources.
Learn more about [Microsoft Defender for DevOps](defender-for-devops-introduction.md).
The related [policy definition](https://portal.azure.com/#view/Microsoft_Azure_P
## Next steps For past changes to Defender for Cloud, see [Archive for what's new in Defender for Cloud?](release-notes-archive.md).-
defender-for-cloud Remediate Vulnerability Findings Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/remediate-vulnerability-findings-vm.md
When your vulnerability assessment tool reports vulnerabilities to Defender for
To view vulnerability assessment findings (from all of your configured scanners) and remediate identified vulnerabilities:
-1. From Defender for Cloud's menu, open the **Recommendations** page.
+1. From Defender for Cloud's menu, open the **Recommendations** page.
1. Select the recommendation **Machines should have vulnerability findings resolved**.
- Defender for Cloud shows you all the findings for all VMs in the currently selected subscriptions. The findings are ordered by severity.
+ Defender for Cloud shows you all the findings for all VMs in the currently selected subscriptions. The findings are ordered by severity.
:::image type="content" source="media/remediate-vulnerability-findings-vm/vulnerabilities-should-be-remediated.png" alt-text="The findings from your vulnerability assessment solutions for all selected subscriptions." lightbox="media/remediate-vulnerability-findings-vm/vulnerabilities-should-be-remediated.png"::: 1. To filter the findings by a specific VM, open the "Affected resources" section and click the VM that interests you. Or you can select a VM from the resource health view, and view all relevant recommendations for that resource.
- Defender for Cloud shows the findings for that VM, ordered by severity.
+ Defender for Cloud shows the findings for that VM, ordered by severity.
-1. To learn more about a specific vulnerability, select it.
+1. To learn more about a specific vulnerability, select it.
:::image type="content" source="media/remediate-vulnerability-findings-vm/vulnerability-details.png" alt-text="Details pane for a specific vulnerability." lightbox="media/remediate-vulnerability-findings-vm/vulnerability-details.png"::: The details pane that appears contains extensive information about the vulnerability, including:
-
- * Links to all relevant CVEs (where available)
- * Remediation steps
- * Any additional reference pages
-1. To remediate a finding, follow the remediation steps from this details pane.
+ - Links to all relevant CVEs (where available)
+ - Remediation steps
+ - Any additional reference pages
+1. To remediate a finding, follow the remediation steps from this details pane.
## Disable specific findings
To create a rule:
1. Select the relevant scope.
-1. Define your criteria. You can use any of the following criteria:
- - Finding ID
+1. Define your criteria. You can use any of the following criteria:
+ - Finding ID
- Category
- - Security check
- - CVSS scores (v2, v3)
- - Severity
- - Patchable status
+ - Security check
+ - CVSS scores (v2, v3)
+ - Severity
+ - Patchable status
1. Select **Apply rule**. :::image type="content" source="./media/remediate-vulnerability-findings-vm/new-disable-rule-for-finding.png" alt-text="Create a disable rule for VA findings on VM."::: > [!IMPORTANT]
- > Changes might take up to 24hrs to take effect.
+ > Changes might take up to 24 hours to take effect.
-1. To view, override, or delete a rule:
+1. To view, override, or delete a rule:
1. Select **Disable rule**. 1. From the scope list, subscriptions with active rules show as **Rule applied**. :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Modify or delete an existing rule."::: 1. To view or delete the rule, select the ellipsis menu ("...").
-
## Export the results
To export vulnerability assessment results, you'll need to use [Azure Resource G
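As a rough illustration of the shape such an export can take, the sketch below uses `Search-AzGraph` from the `Az.ResourceGraph` module. The `<assessment-key>` value is a placeholder for the key of the vulnerability assessment recommendation, and the projected property paths are assumptions; rely on the sample query in the post linked below for the authoritative version.

```azurepowershell-interactive
# Sketch only: export vulnerability sub-assessment findings via Azure Resource Graph.
# <assessment-key> is a placeholder, and the property paths are assumptions.
$query = @"
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| where id contains '<assessment-key>'
| project id, subscriptionId, title = tostring(properties.displayName), severity = tostring(properties.status.severity)
"@
Search-AzGraph -Query $query -First 1000 | Export-Csv -Path .\va-findings.csv -NoTypeInformation
```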
For full instructions and a sample ARG query, see the following Tech Community post: [Exporting vulnerability assessment results in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/azure-security-center/exporting-vulnerability-assessment-results-in-azure-security/ba-p/1212091).

## Next steps

This article described the Microsoft Defender for Cloud vulnerability assessment extension (powered by Qualys) for scanning your VMs. For related material, see the following articles:

- [Learn about the different elements of a recommendation](review-security-recommendations.md)
defender-for-cloud Sql Azure Vulnerability Assessment Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-manage.md
To create a rule:
- Benchmarks 1. Create a disable rule for VA findings on SQL servers on machines
-1. Select Apply rule. Changes might take up to 24 hrs to take effect.
+1. Select Apply rule. Changes might take up to 24 hours to take effect.
1. To view, override, or delete a rule: 1. Select Disable rule. 1. From the scope list, subscriptions with active rules show as Rule applied.
Typical scenarios may include:
:::image type="content" source="media/defender-for-sql-Azure-vulnerability-assessment/disable-rule-vulnerability-findings-sql.png" alt-text="Screenshot of create a disable rule for VA findings on SQL servers on machines.":::
-1. Select **Apply rule**. Changes might take up to 24 hrs to take effect.
+1. Select **Apply rule**. Changes might take up to 24 hours to take effect.
1. To view, override, or delete a rule: 1. Select **Disable rule**. 1. From the scope list, subscriptions with active rules show as **Rule applied**.
defender-for-cloud Support Agentless Containers Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-agentless-containers-posture.md
Title: Support and prerequisites for agentless container posture
description: Learn about the requirements for agentless container posture in Microsoft Defender for Cloud Previously updated : 06/14/2023 Last updated : 07/02/2023 # Support and prerequisites for agentless containers posture
All of the agentless container capabilities are available as part of the [Defend
Review the requirements on this page before setting up [agentless containers posture](concept-agentless-containers.md) in Microsoft Defender for Cloud.
-> [!IMPORTANT]
-> Agentless Posture is currently in Preview. Previews are provided "as is" and "as available" and are excluded from the service-level agreements and limited warranty.
- ## Availability | Aspect | Details | |||
-|Release state:|Preview |
+|Release state:| General Availability (GA) |
|Pricing:|Requires [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) and is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts | | Permissions | You need to have access as a:<br><br> - Subscription Owner, **or** <br> - User Access Admin and Security Admin permissions for the Azure subscription used for onboarding |
-## Registries and images
+## Registries and images - powered by MDVM
-| Aspect | Details |
-|--|--|
-| Registries and images | **Supported**<br> ΓÇó ACR registries <br> ΓÇó [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries requires access to Trusted Services) <br> ΓÇó Container images in Docker V2 format <br> **Unsupported**<br> ΓÇó Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> ΓÇó "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> is currently unsupported <br> ΓÇó Images in [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) <br> ΓÇó Windows images<br>|
-| OS Packages | **Supported** <br> ΓÇó Alpine Linux 3.12-3.16 <br> ΓÇó Red Hat Enterprise Linux 6-9 <br> ΓÇó CentOS 6-9<br> ΓÇó Oracle Linux 6-9 <br> ΓÇó Amazon Linux 1, 2 <br> ΓÇó openSUSE Leap, openSUSE Tumbleweed <br> ΓÇó SUSE Enterprise Linux 11-15 <br> ΓÇó Debian GNU/Linux 7-12 <br> ΓÇó Ubuntu 12.04-22.04 <br> ΓÇó Fedora 31-37<br> ΓÇó Mariner 1-2|
-| Language specific packages <br><br> | **Supported** <br> ΓÇó Python <br> ΓÇó Node.js <br> ΓÇó .NET <br> ΓÇó JAVA <br> ΓÇó Go |
## Prerequisites
Learn more about [supported Kubernetes versions in Azure Kubernetes Service (AKS
### Are attack paths triggered on workloads that are running on Azure Container Instances?
-Attack paths are currently not triggered for workloads running on[ Azure Container Instances](/azure/container-instances/).
+Attack paths are currently not triggered for workloads running on [Azure Container Instances](/azure/container-instances/).
## Next steps Learn how to [enable agentless containers](how-to-enable-agentless-containers.md).-
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
This article summarizes support information for the [Defender for Containers pla
| Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--| | Compliance-Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md)-registry scan [OS packages](#registries-and-images-support-aks)| ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md)-registry scan [language packages](#registries-and-images-support-aks) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| [Vulnerability assessment-running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) | AKS | GA | Preview | Defender profile | Defender for Containers | Commercial clouds |
+| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) - registry scan [OS packages](#registries-and-images-support-for-akspowered-by-qualys) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) -registry scan [language packages](#registries-and-images-support-for-akspowered-by-qualys) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| [Vulnerability assessment (powered by Qualys) - running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) | AKS | GA | Preview | Defender profile | Defender for Containers | Commercial clouds |
+| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - registry scan | ACR, Private ACR | Preview | | Agentless | Defender for Containers | Commercial clouds |
+| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - running images | AKS | Preview | | Defender profile | Defender for Containers | Commercial clouds |
| [Hardening (control plane)](defender-for-containers-architecture.md) | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | [Hardening (Kubernetes data plane)](kubernetes-workload-protections.md) | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | [Runtime threat detection](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
This article summarizes support information for the [Defender for Containers pla
| Discovery/provisioning-Defender profile auto provisioning | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Discovery/provisioning-Azure policy add-on auto provisioning | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-### Registries and images support-AKS
+### Registries and images support for AKS - powered by Qualys
| Aspect | Details | |--|--|
This article summarizes support information for the [Defender for Containers pla
| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
+### Registries and images - powered by MDVM
++ ### Kubernetes distributions and configurations | Aspect | Details |
Outbound proxy without authentication and outbound proxy with basic authenticati
| Aspect | Details | |--|--|
-| Kubernetes distributions and configurations | **Supported**<br> ΓÇó Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>ΓÇó [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> ΓÇó [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> ΓÇó [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>ΓÇó [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> ΓÇó [Kubernetes](https://kubernetes.io/docs/home/)<br> ΓÇó [AKS Engine](https://github.com/Azure/aks-engine)<br> ΓÇó [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> ΓÇó [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> ΓÇó [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> ΓÇó [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> |
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br />**Unsupported**<br /> • Private network clusters<br /> • GKE autopilot<br /> • GKE AuthorizedNetworksConfig |
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
defender-for-cloud Tutorial Enable Container Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-gcp.md
There are two dedicated Defender for Cloud recommendations you can use to instal
- `GKE clusters should have Microsoft Defender's extension for Azure Arc installed` - `GKE clusters should have the Azure Policy extension installed`
+> [!NOTE]
+> When installing Arc extensions, you must verify that the GCP project provided is identical to the one in the relevant connector.
+ **To deploy the solution to specific clusters**: 1. Sign in to the [Azure portal](https://portal.azure.com).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Estimated date for change | |--|--|
+| [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | June 2023
+| [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | July 2023 |
+| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | July 2023 |
+| [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | July 2023 |
| [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | June 2023|
-| [General availability release of agentless container posture in Defender CSPM](#general-availability-ga-release-of-agentless-container-posture-in-defender-cspm) | July 2023 |
| [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | August 2023 | | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | August 2023 | | [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | August 2023 |
The following table explains how each capability will be provided after the Log
To ensure the security of your servers and receive all the security updates from Defender for Servers, make sure to have [Defender for Endpoint integration](integration-defender-for-endpoint.md) and [agentless disk scanning](concept-agentless-data-collection.md) enabled on your subscriptions. This will also keep your servers up-to-date with the alternative deliverables.
+> [!IMPORTANT]
+> For more information about how to plan for this change, see [Microsoft Defender for Cloud - strategy and plan towards Log Analytics Agent (MMA) deprecation](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341).
+ #### Defender for SQL Server on machines The Defender for SQL Server on machines plan relies on the Log Analytics agent (MMA) / Azure monitoring agent (AMA) to provide Vulnerability Assessment and Advanced Threat Protection to IaaS SQL Server instances. The plan supports Log Analytics agent autoprovisioning in GA, and Azure Monitoring agent autoprovisioning in Public Preview.
The `Key Vaults should have purge protection enabled` recommendation is deprecat
See the [full index of Azure Policy built-in policy definitions for Key Vault](../key-vault/policy-reference.md)
-### General Availability (GA) release of Agentless Container Posture in Defender CSPM
-
-**Estimated date for change: July 2023**
-
-The new Agentless Container Posture capabilities are set for General Availability (GA) as part of the Defender CSPM (Cloud Security Posture Management) plan.
-
-Learn more about [Agentless Containers Posture in Defender CSPM](concept-agentless-containers.md).
- ### Changes to the Defender for DevOps recommendations environment source and resource ID **Estimated date for change: August 2023**
defender-for-cloud View And Remediate Vulnerabilities For Images Running On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerabilities-for-images-running-on-aks.md
Last updated 07/11/2023
Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462ce) recommendation.
-To provide findings for the recommendation, Defender CSPM uses [agentless container registry vulnerability assessment](concept-agentless-containers.md#agentless-container-registry-vulnerability-assessment) to create a full inventory of your K8s clusters and their workloads and correlates that inventory with the [agentless container registry vulnerability assessment](concept-agentless-containers.md#agentless-container-registry-vulnerability-assessment). The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
+To provide findings for the recommendation, Defender CSPM uses [agentless container registry vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) or the [Defender Container agent](tutorial-enable-containers-azure.md#deploy-the-defender-profile-in-azure) to create a full inventory of your Kubernetes clusters and their workloads and correlates that inventory with the vulnerability reports created for your registry images. The recommendation shows your running containers with the vulnerabilities associated with the images that each container uses, along with remediation steps.
-Vulnerability assessment for containers reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations, including related information such as remediation steps and relevant CVEs. You can view the identified vulnerabilities for one or more subscriptions, or for a specific resource.
+Defender for Cloud presents the findings as recommendations, including related information such as remediation steps and relevant CVEs. You can view the identified vulnerabilities for one or more subscriptions, or for a specific resource.
-The resources are grouped into tabs:
+Within each recommendation, resources are grouped into tabs:
- **Healthy resources** ΓÇô relevant resources, which either aren't impacted or on which you've already remediated the issue. - **Unhealthy resources** ΓÇô resources that are still impacted by the identified issue. - **Not applicable resources** ΓÇô resources for which the recommendation can't give a definitive answer. The not applicable tab also includes reasons for each resource.
-First review and remediate vulnerabilities exposed via [attack paths](how-to-manage-attack-path.md), as they pose the greatest risk to your security posture; then use the following procedures to view, remediate, prioritize, and monitor vulnerabilities for your containers.
+If you are using Defender CSPM, first review and remediate vulnerabilities exposed via [attack paths](how-to-manage-attack-path.md), as they pose the greatest risk to your security posture. Then use the following procedures to view, remediate, prioritize, and monitor vulnerabilities for your containers.
## View vulnerabilities on a specific cluster **To view vulnerabilities for a specific cluster, do the following:**
-1. Open the **Recommendations** page, using the **>** arrow to open the sub-levels. If issues were found, you'll see the recommendation [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5). Select the recommendation.
+1. Open the **Recommendations** page, using the **>** arrow to open the sub-levels. If issues were found, you'll see the recommendation [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5). Select the recommendation.
- :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png" alt-text="Screenshot showing the recommendation line for running container images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png":::
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png" alt-text="Screenshot showing the recommendation line for running container images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png":::
1. The recommendation details page opens showing the list of Kubernetes clusters ("affected resources") and categorizes them as healthy, unhealthy and not applicable, based on the images used by your workloads. Select the relevant cluster for which you want to remediate vulnerabilities.
First review and remediate vulnerabilities exposed via [attack paths](how-to-man
## View container images affected by a specific vulnerability **To view findings for a specific vulnerability, do the following:**
-1. Open the **Recommendations** page, using the **>** arrow to open the sub-levels. If issues were found, you'll see the recommendation [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5). Select the recommendation.
- :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png" alt-text="Screenshot showing the recommendation line for running container images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png":::
+1. Open the **Recommendations** page, using the **>** arrow to open the sub-levels. If issues were found, you'll see the recommendation [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5). Select the recommendation.
+
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png" alt-text="Screenshot showing the recommendation line for running container images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-image-recommendation-line.png":::
1. The recommendation details page opens with additional information. This information includes the list of vulnerabilities impacting the clusters. Select the specific vulnerability.
- :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-vulnerability.png" alt-text="Screenshot showing the list of vulnerabilities impacting the container clusters." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-vulnerability.png":::
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-vulnerability.png" alt-text="Screenshot showing the list of vulnerabilities impacting the container clusters." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-select-vulnerability.png":::
1. The vulnerability details pane opens. This pane includes a detailed description of the vulnerability, images affected by that vulnerability, and links to external resources to help mitigate the threats, affected resources, and information on the software version that contributes to [resolving the vulnerability](#remediate-vulnerabilities).
- :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-containers-affected.png" alt-text="Screenshot showing the list of container images impacted by the vulnerability." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-containers-affected.png":::
+ :::image type="content" source="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-containers-affected.png" alt-text="Screenshot showing the list of container images impacted by the vulnerability." lightbox="media/view-and-remediate-vulnerabilities-for-images-running-on-aks/running-containers-affected.png":::
## Remediate vulnerabilities Use these steps to remediate each of the affected images found either in a specific cluster or for a specific vulnerability:
-1. Follow the steps in the remediation section of the recommendation pane.
+1. Follow the steps in the remediation section of the recommendation pane.
1. When you've completed the steps required to remediate the security issue, replace each affected image in your cluster, or replace each affected image for a specific vulnerability: 1. Build a new image (including updates for each of the packages) that resolves the vulnerability according to the remediation details.
- 1. Push the updated image to trigger a scan; it may take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
+ 1. Push the updated image to trigger a scan and delete the old image. It may take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
1. Use the new image across all vulnerable workloads.
- 1. Remove the vulnerable image from the registry.
-1. Check the recommendations page for the recommendation [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c).
-1. If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
+1. Check the recommendations page for the recommendation [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c).
+1. If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
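As an illustration of the rebuild-and-push steps above, the commands might look like the following sketch. The registry, repository, and tag names are placeholders, and it assumes the Docker CLI and Azure CLI are available in your PowerShell session.

```powershell
# Sketch only: rebuild the image, push the fixed tag, and remove the vulnerable one.
# "myregistry", "myapp", and the tags are placeholders.
docker build -t myregistry.azurecr.io/myapp:1.0.1 .
az acr login --name myregistry
docker push myregistry.azurecr.io/myapp:1.0.1
# Delete the superseded, vulnerable tag so it drops out of future scan results.
az acr repository delete --name myregistry --image myapp:1.0.0 --yes
```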
-## Next Steps
+## Next steps
- Learn how to [view and remediate vulnerabilities for registry images](view-and-remediate-vulnerability-assessment-findings.md).-- Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
+- Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads)
defender-for-cloud View And Remediate Vulnerability Assessment Findings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerability-assessment-findings.md
Last updated 07/11/2023
# View and remediate vulnerabilities for registry images
-Vulnerability assessment for containers reports vulnerabilities to Defender for Cloud, Defender for Cloud presents them and related information as recommendations, including related information such as remediation steps and relevant CVEs. You can view the identified vulnerabilities for one or more subscriptions, or for a specific resource.
+Defender for Cloud gives its customers the ability to remediate vulnerabilities in container images while they're still stored in the registry by using the [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) recommendation.
-The resources are grouped into tabs:
+Within the recommendation, resources are grouped into tabs:
- **Healthy resources** ΓÇô relevant resources, which either aren't impacted or on which you've already remediated the issue. - **Unhealthy resources** ΓÇô resources that are still impacted by the identified issue. - **Not applicable resources** ΓÇô resources for which the recommendation can't give a definitive answer. The not applicable tab also includes reasons for each resource.
-First review and remediate vulnerabilities exposed via [attack paths](how-to-manage-attack-path.md), as they pose the greatest risk to your security posture; then use the following procedures to view, remediate, prioritize, and monitor vulnerabilities for your containers.
+If you are using Defender CSPM, first review and remediate vulnerabilities exposed via [attack paths](how-to-manage-attack-path.md), as they pose the greatest risk to your security posture. Then [view and remediate vulnerabilities for running images](view-and-remediate-vulnerabilities-for-images-running-on-aks.md), and finally use the procedures described here to view, remediate, prioritize, and monitor vulnerabilities in your registry images.
-## View vulnerabilities on a specific container registry
+## View vulnerabilities on a specific container registry
-1. Open the **Recommendations** page, using the **>** arrow to open the sub-levels. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5). Select the recommendation.
+1. Open the **Recommendations** page, using the **>** arrow to open the sublevels. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5). Select the recommendation.
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png" alt-text="Screenshot showing the line for recommendation container registry images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png":::
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png" alt-text="Screenshot showing the line for recommendation container registry images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png":::
1. The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("affected resources") and the remediation steps. Select the affected registry. :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-registry.png" alt-text="Screenshot showing the recommendation details and affected registries." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-registry.png":::
-1. This opens the registry details with a list of repositories in it that have vulnerable images. Select the affected repository to see the images in it that are vulnerable.
+1. This opens the registry details with a list of repositories in it that have vulnerable images. Select the affected repository to see the images in it that are vulnerable.
:::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-repo.png" alt-text="Screenshot showing where to select the specific repository." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-repo.png":::
-1. The repository details page opens. It lists all vulnerable images on that repository with distribution of the severity of vulnerabilities per image. Select the unhealthy image to see the vulnerabilities.
+1. The repository details page opens. It lists all vulnerable images on that repository with distribution of the severity of vulnerabilities per image. Select the unhealthy image to see the vulnerabilities.
+
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-unhealthy-image.png" alt-text="Screenshot showing where to select the unhealthy image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-unhealthy-image.png":::
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-unhealthy-image.png" alt-text="Screenshot showing where to select the unhealthy image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-unhealthy-image.png":::
1. The list of vulnerabilities for the selected image opens. To learn more about a finding, select the finding. :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-image-finding.png" alt-text="Screenshot showing the list of findings on the specific image." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-image-finding.png":::
First review and remediate vulnerabilities exposed via [attack paths](how-to-man
1. Open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5). Select the recommendation.
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png" alt-text="Screenshot showing the line for recommendation container registry images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png":::
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png" alt-text="Screenshot showing the line for recommendation container registry images should have vulnerability findings resolved." lightbox="media/view-and-remediate-vulnerability-assessment-findings/open-recommendations-page.png":::
1. The recommendation details page opens with additional information. This information includes the list of vulnerabilities impacting the images. Select the specific vulnerability.
- :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-specific-vulnerability.png" alt-text="Screenshot showing the list of vulnerabilities impacting the images." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-specific-vulnerability.png":::
+ :::image type="content" source="media/view-and-remediate-vulnerability-assessment-findings/select-specific-vulnerability.png" alt-text="Screenshot showing the list of vulnerabilities impacting the images." lightbox="media/view-and-remediate-vulnerability-assessment-findings/select-specific-vulnerability.png":::
1. The vulnerability finding details pane opens. This pane includes a detailed description of the vulnerability, images affected by that vulnerability, and links to external resources to help mitigate the threats, affected resources, and information on the software version that contributes to [resolving the vulnerability](#remediate-vulnerabilities).
First review and remediate vulnerabilities exposed via [attack paths](how-to-man
Use these steps to remediate each of the affected images found either in a specific cluster or for a specific vulnerability:
-1. Follow the steps in the remediation section of the recommendation pane.
-1. When you've completed the steps required to remediate the security issue, replace each affected image in your registry or replace each affected image for a specific vulnerability:
- 1. Build a new image, (including updates for each of the packages) that resolves the vulnerability according to the remediation details.
- 1. Push the updated image to trigger a scan; it may take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
- 1.Delete the vulnerable image from the registry.
-
+1. Follow the steps in the remediation section of the recommendation pane.
+1. When you've completed the steps required to remediate the security issue, replace each affected image in your registry or replace each affected image for a specific vulnerability:
+ 1. Build a new image (including updates for each of the packages) that resolves the vulnerability according to the remediation details.
+ 1. Push the updated image to trigger a scan and delete the old image. It may take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
-1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
-If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
+1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
+If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
-## Next Steps
+## Next steps
- Learn how to [view and remediate vulnerabilities for images running on Azure Kubernetes clusters](view-and-remediate-vulnerabilities-for-images-running-on-aks.md). - Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
The basic use case for capture filters uses the same filter for all Defender for
- `traffic-monitor`: Captures communication statistics > [!NOTE]
-> Capture filters don't apply to [Defender for IoT malware alerts](../alert-engine-messages.md#malware-engine-alerts), which are triggered on all detected network traffic.
+> - Capture filters don't apply to [Defender for IoT malware alerts](../alert-engine-messages.md#malware-engine-alerts), which are triggered on all detected network traffic.
>
+> - The capture filter command has a character length limit that's based on the complexity of the capture filter definition and the available network interface card capabilities. If your requested filter command fails, try grouping subnets into larger scopes and using a shorter capture filter command.
### Create a basic filter for all components
defender-for-iot Detect Windows Endpoints Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/detect-windows-endpoints-script.md
Title: Detect Windows workstations and servers with a local script
-description: Learn about how to detect Windows workstations and servers on your network using a local script.
+ Title: Enrich Windows workstation and server data with a local script
+description: Learn about how to enrich Windows workstation and server data on your OT sensor using a local script.
Last updated 07/12/2022
-# Detect Windows workstations and servers with a local script
+# Enrich Windows workstation and server data with a local script (Public preview)
-In addition to detecting OT devices on your network, use Defender for IoT to discover Microsoft Windows workstations and servers. Same as other detected devices, detected Windows workstations and servers are displayed in the Device inventory. The **Device inventory** pages on the sensor and on-premises management console show enriched data about Windows devices, including data about the Windows operating system and applications installed, patch-level data, open ports, and more.
+> [!NOTE]
+> This feature is in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+In addition to detecting OT devices on your network, use Defender for IoT to discover Microsoft Windows workstations and servers and enrich workstation and server data for devices already detected. As with other detected devices, discovered Windows workstations and servers are displayed in the Device inventory. The **Device inventory** pages on the sensor and on-premises management console show enriched data about Windows devices, including data about the Windows operating system and applications installed, patch-level data, open ports, and more.
This article describes how to use a Defender for IoT Windows-based WMI tool to get extended information from Windows devices, such as workstations, servers, and more. Run the WMI script on your Windows devices to get extended information, increasing your device inventory and security coverage. While you can also use [scheduled WMI scans](configure-windows-endpoint-monitoring.md) to obtain this data, scripts can be run locally for regulated networks with waterfalls and one-way elements if WMI connectivity isn't possible.
defender-for-iot How To Create Risk Assessment Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-risk-assessment-reports.md
Enrich your sensor with extra data to provide fuller risk assessment reports:
### Import firewall rules to an OT sensor Import firewall rules to your OT sensor for analysis in **Risk assessment** reports. Importing firewall rules is supported for the following firewalls:-- Checkpoint (firewall export to R77, *.zip file)-- Fortinet (configuration backup, *.conf file)-- Juniper (ScreenOS CLI configuration, *.txt file)+
+|Name |Description | File type |
+||||
+| **Check Point** | Firewall export to R77 | .ZIP |
+| **Fortinet** | Configuration backup | .CONF|
+|**Juniper** | ScreenOS CLI configuration | .TXT |
**To import firewall rules**:
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
This version includes:
**Supported until**: 03/2024 -- [Download WMI script from OT sensor console](detect-windows-endpoints-script.md#download-and-run-the-script)
+- [Enrich Windows workstation and server data with a local script (Public preview)](detect-windows-endpoints-script.md)
- [Automatically resolved notifications for operating system changes and device type changes](how-to-work-with-the-sensor-device-map.md#device-notification-responses) - [UI enhancements when uploading SSL/TLS certificates](how-to-deploy-certificates.md#deploy-a-certificate-on-an-ot-sensor)
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
For more information, see [Sensor setting reference](configure-sensor-settings-p
|Service area |Updates | ||| | **Documentation** | [End-to-end deployment guides](#end-to-end-deployment-guides) |
-| **OT networks** | **Sensor version 22.3.8**: <br>- [Proxy support for client SSL/TLS certificates](#proxy-support-for-client-ssltls-certificates) <br>- [Download WMI script from OT sensor console](#download-wmi-script-from-ot-sensor-console) <br>- [Automatically resolved OS notifications](#automatically-resolved-os-notifications) <br>- [UI enhancement when uploading SSL/TLS certificates](#ui-enhancement-when-uploading-ssltls-certificates) |
+| **OT networks** | **Sensor version 22.3.8**: <br>- [Proxy support for client SSL/TLS certificates](#proxy-support-for-client-ssltls-certificates) <br>- [Enrich Windows workstation and server data with a local script (Public preview)](#enrich-windows-workstation-and-server-data-with-a-local-script-public-preview) <br>- [Automatically resolved OS notifications](#automatically-resolved-os-notifications) <br>- [UI enhancement when uploading SSL/TLS certificates](#ui-enhancement-when-uploading-ssltls-certificates) |
### End-to-end deployment guides
A client SSL/TLS certificate is required for proxy servers that inspect SSL/TLS
For more information, see [Configure a proxy](connect-sensors.md#configure-proxy-settings-on-an-ot-sensor).
-### Download WMI script from OT sensor console
+### Enrich Windows workstation and server data with a local script (Public preview)
-The script used to configure OT sensors to detect Microsoft Windows workstations and servers is now available for download from the OT sensor itself.
-
-For more information, see [Download the script](detect-windows-endpoints-script.md#download-and-run-the-script)
+Use a local script, available from the OT sensor UI, to enrich Microsoft Windows workstation and server data on your OT sensor. The script runs as a utility to detect devices and enrich data, and can be run manually or using standard automation tools.
+
+For more information, see [Enrich Windows workstation and server data with a local script](detect-windows-endpoints-script.md).
### Automatically resolved OS notifications
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
$vSubNet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vNet -Name MySubnet
$service = New-AzDms -ResourceGroupName myResourceGroup `
   -ServiceName MyDMS `
   -Location EastUS `
- -Sku Basic_2vCores `
+ -Sku Basic_2vCores `
   -VirtualSubnetId $vSubNet.Id
```
foreach($DataBase in $Databases.Database_Name)
{
    $SourceDB = $DataBase
    $TargetDB = $DataBase
-
+ $selectedDbs += New-AzDmsSelectedDB -MigrateSqlServerSqlDbMi `
    -Name $SourceDB `
    -TargetDatabaseName $TargetDB `
Use the `New-AzDataMigrationTask` cmdlet to create and start a migration task.
The `New-AzDataMigrationTask` cmdlet expects the following parameters:
-* *TaskType*. Type of migration task to create for SQL Server to Azure SQL Managed Instance migration type *MigrateSqlServerSqlDbMi* is expected.
+* *TaskType*. Type of migration task to create. For a SQL Server to Azure SQL Managed Instance migration, *MigrateSqlServerSqlDbMi* is expected.
* *Resource Group Name*. Name of Azure resource group in which to create the task. * *ServiceName*. Azure Database Migration Service instance in which to create the task.
-* *ProjectName*. Name of Azure Database Migration Service project in which to create the task.
-* *TaskName*. Name of task to be created.
+* *ProjectName*. Name of Azure Database Migration Service project in which to create the task.
+* *TaskName*. Name of task to be created.
* *SourceConnection*. AzDmsConnInfo object representing source SQL Server connection.
* *TargetConnection*. AzDmsConnInfo object representing target Azure SQL Managed Instance connection.
* *SourceCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to source server.
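Putting these parameters together, an invocation might look like the following sketch. The connection objects, credentials, selected databases, and backup settings are placeholders built earlier in the workflow (for example with `New-AzDmsConnInfo`, `Get-Credential`, and `New-AzDmsSelectedDB`), and the backup-related parameter names are assumptions, so adapt them to your environment.

```azurepowershell-interactive
# Sketch only: create and start the offline migration task.
# $sourceConnInfo/$targetConnInfo, $sourceCred/$targetCred, $selectedDbs,
# $backupFileShare, and $blobSasUri are placeholders created earlier in this flow.
$migTask = New-AzDataMigrationTask -TaskType MigrateSqlServerSqlDbMi `
    -ResourceGroupName myResourceGroup `
    -ServiceName $service.Name `
    -ProjectName $project.Name `
    -TaskName myDMSTask `
    -SourceConnection $sourceConnInfo `
    -SourceCred $sourceCred `
    -TargetConnection $targetConnInfo `
    -TargetCred $targetCred `
    -SelectedDatabase $selectedDbs `
    -BackupFileShare $backupFileShare `
    -BackupBlobSasUri $blobSasUri
```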
To monitor the migration, perform the following tasks.
To combine migration details such as properties, state, and database information associated with the migration, use the following code snippet: ```powershell
- $CheckTask= Get-AzDataMigrationTask -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName $project.Name `
- -Name myDMSTask `
- -ResultType DatabaseLevelOutput `
- -Expand
+ $CheckTask = Get-AzDataMigrationTask -ResourceGroupName myResourceGroup `
+ -ServiceName $service.Name `
+ -ProjectName $project.Name `
+ -Name myDMSTask `
+ -ResultType DatabaseLevelOutput `
+ -Expand
    Write-Host "$($CheckTask.ProjectTask.Properties.Output)"
```
To monitor the migration, perform the following tasks.
    Write-Host "migration task running"
    }
    elseif ($CheckTask.ProjectTask.Properties.State -eq "Succeeded")
- {
+ {
    Write-Host "Migration task completed successfully"
    }
    elseif ($CheckTask.ProjectTask.Properties.State -eq "Failed" -or $CheckTask.ProjectTask.Properties.State -eq "FailedInputValidation" -or $CheckTask.ProjectTask.Properties.State -eq "Faulted")
- {
+ {
    Write-Host "Migration Task Failed"
    }
```
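To wait for the task to finish rather than checking once, a polling loop can wrap the same check. This is a sketch that reuses the states tested above and an arbitrary 30-second interval.

```azurepowershell-interactive
# Sketch: poll the migration task until it reaches one of the terminal states above.
do {
    Start-Sleep -Seconds 30
    $CheckTask = Get-AzDataMigrationTask -ResourceGroupName myResourceGroup `
        -ServiceName $service.Name `
        -ProjectName $project.Name `
        -Name myDMSTask `
        -ResultType DatabaseLevelOutput `
        -Expand
    Write-Host "Current state: $($CheckTask.ProjectTask.Properties.State)"
} while ($CheckTask.ProjectTask.Properties.State -notin @('Succeeded', 'Failed', 'FailedInputValidation', 'Faulted'))
```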
dns Dns Operations Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets.md
To add a record to an existing record set, follow the following three steps:
1. Get the existing record set
- ```azurepowershell-interactive
- $rs = Get-AzDnsRecordSet -Name www -ZoneName "contoso.com" -ResourceGroupName "MyResourceGroup" -RecordType A
- ```
+ ```azurepowershell-interactive
+ $rs = Get-AzDnsRecordSet -Name www -ZoneName "contoso.com" -ResourceGroupName "MyResourceGroup" -RecordType A
+ ```
1. Add the new record to the local record set.
- ```azurepowershell-interactive
- Add-AzDnsRecordConfig -RecordSet $rs -Ipv4Address "5.6.7.8"
- ```
+ ```azurepowershell-interactive
+ Add-AzDnsRecordConfig -RecordSet $rs -Ipv4Address "5.6.7.8"
+ ```
3. Update the changes so it reflects to the Azure DNS service.
- ```azurepowershell-interactive
- Set-AzDnsRecordSet -RecordSet $rs
- ```
+ ```azurepowershell-interactive
+ Set-AzDnsRecordSet -RecordSet $rs
+ ```
Using `Set-AzDnsRecordSet` *replaces* the existing record set in Azure DNS (and all records it contains) with the record set specified. [Etag checks](dns-zones-records.md#etags) are used to ensure concurrent changes aren't overwritten. You can use the optional `-Overwrite` switch to suppress these checks.
The process to remove a record from a record set is similar to the process to ad
1. Get the existing record set
- ```azurepowershell-interactive
- $rs = Get-AzDnsRecordSet -Name www -ZoneName "contoso.com" -ResourceGroupName "MyResourceGroup" -RecordType A
- ```
+ ```azurepowershell-interactive
+ $rs = Get-AzDnsRecordSet -Name www -ZoneName "contoso.com" -ResourceGroupName "MyResourceGroup" -RecordType A
+ ```
2. Remove the record from the local record set object. The record that's being removed must be an exact match with an existing record across all parameters.
- ```azurepowershell-interactive
- Remove-AzDnsRecordConfig -RecordSet $rs -Ipv4Address "5.6.7.8"
- ```
+ ```azurepowershell-interactive
+ Remove-AzDnsRecordConfig -RecordSet $rs -Ipv4Address "5.6.7.8"
+ ```
3. Commit the change back to the Azure DNS service. Use the optional `-Overwrite` switch to suppress [Etag checks](dns-zones-records.md#etags) for concurrent changes.
- ```azurepowershell-interactive
- Set-AzDnsRecordSet -RecordSet $Rs
- ```
+ ```azurepowershell-interactive
+ Set-AzDnsRecordSet -RecordSet $Rs
+ ```
Using the above sequence to remove the last record from a record set doesn't delete the record set; rather, it leaves an empty record set. To remove a record set entirely, see [Delete a record set](#delete-a-record-set).
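As a quick sketch, deleting the record set itself (rather than just emptying it) looks like this; the record set name, zone, and resource group follow the example values used above.

```azurepowershell-interactive
# Deleting the record set removes the set and any records it still contains.
Remove-AzDnsRecordSet -Name www -ZoneName "contoso.com" -ResourceGroupName "MyResourceGroup" -RecordType A
```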
energy-data-services Resources Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/resources-partner-solutions.md
This article highlights Microsoft partners with software solutions officially su
| Partner | Description | Website/Product link |
| - | -- | -- |
-| Bluware | Bluware enables you to explore the full value of seismic data for exploration, carbon capture, wind farms, and geothermal workflows. Bluware technology on Azure Data Manager for Energy is increasing workflow productivity utilizing the power of Azure. Bluware's flagship seismic deep learning solution, InteractivAI&trade; drastically improves the effectiveness of interpretation workflows. The interactive experience reduces seismic interpretation time by ten times from weeks to hours and provides full control over interpretation results. | [Bluware technologies on Azure](https://go.bluware.com/bluware-on-azure-markeplace) [Bluware Products and Evaluation Packages](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bluwarecorp1581537274084.bluwareazurelisting)|
+| Bluware | Bluware enables you to explore the full value of seismic data for exploration, carbon capture, wind farms, and geothermal workflows. Bluware technology on Azure Data Manager for Energy increases workflow productivity by using the power of Azure. Bluware's flagship seismic deep learning solution, InteractivAI&trade;, drastically improves the effectiveness of interpretation workflows. The interactive experience reduces seismic interpretation time by a factor of 10, from weeks to hours, and provides full control over interpretation results. | [Bluware technologies on Azure](https://go.bluware.com/bluware-on-azure-markeplace) [Bluware Products and Evaluation Packages](https://azuremarketplace.microsoft.com/marketplace/apps/bluwarecorp1581537274084.bluwareazurelisting)|
| Katalyst | Katalyst Data Management&reg; provides the only integrated, end-to-end subsurface data management solution for the oil and gas industry. Over 160 employees operate in North America, Europe, and Asia-Pacific, dedicated to enabling digital transformation and optimizing the value of geotechnical information for exploration, production, and M&A activity. |[Katalyst Data Management solution](https://www.katalystdm.com/seismic-news/katalyst-announces-sub-surface-data-management-solution-powered-by-microsoft-energy-data-services/) |
-| Interica | Interica OneView&trade; harnesses the power of application connectors to extract rich metadata from live projects discovered across the organization. IOV scans automatically discover content and extract detailed metadata at the sub-element level. Quickly and easily discover data across multiple file systems and data silos and determine which projects contain selected data objects to inform business decisions. Live data discovery enables businesses to see a holistic view of subsurface project landscapes for improved time to decisions, more efficient data search, and effective storage management. | [Accelerate Azure Data Manager for Energy adoption with Interica OneView&trade;](https://www.petrosys.com.au/interica-oneview-connecting-to-microsoft-data-services/) [Interica OneView&trade;](https://www.petrosys.com.au/assets/Interica_OneView_Accelerate_MEDS_Azure_adoption.pdf) [Interica OneView&trade; connecting to Microsoft Data Services](https://youtu.be/uPEOo3H01w4)|
-|Aspentech|AspenTech and Microsoft are working together to accelerate your digital transformation by optimizing assets to run safer, greener, longer, and faster. With Microsoft's end-to-end solutions and AspenTech's deep domain expertise, we provide capital-intensive industries with a scalable, trusted data environment that delivers the insights you need to optimize assets, performance, and reliability. As partners, we are innovating to achieve operational excellence and empowering the workforce by unlocking new efficiency, safety, sustainability, and profitability levels.|[Help your energy customers transform with new Microsoft Azure Data Manager for Energy](https://blogs.partner.microsoft.com/partner/help-your-energy-customers-transform-with-new-microsoft-energy-data-services/)|
-
+| Interica | Interica OneView&trade; harnesses the power of application connectors to extract rich metadata from live projects discovered across the organization. IOV scans automatically discover content and extract detailed metadata at the subelement level. Quickly and easily discover data across multiple file systems and data silos and determine which projects contain selected data objects to inform business decisions. Live data discovery enables businesses to see a holistic view of subsurface project landscapes for improved time to decisions, more efficient data search, and effective storage management. | [Accelerate Azure Data Manager for Energy adoption with Interica OneView&trade;](https://www.petrosys.com.au/interica-oneview-connecting-to-microsoft-data-services/) [Interica OneView&trade;](https://www.petrosys.com.au/assets/Interica_OneView_Accelerate_MEDS_Azure_adoption.pdf) [Interica OneView&trade; connecting to Microsoft Data Services](https://youtu.be/uPEOo3H01w4)|
+| Aspentech | AspenTech and Microsoft are working together to accelerate your digital transformation by optimizing assets to run safer, greener, longer, and faster. With Microsoft's end-to-end solutions and AspenTech's deep domain expertise, we provide capital-intensive industries with a scalable, trusted data environment that delivers the insights you need to optimize assets, performance, and reliability. As partners, we're innovating to achieve operational excellence and empowering the workforce by unlocking new efficiency, safety, sustainability, and profitability levels.| [Help your energy customers transform with the new Microsoft Azure Data Manager for Energy](https://blogs.partner.microsoft.com/partner/help-your-energy-customers-transform-with-new-microsoft-energy-data-services/) |
+| Accenture | As a leading partner, Accenture helps operators overcome the challenges of OSDU&trade; Data Platform implementation, mitigate the risks of deployment, and unlock the full potential of your data. Accenture has the unique capabilities to deliver on these promises and enable your value based on their deep industry knowledge and investments in accelerators like the Accenture OnePlatform. They have 14,000+ dedicated oil and gas skilled global professionals with 250+ OSDU&trade;-certified experts and strong ecosystem partnerships. | [Accenture and Microsoft drive digital transformation with OnePlatform on Microsoft Energy Data Services for OSDU&trade;](https://azure.microsoft.com/blog/accenture-and-microsoft-drive-digital-transformation-with-oneplatform-on-microsoft-energy-data-services-for-osdu/) |
+| INT | INT is among the first to use Microsoft Azure Data Manager for Energy. As an OSDU&trade; Forum member, INT offers IVAAP&trade;, a data visualization platform that allows geoscientists to access and interact with data easily. Dashboards can be created within Microsoft Azure using this platform. | [Microsoft and INT deploy IVAAP for OSDU Data Platform on Microsoft Energy Data Services](https://azure.microsoft.com/blog/microsoft-and-int-deploy-ivaap-for-osdu-data-platform-on-microsoft-energy-data-services/)|
+| RoQC | RoQC Data Management AS is a Software, Advisory, and Consultancy company specializing in Subsurface Data Management. RoQC's LogQA provides powerful native, machine learning-based QA and cleanup tools for log data once the data has been migrated to Microsoft Azure Data Manager for Energy, an enterprise-grade OSDU&trade; Data Platform on the Microsoft Cloud.| [RoQC and Microsoft simplify cloud migration with Microsoft Energy Data Services](https://azure.microsoft.com/blog/roqc-and-microsoft-simplify-cloud-migration-with-microsoft-energy-data-services/)|
+| EPAM | EPAM has industry knowledge, technical expertise, and strong relationships with software vendors. They offer world-class delivery through Microsoft Azure Data Manager for Energy. EPAM has also created the Document Extraction and Processing System (DEPS) accelerator, which enables customizable workflows for extracting and processing unstructured data from scanned or digitalized document formats. | [EPAM and Microsoft partner on data governance solutions with Microsoft Energy Data Services](https://azure.microsoft.com/blog/epam-and-microsoft-partner-on-data-governance-solutions-with-microsoft-energy-data-services/) |
+| Cegal | Cegal specializes in energy software solutions. Their cloud-based platform, [Cetegra](https://www.cegal.com/en/cloud-operations/cetegra), caters to digitalization and data management needs with a pay-as-you-go model. It uses Microsoft Cloud and supports Azure Data Manager for Energy. | [Cegal and Microsoft break down data silos and offer open collaboration with Microsoft Energy Data Services](https://azure.microsoft.com/blog/cegal-and-microsoft-break-down-data-silos-and-offer-open-collaboration-with-microsoft-energy-data-services/) |
+| Wipro | Wipro offers services and accelerators that use the WINS (Wipro INgestion Service) framework, which speeds up the time-to-market and allows for seamless execution of domain workflows with data stored in Microsoft Azure Data Manager for Energy with minimal effort. | [Wipro and Microsoft partner on services and accelerators for the new Microsoft Energy Data Services](https://azure.microsoft.com/blog/wipro-and-microsoft-partner-on-services-and-accelerators-for-the-new-microsoft-energy-data-services/)|
## Next steps To learn more about Azure Data Manager for Energy, visit
event-grid Auth0 Log Stream Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/auth0-log-stream-blob-storage.md
Last updated 10/12/2022
# Send Auth0 events to Azure Blob Storage
-This article shows you how to send Auth0 events to Azure Blob Storage via Azure Event Grid by using Azure Functions.
+This article shows you how to send Auth0 events to Azure Blob Storage via Azure Event Grid by using Azure Functions.
## Prerequisites - [Create an Azure Event Grid stream on Auth0](https://marketplace.auth0.com/integrations/azure-log-streaming).
This article shows you how to send Auth0 events to Azure Blob Storage via Azure
## Create an Azure function 1. Create an Azure function by following instructions from the **Create a local project** section of [Quickstart: Create a JavaScript function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-node.md?pivots=nodejs-model-v3).
- 1. Select **Azure Event Grid trigger** for the function template instead of **HTTP trigger** as mentioned in the quickstart.
- 1. Continue to follow the steps, but use the following **index.js** and **function.json** files.
-
- > [!IMPORTANT]
- > Update the **package.json** to include `@azure/storage-blob` as a dependency.
-
- **function.json**
- ```json
- {
- "bindings": [{
- "type": "eventGridTrigger",
- "name": "eventGridEvent",
- "direction": "in"
-
- },
- {
- "type": "blob",
- "name": "outputBlob",
- "path": "events/{rand-guid}.json",
- "connection": "OUTPUT_STORAGE_ACCOUNT",
- "direction": "out"
-
- }
- ]
- }
- ```
-
- **index.js**
-
- ```javascript
- // Event Grid always sends an array of data and may send more
- // than one event in the array. The runtime invokes this function
- // once for each array element, so we are always dealing with one.
- // See: https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-grid-trigger?tabs=
- module.exports = async function (context, eventGridEvent) {
- context.log(JSON.stringify(context.bindings));
- context.log(JSON.stringify(context.bindingData));
-
- context.bindings.outputBlob = JSON.stringify(eventGridEvent);
- };
- ```
+ 1. Select **Azure Event Grid trigger** for the function template instead of **HTTP trigger** as mentioned in the quickstart.
+ 1. Continue to follow the steps, but use the following **index.js** and **function.json** files.
+
+ > [!IMPORTANT]
+ > Update the **package.json** to include `@azure/storage-blob` as a dependency.
+
+ **function.json**
+
+ ```json
+ {
+ "bindings": [
+ {
+ "type": "eventGridTrigger",
+ "name": "eventGridEvent",
+ "direction": "in"
+ },
+ {
+ "type": "blob",
+ "name": "outputBlob",
+ "path": "events/{rand-guid}.json",
+ "connection": "OUTPUT_STORAGE_ACCOUNT",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
+
+ **index.js**
+
+ ```javascript
+ // Event Grid always sends an array of data and may send more
+ // than one event in the array. The runtime invokes this function
+ // once for each array element, so we are always dealing with one.
+ // See: https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-grid-trigger?tabs=
+ module.exports = async function (context, eventGridEvent) {
+ context.log(JSON.stringify(context.bindings));
+ context.log(JSON.stringify(context.bindingData));
+
+ context.bindings.outputBlob = JSON.stringify(eventGridEvent);
+ };
+ ```
+ 1. Create an Azure function app using instructions from [Quick function app create](../azure-functions/functions-develop-vs-code.md?tabs=csharp#quick-function-app-create). 1. Deploy your function to the function app on Azure using instructions from [Deploy project files](../azure-functions/functions-develop-vs-code.md?tabs=csharp#republish-project-files).
-
## Configure Azure function to use your blob storage 1. Configure your Azure function to use your storage account. 1. Select **Configuration** under **Settings** on the left menu.
- 1. On the **Application settings** page, select **+ New connection string** on the command bar.
+ 1. On the **Application settings** page, select **+ New connection string** on the command bar.
1. Set **Name** to **AzureWebJobsOUTPUT_STORAGE_ACCOUNT**.
- 1. Set **Value** to the connection string to the storage account that you copied to the clipboard in the previous step.
+ 1. Set **Value** to the connection string to the storage account that you copied to the clipboard in the previous step.
1. Select **OK**. ## Create event subscription for partner topic using function
This article shows you how to send Auth0 events to Azure Blob Storage via Azure
1. On the **Create Event Subscription** page, follow these steps: 1. Enter a **name** for the event subscription. 1. For **Endpoint type**, select **Azure Function**.
-
+ :::image type="content" source="./media/auth0-log-stream-blob-storage/select-endpoint-type.png" alt-text="Screenshot showing the Create Event Subscription page with Azure Functions selected as the endpoint type.":::
- 1. Click **Select an endpoint** to specify details about the function.
+ 1. Click **Select an endpoint** to specify details about the function.
1. On the **Select Azure Function** page, follow these steps. 1. Select the **Azure subscription** that contains the function. 1. Select the **resource group** that contains the function. 1. Select your **function app**. 1. Select your **Azure function**.
- 1. Then, select **Confirm Selection**.
-1. Now, back on the **Create Event Subscription** page, select **Create** to create the event subscription.
+ 1. Then, select **Confirm Selection**.
+1. Now, back on the **Create Event Subscription** page, select **Create** to create the event subscription.
1. After the event subscription is created successfully, you see the event subscription in the bottom pane of the **Event Grid Partner Topic - Overview** page.
-1. Select the link to your Azure function at the bottom of the page.
+1. Select the link to your Azure function at the bottom of the page.
1. On the **Azure Function** page, select **Monitor** and confirm data is successfully being sent. You may need to trigger logs from Auth0. ## Verify that logs are stored in the storage account 1. Locate your storage account in the Azure portal. 1. Select **Containers** under **Data Storage** on the left menu.
-1. Confirm that you see a container named **events**.
-1. Select the container and verify that your Auth0 logs are being stored.
+1. Confirm that you see a container named **events**.
+1. Select the container and verify that your Auth0 logs are being stored.
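If you prefer to check from the command line instead of the portal, a sketch like the following lists the blobs in the **events** container. The resource group and storage account names are placeholders for your own values.

```powershell
# List the captured Auth0 log blobs in the "events" container.
$ctx = (Get-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account-name>").Context
Get-AzStorageBlob -Container "events" -Context $ctx | Select-Object Name, LastModified
```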
> [!NOTE] > You can use steps in the article to handle events from other event sources too. For a generic example of sending Event Grid events to Azure Blob Storage or Azure Monitor Application Insights, see [this example on GitHub](https://github.com/awkwardindustries/azure-monitor-handler).
event-grid Event Hubs Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-hubs-integration.md
Title: 'Tutorial: Send Event Hubs data to data warehouse - Event Grid'
-description: Shows how to migrate Event Hubs captured data from Azure Blob Storage to Azure Synapse Analytics, specifically a dedicated SQL pool, using Azure Event Grid and Azure Functions.
+description: Shows how to migrate Event Hubs captured data from Azure Blob Storage to Azure Synapse Analytics, specifically a dedicated SQL pool, using Azure Event Grid and Azure Functions.
Last updated 11/14/2022 ms.devlang: csharp
# Tutorial: Migrate Event Hubs captured data from Azure Storage to Azure Synapse Analytics using Azure Event Grid and Azure Functions
-In this tutorial, you'll migrate Event Hubs captured data from Azure Blob Storage to Azure Synapse Analytics, specifically a dedicated SQL pool, using Azure Event Grid and Azure Functions.
+In this tutorial, you'll migrate Event Hubs captured data from Azure Blob Storage to Azure Synapse Analytics, specifically a dedicated SQL pool, using Azure Event Grid and Azure Functions.
:::image type="content" source="media/event-hubs-functions-synapse-analytics/overview.svg" alt-text="Application overview":::
-This diagram depicts the workflow of the solution you build in this tutorial:
+This diagram depicts the workflow of the solution you build in this tutorial:
1. Data sent to an Azure event hub is captured in an Azure blob storage.
-2. When the data capture is complete, an event is generated and sent to Azure Event Grid.
+2. When the data capture is complete, an event is generated and sent to Azure Event Grid.
3. Azure Event Grid forwards this event data to an Azure function app.
-4. The function app uses the blob URL in the event data to retrieve the blob from the storage.
-5. The function app migrates the blob data to an Azure Synapse Analytics.
+4. The function app uses the blob URL in the event data to retrieve the blob from the storage.
+5. The function app migrates the blob data to an Azure Synapse Analytics.
In this article, you take the following steps: > [!div class="checklist"] > - Deploy the required infrastructure for the tutorial > - Publish code to a Functions App
-> - Create an Event Grid subscription
+> - Create an Event Grid subscription
> - Stream sample data into Event Hubs > - Verify captured data in Azure Synapse Analytics ## Prerequisites To complete this tutorial, you must have: -- This article assumes that you are familiar with Event Grid and Event Hubs (especially the Capture feature). If you aren't familiar with Azure Event Grid, see [Introduction to Azure Event Grid](overview.md). To learn about the Capture feature of Azure Event Hubs, see [Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage](../event-hubs/event-hubs-capture-overview.md).
+- This article assumes that you are familiar with Event Grid and Event Hubs (especially the Capture feature). If you aren't familiar with Azure Event Grid, see [Introduction to Azure Event Grid](overview.md). To learn about the Capture feature of Azure Event Hubs, see [Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage](../event-hubs/event-hubs-capture-overview.md).
- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Visual Studio](https://www.visualstudio.com/vs/) with workloads for: .NET desktop development, Azure development, ASP.NET and web development, Node.js development, and Python development. - Download the [EventHubsCaptureEventGridDemo sample project](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) to your computer.
To complete this tutorial, you must have:
In this step, you deploy the required infrastructure with a [Resource Manager template](https://github.com/Azure/azure-docs-json-samples/blob/master/event-grid/EventHubsDataMigration.json). When you deploy the template, the following resources are created: * Event hub with the Capture feature enabled.
-* Storage account for the captured files.
+* Storage account for the captured files.
* App service plan for hosting the function app * Function app for processing the event * SQL Server for hosting the data warehouse
In this step, you deploy the required infrastructure with a [Resource Manager te
3. You see the Cloud Shell opened at the bottom of the browser. 1. If you're using the Cloud Shell for the first time: 1. If you see an option to select between **Bash** and **PowerShell**, select **Bash**.
-
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/launch-cloud-shell.png" alt-text="Screenshot of Cloud Shell with Bash selected.":::
- 1. Create a storage account by selecting **Create storage**. Azure Cloud Shell requires an Azure storage account to store some files.
+ 1. Create a storage account by selecting **Create storage**. Azure Cloud Shell requires an Azure storage account to store some files.
:::image type="content" source="media/event-hubs-functions-synapse-analytics/create-storage-cloud-shell.png" alt-text="Screenshot showing the creation of storage for Cloud Shell.":::
- 3. Wait until the Cloud Shell is initialized.
+ 3. Wait until the Cloud Shell is initialized.
:::image type="content" source="media/event-hubs-functions-synapse-analytics/cloud-shell-initialized.png" alt-text="Screenshot showing the Cloud Shell initialized.":::
-4. In the Cloud Shell, select **Bash** as shown in the above image, if it isn't already selected.
-1. Create an Azure resource group by running the following CLI command:
+4. In the Cloud Shell, select **Bash** as shown in the above image, if it isn't already selected.
+1. Create an Azure resource group by running the following CLI command:
1. Copy and paste the following command into the Cloud Shell window. Change the resource group name and location if you want. ```azurecli az group create -l eastus -n rgDataMigration ```
- 2. Press **ENTER**.
+ 2. Press **ENTER**.
Here's an example:
-
+ ```azurecli user@Azure:~$ az group create -l eastus -n rgDataMigration {
In this step, you deploy the required infrastructure with a [Resource Manager te
"tags": null } ```
-2. Deploy all the resources mentioned in the previous section (event hub, storage account, functions app, Azure Synapse Analytics) by running the following CLI command:
- 1. Copy and paste the command into the Cloud Shell window. Alternatively, you may want to copy/paste into an editor of your choice, set values, and then copy the command to the Cloud Shell.
+2. Deploy all the resources mentioned in the previous section (event hub, storage account, functions app, Azure Synapse Analytics) by running the following CLI command:
+ 1. Copy and paste the command into the Cloud Shell window. Alternatively, you may want to copy/paste into an editor of your choice, set values, and then copy the command to the Cloud Shell.
> [!IMPORTANT]
- > Specify values for the following entities before running the command:
+ > Specify values for the following entities before running the command:
> - Name of the resource group you created earlier.
- > - Name for the event hub namespace.
+ > - Name for the event hub namespace.
> - Name for the event hub. You can leave the value as it is (hubdatamigration). > - Name for the SQL server.
- > - Name of the SQL user and password.
+ > - Name of the SQL user and password.
> - Name for the database.
- > - Name of the storage account.
- > - Name for the function app.
+ > - Name of the storage account.
+ > - Name for the function app.
```azurecli
In this step, you deploy the required infrastructure with a [Resource Manager te
--template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/event-grid/EventHubsDataMigration.json \ --parameters eventHubNamespaceName=<event-hub-namespace> eventHubName=hubdatamigration sqlServerName=<sql-server-name> sqlServerUserName=<user-name> sqlServerPassword=<password> sqlServerDatabaseName=<database-name> storageName=<unique-storage-name> functionAppName=<app-name> ```
- 3. Press **ENTER** in the Cloud Shell window to run the command. This process may take a while since you're creating a bunch of resources. In the result of the command, ensure that there have been no failures.
-1. Close the Cloud Shell by selecting the **Cloud Shell** button in the portal (or) **X** button in the top-right corner of the Cloud Shell window.
+ 3. Press **ENTER** in the Cloud Shell window to run the command. This process may take a while since you're creating a bunch of resources. In the result of the command, ensure that there have been no failures.
+1. Close the Cloud Shell by selecting the **Cloud Shell** button in the portal (or) **X** button in the top-right corner of the Cloud Shell window.
### Verify that the resources are created
-1. In the Azure portal, select **Resource groups** on the left menu.
-2. Filter the list of resource groups by entering the name of your resource group in the search box.
+1. In the Azure portal, select **Resource groups** on the left menu.
+2. Filter the list of resource groups by entering the name of your resource group in the search box.
3. Select your resource group in the list. :::image type="content" source="media/event-hubs-functions-synapse-analytics/select-resource-group.png" alt-text="Screenshot showing the selection of your resource group.":::
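As an alternative to browsing the portal, a quick Azure PowerShell sketch can confirm what the template deployed; it assumes the `rgDataMigration` resource group name used in the earlier example.

```powershell
# List every resource the template created in the resource group.
Get-AzResource -ResourceGroupName "rgDataMigration" | Format-Table Name, ResourceType
```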
In this step, you deploy the required infrastructure with a [Resource Manager te
### Create a table in Azure Synapse Analytics In this section, you create a table in the dedicated SQL pool you created earlier.
-1. In the list of resources in the resource group, select your **dedicated SQL pool**.
-2. On the **Dedicated SQL pool** page, in the **Common Tasks** section on the left menu, select **Query editor (preview)**.
+1. In the list of resources in the resource group, select your **dedicated SQL pool**.
+2. On the **Dedicated SQL pool** page, in the **Common Tasks** section on the left menu, select **Query editor (preview)**.
:::image type="content" source="media/event-hubs-functions-synapse-analytics/sql-data-warehouse-page.png" alt-text="Screenshot showing the selection of Query Editor on a Dedicated SQL pool page in the Azure portal.":::
-2. Enter the name of **user** and **password** for the SQL server, and select **OK**. If you see a message about allowing your client to access the SQL server, select **Allowlist IP &lt;your IP Address&gt; on server &lt;your SQL server&gt;**, and then select **OK**.
-1. In the query window, copy and run the following SQL script:
+2. Enter the name of **user** and **password** for the SQL server, and select **OK**. If you see a message about allowing your client to access the SQL server, select **Allowlist IP &lt;your IP Address&gt; on server &lt;your SQL server&gt;**, and then select **OK**.
+1. In the query window, copy and run the following SQL script:
```sql CREATE TABLE [dbo].[Fact_WindTurbineMetrics] (
- [DeviceId] nvarchar(50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
- [MeasureTime] datetime NULL,
- [GeneratedPower] float NULL,
- [WindSpeed] float NULL,
+ [DeviceId] nvarchar(50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
+ [MeasureTime] datetime NULL,
+ [GeneratedPower] float NULL,
+ [WindSpeed] float NULL,
[TurbineSpeed] float NULL ) WITH (CLUSTERED COLUMNSTORE INDEX, DISTRIBUTION = ROUND_ROBIN); ``` :::image type="content" source="media/event-hubs-functions-synapse-analytics/run-sql-query.png" alt-text="Screenshot showing the query editor.":::
-5. Keep this tab or window open so that you can verify that the data is created at the end of the tutorial.
+5. Keep this tab or window open so that you can verify that the data is created at the end of the tutorial.
## Publish the Azure Functions app
-First, get the publish profile for the Functions app from the Azure portal. Then, use the publish profile to publish the Azure Functions project or app from Visual Studio.
+First, get the publish profile for the Functions app from the Azure portal. Then, use the publish profile to publish the Azure Functions project or app from Visual Studio.
### Get the publish profile
-1. On the **Resource Group** page, select the **Azure Functions app** in the list of resources.
+1. On the **Resource Group** page, select the **Azure Functions app** in the list of resources.
:::image type="content" source="media/event-hubs-functions-synapse-analytics/select-function-app.png" alt-text="Screenshot showing the selection of the function app in the list of resources for a resource group."::: 1. On the **Function App** page for your app, select **Get publish profile** on the command bar. :::image type="content" source="media/event-hubs-functions-synapse-analytics/get-publish-profile.png" alt-text="Screenshot showing the selection of the **Get Publish Profile** button on the command bar of the function app page.":::
-1. Download and save the file into the **FunctionEGDDumper** subfolder of the **EventHubsCaptureEventGridDemo** folder.
+1. Download and save the file into the **FunctionEGDDumper** subfolder of the **EventHubsCaptureEventGridDemo** folder.
### Use the publish profile to publish the Functions app 1. Launch Visual Studio.
-2. Open the **EventHubsCaptureEventGridDemo.sln** solution that you downloaded from the [GitHub](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) as part of the prerequisites. You can find it in the `/samples/e2e/EventHubsCaptureEventGridDemo` folder.
+2. Open the **EventHubsCaptureEventGridDemo.sln** solution that you downloaded from the [GitHub](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) as part of the prerequisites. You can find it in the `/samples/e2e/EventHubsCaptureEventGridDemo` folder.
3. In Solution Explorer, right-click **FunctionEGDWDumper** project, and select **Publish**.
-4. In the following screen, select **Start** or **Add a publish profile**.
-5. In the **Publish** dialog box, select **Import Profile** for **Target**, and select **Next**.
+4. In the following screen, select **Start** or **Add a publish profile**.
+5. In the **Publish** dialog box, select **Import Profile** for **Target**, and select **Next**.
:::image type="content" source="media/event-hubs-functions-synapse-analytics/import-profile.png" alt-text="Screenshot showing the selection **Import Profile** on the **Publish** dialog box.":::
-1. On the **Import profile** tab, select the publish settings file that you saved earlier in the **FunctionEGDWDumper** folder, and then select **Finish**.
+1. On the **Import profile** tab, select the publish settings file that you saved earlier in the **FunctionEGDWDumper** folder, and then select **Finish**.
1. When Visual Studio has configured the profile, select **Publish**. Confirm that the publishing succeeded.
-2. In the web browser that has the **Azure Function** page open, select **Functions** on the left menu. Confirm that the **EventGridTriggerMigrateData** function shows up in the list. If you don't see it, try publishing from Visual Studio again, and then refresh the page in the portal.
+2. In the web browser that has the **Azure Function** page open, select **Functions** on the left menu. Confirm that the **EventGridTriggerMigrateData** function shows up in the list. If you don't see it, try publishing from Visual Studio again, and then refresh the page in the portal.
- :::image type="content" source="media/event-hubs-functions-synapse-analytics/confirm-function-creation.png" alt-text="Screenshot showing the confirmation of function creation.":::
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/confirm-function-creation.png" alt-text="Screenshot showing the confirmation of function creation.":::
After publishing the function, you're ready to subscribe to the event. ## Subscribe to the event 1. In a new tab or new window of a web browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the Azure portal, select **Resource groups** on the left menu.
-3. Filter the list of resource groups by entering the name of your resource group in the search box.
+2. In the Azure portal, select **Resource groups** on the left menu.
+3. Filter the list of resource groups by entering the name of your resource group in the search box.
4. Select your resource group in the list. 1. Select the **Event Hubs namespace** from the list of resources.
-1. On the **Event Hubs Namespace** page, select **Events** on the left menu, and then select **+ Event Subscription** on the toolbar.
+1. On the **Event Hubs Namespace** page, select **Events** on the left menu, and then select **+ Event Subscription** on the toolbar.
:::image type="content" source="media/event-hubs-functions-synapse-analytics/event-hub-add-subscription-link.png" alt-text="Screenshot of the Events page for an Event Hubs namespace with Add event subscription link selected. "::: 1. On the **Create Event Subscription** page, follow these steps:
- 1. Enter a name for the **event subscription**.
+ 1. Enter a name for the **event subscription**.
1. Enter a name for the **system topic**. A system topic provides an endpoint for the sender to send events. For more information, see [System topics](system-topics.md) 1. For **Endpoint Type**, select **Azure Function**. 1. For **Endpoint**, select the link. 1. On the **Select Azure Function** page, follow these steps if they aren't automatically filled.
- 1. Select the Azure subscription that has the Azure function.
- 1. Select the resource group for the function.
+ 1. Select the Azure subscription that has the Azure function.
+ 1. Select the resource group for the function.
1. Select the function app.
- 1. Select the deployment slot.
- 1. Select the function **EventGridTriggerMigrateData**.
+ 1. Select the deployment slot.
+ 1. Select the function **EventGridTriggerMigrateData**.
1. On the **Select Azure Function** page, select **Confirm Selection**.
- 1. Then, back on the **Create Event Subscription** page, select **Create**.
-
+ 1. Then, back on the **Create Event Subscription** page, select **Create**.
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/event-subscription-select-function.png" alt-text="Screenshot of the Create an event subscription page." lightbox="media/event-hubs-functions-synapse-analytics/event-subscription-select-function.png":::
-1. Verify that the event subscription is created. Switch to the **Event Subscriptions** tab on the **Events** page for the Event Hubs namespace.
-
+1. Verify that the event subscription is created. Switch to the **Event Subscriptions** tab on the **Events** page for the Event Hubs namespace.
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/confirm-event-subscription.png" alt-text="Screenshot showing the Event Subscriptions tab on the Events page." lightbox="media/event-hubs-functions-synapse-analytics/confirm-event-subscription.png":::
-1. Select the App Service plan (not the App Service) in the list of resources in the resource group.
+1. Select the App Service plan (not the App Service) in the list of resources in the resource group.
## Run the app to generate data You've finished setting up your event hub, dedicated SQL pool (formerly SQL Data Warehouse), Azure function app, and event subscription. Before running an application that generates data for the event hub, you need to configure a few values.
-1. In the Azure portal, navigate to your resource group as you did earlier.
+1. In the Azure portal, navigate to your resource group as you did earlier.
2. Select the Event Hubs namespace. 3. In the **Event Hubs Namespace** page, select **Shared access policies** on the left menu.
-4. Select **RootManageSharedAccessKey** in the list of policies.
+4. Select **RootManageSharedAccessKey** in the list of policies.
- :::image type="content" source="media/event-hubs-functions-synapse-analytics/event-hub-namespace-shared-access-policies.png" alt-text="Screenshot showing the Shared access policies page for an Event Hubs namespace.":::
-1. Select the copy button next to the **Connection string-primary key** text box.
-1. Go back to your Visual Studio solution.
-1. Right-click **WindTurbineDataGenerator** project, and select **Set as Startup project**.
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/event-hub-namespace-shared-access-policies.png" alt-text="Screenshot showing the Shared access policies page for an Event Hubs namespace.":::
+1. Select the copy button next to the **Connection string-primary key** text box.
+1. Go back to your Visual Studio solution.
+1. Right-click **WindTurbineDataGenerator** project, and select **Set as Startup project**.
1. In the WindTurbineDataGenerator project, open **program.cs**.
-1. Replace `<EVENT HUBS NAMESPACE CONNECTION STRING>` with the connection string you copied from the portal.
-1. If you've used a different name for the event hub other than `hubdatamigration`, replace `<EVENT HUB NAME>` with the name of the event hub.
+1. Replace `<EVENT HUBS NAMESPACE CONNECTION STRING>` with the connection string you copied from the portal.
+1. If you've used a different name for the event hub other than `hubdatamigration`, replace `<EVENT HUB NAME>` with the name of the event hub.
```cs private const string EventHubConnectionString = "Endpoint=sb://demomigrationnamespace.servicebus.windows.net/..."; private const string EventHubName = "hubdatamigration"; ```
-6. Build the solution. Run the **WindTurbineGenerator.exe** application.
+6. Build the solution. Run the **WindTurbineGenerator.exe** application.
7. After a couple of minutes, in the other browser tab where you have the query window open, query the table in your data warehouse for the migrated data. ```sql
- select * from [dbo].[Fact_WindTurbineMetrics]
+ select * from [dbo].[Fact_WindTurbineMetrics]
``` :::image type="content" source="media/event-hubs-functions-synapse-analytics/query-results.png" alt-text="Screenshot showing the query results."::: ## Monitor the solution
-This section helps you with monitoring or troubleshooting the solution.
+This section helps you with monitoring or troubleshooting the solution.
### View captured data in the storage account
-1. Navigate to the resource group and select the storage account used for capturing event data.
+1. Navigate to the resource group and select the storage account used for capturing event data.
1. On the **Storage account** page, select **Storage Explorer (preview)** on the left menu.
-1. Expand **BLOB CONTAINERS**, and select **windturbinecapture**.
-1. Open the folder named same as your **Event Hubs namespace** in the right pane.
-1. Open the folder named same as your event hub (**hubdatamigration**).
+1. Expand **BLOB CONTAINERS**, and select **windturbinecapture**.
+1. Open the folder named the same as your **Event Hubs namespace** in the right pane.
+1. Open the folder named the same as your event hub (**hubdatamigration**).
1. Drill through the folders and you see the AVRO files. Here's an example: :::image type="content" source="media/event-hubs-functions-synapse-analytics/storage-captured-file.png" alt-text="Screenshot showing the captured file in the storage." lightbox="media/event-hubs-functions-synapse-analytics/storage-captured-file.png":::
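You can also list the captured Avro blobs from Azure PowerShell rather than Storage Explorer; this sketch assumes the `rgDataMigration` resource group and uses a placeholder for the storage account name you chose during deployment.

```powershell
# Show the most recently captured Avro files in the windturbinecapture container.
$ctx = (Get-AzStorageAccount -ResourceGroupName "rgDataMigration" -Name "<unique-storage-name>").Context
Get-AzStorageBlob -Container "windturbinecapture" -Context $ctx |
    Sort-Object LastModified -Descending |
    Select-Object -First 10 Name, Length, LastModified
```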
-
+ ### Verify that the Event Grid trigger invoked the function
-1. Navigate to the resource group and select the function app.
+1. Navigate to the resource group and select the function app.
1. Select **Functions** on the left menu.
-1. Select the **EventGridTriggerMigrateData** function from the list.
-1. On the **Function** page, select **Monitor** on the left menu.
-1. Select **Configure** to configure application insights to capture invocation logs.
-1. Create a new **Application Insights** resource or use an existing resource.
-1. Navigate back to the **Monitor** page for the function.
-1. Confirm that the client application (**WindTurbineDataGenerator**) that's sending the events is still running. If not, run the app.
-1. Wait for a few minutes (5 minutes or more) and select the **Refresh** button to see function invocations.
+1. Select the **EventGridTriggerMigrateData** function from the list.
+1. On the **Function** page, select **Monitor** on the left menu.
+1. Select **Configure** to configure application insights to capture invocation logs.
+1. Create a new **Application Insights** resource or use an existing resource.
+1. Navigate back to the **Monitor** page for the function.
+1. Confirm that the client application (**WindTurbineDataGenerator**) that's sending the events is still running. If not, run the app.
+1. Wait for a few minutes (5 minutes or more) and select the **Refresh** button to see function invocations.
:::image type="content" source="media/event-hubs-functions-synapse-analytics/function-invocations.png" alt-text="Screenshot showing the Function invocations."::: 1. Select an invocation to see details.
This section helps you with monitoring or troubleshooting the solution.
```json {
- "topic": "/subscriptions/<AZURE SUBSCRIPTION ID>/resourcegroups/rgDataMigration/providers/Microsoft.EventHub/namespaces/spehubns1207",
- "subject": "hubdatamigration",
- "eventType": "Microsoft.EventHub.CaptureFileCreated",
- "id": "4538f1a5-02d8-4b40-9f20-36301ac976ba",
- "data": {
- "fileUrl": "https://spehubstorage1207.blob.core.windows.net/windturbinecapture/spehubns1207/hubdatamigration/0/2020/12/07/21/49/12.avro",
- "fileType": "AzureBlockBlob",
- "partitionId": "0",
- "sizeInBytes": 473444,
- "eventCount": 2800,
- "firstSequenceNumber": 55500,
- "lastSequenceNumber": 58299,
- "firstEnqueueTime": "2020-12-07T21:49:12.556Z",
- "lastEnqueueTime": "2020-12-07T21:50:11.534Z"
- },
- "dataVersion": "1",
- "metadataVersion": "1",
- "eventTime": "2020-12-07T21:50:12.7065524Z"
+ "topic": "/subscriptions/<AZURE SUBSCRIPTION ID>/resourcegroups/rgDataMigration/providers/Microsoft.EventHub/namespaces/spehubns1207",
+ "subject": "hubdatamigration",
+ "eventType": "Microsoft.EventHub.CaptureFileCreated",
+ "id": "4538f1a5-02d8-4b40-9f20-36301ac976ba",
+ "data": {
+ "fileUrl": "https://spehubstorage1207.blob.core.windows.net/windturbinecapture/spehubns1207/hubdatamigration/0/2020/12/07/21/49/12.avro",
+ "fileType": "AzureBlockBlob",
+ "partitionId": "0",
+ "sizeInBytes": 473444,
+ "eventCount": 2800,
+ "firstSequenceNumber": 55500,
+ "lastSequenceNumber": 58299,
+ "firstEnqueueTime": "2020-12-07T21:49:12.556Z",
+ "lastEnqueueTime": "2020-12-07T21:50:11.534Z"
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2020-12-07T21:50:12.7065524Z"
} ```
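If you save an invocation's event payload to a local file, a small sketch like this pulls the captured blob's URL out of the `data` section shown above; the `event.json` path is a placeholder.

```powershell
# Read a saved CaptureFileCreated event and print the blob URL and event count.
$capture = (Get-Content -Raw -Path .\event.json | ConvertFrom-Json).data
$capture.fileUrl
$capture.eventCount
```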
In the browser tab where you have the query window open, query the table in your
## Next steps * For more information about setting up and running the sample, see [Event Hubs Capture and Event Grid sample](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo).
-* In this tutorial, you created an event subscription for the `CaptureFileCreated` event. For more information about this event and all the events supported by Azure Blob Storage, see [Azure Event Hubs as an Event Grid source](event-schema-event-hubs.md).
+* In this tutorial, you created an event subscription for the `CaptureFileCreated` event. For more information about this event and all the events supported by Azure Blob Storage, see [Azure Event Hubs as an Event Grid source](event-schema-event-hubs.md).
* To learn more about the Event Hubs Capture feature, see [Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage](../event-hubs/event-hubs-capture-overview.md).
event-grid Event Schema Event Grid Namespace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-event-grid-namespace.md
Azure Event Grid namespace (Preview) emits the following event types:
| - | -- |
| Microsoft.EventGrid.MQTTClientSessionConnected | Published when an MQTT client's session is connected to Event Grid. |
| Microsoft.EventGrid.MQTTClientSessionDisconnected | Published when an MQTT client's session is disconnected from Event Grid. |
+| Microsoft.EventGrid.MQTTClientCreatedOrUpdated | Published when an MQTT client is created or updated in the Event Grid Namespace. |
+| Microsoft.EventGrid.MQTTClientDeleted | Published when an MQTT client is deleted from the Event Grid Namespace. |
## Example event
This sample event shows the schema of an event raised when an MQTT client's se
```json [{
- "id": "6f1b70b8-557a-4865-9a1c-94cc3def93db",
- "eventTime": "2023-04-28T00:49:04.0211141Z",
+ "id": "5249c38a-a048-46dd-8f60-df34fcdab06c",
+ "eventTime": "2023-07-29T01:23:49.6454046Z",
"eventType": "Microsoft.EventGrid.MQTTClientSessionConnected",
- "topic": "/subscriptions/ 00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
- "subject": "/clients/device1/sessions/session1",
+ "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1/sessions/session1",
"dataVersion": "1", "metadataVersion": "1", "data": { "namespaceName": "myns",
- "clientAuthenticationName": "device1",
+ "clientAuthenticationName": "client1",
"clientSessionName": "session1", "sequenceNumber": 1 }
This sample event shows the schema of an event raised when an MQTT client's se
```json [{
- "id": "6f1b70b8-557a-4865-9a1c-94cc3def93db",
- "eventTime": "2023-04-28T00:49:04.0211141Z",
- "eventType": "Microsoft.EventGrid.MQTTClientSessionConnected",
- "topic": "/subscriptions/ 00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
- "subject": "/clients/device1/sessions/session1",
+ "id": "e30e5174-787d-4e19-8812-580148bfcf7b",
+ "eventTime": "2023-07-29T01:27:40.2446871Z",
+ "eventType": "Microsoft.EventGrid.MQTTClientSessionDisconnected",
+ "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1/sessions/session1",
"dataVersion": "1", "metadataVersion": "1", "data": { "namespaceName": "myns",
- "clientAuthenticationName": "device1",
+ "clientAuthenticationName": "client1",
"clientSessionName": "session1",
- "sequenceNumber": 1
+ "sequenceNumber": 1,
+ "disconnectionReason": "ClientInitiatedDisconnect"
+ }
+}]
+```
+This sample event shows the schema of an event raised when an MQTT client is created or updated in the Event Grid Namespace:
+
+```json
+[{
+ "id": "383d1562-c95f-4095-936c-688e72c6b2bb",
+ "eventTime": "2023-07-29T01:14:35.8928724Z",
+ "eventType": "Microsoft.EventGrid.MQTTClientCreatedOrUpdated",
+ "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1",
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "data": {
+ "createdOn": "2023-07-29T01:14:34.2048108Z",
+ "updatedOn": "2023-07-29T01:14:34.2048108Z",
+ "namespaceName": "myns",
+ "clientName": "client1",
+ "clientAuthenticationName": "client1",
+ "state": "Enabled",
+ "attributes": {
+ "attribute1": "value1"
+ }
+ }
+}]
+```
+This sample event shows the schema of an event raised when an MQTT client is deleted from the Event Grid Namespace:
+
+```json
+[{
+ "id": "2a93aaf9-66c2-4f8e-9ba3-8d899c10bf17",
+ "eventTime": "2023-07-29T01:30:52.5620566Z",
+ "eventType": "Microsoft.EventGrid.MQTTClientDeleted",
+ "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1",
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "data": {
+ "clientName": "client1",
+ "clientAuthenticationName": "client1",
+ "namespaceName": "myns"
} }] ```
This sample event shows the schema of an event raised when an MQTT client's sess
```json [{
- "id": "6f1b70b8-557a-4865-9a1c-94cc3def93db",
- "time": "2023-04-28T00:49:04.0211141Z",
+ "specversion": "1.0",
+ "id": "5249c38a-a048-46dd-8f60-df34fcdab06c",
+ "time": "2023-07-29T01:23:49.6454046Z",
"type": "Microsoft.EventGrid.MQTTClientSessionConnected", "source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
- "subject": "/clients/device1/sessions/session1",
- "specversion": "1.0",
+ "subject": "clients/client1/sessions/session1",
"data": { "namespaceName": "myns",
- "clientAuthenticationName": "device1",
+ "clientAuthenticationName": "client1",
"clientSessionName": "session1", "sequenceNumber": 1 }
This sample event shows the schema of an event raised when an MQTT client's se
```json [{
- "id": "3b93123d-5427-4dec-88d5-3b6da87b0f64",
- "time": "2023-04-28T00:51:28.6037385Z",
- "type": "Microsoft.EventGrid.MQTTClientSessionDisconnected",
- "source": "/subscriptions/ 00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
- "subject": "/clients/device1/sessions/session1",
"specversion": "1.0",
+ "id": "e30e5174-787d-4e19-8812-580148bfcf7b",
+ "time": "2023-07-29T01:27:40.2446871Z",
+ "type": "Microsoft.EventGrid.MQTTClientSessionDisconnected",
+ "source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1/sessions/session1",
"data": { "namespaceName": "myns",
- "clientAuthenticationName": "device1",
+ "clientAuthenticationName": "client1",
"clientSessionName": "session1", "sequenceNumber": 1,
- "disconnectionReason": "ClientError"
+ "disconnectionReason": "ClientInitiatedDisconnect"
} }] ```
+This sample event shows the schema of an event raised when an MQTT client is created or updated in the Event Grid Namespace:
+```json
+[{
+ "specversion": "1.0",
+ "id": "383d1562-c95f-4095-936c-688e72c6b2bb",
+ "time": "2023-07-29T01:14:35.8928724Z",
+ "type": "Microsoft.EventGrid.MQTTClientCreatedOrUpdated",
+ "source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1",
+ "data": {
+ "createdOn": "2023-07-29T01:14:34.2048108Z",
+ "updatedOn": "2023-07-29T01:14:34.2048108Z",
+ "namespaceName": "myns",
+ "clientName": "client1",
+ "clientAuthenticationName": "client1",
+ "state": "Enabled",
+ "attributes": {
+ "attribute1": "value1"
+ }
+ }
+}]
+```
+This sample event shows the schema of an event raised when an MQTT client is deleted from the Event Grid Namespace:
+
+```json
+[{
+ "specversion": "1.0",
+ "id": "2a93aaf9-66c2-4f8e-9ba3-8d899c10bf17",
+ "time": "2023-07-29T01:30:52.5620566Z",
+ "type": "Microsoft.EventGrid.MQTTClientDeleted",
+ "source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1",
+ "data": {
+ "namespaceName": "myns",
+ "clientName": "client1",
+ "clientAuthenticationName": "client1"
+ }
+}]
+```
+ ### Event properties # [Event Grid event schema](#tab/event-grid-event-schema)
All events contain the same top-level data:
-For all Event Grid namespace events, the data object contains the following properties:
+The data object contains the following properties:
| Property | Type | Description |
| -- | - | -- |
For all Event Grid namespace events, the data object contains the following prop
| `clientAuthenticationName` | string | Unique identifier for the MQTT client that the client presents to the service for authentication. This case-sensitive string can be up to 128 characters long, and supports UTF-8 characters. |
| `clientSessionName` | string | Unique identifier for the MQTT client's session. This case-sensitive string can be up to 128 characters long, and supports UTF-8 characters. |
| `sequenceNumber` | string | A number that helps indicate the order of MQTT client session connected or disconnected events. The latest event has a sequence number that is higher than the previous event. |
-For the **MQTT Client Session Disconnected** event, the data object also contains the following property:
-
-| Property | Type | Description |
-| -- | - | -- |
| `disconnectionReason` | string | Reason for the disconnection of the MQTT client's session. The value could be one of the values in the disconnection reasons table. |
+| `createdOn` | string | The time the client resource was created, based on the provider's UTC time. |
+| `updatedOn` | string | The time the client resource was last updated, based on the provider's UTC time. If the client resource was never updated, this value is identical to the value of the `createdOn` property. |
+| `clientName` | string | The name of the client resource in the Event Grid namespace. |
+| `state` | string | The configured state of the client. The value could be Enabled or Disabled.|
+| `attributes` | string | The array of key-value pair attributes that are assigned to the client resource.|
### Disconnection reasons:
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/event-handlers.md
An event handler is any system that exposes an endpoint and is the destination f
The way to configure Event Grid to send events to a destination is through the creation of an event subscription. It can be done through [Azure CLI](/cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create), [management SDK](../sdk-overview.md#management-sdks), or using direct HTTPs calls using the [2020-10-15-preview API](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update) version.
-In general, Event Grid on Kubernetes can send events to any destination via **Webhooks**. Webhooks are HTTP(s) endpoints exposed by a service or workload to which Event Grid has access. The webhook can be a workload hosted in the same cluster, in the same network space, on the cloud, on-premises or anywhere that Event Grid can reach.
+In general, Event Grid on Kubernetes can send events to any destination via **Webhooks**. Webhooks are HTTP(s) endpoints exposed by a service or workload to which Event Grid has access. The webhook can be a workload hosted in the same cluster, in the same network space, on the cloud, on-premises or anywhere that Event Grid can reach.
[!INCLUDE [preview-feature-note.md](../includes/preview-feature-note.md)] Through Webhooks, Event Grid supports the following destinations **hosted on a Kubernetes cluster**:
-* Azure App Service on Kubernetes with Azure Arc.
-* Azure Functions on Kubernetes with Azure Arc.
+* Azure App Service on Kubernetes with Azure Arc.
+* Azure Functions on Kubernetes with Azure Arc.
* Azure Logic Apps on Kubernetes with Azure Arc. In addition to Webhooks, Event Grid on Kubernetes can send events to the following destinations **hosted on Azure**:
Event Grid on Kubernetes offers a good level of feature parity with Azure Event
2. [Azure Event Grid trigger for Azure Functions](../../azure-functions/functions-bindings-event-grid-trigger.md?tabs=csharp%2Cconsole) isn't supported. You can use a WebHook destination type to deliver events to Azure Functions. 3. There's no [dead letter location](../manage-event-delivery.md#set-dead-letter-location) support. That means that you can't use ``properties.deadLetterDestination`` in your event subscription payload. 4. Azure Relay's Hybrid Connections as a destination isn't supported yet.
-5. Only CloudEvents schema is supported. The supported schema value is "[CloudEventSchemaV1_0](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update#eventdeliveryschema)". Cloud Events schema is extensible and based on open standards.
+5. Only CloudEvents schema is supported. The supported schema value is "[CloudEventSchemaV1_0](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update#eventdeliveryschema)". Cloud Events schema is extensible and based on open standards.
6. Labels ([properties.labels](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update#request-body)) aren't applicable to Event Grid on Kubernetes. Hence, they aren't available. 7. [Delivery with resource identity](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update#deliverywithresourceidentity) isn't supported. So, all properties for [Event Subscription Identity](/rest/api/eventgrid/controlplane-version2023-06-01-preview/event-subscriptions/create-or-update#eventsubscriptionidentity) aren't supported. 8. [Destination endpoint validation](../webhook-event-delivery.md#endpoint-validation-with-event-grid-events) isn't supported yet.
To publish to a WebHook endpoint, set the `endpointType` to `WebHook` and provid
To publish to an Azure Event Grid cloud endpoint, set the `endpointType` to `WebHook` and provide:
-* **endpointUrl**: Azure Event Grid topic URL in the cloud with the API version parameter set to **2018-01-01** and `aeg-sas-key` set to the URL encoded SAS key.
+* **endpointUrl**: Azure Event Grid topic URL in the cloud with the API version parameter set to **2018-01-01** and `aeg-sas-key` set to the URL encoded SAS key.
```json {
- "properties": {
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your-event-grid-cloud-topic-endpoint-url>?api-version=2018-01-01&aeg-sas-key=urlencoded(sas-key-value)"
- }
- }
- }
+ "properties": {
+ "destination": {
+ "endpointType": "WebHook",
+ "properties": {
+ "endpointUrl": "<your-event-grid-cloud-topic-endpoint-url>?api-version=2018-01-01&aeg-sas-key=urlencoded(sas-key-value)"
+ }
+ }
+ }
} ```
To publish to a Service Bus topic, set the `endpointType` to `serviceBusTopic` a
* **resourceId**: resource ID for the specific Service Bus topic. ```json
- {
+ {
+ "properties": {
+ "destination": {
+ "endpointType": "serviceBusTopic",
"properties": {
- "destination": {
- "endpointType": "serviceBusTopic",
- "properties": {
- "resourceId": "<Azure Resource ID of your Service Bus topic>"
- }
- }
+ "resourceId": "<Azure Resource ID of your Service Bus topic>"
} }
+ }
+ }
``` ## Storage Queues
To publish to a Storage Queue, set the `endpointType` to `storageQueue` and pro
* **resourceID**: Azure resource ID of the storage account that contains the queue. ```json
- {
+ {
+ "properties": {
+ "destination": {
+ "endpointType": "storageQueue",
"properties": {
- "destination": {
- "endpointType": "storageQueue",
- "properties": {
- "queueName": "<your-storage-queue-name>",
- "resourceId": "<Azure Resource ID of your Storage account>"
- }
- }
+ "queueName": "<your-storage-queue-name>",
+ "resourceId": "<Azure Resource ID of your Storage account>"
} }
+ }
+ }
``` ## Next steps
-* Add [filter configuration](filter-events.md) to your event subscription to select the events to be delivered.
+
+* Add [filter configuration](filter-events.md) to your event subscription to select the events to be delivered.
* To learn about schemas supported by Event Grid on Azure Arc for Kubernetes, see [Event Grid on Kubernetes - Event schemas](event-schemas.md).
event-grid Monitor Mqtt Delivery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-mqtt-delivery-reference.md
This article provides a reference of log and metric data collected to analyze th
| MQTT.SuccessfulPublishedMessages | MQTT: Successful Published Messages | Count | Total | The number of MQTT messages that were published successfully into the namespace. | Protocol, QoS | | MQTT.FailedPublishedMessages | MQTT: Failed Published Messages | Count | Total | The number of MQTT messages that failed to be published into the namespace. | Protocol, QoS, Error | | MQTT.SuccessfulDeliveredMessages | MQTT: Successful Delivered Messages | Count | Total | The number of messages delivered by the namespace, regardless of the acknowledgments from MQTT clients. There are no failures for this operation. | Protocol, QoS |
+| MQTT.Throughput | MQTT: Throughput | Count | Total | The total bytes published to or delivered by the namespace. | Direction |
| MQTT.SuccessfulSubscriptionOperations | MQTT: Successful Subscription Operations | Count | Total | The number of successful subscription operations (Subscribe, Unsubscribe). This metric is incremented for every topic filter within your subscription request that gets accepted by Event Grid. | OperationType, Protocol | | MQTT.FailedSubscriptionOperations | MQTT: Failed Subscription Operations | Count | Total | The number of failed subscription operations (Subscribe, Unsubscribe). This metric is incremented for every topic filter within your subscription request that gets rejected by Event Grid. | OperationType, Protocol, Error |
+| Mqtt.SuccessfulRoutedMessages | MQTT: Successful Routed Messages | Count | Total | The number of MQTT messages that were routed successfully from the namespace. | |
+| Mqtt.FailedRoutedMessages | MQTT: Failed Routed Messages | Count | Total | The number of MQTT messages that failed to be routed from the namespace. | Error |
| MQTT.Connections | MQTT: Active Connections | Count | Total | The number of active connections in the namespace. The value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time may not be reflected in the metric. | Protocol |
-| MQTT.Throughput | MQTT: Throughput | Count | Total | The total bytes published to or delivered by the namespace. | Direction |
+| Mqtt.DroppedSessions | MQTT: Dropped Sessions | Count | Total | The number of dropped sessions in the namespace. The value for this metric is a point-in-time value. Sessions that were dropped immediately after that point-in-time may not be reflected in the metric. | DropReason |
++ > [!NOTE] > Each subscription request increments the MQTT.RequestCount metric, while each topic filter within the subscription request increments the subscription operation metrics. For example, consider a subscription request that is sent with five different topic filters. Three of these topic filters were successfully processed while two of the topic filters failed to be processed. The following list represents the resulting increments to the metrics:
This article provides a reference of log and metric data collected to analyze th
| Error | Error occurred during the operation. The available values include: <br><br>- QuotaExceeded: the client exceeded one or more of the throttling limits that resulted in a failure <br>- AuthenticationError: a failure because of any authentication reasons. <br>- AuthorizationError: a failure because of any authorization reasons. <br>- ClientError: the client sent a bad request or used one of the unsupported features that resulted in a failure. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason. | | QoS | Quality of service level. The available values are: 0, 1. | | Direction | The direction of the operation. The available values are: <br><br>- Inbound: inbound throughput to Event Grid. <br>- Outbound: outbound throughput from Event Grid. |
+| DropReason | The reason a session was dropped. The available values include: <br><br>- SessionExpiry: a persistent session has expired. <br>- TransientSession: a non-persistent session has expired. <br>- SessionOverflow: a client didn't connect during the lifespan of the session to receive queued QoS 1 messages until the queue reached its maximum limit. <br>- AuthorizationError: a session drop because of any authorization reasons. |
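As a hedged example, the generic `az monitor metrics list` command can read any of the metrics listed above; the namespace resource ID below is a placeholder:

```azurecli
# Total successfully published MQTT messages over the last hour, aggregated per minute.
az monitor metrics list \
  --resource "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.EventGrid/namespaces/{namespace-name}" \
  --metric "MQTT.SuccessfulPublishedMessages" \
  --interval PT1M \
  --aggregation Total
```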
## Next steps See the following articles:
event-grid Mqtt Client Life Cycle Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-life-cycle-events.md
# MQTT Clients Life Cycle Events
-Client Life Cycle events allow applications to react to client connection or disconnection events. For example, you can build an application that updates a database, creates a ticket, and delivers an email notification every time a client is disconnected for mitigating action.
+Client Life Cycle events allow applications to react to events about the client connection status or the client resource operations. These events allow you to:
+- Keep track of your client's connection status. For example, you can build an application that queries the connection status of each client before running a specific operation.
+- React with a mitigation action for client disconnections. For example, you can build an application that updates a database, creates a ticket, and delivers an email notification every time a client is disconnected.
+- Track the namespace that your clients are attached to during automated failovers.
[!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
The Event Grid namespace publishes the following event types:
||| | **Microsoft.EventGrid.MQTTClientSessionConnected** | Published when an MQTT client's session is connected to Event Grid. | | **Microsoft.EventGrid.MQTTClientSessionDisconnected** | Published when an MQTT client's session is disconnected from Event Grid. |
+| **Microsoft.EventGrid.MQTTClientCreatedOrUpdated** | Published when an MQTT client is created or updated in the Event Grid Namespace. |
+| **Microsoft.EventGrid.MQTTClientDeleted** | Published when an MQTT client is deleted from the Event Grid Namespace. |
The Event Grid namespace publishes the following event types:
The client life cycle events provide you with all the information about the client and session that got connected or disconnected. It also provides a disconnectionReason that you can use for diagnostics scenarios as it enables you to have automated mitigating actions.
-# [Event Grid event schema](#tab/event-grid-event-schema)
+# [Cloud event schema](#tab/cloud-event-schema)
-This sample event shows the schema of an event raised when an MQTT client's session is connected to Event Grid:
+This sample event shows the schema of an event raised when an MQTT client's session is connected to an Event Grid:
```json [{
- "id": "6f1b70b8-557a-4865-9a1c-94cc3def93db",
- "eventTime": "2023-04-28T00:49:04.0211141Z",
- "eventType": "Microsoft.EventGrid.MQTTClientSessionConnected",
- "topic": "/subscriptions/ 00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
- "subject": "/clients/device1/sessions/session1",
- "dataVersion": "1",
- "metadataVersion": "1",
+ "specversion": "1.0",
+ "id": "5249c38a-a048-46dd-8f60-df34fcdab06c",
+ "time": "2023-07-29T01:23:49.6454046Z",
+ "type": "Microsoft.EventGrid.MQTTClientSessionConnected",
+ "source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1/sessions/session1",
"data": { "namespaceName": "myns",
- "clientAuthenticationName": "device1",
+ "clientAuthenticationName": "client1",
"clientSessionName": "session1", "sequenceNumber": 1 }
This sample event shows the schema of an event raised when an MQTT client's se
```json [{
- "id": "6f1b70b8-557a-4865-9a1c-94cc3def93db",
- "eventTime": "2023-04-28T00:49:04.0211141Z",
- "eventType": "Microsoft.EventGrid.MQTTClientSessionConnected",
- "topic": "/subscriptions/ 00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
- "subject": "/clients/device1/sessions/session1",
- "dataVersion": "1",
- "metadataVersion": "1",
+ "specversion": "1.0",
+ "id": "e30e5174-787d-4e19-8812-580148bfcf7b",
+ "time": "2023-07-29T01:27:40.2446871Z",
+ "type": "Microsoft.EventGrid.MQTTClientSessionDisconnected",
+ "source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1/sessions/session1",
"data": { "namespaceName": "myns",
- "clientAuthenticationName": "device1",
+ "clientAuthenticationName": "client1",
"clientSessionName": "session1",
- "sequenceNumber": 1
+ "sequenceNumber": 1,
+ "disconnectionReason": "ClientInitiatedDisconnect"
} }] ```-
-# [Cloud event schema](#tab/cloud-event-schema)
-
-This sample event shows the schema of an event raised when an MQTT client's session is connected to an Event Grid:
+This sample event shows the schema of an event raised when an MQTT client is created or updated in the Event Grid Namespace:
```json [{
- "id": "6f1b70b8-557a-4865-9a1c-94cc3def93db",
- "time": "2023-04-28T00:49:04.0211141Z",
- "type": "Microsoft.EventGrid.MQTTClientSessionConnected",
+ "specversion": "1.0",
+ "id": "383d1562-c95f-4095-936c-688e72c6b2bb",
+ "time": "2023-07-29T01:14:35.8928724Z",
+ "type": "Microsoft.EventGrid.MQTTClientCreatedOrUpdated",
"source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
- "subject": "/clients/device1/sessions/session1",
+ "subject": "clients/client1",
+ "data": {
+ "createdOn": "2023-07-29T01:14:34.2048108Z",
+ "updatedOn": "2023-07-29T01:14:34.2048108Z",
+ "namespaceName": "myns",
+ "clientName": "client1",
+ "clientAuthenticationName": "client1",
+ "state": "Enabled",
+ "attributes": {
+ "attribute1": "value1"
+ }
+ }
+}]
+```
+This sample event shows the schema of an event raised when an MQTT client is deleted from the Event Grid Namespace:
+
+```json
+[{
"specversion": "1.0",
+ "id": "2a93aaf9-66c2-4f8e-9ba3-8d899c10bf17",
+ "time": "2023-07-29T01:30:52.5620566Z",
+ "type": "Microsoft.EventGrid.MQTTClientDeleted",
+ "source": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1",
+ "data": {
+ "namespaceName": "myns",
+ "clientName": "client1",
+ "clientAuthenticationName": "client1"
+ }
+}]
+```
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+This sample event shows the schema of an event raised when an MQTT client's session is connected to Event Grid:
+
+```json
+[{
+ "id": "5249c38a-a048-46dd-8f60-df34fcdab06c",
+ "eventTime": "2023-07-29T01:23:49.6454046Z",
+ "eventType": "Microsoft.EventGrid.MQTTClientSessionConnected",
+ "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1/sessions/session1",
+ "dataVersion": "1",
+ "metadataVersion": "1",
"data": { "namespaceName": "myns",
- "clientAuthenticationName": "device1",
+ "clientAuthenticationName": "client1",
"clientSessionName": "session1", "sequenceNumber": 1 }
This sample event shows the schema of an event raised when an MQTT client's se
```json [{
- "id": "3b93123d-5427-4dec-88d5-3b6da87b0f64",
- "time": "2023-04-28T00:51:28.6037385Z",
- "type": "Microsoft.EventGrid.MQTTClientSessionDisconnected",
- "source": "/subscriptions/ 00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
- "subject": "/clients/device1/sessions/session1",
- "specversion": "1.0",
+ "id": "e30e5174-787d-4e19-8812-580148bfcf7b",
+ "eventTime": "2023-07-29T01:27:40.2446871Z",
+ "eventType": "Microsoft.EventGrid.MQTTClientSessionDisconnected",
+ "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1/sessions/session1",
+ "dataVersion": "1",
+ "metadataVersion": "1",
"data": { "namespaceName": "myns",
- "clientAuthenticationName": "device1",
+ "clientAuthenticationName": "client1",
"clientSessionName": "session1", "sequenceNumber": 1,
- "disconnectionReason": "ClientError"
+ "disconnectionReason": "ClientInitiatedDisconnect"
+ }
+}]
+```
+This sample event shows the schema of an event raised when an MQTT client is created or updated in the Event Grid Namespace:
+
+```json
+[{
+ "id": "383d1562-c95f-4095-936c-688e72c6b2bb",
+ "eventTime": "2023-07-29T01:14:35.8928724Z",
+ "eventType": "Microsoft.EventGrid.MQTTClientCreatedOrUpdated",
+ "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1",
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "data": {
+ "createdOn": "2023-07-29T01:14:34.2048108Z",
+ "updatedOn": "2023-07-29T01:14:34.2048108Z",
+ "namespaceName": "myns",
+ "clientName": "client1",
+ "clientAuthenticationName": "client1",
+ "state": "Enabled",
+ "attributes": {
+ "attribute1": "value1"
+ }
+ }
+}]
+```
+This sample event shows the schema of an event raised when an MQTT client is deleted from the Event Grid Namespace:
+
+```json
+[{
+ "id": "2a93aaf9-66c2-4f8e-9ba3-8d899c10bf17",
+ "eventTime": "2023-07-29T01:30:52.5620566Z",
+ "eventType": "Microsoft.EventGrid.MQTTClientDeleted",
+ "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.EventGrid/namespaces/myns",
+ "subject": "clients/client1",
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "data": {
+ "clientName": "client1",
+ "clientAuthenticationName": "client1",
+ "namespaceName": "myns"
} }] ```
az eventgrid system-topic create --resource-group <Resource Group > --name <Syst
2. Create an Event Grid Subscription ```azurecli-interactive
- az eventgrid system-topic event-subscription create --name <Specify Event Subscription Name> -g <Resource Group> --system-topic-name <System Topic Name> --endpoint <Endpoint URL>
+ az eventgrid system-topic event-subscription create --name <Specify Event Subscription Name> -g <Resource Group> --system-topic-name <System Topic Name> --endpoint <Endpoint>
```
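To receive only some of the client life cycle event types, the same command can be narrowed with an event-type filter. The sketch below assumes the `--included-event-types` parameter is available for system topic subscriptions and reuses the placeholder names from the previous step:

```azurecli-interactive
# Subscribe only to disconnection events so a handler can run mitigation logic.
az eventgrid system-topic event-subscription create --name client-disconnect-sub -g <Resource Group> --system-topic-name <System Topic Name> --endpoint <Endpoint> --included-event-types Microsoft.EventGrid.MQTTClientSessionDisconnected
```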
-## Limitations:
-- There's no latency guarantee for the client connection status events.-- The client life cycle events' timestamp indicates when the service detected the events, which may differ from the actual time of connection status change.-- The order of client connection status events isn't guaranteed, events may arrive out of order. However, the sequence number can be used to determine the original order of the events.-- Duplicate client connection status events may be published.
+## Behavior:
+- There's no latency guarantee for the client lifecycle events.
+- Duplicate client life cycle events may be published.
+- The client life cycle events' timestamp indicates when the service detected the events, which may differ from the actual time of the event.
+- The order of client life cycle events isn't guaranteed; events may arrive out of order. However, the sequence number on the connection status events can be used to determine the original order of the events.
+- For the Client Created or Updated event and the Client Deleted event:
+ - If there are multiple state changes to the client resource within a short amount of time, there will be one event emitted for the final state of the client.
+  - Example 1: if a client gets created, then updated twice within 3 seconds, Event Grid emits only one MQTTClientCreatedOrUpdated event with the final values for the metadata of the client.
+  - Example 2: if a client gets created, then deleted within 5 seconds, Event Grid emits only the MQTTClientDeleted event.
+ ## Next steps
-Learn more about [System topics in Azure Event Grid](system-topics.md)
+- To learn more about system topics, go to [System topics in Azure Event Grid](system-topics.md)
+- To learn more about the client life cycle event properties, go to [Event Grid as an Event Grid source](event-schema-event-grid-namespace.md)
event-grid Mqtt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-overview.md
MQTT is a publish-subscribe messaging transport protocol that was designed for c
- MQTT v3.1.1 features: - **Persistent sessions** ensure reliability by preserving the client's subscription information and messages when a client disconnects. - **QoS 0 and 1** provide your clients with control over the efficiency and reliability of the communication.-- Event Grid is adding more MQTT v5 features in the future to align more with the MQTT specifications. The following items detail the current differences in Event Grid's MQTT support from the MQTT v3.1.1 specification: Will message, Retain flag, Message ordering and QoS 2 aren't supported.
+- Event Grid is adding more MQTT v3.1.1 features in the future to align more with the MQTT specifications. The following items detail the current differences in Event Grid's MQTT support from the MQTT v3.1.1 specification: Will message, Retain flag, Message ordering and QoS 2 aren't supported.
[Learn more about Event GridΓÇÖs MQTT support and current limitations.](mqtt-support.md)
Use the following articles to learn more about the MQTT support in Event Grid an
- [Client authentication](mqtt-client-authentication.md) - [Access control](mqtt-access-control.md) - [MQTT support](mqtt-support.md) -- [Routing MQTT messages](mqtt-routing.md)
+- [Routing MQTT messages](mqtt-routing.md)
+- [MQTT Client Life Cycle Events](mqtt-client-life-cycle-events.md).
event-grid Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-support.md
Title: 'MQTT features support in Azure Event Grid'
-description: 'Describes the MQTT Support in Azure Event Grid.'
+ Title: 'MQTT Features Support in Azure Event Grid'
+description: 'Describes the MQTT feature support in Azure Event Grid.'
Last updated 05/23/2023 + # MQTT features support in Azure Event Grid MQTT is a publish-subscribe messaging transport protocol that was designed for constrained environments. It's efficient, scalable, and reliable, which made it the gold standard for communication in IoT scenarios. Event Grid supports clients that publish and subscribe to messages over MQTT v3.1.1, MQTT v3.1.1 over WebSockets, MQTT v5, and MQTT v5 over WebSockets. Event Grid also supports cross MQTT version (MQTT 3.1.1 and MQTT 5) communication.
For more information, see [How to establish multiple sessions for a single clien
#### Handling sessions: -- If a client tries to take over another client's active session by presenting its session name, its connection request will be rejected with an unauthorized error. For example, if Client B tries to connect to session 123 that is assigned at that time for client A, Client B's connection request will be rejected.-- If a client resource is deleted without ending its session, other clients won't be able to use that session name until the session expires. For example, If client B creates a session with session name 123 then client B is deleted, client A won't be able to connect to session 123 until the session expires.
+- If a client tries to take over another client's active session by presenting its session name, its connection request is rejected with an unauthorized error. For example, if client B tries to connect to session 123 that is assigned at that time to client A, client B's connection request is rejected.
+- If a client resource is deleted without ending its session, other clients can't use its session name until the session expires. For example, if client B creates a session with session name 123 and then client B is deleted, client A can't connect to session 123 until it expires.
+ ## MQTT features Event Grid supports the following MQTT features:
-### Quality of Service (QoS)
-Event Grid supports QoS 0 and 1, which define the guarantee of message delivery on PUBLISH and SUBSCRIBE packets between clients and Event Grid. QoS 0 guarantees at-most-once delivery; messages with QoS 0 aren't acknowledged by the subscriber nor get retransmitted by the publisher. QoS 1 guarantees at-least-once delivery; messages are acknowledged by the subscriber and get retransmitted by the publisher if they didn't get acknowledged. QoS enables your clients to control the efficiency and reliability of the communication.
-
+### Quality of service (QoS)
+Event Grid supports QoS 0 and 1, which define the guarantee of message delivery on PUBLISH and SUBSCRIBE packets between clients and Event Grid. QoS 0 guarantees at-most-once delivery; messages with QoS 0 aren't acknowledged by the subscriber nor get retransmitted by the publisher. QoS 1 guarantees at-least-once delivery; messages are acknowledged by the subscriber and get retransmitted by the publisher if they didn't get acknowledged. QoS enables your clients to control the efficiency and reliability of the communication.
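As a rough illustration (not taken from this article), the QoS is chosen per operation by the client. The sketch below uses the Mosquitto client tools and assumes the namespace is configured for certificate-based client authentication; the hostname, client name, topic, and file paths are placeholders:

```bash
# Publish one message at QoS 1 (at-least-once); -q 0 would request at-most-once delivery.
mosquitto_pub -h "{namespace-mqtt-hostname}" -p 8883 \
  --cafile "{path-to-ca-bundle}" --cert client1-cert.pem --key client1-key.pem \
  -i client1 -u client1 \
  -t "contoso/telemetry" -m "hello" -q 1 -V mqttv311
```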
### Persistent sessions
-Event Grid supports persistent sessions for MQTT v3.1.1 such that Event Grid preserves information about a client's session in case of disconnections to ensure reliability of the communication. This information includes the client's subscriptions and missed/unacknowledged QoS 1 messages. Clients can configure a persistent session through setting the cleanSession flag in the CONNECT packet to false.
-
-### Clean start and session expiry
+Event Grid supports persistent sessions for MQTT v3.1.1 such that Event Grid preserves information about a client's session in case of disconnections to ensure reliability of the communication. This information includes the client's subscriptions and missed/unacknowledged QoS 1 messages. Clients can configure a persistent session through setting the cleanSession flag in the CONNECT packet to false.
+#### Clean start and session expiry
MQTT v5 has introduced the clean start and session expiry features as an improvement over MQTT v3.1.1 in handling session persistence. Clean Start is a feature that allows a client to start a new session with Event Grid, discarding any previous session data. Session Expiry allows a client to inform Event Grid when an inactive session is considered expired and automatically removed. In the CONNECT packet, a client can set Clean Start flag to true and/or short session expiry interval for security reasons or to avoid any potential data conflicts that may have occurred during the previous session. A client can also set a clean start to false and/or long session expiry interval to ensure the reliability and efficiency of persistent sessions.
-**Maximum session expiry interval:**
-On the Configuration page of an Event Grid namespace, you can configure the maximum session expiry interval at namespace scope. This setting will apply to all the clients within the namespace.
-
-If you are using MQTT v3.1.1, this setting provides the session expiration time and ensures that sessions for inactive clients are terminated once the time limit is reached.
-
-If you are using MQTT v5, this setting will provide the maximum limit for the Session Expiry Interval value. Any Session Expiry Interval chosen above this limit will be negotiated.
+#### Maximum session expiry interval configuration
+You can configure the maximum session expiry interval allowed for all your clients connecting to the Event Grid namespace. For MQTT v3.1.1 clients, the configured limit is applied as the default session expiry interval for all persistent sessions. For MQTT v5 clients, the configured limit is applied as the maximum value for the Session Expiry Interval property in the CONNECT packet; any value that exceeds the limit is adjusted down to the configured limit. The default value for this namespace property is 1 hour and can be extended up to 8 hours. Use the following steps to configure the maximum session expiry interval in the Azure portal:
+- Go to your namespace in the Azure portal.
+- Under **Configuration**, change the value for the **Maximum session expiry interval in hours** to the desired limit.
+- Select **Apply**.
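Outside the portal, the same limit can be patched on the namespace resource. This is a hedged sketch using the generic `az resource update` command; the property path `properties.topicSpacesConfiguration.maximumSessionExpiryInHours` and the resource ID are assumptions:

```azurecli
# Raise the maximum session expiry interval to 4 hours (property path is an assumption).
az resource update \
  --ids "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.EventGrid/namespaces/{namespace-name}" \
  --set properties.topicSpacesConfiguration.maximumSessionExpiryInHours=4
```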
-The default value for this namespace property is 1 hour and can be extended up to 8 hours.
+#### Session overflow
+Event Grid maintains a queue of messages for each active MQTT session that isn't connected, until the client connects with Event Grid again to receive the messages in the queue. If a client doesn't connect to receive the queued QoS 1 messages, the session queue starts accumulating the messages until it reaches its limit: 100 messages or 1 MB. Once the queue reaches its limit during the lifespan of the session, the session is terminated.
### User properties Event Grid supports user properties on MQTT v5 PUBLISH packets that allow you to add custom key-value pairs in the message header to provide more context about the message. The use cases for user properties are versatile. You can use this feature to include the purpose or origin of the message so the receiver can handle the message without parsing the payload, saving computing resources. For example, a message with a user property indicating its purpose as a "warning" could trigger different handling logic than one with the purpose of "information."
event-grid Publish Iot Hub Events To Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-iot-hub-events-to-logic-apps.md
Test your logic app by quickly simulating a device connection using the Azure CL
1. Select the Cloud Shell button to reopen your terminal. 1. Run the following command to create a simulated device identity:
-
- ```azurecli
+
+ ```azurecli
az iot hub device-identity create --device-id simDevice --hub-name {YourIoTHubName} ```
event-hubs Dynamically Add Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/dynamically-add-partitions.md
You can specify the number of partitions at the time of creating an event hub. I
This section shows you how to update partition count of an event hub in different ways (PowerShell, CLI, and so on.). ### PowerShell
-Use the [Set-AzureRmEventHub](/powershell/module/azurerm.eventhub/Set-AzureRmEventHub) PowerShell command to update partitions in an event hub.
+Use the [Set-AzEventHub](/powershell/module/az.eventhub/set-azeventhub) PowerShell command to update partitions in an event hub.
```azurepowershell-interactive
-Set-AzureRmEventHub -ResourceGroupName MyResourceGroupName -Namespace MyNamespaceName -Name MyEventHubName -partitionCount 12
+Set-AzEventHub -ResourceGroupName MyResourceGroupName -Namespace MyNamespaceName -Name MyEventHubName -partitionCount 12
``` ### CLI
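For example, a hedged Azure CLI sketch (assuming `az eventhubs eventhub update` accepts `--partition-count`, and reusing the placeholder names from the PowerShell example):

```azurecli-interactive
# Update the partition count of an existing event hub.
az eventhubs eventhub update \
  --resource-group MyResourceGroupName \
  --namespace-name MyNamespaceName \
  --name MyEventHubName \
  --partition-count 12
```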
event-hubs Event Hubs C Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-c-getstarted-send.md
## Introduction Azure Event Hubs is a Big Data streaming platform and event ingestion service, capable of receiving and processing millions of events per second. Event Hubs can process and store events, data, or telemetry produced by distributed software and devices. Data sent to an event hub can be transformed and stored using any real-time analytics provider or batching/storage adapters. For detailed overview of Event Hubs, see [Event Hubs overview](event-hubs-about.md) and [Event Hubs features](event-hubs-features.md).
-This tutorial describes how to send events to an event hub using a console application in C.
+This tutorial describes how to send events to an event hub using a console application in C.
## Prerequisites To complete this tutorial, you need the following:
This section shows how to write a C app to send events to your event hub. The
1. From the [Qpid AMQP Messenger page](https://qpid.apache.org/proton/messenger.html), follow the instructions to install Qpid Proton, depending on your environment. 2. To compile the Proton library, install the following packages:
-
+ ```shell sudo apt-get install build-essential cmake uuid-dev openssl libssl-dev ``` 3. Download the [Qpid Proton library](https://qpid.apache.org/proton/index.html), and extract it, e.g.:
-
+ ```shell wget https://archive.apache.org/dist/qpid/proton/0.7/qpid-proton-0.7.tar.gz tar xvfz qpid-proton-0.7.tar.gz ``` 4. Create a build directory, compile and install:
-
+ ```shell cd qpid-proton-0.7 mkdir build
This section shows how to write a C app to send events to your event hub. The
sudo make install ``` 5. In your work directory, create a new file called **sender.c** with the following code. Remember to replace the values for your SAS key/name, event hub name, and namespace. You must also substitute a URL-encoded version of the key for the **SendRule** created earlier. You can URL-encode it [here](https://www.w3schools.com/tags/ref_urlencode.asp).
-
+ ```c #include "proton/message.h" #include "proton/messenger.h"
-
+ #include <getopt.h> #include <proton/util.h> #include <sys/time.h>
This section shows how to write a C app to send events to your event hub. The
#include <signal.h> volatile sig_atomic_t stop = 0;
-
+ #define check(messenger) \ { \ if(pn_messenger_errno(messenger)) \
This section shows how to write a C app to send events to your event hub. The
stop = 1; } }
-
+ pn_timestamp_t time_now(void) { struct timeval now; if (gettimeofday(&now, NULL)) pn_fatal("gettimeofday failed\n"); return ((pn_timestamp_t)now.tv_sec) * 1000 + (now.tv_usec / 1000);
- }
-
+ }
+ void die(const char *file, int line, const char *message) { printf("Dead\n"); fprintf(stderr, "%s:%i: %s\n", file, line, message); exit(1); }
-
+ int sendMessage(pn_messenger_t * messenger) { char * address = (char *) "amqps://{SAS Key Name}:{SAS key}@{namespace name}.servicebus.windows.net/{event hub name}"; char * msgtext = (char *) "Hello from C!";
-
+ pn_message_t * message; pn_data_t * body; message = pn_message();
-
+ pn_message_set_address(message, address); pn_message_set_content_type(message, (char*) "application/octect-stream"); pn_message_set_inferred(message, true);
-
+ body = pn_message_body(message); pn_data_put_binary(body, pn_bytes(strlen(msgtext), msgtext));
-
+ pn_messenger_put(messenger, message); check(messenger); pn_messenger_send(messenger, 1); check(messenger);
-
+ pn_message_free(message); }
-
+ int main(int argc, char** argv) { printf("Press Ctrl-C to stop the sender process\n");
- signal(SIGINT, interrupt_handler);
-
+ signal(SIGINT, interrupt_handler);
+ pn_messenger_t *messenger = pn_messenger(NULL); pn_messenger_set_outgoing_window(messenger, 1); pn_messenger_start(messenger);
-
+ while(!stop) { sendMessage(messenger); printf("Sent message\n"); sleep(1); }
-
+ // release messenger resources pn_messenger_stop(messenger); pn_messenger_free(messenger);
-
+ return 0; } ``` 6. Compile the file, assuming **gcc**:
-
- ```
- gcc sender.c -o sender -lqpid-proton
- ```
- > [!NOTE]
- > This code uses an outgoing window of 1 to force the messages out as soon as possible. It is recommended that your application try to batch messages to increase throughput. See the [Qpid AMQP Messenger page](https://qpid.apache.org/proton/messenger.html) for information about how to use the Qpid Proton library in this and other environments, and from platforms for which bindings are provided (currently Perl, PHP, Python, and Ruby).
+ ```
+ gcc sender.c -o sender -lqpid-proton
+ ```
+
+ > [!NOTE]
+ > This code uses an outgoing window of 1 to force the messages out as soon as possible. It is recommended that your application try to batch messages to increase throughput. See the [Qpid AMQP Messenger page](https://qpid.apache.org/proton/messenger.html) for information about how to use the Qpid Proton library in this and other environments, and from platforms for which bindings are provided (currently Perl, PHP, Python, and Ruby).
-Run the application to send messages to the event hub.
+Run the application to send messages to the event hub.
Congratulations! You have now sent messages to an event hub.
event-hubs Event Hubs Go Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-go-get-started-send.md
Here's the code to send events to an event hub. The main steps in the code are:
package main import (
- "context"
+ "context"
- "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
+ "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
) func main() {
- // create an Event Hubs producer client using a connection string to the namespace and the event hub
- producerClient, err := azeventhubs.NewProducerClientFromConnectionString("NAMESPACE CONNECTION STRING", "EVENT HUB NAME", nil)
+ // create an Event Hubs producer client using a connection string to the namespace and the event hub
+ producerClient, err := azeventhubs.NewProducerClientFromConnectionString("NAMESPACE CONNECTION STRING", "EVENT HUB NAME", nil)
- if err != nil {
- panic(err)
- }
+ if err != nil {
+ panic(err)
+ }
- defer producerClient.Close(context.TODO())
+ defer producerClient.Close(context.TODO())
- // create sample events
- events := createEventsForSample()
+ // create sample events
+ events := createEventsForSample()
- // create a batch object and add sample events to the batch
- newBatchOptions := &azeventhubs.EventDataBatchOptions{}
+ // create a batch object and add sample events to the batch
+ newBatchOptions := &azeventhubs.EventDataBatchOptions{}
- batch, err := producerClient.NewEventDataBatch(context.TODO(), newBatchOptions)
+ batch, err := producerClient.NewEventDataBatch(context.TODO(), newBatchOptions)
- for i := 0; i < len(events); i++ {
- err = batch.AddEventData(events[i], nil)
- }
+ for i := 0; i < len(events); i++ {
+ err = batch.AddEventData(events[i], nil)
+ }
- // send the batch of events to the event hub
- producerClient.SendEventDataBatch(context.TODO(), batch, nil)
+ // send the batch of events to the event hub
+ producerClient.SendEventDataBatch(context.TODO(), batch, nil)
} func createEventsForSample() []*azeventhubs.EventData {
- return []*azeventhubs.EventData{
- {
- Body: []byte("hello"),
- },
- {
- Body: []byte("world"),
- },
- }
+ return []*azeventhubs.EventData{
+ {
+ Body: []byte("hello"),
+ },
+ {
+ Body: []byte("world"),
+ },
+ }
} ```
Here's the code to receive events from an event hub. The main steps in the code
package main import (
- "context"
- "errors"
- "fmt"
- "time"
+ "context"
+ "errors"
+ "fmt"
+ "time"
- "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
- "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs/checkpoints"
+ "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
+ "github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs/checkpoints"
+	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container" // needed for container.NewClientFromConnectionString below
) func main() {
- // create a container client using a connection string and container name
- checkClient, err := container.NewClientFromConnectionString("AZURE STORAGE CONNECTION STRING", "CONTAINER NAME", nil)
-
- // create a checkpoint store that will be used by the event hub
- checkpointStore, err := checkpoints.NewBlobStore(checkClient, nil)
+ // create a container client using a connection string and container name
+ checkClient, err := container.NewClientFromConnectionString("AZURE STORAGE CONNECTION STRING", "CONTAINER NAME", nil)
+
+ // create a checkpoint store that will be used by the event hub
+ checkpointStore, err := checkpoints.NewBlobStore(checkClient, nil)
- if err != nil {
- panic(err)
- }
+ if err != nil {
+ panic(err)
+ }
- // create a consumer client using a connection string to the namespace and the event hub
- consumerClient, err := azeventhubs.NewConsumerClientFromConnectionString("NAMESPACE CONNECTION STRING", "EVENT HUB NAME", azeventhubs.DefaultConsumerGroup, nil)
+ // create a consumer client using a connection string to the namespace and the event hub
+ consumerClient, err := azeventhubs.NewConsumerClientFromConnectionString("NAMESPACE CONNECTION STRING", "EVENT HUB NAME", azeventhubs.DefaultConsumerGroup, nil)
- if err != nil {
- panic(err)
- }
+ if err != nil {
+ panic(err)
+ }
- defer consumerClient.Close(context.TODO())
+ defer consumerClient.Close(context.TODO())
- // create a processor to receive and process events
- processor, err := azeventhubs.NewProcessor(consumerClient, checkpointStore, nil)
+ // create a processor to receive and process events
+ processor, err := azeventhubs.NewProcessor(consumerClient, checkpointStore, nil)
- if err != nil {
- panic(err)
- }
+ if err != nil {
+ panic(err)
+ }
- // for each partition in the event hub, create a partition client with processEvents as the function to process events
- dispatchPartitionClients := func() {
- for {
- partitionClient := processor.NextPartitionClient(context.TODO())
+ // for each partition in the event hub, create a partition client with processEvents as the function to process events
+ dispatchPartitionClients := func() {
+ for {
+ partitionClient := processor.NextPartitionClient(context.TODO())
- if partitionClient == nil {
- break
- }
+ if partitionClient == nil {
+ break
+ }
- go func() {
- if err := processEvents(partitionClient); err != nil {
- panic(err)
- }
- }()
- }
- }
+ go func() {
+ if err := processEvents(partitionClient); err != nil {
+ panic(err)
+ }
+ }()
+ }
+ }
- // run all partition clients
- go dispatchPartitionClients()
+ // run all partition clients
+ go dispatchPartitionClients()
- processorCtx, processorCancel := context.WithCancel(context.TODO())
- defer processorCancel()
+ processorCtx, processorCancel := context.WithCancel(context.TODO())
+ defer processorCancel()
- if err := processor.Run(processorCtx); err != nil {
- panic(err)
- }
+ if err := processor.Run(processorCtx); err != nil {
+ panic(err)
+ }
} func processEvents(partitionClient *azeventhubs.ProcessorPartitionClient) error {
- defer closePartitionResources(partitionClient)
- for {
- receiveCtx, receiveCtxCancel := context.WithTimeout(context.TODO(), time.Minute)
- events, err := partitionClient.ReceiveEvents(receiveCtx, 100, nil)
- receiveCtxCancel()
-
- if err != nil && !errors.Is(err, context.DeadlineExceeded) {
- return err
- }
-
- fmt.Printf("Processing %d event(s)\n", len(events))
-
- for _, event := range events {
- fmt.Printf("Event received with body %v\n", string(event.Body))
- }
-
- if len(events) != 0 {
- if err := partitionClient.UpdateCheckpoint(context.TODO(), events[len(events)-1]); err != nil {
- return err
- }
- }
- }
+ defer closePartitionResources(partitionClient)
+ for {
+ receiveCtx, receiveCtxCancel := context.WithTimeout(context.TODO(), time.Minute)
+ events, err := partitionClient.ReceiveEvents(receiveCtx, 100, nil)
+ receiveCtxCancel()
+
+ if err != nil && !errors.Is(err, context.DeadlineExceeded) {
+ return err
+ }
+
+ fmt.Printf("Processing %d event(s)\n", len(events))
+
+ for _, event := range events {
+ fmt.Printf("Event received with body %v\n", string(event.Body))
+ }
+
+ if len(events) != 0 {
+ if err := partitionClient.UpdateCheckpoint(context.TODO(), events[len(events)-1]); err != nil {
+ return err
+ }
+ }
+ }
} func closePartitionResources(partitionClient *azeventhubs.ProcessorPartitionClient) {
- defer partitionClient.Close(context.TODO())
+ defer partitionClient.Close(context.TODO())
} ```
func closePartitionResources(partitionClient *azeventhubs.ProcessorPartitionClie
1. Wait for a minute to see the following output in the receiver window. ```bash
- Processing 2 event(s)
+ Processing 2 event(s)
Event received with body hello Event received with body world
- ```
+ ```
## Next steps See samples on GitHub at [https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azeventhubs](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azeventhubs).
event-hubs Event Hubs Storm Getstarted Receive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-storm-getstarted-receive.md
Before you start with the quickstart, **create an Event Hubs namespace and an ev
1. In the **src** folder, create a file called **Config.properties** and copy the following content, substituting the `receive rule key` and `event hub name` values:
- ```java
- eventhubspout.username = ReceiveRule
- eventhubspout.password = {receive rule key}
- eventhubspout.namespace = ioteventhub-ns
- eventhubspout.entitypath = {event hub name}
- eventhubspout.partitions.count = 16
-
- # if not provided, will use storm's zookeeper settings
- # zookeeper.connectionstring=localhost:2181
-
- eventhubspout.checkpoint.interval = 10
- eventhub.receiver.credits = 10
- ```
+ ```java
+ eventhubspout.username = ReceiveRule
+ eventhubspout.password = {receive rule key}
+ eventhubspout.namespace = ioteventhub-ns
+ eventhubspout.entitypath = {event hub name}
+ eventhubspout.partitions.count = 16
+
+ # if not provided, will use storm's zookeeper settings
+ # zookeeper.connectionstring=localhost:2181
+
+ eventhubspout.checkpoint.interval = 10
+ eventhub.receiver.credits = 10
+ ```
The value for **eventhub.receiver.credits** determines how many events are batched before releasing them to the Storm pipeline. For the sake of simplicity, this example sets this value to 10. In production, it should usually be set to higher values; for example, 1024. 10. Create a new class called **LoggerBolt** with the following code:
- ```java
- import java.util.Map;
- import org.slf4j.Logger;
- import org.slf4j.LoggerFactory;
- import backtype.storm.task.OutputCollector;
- import backtype.storm.task.TopologyContext;
- import backtype.storm.topology.OutputFieldsDeclarer;
- import backtype.storm.topology.base.BaseRichBolt;
- import backtype.storm.tuple.Tuple;
+ ```java
+ import java.util.Map;
+ import org.slf4j.Logger;
+ import org.slf4j.LoggerFactory;
+ import backtype.storm.task.OutputCollector;
+ import backtype.storm.task.TopologyContext;
+ import backtype.storm.topology.OutputFieldsDeclarer;
+ import backtype.storm.topology.base.BaseRichBolt;
+ import backtype.storm.tuple.Tuple;
- public class LoggerBolt extends BaseRichBolt {
- private OutputCollector collector;
- private static final Logger logger = LoggerFactory
- .getLogger(LoggerBolt.class);
+ public class LoggerBolt extends BaseRichBolt {
+ private OutputCollector collector;
+ private static final Logger logger = LoggerFactory
+ .getLogger(LoggerBolt.class);
- @Override
- public void execute(Tuple tuple) {
- String value = tuple.getString(0);
- logger.info("Tuple value: " + value);
+ @Override
+ public void execute(Tuple tuple) {
+ String value = tuple.getString(0);
+ logger.info("Tuple value: " + value);
- collector.ack(tuple);
- }
+ collector.ack(tuple);
+ }
- @Override
- public void prepare(Map map, TopologyContext context, OutputCollector collector) {
- this.collector = collector;
- this.count = 0;
- }
-
- @Override
- public void declareOutputFields(OutputFieldsDeclarer declarer) {
- // no output fields
- }
+ @Override
+ public void prepare(Map map, TopologyContext context, OutputCollector collector) {
+ this.collector = collector;
+ this.count = 0;
+ }
+
+ @Override
+ public void declareOutputFields(OutputFieldsDeclarer declarer) {
+ // no output fields
+ }
- }
- ```
+ }
+ ```
This Storm bolt logs the content of the received events. This can easily be extended to store tuples in a storage service. The [HDInsight Storm with Event Hub example] uses this same approach to store data into Azure Storage and Power BI. 11. Create a class called **LogTopology** with the following code:
- ```java
- import java.io.FileReader;
- import java.util.Properties;
- import backtype.storm.Config;
- import backtype.storm.LocalCluster;
- import backtype.storm.StormSubmitter;
- import backtype.storm.generated.StormTopology;
- import backtype.storm.topology.TopologyBuilder;
- import com.microsoft.eventhubs.samples.EventCount;
- import com.microsoft.eventhubs.spout.EventHubSpout;
- import com.microsoft.eventhubs.spout.EventHubSpoutConfig;
-
- public class LogTopology {
- protected EventHubSpoutConfig spoutConfig;
- protected int numWorkers;
-
- protected void readEHConfig(String[] args) throws Exception {
- Properties properties = new Properties();
- if (args.length > 1) {
- properties.load(new FileReader(args[1]));
- } else {
- properties.load(EventCount.class.getClassLoader()
- .getResourceAsStream("Config.properties"));
- }
-
- String username = properties.getProperty("eventhubspout.username");
- String password = properties.getProperty("eventhubspout.password");
- String namespaceName = properties
- .getProperty("eventhubspout.namespace");
- String entityPath = properties.getProperty("eventhubspout.entitypath");
- String zkEndpointAddress = properties
- .getProperty("zookeeper.connectionstring"); // opt
- int partitionCount = Integer.parseInt(properties
- .getProperty("eventhubspout.partitions.count"));
- int checkpointIntervalInSeconds = Integer.parseInt(properties
- .getProperty("eventhubspout.checkpoint.interval"));
- int receiverCredits = Integer.parseInt(properties
- .getProperty("eventhub.receiver.credits")); // prefetch count
- // (opt)
- System.out.println("Eventhub spout config: ");
- System.out.println(" partition count: " + partitionCount);
- System.out.println(" checkpoint interval: "
- + checkpointIntervalInSeconds);
- System.out.println(" receiver credits: " + receiverCredits);
-
- spoutConfig = new EventHubSpoutConfig(username, password,
- namespaceName, entityPath, partitionCount, zkEndpointAddress,
- checkpointIntervalInSeconds, receiverCredits);
-
- // set the number of workers to be the same as partition number.
- // the idea is to have a spout and a logger bolt co-exist in one
- // worker to avoid shuffling messages across workers in storm cluster.
- numWorkers = spoutConfig.getPartitionCount();
-
- if (args.length > 0) {
- // set topology name so that sample Trident topology can use it as
- // stream name.
- spoutConfig.setTopologyName(args[0]);
- }
- }
-
- protected StormTopology buildTopology() {
- TopologyBuilder topologyBuilder = new TopologyBuilder();
-
- EventHubSpout eventHubSpout = new EventHubSpout(spoutConfig);
- topologyBuilder.setSpout("EventHubsSpout", eventHubSpout,
- spoutConfig.getPartitionCount()).setNumTasks(
- spoutConfig.getPartitionCount());
- topologyBuilder
- .setBolt("LoggerBolt", new LoggerBolt(),
- spoutConfig.getPartitionCount())
- .localOrShuffleGrouping("EventHubsSpout")
- .setNumTasks(spoutConfig.getPartitionCount());
- return topologyBuilder.createTopology();
- }
-
- protected void runScenario(String[] args) throws Exception {
- boolean runLocal = true;
- readEHConfig(args);
- StormTopology topology = buildTopology();
- Config config = new Config();
- config.setDebug(false);
-
- if (runLocal) {
- config.setMaxTaskParallelism(2);
- LocalCluster localCluster = new LocalCluster();
- localCluster.submitTopology("test", config, topology);
- Thread.sleep(5000000);
- localCluster.shutdown();
- } else {
- config.setNumWorkers(numWorkers);
- StormSubmitter.submitTopology(args[0], config, topology);
- }
- }
-
- public static void main(String[] args) throws Exception {
- LogTopology topology = new LogTopology();
- topology.runScenario(args);
- }
- }
- ```
+ ```java
+ import java.io.FileReader;
+ import java.util.Properties;
+ import backtype.storm.Config;
+ import backtype.storm.LocalCluster;
+ import backtype.storm.StormSubmitter;
+ import backtype.storm.generated.StormTopology;
+ import backtype.storm.topology.TopologyBuilder;
+ import com.microsoft.eventhubs.samples.EventCount;
+ import com.microsoft.eventhubs.spout.EventHubSpout;
+ import com.microsoft.eventhubs.spout.EventHubSpoutConfig;
+
+ public class LogTopology {
+ protected EventHubSpoutConfig spoutConfig;
+ protected int numWorkers;
+
+ protected void readEHConfig(String[] args) throws Exception {
+ Properties properties = new Properties();
+ if (args.length > 1) {
+ properties.load(new FileReader(args[1]));
+ } else {
+ properties.load(EventCount.class.getClassLoader()
+ .getResourceAsStream("Config.properties"));
+ }
+
+ String username = properties.getProperty("eventhubspout.username");
+ String password = properties.getProperty("eventhubspout.password");
+ String namespaceName = properties
+ .getProperty("eventhubspout.namespace");
+ String entityPath = properties.getProperty("eventhubspout.entitypath");
+ String zkEndpointAddress = properties
+ .getProperty("zookeeper.connectionstring"); // opt
+ int partitionCount = Integer.parseInt(properties
+ .getProperty("eventhubspout.partitions.count"));
+ int checkpointIntervalInSeconds = Integer.parseInt(properties
+ .getProperty("eventhubspout.checkpoint.interval"));
+ int receiverCredits = Integer.parseInt(properties
+ .getProperty("eventhub.receiver.credits")); // prefetch count
+ // (opt)
+ System.out.println("Eventhub spout config: ");
+ System.out.println(" partition count: " + partitionCount);
+ System.out.println(" checkpoint interval: "
+ + checkpointIntervalInSeconds);
+ System.out.println(" receiver credits: " + receiverCredits);
+
+ spoutConfig = new EventHubSpoutConfig(username, password,
+ namespaceName, entityPath, partitionCount, zkEndpointAddress,
+ checkpointIntervalInSeconds, receiverCredits);
+
+ // set the number of workers to be the same as partition number.
+ // the idea is to have a spout and a logger bolt co-exist in one
+ // worker to avoid shuffling messages across workers in storm cluster.
+ numWorkers = spoutConfig.getPartitionCount();
+
+ if (args.length > 0) {
+ // set topology name so that sample Trident topology can use it as
+ // stream name.
+ spoutConfig.setTopologyName(args[0]);
+ }
+ }
+
+ protected StormTopology buildTopology() {
+ TopologyBuilder topologyBuilder = new TopologyBuilder();
+
+ EventHubSpout eventHubSpout = new EventHubSpout(spoutConfig);
+ topologyBuilder.setSpout("EventHubsSpout", eventHubSpout,
+ spoutConfig.getPartitionCount()).setNumTasks(
+ spoutConfig.getPartitionCount());
+ topologyBuilder
+ .setBolt("LoggerBolt", new LoggerBolt(),
+ spoutConfig.getPartitionCount())
+ .localOrShuffleGrouping("EventHubsSpout")
+ .setNumTasks(spoutConfig.getPartitionCount());
+ return topologyBuilder.createTopology();
+ }
+
+ protected void runScenario(String[] args) throws Exception {
+ boolean runLocal = true;
+ readEHConfig(args);
+ StormTopology topology = buildTopology();
+ Config config = new Config();
+ config.setDebug(false);
+
+ if (runLocal) {
+ config.setMaxTaskParallelism(2);
+ LocalCluster localCluster = new LocalCluster();
+ localCluster.submitTopology("test", config, topology);
+ Thread.sleep(5000000);
+ localCluster.shutdown();
+ } else {
+ config.setNumWorkers(numWorkers);
+ StormSubmitter.submitTopology(args[0], config, topology);
+ }
+ }
+
+ public static void main(String[] args) throws Exception {
+ LogTopology topology = new LogTopology();
+ topology.runScenario(args);
+ }
+ }
+ ```
This class creates a new Event Hubs spout, using the properties in the configuration file to instantiate it. It is important to note that this example creates as many spout tasks as the number of partitions in the event hub, in order to use the maximum parallelism allowed by that event hub.
expressroute Expressroute Troubleshooting Expressroute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md
This article helps you verify and troubleshoot Azure ExpressRoute connectivity. ExpressRoute extends an on-premises network into the Microsoft Cloud over a private connection commonly facilitated by a connectivity provider. ExpressRoute connectivity traditionally involves three distinct network zones: -- Customer network-- Provider network-- Microsoft datacenter
+- Customer network
+- Provider network
+- Microsoft datacenter
> [!NOTE] > In the ExpressRoute Direct connectivity model, you can directly connect to the port for Microsoft Enterprise Edge (MSEE) routers. The direct connectivity model includes only yours and Microsoft network zones.
-This article helps you identify if and where a connectivity issue exists. You can then seek support from the appropriate team to resolve the issue.
+This article helps you identify if and where a connectivity issue exists. You can then seek support from the appropriate team to resolve the issue.
> [!IMPORTANT] > This article is intended to help you diagnose and fix simple issues. It's not intended to be a replacement for Microsoft support. If you can't solve a problem by using the guidance in this article, open a support ticket with [Microsoft Support][Support].
The following diagram shows the logical connectivity of a customer network to th
In the preceding diagram, the numbers indicate key network points:
-1. Customer compute device (for example, a server or PC).
-2. Customer edge routers (CEs).
-3. Provider edge routers/switches (PEs) that face customer edge routers.
-4. PEs that face Microsoft Enterprise Edge ExpressRoute routers (MSEEs). This article calls them *PE-MSEEs*.
-5. MSEEs.
-6. Virtual network gateway.
-7. Compute device on the Azure virtual network.
+1. Customer compute device (for example, a server or PC).
+2. Customer edge routers (CEs).
+3. Provider edge routers/switches (PEs) that face customer edge routers.
+4. PEs that face Microsoft Enterprise Edge ExpressRoute routers (MSEEs). This article calls them *PE-MSEEs*.
+5. MSEEs.
+6. Virtual network gateway.
+7. Compute device on the Azure virtual network.
-At times, this article references these network points by their associated number.
+At times, this article references these network points by their associated number.
-Depending on the ExpressRoute connectivity model, network points 3 and 4 might be switches (layer 2 devices) or routers (layer 3 devices). The ExpressRoute connectivity models are cloud exchange colocation, point-to-point Ethernet connection, or any-to-any (IPVPN).
+Depending on the ExpressRoute connectivity model, network points 3 and 4 might be switches (layer 2 devices) or routers (layer 3 devices). The ExpressRoute connectivity models are cloud exchange colocation, point-to-point Ethernet connection, or any-to-any (IPVPN).
In the direct connectivity model, there are no network points 3 and 4. Instead, CEs (2) are directly connected to MSEEs via dark fiber.
-If the cloud exchange colocation, point-to-point Ethernet, or direct connectivity model is used, CEs (2) establish Border Gateway Protocol (BGP) peering with MSEEs (5).
+If the cloud exchange colocation, point-to-point Ethernet, or direct connectivity model is used, CEs (2) establish Border Gateway Protocol (BGP) peering with MSEEs (5).
If the any-to-any (IPVPN) connectivity model is used, PE-MSEEs (4) establish BGP peering with MSEEs (5). PE-MSEEs propagate the routes received from Microsoft back to the customer network via the IPVPN service provider network.
Provisioning an ExpressRoute circuit establishes a redundant layer 2 connection
In the Azure portal, open the page for the ExpressRoute circuit. The ![3][3] section of the page lists the ExpressRoute essentials, as shown in the following screenshot:
-![4][4]
+![4][4]
-In the ExpressRoute essentials, **Circuit status** indicates the status of the circuit on the Microsoft side. **Provider status** indicates if the circuit has been provisioned or not provisioned on the service-provider side.
+In the ExpressRoute essentials, **Circuit status** indicates the status of the circuit on the Microsoft side. **Provider status** indicates if the circuit has been provisioned or not provisioned on the service-provider side.
For an ExpressRoute circuit to be operational, **Circuit status** must be **Enabled**, and **Provider status** must be **Provisioned**.
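If you prefer to verify this from the command line, a minimal Azure PowerShell sketch along these lines (the circuit and resource group names are placeholders) surfaces the same provisioning states that appear in the example output that follows:

```azurepowershell
# Retrieve the circuit and inspect its Microsoft-side and provider-side provisioning states.
$circuit = Get-AzExpressRouteCircuit -Name "<circuit-name>" -ResourceGroupName "<resource-group-name>"
$circuit.CircuitProvisioningState          # Expect: Enabled
$circuit.ServiceProviderProvisioningState  # Expect: Provisioned
```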
Sku : {
"Name": "Standard_UnlimitedData", "Tier": "Standard", "Family": "UnlimitedData"
- }
+ }
CircuitProvisioningState : Enabled ServiceProviderProvisioningState : Provisioned
-ServiceProviderNotes :
+ServiceProviderNotes :
ServiceProviderProperties : { "ServiceProviderName": "****", "PeeringLocation": "******", "BandwidthInMbps": 100
- }
+ }
ServiceKey : ************************************** Peerings : [] Authorizations : []
ServiceProviderProvisioningState : Provisioned
## Validate peering configuration
-After the service provider has completed provisioning the ExpressRoute circuit, multiple routing configurations based on external BGP (eBGP) can be created over the ExpressRoute circuit between CEs/MSEE-PEs (2/4) and MSEEs (5). Each ExpressRoute circuit can have one or both of the following peering configurations:
+After the service provider has completed provisioning the ExpressRoute circuit, multiple routing configurations based on external BGP (eBGP) can be created over the ExpressRoute circuit between CEs/MSEE-PEs (2/4) and MSEEs (5). Each ExpressRoute circuit can have one or both of the following peering configurations:
- Azure private peering: traffic to private virtual networks in Azure
-- Microsoft peering: traffic to public endpoints of platform as a service (PaaS) and software as a service (SaaS)
+- Microsoft peering: traffic to public endpoints of platform as a service (PaaS) and software as a service (SaaS)
For more information on how to create and modify routing configuration, see the article [Create and modify routing for an ExpressRoute circuit][CreatePeering].
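You can also read a peering's configuration directly with Azure PowerShell. Here's a minimal sketch, assuming placeholder circuit and resource group names and the Azure private peering:

```azurepowershell
# Get the circuit, then read the private peering configuration from it.
$circuit = Get-AzExpressRouteCircuit -Name "<circuit-name>" -ResourceGroupName "<resource-group-name>"
Get-AzExpressRouteCircuitPeeringConfig -Name "AzurePrivatePeering" -ExpressRouteCircuit $circuit
```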
In the Azure portal, you can check the status of an ExpressRoute circuit on the
![5][5]
-In the preceding example, Azure private peering is provisioned, but Azure public and Microsoft peerings aren't provisioned. A successfully provisioned peering context would also have the primary and secondary point-to-point subnets listed. The /30 subnets are used for the interface IP address of the MSEEs and CEs/PE-MSEEs. For the peerings that are provisioned, the listing also indicates who last modified the configuration.
+In the preceding example, Azure private peering is provisioned, but Azure public and Microsoft peerings aren't provisioned. A successfully provisioned peering context would also have the primary and secondary point-to-point subnets listed. The /30 subnets are used for the interface IP address of the MSEEs and CEs/PE-MSEEs. For the peerings that are provisioned, the listing also indicates who last modified the configuration.
> [!NOTE]
-> If enabling a peering fails, check if the assigned primary and secondary subnets match the configuration on the linked CE/PE-MSEE. Also check if the correct `VlanId`, `AzureASN`, and `PeerASN` values are used on MSEEs, and if these values map to the ones used on the linked CE/PE-MSEE.
+> If enabling a peering fails, check if the assigned primary and secondary subnets match the configuration on the linked CE/PE-MSEE. Also check if the correct `VlanId`, `AzureASN`, and `PeerASN` values are used on MSEEs, and if these values map to the ones used on the linked CE/PE-MSEE.
>
-> If MD5 hashing is chosen, the shared key should be the same on MSEE and CE/PE-MSEE pairs. Previously configured shared keys would not be displayed for security reasons.
+> If MD5 hashing is chosen, the shared key should be the same on MSEE and CE/PE-MSEE pairs. Previously configured shared keys would not be displayed for security reasons.
> > If you need to change any of these configurations on an MSEE router, see [Create and modify routing for an ExpressRoute circuit][CreatePeering].
AzureASN : 12076
PeerASN : 123## PrimaryPeerAddressPrefix : 172.16.0.0/30 SecondaryPeerAddressPrefix : 172.16.0.4/30
-PrimaryAzurePort :
-SecondaryAzurePort :
-SharedKey :
+PrimaryAzurePort :
+SecondaryAzurePort :
+SharedKey :
VlanId : 200 MicrosoftPeeringConfig : null ProvisioningState : Succeeded
If a peering isn't configured, you get an error message. Here's an example respo
```azurepowershell Get-AzExpressRouteCircuitPeeringConfig : Sequence contains no matching element At line:1 char:1
- + Get-AzExpressRouteCircuitPeeringConfig -Name "AzurePublicPeering ...
- + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- + CategoryInfo : CloseError: (:) [Get-AzExpr...itPeeringConfig], InvalidOperationException
- + FullyQualifiedErrorId : Microsoft.Azure.Commands.Network.GetAzureExpressRouteCircuitPeeringConfigCommand
+ + Get-AzExpressRouteCircuitPeeringConfig -Name "AzurePublicPeering ...
+ + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ + CategoryInfo : CloseError: (:) [Get-AzExpr...itPeeringConfig], InvalidOperationException
+ + FullyQualifiedErrorId : Microsoft.Azure.Commands.Network.GetAzureExpressRouteCircuitPeeringConfigCommand
``` > [!NOTE]
-> If enabling a peering fails, check if the assigned primary and secondary subnets match the configuration on the linked CE/PE-MSEE. Also check if the correct `VlanId`, `AzureASN`, and `PeerASN` values are used on MSEEs, and if these values map to the ones used on the linked CE/PE-MSEE.
->
-> If MD5 hashing is chosen, the shared key should be the same on MSEE and CE/PE-MSEE pairs. Previously configured shared keys would not be displayed for security reasons.
+> If enabling a peering fails, check if the assigned primary and secondary subnets match the configuration on the linked CE/PE-MSEE. Also check if the correct `VlanId`, `AzureASN`, and `PeerASN` values are used on MSEEs, and if these values map to the ones used on the linked CE/PE-MSEE.
>
-> If you need to change any of these configurations on an MSEE router, see [Create and modify routing for an ExpressRoute circuit][CreatePeering].
+> If MD5 hashing is chosen, the shared key should be the same on MSEE and CE/PE-MSEE pairs. Previously configured shared keys would not be displayed for security reasons.
+>
+> If you need to change any of these configurations on an MSEE router, see [Create and modify routing for an ExpressRoute circuit][CreatePeering].
> [!NOTE] > On a /30 subnet assigned for the interface, Microsoft picks the second usable IP address of the subnet for the MSEE interface. So, ensure that the first usable IP address of the subnet has been assigned on the peered CE/PE-MSEE.
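As an illustration only, using the example point-to-point prefix shown earlier in this article:

```azurepowershell
# Illustration for the primary subnet 172.16.0.0/30 used in the earlier example:
# the first usable address goes on your side, the second is used by Microsoft.
$onPremisesSide = "172.16.0.1"   # configure on the CE/PE-MSEE interface
$microsoftSide  = "172.16.0.2"   # Microsoft assigns this to the MSEE interface
```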
Here's an example response:
```output Network : 10.1.0.0/16 NextHop : 10.17.17.141
-LocPrf :
+LocPrf :
Weight : 0 Path : 65515 Network : 10.1.0.0/16 NextHop : 10.17.17.140*
-LocPrf :
+LocPrf :
Weight : 0 Path : 65515 Network : 10.2.20.0/25 NextHop : 172.16.0.1
-LocPrf :
+LocPrf :
Weight : 0 Path : 123## ```
Test your private peering connectivity by counting packets arriving at and leavi
1. Run the [PsPing](/sysinternals/downloads/psping) test from your on-premises IP address to your Azure IP address, and keep it running during the connectivity test.
-1. Fill out the fields of the form. Be sure to enter the same on-premises and Azure IP addresses that you used in step 5. Then select **Submit** and wait for your results to load.
+1. Fill out the fields of the form. Be sure to enter the same on-premises and Azure IP addresses that you used in step 5. Then select **Submit** and wait for your results to load.
:::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/form.png" alt-text="Screenshot of the form for debugging an A C L.":::
This test result has the following properties:
## Verify availability of the virtual network gateway
-The ExpressRoute virtual network gateway facilitates the management and control plane connectivity to private link services and private IPs deployed to an Azure virtual network. Microsoft manages the virtual network gateway infrastructure and sometimes undergoes maintenance.
+The ExpressRoute virtual network gateway facilitates the management and control plane connectivity to private link services and private IPs deployed to an Azure virtual network. Microsoft manages the virtual network gateway infrastructure, which sometimes undergoes maintenance.
During a maintenance period, performance of the virtual network gateway may be reduced. To troubleshoot connectivity issues to the virtual network and see if a recent maintenance event caused reduced capacity, follow these steps:
During a maintenance period, performance of the virtual network gateway may redu
1. Wait for the diagnostics to run and interpret the results. :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/gateway-result.png" alt-text="Screenshot of the diagnostic results.":::
-
+ If maintenance was done on your virtual network gateway during a period when you experienced packet loss or latency, it's possible that the reduced capacity of the gateway contributed to the connectivity issues for the targeted virtual network. Follow the recommended steps. To support a higher network throughput and avoid connectivity issues during future maintenance events, consider upgrading the [virtual network gateway SKU](expressroute-about-virtual-network-gateways.md#gwsku).
## Next steps
expressroute Expressroute Troubleshooting Network Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-network-performance.md
There are three basic steps to use this toolkit for Performance testing.
1. Installing the PowerShell Module.
- ```powershell
- (new-object Net.WebClient).DownloadString("https://aka.ms/AzureCT") | Invoke-Expression
-
- ```
+ ```powershell
+ (new-object Net.WebClient).DownloadString("https://aka.ms/AzureCT") | Invoke-Expression
+
+ ```
- This command downloads the PowerShell module and installs it locally.
+ This command downloads the PowerShell module and installs it locally.
2. Install the supporting applications.
- ```powershell
- Install-LinkPerformance
- ```
- This AzureCT command installs iPerf and PSPing in a new directory "C:\ACTTools", it also opens the Windows Firewall ports to allow ICMP and port 5201 (iPerf) traffic.
+ ```powershell
+ Install-LinkPerformance
+ ```
+ This AzureCT command installs iPerf and PSPing in a new directory, "C:\ACTTools". It also opens the Windows Firewall ports to allow ICMP and port 5201 (iPerf) traffic.
3. Run the performance test.
- First, on the remote host you must install and run iPerf in server mode. Also ensure the remote host is listening on either 3389 (RDP for Windows) or 22 (SSH for Linux) and allowing traffic on port 5201 for iPerf. If the remote host is Windows, install the AzureCT and run the Install-LinkPerformance command. The command will set up iPerf and the firewall rules needed to start iPerf in server mode successfully.
-
- Once the remote machine is ready, open PowerShell on the local machine and start the test:
- ```powershell
- Get-LinkPerformance -RemoteHost 10.0.0.1 -TestSeconds 10
- ```
+ First, on the remote host you must install and run iPerf in server mode. Also ensure the remote host is listening on either 3389 (RDP for Windows) or 22 (SSH for Linux) and allowing traffic on port 5201 for iPerf. If the remote host is Windows, install the AzureCT and run the Install-LinkPerformance command. The command will set up iPerf and the firewall rules needed to start iPerf in server mode successfully.
+
+ Once the remote machine is ready, open PowerShell on the local machine and start the test:
+ ```powershell
+ Get-LinkPerformance -RemoteHost 10.0.0.1 -TestSeconds 10
+ ```
- This command runs a series of concurrent load and latency tests to help estimate the bandwidth capacity and latency of your network link.
+ This command runs a series of concurrent load and latency tests to help estimate the bandwidth capacity and latency of your network link.
4. Review the output of the tests.
There are three basic steps to use this toolkit for Performance testing.
:::image type="content" source="./media/expressroute-troubleshooting-network-performance/powershell-output.png" alt-text="Screenshot of PowerShell output of the Link Performance test.":::
- The detailed results of all the iPerf and PSPing tests are in individual text files in the AzureCT tools directory at "C:\ACTTools."
+ The detailed results of all the iPerf and PSPing tests are in individual text files in the AzureCT tools directory at "C:\ACTTools."
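To locate the most recent result files quickly, something along these lines works (the path is the AzureCT directory from the installation step above):

```powershell
# List the AzureCT result files, newest first.
Get-ChildItem -Path 'C:\ACTTools' -Filter *.txt |
    Sort-Object LastWriteTime -Descending |
    Select-Object Name, LastWriteTime
```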
## Troubleshooting
Test setup:
- A DS5v2 VM running Windows Server 2016 on the VNet. The VM was non-domain joined, built from the default Azure image (no optimization or customization) with AzureCT installed. - All tests use the AzureCT Get-LinkPerformance command with a 5-minute load test for each of the six test runs. For example:
- ```powershell
- Get-LinkPerformance -RemoteHost 10.0.0.1 -TestSeconds 300
- ```
+ ```powershell
+ Get-LinkPerformance -RemoteHost 10.0.0.1 -TestSeconds 300
+ ```
- The data flow for each test had the load flowing from the on-premises physical server (iPerf client in Seattle) up to the Azure VM (iPerf server in the listed Azure region). - The "Latency" column data is from the No Load test (a TCP latency test without iPerf running). - The "Max Bandwidth" column data is from the 16 TCP flow load test with a 1-Mb window size.
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-migrate.md
Last updated 03/30/2022-+
Usage example:
This script requires the latest Azure PowerShell. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
-```azurepowershell
+```powershell
<# .SYNOPSIS
- Given an Azure firewall policy id the script will transform it to a Premium Azure firewall policy.
- The script will first pull the policy, transform/add various parameters and then upload a new premium policy.
- The created policy will be named <previous_policy_name>_premium if no new name provided else new policy will be named as the parameter passed.
+ Given an Azure firewall policy ID, the script transforms it into a Premium Azure firewall policy.
+ The script first pulls the policy, transforms/adds various parameters, and then uploads a new Premium policy.
+ The created policy is named <previous_policy_name>_premium if no new name is provided; otherwise, the new policy is named according to the parameter passed.
.Example Transform-Policy -PolicyId /subscriptions/XXXXX-XXXXXX-XXXXX/resourceGroups/some-resource-group/providers/Microsoft.Network/firewallPolicies/policy-name -NewPolicyName <optional param for the new policy name> #> param (
- #Resource id of the azure firewall policy.
# Resource ID of the Azure firewall policy.
[Parameter(Mandatory=$true)] [string] $PolicyId,
function TransformPolicyToPremium {
[Parameter(Mandatory=$true)] [Microsoft.Azure.Commands.Network.Models.PSAzureFirewallPolicy] $Policy
- )
+ )
$NewPolicyParameters = @{
- Name = (GetPolicyNewName -Policy $Policy)
- ResourceGroupName = $Policy.ResourceGroupName
- Location = $Policy.Location
- BasePolicy = $Policy.BasePolicy.Id
+ Name = (GetPolicyNewName -Policy $Policy)
+ ResourceGroupName = $Policy.ResourceGroupName
+ Location = $Policy.Location
+ BasePolicy = $Policy.BasePolicy.Id
ThreatIntelMode = $Policy.ThreatIntelMode
- ThreatIntelWhitelist = $Policy.ThreatIntelWhitelist
- PrivateRange = $Policy.PrivateRange
- DnsSetting = $Policy.DnsSettings
- SqlSetting = $Policy.SqlSetting
- ExplicitProxy = $Policy.ExplicitProxy
- DefaultProfile = $Policy.DefaultProfile
- Tag = $Policy.Tag
- SkuTier = "Premium"
+ ThreatIntelWhitelist = $Policy.ThreatIntelWhitelist
+ PrivateRange = $Policy.PrivateRange
+ DnsSetting = $Policy.DnsSettings
+ SqlSetting = $Policy.SqlSetting
+ ExplicitProxy = $Policy.ExplicitProxy
+ DefaultProfile = $Policy.DefaultProfile
+ Tag = $Policy.Tag
+ SkuTier = "Premium"
} Write-Host "Creating new policy"
If you use Azure Firewall Standard SKU with firewall policy, you can use the All
The minimum Azure PowerShell version requirement is 6.5.0. For more information, see [Az 6.5.0](https://www.powershellgallery.com/packages/Az/6.5.0).
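To confirm that your installed Az module meets the minimum, a quick check such as the following can help:

```powershell
# Show the newest installed version of the Az module; it should be 6.5.0 or later.
(Get-Module -ListAvailable -Name Az | Sort-Object Version -Descending | Select-Object -First 1).Version
```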
-
+ ### Migrate a VNET Hub Firewall -- Deallocate the Standard Firewall
+- Deallocate the Standard Firewall
```azurepowershell $azfw = Get-AzFirewall -Name "<firewall-name>" -ResourceGroupName "<resource-group-name>"
genomics Troubleshooting Guide Genomics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/troubleshooting-guide-genomics.md
You can locate the error messages associated with the workflow by:
### 1. Using the command line `msgen status` ```bash
-msgen status -u URL -k KEY -w ID
+msgen status -u URL -k KEY -w ID
```
There are three required arguments:
* KEY - the access key for your Genomics account * To find your URL and KEY, go to Azure portal and open your Microsoft Genomics account page. Under the **Management** heading, choose **Access keys**. There, you find both the API URL and your access keys.
-
+ * ID - the workflow ID
- * To find your workflow ID type in `msgen list` command. Assuming your config file contains the URL and your access keys, and is located is in the same location as your msgen exe, the command will look like this:
-
+ * To find your workflow ID, run the `msgen list` command. Assuming your config file contains the URL and your access keys, and is located in the same location as your msgen executable, the command will look like this:
+ ```bash msgen list -f "config.txt" ``` Output from this command will look like this :
-
+ ```bash
- Microsoft Genomics command-line client v0.7.4
+ Microsoft Genomics command-line client v0.7.4
Copyright (c) 2018 Microsoft. All rights reserved.
-
+ Workflow List - Total Count : 1
-
+ Workflow ID : 10001 Status : Completed successfully Message :
There are three required arguments:
``` > [!NOTE]
- > Alternatively you can include the path to the config file instead of directly entering the URL and KEY.
- If you include these arguments in the command line as well as the config file, the command-line arguments will take precedence.
+ > Alternatively you can include the path to the config file instead of directly entering the URL and KEY.
+ If you include these arguments in the command line as well as the config file, the command-line arguments will take precedence.
For workflow ID 1001, and config.txt file placed in the same path as the msgen executable, the command will look like this:
For workflow ID 1001, and config.txt file placed in the same path as the msgen e
msgen status -w 1001 -f "config.txt" ```
-### 2. Examine the contents of standardoutput.txt
+### 2. Examine the contents of standardoutput.txt
Locate the output container for the workflow in question. MSGEN creates a, `[workflowfilename].logs.zip` folder after every workflow execution. Unzip the folder to view its contents: * outputFileList.txt - a list of the output files produced during the workflow
For troubleshooting, examine the contents of standardoutput.txt and note any err
## Step 2: Try recommended steps for common errors
-This section briefly highlights common errors output by Microsoft Genomics service (msgen) and the strategies you can use to resolve them.
+This section briefly highlights common errors output by the Microsoft Genomics service (msgen) and strategies you can use to resolve them.
The Microsoft Genomics service (msgen) can throw the following two kinds of errors:
If you continue to have job failures, or if you have any other questions, contac
## Next steps
-In this article, you learned how to troubleshoot and resolve common issues with the Microsoft Genomics service. For more information and more general FAQ, see [Common questions](frequently-asked-questions-genomics.yml).
+In this article, you learned how to troubleshoot and resolve common issues with the Microsoft Genomics service. For more information and more general FAQ, see [Common questions](frequently-asked-questions-genomics.yml).
hdinsight Apache Ambari Troubleshoot Metricservice Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-metricservice-issues.md
java.lang.OutOfMemoryError: Java heap space
``` 2021-04-13 05:57:37,546 INFO [timeline] timeline.HadoopTimelineMetricsSink: No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times. ```
-
+ 2. Get the Apache Ambari Metrics Collector pid and check GC performance ``` ps -fu ams | grep 'org.apache.ambari.metrics.AMSApplicationServer' ```
-
+ 3. Check the garbage collection status using `jstat -gcutil <pid> 1000 100`. If you see the **FGCT** value increase significantly within a short time, it indicates that the Apache Ambari Metrics Collector is busy with Full GC and unable to process other requests.
### Resolution
To avoid these issues, consider using one of the following options:
> [!NOTE] > Cleaning up the AMS data removes all the historical AMS data available. If you need the history, this may not be the best option.
- 1. Login into the Ambari portal
- 1. Set AMS to maintenance
- 2. Stop AMS from Ambari
- 3. Identify the following from the **AMS Configs** screen
- 1. `hbase.rootdir` (Default value is `file:///mnt/data/ambari-metrics-collector/hbase`)
- 2. `hbase.tmp.dir`(Default value is `/var/lib/ambari-metrics-collector/hbase-tmp`)
1. Log in to the Ambari portal
+ 1. Set AMS to maintenance
+ 2. Stop AMS from Ambari
+ 3. Identify the following from the **AMS Configs** screen
+ 1. `hbase.rootdir` (Default value is `file:///mnt/data/ambari-metrics-collector/hbase`)
+ 2. `hbase.tmp.dir`(Default value is `/var/lib/ambari-metrics-collector/hbase-tmp`)
2. SSH into headnode where Apache Ambari Metrics Collector exists. As superuser: 1. Remove the AMS zookeeper data by **backing up** and removing the contents of `'hbase.tmp.dir'/zookeeper`
- 2. Remove any Phoenix spool files from `<hbase.tmp.dir>/phoenix-spool` folder
- 3. ***(It is worthwhile to skip this step initially and try restarting AMS to see if the issue is resolved. If AMS is still failing to come up, try this step)***
- AMS data would be stored in `hbase.rootdir` identified above. Use regular OS commands to back up and remove the files. Example:
- `tar czf /mnt/backupof-ambari-metrics-collector-hbase-$(date +%Y%m%d-%H%M%S).tar.gz /mnt/data/ambari-metrics-collector/hbase`
+ 2. Remove any Phoenix spool files from `<hbase.tmp.dir>/phoenix-spool` folder
+ 3. ***(It is worthwhile to skip this step initially and try restarting AMS to see if the issue is resolved. If AMS is still failing to come up, try this step)***
+ AMS data would be stored in `hbase.rootdir` identified above. Use regular OS commands to back up and remove the files. Example:
+ `tar czf /mnt/backupof-ambari-metrics-collector-hbase-$(date +%Y%m%d-%H%M%S).tar.gz /mnt/data/ambari-metrics-collector/hbase`
3. Restart AMS using Ambari. For Kafka cluster, if the above solutions do not help, consider the following solutions.
For Kafka cluster, if the above solutions do not help, consider the following so
- Ambari Metrics Service needs to handle a large number of Kafka metrics, so it's a good idea to enable only the metrics in the allowlist. Go to **Ambari** > **Ambari Metrics** > **CONFIGS** > **Advanced ams-env** and set the following property to true. After this modification, restart the impacted services in the Ambari UI as required. :::image type="content" source="./media/apache-ambari-troubleshoot-ams-issues/editing-allowed-metrics-ambari.png" alt-text="Screenshot of editing Ambari Metric Service allowlisted metrics properties." border="true":::
-
+ - Handling a large number of metrics with a standalone HBase that has limited memory can impact HBase response time, and as a result metrics can become unavailable. If the Kafka cluster has many topics and still generates a lot of allowed metrics, increase the heap memory for HMaster and RegionServer in Ambari Metrics Service. Go to **Ambari** > **Ambari Metrics** > **CONFIGS** > **Advanced hbase-env** > **HBase Master Maximum Memory** and **HBase RegionServer Maximum Memory** and increase the values. Restart the required services in the Ambari UI.
-
+ :::image type="content" source="./media/apache-ambari-troubleshoot-ams-issues/editing-hbase-memory-ambari.png" alt-text="Screenshot of editing Ambari Metric Service hbase memory properties." border="true"::: ## Next steps
hdinsight Apache Hbase Tutorial Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md
The HBase REST API is secured via [basic authentication](https://en.wikipedia.or
1. To enable the HBase REST API in the HDInsight cluster, add the following custom startup script to the **Script Action** section. You can add the startup script when you create the cluster or after the cluster has been created. For **Node Type**, select **Region Servers** to ensure that the script executes only in HBase Region Servers. -
- ```bash
- #! /bin/bash
-
- THIS_MACHINE=`hostname`
-
- if [[ $THIS_MACHINE != wn* ]]
- then
- printf 'Script to be executed only on worker nodes'
- exit 0
- fi
-
- RESULT=`pgrep -f RESTServer`
- if [[ -z $RESULT ]]
- then
- echo "Applying mitigation; starting REST Server"
- sudo python /usr/lib/python2.7/dist-packages/hdinsight_hbrest/HbaseRestAgent.py
- else
- echo "REST server already running"
- exit 0
- fi
- ```
+ ```bash
+ #! /bin/bash
+
+ THIS_MACHINE=`hostname`
+
+ if [[ $THIS_MACHINE != wn* ]]
+ then
+ printf 'Script to be executed only on worker nodes'
+ exit 0
+ fi
+
+ RESULT=`pgrep -f RESTServer`
+ if [[ -z $RESULT ]]
+ then
+ echo "Applying mitigation; starting REST Server"
+ sudo python /usr/lib/python2.7/dist-packages/hdinsight_hbrest/HbaseRestAgent.py
+ else
+ echo "REST server already running"
+ exit 0
+ fi
+ ```
1. Set environment variable for ease of use. Edit the commands below by replacing `MYPASSWORD` with the cluster login password. Replace `MYCLUSTERNAME` with the name of your HBase cluster. Then enter the commands.
hdinsight Hdinsight Hadoop Create Linux Clusters Arm Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-arm-templates.md
description: Learn how to create clusters for HDInsight by using Resource Manage
Previously updated : 06/23/2022 Last updated : 07/31/2023 # Create Apache Hadoop clusters in HDInsight by using Resource Manager templates
hdinsight Hdinsight Hadoop Customize Cluster Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md
description: Add custom components to HDInsight clusters by using script actions
Previously updated : 06/08/2022 Last updated : 07/31/2023 # Customize Azure HDInsight clusters by using script actions
hdinsight Hdinsight Troubleshoot Yarn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-yarn.md
These changes are visible immediately on the YARN Scheduler UI.
- [Connect to HDInsight (Apache Hadoop) by using SSH](./hdinsight-hadoop-linux-use-ssh-unix.md) - [Apache Hadoop YARN concepts and applications](https://hadoop.apache.org/docs/r2.7.4/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html#Concepts_and_Flow)
+## How do I check YARN application diagnostics information?
+
+Diagnostics in the YARN UI is a feature that allows you to view the status and logs of your applications running on YARN. Diagnostics can help you troubleshoot and debug your applications, and monitor their performance and resource usage.
+
+To view the diagnostics of a specific application, select the application ID in the applications list. The application details page also shows a list of all the attempts that have been made to run the application. Select any attempt to see more details, such as the attempt ID, container ID, node ID, start time, finish time, and diagnostics.
+ ## How do I troubleshoot YARN common issues?
hdinsight Interactive Query Troubleshoot Hive Logs Diskspace Full Headnodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-hive-logs-diskspace-full-headnodes.md
Automatic hive log deletion is not configured in the advanced hive-log4j2 config
- **Date** - You also can uncomment and switch the conditions. Then change `appender.RFA.strategy.action.condition.nested_condition.lastMod.age` to an age of your choice.
- ```
- # Deletes logs based on total accumulated size, keeping the most recent
- #appender.RFA.strategy.action.condition.nested_condition.fileSize.type = IfAccumulatedFileSize
- #appender.RFA.strategy.action.condition.nested_condition.fileSize.exceeds = 60GB
- # Deletes logs IfLastModified date is greater than number of days
- appender.RFA.strategy.action.condition.nested_condition.lastMod.type = IfLastModified
- appender.RFA.strategy.action.condition.nested_condition.lastMod.age = 30D
- ```
+ ```
+ # Deletes logs based on total accumulated size, keeping the most recent
+ #appender.RFA.strategy.action.condition.nested_condition.fileSize.type = IfAccumulatedFileSize
+ #appender.RFA.strategy.action.condition.nested_condition.fileSize.exceeds = 60GB
+ # Deletes logs IfLastModified date is greater than number of days
+ appender.RFA.strategy.action.condition.nested_condition.lastMod.type = IfLastModified
+ appender.RFA.strategy.action.condition.nested_condition.lastMod.age = 30D
+ ```
- **Combination of Total Size and Date** - You can combine both options by uncommenting like below. The log4j2 will then behave as so: Start deleting logs when either condition is met.
- ```
- # Deletes logs based on total accumulated size, keeping the most recent
- appender.RFA.strategy.action.condition.nested_condition.fileSize.type = IfAccumulatedFileSize
- appender.RFA.strategy.action.condition.nested_condition.fileSize.exceeds = 60GB
- # Deletes logs IfLastModified date is greater than number of days
- appender.RFA.strategy.action.condition.nested_condition.lastMod.type = IfLastModified
- appender.RFA.strategy.action.condition.nested_condition.lastMod.age = 30D
- ```
+ ```
+ # Deletes logs based on total accumulated size, keeping the most recent
+ appender.RFA.strategy.action.condition.nested_condition.fileSize.type = IfAccumulatedFileSize
+ appender.RFA.strategy.action.condition.nested_condition.fileSize.exceeds = 60GB
+ # Deletes logs IfLastModified date is greater than number of days
+ appender.RFA.strategy.action.condition.nested_condition.lastMod.type = IfLastModified
+ appender.RFA.strategy.action.condition.nested_condition.lastMod.age = 30D
+ ```
5. Save the configurations and restart the required components. ## Next steps
hdinsight Apache Spark Intellij Tool Failure Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-failure-debug.md
keywords: debug remotely intellij, remote debugging intellij, ssh, intellij, hdi
Previously updated : 06/23/2022 Last updated : 07/31/2023 # Failure spark job debugging with Azure Toolkit for IntelliJ (preview)
hdinsight Apache Spark Job Debugging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-job-debugging.md
description: Use YARN UI, Spark UI, and Spark History server to track and debug
Previously updated : 06/23/2022 Last updated : 07/31/2023 # Debug Apache Spark jobs running on Azure HDInsight
hdinsight Apache Spark Use With Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-use-with-data-lake-store.md
If you created an HDInsight cluster with Data Lake Storage as additional storage
4. Because you created a notebook using the PySpark kernel, you do not need to create any contexts explicitly. The Spark and Hive contexts will be automatically created for you when you run the first code cell. You can start by importing the types required for this scenario. To do so, paste the following code snippet in a cell and press **SHIFT + ENTER**.
- ```scala
+ ```scala
from pyspark.sql.types import * ```
If you created an HDInsight cluster with Data Lake Storage as additional storage
6. Because you are using a PySpark kernel, you can now directly run a SQL query on the temporary table **hvac** that you just created by using the `%%sql` magic. For more information about the `%%sql` magic, as well as other magics available with the PySpark kernel, see [Kernels available on Jupyter Notebooks with Apache Spark HDInsight clusters](apache-spark-jupyter-notebook-kernels.md#parameters-supported-with-the-sql-magic).
- ```sql
+ ```sql
%%sql
- SELECT buildingID, (targettemp - actualtemp) AS temp_diff, date FROM hvac WHERE date = \"6/1/13\"
+ SELECT buildingID, (targettemp - actualtemp) AS temp_diff, date FROM hvac WHERE date = \"6/1/13\"
``` 7. Once the job is completed successfully, the following tabular output is displayed by default.
healthcare-apis Tutorial Web App Test Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-test-postman.md
Last updated 06/03/2022
# Testing the FHIR API on Azure API for FHIR
-In the previous tutorial, you deployed the Azure API for FHIR and registered your client application. You're now ready to test your Azure API for FHIR.
+In the previous tutorial, you deployed the Azure API for FHIR and registered your client application. You're now ready to test your Azure API for FHIR.
## Retrieve capability statement
-First we'll get the capability statement for your Azure API for FHIR.
+First we'll get the capability statement for your Azure API for FHIR.
1. Open Postman. 1. Retrieve the capability statement by sending `GET https://\<FHIR-SERVER-NAME>.azurehealthcareapis.com/metadata`. In the image below, the FHIR server name is **fhirserver**.
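If you want to check the same endpoint outside Postman, a quick PowerShell call also works (the server name is a placeholder); the metadata endpoint doesn't require an access token:

```powershell
# Fetch the capability statement from the FHIR metadata endpoint.
Invoke-RestMethod -Method Get -Uri "https://<FHIR-SERVER-NAME>.azurehealthcareapis.com/metadata"
```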
Now you have access, you can create a new patient. Here's a sample of a simple p
``` json {
- "resourceType": "Patient",
- "active": true,
- "name": [
- {
- "use": "official",
- "family": "Kirk",
- "given": [
- "James",
- "Tiberious"
- ]
- },
- {
- "use": "usual",
- "given": [
- "Jim"
- ]
- }
- ],
- "gender": "male",
- "birthDate": "1960-12-25"
+ "resourceType": "Patient",
+ "active": true,
+ "name": [
+ {
+ "use": "official",
+ "family": "Kirk",
+ "given": [
+ "James",
+ "Tiberious"
+ ]
+ },
+ {
+ "use": "usual",
+ "given": [
+ "Jim"
+ ]
+ }
+ ],
+ "gender": "male",
+ "birthDate": "1960-12-25"
} ``` This POST will create a new patient in your FHIR server with the name James Tiberious Kirk.
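If you'd rather send the same request from PowerShell than Postman, a sketch like the following works, assuming `$token` holds a valid access token for your FHIR server and the JSON above is saved as `patient.json` (both names are hypothetical):

```powershell
# POST the sample Patient resource to the FHIR server.
# $token is an Azure AD access token for the FHIR server (see the previous tutorial).
Invoke-RestMethod -Method Post -Uri "https://<FHIR-SERVER-NAME>.azurehealthcareapis.com/Patient" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/fhir+json" `
    -Body (Get-Content -Raw -Path .\patient.json)
```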
If you do the GET command to retrieve a patient again, you'll see James Tiberiou
## Troubleshooting access issues
-If you ran into issues during any of these steps, review the documents we have put together on Azure Active Directory and the Azure API for FHIR.
+If you ran into issues during any of these steps, review the documents we have put together on Azure Active Directory and the Azure API for FHIR.
* [Azure AD and Azure API for FHIR](azure-active-directory-identity-configuration.md) - This document outlines some of the basic principles of Azure Active Directory and how it interacts with the Azure API for FHIR. * [Access token validation](azure-api-fhir-access-token-validation.md) - This how-to guide gives more specific details on access token validation and steps to take to resolve access issues.
iot-develop Quickstart Devkit Microchip Atsame54 Xpro Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro-iot-hub.md
To connect the Microchip E54 to Azure, you modify a configuration file for Azure
1. Comment out the following line near the top of the file as shown: ```c
- // #define ENABLE_DPS
+ // #define ENABLE_DPS
``` 1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
You can use the **Termite** app to monitor communication and confirm that your d
```output Initializing DHCP
- MAC: *************
- IP address: 192.168.0.41
- Mask: 255.255.255.0
- Gateway: 192.168.0.1
+ MAC: *************
+ IP address: 192.168.0.41
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized
-
+ Initializing DNS client
- DNS address: 192.168.0.1
- DNS address: ***********
+ DNS address: 192.168.0.1
+ DNS address: ***********
SUCCESS: DNS client initialized
-
+ Initializing SNTP time sync
- SNTP server 0.pool.ntp.org
- SNTP time update: Dec 3, 2022 0:5:35.572 UTC
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Dec 3, 2022 0:5:35.572 UTC
SUCCESS: SNTP initialized
-
+ Initializing Azure IoT Hub client
- Hub hostname: ***************
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsg;2
+ Hub hostname: ***************
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsg;2
SUCCESS: Connected to IoT Hub ```
Keep Termite open to monitor device output in the following steps.
## View device properties
-You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the Microchip E54. These capabilities rely on the device model published for the Microchip E54 in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart.
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the Microchip E54. These capabilities rely on the device model published for the Microchip E54 in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart.
To access IoT Plug and Play components for the device in IoT Explorer:
To access IoT Plug and Play components for the device in IoT Explorer:
To view device properties using Azure IoT Explorer:
-1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the led is on or off.
+1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the LED is on or off.
1. Select the **Properties (writable)** tab. It displays the interval that telemetry is sent. 1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
To view device properties using Azure IoT Explorer:
1. IoT Explorer responds with a notification. You can also observe the update in Termite. 1. Set the telemetry interval back to 10.
-
+ To use Azure CLI to view device properties: 1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
To use Azure CLI to call a method:
{ "payload": {}, "status": 200
- }
+ }
``` 1. Check your device to confirm the LED state.
For debugging the application, see [Debugging with Visual Studio Code](https://g
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the Microchip E54 device. You connected the Microchip E54 to Azure IoT Hub, and carried out tasks such as viewing telemetry and calling a method on the device.
-As a next step, explore the following articles to learn more about using the IoT device SDKs to connect general devices, and embedded devices, to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect general devices, and embedded devices, to Azure IoT.
> [!div class="nextstepaction"] > [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166)
-In this quickstart, you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (from now on, MXCHIP DevKit) to Azure IoT.
+In this quickstart, you use Azure RTOS to connect an MXCHIP AZ3166 IoT DevKit (from now on, MXCHIP DevKit) to Azure IoT.
You complete the following tasks:
To install the tools:
1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows. 1. Run the following code to confirm that CMake version 3.14 or later is installed.
- ```shell
- cmake --version
- ```
+ ```shell
+ cmake --version
+ ```
[!INCLUDE [iot-develop-create-cloud-components](../../includes/iot-develop-create-cloud-components.md)]
To connect the MXCHIP DevKit to Azure, you modify a configuration file for Wi-Fi
1. Comment out the following line near the top of the file as shown:
- ```c
- // #define ENABLE_DPS
- ```
+ ```c
+ // #define ENABLE_DPS
+ ```
1. Set the Wi-Fi constants to the following values from your local environment.
You can use the **Termite** app to monitor communication and confirm that your d
1. Press the **Reset** button on the device. The button is labeled on the device and located near the Micro USB connector. 1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
- ```output
+ ```output
Starting Azure thread
-
-
++ Initializing WiFi
- MAC address: ******************
+ MAC address: ******************
SUCCESS: WiFi initialized
-
+ Connecting WiFi
- Connecting to SSID 'iot'
- Attempt 1...
+ Connecting to SSID 'iot'
+ Attempt 1...
SUCCESS: WiFi connected
-
+ Initializing DHCP
- IP address: 192.168.0.49
- Mask: 255.255.255.0
- Gateway: 192.168.0.1
+ IP address: 192.168.0.49
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized
-
+ Initializing DNS client
- DNS address: 192.168.0.1
+ DNS address: 192.168.0.1
SUCCESS: DNS client initialized
-
+ Initializing SNTP time sync
- SNTP server 0.pool.ntp.org
- SNTP time update: Jan 4, 2023 22:57:32.658 UTC
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Jan 4, 2023 22:57:32.658 UTC
SUCCESS: SNTP initialized
-
+ Initializing Azure IoT Hub client
- Hub hostname: ***.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsgmxchip;2
+ Hub hostname: ***.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgmxchip;2
SUCCESS: Connected to IoT Hub
-
+ Receive properties: {"desired":{"$version":1},"reported":{"deviceInformation":{"__t":"c","manufacturer":"MXCHIP","model":"AZ3166","swVersion":"1.0.0","osName":"Azure RTOS","processorArchitecture":"Arm Cortex M4","processorManufacturer":"STMicroelectronics","totalStorage":1024,"totalMemory":128},"ledState":false,"telemetryInterval":{"ac":200,"av":1,"value":10},"$version":4}} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=3{"deviceInformation":{"__t":"c","manufacturer":"MXCHIP","model":"AZ3166","swVersion":"1.0.0","osName":"Azure RTOS","processorArchitecture":"Arm Cortex M4","processorManufacturer":"STMicroelectronics","totalStorage":1024,"totalMemory":128}} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=5{"ledState":false} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=7{"telemetryInterval":{"ac":200,"av":1,"value":10}}
-
+ Starting Main loop Telemetry message sent: {"humidity":31.01,"temperature":25.62,"pressure":927.3}. Telemetry message sent: {"magnetometerX":177,"magnetometerY":-36,"magnetometerZ":-346.5}. Telemetry message sent: {"accelerometerX":-22.5,"accelerometerY":0.54,"accelerometerZ":1049.01}. Telemetry message sent: {"gyroscopeX":0,"gyroscopeY":0,"gyroscopeZ":0}.
- ```
+ ```
Keep Termite open to monitor device output in the following steps. ## View device properties
-You can use Azure IoT Explorer to view and manage the properties of your devices. In this section and the following sections, you use the Plug and Play capabilities that surfaced in IoT Explorer to manage and interact with the MXCHIP DevKit. These capabilities rely on the device model published for the MXCHIP DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. You can perform many actions without using plug and play by selecting the action from the left side menu of your device pane in IoT Explorer. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+You can use Azure IoT Explorer to view and manage the properties of your devices. In this section and the following sections, you use the Plug and Play capabilities that surfaced in IoT Explorer to manage and interact with the MXCHIP DevKit. These capabilities rely on the device model published for the MXCHIP DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. You can perform many actions without using plug and play by selecting the action from the left side menu of your device pane in IoT Explorer. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
To access IoT Plug and Play components for the device in IoT Explorer:
To view device properties using Azure IoT Explorer:
1. IoT Explorer responds with a notification. You can also observe the update in Termite. 1. Set the telemetry interval back to 10.
-
+ To use Azure CLI to view device properties: 1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
- ```azurecli
- az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
- ```
+ ```azurecli
+ az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
+ ```
1. Inspect the properties for your device in the console output.
To use Azure CLI to view device telemetry:
1. Run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. Use the names that you created previously in Azure IoT for your device and IoT hub.
- ```azurecli
+ ```azurecli
az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
- ```
+ ```
1. View the JSON output in the console.
- ```json
+ ```json
{ "event": { "origin": "mydevice",
To use Azure CLI to view device telemetry:
"payload": "{\"humidity\":41.21,\"temperature\":31.37,\"pressure\":1005.18}" } }
- ```
+ ```
1. Select CTRL+C to end monitoring.
-
## Call a direct method on the device
You can also use Azure IoT Explorer to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that turns an LED on or off. Optionally, you can do the same task using Azure CLI.
To use Azure CLI to call a method:
1. Run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command, and specify the method name and payload. For this method, setting `method-payload` to `true` turns on the LED, and setting it to `false` turns it off.
- ```azurecli
- az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload true --hub-name {YourIoTHubName}
- ```
+ ```azurecli
+ az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload true --hub-name {YourIoTHubName}
+ ```
- The CLI console shows the status of your method call on the device, where `204` indicates success.
+ The CLI console shows the status of your method call on the device, where `200` indicates success.
- ```json
- {
- "payload": {},
- "status": 200
- }
- ```
+ ```json
+ {
+ "payload": {},
+ "status": 200
+ }
+ ```
1. Check your device to confirm the LED state. 1. View the Termite terminal to confirm the output messages:
- ```output
+ ```output
Receive direct method: setLedState
- Payload: true
+ Payload: true
LED is turned ON Device twin property sent: {"ledState":true}
- ```
+ ```
## Troubleshoot and debug
For debugging the application, see [Debugging with Visual Studio Code](https://g
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the MXCHIP DevKit device. You also used the Azure CLI and/or IoT Explorer to create Azure resources, connect the MXCHIP DevKit securely to Azure, view telemetry, and send messages.
-As a next step, explore the following articles to learn more about using the IoT device SDKs to connect general devices, and embedded devices, to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect general devices, and embedded devices, to Azure IoT.
> [!div class="nextstepaction"] > [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub.md
To install the tools:
1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows. 1. Run the following code to confirm that CMake version 3.14 or later is installed.
- ```shell
- cmake --version
- ```
+ ```shell
+ cmake --version
+ ```
[!INCLUDE [iot-develop-create-cloud-components](../../includes/iot-develop-create-cloud-components.md)]
To connect the NXP EVK to Azure, you modify a configuration file for Azure IoT s
1. Comment out the following line near the top of the file as shown:
- ```c
- // #define ENABLE_DPS
- ```
+ ```c
+ // #define ENABLE_DPS
+ ```
1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
You can use the **Termite** app to monitor communication and confirm that your d
1. Press the **Reset** button on the device. The button is labeled on the device and located near the Micro USB connector. 1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
- ```output
+ ```output
Initializing DHCP
- MAC: **************
- IP address: 192.168.0.56
- Mask: 255.255.255.0
- Gateway: 192.168.0.1
+ MAC: **************
+ IP address: 192.168.0.56
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized
-
+ Initializing DNS client
- DNS address: 192.168.0.1
+ DNS address: 192.168.0.1
SUCCESS: DNS client initialized
-
+ Initializing SNTP time sync
- SNTP server 0.pool.ntp.org
- SNTP time update: Jan 11, 2023 20:37:37.90 UTC
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Jan 11, 2023 20:37:37.90 UTC
SUCCESS: SNTP initialized
-
+ Initializing Azure IoT Hub client
- Hub hostname: **************.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsg;2
+ Hub hostname: **************.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsg;2
SUCCESS: Connected to IoT Hub
-
+ Receive properties: {"desired":{"$version":1},"reported":{"$version":1}} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=3{"deviceInformation":{"__t":"c","manufacturer":"NXP","model":"MIMXRT1060-EVK","swVersion":"1.0.0","osName":"Azure RTOS","processorArchitecture":"Arm Cortex M7","processorManufacturer":"NXP","totalStorage":8192,"totalMemory":768}} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=5{"ledState":false} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=7{"telemetryInterval":{"ac":200,"av":1,"value":10}}
-
+ Starting Main loop Telemetry message sent: {"temperature":40.61}.
- ```
+ ```
Keep Termite open to monitor device output in the following steps. ## View device properties
-You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the NXP EVK. These capabilities rely on the device model published for the NXP EVK in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the NXP EVK. These capabilities rely on the device model published for the NXP EVK in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
To access IoT Plug and Play components for the device in IoT Explorer:
To access IoT Plug and Play components for the device in IoT Explorer:
To view device properties using Azure IoT Explorer:
-1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the led is on or off.
+1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the LED is on or off.
1. Select the **Properties (writable)** tab. It displays the interval that telemetry is sent. 1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
To view device properties using Azure IoT Explorer:
1. IoT Explorer responds with a notification. You can also observe the update in Termite. 1. Set the telemetry interval back to 10.
-
+ To use Azure CLI to view device properties: 1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
- ```azurecli
- az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
- ```
+ ```azurecli
+ az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
+ ```
1. Inspect the properties for your device in the console output.
To use Azure CLI to view device telemetry:
1. Run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. Use the names that you created previously in Azure IoT for your device and IoT hub.
- ```azurecli
- az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
- ```
+ ```azurecli
+ az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
+ ```
1. View the JSON output in the console.
- ```json
+ ```json
{ "event": { "origin": "mydevice",
To use Azure CLI to view device telemetry:
} } }
- ```
+ ```
1. Select CTRL+C to end monitoring.
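If you'd rather have monitoring stop on its own instead of pressing CTRL+C, the `--timeout` parameter of `az iot hub monitor-events` limits how long the connection stays open without receiving a message. The 30-second value below is only an illustration:

```azurecli
az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName} --timeout 30
```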
To use Azure CLI to call a method:
1. Run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command, and specify the method name and payload. For this method, setting `method-payload` to `true` would turn on an LED. There's no change on the device as there isn't an available LED to toggle. However, you can view the output in Termite to monitor the status of the methods.
- ```azurecli
- az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload true --hub-name {YourIoTHubName}
- ```
+ ```azurecli
+ az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload true --hub-name {YourIoTHubName}
+ ```
- The CLI console shows the status of your method call on the device, where `204` indicates success.
+ The CLI console shows the status of your method call on the device, where `200` indicates success.
- ```json
- {
- "payload": {},
- "status": 200
- }
- ```
+ ```json
+ {
+ "payload": {},
+ "status": 200
+ }
+ ```
1. Check your device to confirm the LED state. 1. View the Termite terminal to confirm the output messages:
- ```output
+ ```output
Received command: setLedState Payload: true LED is turned ON Sending property: $iothub/twin/PATCH/properties/reported/?$rid=15{"ledState":true}
- ```
+ ```
## Troubleshoot and debug
For debugging the application, see [Debugging with Visual Studio Code](https://g
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the NXP EVK device. You connected the NXP EVK to Azure IoT Hub, and carried out tasks such as viewing telemetry and calling a method on the device.
-As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs or Azure RTOS to connect devices to Azure IoT.
> [!div class="nextstepaction"] > [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
iot-develop Quickstart Devkit Renesas Rx65n Cloud Kit Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-cloud-kit-iot-hub.md
You complete the following tasks:
## Prerequisites
-* A PC running Windows 10 or Windows 11
+* A PC running Windows 10 or Windows 11.
* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. * [Git](https://git-scm.com/downloads) for cloning the repository * Azure CLI. You have two options for running Azure CLI commands in this quickstart:
To install the tools:
1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows. 1. Run the following commands to confirm that CMake version 3.14 or later is installed. Make certain that the RX compiler path is set up correctly.
- ```shell
- cmake --version
- rx-elf-gcc --version
- ```
+ ```shell
+ cmake --version
+ rx-elf-gcc --version
+ ```
To install the remaining tools: * Install [Renesas Flash Programmer](https://www.renesas.com/software-tool/renesas-flash-programmer-programming-gui) for Windows. The Renesas Flash Programmer development environment includes drivers and tools needed to flash the Renesas RX65N.
To connect the Renesas RX65N to Azure, you modify a configuration file for Wi-Fi
1. Comment out the following line near the top of the file as shown:
- ```c
- // #define ENABLE_DPS
- ```
+ ```c
+ // #define ENABLE_DPS
+ ```
1. Uncomment the following two lines near the end of the file as shown:
- ```c
- #define IOT_HUB_HOSTNAME ""
- #define IOT_HUB_DEVICE_ID ""
- ```
+ ```c
+ #define IOT_HUB_HOSTNAME ""
+ #define IOT_HUB_DEVICE_ID ""
+ ```
1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources.
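As a rough illustration, the two constants might end up looking like the following sketch; the hostname and device ID shown are placeholders, not values from this quickstart:

```c
/* Placeholder values only -- replace with the IoT hub hostname and device ID you saved earlier. */
#define IOT_HUB_HOSTNAME  "contoso-hub.azure-devices.net"
#define IOT_HUB_DEVICE_ID "mydevice"
```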
To connect the Renesas RX65N to Azure, you modify a configuration file for Wi-Fi
> For more information about setting up and getting started with the Renesas RX65N, see [Renesas RX65N Cloud Kit Quick Start](https://www.renesas.com/document/man/quick-start-guide-renesas-rx65n-cloud-kit). 1. Complete the following steps using the following image as a reference.
-
+ :::image type="content" source="media/quickstart-devkit-renesas-rx65n-cloud-kit-iot-hub/renesas-rx65n.jpg" alt-text="Photo of the Renesas RX65N board that shows the reset, USB, and E1/E2Lite."::: 1. Remove the **EJ2** link from the board to enable the E2 Lite debugger. The link is located underneath the **USER SW** button.
- > [!WARNING]
+ > [!WARNING]
> Failure to remove this link will result in being unable to flash the device. 1. Connect the **WiFi module** to the **Cloud Option Board**
To connect the Renesas RX65N to Azure, you modify a configuration file for Wi-Fi
:::image type="content" source="media/quickstart-devkit-renesas-rx65n-cloud-kit-iot-hub/rfp-auth.png" alt-text="Screenshot of Renesas Flash Programmer, Authentication.":::
-6. Select the *Connect Settings* tab, select the *Speed* dropdown, and set the speed to 1,000,000 bps.
+6. Select the *Connect Settings* tab, select the *Speed* dropdown, and set the speed to 1,000,000 bps.
> [!IMPORTANT]
- > If there are errors when you try to flash the board, you might need to lower the speed in this setting to 750,000 bps or lower.
+ > If there are errors when you try to flash the board, you might need to lower the speed in this setting to 750,000 bps or lower.
6. Select the *Operation* tab, then select the *Browse...* button and locate the *rx65n_azure_iot.hex* file created in the previous section.
-7. Press *Start* to begin flashing. This process takes less than a minute.
+7. Press *Start* to begin flashing. This process takes less than a minute.
### Confirm device connection details
You can use the **Termite** app to monitor communication and confirm that your d
1. Press the **Reset** button on the device. 1. In the **Termite** app, check the following checkpoint values to confirm that the device is initialized and connected to Azure IoT.
- ```output
+ ```output
Starting Azure thread
-
-
++ Initializing WiFi
- MAC address: ****************
- Firmware version 0.14
+ MAC address: ****************
+ Firmware version 0.14
SUCCESS: WiFi initialized
-
+ Connecting WiFi
- Connecting to SSID '*********'
- Attempt 1...
+ Connecting to SSID '*********'
+ Attempt 1...
SUCCESS: WiFi connected
-
+ Initializing DHCP
- IP address: 192.168.0.31
- Mask: 255.255.255.0
- Gateway: 192.168.0.1
+ IP address: 192.168.0.31
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized
-
+ Initializing DNS client
- DNS address: 192.168.0.1
+ DNS address: 192.168.0.1
SUCCESS: DNS client initialized
-
+ Initializing SNTP time sync
- SNTP server 0.pool.ntp.org
- SNTP server 1.pool.ntp.org
- SNTP time update: May 19, 2023 20:40:56.472 UTC
+ SNTP server 0.pool.ntp.org
+ SNTP server 1.pool.ntp.org
+ SNTP time update: May 19, 2023 20:40:56.472 UTC
SUCCESS: SNTP initialized
-
+ Initializing Azure IoT Hub client
- Hub hostname: ******.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsgrx65ncloud;1
+ Hub hostname: ******.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgrx65ncloud;1
SUCCESS: Connected to IoT Hub
-
+ Receive properties: {"desired":{"$version":1},"reported":{"$version":1}} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=3{"deviceInformation":{"__t":"c","manufacturer":"Renesas","model":"RX65N Cloud Kit","swVersion":"1.0.0","osName":"Azure RTOS","processorArchitecture":"RX65N","processorManufacturer":"Renesas","totalStorage":2048,"totalMemory":640}} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=5{"ledState":false} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=7{"telemetryInterval":{"ac":200,"av":1,"value":10}}
-
+ Starting Main loop Telemetry message sent: {"humidity":0,"temperature":0,"pressure":0,"gasResistance":0}. Telemetry message sent: {"accelerometerX":-632,"accelerometerY":62,"accelerometerZ":8283}. Telemetry message sent: {"gyroscopeX":2,"gyroscopeY":0,"gyroscopeZ":8}. Telemetry message sent: {"illuminance":107.17}.
- ```
+ ```
Keep Termite open to monitor device output in the following steps. ## View device properties
-You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the Renesas RX65N. These capabilities rely on the device model published for the Renesas RX65N in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the Renesas RX65N. These capabilities rely on the device model published for the Renesas RX65N in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
To access IoT Plug and Play components for the device in IoT Explorer:
To access IoT Plug and Play components for the device in IoT Explorer:
To view device properties using Azure IoT Explorer:
-1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the led is on or off.
+1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the LED is on or off.
1. Select the **Properties (writable)** tab. It displays the interval that telemetry is sent. 1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
To view device properties using Azure IoT Explorer:
1. IoT Explorer responds with a notification. You can also observe the update in Termite. 1. Set the telemetry interval back to 10.
-
+ To use Azure CLI to view device properties: 1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
- ```azurecli
- az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
- ```
+ ```azurecli
+ az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
+ ```
1. Inspect the properties for your device in the console output.
To use Azure CLI to view device telemetry:
1. Run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. Use the names that you created previously in Azure IoT for your device and IoT hub.
- ```azurecli
- az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
- ```
+ ```azurecli
+ az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
+ ```
1. View the JSON output in the console.
- ```json
+ ```json
{ "event": { "origin": "mydevice",
To use Azure CLI to view device telemetry:
} } }
- ```
+ ```
1. Select CTRL+C to end monitoring.
To use Azure CLI to call a method:
1. Run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command, and specify the method name and payload. For this method, setting `method-payload` to `true` turns on the LED, and setting it to `false` turns it off.
- ```azurecli
- az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload true --hub-name {YourIoTHubName}
- ```
+ ```azurecli
+ az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload true --hub-name {YourIoTHubName}
+ ```
- The CLI console shows the status of your method call on the device, where `200` indicates success.
+ The CLI console shows the status of your method call on the device, where `200` indicates success.
- ```json
- {
- "payload": {},
- "status": 200
- }
- ```
+ ```json
+ {
+ "payload": {},
+ "status": 200
+ }
+ ```
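To turn the LED back off, you can call the same method again with a `false` payload, for example:

```azurecli
az iot hub invoke-device-method --device-id mydevice --method-name setLedState --method-payload false --hub-name {YourIoTHubName}
```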
1. Check your device to confirm the LED state. 1. View the Termite terminal to confirm the output messages:
- ```output
+ ```output
Received command: setLedState
- Payload: true
- LED is turned ON
+ Payload: true
+ LED is turned ON
Sending property: $iothub/twin/PATCH/properties/reported/?$rid=23{"ledState":true}
- ```
+ ```
## Troubleshoot and debug
For debugging the application, see [Debugging with Visual Studio Code](https://g
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the Renesas RX65N device. You connected the Renesas RX65N to Azure, and carried out tasks such as viewing telemetry and calling a method on the device.
-As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs or Azure RTOS to connect devices to Azure IoT.
> [!div class="nextstepaction"] > [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
iot-develop Quickstart Devkit Renesas Rx65n Cloud Kit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-cloud-kit.md
To connect the Renesas RX65N to Azure, you'll modify a configuration file for Wi
> For more information about setting up and getting started with the Renesas RX65N, see [Renesas RX65N Cloud Kit Quick Start](https://www.renesas.com/document/man/quick-start-guide-renesas-rx65n-cloud-kit). 1. Complete the following steps using the following image as a reference.
-
+ :::image type="content" source="media/quickstart-devkit-renesas-rx65n-cloud-kit/renesas-rx65n.jpg" alt-text="Locate reset, USB, and E1/E2Lite on the Renesas RX65N board"::: 1. Remove the **EJ2** link from the board to enable the E2 Lite debugger. The link is located underneath the **USER SW** button.
- > [!WARNING]
+ > [!WARNING]
> Failure to remove this link will result in being unable to flash the device. 1. Connect the **WiFi module** to the **Cloud Option Board**
To connect the Renesas RX65N to Azure, you'll modify a configuration file for Wi
:::image type="content" source="media/quickstart-devkit-renesas-rx65n-cloud-kit/rfp-auth.png" alt-text="Screenshot of Renesas Flash Programmer, Authentication":::
-6. Select the *Connect Settings* tab, select the *Speed* dropdown, and set the speed to 1,000,000 bps.
+6. Select the *Connect Settings* tab, select the *Speed* dropdown, and set the speed to 1,000,000 bps.
> [!IMPORTANT]
- > If there are errors when you try to flash the board, you might need to lower the speed in this setting to 750,000 bps or lower.
+ > If there are errors when you try to flash the board, you might need to lower the speed in this setting to 750,000 bps or lower.
6. Select the *Operation* tab, then select the *Browse...* button and locate the *rx65n_azure_iot.hex* file created in the previous section.
-7. Press *Start* to begin flashing. This process takes less than a minute.
+7. Press *Start* to begin flashing. This process takes less than a minute.
### Confirm device connection details
You can use the **Termite** app to monitor communication and confirm that your d
```output Starting Azure thread
-
+ Initializing WiFi
- MAC address:
- Firmware version 0.14
+ MAC address:
+ Firmware version 0.14
SUCCESS: WiFi initialized
-
+ Connecting WiFi
- Connecting to SSID
- Attempt 1...
+ Connecting to SSID
+ Attempt 1...
SUCCESS: WiFi connected
-
+ Initializing DHCP
- IP address: 192.168.0.31
- Mask: 255.255.255.0
- Gateway: 192.168.0.1
+ IP address: 192.168.0.31
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized
-
+ Initializing DNS client
- DNS address: 192.168.0.1
+ DNS address: 192.168.0.1
SUCCESS: DNS client initialized
-
+ Initializing SNTP time sync
- SNTP server 0.pool.ntp.org
- SNTP server 1.pool.ntp.org
- SNTP time update: Oct 14, 2022 15:23:15.578 UTC
+ SNTP server 0.pool.ntp.org
+ SNTP server 1.pool.ntp.org
+ SNTP time update: Oct 14, 2022 15:23:15.578 UTC
SUCCESS: SNTP initialized
-
+ Initializing Azure IoT DPS client
- DPS endpoint: global.azure-devices-provisioning.net
- DPS ID scope:
- Registration ID: mydevice
+ DPS endpoint: global.azure-devices-provisioning.net
+ DPS ID scope:
+ Registration ID: mydevice
SUCCESS: Azure IoT DPS client initialized
-
+ Initializing Azure IoT Hub client
- Hub hostname:
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsgrx65ncloud;1
+ Hub hostname:
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgrx65ncloud;1
SUCCESS: Connected to IoT Hub
-
+ Receive properties: {"desired":{"$version":1},"reported":{"$version":1}} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=3{"deviceInformation":{"__t":"c","manufacturer":"Renesas","model":"RX65N Cloud Kit","swVersion":"1.0.0","osName":"Azure RTOS","processorArchitecture":"RX65N","processorManufacturer":"Renesas","totalStorage":2048,"totalMemory":640}} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=5{"ledState":false} Sending property: $iothub/twin/PATCH/properties/reported/?$rid=7{"telemetryInterval":{"ac":200,"av":1,"value":10}}
-
+ Starting Main loop Telemetry message sent: {"humidity":29.37,"temperature":25.83,"pressure":92818.25,"gasResistance":151671.25}. Telemetry message sent: {"accelerometerX":-887,"accelerometerY":236,"accelerometerZ":8272}.
To remove the entire Azure IoT Central sample application and all its devices an
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the Renesas RX65N device. You also used the IoT Central portal to create Azure resources, connect the Renesas RX65N securely to Azure, view telemetry, and send messages.
-As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
> [!div class="nextstepaction"] > [Connect a simulated device to IoT Central](quickstart-send-telemetry-central.md)
iot-develop Quickstart Devkit Stm B L475e Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-iot-hub.md
To connect the STM DevKit to Azure, you modify a configuration file for Wi-Fi an
1. Comment out the following line near the top of the file as shown: ```c
- // #define ENABLE_DPS
+ // #define ENABLE_DPS
``` 1. Set the Wi-Fi constants to the following values from your local environment.
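As a sketch, the Wi-Fi constants might look like the following once set. The constant names (`WIFI_SSID`, `WIFI_PASSWORD`) follow the table used elsewhere in these quickstarts and may differ in your file; the values are placeholders:

```c
/* Placeholder credentials only -- substitute your own network's SSID and password. */
#define WIFI_SSID     "YourWiFiSSID"
#define WIFI_PASSWORD "YourWiFiPassword"
```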
You can use the **Termite** app to monitor communication and confirm that your d
```output Starting Azure thread
-
-
++ Initializing WiFi
- Module: ISM43362-M3G-L44-SPI
- MAC address: ****************
- Firmware revision: C3.5.2.5.STM
+ Module: ISM43362-M3G-L44-SPI
+ MAC address: ****************
+ Firmware revision: C3.5.2.5.STM
SUCCESS: WiFi initialized
-
+ Connecting WiFi
- Connecting to SSID 'iot'
- Attempt 1...
+ Connecting to SSID 'iot'
+ Attempt 1...
SUCCESS: WiFi connected
-
+ Initializing DHCP
- IP address: 192.168.0.35
- Mask: 255.255.255.0
- Gateway: 192.168.0.1
+ IP address: 192.168.0.35
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized
-
+ Initializing DNS client
- DNS address 1: ************
- DNS address 2: ************
+ DNS address 1: ************
+ DNS address 2: ************
SUCCESS: DNS client initialized
-
+ Initializing SNTP time sync
- SNTP server 0.pool.ntp.org
- SNTP time update: Nov 18, 2022 0:56:56.127 UTC
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Nov 18, 2022 0:56:56.127 UTC
SUCCESS: SNTP initialized
-
+ Initializing Azure IoT Hub client
- Hub hostname: *******.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsgstml4s5;2
+ Hub hostname: *******.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgstml4s5;2
SUCCESS: Connected to IoT Hub ``` > [!IMPORTANT]
Keep Termite open to monitor device output in the following steps.
## View device properties
-You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the STM DevKit. These capabilities rely on the device model published for the STM DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the STM DevKit. These capabilities rely on the device model published for the STM DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
To access IoT Plug and Play components for the device in IoT Explorer:
To access IoT Plug and Play components for the device in IoT Explorer:
To view device properties using Azure IoT Explorer:
-1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the led is on or off.
+1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the LED is on or off.
1. Select the **Properties (writable)** tab. It displays the interval that telemetry is sent. 1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
To view device properties using Azure IoT Explorer:
1. IoT Explorer responds with a notification. You can also observe the update in Termite. 1. Set the telemetry interval back to 10.
-
+ To use Azure CLI to view device properties: 1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
To use Azure CLI to call a method:
{ "payload": {}, "status": 200
- }
+ }
``` 1. Check your device to confirm the LED state.
For debugging the application, see [Debugging with Visual Studio Code](https://g
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You connected the STM DevKit to Azure, and carried out tasks such as viewing telemetry and calling a method on the device.
-As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs or Azure RTOS to connect devices to Azure IoT.
> [!div class="nextstepaction"] > [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
iot-develop Quickstart Devkit Stm B L475e https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e.md
You can use the **Termite** app to monitor communication and confirm that your d
Starting Azure thread Initializing WiFi
- Module: ISM43362-M3G-L44-SPI
- MAC address: C4:7F:51:8F:67:F6
- Firmware revision: C3.5.2.5.STM
- Connecting to SSID 'iot'
+ Module: ISM43362-M3G-L44-SPI
+ MAC address: C4:7F:51:8F:67:F6
+ Firmware revision: C3.5.2.5.STM
+ Connecting to SSID 'iot'
SUCCESS: WiFi connected to iot Initializing DHCP
- IP address: 192.168.0.22
- Gateway: 192.168.0.1
+ IP address: 192.168.0.22
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized Initializing DNS client
- DNS address: 75.75.75.75
+ DNS address: 75.75.75.75
SUCCESS: DNS client initialized Initializing SNTP client
- SNTP server 0.pool.ntp.org
- SNTP IP address: 108.62.122.57
- SNTP time update: May 21, 2021 22:42:8.394 UTC
+ SNTP server 0.pool.ntp.org
+ SNTP IP address: 108.62.122.57
+ SNTP time update: May 21, 2021 22:42:8.394 UTC
SUCCESS: SNTP initialized Initializing Azure IoT DPS client
- DPS endpoint: global.azure-devices-provisioning.net
- DPS ID scope: ***
- Registration ID: mydevice
+ DPS endpoint: global.azure-devices-provisioning.net
+ DPS ID scope: ***
+ Registration ID: mydevice
SUCCESS: Azure IoT DPS client initialized Initializing Azure IoT Hub client
- Hub hostname: ***.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsgstml4s5;1
+ Hub hostname: ***.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgstml4s5;1
Connected to IoT Hub SUCCESS: Azure IoT Hub client initialized ```
To remove the entire Azure IoT Central sample application and all its devices an
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You also used the IoT Central portal to create Azure resources, connect the STM DevKit securely to Azure, view telemetry, and send messages.
-As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
> [!div class="nextstepaction"] > [Connect a simulated device to IoT Central](quickstart-send-telemetry-central.md)
iot-develop Quickstart Devkit Stm B L4s5i Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i-iot-hub.md
To connect the STM DevKit to Azure, you modify a configuration file for Wi-Fi an
1. Comment out the following line near the top of the file as shown: ```c
- // #define ENABLE_DPS
+ // #define ENABLE_DPS
``` 1. Set the Wi-Fi constants to the following values from your local environment.
You can use the **Termite** app to monitor communication and confirm that your d
```output Starting Azure thread
-
-
++ Initializing WiFi
- Module: ISM43362-M3G-L44-SPI
- MAC address: ******************
- Firmware revision: C3.5.2.7.STM
+ Module: ISM43362-M3G-L44-SPI
+ MAC address: ******************
+ Firmware revision: C3.5.2.7.STM
SUCCESS: WiFi initialized
-
+ Connecting WiFi
- Connecting to SSID '************'
- Attempt 1...
+ Connecting to SSID '************'
+ Attempt 1...
SUCCESS: WiFi connected
-
+ Initializing DHCP
- IP address: 192.168.0.50
- Mask: 255.255.255.0
- Gateway: 192.168.0.1
+ IP address: 192.168.0.50
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized
-
+ Initializing DNS client
- DNS address 1: 192.168.0.1
+ DNS address 1: 192.168.0.1
SUCCESS: DNS client initialized
-
+ Initializing SNTP time sync
- SNTP server 0.pool.ntp.org
- SNTP time update: Jan 6, 2023 20:10:23.522 UTC
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Jan 6, 2023 20:10:23.522 UTC
SUCCESS: SNTP initialized
-
+ Initializing Azure IoT Hub client
- Hub hostname: ************.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsgstml4s5;2
+ Hub hostname: ************.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgstml4s5;2
SUCCESS: Connected to IoT Hub ``` > [!IMPORTANT]
Keep Termite open to monitor device output in the following steps.
## View device properties
-You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the STM DevKit. These capabilities rely on the device model published for the STM DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the STM DevKit. These capabilities rely on the device model published for the STM DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
To access IoT Plug and Play components for the device in IoT Explorer:
To access IoT Plug and Play components for the device in IoT Explorer:
To view device properties using Azure IoT Explorer:
-1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the led is on or off.
+1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the LED is on or off.
1. Select the **Properties (writable)** tab. It displays the interval that telemetry is sent. 1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
To view device properties using Azure IoT Explorer:
1. IoT Explorer responds with a notification. You can also observe the update in Termite. 1. Set the telemetry interval back to 10.
-
+ To use Azure CLI to view device properties: 1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
To use Azure CLI to call a method:
{ "payload": {}, "status": 200
- }
+ }
``` 1. Check your device to confirm the LED state.
For debugging the application, see [Debugging with Visual Studio Code](https://g
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You connected the STM DevKit to Azure, and carried out tasks such as viewing telemetry and calling a method on the device.
-As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs or Azure RTOS to connect devices to Azure IoT.
> [!div class="nextstepaction"] > [Connect a general device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
iot-develop Quickstart Devkit Stm B L4s5i https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i.md
You can use the **Termite** app to monitor communication and confirm that your d
Starting Azure thread Initializing WiFi
- Module: ISM43362-M3G-L44-SPI
- MAC address: C4:7F:51:8F:67:F6
- Firmware revision: C3.5.2.5.STM
- Connecting to SSID 'iot'
+ Module: ISM43362-M3G-L44-SPI
+ MAC address: C4:7F:51:8F:67:F6
+ Firmware revision: C3.5.2.5.STM
+ Connecting to SSID 'iot'
SUCCESS: WiFi connected to iot Initializing DHCP
- IP address: 192.168.0.22
- Gateway: 192.168.0.1
+ IP address: 192.168.0.22
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized Initializing DNS client
- DNS address: 75.75.75.75
+ DNS address: 75.75.75.75
SUCCESS: DNS client initialized Initializing SNTP client
- SNTP server 0.pool.ntp.org
- SNTP IP address: 108.62.122.57
- SNTP time update: May 21, 2021 22:42:8.394 UTC
+ SNTP server 0.pool.ntp.org
+ SNTP IP address: 108.62.122.57
+ SNTP time update: May 21, 2021 22:42:8.394 UTC
SUCCESS: SNTP initialized Initializing Azure IoT DPS client
- DPS endpoint: global.azure-devices-provisioning.net
- DPS ID scope: ***
- Registration ID: mydevice
+ DPS endpoint: global.azure-devices-provisioning.net
+ DPS ID scope: ***
+ Registration ID: mydevice
SUCCESS: Azure IoT DPS client initialized Initializing Azure IoT Hub client
- Hub hostname: ***.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsgstml4s5;1
+ Hub hostname: ***.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsgstml4s5;1
Connected to IoT Hub SUCCESS: Azure IoT Hub client initialized ```
In IAR, select **Project > Batch Build** and choose **build_all** and select **M
1. In IAR, press the green **Download and Debug** button in the toolbar to download the program and run it. Then press ***Go***. 1. Check the Terminal I/O to verify that messages have been successfully sent to the Azure IoT hub.
- As the project runs, the demo displays the status information to the Terminal IO window (**View > Terminal I/O**). The demo also publishes the message to IoT Hub every few seconds.
-
+ As the project runs, the demo displays the status information to the Terminal IO window (**View > Terminal I/O**). The demo also publishes the message to IoT Hub every few seconds.
+ > [!NOTE] > The terminal output content varies depending on which sample you choose to build and run.
Select the **About** tab from the device page.
## Download the STM32Cube IDE
-You can download a free version of STM32Cube IDE, but you'll need to create an account. Follow the instructions on the ST website. The STM32Cube IDE can be downloaded from this website:
+You can download a free version of STM32Cube IDE, but you'll need to create an account. Follow the instructions on the ST website. The STM32Cube IDE can be downloaded from this website:
https://www.st.com/en/development-tools/stm32cubeide.html The sample distribution zip file contains the following subfolders that you'll use later:
To connect the device to Azure, you'll modify a configuration file for Azure IoT
|--|--| |`WIFI_SSID` |{*Use your Wi-Fi SSID*}| |`WIFI_PASSWORD` |{*Use your Wi-Fi password*}|
-
+ 1. Expand the sample folder to open **sample_config.h** to set the Azure IoT device information constants to the values that you saved after you created Azure resources. |Constant name|Value| |-|--| |`ENDPOINT` |{*Use this value: "global.azure-devices-provisioning.net"*}| |`REGISTRATION_ID` |{*Use your Device ID value*}|
- |`ID_SCOPE` |{*Use your ID scope value*}|
- |`DEVICE_SYMMETRIC_KEY` |{*Use your Primary key value*}|
+ |`ID_SCOPE` |{*Use your ID scope value*}|
+ |`DEVICE_SYMMETRIC_KEY` |{*Use your Primary key value*}|
> [!NOTE] > The `ENDPOINT`, `DEVICE_ID`, `ID_SCOPE`, and `DEVICE_SYMMETRIC_KEY` values are set in a `#ifndef ENABLE_DPS_SAMPLE` statement. Make sure you set the values in the `#else` statement, which will be used when the `ENABLE_DPS_SAMPLE` value is defined.
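Taken together, the DPS-related constants in **sample_config.h** might look roughly like this sketch. Every value shown is a placeholder drawn from the table above, not a real scope or key:

```c
/* Placeholder values only -- use the endpoint, ID scope, registration ID, and primary key you saved earlier. */
#define ENDPOINT              "global.azure-devices-provisioning.net"
#define ID_SCOPE              "<your ID scope>"
#define REGISTRATION_ID       "<your device ID>"
#define DEVICE_SYMMETRIC_KEY  "<your primary key>"
```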
Download and run the project
Stop bits: ***1*** 1. As the project runs, the demo displays status information to the terminal output window. The demo also publishes the message to IoT Hub every five seconds. Check the terminal output to verify that messages have been successfully sent to the Azure IoT hub.
-
+ > [!NOTE] > The terminal output content varies depending on which sample you choose to build and run.
If you experience issues building the device code, flashing the device, or conne
For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md). :::zone-end :::zone pivot="iot-toolset-iar-ewarm"
-For help with debugging the application, see the selections under **Help** in **IAR EW for ARM**.
+For help with debugging the application, see the selections under **Help** in **IAR EW for ARM**.
:::zone-end :::zone pivot="iot-toolset-stm32cube"
-For help with debugging the application, see the selections under **Help**.
+For help with debugging the application, see the selections under **Help**.
:::zone-end ## Clean up resources
To remove the entire Azure IoT Central sample application and all its devices an
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You also used the IoT Central portal to create Azure resources, connect the STM DevKit securely to Azure, view device data, and send messages.
-As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
> [!div class="nextstepaction"] > [Connect a simulated device to IoT Central](quickstart-send-telemetry-central.md)
iot-develop Quickstart Devkit Stm B U585i Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-u585i-iot-hub.md
You can use the **Termite** app to monitor communication and confirm that your d
```output Starting Azure thread
-
-
++ Initializing WiFi
- SSID: ***********
- Password: ***********
+ SSID: ***********
+ Password: ***********
SUCCESS: WiFi initialized
-
+ Connecting WiFi
- FW: V2.1.11
- MAC address: ***********
- Connecting to SSID '***********'
- Attempt 1...
+ FW: V2.1.11
+ MAC address: ***********
+ Connecting to SSID '***********'
+ Attempt 1...
SUCCESS: WiFi connected
-
+ Initializing DHCP
- IP address: 192.168.0.67
- Mask: 255.255.255.0
- Gateway: 192.168.0.1
+ IP address: 192.168.0.67
+ Mask: 255.255.255.0
+ Gateway: 192.168.0.1
SUCCESS: DHCP initialized
-
+ Initializing DNS client
- DNS address: 192.168.0.1
+ DNS address: 192.168.0.1
SUCCESS: DNS client initialized
-
+ Initializing SNTP time sync
- SNTP server 0.pool.ntp.org
- SNTP time update: Feb 24, 2023 21:20:23.71 UTC
+ SNTP server 0.pool.ntp.org
+ SNTP time update: Feb 24, 2023 21:20:23.71 UTC
SUCCESS: SNTP initialized
-
+ Initializing Azure IoT Hub client
- Hub hostname: ***********.azure-devices.net
- Device id: mydevice
- Model id: dtmi:azurertos:devkit:gsg;2
+ Hub hostname: ***********.azure-devices.net
+ Device id: mydevice
+ Model id: dtmi:azurertos:devkit:gsg;2
SUCCESS: Connected to IoT Hub ``` > [!IMPORTANT]
Keep Termite open to monitor device output in the following steps.
## View device properties
-You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the STM DevKit. These capabilities rely on the device model published for the STM DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the STM DevKit. These capabilities rely on the device model published for the STM DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
To access IoT Plug and Play components for the device in IoT Explorer:
To access IoT Plug and Play components for the device in IoT Explorer:
To view device properties using Azure IoT Explorer:
-1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the led is on or off.
+1. Select the **Properties (read-only)** tab. There's a single read-only property to indicate whether the LED is on or off.
1. Select the **Properties (writable)** tab. It displays the interval that telemetry is sent. 1. Change the `telemetryInterval` to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
To view device properties using Azure IoT Explorer:
1. IoT Explorer responds with a notification. You can also observe the update in Termite. 1. Set the telemetry interval back to 10.
-
+ To use Azure CLI to view device properties: 1. Run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
To use Azure CLI to call a method:
{ "payload": {}, "status": 200
- }
+ }
``` 1. Check your device to confirm the LED state.
For debugging the application, see [Debugging with Visual Studio Code](https://g
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You connected the STM DevKit to Azure, and carried out tasks such as viewing telemetry and calling a method on the device.
-As a next step, explore the following articles to learn more about using the IoT device SDKs, or Azure RTOS to connect devices to Azure IoT.
+As a next step, explore the following articles to learn more about using the IoT device SDKs or Azure RTOS to connect devices to Azure IoT.
> [!div class="nextstepaction"] > [Connect a general simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-provisioning.md
If you are migrating from a device level agent to adding the agent as a Module i
The following IoT device over the air update types are currently supported with Device Update: * Linux devices (IoT Edge and Non-IoT Edge devices):
- * [Image )
+    * [Image update](device-update-raspberry-pi.md)
* [Package update](device-update-ubuntu-agent.md) * [Proxy update for downstream devices](device-update-howto-proxy-updates.md)
The following IoT device over the air update types are currently supported with
* [Understand support for disconnected device update](connected-cache-disconnected-device-update.md)
-## Prerequisites
+## Prerequisites
If you're setting up the IoT device/IoT Edge device for [package based updates](./understand-device-update.md#support-for-a-wide-range-of-update-artifacts), add packages.microsoft.com to your machine's repositories by following these steps:
If you're setting up the IoT device/IoT Edge device for [package based updates](
## How to provision the Device Update agent as a Module Identity This section describes how to provision the Device Update agent as a module identity on
-* IoT Edge enabled devices, or
+* IoT Edge enabled devices, or
* Non-Edge IoT devices, or
-* Other IoT devices.
+* Other IoT devices.
To check if you have IoT Edge enabled on your device, please refer to the [IoT Edge installation instructions](../iot-edge/how-to-provision-single-device-linux-symmetric.md?preserve-view=true&view=iotedge-2020-11).
-
-Follow all or any of the below sections to add the Device update agent based on the type of IoT device you are managing.
+
+Follow any of the following sections to add the Device Update agent, based on the type of IoT device you're managing.
### On IoT Edge enabled devices
Follow these instructions to provision the Device Update agent on [IoT Edge enab
1. Install the Device Update image update agent.
- We provide sample images in the [Assets here](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md).
+ We provide sample images in the [Assets here](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md).
1. Install the Device Update package update agent.
- - For latest agent versions from packages.microsoft.com: Update package lists on your device and install the Device Update agent package and its dependencies using:
+ - For latest agent versions from packages.microsoft.com: Update package lists on your device and install the Device Update agent package and its dependencies using:
```shell sudo apt-get update ```
-
+ ```shell sudo apt-get install deviceupdate-agent ```
-
+ - For any release candidate ('rc') agent versions from [Artifacts](https://github.com/Azure/iot-hub-device-update/releases): Download the .deb file to the machine you want to install the Device Update agent on, and then run:
-
+ ```shell sudo apt-get install -y ./"<PATH TO FILE>"/"<.DEB FILE NAME>" ``` - If you are setting up a [MCC for a disconnected device scenario](connected-cache-disconnected-device-update.md), then install the Delivery Optimization APT plugin:
- ```shell
- sudo apt-get install deliveryoptimization-plugin-apt
- ```
-
-1. After you've installed the device update agent, you will need to edit the configuration file for Device Update by running the command below.
+ ```shell
+ sudo apt-get install deliveryoptimization-plugin-apt
+ ```
+
+1. After you've installed the device update agent, you will need to edit the configuration file for Device Update by running the command below.
```shell
- sudo nano /etc/adu/du-config.json
+ sudo nano /etc/adu/du-config.json
```
- Change the connectionType to "AIS" for agents who will be using the IoT Identity Service for provisioning. The ConnectionData field must be an empty string. Please note that all values with the 'Place value here' tag must be set. See [Configuring a DU agent](./device-update-configuration-file.md#example-du-configjson-file-contents).
-
-5. You are now ready to start the Device Update agent on your IoT device.
+
+ Change the connectionType to "AIS" for agents that use the IoT Identity Service for provisioning. The ConnectionData field must be an empty string. All values with the 'Place value here' tag must be set. See [Configuring a DU agent](./device-update-configuration-file.md#example-du-configjson-file-contents); a rough sketch of the file is also shown after these steps.
+
+1. You are now ready to start the Device Update agent on your IoT device.
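For reference, a hedged sketch of what the relevant part of du-config.json can look like with AIS provisioning is shown below. The field names follow the linked configuration article, but treat this as an illustration rather than a complete file; your agent version may expect additional or different fields:

```json
{
  "schemaVersion": "1.1",
  "aduShellTrustedUsers": [ "adu", "do" ],
  "manufacturer": "<Place value here>",
  "model": "<Place value here>",
  "agents": [
    {
      "name": "main",
      "runas": "adu",
      "connectionSource": {
        "connectionType": "AIS",
        "connectionData": ""
      },
      "manufacturer": "<Place value here>",
      "model": "<Place value here>"
    }
  ]
}
```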
### On IoT Linux devices without IoT Edge installed
Follow these instructions to provision the Device Update agent on your IoT Linux
1. Install the IoT Identity Service and add the latest version to your IoT device by following instructions in [Installing the Azure IoT Identity Service](https://azure.github.io/iot-identity-service/installation.html#install-from-packagesmicrosoftcom).
-2. Configure the IoT Identity Service by following the instructions in [Configuring the Azure IoT Identity Service](https://azure.github.io/iot-identity-service/configuration.html).
-
+2. Configure the IoT Identity Service by following the instructions in [Configuring the Azure IoT Identity Service](https://azure.github.io/iot-identity-service/configuration.html).
+ 3. Finally, install the Device Update agent. We provide sample images in [Assets here](https://github.com/Azure/iot-hub-device-update/releases). The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board, and the .gz file is the update you would import through Device Update for IoT Hub. For an example, see [how to flash the image to your IoT Hub device](./device-update-raspberry-pi.md).
-4. After you've installed the device update agent, you will need to edit the configuration file for Device Update by running the command below.
+4. After you've installed the device update agent, you will need to edit the configuration file for Device Update by running the command below.
```shell
- sudo nano /etc/adu/du-config.json
+ sudo nano /etc/adu/du-config.json
```
- Change the connectionType to "AIS" for agents who will be using the IoT Identity Service for provisioning. The ConnectionData field must be an empty string. Please note that all values with the 'Place value here' tag must be set. See [Configuring a DU agent](./device-update-configuration-file.md#example-du-configjson-file-contents).
+ Change the connectionType to "AIS" for agents that use the IoT Identity Service for provisioning. The ConnectionData field must be an empty string. All values with the 'Place value here' tag must be set. See [Configuring a DU agent](./device-update-configuration-file.md#example-du-configjson-file-contents).
-5. You are now ready to start the Device Update agent on your IoT device.
+5. You are now ready to start the Device Update agent on your IoT device.
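Before starting the agent, an optional sanity check is to confirm the edited file is still valid JSON. Assuming Python 3 is available on the device, one way is:

```shell
python3 -m json.tool /etc/adu/du-config.json
```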
### Other IoT devices
The Device Update agent can also be configured without the IoT Identity service
1. We provide sample images in the [Assets here](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md). 1. Log onto the machine or IoT Edge device/IoT device.
-
+ 1. Open a terminal window. 1. Add the connection string to the [Device Update configuration file](device-update-configuration-file.md): 1. Enter the below in the terminal window:
-
- - [For Ubuntu agent](device-update-ubuntu-agent.md) use: sudo nano /etc/adu/du-config.json
- - [For Yocto reference image](device-update-raspberry-pi.md) use: sudo nano /adu/du-config.json
-
+
+ - [For Ubuntu agent](device-update-ubuntu-agent.md) use: sudo nano /etc/adu/du-config.json
+ - [For Yocto reference image](device-update-raspberry-pi.md) use: sudo nano /adu/du-config.json
+ 1. Copy the primary connection string
-
- - If Device Update agent is configured as a module copy the module's primary connection string.
- - Otherwise copy the device's primary connection string.
-
- 3. Enter the copied primary connection string to the 'connectionData' field's value in the du-config.json file. Please note that all values with the 'Place value here' tag must be set. See [Configuring a DU agent](./device-update-configuration-file.md#example-du-configjson-file-contents)
-
-1. Now you are now ready to start the Device Update agent on your IoT device.
+
+ - If Device Update agent is configured as a module copy the module's primary connection string.
+ - Otherwise copy the device's primary connection string.
+
+ 1. Paste the copied primary connection string as the value of the 'connectionData' field in the du-config.json file. All values with the 'Place value here' tag must be set. See [Configuring a DU agent](./device-update-configuration-file.md#example-du-configjson-file-contents). One way to retrieve the connection string with the Azure CLI is sketched after this list.
+
+1. You are now ready to start the Device Update agent on your IoT device.
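If you prefer the Azure CLI over the portal for copying the connection string, the following sketch shows the two relevant commands. The device, module, and hub names are placeholders; use the module form only if you configured the agent as a module identity:

```azurecli
# Module identity: copy the module's primary connection string.
az iot hub module-identity connection-string show --device-id <your-device-id> --module-id <your-DU-module-name> --hub-name <your-hub-name>

# Device identity: copy the device's primary connection string.
az iot hub device-identity connection-string show --device-id <your-device-id> --hub-name <your-hub-name>
```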
## How to start the Device Update Agent
This section describes how to start and verify the Device Update agent as a modu
```shell sudo systemctl restart deviceupdate-agent ```
-
+ 1. You can check the status of the agent using the command below. If you see any issues, refer to this [troubleshooting guide](troubleshoot-device-update.md).
-
+ ```shell sudo systemctl status deviceupdate-agent ```
-
+ You should see status OK. 1. On the IoT Hub portal, go to IoT device or IoT Edge devices to find the device that you configured with Device Update agent. There you will see the Device Update agent running as a module. For example:
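As an alternative to browsing the portal, you can also list the device's module identities with the Azure CLI to confirm that the Device Update agent module is present; the device and hub names below are placeholders:

```azurecli
az iot hub module-identity list --device-id <your-device-id> --hub-name <your-hub-name>
```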
If you run into issues, review the Device Update for IoT Hub [Troubleshooting Gu
You can use the following tutorials for a simple demonstration of Device Update for IoT Hub: - [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build your own images for other architecture as needed.
-
+ - [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
-
+ - [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
-
+ - [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md) - [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Migration Public Preview Refresh To Ga https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/migration-public-preview-refresh-to-ga.md
For the GA release, the Device Update agent can be updated manually or using the
2. Add the Device Update agent upgrade as the last step in your update. The import manifest version must be **"4.0"** to ensure it is targeted to the correct devices. A sample import manifest and APT manifest are shown below:
- **Example Import Manifest**
- ```json
- {
- "manifestVersion": "4",
- "updateId": {
- "provider": "Contoso",
- "name": "Sensor",
- "version": "1.0"
- },
- "compatibility": [
- {
- "manufacturer": "Contoso",
- "model": "Sensor"
- }
- ],
- "instructions": {
- "steps": [
- {
- "handler": "microsoft/apt:1",
- "handlerProperties": {
- "installedCriteria": "1.0"
- },
- "files": [
- "fileId0"
- ]
- }
- ]
- },
- "files": {
- "fileId0": {
- "filename": "sample-upgrade-apt-manifest.json",
- "sizeInBytes": 210,
- "hashes": {
- "sha256": "mcB5SexMU4JOOzqmlJqKbue9qMskWY3EI/iVjJxCtAs="
- }
- }
- },
- "createdDateTime": "2022-08-20T18:32:01.8404544Z"
- }
- ```
-
- **Example APT manifest**
-
- ```json
- {
- "name": "Sample DU agent upgrade update",
- "version": "1.0.0",
- "packages": [
- {
- "name": "deviceupdate-agent"
- }
- ]
- }
- ```
+ **Example Import Manifest**
+ ```json
+ {
+ "manifestVersion": "4",
+ "updateId": {
+ "provider": "Contoso",
+ "name": "Sensor",
+ "version": "1.0"
+ },
+ "compatibility": [
+ {
+ "manufacturer": "Contoso",
+ "model": "Sensor"
+ }
+ ],
+ "instructions": {
+ "steps": [
+ {
+ "handler": "microsoft/apt:1",
+ "handlerProperties": {
+ "installedCriteria": "1.0"
+ },
+ "files": [
+ "fileId0"
+ ]
+ }
+ ]
+ },
+ "files": {
+ "fileId0": {
+ "filename": "sample-upgrade-apt-manifest.json",
+ "sizeInBytes": 210,
+ "hashes": {
+ "sha256": "mcB5SexMU4JOOzqmlJqKbue9qMskWY3EI/iVjJxCtAs="
+ }
+ }
+ },
+ "createdDateTime": "2022-08-20T18:32:01.8404544Z"
+ }
+ ```
+
+ **Example APT manifest**
+
+ ```json
+ {
+ "name": "Sample DU agent upgrade update",
+ "version": "1.0.0",
+ "packages": [
+ {
+ "name": "deviceupdate-agent"
+ }
+ ]
+ }
+ ```
> [!NOTE] > The agent upgrade must be the last step. You may have other steps before the agent upgrade. Any steps added after the agent upgrade will not be executed and reported correctly, because the device reconnects with the DU service.
-3. Deploy the update
+3. Deploy the update.
4. Once the update is successfully deployed, the device attributes will now show the updated PnP model details. The **Contract Model Name** will show **Device Update Model V2** and **Contract Model ID** will show **dtmi:azure:iot:deviceUpdateContractModel;2**.
For the GA release, the Device Update agent can be updated manually or using the
- Devices with older agents (0.7.0/0.6.0) cannot be added to these groups. - ## Next steps+ [Understand Device Update agent configuration file](device-update-configuration-file.md) You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:
-
+
- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
-
+
- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
-
+
- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md) - [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub Tutorial Message Enrichments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-message-enrichments.md
Follow these steps to add a location tag to your device's twin:
1. Select the **Device twin** tab at the top of the device page and add the following line just before the closing brace at the bottom of the device twin. Then select **Save**. ```json
- , "tags": {"location": "Plant 43"}
+ , "tags": {"location": "Plant 43"}
``` :::image type="content" source="./media/tutorial-message-enrichments/add-location-tag-to-device-twin.png" alt-text="Screenshot of adding location tag to device twin in Azure portal.":::
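If you'd rather apply the same tag programmatically instead of through the portal, a hedged sketch using the `azure-iot-hub` management SDK might look like the following; the connection string and device ID are placeholders, and the exact patch shape should be confirmed against the SDK reference.

```python
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import Twin

# Placeholders: service connection string and device ID are assumptions.
IOTHUB_CONNECTION_STRING = "<iot-hub-service-connection-string>"
DEVICE_ID = "<device-id>"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# Read the current twin so the update can be made against its etag.
twin = registry_manager.get_twin(DEVICE_ID)

# Patch only the tags section, matching the JSON added in the portal step.
twin_patch = Twin(tags={"location": "Plant 43"})
registry_manager.update_twin(DEVICE_ID, twin_patch, twin.etag)
```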
key-vault Developers Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/developers-guide.md
The data plane controls access to keys, certificates, and secrets. You can use l
|--|--|--|--|--|--|--|--| |[Reference](/cli/azure/keyvault/key)<br>[Quickstart](../keys/quick-create-cli.md)|[Reference](/powershell/module/az.keyvault/)<br>[Quickstart](../keys/quick-create-powershell.md)|[Reference](/rest/api/keyvault/#key-operations)|[Reference](/azure/templates/microsoft.keyvault/vaults/keys)<br>[Quickstart](../keys/quick-create-template.md)|[Reference](/dotnet/api/azure.security.keyvault.keys)<br>[Quickstart](../keys/quick-create-net.md)|[Reference](/python/api/azure-mgmt-keyvault/azure.mgmt.keyvault)<br>[Quickstart](../keys/quick-create-python.md)|[Reference](https://azuresdkdocs.blob.core.windows.net/$web/jav)|
+#### Other Libraries
+
+##### Cryptography client for Key Vault and Managed HSM
+This module provides a cryptography client for the [Azure Key Vault Keys client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azkeys).
+
+> [!Note]
+> This project is not supported by the Azure SDK team, but does align with the cryptography clients in other supported languages.
+
+| Language | Reference |
+|--|--|
+|Go|[Reference](https://pkg.go.dev/github.com/heaths/azcrypto@v1.0.0#section-readme)|
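Since this module aligns with the cryptography clients in the supported languages, a minimal sketch of the supported Python cryptography client (`azure-keyvault-keys`) illustrates the general shape of the API; the vault URL and key name below are placeholders, not values from this article.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient
from azure.keyvault.keys.crypto import CryptographyClient, EncryptionAlgorithm

credential = DefaultAzureCredential()

# Placeholders: vault URL and key name are assumptions for illustration.
key_client = KeyClient(vault_url="https://<vault-name>.vault.azure.net", credential=credential)
key = key_client.get_key("<key-name>")

# Wrap the key in a cryptography client and perform an encrypt/decrypt round trip.
crypto_client = CryptographyClient(key, credential=credential)
encrypt_result = crypto_client.encrypt(EncryptionAlgorithm.rsa_oaep_256, b"secret message")
decrypt_result = crypto_client.decrypt(encrypt_result.algorithm, encrypt_result.ciphertext)
print(decrypt_result.plaintext)
```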
+ ### APIs and SDKs for certificates | Azure CLI | PowerShell | REST API | Resource Manager | .NET | Python | Java | JavaScript |
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/security-features.md
Azure Private Link Service enables you to access Azure Key Vault and Azure hoste
- The Key Vault front end (data plane) is a multi-tenant server. This means that key vaults from different customers can share the same public IP address. In order to achieve isolation, each HTTP request is authenticated and authorized independently of other requests. - You may identify older versions of TLS to report vulnerabilities but because the public IP address is shared, it is not possible for key vault service team to disable old versions of TLS for individual key vaults at transport level.-- The HTTPS protocol allows the client to participate in TLS negotiation. **Clients can enforce the most recent version of TLS**, and whenever a client does so, the entire connection will use the corresponding level protection. Applications that are communicating with or authenticating against Azure Active Directory might not work as expected if they are NOT able to use TLS 1.2 or recent version to communicate.-- Despite known vulnerabilities in TLS protocol, there is no known attack that would allow a malicious agent to extract any information from your key vault when the attacker initiates a connection with a TLS version that has vulnerabilities. The attacker would still need to authenticate and authorize itself, and as long as legitimate clients always connect with recent TLS versions, there is no way that credentials could have been leaked from vulnerabilities at old TLS versions.
+- The HTTPS protocol allows the client to participate in TLS negotiation. **Clients can enforce the version of TLS**, and whenever a client does so, the entire connection will use the corresponding level of protection. Applications that are communicating with or authenticating against Azure Active Directory might not work as expected if they are NOT able to use TLS 1.2 to communicate.
+- Despite known vulnerabilities in the TLS protocol, there is no known attack that would allow a malicious agent to extract any information from your key vault when the attacker initiates a connection with a TLS version that has vulnerabilities. The attacker would still need to authenticate and authorize itself, and as long as legitimate clients always connect with TLS 1.2, there is no way that credentials could have been leaked through vulnerabilities in old TLS versions.
> [!NOTE]
-> For Azure Key Vault, ensure that the application accessing the Keyvault service should be running on a platform that supports TLS 1.2 or recent version. If the application is dependent on .NET Framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at OS level and for .NET Framework. To meet with compliance obligations and to improve security posture, Key Vault connections via TLS 1.0 & 1.1 are considered a security risk, and any connections using old TLS protocols will be disallowed starting June 2023. You can monitor TLS version used by clients by monitoring Key Vault logs with sample Kusto query [here](monitor-key-vault.md#sample-kusto-queries).
+> For Azure Key Vault, ensure that the application accessing the Key Vault service runs on a platform that supports TLS 1.2. If the application is dependent on .NET Framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at the OS level and for .NET Framework. To meet compliance obligations and to improve security posture, Key Vault connections via TLS 1.0 & 1.1 are considered a security risk, and any connections using old TLS protocols will be disallowed starting June 2023. You can monitor the TLS version used by clients by reviewing Key Vault logs with the sample Kusto query [here](monitor-key-vault.md#sample-kusto-queries).
> [!WARNING] > TLS 1.0 and 1.1 are deprecated by Azure Active Directory, and tokens to access Key Vault may no longer be issued for users or services requesting them with deprecated protocols. This may lead to loss of access to key vaults. More information on AAD TLS support can be found in [Azure AD TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment/#why-this-change-is-being-made)
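As a minimal sketch of how a client can enforce the TLS version, the following Python snippet opens a connection that refuses anything older than TLS 1.2 and prints the negotiated protocol; the vault host name is a placeholder.

```python
import socket
import ssl

# Build an SSL context that refuses TLS 1.0 and 1.1.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

host = "<vault-name>.vault.azure.net"  # placeholder vault host name

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls_sock:
        # Prints the negotiated protocol, for example 'TLSv1.2' or 'TLSv1.3'.
        print(tls_sock.version())
```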
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-go.md
Create a file named *main.go*, and then paste the following code into it:
package main import (
- "context"
- "fmt"
- "log"
+ "context"
+ "fmt"
+ "log"
- "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
- "github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets"
+ "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+ "github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets"
) func main() {
- mySecretName := "secretName01"
- mySecretValue := "secretValue"
- vaultURI := os.Getenv("AZURE_KEY_VAULT_URI")
-
- // Create a credential using the NewDefaultAzureCredential type.
- cred, err := azidentity.NewDefaultAzureCredential(nil)
- if err != nil {
- log.Fatalf("failed to obtain a credential: %v", err)
- }
-
- // Establish a connection to the Key Vault client
- client, err := azsecrets.NewClient(vaultURI, cred, nil)
-
- // Create a secret
- params := azsecrets.SetSecretParameters{Value: &mySecretValue}
- _, err = client.SetSecret(context.TODO(), mySecretName, params, nil)
- if err != nil {
- log.Fatalf("failed to create a secret: %v", err)
- }
-
- // Get a secret. An empty string version gets the latest version of the secret.
- version := ""
- resp, err := client.GetSecret(context.TODO(), mySecretName, version, nil)
- if err != nil {
- log.Fatalf("failed to get the secret: %v", err)
- }
-
- fmt.Printf("secretValue: %s\n", *resp.Value)
-
- // List secrets
- pager := client.NewListSecretsPager(nil)
- for pager.More() {
- page, err := pager.NextPage(context.TODO())
- if err != nil {
- log.Fatal(err)
- }
- for _, secret := range page.Value {
- fmt.Printf("Secret ID: %s\n", *secret.ID)
- }
- }
-
- // Delete a secret. DeleteSecret returns when Key Vault has begun deleting the secret.
- // That can take several seconds to complete, so it may be necessary to wait before
- // performing other operations on the deleted secret.
- delResp, err := client.DeleteSecret(context.TODO(), mySecretName, nil)
- if err != nil {
- log.Fatalf("failed to delete secret: %v", err)
- }
-
- fmt.Println(delResp.ID.Name() + " has been deleted")
+ mySecretName := "secretName01"
+ mySecretValue := "secretValue"
+ vaultURI := os.Getenv("AZURE_KEY_VAULT_URI")
+
+ // Create a credential using the NewDefaultAzureCredential type.
+ cred, err := azidentity.NewDefaultAzureCredential(nil)
+ if err != nil {
+ log.Fatalf("failed to obtain a credential: %v", err)
+ }
+
+ // Establish a connection to the Key Vault client
+ client, err := azsecrets.NewClient(vaultURI, cred, nil)
+ if err != nil {
+ log.Fatalf("failed to create the Key Vault client: %v", err)
+ }
+
+ // Create a secret
+ params := azsecrets.SetSecretParameters{Value: &mySecretValue}
+ _, err = client.SetSecret(context.TODO(), mySecretName, params, nil)
+ if err != nil {
+ log.Fatalf("failed to create a secret: %v", err)
+ }
+
+ // Get a secret. An empty string version gets the latest version of the secret.
+ version := ""
+ resp, err := client.GetSecret(context.TODO(), mySecretName, version, nil)
+ if err != nil {
+ log.Fatalf("failed to get the secret: %v", err)
+ }
+
+ fmt.Printf("secretValue: %s\n", *resp.Value)
+
+ // List secrets
+ pager := client.NewListSecretsPager(nil)
+ for pager.More() {
+ page, err := pager.NextPage(context.TODO())
+ if err != nil {
+ log.Fatal(err)
+ }
+ for _, secret := range page.Value {
+ fmt.Printf("Secret ID: %s\n", *secret.ID)
+ }
+ }
+
+ // Delete a secret. DeleteSecret returns when Key Vault has begun deleting the secret.
+ // That can take several seconds to complete, so it may be necessary to wait before
+ // performing other operations on the deleted secret.
+ delResp, err := client.DeleteSecret(context.TODO(), mySecretName, nil)
+ if err != nil {
+ log.Fatalf("failed to delete secret: %v", err)
+ }
+
+ fmt.Println(delResp.ID.Name() + " has been deleted")
} ```
func main() {
1. Before you run the code, create an environment variable named `AZURE_KEY_VAULT_URI`. Set the environment variable value to the vault URI of the key vault that you created previously, for example `https://quickstart-kv.vault.azure.net/`.
- ```azurecli
- export KEY_VAULT_NAME=quickstart-kv
- ```
+ ```azurecli
+ export AZURE_KEY_VAULT_URI="https://quickstart-kv.vault.azure.net/"
+ ```
1. To start the Go app, run the following command:
- ```azurecli
- go run main.go
- ```
+ ```azurecli
+ go run main.go
+ ```
- ```output
- secretValue: createdWithGO
- Secret ID: https://quickstart-kv.vault.azure.net/secrets/quickstart-secret
- Secret ID: https://quickstart-kv.vault.azure.net/secrets/secretName
- quickstart-secret has been deleted
- ```
+ ```output
+ secretValue: createdWithGO
+ Secret ID: https://quickstart-kv.vault.azure.net/secrets/quickstart-secret
+ Secret ID: https://quickstart-kv.vault.azure.net/secrets/secretName
+ quickstart-secret has been deleted
+ ```
## Code examples
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
To obtain lab VMs with unique SID, create a lab without a template VM. You must
If you plan to use an endpoint management tool or similar software, we recommend that you don't use template VMs for your labs.
+## Azure AD register/join, Hybrid Azure AD join, or AD domain join
+To make labs easy to set up and manage, Azure Lab Services is designed with *no* requirement to register/join lab VMs to either Active Directory (AD) or Azure Active Directory (Azure AD). As a result, Azure Lab Services *doesn't* currently offer built-in support to register/join lab VMs. Although it's possible to Azure AD register/join, Hybrid Azure AD join, or AD domain join lab VMs using other mechanisms, we do *not* recommend that you attempt to register/join lab VMs to either AD or Azure AD due to product limitations.
+ ## Pricing ### Azure Lab Services
lab-services Concept Lab Services Supported Networking Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-lab-services-supported-networking-scenarios.md
The following table lists common networking scenarios and topologies and their s
| Use a connection broker, such as Parsec, for high-framerate gaming scenarios | Not recommended | This scenario isn't directly supported with Azure Lab Services and would run into the same challenges as accessing lab VMs by private IP address. | | *Cyber field* scenario, consisting of a set of vulnerable VMs on the network for lab users to discover and hack into (ethical hacking) | Yes | This scenario works with advanced networking for lab plans. Learn about the [ethical hacking class type](./class-type-ethical-hacking.md). | | Enable using Azure Bastion for lab VMs | No | Azure Bastion isn't supported in Azure Lab Services. |
+| Set up line-of-sight to domain controller | Not recommended | Line-of-sight from a lab to a domain controller is required to Hybrid Azure AD join or AD domain join VMs; however, we currently do *not* recommend that lab VMs be Azure AD joined/registered, Hybrid Azure AD joined, or AD domain joined due to product limitations. |
## Next steps
lab-services How To Attach External Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-external-storage.md
The following table lists important considerations for each external storage sol
| -- | | | [Azure Files share with public endpoint](#azure-files-share) | <ul><li>Everyone has read/write access.</li><li>No virtual network peering is required.</li><li>Accessible to all VMs, not just lab VMs.</li><li>If you're using Linux, lab users have access to the storage account key.</li></ul> | | [Azure Files share with private endpoint](#azure-files-share) | <ul><li>Everyone has read/write access.</li><li>Virtual network peering is required.</li><li>Accessible only to VMs on the same network (or a peered network) as the storage account.</li><li>If you're using Linux, lab users have access to the storage account key.</li></ul> |
-| [Azure Files with identity-based authorization](#azure-files-with-identity-based-authorization) | <ul><li>Either read or read/write access permissions can be set for folder or file.</li><li>Virtual network peering is required.</li><li>Storage account must be connected to Active Directory.</li><li>Lab VMs must be domain-joined.</li><li>Storage account key isn't used for lab users to connect to the file share.</li></ul> |
| [Azure NetApp Files with NFS volumes](#azure-netapp-files-with-nfs-volumes) | <ul><li>Either read or read/write access can be set for volumes.</li><li>Permissions are set by using a lab VM's IP address.</li><li>Virtual network peering is required.</li><li>You might need to register to use the Azure NetApp Files service.</li><li>Linux only.</li></ul> The cost of using external storage isn't included in the cost of using Azure Lab Services. For more information about pricing, see [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) and [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/).
Lab users should run `mount -a` to remount directories.
For more general information, see [Use Azure Files with Linux](/azure/storage/files/storage-how-to-use-files-linux).
-## Azure Files with identity-based authorization
-
-Azure Files shares can also be accessed by using Active Directory authentication, if the following are both true:
--- The lab VM is domain-joined.-- Active Directory authentication is [enabled on the Azure Storage account](/azure/storage/files/storage-files-active-directory-overview) that hosts the file share. -
-The network drive is mounted on the virtual machine by using the user's identity, not the key to the storage account. Public or private endpoints provide access to the storage account.
-
-Keep in mind the following important points:
--- You can set permissions on a directory or file level.-- You can use current user credentials to authenticate to the file share.-
-For a public endpoint, the virtual network for the storage account doesn't have to be connected to the lab virtual network. You can create the file share anytime before the template VM is published.
-
-For a private endpoint:
-- Access is restricted to traffic originating from the private network, and can't be accessed through the public internet. Only VMs in the private virtual network, VMs in a network peered to the private virtual network, or machines connected to a VPN for the private network, can access the file share. -- This approach requires the file share virtual network to be connected to the lab. To enable advanced networking for labs, see [Connect to your virtual network in Azure Lab Services using vnet injection](how-to-connect-vnet-injection.md). VNet injection must be done during lab plan creation.-
-To create an Azure Files share that's enabled for Active Directory authentication, and to domain-join the lab VMs, follow these steps:
-
-1. Create an [Azure Storage account](/azure/storage/files/storage-how-to-create-file-share).
-1. If you've chosen the private method, create a [private endpoint](/azure/private-link/tutorial-private-endpoint-storage-portal) in order for the file shares to be accessible from the virtual network. Create a [private DNS zone](/azure/dns/private-dns-privatednszone), or use an existing one. Private Azure DNS zones provide name resolution within a virtual network.
-1. Create an [Azure file share](/azure/storage/files/storage-how-to-create-file-share).
-1. Follow the steps to enable identity-based authorization. If you're using Active Directory on-premises, and you're synchronizing it with Azure Active Directory (Azure AD), see [On-premises Active Directory Domain Services authentication over SMB for Azure file shares](/azure/storage/files/storage-files-identity-auth-active-directory-enable). If you're using only Azure AD, see [Enable Azure Active Directory Domain Services authentication on Azure Files](/azure/storage/files/storage-files-identity-auth-active-directory-domain-service-enable).
- >[!IMPORTANT]
- >Talk to the team that manages your Active Directory instance to verify that all prerequisites listed in the instructions are met.
-1. Assign SMB share permission roles in Azure. For details about permissions that are granted to each role, see [share-level permissions](/azure/storage/files/storage-files-identity-ad-ds-assign-permissions).
- - **Storage File Data SMB Share Elevated Contributor** role must be assigned to the person or group that grants permissions for contents of the file share.
- - **Storage File Data SMB Share Contributor** role should be assigned to lab users who need to add or edit files on the file share.
- - **Storage File Data SMB Share Reader** role should be assigned to lab users who only need to read the files from the file share.
-
-1. Set up directory-level and/or file-level permissions for the file share. You must set up permissions from a domain-joined machine that has network access to the file share. To modify directory-level and/or file-level permissions, mount the file share by using the storage key, not your Azure AD credentials. To assign permissions, use the [Set-Acl](/powershell/module/microsoft.powershell.security/set-acl) PowerShell command, or [icacls](/windows-server/administration/windows-commands/icacls) in Windows.
-1. [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md).
-1. [Create the lab](how-to-manage-labs.md).
-1. Save a script on the template VM that lab users can run to connect to the network drive:
- 1. Open the storage account in the Azure portal.
- 1. Under **File Service**, select **File Shares**.
- 1. Find the share that you want to connect to, select the ellipses button on the far right, and choose **Connect**.
- 1. The page shows instructions for Windows, Linux, and macOS. If you're using Windows, set **Authentication method** to **Active Directory**.
- 1. Copy the code in the example, and save it on the template machine in a `.ps1` file for Windows, or an `.sh` file for Linux.
-
-1. On the template machine, download and run the script to [join lab user machines to the domain](https://aka.ms/azlabs/scripts/ActiveDirectoryJoin).
-
- The `Join-AzLabADTemplate` script [publishes the template VM](how-to-create-manage-template.md#publish-the-template-vm) automatically.
-
- > [!NOTE]
- > The template machine isn't domain-joined. To view files on the share, educators need to use a lab VM for themselves.
-
-1. Connect to the Azure Files share from the lab VM.
-
- - Lab users on Windows can connect to the Azure Files share by using [File Explorer](/azure/storage/files/storage-how-to-use-files-windows) with their credentials, after they've been given the path to the file share. Alternately, lab users can run the script you saved earlier to connect to the network drive.
- - For lab users who are using Linux, run the script you saved previously to connect to the network drive.
- ## Azure NetApp Files with NFS volumes [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) is an enterprise-class, high-performance, metered file storage service.
lab-services How To Prepare Windows Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\OneDrive"
-Name "FilesOnDemandEnabled" -Value "00000001" -PropertyType DWORD ```
-### Silently sign in users to OneDrive
-
-You can configure OneDrive to automatically sign in with the Windows credentials of the logged on lab user. Automatic sign-in is useful for scenarios where lab users signs in with their organizational account.
-
-Use the following PowerShell script to enable automatic sign-in:
-
-```powershell
-New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\OneDrive"
-New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\OneDrive"
- -Name "SilentAccountConfig" -Value "00000001" -PropertyType DWORD
-```
- ### Disable the OneDrive tutorial By default, after you finish the OneDrive setup, a tutorial is launched in the browser. Use the following script to disable the tutorial from showing:
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\OneDrive"
### Set the maximum download size of a user's OneDrive
-To prevent that OneDrive automatically uses a large amount of disk space on the lab virtual machine when syncing files, you can configure a maximum size threshold. When a lab user has a OneDrive that's larger than the threshold (in MB), the user receives a prompt to choose which folders they want to sync before the OneDrive sync client (OneDrive.exe) downloads the files to the machine. This setting is used in combination with [automatic sign-in of users to OneDrive](#silently-sign-in-users-to-onedrive) and where [on-demand files](#use-onedrive-files-on-demand) isn't enabled.
+To prevent OneDrive from automatically using a large amount of disk space on the lab virtual machine when syncing files, you can configure a maximum size threshold. When a lab user has a OneDrive that's larger than the threshold (in MB), the user receives a prompt to choose which folders they want to sync before the OneDrive sync client (OneDrive.exe) downloads the files to the machine. This setting is used where [on-demand files](#use-onedrive-files-on-demand) isn't enabled.
Use the following PowerShell script to set the maximum size threshold. In our example, `1111-2222-3333-4444` is the organization ID and `0005000` sets a threshold of 5 GB.
load-balancer Load Balancer Ipv6 Internet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-cli.md
To create VMs, you must have a storage account. For load balancing, the VMs need
3. Create the virtual machines with the associated NICs: ```azurecli
- az vm create --resource-group $rgname --name $vm1Name --image $imageurn --admin-username $vmUserName --admin-password $mySecurePassword --nics $nic1Id --location $location --availability-set $availabilitySetName --size "Standard_A1"
+ az vm create --resource-group $rgname --name $vm1Name --image $imageurn --admin-username $vmUserName --admin-password $mySecurePassword --nics $nic1Id --location $location --availability-set $availabilitySetName --size "Standard_A1"
- az vm create --resource-group $rgname --name $vm2Name --image $imageurn --admin-username $vmUserName --admin-password $mySecurePassword --nics $nic2Id --location $location --availability-set $availabilitySetName --size "Standard_A1"
- ```
+ az vm create --resource-group $rgname --name $vm2Name --image $imageurn --admin-username $vmUserName --admin-password $mySecurePassword --nics $nic2Id --location $location --availability-set $availabilitySetName --size "Standard_A1"
+ ```
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-define-test-criteria.md
In this article, you learn how to define fail criteria or auto stop criteria for your load tests with Azure Load Testing. Fail criteria let you define performance and quality expectations for your application under load. Azure Load Testing supports various client metrics for defining fail criteria, such as error rate or response time. Auto stop criteria enable you to automatically stop your load test when the error rate surpasses a given threshold.
-## Prerequisites
+## Prerequisites
-- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An Azure load testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
+- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure load testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
## Load test fail criteria
In this section, you configure test criteria for a load test in the Azure portal
The dashboard shows each of the test criteria and their status. The overall test status is failed if at least one criterion was met. :::image type="content" source="media/how-to-define-test-criteria/test-criteria-dashboard.png" alt-text="Screenshot that shows the test criteria on the load test dashboard.":::
-
+ # [Azure Pipelines / GitHub Actions](#tab/pipelines+github) In this section, you configure test criteria for a load test, as part of a CI/CD workflow. Learn how to [set up automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
To specify fail criteria in the YAML configuration file:
- percentage(error) > 50 - GetCustomerDetails: avg(latency) >200 ```
-
+ When you define a test criterion for a specific JMeter request, the request name should match the name of the JMeter sampler in the JMX file. :::image type="content" source="media/how-to-define-test-criteria/jmeter-request-name.png" alt-text="Screenshot of the JMeter user interface, highlighting the request name.":::
To configure auto stop for your load test in the Azure portal:
# [Azure Pipelines / GitHub Actions](#tab/pipelines+github)
-To configure auto stop for your load test in a CI/CD workflow, you update the [load test configuration YAML file](./reference-test-config-yaml.md).
+To configure auto stop for your load test in a CI/CD workflow, you update the [load test configuration YAML file](./reference-test-config-yaml.md).
To specify auto stop settings in the YAML configuration file: 1. Open the YAML test configuration file for your load test in your editor of choice. - To enable auto stop, add the `autoStop` setting and specify the `errorPercentage` and `timeWindow`.
-
+ The following example automatically stops the load test when the error percentage exceeds 80% during any 2-minute time window:
-
+ ```yaml version: v0.1 testId: SampleTestCICD
To specify auto stop settings in the YAML configuration file:
description: Load test website home page engineInstances: 1 autoStop:
- errorPercentage: 80
- timeWindow: 120
- ```
-
+ errorPercentage: 80
+ timeWindow: 120
+ ```
+ - To disable auto stop, add `autoStop: disable` to the configuration file. The following example disables auto stop for your load test:
-
+ ```yaml version: v0.1 testId: SampleTestCICD
To specify auto stop settings in the YAML configuration file:
description: Load test website home page engineInstances: 1 autoStop: disable
- ```
-
+ ```
+ 1. Save the YAML configuration file, and commit the changes to source control. Learn how to [set up automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
logic-apps Logic Apps Handle Large Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-handle-large-messages.md
If you're using built-in HTTP actions or specific managed connector actions, and
> [!NOTE] > Azure Logic Apps doesn't support chunking on triggers due to the increased overhead from exchanging multiple messages.
-> Also, Azure Logic Apps implements chunking for HTTP actions using its own protocol as described in this article.
-> So, even if your web site or web service supports chunking, they won't work with HTTP action chunking.
-> To use HTTP action chunking with your web site or web service, you have to implement the same protocol
-> that's used by Azure Logic Apps. Otherwise, don't enable chunking on the HTTP action.
+> Also, Azure Logic Apps implements chunking for HTTP actions using its own protocol as described in this article.
+> So, even if your web site or web service supports chunking, they won't work with HTTP action chunking.
+> To use HTTP action chunking with your web site or web service, you have to implement the same protocol
+> that's used by Azure Logic Apps. Otherwise, don't enable chunking on the HTTP action.
This article provides an overview about how chunking works in Azure Logic Apps and how to set up chunking on supported actions.
This article provides an overview about how chunking works in Azure Logic Apps a
## What makes messages "large"?
-Messages are "large" based on the service handling those messages.
-The exact size limit on large messages differs across Logic Apps and connectors.
-Both Logic Apps and connectors can't directly consume large messages,
-which must be chunked. For the Logic Apps message size limit,
+Messages are "large" based on the service handling those messages.
+The exact size limit on large messages differs across Logic Apps and connectors.
+Neither Logic Apps nor connectors can directly consume large messages,
+which must be chunked. For the Logic Apps message size limit,
see [Logic Apps limits and configuration](../logic-apps/logic-apps-limits-and-config.md).
-For each connector's message size limit, see the
+For each connector's message size limit, see the
[connector's reference documentation](/connectors/connector-reference/connector-reference-logicapps-connectors). ### Chunked message handling for Logic Apps
-Logic Apps can't directly use outputs from chunked
-messages that are larger than the message size limit.
-Only actions that support chunking can access the message content in these outputs.
+Logic Apps can't directly use outputs from chunked
+messages that are larger than the message size limit.
+Only actions that support chunking can access the message content in these outputs.
So, an action that handles large messages must meet *either* these criteria:
-* Natively support chunking when that action belongs to a connector.
-* Have chunking support enabled in that action's runtime configuration.
+* Natively support chunking when that action belongs to a connector.
+* Have chunking support enabled in that action's runtime configuration.
-Otherwise, you get a runtime error when you try to access large content output.
+Otherwise, you get a runtime error when you try to access large content output.
To enable chunking, see [Set up chunking support](#set-up-chunking). ### Chunked message handling for connectors
-Services that communicate with Logic Apps can have their own message size limits.
-These limits are often smaller than the Logic Apps limit. For example, assuming that
-a connector supports chunking, a connector might consider a 30-MB message as large,
-while Logic Apps does not. To comply with this connector's limit,
+Services that communicate with Logic Apps can have their own message size limits.
+These limits are often smaller than the Logic Apps limit. For example, assuming that
+a connector supports chunking, a connector might consider a 30-MB message as large,
+while Logic Apps does not. To comply with this connector's limit,
Logic Apps splits any message larger than 30 MB into smaller chunks.
-For connectors that support chunking, the underlying chunking protocol is invisible to end users.
-However, not all connectors support chunking, so these connectors generate runtime
+For connectors that support chunking, the underlying chunking protocol is invisible to end users.
+However, not all connectors support chunking, so these connectors generate runtime
errors when incoming messages exceed the connectors' size limits. For actions that support and are enabled for chunking, you can't use trigger bodies, variables, and expressions such as `@triggerBody()?['Content']` because using any of these inputs prevents the chunking operation from happening. Instead, use the [**Compose** action](../logic-apps/logic-apps-perform-data-operations.md#compose-action). Specifically, you must create a `body` field by using the **Compose** action to store the data output from the trigger body, variable, expression, and so on, for example:
Then, to reference the data, in the chunking action, use `@body('Compose')`. ## Set up chunking over HTTP
## Set up chunking over HTTP
-In generic HTTP scenarios, you can split up large content downloads and uploads over HTTP,
-so that your logic app and an endpoint can exchange large messages. However,
-you must chunk messages in the way that Logic Apps expects.
+In generic HTTP scenarios, you can split up large content downloads and uploads over HTTP,
+so that your logic app and an endpoint can exchange large messages. However,
+you must chunk messages in the way that Logic Apps expects.
-If an endpoint has enabled chunking for downloads or uploads,
-the HTTP actions in your logic app automatically chunk large messages. Otherwise,
-you must set up chunking support on the endpoint. If you don't own or control
+If an endpoint has enabled chunking for downloads or uploads,
+the HTTP actions in your logic app automatically chunk large messages. Otherwise,
+you must set up chunking support on the endpoint. If you don't own or control
the endpoint or connector, you might not have the option to set up chunking.
-Also, if an HTTP action doesn't already enable chunking,
-you must also set up chunking in the action's `runTimeConfiguration` property.
-You can set this property inside the action, either directly in the code view
+Also, if an HTTP action doesn't already enable chunking,
+you must also set up chunking in the action's `runTimeConfiguration` property.
+You can set this property inside the action, either directly in the code view
editor as described later, or in the Logic Apps Designer as described here:
-1. In the HTTP action's upper-right corner,
-choose the ellipsis button (**...**),
+1. In the HTTP action's upper-right corner,
+choose the ellipsis button (**...**),
and then choose **Settings**. ![On the action, open the settings menu](./media/logic-apps-handle-large-messages/http-settings.png)
and then choose **Settings**.
![Turn on chunking](./media/logic-apps-handle-large-messages/set-up-chunking.png)
-3. To continue setting up chunking for downloads or uploads,
+3. To continue setting up chunking for downloads or uploads,
continue with the following sections. <a name="download-chunks"></a> ## Download content in chunks
-Many endpoints automatically send large messages
-in chunks when downloaded through an HTTP GET request.
-To download chunked messages from an endpoint over HTTP,
-the endpoint must support partial content requests,
-or *chunked downloads*. When your logic app sends an HTTP GET
-request to an endpoint for downloading content,
-and the endpoint responds with a "206" status code,
-the response contains chunked content.
-Logic Apps can't control whether an endpoint supports partial requests.
-However, when your logic app gets the first "206" response,
+Many endpoints automatically send large messages
+in chunks when downloaded through an HTTP GET request.
+To download chunked messages from an endpoint over HTTP,
+the endpoint must support partial content requests,
+or *chunked downloads*. When your logic app sends an HTTP GET
+request to an endpoint for downloading content,
+and the endpoint responds with a "206" status code,
+the response contains chunked content.
+Logic Apps can't control whether an endpoint supports partial requests.
+However, when your logic app gets the first "206" response,
your logic app automatically sends multiple requests to download all the content.
-To check whether an endpoint can support partial content,
-send a HEAD request. This request helps you determine
-whether the response contains the `Accept-Ranges` header.
-That way, if the endpoint supports chunked downloads but
-doesn't send chunked content, you can *suggest*
-this option by setting the `Range` header in your HTTP GET request.
+To check whether an endpoint can support partial content,
+send a HEAD request. This request helps you determine
+whether the response contains the `Accept-Ranges` header.
+That way, if the endpoint supports chunked downloads but
+doesn't send chunked content, you can *suggest*
+this option by setting the `Range` header in your HTTP GET request.
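As a concrete illustration of that probing pattern outside of Logic Apps, a short Python sketch using the `requests` library might look like the following; the URL is a placeholder.

```python
import requests

url = "https://example.com/large-content"  # placeholder endpoint

# 1. HEAD request: does the endpoint advertise support for partial content?
head = requests.head(url)
print("Accept-Ranges:", head.headers.get("Accept-Ranges"))

# 2. Suggest chunking by requesting only the first 1,024 bytes.
resp = requests.get(url, headers={"Range": "bytes=0-1023"})
if resp.status_code == 206:
    print("Partial content returned:", resp.headers.get("Content-Range"))
else:
    print("Endpoint ignored the Range header; status", resp.status_code)
```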
-These steps describe the detailed process Logic Apps uses for
+These steps describe the detailed process Logic Apps uses for
downloading chunked content from an endpoint to your logic app: 1. Your logic app sends an HTTP GET request to the endpoint.
downloading chunked content from an endpoint to your logic app:
Your logic app sends follow-up GET requests until the entire content is retrieved.
-For example, this action definition shows an HTTP GET request that sets the `Range` header.
+For example, this action definition shows an HTTP GET request that sets the `Range` header.
The header *suggests* that the endpoint should respond with chunked content: ```json
The header *suggests* that the endpoint should respond with chunked content:
} ```
-The GET request sets the "Range" header to "bytes=0-1023",
-which is the range of bytes. If the endpoint supports
-requests for partial content, the endpoint responds
-with a content chunk from the requested range.
+The GET request sets the "Range" header to "bytes=0-1023",
+which is the range of bytes. If the endpoint supports
+requests for partial content, the endpoint responds
+with a content chunk from the requested range.
Based on the endpoint, the exact format for the "Range" header field can differ. <a name="upload-chunks"></a> ## Upload content in chunks
-To upload chunked content from an HTTP action, the action must have enabled
-chunking support through the action's `runtimeConfiguration` property.
-This setting permits the action to start the chunking protocol.
-Your logic app can then send an initial POST or PUT message to the target endpoint.
-After the endpoint responds with a suggested chunk size, your logic app follows
+To upload chunked content from an HTTP action, the action must have enabled
+chunking support through the action's `runtimeConfiguration` property.
+This setting permits the action to start the chunking protocol.
+Your logic app can then send an initial POST or PUT message to the target endpoint.
+After the endpoint responds with a suggested chunk size, your logic app follows
up by sending HTTP PATCH requests that contain the content chunks.
-The following steps describe the detailed process Logic Apps uses for uploading
+The following steps describe the detailed process Logic Apps uses for uploading
+chunked content from your logic app to an endpoint: 1. Your logic app sends an initial HTTP POST or PUT request with an empty message body. The request header includes the following information about the content that your logic app wants to upload in chunks:
chunked content from your logic app to an endpoint:
| Endpoint response header field | Type | Required | Description | |--||-|-|
- | **Range** | String | Yes | The byte range for content that has been received by the endpoint, for example: "bytes=0-1023" |
+ | **Range** | String | Yes | The byte range for content that has been received by the endpoint, for example: "bytes=0-1023" |
| **x-ms-chunk-size** | Integer | No | The suggested chunk size in bytes | ||||
-For example, this action definition shows an HTTP POST request for uploading chunked content to an endpoint. In the action's `runTimeConfiguration` property,
+For example, this action definition shows an HTTP POST request for uploading chunked content to an endpoint. In the action's `runTimeConfiguration` property,
the `contentTransfer` property sets `transferMode` to `chunked`: ```json
the `contentTransfer` property sets `transferMode` to `chunked`:
"runtimeConfiguration": { "contentTransfer": { "transferMode": "chunked"
- }
+ }
}, "inputs": { "method": "POST",
the `contentTransfer` property sets `transferMode` to `chunked`:
"body": "@body('getAction')" }, "runAfter": {
- "getAction": ["Succeeded"]
+ "getAction": ["Succeeded"]
}, "type": "Http" }
machine-learning Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/classification.md
Previously updated : 12/1/2022 Last updated : 07/1/2023 # AutoML Classification
AutoML creates a number of pipelines in parallel that try different algorithms a
1. For **classification**, you can also enable deep learning.
-If deep learning is enabled, validation is limited to _train_validation split_. [Learn more about validation options](../v1/how-to-configure-cross-validation-data-splits.md).
+If deep learning is enabled, validation is limited to _train_validation split_.
-
-1. (Optional) View addition configuration settings: additional settings you can use to better control the training job. Otherwise, defaults are applied based on experiment selection and data.
+4. (Optional) View additional configuration settings: additional settings you can use to better control the training job. Otherwise, defaults are applied based on experiment selection and data.
Additional configurations|Description | Primary metric| Main metric used for scoring your model. [Learn more about model metrics](../how-to-configure-auto-train.md#primary-metric).
- Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](../v1/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
+ Debug model via the Responsible AI dashboard | Generate a Responsible AI dashboard to do a holistic assessment and debugging of the recommended best model. This includes insights such as model explanations, fairness and performance explorer, data explorer, and model error analysis. [Learn more about how you can generate a Responsible AI dashboard.](../how-to-responsible-ai-insights-ui.md)
Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](../how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels). Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary. Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](../how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
If deep learning is enabled, validation is limited to _train_validation split_.
1. The **[Optional] Validate and test** form allows you to do the following.
- 1. Specify the type of validation to be used for your training job. [Learn more about cross validation](../v1/how-to-configure-cross-validation-data-splits.md#prerequisites).
+ 1. Specify the type of validation to be used for your training job.
    1. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test job is automatically triggered at the end of your experiment. This test job runs only on the best model that was recommended by automated ML.
If deep learning is enabled, validation is limited to _train_validation split_.
 > Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time. * Test data is considered separate from training and validation, so as to not bias the results of the test job of the recommended model. [Learn more about bias during model validation](../concept-automated-ml.md#training-validation-and-test-data).
- * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](../v1/how-to-create-register-datasets.md#tabulardataset).
+ * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](../how-to-create-data-assets.md).
* The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated no test metrics are calculated. * The test dataset should not be the same as the training dataset or the validation dataset. + ## Next steps See the [set of components available](../component-reference/component-reference.md) to Azure Machine Learning.
machine-learning Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/regression.md
Previously updated : 12/1/2022 Last updated : 07/17/2023 # AutoML Regression
AutoML creates a number of pipelines in parallel that try different algorithms a
Additional configurations|Description | Primary metric| Main metric used for scoring your model. [Learn more about model metrics](..//how-to-configure-auto-train.md#primary-metric).
- Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](../v1/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
+ Debug model via the [Responsible AI dashboard](../concept-responsible-ai-dashboard.md) | Generate a Responsible AI dashboard to do a holistic assessment and debugging of the recommended best model. This includes insights such as model explanations, fairness and performance explorer, data explorer, and model error analysis. [Learn more about how you can generate a Responsible AI dashboard.](../how-to-responsible-ai-insights-ui.md)
Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](../how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels). Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary. Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](../how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
AutoML creates a number of pipelines in parallel that try different algorithms a
1. The **[Optional] Validate and test** form allows you to do the following.
- 1. Specify the type of validation to be used for your training job. [Learn more about cross validation](../v1/how-to-configure-cross-validation-data-splits.md#prerequisites).
+ 1. Specify the type of validation to be used for your training job.
    1. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test job is automatically triggered at the end of your experiment. This test job runs only on the best model that was recommended by automated ML.
AutoML creates a number of pipelines in parallel that try different algorithms a
 > Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time. * Test data is considered separate from training and validation, so as to not bias the results of the test job of the recommended model. [Learn more about bias during model validation](../concept-automated-ml.md#training-validation-and-test-data).
- * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](../v1/how-to-create-register-datasets.md#tabulardataset).
+ * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](../how-to-create-data-assets.md).
* The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated no test metrics are calculated. * The test dataset should not be the same as the training dataset or the validation dataset. * Forecasting jobs do not support train/test split. +
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
# Authorization on batch endpoints
-Batch endpoints support Azure Active Directory authentication, or `aad_token`. That means that in order to invoke a batch endpoint, the user must present a valid Azure Active Directory authentication token to the batch endpoint URI. Authorization is enforced at the endpoint level. The following article explains how to correctly interact with batch endpoints and the security requirements for it.
+Batch endpoints support Azure Active Directory authentication, or `aad_token`. That means that in order to invoke a batch endpoint, the user must present a valid Azure Active Directory authentication token to the batch endpoint URI. Authorization is enforced at the endpoint level. The following article explains how to correctly interact with batch endpoints and the security requirements for it.
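For illustration, a minimal Python sketch of acquiring such a token with `azure-identity` and presenting it to the endpoint's REST URI might look like the following; the endpoint URI is a placeholder, and the input payload mirrors the example used later in this article.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire an Azure AD token scoped to Azure Machine Learning.
credential = DefaultAzureCredential()
token = credential.get_token("https://ml.azure.com/.default").token

# Placeholder invocation URI; any required query parameters (for example, api-version) are omitted here.
endpoint_uri = "<ENDPOINT_URI>/jobs"

body = {
    "properties": {
        "InputData": {
            "mnistinput": {
                "JobInputType": "UriFolder",
                "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci",
            }
        }
    }
}

response = requests.post(
    endpoint_uri,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=body,
)
print(response.status_code)
```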
## Prerequisites
You can either use one of the [built-in security roles](../role-based-access-con
The following examples show different ways to start batch deployment jobs using different types of credentials:
-> [!IMPORTANT]
+> [!IMPORTANT]
> When working on a private link-enabled workspaces, batch endpoints can't be invoked from the UI in Azure Machine Learning studio. Please use the Azure Machine Learning CLI v2 instead for job creation. ### Running jobs using user's credentials
In this case, we want to execute a batch endpoint using the identity of the user
```python job = ml_client.batch_endpoints.invoke(
- endpoint_name,
+ endpoint_name,
input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci") ) ```
When working with REST, we recommend invoking batch endpoints using a service pr
```azurecli az account get-access-token --resource https://ml.azure.com --query "accessToken" --output tsv ```
-
+ 1. Take note of the generated output. 1. Once authenticated, make a request to the invocation URI replacing `<TOKEN>` by the one you obtained before.
-
+ __Request__:
-
+ ```http POST jobs HTTP/1.1 Host: <ENDPOINT_URI>
When working with REST, we recommend invoking batch endpoints using a service pr
Content-Type: application/json ``` __Body:__
-
+ ```json { "properties": {
- "InputData": {
- "mnistinput": {
- "JobInputType" : "UriFolder",
- "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
- }
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
+ }
} } }
In this case, we want to execute a batch endpoint using a service principal alre
# [Azure CLI](#tab/cli)
-1. Create a secret to use for authentication as explained at [Option 32: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-application-secret).
+1. Create a secret to use for authentication as explained at [Option 3: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-application-secret).
1. To authenticate using a service principal, use the following command. For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli). ```azurecli
In this case, we want to execute a batch endpoint using a service principal alre
```python job = ml_client.batch_endpoints.invoke(
- endpoint_name,
+ endpoint_name,
input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci") ) ``` # [REST](#tab/rest)
-1. Create a secret to use for authentication as explained at [Option 3: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-application-secret).
+1. Create a secret to use for authentication as explained at [Option 3: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-application-secret).
1. Use the login service from Azure to get an authorization token. Authorization tokens are issued to a particular scope. The resource type for Azure Machine Learning is `https://ml.azure.com`. The request would look as follows:
-
+ __Request__:
-
+ ```http POST /{TENANT_ID}/oauth2/token HTTP/1.1 Host: login.microsoftonline.com ```
-
+ __Body__:
-
+ ``` grant_type=client_credentials&client_id=<CLIENT_ID>&client_secret=<CLIENT_SECRET>&resource=https://ml.azure.com ```
-
+    > [!IMPORTANT] > Notice that the resource scope for invoking a batch endpoint (`https://ml.azure.com`) is different from the resource scope used to manage them. All management APIs in Azure use the resource scope `https://management.azure.com`, including Azure Machine Learning. 3. Once authenticated, use the following query to run a batch deployment job:
-
+ __Request__:
-
+ ```http POST jobs HTTP/1.1 Host: <ENDPOINT_URI>
In this case, we want to execute a batch endpoint using a service principal alre
Content-Type: application/json ``` __Body:__
-
+ ```json { "properties": {
- "InputData": {
- "mnistinput": {
- "JobInputType" : "UriFolder",
- "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
- }
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci"
+ }
} } }
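To make the token request above concrete, this is a hedged Python sketch of the same client credentials grant; the tenant ID, client ID, and client secret are placeholders for your service principal's values.

```python
import requests

# Placeholders for your Azure AD tenant and the service principal created earlier.
TENANT_ID = "<TENANT_ID>"
CLIENT_ID = "<CLIENT_ID>"
CLIENT_SECRET = "<CLIENT_SECRET>"

# Client credentials grant against login.microsoftonline.com, scoped to the
# resource used for invoking batch endpoints (https://ml.azure.com).
response = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": "https://ml.azure.com",
    },
)
access_token = response.json()["access_token"]
```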
Once authenticated, use the following command to run a batch deployment job:
```python job = ml_client.batch_endpoints.invoke(
- endpoint_name,
+ endpoint_name,
input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci") ) ```
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
In general, files in MLflow are called artifacts. You can log artifacts in multi
|Log all the artifacts in an existing folder | `mlflow.log_artifacts("path/to/folder")`| Folder structure is copied to the run, but the root folder indicated is not included. | > [!TIP]
-> When __loggiging large files__ with `log_artifact` or `log_model`, you may encounter time out errors before the upload of the file is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT`. It's default value is `300` (seconds).
+> When __logging large files__ with `log_artifact` or `log_model`, you may encounter timeout errors before the upload of the file is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT`. Its default value is `300` (seconds).
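For example, a minimal sketch (assuming a local MLflow run and a placeholder file name) of raising that timeout before uploading a large artifact:

```python
import os
import mlflow

# Raise the artifact upload timeout from the default 300 seconds to 30 minutes.
# Set the variable before the logging call that performs the upload.
os.environ["AZUREML_ARTIFACTS_DEFAULT_TIMEOUT"] = "1800"

with mlflow.start_run():
    # "large_model.pkl" is a placeholder for whatever large local file you log.
    mlflow.log_artifact("large_model.pkl")
```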
## Logging models
MLflow introduces the concept of "models" as a way to package all the artifacts
To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For more details about how to log MLflow models see [Logging MLflow models](how-to-log-mlflow-models.md) For migrating existing models to MLflow, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md). > [!TIP]
-> When __loggiging large models__, you may encounter the error `Failed to flush the queue within 300 seconds`. Usually, it means the operation is timing out before the upload of the model artifacts is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_VALUE`.
+> When __logging large models__, you may encounter the error `Failed to flush the queue within 300 seconds`. Usually, it means the operation is timing out before the upload of the model artifacts is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT`.
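Tying the `log_model()` call above to a concrete framework, here's a hedged scikit-learn sketch (not code from this article) of packaging a trained estimator as an MLflow model:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    # Packages the estimator plus its environment and metadata as an MLflow model
    # under the "model" artifact path of the active run.
    mlflow.sklearn.log_model(model, artifact_path="model")
```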
## Automatic logging
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-understand-automated-ml.md
Previously updated : 06/7/2023 Last updated : 07/20/2023 # Evaluate automated machine learning experiment results
-In this article, learn how to evaluate and compare models trained by your automated machine learning (automated ML) experiment. Over the course of an automated ML experiment, many jobs are created and each job creates a model. For each model, automated ML generates evaluation metrics and charts that help you measure the model's performance.
+In this article, learn how to evaluate and compare models trained by your automated machine learning (automated ML) experiment. Over the course of an automated ML experiment, many jobs are created and each job creates a model. For each model, automated ML generates evaluation metrics and charts that help you measure the model's performance. By default, you can also generate a Responsible AI dashboard for a holistic assessment and debugging of the recommended best model. Its insights include model explanations, a fairness and performance explorer, a data explorer, and model error analysis. Learn more about how you can generate a [Responsible AI dashboard](how-to-responsible-ai-insights-ui.md).
For example, automated ML generates the following charts based on experiment type.
After your automated ML experiment completes, a history of the jobs can be found
The following steps and video, show you how to view the run history and model evaluation metrics and charts in the studio: 1. [Sign into the studio](https://ml.azure.com/) and navigate to your workspace.
-1. In the left menu, select **Runs**.
+1. In the left menu, select **Jobs**.
1. Select your experiment from the list of experiments. 1. In the table at the bottom of the page, select an automated ML job. 1. In the **Models** tab, select the **Algorithm name** for the model you want to evaluate.
weighted_accuracy|Weighted accuracy is accuracy where each sample is weighted by
### Binary vs. multiclass classification metrics
-Automated ML automatically detects if the data is binary and also allows users to activate binary classification metrics even if the data is multiclass by specifying a `true` class. Multiclass classification metrics will be reported no matter if a dataset has two classes or more than two classes. Binary classification metrics will only be reported when the data is binary, or the users activate the option.
+Automated ML automatically detects if the data is binary and also allows users to activate binary classification metrics even if the data is multiclass by specifying a `true` class. Multiclass classification metrics are reported whether a dataset has two classes or more than two classes. Binary classification metrics are reported only when the data is binary, or when users activate the option.
> [!Note] > When a binary classification task is detected, we use `numpy.unique` to find the set of labels and the later label will be used as the `true` class. Since there is a sorting procedure in `numpy.unique`, the choice of `true` class will be stable.
-Note that multiclass classification metrics are intended for multiclass classification. When applied to a binary dataset, these metrics won't treat any class as the `true` class, as you might expect. Metrics that are clearly meant for multiclass are suffixed with `micro`, `macro`, or `weighted`. Examples include `average_precision_score`, `f1_score`, `precision_score`, `recall_score`, and `AUC`. For example, instead of calculating recall as `tp / (tp + fn)`, the multiclass averaged recall (`micro`, `macro`, or `weighted`) averages over both classes of a binary classification dataset. This is equivalent to calculating the recall for the `true` class and the `false` class separately, and then taking the average of the two.
+Note that multiclass classification metrics are intended for multiclass classification. When applied to a binary dataset, these metrics don't treat any class as the `true` class, as you might expect. Metrics that are clearly meant for multiclass are suffixed with `micro`, `macro`, or `weighted`. Examples include `average_precision_score`, `f1_score`, `precision_score`, `recall_score`, and `AUC`. For example, instead of calculating recall as `tp / (tp + fn)`, the multiclass averaged recall (`micro`, `macro`, or `weighted`) averages over both classes of a binary classification dataset. This is equivalent to calculating the recall for the `true` class and the `false` class separately, and then taking the average of the two.
In addition, although automatic detection of binary classification is supported, we still recommend always specifying the `true` class manually to make sure the binary classification metrics are calculated for the correct class.
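To make the averaging behavior concrete, here's a small sketch that uses scikit-learn directly (not automated ML) to compare the `true`-class recall with the macro-averaged recall on a binary label set:

```python
from sklearn.metrics import recall_score

y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 1, 0, 1, 1, 0, 1, 1]

# Recall of the `true` class only: tp / (tp + fn) for label 1 -> 4/5 = 0.8
print(recall_score(y_true, y_pred, pos_label=1))

# Macro-averaged recall averages the per-class recalls of labels 0 and 1:
# (2/3 + 4/5) / 2 ≈ 0.733
print(recall_score(y_true, y_pred, average="macro"))
```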
The mAP, precision and recall values are logged at an epoch-level for image obje
![Epoch-level charts for object detection](./media/how-to-understand-automated-ml/image-object-detection-map.png)
-## Model explanations and feature importances
+## Responsible AI dashboard for best recommended AutoML model (preview)
+
+The Azure Machine Learning Responsible AI dashboard provides a single interface to help you implement Responsible AI in practice effectively and efficiently. The Responsible AI dashboard supports only tabular data, and only classification and regression models. It brings together several mature Responsible AI tools in the areas of:
+
+* Model performance and fairness assessment
+* Data exploration
+* Machine learning interpretability
+* Error analysis
-While model evaluation metrics and charts are good for measuring the general quality of a model, inspecting which dataset features a model used to make its predictions is essential when practicing responsible AI. That's why automated ML provides a model explanations dashboard to measure and report the relative contributions of dataset features. See how to [view the explanations dashboard in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#model-explanations-preview).
+While model evaluation metrics and charts are good for measuring the general quality of a model, operations such as inspecting your model's fairness, viewing its explanations (which dataset features the model used to make its predictions), and inspecting its errors (the blind spots of the model) are essential when practicing responsible AI. That's why automated ML provides a Responsible AI dashboard to help you observe a variety of insights for your model. See how to view the Responsible AI dashboard in the [Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#responsible-ai-dashboard-preview).
+
+See how you can generate this [dashboard via the UI or the SDK](how-to-responsible-ai-insights-sdk-cli.md).
+
+## Model explanations and feature importances
-For a code first experience, see how to set up [model explanations for automated ML experiments with the Azure Machine Learning Python SDK (v1)](./v1/how-to-machine-learning-interpretability-automl.md).
+While model evaluation metrics and charts are good for measuring the general quality of a model, inspecting which dataset features a model used to make its predictions is essential when practicing responsible AI. That's why automated ML provides a model explanations dashboard to measure and report the relative contributions of dataset features. See how to [view the explanations dashboard in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#responsible-ai-dashboard-preview).
> [!NOTE] > Interpretability, best model explanation, is not available for automated ML forecasting experiments that recommend the following algorithms as the best model or ensemble:
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Previously updated : 11/15/2021 Last updated : 07/20/2023
For a Python code-based experience, [configure your automated machine learning e
1. Select your subscription and workspace.
-1. Navigate to the left pane. Select **Automated ML** under the **Author** section.
+1. Navigate to the left pane. Select **Automated ML** under the **Authoring** section.
[![Azure Machine Learning studio navigation pane](media/how-to-use-automated-ml-for-ml-models/nav-pane.png)](media/how-to-use-automated-ml-for-ml-models/nav-pane-expanded.png#lightbox)
- If this is your first time doing any experiments, you'll see an empty list and links to documentation.
+ If this is your first time doing any experiments, you see an empty list and links to documentation.
-Otherwise, you'll see a list of your recent automated ML experiments, including those created with the SDK.
+Otherwise, you see a list of your recent automated ML experiments, including those created with the SDK.
## Create and run experiment
Otherwise, you'll see a list of your recent automated ML experiments, including
1. To create a new dataset from a file on your local computer, select **+Create dataset** and then select **From local file**.
- 1. In the **Basic info** form, give your dataset a unique name and provide an optional description.
-
- 1. Select **Next** to open the **Datastore and file selection form**. On this form you select where to upload your dataset; the default storage container that's automatically created with your workspace, or choose a storage container that you want to use for the experiment.
+    1. Select **Next** to open the **Datastore and file selection form**. On this form, you select where to upload your dataset: either the default storage container that's automatically created with your workspace, or a storage container that you want to use for the experiment.
1. If your data is behind a virtual network, you need to enable the **skip the validation** function to ensure that the workspace can access your data. For more information, see [Use Azure Machine Learning studio in an Azure virtual network](how-to-enable-studio-virtual-network.md).
Otherwise, you'll see a list of your recent automated ML experiments, including
Select **Next.**
- 1. The **Confirm details** form is a summary of the information previously populated in the **Basic info** and **Settings and preview** forms. You also have the option to create a data profile for your dataset using a profiling enabled compute. Learn more about [data profiling (v1)](v1/how-to-connect-data-ui.md#profile).
+ 1. The **Confirm details** form is a summary of the information previously populated in the **Basic info** and **Settings and preview** forms. You also have the option to create a data profile for your dataset using a profiling enabled compute.
Select **Next**.
-1. Select your newly created dataset once it appears. You are also able to view a preview of the dataset and sample statistics.
+1. Select your newly created dataset once it appears. You're also able to view a preview of the dataset and sample statistics.
1. On the **Configure job** form, select **Create new** and enter **Tutorial-automl-deploy** for the experiment name.
Otherwise, you'll see a list of your recent automated ML experiments, including
Virtual machine priority| Low priority virtual machines are cheaper but don't guarantee the compute nodes. Virtual machine type| Select CPU or GPU for virtual machine type. Virtual machine size| Select the virtual machine size for your compute.
- Min / Max nodes| To profile data, you must specify 1 or more nodes. Enter the maximum number of nodes for your compute. The default is 6 nodes for an Azure Machine Learning Compute.
+ Min / Max nodes| To profile data, you must specify one or more nodes. Enter the maximum number of nodes for your compute. The default is six nodes for an Azure Machine Learning Compute.
Advanced settings | These settings allow you to configure a user account and existing virtual network for your experiment. Select **Create**. Creation of a new compute can take a few minutes.
- >[!NOTE]
- > Your compute name will indicate if the compute you select/create is *profiling enabled*. (See the section [data profiling (v1)](v1/how-to-connect-data-ui.md#profile) for more details).
- Select **Next**. 1. On the **Task type and settings** form, select the task type: classification, regression, or forecasting. See [supported task types](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp) for more information. 1. For **classification**, you can also enable deep learning.
-
- If deep learning is enabled, validation is limited to _train_validation split_. [Learn more about validation options (SDK v1)](./v1/how-to-configure-cross-validation-data-splits.md).
1. For **forecasting** you can,
Otherwise, you'll see a list of your recent automated ML experiments, including
1. Select *time column*: This column contains the time data to be used.
- 1. Select *forecast horizon*: Indicate how many time units (minutes/hours/days/weeks/months/years) will the model be able to predict to the future. The further the model is required to predict into the future, the less accurate it becomes. [Learn more about forecasting and forecast horizon](how-to-auto-train-forecast.md).
+    1. Select *forecast horizon*: Indicate how many time units (minutes/hours/days/weeks/months/years) the model should be able to predict into the future. The further into the future the model is required to predict, the less accurate the model becomes. [Learn more about forecasting and forecast horizon](how-to-auto-train-forecast.md).
1. (Optional) View addition configuration settings: additional settings you can use to better control the training job. Otherwise, defaults are applied based on experiment selection and data. Additional configurations|Description | Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric).
- Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms (SDK v1)](./v1/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).
+   Debug model via the Responsible AI dashboard | Generate a Responsible AI dashboard to do a holistic assessment and debugging of the recommended best model. The insights include model explanations, a fairness and performance explorer, a data explorer, and model error analysis. [Learn more about how you can generate a Responsible AI dashboard](./how-to-responsible-ai-insights-ui.md). The Responsible AI dashboard can be generated only if 'Serverless' compute (preview) is specified in the experiment setup step.
Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
- Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary.
- Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
+ Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you don't spend more time on the training job than necessary.
+ Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job won't run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
-1. (Optional) View featurization settings: if you choose to enable **Automatic featurization** in the **Additional configuration settings** form, default featurization techniques are applied. In the **View featurization settings** you can change these defaults and customize accordingly. Learn how to [customize featurizations](#customize-featurization).
+1. (Optional) View featurization settings: if you choose to enable **Automatic featurization** in the **Additional configuration settings** form, default featurization techniques are applied. In the **View featurization settings**, you can change these defaults and customize accordingly. Learn how to [customize featurizations](#customize-featurization).
![Screenshot shows the Select task type dialog box with View featurization settings called out.](media/how-to-use-automated-ml-for-ml-models/view-featurization-settings.png) 1. The **[Optional] Validate and test** form allows you to do the following.
- 1. Specify the type of validation to be used for your training job. [Learn more about cross validation (SDK v1)](./v1/how-to-configure-cross-validation-data-splits.md#prerequisites).
-
- 1. Forecasting tasks only supports k-fold cross validation.
+a. Specify the type of validation to be used for your training job. If you do not explicitly specify either a `validation_data` or `n_cross_validations` parameter, automated ML applies default techniques depending on the number of rows provided in the single dataset `training_data`; a short sketch of these defaults follows after this list.
+
+| Training data size | Validation technique |
+||--|
+|**Larger than 20,000 rows**| Train/validation data split is applied. The default is to take 10% of the initial training data set as the validation set. In turn, that validation set is used for metrics calculation.
+|**Smaller than 20,000 rows**| Cross-validation approach is applied. The default number of folds depends on the number of rows. <br> **If the dataset is fewer than 1,000 rows**, 10 folds are used. <br> **If the rows are between 1,000 and 20,000**, then three folds are used.
- 1. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test job is automatically triggered at the end of your experiment. This test job is only job on the best model that was recommended by automated ML. Learn how to get the [results of the remote test job](#view-remote-test-job-results-preview).
+b. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test job is automatically triggered at the end of your experiment. This test job is run only on the best model that automated ML recommends. Learn how to get the [results of the remote test job](#view-remote-test-job-results-preview).
- >[!IMPORTANT]
- > Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
-
- * Test data is considered a separate from training and validation, so as to not bias the results of the test job of the recommended model. [Learn more about bias during model validation](concept-automated-ml.md#training-validation-and-test-data).
- * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset (v1)](./v1/how-to-create-register-datasets.md#tabulardataset).
- * The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated no test metrics are calculated.
- * The test dataset should not be the same as the training dataset or the validation dataset.
- * Forecasting jobs do not support train/test split.
+>[!IMPORTANT]
+> Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+    * Test data is considered separate from training and validation, so as not to bias the results of the test job of the recommended model. [Learn more about bias during model validation](concept-automated-ml.md#training-validation-and-test-data).
+ * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](how-to-create-data-assets.md#create-data-assets).
+ * The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated no test metrics are calculated.
+ * The test dataset shouldn't be the same as the training dataset or the validation dataset.
+ * Forecasting jobs don't support train/test split.
- ![Screenshot shows the form where to select validation data and test data](media/how-to-use-automated-ml-for-ml-models/validate-test-form.png)
+![Screenshot shows the form where to select validation data and test data](media/how-to-use-automated-ml-for-ml-models/validate-test-form.png)
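As referenced in item a. above, the default validation rules can be condensed into a few lines of Python (a sketch of the documented thresholds only, not automated ML's internal code):

```python
def default_validation(n_rows: int) -> str:
    """Return the validation technique automated ML applies by default, per the table above."""
    if n_rows > 20_000:
        return "train/validation split (10% of training data held out)"
    if n_rows < 1_000:
        return "cross-validation with 10 folds"
    return "cross-validation with 3 folds"

print(default_validation(50_000))  # train/validation split (10% of training data held out)
print(default_validation(500))     # cross-validation with 10 folds
```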
## Customize featurization
Impute with| Select what value to impute missing values with in your data.
## Run experiment and view results
-Select **Finish** to run your experiment. The experiment preparing process can take up to 10 minutes. Training jobs can take an additional 2-3 minutes more for each pipeline to finish running.
+Select **Finish** to run your experiment. The experiment preparation process can take up to 10 minutes. Training jobs can take an additional 2-3 minutes for each pipeline to finish running. If you specified generating a Responsible AI dashboard for the best recommended model, it can take up to 40 minutes more.
> [!NOTE] > The algorithms automated ML employs have inherent randomness that can cause slight variation in a recommended model's final metrics score, like accuracy. Automated ML also performs operations on data such as train-test split, train-validation split or cross-validation when necessary. So if you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiments final metrics score due to these factors.
Select **Finish** to run your experiment. The experiment preparing process can t
The **Job Detail** screen opens to the **Details** tab. This screen shows you a summary of the experiment job including a status bar at the top next to the job number.
-The **Models** tab contains a list of the models created ordered by the metric score. By default, the model that scores the highest based on the chosen metric is at the top of the list. As the training job tries out more models, they are added to the list. Use this to get a quick comparison of the metrics for the models produced so far.
+The **Models** tab contains a list of the models created ordered by the metric score. By default, the model that scores the highest based on the chosen metric is at the top of the list. As the training job tries out more models, they're added to the list. Use this to get a quick comparison of the metrics for the models produced so far.
![Job detail](./media/how-to-use-automated-ml-for-ml-models/explore-models.gif) ### View training job details
-Drill down on any of the completed models to see training job details. On the **Model** tab view details like a model summary and the hyperparameters used for the selected model.
+Drill down on any of the completed models to see training job details. In the **Model** tab, you can view details like a model summary and the hyperparameters used for the selected model.
[![Hyperparameter details](media/how-to-use-automated-ml-for-ml-models/hyperparameter-button.png)](media/how-to-use-automated-ml-for-ml-models/hyperparameter-details.png#lightbox)
On the Data transformation tab, you can see a diagram of what data preprocessing
## View remote test job results (preview)
-If you specified a test dataset or opted for a train/test split during your experiment setup-- on the **Validate and test** form, automated ML automatically tests the recommended model by default. As a result, automated ML calculates test metrics to determine the quality of the recommended model and its predictions.
+If you specified a test dataset or opted for a train/test split during your experiment setup (on the **Validate and test** form), automated ML automatically tests the recommended model by default. As a result, automated ML calculates test metrics to determine the quality of the recommended model and its predictions.
>[!IMPORTANT] > Testing your models with a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
To view the test predictions used to calculate the test metrics,
Alternatively, the predictions file can also be viewed/downloaded from the Outputs + logs tab, expand Predictions folder to locate your predictions.csv file.
-The model test job generates the predictions.csv file that's stored in the default datastore created with the workspace. This datastore is visible to all users with the same subscription. Test jobs are not recommended for scenarios if any of the information used for or created by the test job needs to remain private.
+The model test job generates the predictions.csv file that's stored in the default datastore created with the workspace. This datastore is visible to all users with the same subscription. Test jobs aren't recommended for scenarios if any of the information used for or created by the test job needs to remain private.
## Test an existing automated ML model (preview)
After your experiment completes, you can test the model(s) that automated ML gen
![Test model form](./media/how-to-use-automated-ml-for-ml-models/test-model-form.png)
-## Model explanations (preview)
+## Responsible AI dashboard (preview)
+
+To better understand your model, you can see various insights about it using the Responsible AI dashboard, which lets you evaluate and debug your best automated ML model. The Responsible AI dashboard evaluates model errors and fairness issues, diagnoses why those errors are happening by evaluating your train and/or test data, and observes model explanations. Together, these insights can help you build trust in your model and pass audit processes. Responsible AI dashboards can't be generated for an existing automated ML model; one is created only for the best recommended model when a new AutoML job is created. Users should continue to use Model Explanations (preview) until support is provided for existing models.
+
+To generate a Responsible AI dashboard for a particular model,
+
+1. While submitting an Automated ML job, proceed to the **Task settings** section on the left nav bar and select the **View additional configuration settings** option.
+
+2. In the form that appears after that selection, select the **Explain best model** checkbox.
+++
+ ![Select Explain best model from the Automated ML job configuration page](media/how-to-use-automated-ml-for-ml-models/best-model-selection.png)
+
+3. Proceed to the **Compute** page of the setup form and choose the **Serverless** as your compute.
+
+ ![Serverless compute selection](media/how-to-use-automated-ml-for-ml-models/compute-serverless.png)
+
+4. Once complete, navigate to the Models page of your Automated ML job, which contains a list of your trained models. Select the **View Responsible AI dashboard** link:
+
+ ![View dashboard page within an Automated ML job](media/how-to-use-automated-ml-for-ml-models/view-responsible-ai.png)
+
+The Responsible AI dashboard appears for that model as shown in this image:
-To better understand your model, you can see which data features (raw or engineered) influenced the model's predictions with the model explanations dashboard.
+ ![Responsible AI dashboard](media/how-to-use-automated-ml-for-ml-models/responsible-ai-dashboard.png)
-The model explanations dashboard provides an overall analysis of the trained model along with its predictions and explanations. It also lets you drill into an individual data point and its individual feature importance. [Learn more about the explanation dashboard visualizations (v1)](./v1/how-to-machine-learning-interpretability-aml.md#visualizations).
+In the dashboard, you'll find four components activated for your Automated ML's best model:
-To get explanations for a particular model,
+| Component | What does the component show? | How to read the chart? |
+| - | - | - |
+| [Error Analysis](concept-error-analysis.md) | Use error analysis when you need to: <br> Gain a deep understanding of how model failures are distributed across a dataset and across several input and feature dimensions. <br> Break down the aggregate performance metrics to automatically discover erroneous cohorts in order to inform your targeted mitigation steps. | [Error Analysis Charts](how-to-responsible-ai-dashboard.md) |
+| [Model Overview and Fairness](concept-fairness-ml.md) | Use this component to: <br> Gain a deep understanding of your model performance across different cohorts of data. <br> Understand your model fairness issues by looking at the disparity metrics. These metrics can evaluate and compare model behavior across subgroups identified in terms of sensitive (or nonsensitive) features. | [Model Overview and Fairness Charts](how-to-responsible-ai-dashboard.md#model-overview-and-fairness-metrics) |
+| [Model Explanations](how-to-machine-learning-interpretability.md) | Use the model explanation component to generate human-understandable descriptions of the predictions of a machine learning model by looking at: <br> Global explanations: For example, what features affect the overall behavior of a loan allocation model? <br> Local explanations: For example, why was a customer's loan application approved or rejected? | [Model Explainability Charts](how-to-responsible-ai-dashboard.md#feature-importances-model-explanations) |
+| [Data Analysis](concept-data-analysis.md) | Use data analysis when you need to: <br> Explore your dataset statistics by selecting different filters to slice your data into different dimensions (also known as cohorts). <br> Understand the distribution of your dataset across different cohorts and feature groups. <br> Determine whether your findings related to fairness, error analysis, and causality (derived from other dashboard components) are a result of your dataset's distribution. <br> Decide in which areas to collect more data to mitigate errors that come from representation issues, label noise, feature noise, label bias, and similar factors. | [Data Explorer Charts](how-to-responsible-ai-dashboard.md#data-analysis) |
-1. On the **Models** tab, select the model you want to understand.
-1. Select the **Explain model** button, and provide a compute that can be used to generate the explanations.
-1. Check the **Child jobs** tab for the status.
-1. Once complete, navigate to the **Explanations (preview)** tab which contains the explanations dashboard.
+5. You can further create cohorts (subgroups of data points that share specified characteristics) to focus your analysis of each component on different cohorts. The name of the cohort that's currently applied to the dashboard is always shown at the top left of your dashboard. The default view in your dashboard is your whole dataset, titled "All data". Learn more about the [global controls of your dashboard](how-to-responsible-ai-dashboard.md#global-controls).
- ![Model explanation dashboard](media/how-to-use-automated-ml-for-ml-models/model-explanation-dashboard.png)
## Edit and submit jobs (preview)
In scenarios where you would like to create a new experiment based on the settin
This functionality is limited to experiments initiated from the studio UI and requires the data schema for the new experiment to match that of the original experiment.
-The **Edit and submit** button opens the **Create a new Automated ML job** wizard with the data, compute and experiment settings pre-populated. You can go through each form and edit selections as needed for your new experiment.
+The **Edit and submit** button opens the **Create a new Automated ML job** wizard with the data, compute and experiment settings prepopulated. You can go through each form and edit selections as needed for your new experiment.
## Deploy your model
-Once you have the best model at hand, it is time to deploy it as a web service to predict on new data.
+Once you have the best model at hand, it's time to deploy it as a web service to predict on new data.
>[!TIP]
-> If you are looking to deploy a model that was generated via the `automl` package with the Python SDK, you must [register your model (v1)](./v1/how-to-deploy-and-where.md) to the workspace.
+> If you are looking to deploy a model that was generated via the `automl` package with the Python SDK, you must [register your model](./how-to-deploy-online-endpoints.md) to the workspace.
> > Once your model is registered, find it in the studio by selecting **Models** on the left pane. Once you open your model, you can select the **Deploy** button at the top of the screen, and then follow the instructions as described in **step 2** of the **Deploy your model** section.
Automated ML helps you with deploying the model without writing code:
Compute type| Select the type of endpoint you want to deploy: [*Azure Kubernetes Service (AKS)*](../aks/intro-kubernetes.md) or [*Azure Container Instance (ACI)*](../container-instances/container-instances-overview.md). Compute name| *Applies to AKS only:* Select the name of the AKS cluster you wish to deploy to. Enable authentication | Select to allow for token-based or key-based authentication.
- Use custom deployment assets| Enable this feature if you want to upload your own scoring script and environment file. Otherwise, automated ML provides these assets for you by default. [Learn more about scoring scripts (v1)](./v1/how-to-deploy-and-where.md).
+ Use custom deployment assets| Enable this feature if you want to upload your own scoring script and environment file. Otherwise, automated ML provides these assets for you by default. [Learn more about scoring scripts](how-to-deploy-online-endpoints.md).
>[!Important] > File names must be under 32 characters and must begin and end with alphanumerics. May include dashes, underscores, dots, and alphanumerics between. Spaces are not allowed.
Now you have an operational web service to generate predictions! You can test th
## Next steps
-* [Learn how to consume a web service (SDK v1)](v1/how-to-consume-web-service.md).
* [Understand automated machine learning results](how-to-understand-automated-ml.md). * [Learn more about automated machine learning](concept-automated-ml.md) and Azure Machine Learning.
machine-learning How To Use Event Grid Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid-batch.md
The workflow looks as follows:
3. A Logic App is subscribed to listen to those events. Since the storage account can contain multiple data assets, event filtering will be applied to only react to events happening in a specific folder inside of it. Further filtering can be done if needed (for instance, based on file extensions). 4. The Logic App will be triggered, which in turn will:
- a. It will get an authorization token to invoke batch endpoints using the credentials from a Service Principal
-
- b. It will trigger the batch endpoint (default deployment) using the newly created file as input.
+    1. Get an authorization token to invoke batch endpoints using the credentials from a Service Principal.
+
+    1. Trigger the batch endpoint (default deployment) using the newly created file as input.
5. The batch endpoint will return the name of the job that was created to process the file.
The workflow looks as follows:
Azure Logic Apps can invoke the REST APIs of batch endpoints by using the [HTTP](../connectors/connectors-native-http.md) activity. Batch endpoints support Azure Active Directory for authorization and hence the request made to the APIs require a proper authentication handling.
-We recommend to using a service principal for authentication and interaction with batch endpoints in this scenario.
+We recommend using a service principal for authentication and interaction with batch endpoints in this scenario.
1. Create a service principal following the steps at [Register an application with Azure AD and create a service principal](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). 1. Create a secret to use for authentication as explained at [Option 3: Create a new application secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-application-secret).
We recommend to using a service principal for authentication and interaction wit
1. Grant access for the service principal you created to your workspace as explained at [Grant access](../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). In this example the service principal will require: 1. Permission in the workspace to read batch deployments and perform actions over them.
- 1. Permissions to read/write in data stores.
+ 1. Permissions to read/write in data stores.
## Enabling data access
We will be using cloud URIs provided by Event Grid to indicate the input data to
from azure.ai.ml import MLClient from azure.ai.ml.entities import AmlCompute, ManagedIdentityConfiguration from azure.ai.ml.constants import ManagedServiceIdentityType
-
+ compute_name = "batch-cluster" compute_cluster = ml_client.compute.get(name=compute_name)
-
+ compute_cluster.identity.type = ManagedServiceIdentityType.USER_ASSIGNED compute_cluster.identity.user_assigned_identities = [ ManagedIdentityConfiguration(resource_id=identity) ]
-
+ ml_client.compute.begin_create_or_update(compute_cluster) ```
We will be using cloud URIs provided by Event Grid to indicate the input data to
## Configure the workflow parameters
-This Logic App uses parameters to store specific pieces of information that you will need to run the batch deployment.
+This Logic App uses parameters to store specific pieces of information that you will need to run the batch deployment.
1. On the workflow designer, under the tool bar, select the option __Parameters__ and configure them as follows:
This Logic App uses parameters to store specific pieces of information that you
1. To create a parameter, use the __Add parameter__ option: :::image type="content" source="./media/how-to-use-event-grid-batch/parameter.png" alt-text="Screenshot showing how to add one parameter in designer.":::
-
+ 1. Create the following parameters. | Parameter | Description | Sample value |
This Logic App uses parameters to store specific pieces of information that you
| `client_id` | The client ID of the service principal used to invoke the endpoint. | `00000000-0000-0000-00000000` | | `client_secret` | The client secret of the service principal used to invoke the endpoint. | `ABCDEFGhijkLMNOPQRstUVwz` | | `endpoint_uri` | The endpoint scoring URI. | `https://<endpoint_name>.<region>.inference.ml.azure.com/jobs` |
-
+ > [!IMPORTANT] > `endpoint_uri` is the URI of the endpoint you are trying to execute. The endpoint must have a default deployment configured.
We want to trigger the Logic App each time a new file is created in a given fold
> __Prefix Filter__ allows Event Grid to only notify the workflow when a blob is created in the specific path we indicated. In this case, we are assuming that files will be created by some external process in the folder `<path_to_data_folder>` inside the container `<container_name>` in the selected Storage Account. Configure this parameter to match the location of your data. Otherwise, the event will be fired for any file created at any location of the Storage Account. See [Event filtering for Event Grid](../event-grid/event-filtering.md) for more details. The trigger will look as follows:
-
+ :::image type="content" source="./media/how-to-use-event-grid-batch/create-trigger.png" alt-text="Screenshot of the trigger activity of the Logic App."::: ## Configure the actions
-1. Click on __+ New step__.
+1. Click on __+ New step__.
1. On the workflow designer, under the search box, select **Built-in** and then click on __HTTP__:
We want to trigger the Logic App each time a new file is created in a given fold
| **URI** | `concat('https://login.microsoftonline.com/', parameters('tenant_id'), '/oauth2/token')` | Click on __Add dynamic context__, then __Expression__, to enter this expression. | | **Headers** | `Content-Type` with value `application/x-www-form-urlencoded` | | | **Body** | `concat('grant_type=client_credentials&client_id=', parameters('client_id'), '&client_secret=', parameters('client_secret'), '&resource=https://ml.azure.com')` | Click on __Add dynamic context__, then __Expression__, to enter this expression. |
-
+ The action will look as follows:
-
+ :::image type="content" source="./media/how-to-use-event-grid-batch/authorize.png" alt-text="Screenshot of the authorize activity of the Logic App.":::
-1. Click on __+ New step__.
+1. Click on __+ New step__.
1. On the workflow designer, under the search box, select **Built-in** and then click on __HTTP__:
We want to trigger the Logic App each time a new file is created in a given fold
| **URI** | `endpoint_uri` | Click on __Add dynamic context__, then select it under `parameters`. | | **Headers** | `Content-Type` with value `application/json` | | | **Headers** | `Authorization` with value `concat('Bearer ', body('Authorize')['access_token'])` | Click on __Add dynamic context__, then __Expression__, to enter this expression. |
-
+ 1. In the parameter __Body__, click on __Add dynamic context__, then __Expression__, to enter the following expression:
- ```fx
+ ```fx
replace('{ "properties": {
- "InputData": {
- "mnistinput": {
- "JobInputType" : "UriFile",
- "Uri" : "<JOB_INPUT_URI>"
- }
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFile",
+ "Uri" : "<JOB_INPUT_URI>"
+ }
} } }', '<JOB_INPUT_URI>', triggerBody()?[0]['data']['url']) ```
-
+    > [!TIP] > The previous payload corresponds to a **Model deployment**. If you are working with a **Pipeline component deployment**, please adapt the format according to the expectations of the pipeline's inputs. Learn more about how to structure the input in REST calls at [Create jobs and input data for batch endpoints (REST)](how-to-access-data-batch-endpoints-jobs.md?tabs=rest).
-
+ The action will look as follows:
-
+ :::image type="content" source="./media/how-to-use-event-grid-batch/invoke.png" alt-text="Screenshot of the invoke activity of the Logic App.":::
-
+ > [!NOTE] > Notice that this last action will trigger the batch job, but it will not wait for its completion. Azure Logic Apps is not designed for long-running applications. If you need to wait for the job to complete, we recommend you to switch to [Run batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
machine-learning How To Use Pipelines Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipelines-prompt-flow.md
Azure Machine Learning offers notebook tutorials for several use cases with prom
**QA Data Generation**
-[QA Data Generation](https://github.com/Azure/azureml-insiders/blob/main/previews/retrieval-augmented-generation/examples/notebooks/qa_data_generation.ipynb) can be used to get the best prompt for RAG and to evaluation metrics for RAG. This notebook shows you how to create a QA dataset from your data (Git repo).
+[QA Data Generation](https://github.com/Azure/azureml-examples/blob/main/sdk/python/generative-ai/rag/notebooks/qa_data_generation.ipynb) can be used to get the best prompt for RAG and to get evaluation metrics for RAG. This notebook shows you how to create a QA dataset from your data (Git repo).
**Test Data Generation and Auto Prompt**
-[Use vector indexes to build a retrieval augmented generation model](https://github.com/Azure/azureml-insiders/blob/main/previews/retrieval-augmented-generation/examples/notebooks/mlindex_with_testgen_autoprompt.ipynb) and to evaluate prompt flow on a test dataset.
+[Use vector indexes to build a retrieval augmented generation model](https://github.com/Azure/azureml-examples/blob/main/sdk/python/generative-ai/rag/notebooks/mlindex_with_testgen_autoprompt.ipynb) and to evaluate prompt flow on a test dataset.
**Create a FAISS based Vector Index**
-[Set up an Azure Machine Learning Pipeline](https://github.com/Azure/azureml-insiders/blob/main/previews/retrieval-augmented-generation/examples/notebooks/faiss/faiss_mlindex_with_langchain.ipynb) to pull a Git Repo, process the data into chunks, embed the chunks and create a langchain compatible FAISS Vector Index.
+[Set up an Azure Machine Learning Pipeline](https://github.com/Azure/azureml-examples/blob/main/sdk/python/generative-ai/rag/notebooks/faiss/faiss_mlindex_with_langchain.ipynb) to pull a Git Repo, process the data into chunks, embed the chunks and create a langchain compatible FAISS Vector Index.
## Next steps [How to create vector index in Azure Machine Learning prompt flow (preview)](how-to-create-vector-index.md)
-[Use Vector Stores](concept-vector-stores.md) with Azure Machine Learning (preview)
+[Use Vector Stores](concept-vector-stores.md) with Azure Machine Learning (preview)
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/azure-machine-learning-release-notes.md
__RSS feed__: Get notified when this page is updated by copying and pasting the
## 2023-02-13 ### Azure Machine Learning SDK for Python v1.49.0
- + **Breaking changes**
+ + **Breaking changes**
+ Starting with v1.49.0 and above, the following AutoML algorithms won't be supported. + Regression: FastLinearRegressor, OnlineGradientDescentRegressor + Classification: AveragedPerceptronClassifier.
__RSS feed__: Get notified when this page is updated by copying and pasting the
## 2022-12-05 ### Azure Machine Learning SDK for Python v1.48.0
- + **Breaking changes**
+ + **Breaking changes**
+ Python 3.6 support has been deprecated for Azure Machine Learning SDK packages.
-
+ + **Bug fixes and improvements** + **azureml-core** + Storage accounts created as a part of workspace creation now set blob public access to be disabled by default
__RSS feed__: Get notified when this page is updated by copying and pasting the
## 2022-09-26
-### Azure Machine Learning SDK for Python v1.46.0
+### Azure Machine Learning SDK for Python v1.46.0
  + **azureml-automl-dnn-nlp** + Customers will no longer be allowed to specify a line in CoNLL that comprises only a token. The line must always either be an empty newline or one with exactly one token followed by exactly one space followed by exactly one label. + **azureml-contrib-automl-dnn-forecasting**
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ **azureml-core** + Added deprecation warning when inference customers use CLI/SDK v1 model deployment APIs to deploy models and also when Python version is 3.6 and less. + The following values of `AZUREML_LOG_DEPRECATION_WARNING_ENABLED` change the behavior as follows:
- + Default - displays the warning when customer uses Python 3.6 and less and for cli/sdk v1.
- + `True` - displays the sdk v1 deprecation warning on azureml-sdk packages.
- + `False` - disables the sdk v1 deprecation warning on azureml-sdk packages.
- + Command to be executed to set the environment variable to disable the deprecation message:
+ + Default - displays the warning when customer uses Python 3.6 and less and for cli/sdk v1.
+ + `True` - displays the sdk v1 deprecation warning on azureml-sdk packages.
+ + `False` - disables the sdk v1 deprecation warning on azureml-sdk packages.
+ + Command to be executed to set the environment variable to disable the deprecation message:
+ Windows - `setx AZUREML_LOG_DEPRECATION_WARNING_ENABLED "False"` + Linux - `export AZUREML_LOG_DEPRECATION_WARNING_ENABLED="False"` + **azureml-interpret**
__RSS feed__: Get notified when this page is updated by copying and pasting the
## 2022-08-29
-### Azure Machine Learning SDK for Python v1.45.0
+### Azure Machine Learning SDK for Python v1.45.0
+ **azureml-automl-runtime** + Fixed a bug where the sample_weight column wasn't properly validated. + Added rolling_forecast() public method to the forecasting pipeline wrappers for all supported forecasting models. This method replaces the deprecated rolling_evaluation() method.
__RSS feed__: Get notified when this page is updated by copying and pasting the
## 2022-08-01
-### Azure Machine Learning SDK for Python v1.44.0
-
- + **azureml-automl-dnn-nlp**
+### Azure Machine Learning SDK for Python v1.44.0
+
+ + **azureml-automl-dnn-nlp**
+ Weighted accuracy and Matthews correlation coefficient (MCC) will no longer be a metric displayed on calculated metrics for NLP Multilabel classification.
- + **azureml-automl-dnn-vision**
+ + **azureml-automl-dnn-vision**
+ Raise user error when invalid annotation format is provided + **azureml-cli-common** + Updated the v1 CLI description
- + **azureml-contrib-automl-dnn-forecasting**
+ + **azureml-contrib-automl-dnn-forecasting**
+ Fixed the "Failed to calculate TCN metrics." issues caused for TCNForecaster when different time series in the validation dataset have different lengths. + Added auto timeseries ID detection for DNN forecasting models like TCNForecaster.
- + Fixed a bug with the Forecast TCN model where validation data could be corrupted in some circumstances when the user provided the validation set.
+ + Fixed a bug with the Forecast TCN model where validation data could be corrupted in some circumstances when the user provided the validation set.
+ **azureml-core** + Allow setting a timeout_seconds parameter when downloading artifacts from a Run + Warning message added - Azure Machine Learning CLI v1 is getting retired on 2025-09-. Users are recommended to adopt CLI v2.
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ **Feature deprecation** + **Deprecate Python 3.6 as a supported runtime for SDK v1 packages**
- + On December 05, 2022, Azure Machine Learning will deprecate Python 3.6 as a supported runtime, formally ending our Python 3.6 support for SDK v1 packages.
+ + On December 05, 2022, Azure Machine Learning will deprecate Python 3.6 as a supported runtime, formally ending our Python 3.6 support for SDK v1 packages.
+ From the deprecation date of December 05, 2022, Azure Machine Learning will no longer apply security patches and other updates to the Python 3.6 runtime used by Azure Machine Learning SDK v1 packages. + The existing Azure Machine Learning SDK v1 packages with Python 3.6 still continues to run. However, Azure Machine Learning strongly recommends that you migrate your scripts and dependencies to a supported Python runtime version so that you continue to receive security patches and remain eligible for technical support. + We recommend using Python 3.8 version as a runtime for Azure Machine Learning SDK v1 packages.
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ **azureml-train-automl-client** + Now OutputDatasetConfig is supported as the input of the MM/HTS pipeline builder. The mappings are: 1) OutputTabularDatasetConfig -> treated as unpartitioned tabular dataset. 2) OutputFileDatasetConfig -> treated as filed dataset. + **azureml-train-automl-runtime**
- + Added data validation that requires the number of minority class samples in the dataset to be at least as much as the number of CV folds requested.
+ + Added data validation that requires the number of minority class samples in the dataset to be at least as much as the number of CV folds requested.
  + Automatic cross-validation parameter configuration is now available for AutoML forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and AutoML provides those configurations based on your data. However, currently this feature isn't supported when TCN is enabled. + Forecasting Parameters in Many Models and Hierarchical Time Series can now be passed via object rather than using individual parameters in dictionary. + Enabled forecasting model endpoints with quantiles support to be consumed in Power BI.
This breaking change comes from the June release of `azureml-inference-server-ht
+ Fix incorrect form displayed in PBI for integration with AutoML regression models + Adding min-label-classes check for both classification tasks (multi-class and multi-label). It throws an error for the customer's run if the unique number of classes in the input training dataset is fewer than 2. It is meaningless to run classification on fewer than two classes. + **azureml-automl-runtime**
- + Converting decimal type y-test into float to allow for metrics computation to proceed without errors.
- + AutoML training now supports numpy version 1.8.
+ + Converting decimal type y-test into float to allow for metrics computation to proceed without errors.
+ + AutoML training now supports numpy version 1.8.
+ **azureml-contrib-automl-dnn-forecasting** + Fixed a bug in the TCNForecaster model where not all training data would be used when cross-validation settings were provided. + Fixed the TCNForecaster wrapper's forecast method that was corrupting inference-time predictions. Also fixed an issue where the forecast method would not use the most recent context data in train-valid scenarios.
This breaking change comes from the June release of `azureml-inference-server-ht
+ In AutoML, use lightgbm surrogate model instead of linear surrogate model for sparse case after latest lightgbm version upgrade + All internal intermediate artifacts that are produced by AutoML are now stored transparently on the parent run (instead of being sent to the default workspace blob store). Users should be able to see the artifacts that AutoML generates under the `outputs/` directory on the parent run.
-
-## 2022-01-24
-### Azure Machine Learning SDK for Python v1.38.0
+## 2022-01-24
+
+### Azure Machine Learning SDK for Python v1.38.0
+ **azureml-automl-core** + Tabnet Regressor and Tabnet Classifier support in AutoML + Saving data transformer in parent run outputs, which can be reused to produce the same featurized dataset that was used during the experiment run
This breaking change comes from the June release of `azureml-inference-server-ht
+ Update AML SDK dependencies to the latest version of Azure Resource Management Client Library for Python (azure-mgmt-resource>=15.0.0,<20.0.0) & adopt track2 SDK. + Starting in version 1.37.0, azure-ml-cli extension should be compatible with the latest version of Azure CLI >=2.30.0. + When using Azure CLI in a pipeline, such as Azure DevOps, ensure all tasks/stages are using versions of Azure CLI above v2.30.0 for MSAL-based Azure CLI. Azure CLI 2.30.0 is not backward compatible with prior versions and throws an error when using incompatible versions. To use Azure CLI credentials with Azure Machine Learning SDK, Azure CLI should be installed as a pip package.
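For example, a minimal sketch of reusing an `az login` session as the SDK credential; the subscription, resource group, and workspace names are placeholders.

```python
from azureml.core import Workspace
from azureml.core.authentication import AzureCliAuthentication

# Reuse the Azure CLI login (requires the CLI installed as a pip package
# and an `az login` session); identifiers below are placeholders.
cli_auth = AzureCliAuthentication()
ws = Workspace(subscription_id="<subscription-id>",
               resource_group="<resource-group>",
               workspace_name="<workspace-name>",
               auth=cli_auth)
print(ws.name)
```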
-
+ + **Bug fixes and improvements** + **azureml-core** + Removed instance types from the attach workflow for Kubernetes compute. Instance types can now directly be set up in the Kubernetes cluster. For more details, please visit aka.ms/amlarc/doc.
This breaking change comes from the June release of `azureml-inference-server-ht
+ Fixed a bug where the experiment "placeholder" might be created on submission of a Pipeline with an AutoMLStep. + Update AutoMLConfig test_data and test_size docs to reflect preview status. + **azureml-train-automl-runtime**
- + Added new feature that allows users to pass time series grains with one unique value.
+ + Added new feature that allows users to pass time series grains with one unique value.
+ In certain scenarios, an AutoML model can predict NaNs. The rows that correspond to these NaN predictions are removed from test datasets and predictions before computing metrics in test runs.
This breaking change comes from the June release of `azureml-inference-server-ht
+ Featurization summary is now stored as an artifact on the run (check for a file named 'featurization_summary.json' under the outputs folder) + Enable categorical indicators support for Tabnet Learner. + Add downsample parameter to automl_setup_model_explanations to allow users to get explanations on all data without downsampling by setting this parameter to be false.
-
+ ## 2021-10-11
This breaking change comes from the June release of `azureml-inference-server-ht
+ Replaced dependency on deprecated package(azureml-train) inside azureml-sdk. + Add azureml-responsibleai to azureml-sdk extras + **azureml-train-automl-client**
- + Expose the `test_data` and `test_size` parameters in `AutoMLConfig`. These parameters can be used to automatically start a test run after the model
- + training phase has been completed. The test run computes predictions using the best model and generates metrics given these predictions.
+ + Expose the `test_data` and `test_size` parameters in `AutoMLConfig`. These parameters can be used to automatically start a test run after the model training phase has been completed. The test run computes predictions using the best model and generates metrics given these predictions.
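A minimal sketch of these preview parameters; `training_dataset` is a placeholder for an existing TabularDataset.

```python
from azureml.train.automl import AutoMLConfig

# Hold out 20% of the data so AutoML starts a test run automatically after
# training (preview); training_dataset and column names are placeholders.
automl_config = AutoMLConfig(
    task="classification",
    training_data=training_dataset,
    label_column_name="target",
    primary_metric="AUC_weighted",
    test_size=0.2,        # alternatively, pass test_data=<TabularDataset>
)
```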
## 2021-08-24
This breaking change comes from the June release of `azureml-inference-server-ht
+ Run Delete is a new functionality that allows users to delete one or multiple runs from their workspace. + This functionality can help users reduce storage costs and manage storage capacity by regularly deleting runs and experiments from the UI directly. + **Batch Cancel Run**
- + Batch Cancel Run is new functionality that allows users to select one or multiple runs to cancel from their run list.
+ + Batch Cancel Run is new functionality that allows users to select one or multiple runs to cancel from their run list.
+ This functionality can help users cancel multiple queued runs and free up space on their cluster. ## 2021-08-18 ### Azure Machine Learning Experimentation User Interface + **Run Display Name**
- + The Run Display Name is a new, editable and optional display name that can be assigned to a run.
- + This name can help with more effectively tracking, organizing and discovering the runs.
- + The Run Display Name is defaulted to an adjective_noun_guid format (Example: awesome_watch_2i3uns).
- + This default name can be edited to a more customizable name. This can be edited from the Run details page in the Azure Machine Learning studio user interface.
+ + The Run Display Name is a new, editable and optional display name that can be assigned to a run.
+ + This name can help with more effectively tracking, organizing and discovering the runs.
+ + The Run Display Name is defaulted to an adjective_noun_guid format (Example: awesome_watch_2i3uns).
+ + This default name can be edited to a more customizable name. This can be edited from the Run details page in the Azure Machine Learning studio user interface.
## 2021-08-02
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ Deprecated Environment attributes under the DockerSection - "enabled", "shared_volume" and "arguments" are a part of DockerConfiguration in RunConfiguration now. + Updated Pipeline CLI clone documentation + Updated portal URIs to include tenant for authentication
- + Removed experiment name from run URIs to avoid redirects
+ + Removed experiment name from run URIs to avoid redirects
+ Updated experiment URI to use experiment ID. + Bug fixes for attaching remote compute with Azure Machine Learning CLI. + Updated portal URIs to include tenant for authentication.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ [Experimental feature] Add `partition_keys` parameter to ParallelRunConfig, if specified, the input dataset(s) would be partitioned into mini-batches by the keys specified by it. It requires all input datasets to be partitioned dataset. + **azureml-pipeline-steps** + Bugfix - supporting path_on_compute while passing dataset configuration as download.
- + Deprecate RScriptStep in favor of using CommandStep for running R scripts in pipelines.
+ + Deprecate RScriptStep in favor of using CommandStep for running R scripts in pipelines.
+ Deprecate EstimatorStep in favor of using CommandStep for running ML training (including distributed training) in pipelines. + **azureml-sdk** + Update python_requires to < 3.9 for azureml-sdk
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ Bugfix - supporting path_on_compute while passing dataset configuration as download. + **azureml-pipeline-steps** + Bugfix - supporting path_on_compute while passing dataset configuration as download.
- + Deprecate RScriptStep in favor of using CommandStep for running R scripts in pipelines.
+ + Deprecate RScriptStep in favor of using CommandStep for running R scripts in pipelines.
+ Deprecate EstimatorStep in favor of using CommandStep for running ML training (including distributed training) in pipelines. + **azureml-train-automl-runtime** + Changed console output when submitting an AutoML run to show a portal link to the run.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
## 2021-03-31 ### Azure Machine Learning studio Notebooks Experience (March Update) + **New features**
- + Render CSV/TSV. Users are able to render and TSV/CSV file in a grid format for easier data analysis.
- + SSO Authentication for Compute Instance. Users can now easily authenticate any new compute instances directly in the Notebook UI, making it easier to authenticate and use Azure SDKs directly in Azure Machine Learning.
+ + Render CSV/TSV. Users are able to render any TSV/CSV file in a grid format for easier data analysis.
+ + SSO Authentication for Compute Instance. Users can now easily authenticate any new compute instances directly in the Notebook UI, making it easier to authenticate and use Azure SDKs directly in Azure Machine Learning.
+ Compute Instance Metrics. Users are able to view compute metrics like CPU usage and memory via terminal. + File Details. Users can now see file details including the last modified time, and file size by clicking the three dots beside a file.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ When show_output is set to True while deploying models, the inference configuration and deployment configuration are replayed before sending the request to the server. + **azureml-core** + Added functionality to filter Tabular Datasets by column values and File Datasets by metadata.
- + Previously, it was possibly for users to create provisioning configurations for ComputeTarget's that didn't satisfy the password strength requirements for the `admin_user_password` field (that is, that they must contain at least 3 of the following: One lowercase letter, one uppercase letter, one digit, and one special character from the following set: ``\`~!@#$%^&*()=+_[]{}|;:./'",<>?``). If the user created a configuration with a weak password and ran a job using that configuration, the job would fail at runtime. Now, the call to `AmlCompute.provisioning_configuration` throws a `ComputeTargetException` with an accompanying error message explaining the password strength requirements.
+ + Previously, it was possible for users to create provisioning configurations for ComputeTargets that didn't satisfy the password strength requirements for the `admin_user_password` field (that is, that they must contain at least 3 of the following: One lowercase letter, one uppercase letter, one digit, and one special character from the following set: ``\`~!@#$%^&*()=+_[]{}|;:./'",<>?``). If the user created a configuration with a weak password and ran a job using that configuration, the job would fail at runtime. Now, the call to `AmlCompute.provisioning_configuration` throws a `ComputeTargetException` with an accompanying error message explaining the password strength requirements.
+ Additionally, it was also possible in some cases to specify a configuration with a negative number of maximum nodes. It's no longer possible to do this. Now, `AmlCompute.provisioning_configuration` throws a `ComputeTargetException` if the `max_nodes` argument is a negative integer. + When show_output is set to True while deploying models, the inference configuration and deployment configuration are displayed. + When show_output is set to True while waiting for the completion of model deployment, the progress of the deployment operation is displayed.
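A sketch of the configuration-time validation described above; the VM size, credentials, and node counts are placeholders.

```python
from azureml.core.compute import AmlCompute
from azureml.exceptions import ComputeTargetException

# A weak admin_user_password or a negative max_nodes now fails here, at
# configuration time, instead of at job runtime; all values are placeholders.
try:
    config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_DS3_V2",
        min_nodes=0,
        max_nodes=4,                              # must not be negative
        admin_username="azureuser",
        admin_user_password="Str0ng!Passw0rd",    # must meet complexity rules
    )
except ComputeTargetException as err:
    print(f"Invalid provisioning configuration: {err}")
```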
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
### Azure Machine Learning studio Notebooks Experience (February Update) + **New features** + [Native Terminal (GA)](../how-to-access-terminal.md). Users now have access to an integrated terminal and Git operation via the integrated terminal.
- + Notebook Snippets (preview). Common Azure Machine Learning code excerpts are now available at your fingertips. Navigate to the code snippets panel, accessible via the toolbar, or activate the in-code snippets menu using Ctrl + Space.
- + [Keyboard Shortcuts](../how-to-run-jupyter-notebooks.md#useful-keyboard-shortcuts). Full parity with keyboard shortcuts available in Jupyter.
+ + Notebook Snippets (preview). Common Azure Machine Learning code excerpts are now available at your fingertips. Navigate to the code snippets panel, accessible via the toolbar, or activate the in-code snippets menu using Ctrl + Space.
+ + [Keyboard Shortcuts](../how-to-run-jupyter-notebooks.md#useful-keyboard-shortcuts). Full parity with keyboard shortcuts available in Jupyter.
+ Indicate Cell parameters. Shows users which cells in a notebook are parameter cells and can run parameterized notebooks via [Papermill](https://github.com/nteract/papermill) on the Compute Instance. + Terminal and Kernel session + Sharing Button. Users can now share any file in the Notebook file explorer by right-clicking the file and using the share button.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ **Bug fixes and improvements** + Improved page load times
- + Improved performance
+ + Improved performance
+ Improved speed and kernel reliability + Added spinning wheel to show progress for all ongoing [Compute Instance operations](../how-to-run-jupyter-notebooks.md#status-indicators). + Right click in File Explorer. Right-clicking any file now opens file operations.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ **azureml-pipeline-steps** + [CommandStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.commandstep) now GA and no longer experimental. + [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig): add argument allowed_failed_count and allowed_failed_percent to check error threshold on mini batch level. Error threshold has three flavors now:
- + error_threshold - the number of allowed failed mini batch items;
- + allowed_failed_count - the number of allowed failed mini batches;
- + allowed_failed_percent- the percent of allowed failed mini batches.
-
+ + error_threshold - the number of allowed failed mini batch items;
+ + allowed_failed_count - the number of allowed failed mini batches;
+ + allowed_failed_percent - the percent of allowed failed mini batches.
+ A job stops if it exceeds any of them. error_threshold is required to keep backward compatibility. Set the value to -1 to ignore it. + Fixed whitespace handling in AutoMLStep name. + ScriptRunConfig is now supported by HyperDriveStep
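A sketch of the three error-threshold flavors listed above; the environment, compute target, and script names are placeholders.

```python
from azureml.pipeline.steps import ParallelRunConfig

# batch_env and compute_target are assumed to be an existing Environment and
# AmlCompute target; scripts/batch_score.py is a placeholder entry script.
parallel_run_config = ParallelRunConfig(
    source_directory="scripts",
    entry_script="batch_score.py",
    mini_batch_size="5",
    error_threshold=-1,          # failed mini-batch items; -1 ignores it
    allowed_failed_count=10,     # failed mini-batches
    allowed_failed_percent=5,    # percent of failed mini-batches
    output_action="append_row",
    environment=batch_env,
    compute_target=compute_target,
    node_count=2,
)
```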
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ **Bug fixes and improvements** + Improved page load times
- + Improved performance
+ + Improved performance
+ Improved speed and kernel reliability
-
+ ## 2021-01-25
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ `run.get_details()` has an extra field named "submittedBy", which displays the author's name for this run. + Edited Model.register method documentation to mention how to register model from run directly + Fixed IOT-Server connection status change handling issue.
-
+ ## 2020-12-31 ### Azure Machine Learning studio Notebooks Experience (December Update) + **New features** + User Filename search. Users are now able to search all the files saved in a workspace. + Markdown Side by Side support per Notebook Cell. In a notebook cell, users can now have the option to view rendered markdown and markdown syntax side-by-side.
- + Cell Status Bar. The status bar indicates what state a code cell is in, whether a cell run was successful, and how long it took to run.
-
+ + Cell Status Bar. The status bar indicates what state a code cell is in, whether a cell run was successful, and how long it took to run.
+ + **Bug fixes and improvements** + Improved page load times
- + Improved performance
+ + Improved performance
+ Improved speed and kernel reliability
-
+ ## 2020-12-07 ### Azure Machine Learning SDK for Python v1.19.0
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ **azureml-train-core** + HyperDriveRun.get_children_sorted_by_primary_metric() should complete faster now + Improved error handling in HyperDrive SDK.
- + Deprecated all estimator classes in favor of using ScriptRunConfig to configure experiment runs. Deprecated classes include:
- + MMLBase
- + Estimator
- + PyTorch
- + TensorFlow
- + Chainer
- + SKLearn
+ + Deprecated all estimator classes in favor of using ScriptRunConfig to configure experiment runs. Deprecated classes include:
+ + MMLBase
+ + Estimator
+ + PyTorch
+ + TensorFlow
+ + Chainer
+ + SKLearn
+ Deprecated the use of Nccl and Gloo as valid input types for Estimator classes in favor of using PyTorchConfiguration with ScriptRunConfig. + Deprecated the use of Mpi as a valid input type for Estimator classes in favor of using MpiConfiguration with ScriptRunConfig. + Adding command property to run configuration. The feature enables users to run an actual command or executables on the compute through Azure Machine Learning SDK. + Deprecated all estimator classes in favor of using ScriptRunConfig to configure experiment runs. Deprecated classes include: + MMLBaseEstimator + Estimator + PyTorch + TensorFlow + Chainer + SKLearn
- + Deprecated the use of Nccl and Gloo as a valid type of input for Estimator classes in favor of using PyTorchConfiguration with ScriptRunConfig.
+ + Deprecated the use of Nccl and Gloo as a valid type of input for Estimator classes in favor of using PyTorchConfiguration with ScriptRunConfig.
+ Deprecated the use of Mpi as a valid type of input for Estimator classes in favor of using MpiConfiguration with ScriptRunConfig. ## 2020-11-30 ### Azure Machine Learning studio Notebooks Experience (November Update) + **New features** + Native Terminal. Users now have access to an integrated terminal and Git operation via the [integrated terminal.](../how-to-access-terminal.md)
- + Duplicate Folder
- + Costing for Compute Drop Down
- + Offline Compute Pylance
+ + Duplicate Folder
+ + Costing for Compute Drop Down
+ + Offline Compute Pylance
+ **Bug fixes and improvements** + Improved page load times
- + Improved performance
+ + Improved performance
+ Improved speed and kernel reliability + Large File Upload. You can now upload files >95 MB
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ Creating an experiment returns the active or last archived experiment with that same given name if such an experiment exists; otherwise, it creates a new experiment. + Calling get_experiment by name returns the active or last archived experiment with that given name. + Users can't rename an experiment while reactivating it.
- + Improved error message to include potential fixes when a dataset is incorrectly passed to an experiment (for example, ScriptRunConfig).
+ + Improved error message to include potential fixes when a dataset is incorrectly passed to an experiment (for example, ScriptRunConfig).
+ Improved documentation for `OutputDatasetConfig.register_on_complete` to include the behavior of what happens when the name already exists. + Specifying dataset input and output names that have the potential to collide with common environment variables now results in a warning + Repurposed `grant_workspace_access` parameter when registering datastores. Set it to `True` to access data behind a virtual network from Machine Learning studio.
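A sketch of registering a datastore behind a virtual network with the repurposed flag; account and container names are placeholders.

```python
from azureml.core import Datastore, Workspace

# Storage account, container, and key below are placeholders.
ws = Workspace.from_config()
Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="vnet_blob_store",
    container_name="data",
    account_name="examplestorageaccount",
    account_key="<storage-account-key>",
    grant_workspace_access=True,   # lets the studio reach data behind a VNet
)
```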
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Pin major versions of direct dependencies of azureml-core + AKSWebservice and AKSEndpoints now support pod-level CPU and Memory resource limits. More information on [Kubernetes Resources and Limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) + Updated run.log_table to allow individual rows to be logged.
- + Added static method `Run.get(workspace, run_id)` to retrieve a run only using a workspace
+ + Added static method `Run.get(workspace, run_id)` to retrieve a run only using a workspace
+ Added instance method `Workspace.get_run(run_id)` to retrieve a run within the workspace + Introducing command property in run configuration, which enables users to submit command instead of script & arguments. + **azureml-interpret**
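A quick sketch of the two run-retrieval helpers noted above; the run ID is a placeholder.

```python
from azureml.core import Run, Workspace

ws = Workspace.from_config()

run = Run.get(ws, "example_run_id")       # static lookup by workspace + ID
same_run = ws.get_run("example_run_id")   # instance method on the workspace
print(run.get_status(), same_run.id)
```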
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-dataprep** + Enable execute permission on files for Dataset mount. + **azureml-mlflow**
- + Updated Azure Machine Learning MLflow documentation and notebook samples
+ + Updated Azure Machine Learning MLflow documentation and notebook samples
+ New support for MLflow projects with Azure Machine Learning backend + MLflow model registry support
- + Added Azure RBAC support for AzureML-MLflow operations
-
+ + Added Azure RBAC support for AzureML-MLflow operations
+ + **azureml-pipeline-core** + Improved the documentation of the PipelineOutputFileDataset.parse_* methods. + New Kusto Step and Kusto Compute Target.
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Added error handling in get_output for cases when local versions of pandas/sklearn don't match the ones used during training + **azureml-train-core** + Update description of the package for pypi overview page.
-
+ ## 2020-08-31 ### Azure Machine Learning SDK for Python v1.13.0 + **Preview features** + **azureml-core** With the new output datasets capability, you can write back to cloud storage including Blob, ADLS Gen 1, ADLS Gen 2, and FileShare. You can configure where to output data, how to output data (via mount or upload), whether to register the output data for future reuse and sharing and pass intermediate data between pipeline steps seamlessly. This enables reproducibility, sharing, prevents duplication of data, and results in cost efficiency and productivity gains. [Learn how to use it](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig)
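A minimal sketch of the output datasets capability described above, assuming the default blob datastore; the path and dataset name are placeholders.

```python
from azureml.core import Workspace
from azureml.data import OutputFileDatasetConfig

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Write step output back to blob storage via upload and register it for reuse;
# the destination path and registered name are placeholders.
output = (
    OutputFileDatasetConfig(destination=(datastore, "outputs/{run-id}"))
    .as_upload(overwrite=True)
    .register_on_complete(name="prepared_data")
)
# `output` can then be passed to a pipeline step as one of its outputs.
```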
-
+ + **Bug fixes and improvements** + **azureml-automl-core** + Added validated_{platform}_requirements.txt file for pinning all pip dependencies for AutoML.
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Fixed the bug where submitting a child run with Dataset fails due to `TypeError: can't pickle _thread.RLock objects`. + Adding page_count default/documentation for Model list(). + Modify CLI&SDK to take adbworkspace parameter and Add workspace adb lin/unlink runner.
- + Fix bug in Dataset.update that caused newest Dataset version to be updated not the version of the Dataset update was called on.
+ + Fix bug in Dataset.update that caused the newest Dataset version to be updated instead of the version that update was called on.
+ Fix bug in Dataset.get_by_name that would show the tags for the newest Dataset version even when a specific older version was retrieved. + **azureml-interpret** + Added probability outputs to shap scoring explainers in azureml-interpret based on shap_values_output parameter from original explainer.
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
### Azure Machine Learning studio Notebooks Experience (August Update) + **New features**
- + New Getting started landing Page
-
+ + New Getting started landing Page
+ + **Preview features** + Gather feature in Notebooks. With the [Gather](../how-to-run-jupyter-notebooks.md#clean-your-notebook-preview) feature, users can now easily clean up notebooks. Gather uses an automated dependency analysis of your notebook, ensuring the essential code is kept but removing any irrelevant pieces.
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Multi-line R cells can now run + "I trust contents of this file" is now auto checked after first time + Improved Conflict resolution dialog, with new "Make a copy" option
-
+ ## 2020-08-17 ### Azure Machine Learning SDK for Python v1.12.0
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **Bug fixes and improvements** + **azure-cli-ml** + Fix model framework and model framework not passed in run object in CLI model registration path
- + Fix CLI amlcompute identity show command to show tenant ID and principal ID
+ + Fix CLI amlcompute identity show command to show tenant ID and principal ID
+ **azureml-train-automl-client** + Added get_best_child () to AutoMLRun for fetching the best child run for an AutoML Run without downloading the associated model. + Added ModelProxy object that allows predict or forecast to be run on a remote training environment without downloading the model locally.
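A sketch of fetching the best child run without downloading the model; experiment and run IDs are placeholders (the ModelProxy object mentioned above is omitted here).

```python
from azureml.core import Experiment, Workspace
from azureml.train.automl.run import AutoMLRun

ws = Workspace.from_config()
experiment = Experiment(ws, "automl-example")           # placeholder name
automl_run = AutoMLRun(experiment, run_id="AutoML_example_run_id")

best_child = automl_run.get_best_child()   # no local model download needed
print(best_child.id)
```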
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-datadrift** + Update matplotlib version from 3.0.2 to 3.2.1 to support Python 3.8. + **azureml-dataprep**
- + Added support of web url data sources with `Range` or `Head` request.
+ + Added support of web url data sources with `Range` or `Head` request.
+ Improved stability for file dataset mount and download. + **azureml-train-automl-client** + Fixed issues related to removal of `RequirementParseError` from setuptools.
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Renamed input parameter to "allowed_models" to remove a sensitive term. + Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
-
+ ## 2020-07-06 ### Azure Machine Learning SDK for Python v1.9.0
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-widgets** + Doc updates to azureml-widgets.
-
+ ## 2020-06-22 ### Azure Machine Learning SDK for Python v1.8.0
-
+ + **Preview features** + **azureml-contrib-fairness**
- The `azureml-contrib-fairness` package provides integration between the open-source fairness assessment and unfairness mitigation package [Fairlearn](https://fairlearn.github.io) and Azure Machine Learning studio. In particular, the package enables model fairness evaluation dashboards to be uploaded as part of an Azure Machine Learning Run and appear in Azure Machine Learning studio
+ The `azureml-contrib-fairness` package provides integration between the open-source fairness assessment and unfairness mitigation package [Fairlearn](https://fairlearn.github.io) and Azure Machine Learning studio. In particular, the package enables model fairness evaluation dashboards to be uploaded as part of an Azure Machine Learning Run and appear in Azure Machine Learning studio
+ **Bug fixes and improvements** + **azure-cli-ml**
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Users are now able to enable stack ensemble iteration for Time series tasks with a warning that it could potentially overfit. + Added a new type of user exception that is raised if the cache store contents have been tampered with + **azureml-automl-runtime**
- + Class Balancing Sweeping is no longer enabled if user disables featurization.
+ + Class Balancing Sweeping is no longer enabled if user disables featurization.
+ **azureml-contrib-notebook** + Doc improvements to azureml-contrib-notebook package. + **azureml-contrib-pipeline-steps**
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Changed AutoML run behavior to raise UserErrorException if service throws user error + AutoML runs are now marked as child run of Parallel Run Step.
-
+ ## 2020-06-08 ### Azure Machine Learning SDK for Python v1.7.0
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-train-core** + Supporting TensorFlow version 2.1 in the PyTorch Estimator + Improvements to azureml-train-core package.
-
+ ## 2020-05-26 ### Azure Machine Learning SDK for Python v1.6.0
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **New features** + **azureml-automl-runtime** + AutoML Forecasting now supports customers forecast beyond the prespecified max-horizon without retraining the model. When the forecast destination is farther into the future than the specified maximum horizon, the forecast() function still makes point predictions out to the later date using a recursive operation mode. For the illustration of the new feature, see the "Forecasting farther than the maximum horizon" section of "forecasting-forecast-function" notebook in [folder](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning)."
-
+ + **azureml-pipeline-steps** + ParallelRunStep is now released and is part of **azureml-pipeline-steps** package. Existing ParallelRunStep in **azureml-contrib-pipeline-steps** package is deprecated. Changes from public preview version: + Added `run_max_try` optional configurable parameter to control max call to run method for any given batch, default value is 3.
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ run_max_try + Default value for process_count_per_node is changed to 1. Users should tune this value for better performance. Best practice is to set it to the number of GPUs or CPUs the node has. + ParallelRunStep does not inject any packages; users need to include the **azureml-core** and **azureml-dataprep[pandas, fuse]** packages in the environment definition. If a custom Docker image is used with user_managed_dependencies, then the user needs to install conda on the image.
-
+ + **Breaking changes** + **azureml-pipeline-steps** + Deprecated the use of azureml.dprep.Dataflow as a valid type of input for AutoMLConfig
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Added a new set of HyperDrive specific exceptions. azureml.train.hyperdrive now throws detailed exceptions. + **azureml-widgets** + Azure Machine Learning Widgets is not displaying in JupyterLab
-
+ ## 2020-05-11
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Supporting PyTorch version 1.5 in the PyTorch Estimator + Fix the issue that framework image can't be fetched in Azure Government region when using training framework estimators
-
+ ## 2020-05-04 **New Notebook Experience**
To get started, visit the [Run Jupyter Notebooks in your workspace](../how-to-ru
**New Features Introduced:**
-+ Improved editor (Monaco editor) used by Visual Studio Code
++ Improved editor (Monaco editor) used by Visual Studio Code + UI/UX improvements + Cell Toolbar + New Notebook Toolbar and Compute Controls
-+ Notebook Status Bar
++ Notebook Status Bar + Inline Kernel Switching + R Support + Accessibility and Localization improvements
To get started, visit the [Run Jupyter Notebooks in your workspace](../how-to-ru
+ Improved performance and reliability
Access the following web-based authoring tools from the studio:
-
-| Web-based tool | Description |
+
+| Web-based tool | Description |
|---|---|
-| Azure Machine Learning Studio Notebooks | First in-class authoring for notebook files and support all operation available in the Azure Machine Learning Python SDK. |
+| Azure Machine Learning Studio Notebooks | First-in-class authoring for notebook files and supports all operations available in the Azure Machine Learning Python SDK. |
## 2020-04-27
Access the following web-based authoring tools from the studio:
+ **New features** + AmlCompute clusters now support setting up a managed identity on the cluster at the time of provisioning. Just specify whether you would like to use a system-assigned identity or a user-assigned identity, and pass an identityId for the latter. You can then set up permissions to access various resources like Storage or ACR in a way that the identity of the compute gets used to securely access the data, instead of a token-based approach that AmlCompute employs today. Check out our SDK reference for more information on the parameters.
-
+ + **Breaking changes**
- + AmlCompute clusters supported a Preview feature around run-based creation, that we are planning on deprecating in two weeks. You can continue to create persistent compute targets as always by using the Amlcompute class, but the specific approach of specifying the identifier "amlcompute" as the compute target in run config will not be supported soon.
+ + AmlCompute clusters supported a Preview feature around run-based creation, that we are planning on deprecating in two weeks. You can continue to create persistent compute targets as always by using the Amlcompute class, but the specific approach of specifying the identifier "amlcompute" as the compute target in run config will not be supported soon.
+ **Bug fixes and improvements** + **azureml-automl-runtime**
Access the following web-based authoring tools from the studio:
+ **azureml-contrib-pipeline-steps** + ParallelRunStep now supports dataset as pipeline parameter. User can construct pipeline with sample dataset and can change input dataset of the same type (file or tabular) for new pipeline run.
-
+ ## 2020-04-13 ### Azure Machine Learning SDK for Python v1.3.0
Access the following web-based authoring tools from the studio:
+ Added user_managed flag in RSection that indicates whether the environment is managed by the user or by Azure Machine Learning. + Dataset: Fixed dataset download failure if the data path contains unicode characters. + Dataset: Improved dataset mount caching mechanism to respect the minimum disk space requirement in Azure Machine Learning Compute, which avoids making the node unusable and causing the job to be canceled.
- + Dataset: We add an index for the time series column when you access a time series dataset as a pandas dataframe, which is used to speed up access to time series-based data access. Previously, the index was given the same name as the timestamp column, confusing users about which is the actual timestamp column and which is the index. We now don't give any specific name to the index since it should not be used as a column.
+ + Dataset: We add an index for the time series column when you access a time series dataset as a pandas dataframe, which is used to speed up access to time series-based data access. Previously, the index was given the same name as the timestamp column, confusing users about which is the actual timestamp column and which is the index. We now don't give any specific name to the index since it should not be used as a column.
+ Dataset: Fixed dataset authentication issue in sovereign cloud. + Dataset: Fixed `Dataset.to_spark_dataframe` failure for datasets created from Azure PostgreSQL datastores. + **azureml-interpret**
Access the following web-based authoring tools from the studio:
+ added sparse AutoML end to end support + **azureml-opendatasets** + Added another telemetry for service monitor.
- + Enable front door for blob to increase stability
+ + Enable front door for blob to increase stability
## 2020-03-23
Access the following web-based authoring tools from the studio:
+ **azure-cli-ml** + Adds "--subscription-id" to `az ml model/computetarget/service` commands in the CLI + Adding support for passing customer-managed key(CMK) vault_url, key_name and key_version for ACI deployment
- + **azureml-automl-core**
+ + **azureml-automl-core**
+ Enabled customized imputation with constant value for both X and y data forecasting tasks.
- + Fixed the issue in with showing error messages to user.
+ + Fixed an issue with showing error messages to the user.
+ **azureml-automl-runtime** + Fixed an issue with forecasting on data sets containing grains with only one row + Decreased the amount of memory required by the forecasting tasks.
Access the following web-based authoring tools from the studio:
+ **Breaking changes** + **Semantic Versioning 2.0.0**
- + Starting with version 1.1 Azure Machine Learning Python SDK adopts [Semantic Versioning 2.0.0](https://semver.org/). All subsequent versions follow new numbering scheme and semantic versioning contract.
+ + Starting with version 1.1 Azure Machine Learning Python SDK adopts [Semantic Versioning 2.0.0](https://semver.org/). All subsequent versions follow new numbering scheme and semantic versioning contract.
+ **Bug fixes and improvements** + **azure-cli-ml**
Access the following web-based authoring tools from the studio:
+ Moved the `AutoMLStep` in the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`. + **azureml-train-core** + Supporting PyTorch version 1.4 in the PyTorch Estimator
-
+ ## 2020-03-02 ### Azure Machine Learning SDK for Python v1.1.2rc0 (Pre-release)
Access the following web-based authoring tools from the studio:
+ Moved the `AutoMLStep` in the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`. + **azureml-train-core** + Supporting PyTorch version 1.4 in the PyTorch Estimator
-
+ ## 2020-02-04 ### Azure Machine Learning SDK for Python v1.1.0rc0 (Pre-release) + **Breaking changes** + **Semantic Versioning 2.0.0**
- + Starting with version 1.1 Azure Machine Learning Python SDK adopts [Semantic Versioning 2.0.0](https://semver.org/). All subsequent versions follow new numbering scheme and semantic versioning contract.
-
+ + Starting with version 1.1 Azure Machine Learning Python SDK adopts [Semantic Versioning 2.0.0](https://semver.org/). All subsequent versions follow new numbering scheme and semantic versioning contract.
+ + **Bug fixes and improvements** + **azureml-automl-runtime** + Increased speed of featurization.
Access the following web-based authoring tools from the studio:
+ Added documentation example for dataset as PythonScriptStep input + **azureml-contrib-pipeline-steps** + Parameters passed in ParallelRunConfig can be overwritten by passing pipeline parameters now. Newly supported pipeline parameters are aml_mini_batch_size, aml_error_threshold, aml_logging_level, and aml_run_invocation_timeout (aml_node_count and aml_process_count_per_node are already part of an earlier release).
-
+ ## 2020-01-21 ### Azure Machine Learning SDK for Python v1.0.85
Access the following web-based authoring tools from the studio:
+ **New features** + **azureml-core** + Get the current core usage and quota limitation for AmlCompute resources in a given workspace and subscription
-
+ + **azureml-contrib-pipeline-steps** + Enable user to pass tabular dataset as intermediate result from previous step to parallelrunstep + **Bug fixes and improvements** + **azureml-automl-runtime**
- + Removed the requirement of y_query column in the request to the deployed forecasting service.
+ + Removed the requirement of y_query column in the request to the deployed forecasting service.
+ The 'y_query' was removed from the Dominick's Orange Juice notebook service request section. + Fixed the bug preventing forecasting on the deployed models, operating on data sets with date time columns. + Added Matthews Correlation Coefficient as a classification metric, for both binary and multiclass classification.
Access the following web-based authoring tools from the studio:
+ Changed LocalWebservice.wait_for_deployment() to check the status of the local Docker container before trying to ping its health endpoint, greatly reducing the amount of time it takes to report a failed deployment. + Fixed the initialization of an internal property used in LocalWebservice.reload() when the service object is created from an existing deployment using the LocalWebservice() constructor. + Edited error message for clarification.
- + Added a new method called get_access_token() to AksWebservice that will return AksServiceAccessToken object, which contains access token, refresh after timestamp, expiry on timestamp and token type.
+ + Added a new method called get_access_token() to AksWebservice that will return AksServiceAccessToken object, which contains access token, refresh after timestamp, expiry on timestamp and token type.
+ Deprecated existing get_token() method in AksWebservice as the new method returns all of the information this method returns. + Modified output of az ml service get-access-token command. Renamed token to accessToken and refreshBy to refreshAfter. Added expiryOn and tokenType properties. + Fixed get_active_runs
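A sketch of the new token helper; the service name is a placeholder, and the printed object is assumed to carry the token details listed above.

```python
from azureml.core import Workspace
from azureml.core.webservice import AksWebservice

ws = Workspace.from_config()
service = AksWebservice(ws, "example-aks-service")   # placeholder name

# Replaces the deprecated get_token(); the returned object carries the access
# token plus refresh-after, expiry, and token-type details per the note above.
token_info = service.get_access_token()
print(token_info)
```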
Access the following web-based authoring tools from the studio:
+ Fixed bug in `datastore.upload_files` where a relative path that didn't start with `./` couldn't be used. + Added deprecation messages for all Image class code paths + Fixed Model Management URL construction for Azure China 21Vianet region.
- + Fixed issue where models using source_dir couldn't be packaged for Azure Functions.
+ + Fixed issue where models using source_dir couldn't be packaged for Azure Functions.
+ Added an option to [Environment.build_local()](/python/api/azureml-core/azureml.core.environment.environment) to push an image into Azure Machine Learning workspace container registry + Updated the SDK to use new token library on Azure synapse in a back compatible manner. + **azureml-interpret**
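A sketch of the `Environment.build_local()` option noted above; the environment name, conda file, and the two keyword arguments are assumptions and may differ by SDK version.

```python
from azureml.core import Environment, Workspace

ws = Workspace.from_config()
env = Environment.from_conda_specification(name="example-env",
                                           file_path="environment.yml")

# Build the image locally with Docker and push it to the workspace container
# registry; the keyword arguments shown here are assumptions.
env.build_local(ws, useDocker=True, pushImageToWorkspaceAcr=True)
```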
From the studio, you can train, test, deploy, and manage Azure Machine Learning
Access the following web-based authoring tools from the studio:
-| Web-based tool | Description |
+| Web-based tool | Description |
|-|-|
-| Notebook VM(preview) | Fully managed cloud-based workstation |
-| [Automated machine learning](../tutorial-first-experiment-automated-ml.md) (preview) | No code experience for automating machine learning model development |
-| [Designer](concept-designer.md) | Drag-and-drop machine learning modeling tool formerly known as the visual interface |
+| Notebook VM(preview) | Fully managed cloud-based workstation |
+| [Automated machine learning](../tutorial-first-experiment-automated-ml.md) (preview) | No code experience for automating machine learning model development |
+| [Designer](concept-designer.md) | Drag-and-drop machine learning modeling tool formerly known as the visual interface |
### Azure Machine Learning designer enhancements
-+ Formerly known as the visual interface
+ Formerly known as the visual interface + 11 new [modules](../component-reference/component-reference.md) including recommenders, classifiers, and training utilities such as feature engineering, cross-validation, and data transformation.
-### R SDK
-
+### R SDK
+ Data scientists and AI developers use the [Azure Machine Learning SDK for R](https://github.com/Azure/azureml-sdk-for-r) to build and run machine learning workflows with Azure Machine Learning. The Azure Machine Learning SDK for R uses the `reticulate` package to bind to the Python SDK. By binding directly to Python, the SDK for R allows you access to core objects and methods implemented in the Python SDK from any R environment you choose.
Main capabilities of the SDK include:
See the [package website](https://azure.github.io/azureml-sdk-for-r) for complete documentation.
-### Azure Machine Learning integration with Event Grid
+### Azure Machine Learning integration with Event Grid
Azure Machine Learning is now a resource provider for Event Grid; you can configure machine learning events through the Azure portal or Azure CLI. Users can create events for run completion, model registration, model deployment, and data drift detection. These events can be routed to event handlers supported by Event Grid for consumption. See machine learning event [schema](../../event-grid/event-schema-machine-learning.md) and [tutorial](../how-to-use-event-grid.md) articles for more details.
The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ **New features** + Introduced the `timeseries` trait on TabularDataset. This trait enables easy timestamp filtering on data in a TabularDataset, such as taking all data between a range of time or the most recent data (see the sketch after this list). See https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb for an example notebook.
- + Enabled training with TabularDataset and FileDataset.
+ + Enabled training with TabularDataset and FileDataset.
+ **azureml-train-core** + Added `Nccl` and `Gloo` support in PyTorch estimator
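A sketch of the `timeseries` trait mentioned in this list; the datastore path and column name are placeholders, and the timestamp parameter name can differ across SDK versions.

```python
from datetime import timedelta
from azureml.core import Dataset, Workspace

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Path glob and column name are placeholders; older SDK versions name the
# timestamp parameter fine_grain_timestamp instead of timestamp.
ds = Dataset.Tabular.from_delimited_files((datastore, "weather/*.csv"))
ts_ds = ds.with_timestamp_columns(timestamp="datetime")

recent = ts_ds.time_recent(timedelta(days=2))   # keep the last two days only
df = recent.to_pandas_dataframe()
```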
At the time, of this release, the following browsers are supported: Chrome, Fire
### Azure Machine Learning SDK for Python v1.0.60 + **New features**
- + Introduced FileDataset, which references single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute.
+ + Introduced FileDataset, which references single or multiple files in your datastores or public URLs. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute (see the sketch after this list).
+ Added Pipeline Yaml Support for PythonScript Step, Adla Step, Databricks Step, DataTransferStep, and AzureBatch Step + **Bug fixes and improvements**
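A sketch of FileDataset, as referenced above; the datastore path glob and target directory are placeholders.

```python
from azureml.core import Dataset, Workspace

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Reference files by a path glob (placeholder) and pull them locally;
# mount() is the alternative on computes that support it.
file_ds = Dataset.File.from_files(path=[(datastore, "images/**/*.png")])
local_paths = file_ds.download(target_path="./data", overwrite=True)
print(len(local_paths))
```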
mysql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-troubleshoot-cli-errors.md
Currently, Azure CLI doesn't support turning on debug logging, but you can retri
>[!NOTE] >
-> - Replace ```examplegroup``` and ```exampledeployment``` with the correct resource group and deployment name for your database server.
+> - Replace ```examplegroup``` and ```exampledeployment``` with the correct resource group and deployment name for your database server.
> - You can see the Deployment name in the deployments page in your resource group. See [how to find the deployment name](../../azure-resource-manager/templates/deployment-history.md?tabs=azure-portal).
-1. List the deployments in resource group to identify the MySQL Server deployment
- ```azurecli
-
- az deployment operation group list \
- --resource-group examplegroup \
- --name exampledeployment
- ```
-
-2. Get the request content of the MySQL Server deployment
- ```azurecli
-
- az deployment operation group list \
- --name exampledeployment \
- -g examplegroup \
- --query [].properties.request
- ```
-3. Examine the response content
- ```azurecli
- az deployment operation group list \
- --name exampledeployment \
- -g examplegroup \
- --query [].properties.response
- ```
+1. List the deployments in the resource group to identify the MySQL Server deployment.
+
+ ```azurecli
+ az deployment operation group list \
+ --resource-group examplegroup \
+ --name exampledeployment
+ ```
+
+2. Get the request content of the MySQL Server deployment.
+
+ ```azurecli
+ az deployment operation group list \
+ --name exampledeployment \
+ -g examplegroup \
+ --query [].properties.request
+ ```
+
+3. Examine the response content.
+
+ ```azurecli
+ az deployment operation group list \
+ --name exampledeployment \
+ -g examplegroup \
+ --query [].properties.response
+ ```
## Error codes
network-watcher Connection Monitor Create Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-portal.md
This article describes how to create a monitor in Connection Monitor by using the Azure portal. Connection Monitor supports hybrid and Azure cloud deployments. > [!IMPORTANT]
-> As of July 1, 2021, you can no longer add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You also can no longer add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors that were created prior to July 1, 2021.
+> As of July 1, 2021, you can no longer add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You also can no longer add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors that were created prior to July 1, 2021.
> > To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new connection monitor in Azure Network Watcher before February 19, 2024. > [!IMPORTANT]
-> Connection Monitor supports end-to-end connectivity checks from and to Azure Virtual Machine Scale Sets. These checks enable faster performance monitoring and network troubleshooting across scale sets.
+> Connection Monitor supports end-to-end connectivity checks from and to Azure Virtual Machine Scale Sets. These checks enable faster performance monitoring and network troubleshooting across scale sets.
-## Before you begin
+## Before you begin
In monitors that you create by using Connection Monitor, you can add on-premises machines, Azure virtual machines (VMs), and Azure Virtual Machine Scale Sets as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or on any other URL or IP.
Here are some definitions to get you started:
* **Connection monitor resource**: A region-specific Azure resource. All the following entities are properties of a connection monitor resource. * **Endpoint**: A source or destination that participates in connectivity checks. Examples of endpoints include:
- * Azure VMs
- * Azure virtual networks
- * Azure subnets
- * On-premises agents
- * On-premises subnets
- * On-premises custom networks that include multiple subnets
- * URLs and IPs
+ * Azure VMs
+ * Azure virtual networks
+ * Azure subnets
+ * On-premises agents
+ * On-premises subnets
+ * On-premises custom networks that include multiple subnets
+ * URLs and IPs
* **Test configuration**: A protocol-specific configuration for a test. Depending on the protocol you choose, you can define the port, thresholds, test frequency, and other elements.
Here are some definitions to get you started:
:::image type="content" source="./media/connection-monitor-2-preview/cm-tg-2.png" alt-text="Diagram that shows a connection monitor and defines the relationship between test groups and tests."::: > [!NOTE]
- > Connection Monitor now supports the auto-enabling of monitoring extensions for Azure and non-Azure endpoints. You no longer have to install monitoring solutions manually while you're creating a connection monitor.
+ > Connection Monitor now supports the auto-enabling of monitoring extensions for Azure and non-Azure endpoints. You no longer have to install monitoring solutions manually while you're creating a connection monitor.
## Create a connection monitor > [!Note]
-> Connection Monitor now supports the Azure Monitor Agent extension. This support eliminates any dependency on the legacy Log Analytics agent.
+> Connection Monitor now supports the Azure Monitor Agent extension. This support eliminates any dependency on the legacy Log Analytics agent.
-To create a connection monitor by using the Azure portal, do the following:
+To create a connection monitor by using the Azure portal, do the following:
1. In the [Azure portal](https://portal.azure.com), go to **Network Watcher**. 1. On the left pane, in the **Monitoring** section, select **Connection monitor**.
To create a connection monitor by using the Azure portal, do the following:
All the monitors that have been created in Connection Monitor are displayed. To see the connection monitors that were created in classic Connection Monitor, select the **Connection monitor** tab. :::image type="content" source="./media/connection-monitor-2-preview/cm-resource-view.png" alt-text="Screenshot that lists the connection monitors that were created in Connection Monitor.":::
-
+ 1. On the **Connection Monitor** dashboard, select **Create**.
-1. On the **Basics** pane, enter the following details:
+1. On the **Basics** pane, enter the following details:
* **Connection Monitor Name**: Enter a name for your connection monitor. Use the standard naming rules for Azure resources. * **Subscription**: Select a subscription for your connection monitor. * **Region**: Select a region for your connection monitor. You can select only the source VMs that are created in this region. * **Workspace configuration**: Choose a custom workspace or the default workspace. Your workspace holds your monitoring data.
- To choose a custom workspace, clear the default workspace checkbox, and then select the subscription and region for your custom workspace.
+ To choose a custom workspace, clear the default workspace checkbox, and then select the subscription and region for your custom workspace.
:::image type="content" source="./media/connection-monitor-2-preview/create-cm-basics.png" alt-text="Screenshot that shows the 'Basics' pane in Connection Monitor.":::
-
+ 1. Select **Next: Test groups**.
-1. Add sources, destinations, and test configurations in your test groups. To learn about setting up your test groups, see [Create test groups in Connection Monitor](#create-test-groups-in-a-connection-monitor).
+1. Add sources, destinations, and test configurations in your test groups. To learn about setting up your test groups, see [Create test groups in Connection Monitor](#create-test-groups-in-a-connection-monitor).
:::image type="content" source="./media/connection-monitor-2-preview/create-tg.png" alt-text="Screenshot that shows the 'Test groups' pane in Connection Monitor.":::
To create a connection monitor by using the Azure portal, do the following:
1. At the bottom of the pane, select **Next: Review + create**.
-1. On the **Review + create** pane, review the basic information and test groups before you create the connection monitor. If you need to edit the connection monitor, you can do so by going back to the respective panes.
+1. On the **Review + create** pane, review the basic information and test groups before you create the connection monitor. If you need to edit the connection monitor, you can do so by going back to the respective panes.
:::image type="content" source="./media/connection-monitor-2-preview/review-create-cm.png" alt-text="Screenshot that shows the 'Review + create' pane in Connection Monitor.":::
- > [!NOTE]
- > The **Review + create** pane shows the cost per month during the connection monitor stage. Currently, the **Current Cost/Month** column shows no charge. When Connection Monitor becomes generally available, this column will show a monthly charge.
- >
+ > [!NOTE]
+ > The **Review + create** pane shows the cost per month during the connection monitor stage. Currently, the **Current Cost/Month** column shows no charge. When Connection Monitor becomes generally available, this column will show a monthly charge.
+ >
> Even during the connection monitor stage, Log Analytics ingestion charges apply. 1. When you're ready to create the connection monitor, at the bottom of the **Review + create** pane, select **Create**.
Connection Monitor creates the connection monitor resource in the background.
## Create test groups in a connection monitor > [!NOTE]
-> Connection Monitor now supports the auto-enabling of monitoring extensions for Azure and non-Azure endpoints. You no longer have to install monitoring solutions manually while you're creating a connection monitor.
+> Connection Monitor now supports the auto-enabling of monitoring extensions for Azure and non-Azure endpoints. You no longer have to install monitoring solutions manually while you're creating a connection monitor.
Each test group in a connection monitor includes sources and destinations that get tested on network elements. They're tested for the percentage of checks that fail and the RTT over test configurations.
In the Azure portal, to create a test group in a connection monitor, specify val
* **Test group Name**: Enter the name of your test group. * **Sources**: Select **Add sources** to specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
- * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or Virtual Machine Scale Sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and Virtual Machine Scale Sets are grouped into the subscription that they belong to. These groups are collapsed.
-
+ * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or Virtual Machine Scale Sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and Virtual Machine Scale Sets are grouped into the subscription that they belong to. These groups are collapsed.
+ You can drill down to further levels in the hierarchy from the **Subscription** level:
- **Subscription** > **Resource group** > **VNET** > **Subnet** > **VMs with agents**
+ **Subscription** > **Resource group** > **VNET** > **Subnet** > **VMs with agents**
You can also change the **Group by** selector to start the tree from any other level. For example, if you group by virtual network, you see the VMs that have agents in the hierarchy **VNET** > **Subnet** > **VMs with agents**.
- When you select a virtual network, subnet, a single VM, or a virtual machine scale set, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected virtual network or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
+ When you select a virtual network, subnet, a single VM, or a virtual machine scale set, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected virtual network or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
:::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the 'Add Sources' pane and the Azure endpoints, including the 'Virtual Machine Scale Sets' tab in Connection Monitor."::: * To choose on-premises agents, select the **Non-Azure endpoints** tab. Select from a list of on-premises hosts with a Log Analytics agent installed. Select **Arc Endpoint** as the **Type**, and select the subscriptions from the **Subscription** dropdown list. The list of hosts that have the [Azure Arc endpoint](azure-monitor-agent-with-connection-monitor.md) extension and the [Azure Monitor Agent extension](connection-monitor-install-azure-monitor-agent.md) enabled is displayed. :::image type="content" source="./media/connection-monitor-2-preview/arc-endpoint.png" alt-text="Screenshot of Azure Arc-enabled and Azure Monitor Agent-enabled hosts.":::
-
+ If you need to add Network Performance Monitor to your workspace, get it from Azure Marketplace. For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](/previous-versions/azure/azure-monitor/insights/solutions). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
-
+ Under **Create Connection Monitor**, on the **Basics** pane, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created. You can also change the **Group by** selector to group by agents. :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-sources.png" alt-text="Screenshot that shows the 'Add Sources' pane and the 'Non-Azure endpoints' pane in Connection Monitor.":::
- * Select the recently used endpoints from the **Recent endpoint** pane.
-
- * You need not choose the endpoints with monitoring agents enabled only. You can select Azure or non-Azure endpoints without the agent enabled and proceed with the creation of the connection monitor. During the creation process, the monitoring agents for the endpoints will be automatically enabled.
+ * Select the recently used endpoints from the **Recent endpoint** pane.
+
+ * You need not choose the endpoints with monitoring agents enabled only. You can select Azure or non-Azure endpoints without the agent enabled and proceed with the creation of the connection monitor. During the creation process, the monitoring agents for the endpoints will be automatically enabled.
:::image type="content" source="./media/connection-monitor-2-preview/unified-enablement.png" alt-text="Screenshot that shows the 'Add Sources' pane and the 'Non-Azure endpoints' pane in Connection Monitor with unified enablement.":::
-
- * When you finish setting up sources, select **Done** at the bottom of the pane. You can still edit basic properties such as the endpoint name by selecting the endpoint in the **Create Test Group** view.
+
+ * When you finish setting up sources, select **Done** at the bottom of the pane. You can still edit basic properties such as the endpoint name by selecting the endpoint in the **Create Test Group** view.
* **Destinations**: You can monitor connectivity to an Azure VM, an on-premises machine, or any endpoint (a public IP, URL, or FQDN) by specifying it as a destination. In a single test group, you can add Azure VMs, on-premises machines, Office 365 URLs, Dynamics 365 URLs, and custom endpoints. * To choose Azure VMs as destinations, select the **Azure endpoints** tab. By default, the Azure VMs are grouped into a subscription hierarchy that's in the region that you selected under **Create Connection Monitor** on the **Basics** pane. You can change the region and choose Azure VMs from the new region. Then you can drill down from the **Subscription** level to other levels in the hierarchy, just as you can when you set the source Azure endpoints.
- You can select virtual networks, subnets, or single VMs, as you can when you set the source Azure endpoints. When you select a virtual network, subnet, or single VM, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected virtual network or subnet that have the Network Watcher extension participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
+ You can select virtual networks, subnets, or single VMs, as you can when you set the source Azure endpoints. When you select a virtual network, subnet, or single VM, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected virtual network or subnet that have the Network Watcher extension participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
:::image type="content" source="./media/connection-monitor-2-preview/add-azure-dests1.png" alt-text="<Screenshot that shows the 'Add Destinations' pane and the 'Azure endpoints' tab.>"::: :::image type="content" source="./media/connection-monitor-2-preview/add-azure-dests2.png" alt-text="<Screenshot that shows the 'Add Destinations' pane at the Subscription level.>":::
-
-
- * To choose non-Azure agents as destinations, select the **Non-Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have Network Performance Monitor configured.
-
++
+ * To choose non-Azure agents as destinations, select the **Non-Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have Network Performance Monitor configured.
+ If you need to add Network Performance Monitor to your workspace, get it from Azure Marketplace. For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](/previous-versions/azure/azure-monitor/insights/solutions). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
- Under **Create Connection Monitor**, on the **Basics** pane, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created.
+ Under **Create Connection Monitor**, on the **Basics** pane, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created.
:::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-dest.png" alt-text="Screenshot that shows the 'Add Destinations' pane and the 'Non-Azure endpoints' tab.":::
-
- * To choose public endpoints as destinations, select the **External Addresses** tab. The list of endpoints includes Office 365 test URLs and Dynamics 365 test URLs, grouped by name. You also can choose endpoints that were created in other test groups in the same connection monitor.
-
+
+ * To choose public endpoints as destinations, select the **External Addresses** tab. The list of endpoints includes Office 365 test URLs and Dynamics 365 test URLs, grouped by name. You also can choose endpoints that were created in other test groups in the same connection monitor.
+ To add an endpoint, at the upper right, select **Add Endpoint**, and then provide an endpoint name and URL, IP, or FQDN. :::image type="content" source="./media/connection-monitor-2-preview/add-endpoints.png" alt-text="Screenshot that shows where to add public endpoints as destinations in Connection Monitor."::: * To choose recently used endpoints, go to the **Recent endpoint** pane.
- * When you finish choosing destinations, select **Done**. You can still edit basic properties such as the endpoint name by selecting the endpoint in the **Create Test Group** view.
+ * When you finish choosing destinations, select **Done**. You can still edit basic properties such as the endpoint name by selecting the endpoint in the **Create Test Group** view.
* **Test configurations**: You can add one or more test configurations to a test group. Create a new test configuration by using the **New configuration** tab. Or add a test configuration from another test group in the same connection monitor from the **Choose existing** pane.
In the Azure portal, to create a test group in a connection monitor, specify val
* **Create TCP test configuration**: This checkbox appears only if you select **HTTP** in the **Protocol** list. Select this checkbox to create another test configuration that uses the same sources and destinations that you specified elsewhere in your configuration. The new test configuration is named **\<name of test configuration>_networkTestConfig**. * **Disable traceroute**: This checkbox applies when the protocol is TCP or ICMP. Select this box to stop sources from discovering topology and hop-by-hop RTT. * **Destination port**: You can provide a destination port of your choice.
- * **Listen on port**: This checkbox applies when the protocol is TCP. Select this checkbox to open the chosen TCP port if it's not already open.
+ * **Listen on port**: This checkbox applies when the protocol is TCP. Select this checkbox to open the chosen TCP port if it's not already open.
* **Test Frequency**: In this list, specify how frequently sources will ping destinations on the protocol and port that you specified. You can choose 30 seconds, 1 minute, 5 minutes, 15 minutes, or 30 minutes. Select **custom** to enter another frequency from 30 seconds to 30 minutes. Sources will test connectivity to destinations based on the value that you choose. For example, if you select 30 seconds, sources will check connectivity to the destination at least once in every 30-second period. * **Success Threshold**: You can set thresholds on the following network elements: * **Checks failed**: Set the percentage of checks that can fail when sources check connectivity to destinations by using the criteria that you specified. For the TCP or ICMP protocol, the percentage of failed checks can be equated to the percentage of packet loss. For HTTP protocol, this value represents the percentage of HTTP requests that received no response. * **Round trip time**: Set the RTT, in milliseconds, for how long sources can take to connect to the destination over the test configuration.
-
+ :::image type="content" source="./media/connection-monitor-2-preview/add-test-config.png" alt-text="Screenshot that shows where to set up a test configuration in Connection Monitor.":::
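If you manage an existing connection monitor from the command line, the same frequency and threshold fields can be supplied there as well. A hedged sketch; the connection monitor, test configuration, and test group names are hypothetical placeholders:

```azurecli
# Sketch only: add a TCP test configuration with a 30-second frequency and
# success thresholds to an existing connection monitor. Names are placeholders.
az network watcher connection-monitor test-configuration add \
  --connection-monitor MyConnectionMonitor \
  --location eastus \
  --name MyTcpTestConfig \
  --protocol Tcp \
  --tcp-port 443 \
  --frequency 30 \
  --threshold-failed-percent 5 \
  --threshold-round-trip-time 100 \
  --test-groups MyTestGroup
```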
-
+ * **Test Groups**: You can add one or more test groups to a connection monitor. These test groups can consist of multiple Azure or non-Azure endpoints. * For selected Azure VMs or Azure Virtual Machine Scale Sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the npm solution for non-Azure endpoints will be auto enabled after the creation of the connection monitor begins. * If the selected virtual machine scale set is set for a manual upgrade, you'll have to upgrade the scale set after Network Watcher extension installation to continue setting up the connection monitor with virtual machine scale set as endpoints. If the virtual machine scale set is set to auto upgrade, you don't need to worry about any upgrading after the Network Watcher extension is installed.
- * In the previously mentioned scenario, you can consent to an auto upgrade of a virtual machine scale set with auto enabling of the Network Watcher extension during the creation of the connection monitor for Virtual Machine Scale Sets with manual upgrading. This would eliminate your having to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
+ * In the previously mentioned scenario, you can consent to an auto upgrade of a virtual machine scale set with auto enabling of the Network Watcher extension during the creation of the connection monitor for Virtual Machine Scale Sets with manual upgrading. This would eliminate your having to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
:::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up test groups and consent for auto-upgrading of a virtual machine scale set in the connection monitor."::: * **Disable test group**: You can select this checkbox to disable monitoring for all sources and destinations that the test group specifies. This checkbox is cleared by default.
In the Azure portal, to create a test group in a connection monitor, specify val
You can set up alerts on tests that are failing, based on the thresholds set in the test configurations.
-In the Azure portal, to create alerts for a connection monitor, specify values for these fields:
+In the Azure portal, to create alerts for a connection monitor, specify values for these fields:
-- **Create alert**: You can select this checkbox to create a metric alert in Azure Monitor. When you select this checkbox, the other fields will be enabled for editing. Additional charges for the alert will be applicable, based on the [pricing for alerts](https://azure.microsoft.com/pricing/details/monitor/).
+- **Create alert**: You can select this checkbox to create a metric alert in Azure Monitor. When you select this checkbox, the other fields will be enabled for editing. Additional charges for the alert will be applicable, based on the [pricing for alerts](https://azure.microsoft.com/pricing/details/monitor/).
- **Scope** > **Resource** > **Hierarchy**: These values are automatically entered, based on the values specified on the **Basics** pane. -- **Condition name**: The alert is created on the `Test Result(preview)` metric. When the connection monitor test fails, the alert rule will fire.
+- **Condition name**: The alert is created on the `Test Result(preview)` metric. When the connection monitor test fails, the alert rule will fire.
-- **Action group name**: You can enter your email directly, or you can create alerts via action groups. If you enter your email directly, an action group with the name **NPM Email ActionGroup** is created. The email ID is added to that action group. If you choose to use action groups, you need to select a previously created action group. To learn how to create an action group, see [Create action groups in the Azure portal](../azure-monitor/alerts/action-groups.md). After the alert is created, you can [manage your alerts](../azure-monitor/alerts/alerts-metric.md#view-and-manage-with-azure-portal).
+- **Action group name**: You can enter your email directly, or you can create alerts via action groups. If you enter your email directly, an action group with the name **NPM Email ActionGroup** is created. The email ID is added to that action group. If you choose to use action groups, you need to select a previously created action group. To learn how to create an action group, see [Create action groups in the Azure portal](../azure-monitor/alerts/action-groups.md). After the alert is created, you can [manage your alerts](../azure-monitor/alerts/alerts-metric.md#view-and-manage-with-azure-portal).
- **Alert rule name**: The name of the connection monitor. -- **Enable rule upon creation**: Select this checkbox to enable the alert rule, based on the condition. Disable this checkbox if you want to create the rule without enabling it.
+- **Enable rule upon creation**: Select this checkbox to enable the alert rule, based on the condition. Disable this checkbox if you want to create the rule without enabling it.
:::image type="content" source="./media/connection-monitor-2-preview/unified-enablement-create.png" alt-text="Screenshot that shows the 'Create alert' pane in Connection Monitor.":::
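If you prefer to prepare an action group ahead of time instead of entering an email address directly, you can create one with the Azure CLI and select it here. A short sketch; the resource group, names, and email address are placeholders:

```azurecli
# Sketch only: create an action group that sends email notifications.
# The resource group, names, and address are placeholders.
az monitor action-group create \
  --resource-group MyResourceGroup \
  --name MyEmailActionGroup \
  --short-name myemail \
  --action email admin admin@contoso.com
```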
-After you've completed all the steps, the process will proceed with unified enablement of monitoring extensions for all endpoints without monitoring agents enabled, followed by the creation of the connection monitor.
+After you've completed all the steps, the process will proceed with unified enablement of monitoring extensions for all endpoints without monitoring agents enabled, followed by the creation of the connection monitor.
-After the creation process is successful, it takes about 5 minutes for the connection monitor to be displayed on the dashboard.
+After the creation process is successful, it takes about 5 minutes for the connection monitor to be displayed on the dashboard.
## Scale limits
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
Previously updated : 07/10/2023 Last updated : 07/28/2023
The following diagram illustrates multiple site-to-site VPN connections to the s
### <a name="dns"></a>Azure DNS
-[Azure DNS](../../dns/dns-overview.md) is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.
+[Azure DNS](../../dns/index.yml) provides DNS hosting and resolution using the Microsoft Azure infrastructure. Azure DNS consists of three services:
+- [Azure Public DNS](../../dns/dns-overview.md) is a hosting service for DNS domains. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.
+- [Azure Private DNS](../../dns/private-dns-overview.md) is a DNS service for your virtual networks. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution.
+- [Azure DNS Private Resolver](../../dns/dns-private-resolver-overview.md) is a service that enables you to query Azure DNS private zones from an on-premises environment and vice versa without deploying VM based DNS servers.
+
+Using Azure DNS, you can host and resolve public domains, manage DNS resolution in your virtual networks, and enable name resolution between Azure and your on-premises resources.
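As an illustration, hosting a public zone and a private zone, and linking the private zone to a virtual network, can all be done from the Azure CLI. A brief sketch; the resource group, zone names, and virtual network are placeholders:

```azurecli
# Sketch only: create a public DNS zone, a private DNS zone, and a virtual
# network link for the private zone. All names below are placeholders.
az network dns zone create \
  --resource-group MyResourceGroup \
  --name contoso.com

az network private-dns zone create \
  --resource-group MyResourceGroup \
  --name private.contoso.com

az network private-dns link vnet create \
  --resource-group MyResourceGroup \
  --zone-name private.contoso.com \
  --name MyVNetLink \
  --virtual-network MyVNet \
  --registration-enabled false
```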
### <a name="bastion"></a>Azure Bastion
notification-hubs Ios Sdk Current https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/ios-sdk-current.md
configure push credentials in your notification hub. Even if you have no prior e
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict>
- <key>HUB_NAME</key>
- <string>--HUB-NAME--</string>
- <key>CONNECTION_STRING</key>
- <string>--CONNECTION-STRING--</string>
+ <key>HUB_NAME</key>
+ <string>--HUB-NAME--</string>
+ <key>CONNECTION_STRING</key>
+ <string>--CONNECTION-STRING--</string>
</dict> </plist> ```
configure push credentials in your notification hub. Even if you have no prior e
@end @implementation AppDelegate
-
+ @synthesize notificationPresentationCompletionHandler; @synthesize notificationResponseCompletionHandler;
configure push credentials in your notification hub. Even if you have no prior e
NSString *path = [[NSBundle mainBundle] pathForResource:@"DevSettings" ofType:@"plist"]; NSDictionary *configValues = [NSDictionary dictionaryWithContentsOfFile:path];
-
+ NSString *connectionString = [configValues objectForKey:@"CONNECTION_STRING"]; NSString *hubName = [configValues objectForKey:@"HUB_NAME"];
configure push credentials in your notification hub. Even if you have no prior e
[[UNUserNotificationCenter currentNotificationCenter] setDelegate:self]; [MSNotificationHub setDelegate:self]; [MSNotificationHub initWithConnectionString:connectionString withHubName:hubName];
-
+ return YES; }
configure push credentials in your notification hub. Even if you have no prior e
#import "SetupViewController.h" static NSString *const kNHMessageReceived = @"MessageReceived";
-
+ @interface SetupViewController ()
-
+ @end @implementation SetupViewController - (void)viewDidLoad { [super viewDidLoad];
-
+ // Listen for messages using NSNotificationCenter [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(didReceivePushNotification:) name:kNHMessageReceived object:nil]; }
configure push credentials in your notification hub. Even if you have no prior e
preferredStyle:UIAlertControllerStyleAlert]; [alertController addAction:[UIAlertAction actionWithTitle:@"OK" style:UIAlertActionStyleCancel handler:nil]]; [self presentViewController:alertController animated:YES completion:nil];
-
+ // Dismiss after 2 seconds dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{ [alertController dismissViewControllerAnimated:YES completion: nil];
postgresql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshoot-cli-errors.md
Currently, Azure CLI doesn't support turning on debug logging, but you can retri
> - Replace ```examplegroup``` and ```exampledeployment``` with the correct resource group and deployment name for your database server. > - You can see the Deployment name in the deployments page in your resource group. See [how to find the deployment name](../../azure-resource-manager/templates/deployment-history.md?tabs=azure-portal)
+1. List the deployments in resource group to identify the PostgreSQL Server deployment.
-1. List the deployments in resource group to identify the PostgreSQL Server deployment
- ```azurecli
-
- az deployment operation group list \
- --resource-group examplegroup \
- --name exampledeployment
- ```
+ ```azurecli
+ az deployment operation group list \
+ --resource-group examplegroup \
+ --name exampledeployment
+ ```
2. Get the request content of the PostgreSQL Server deployment
- ```azurecli
+ ```azurecli
+ az deployment operation group list \
+ --name exampledeployment \
+ -g examplegroup \
+ --query [].properties.request
+ ```
- az deployment operation group list \
- --name exampledeployment \
- -g examplegroup \
- --query [].properties.request
- ```
3. Examine the response content
- ```azurecli
- az deployment operation group list \
- --name exampledeployment \
- -g examplegroup \
- --query [].properties.response
- ```
+
+ ```azurecli
+ az deployment operation group list \
+ --name exampledeployment \
+ -g examplegroup \
+ --query [].properties.response
+ ```
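To narrow the output down to just the failure details, you can use a more specific JMESPath query. A sketch; the exact property path can vary by operation, so adjust it to match your output:

```azurecli
az deployment operation group list \
   --name exampledeployment \
   -g examplegroup \
   --query "[].properties.statusMessage"
```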
## Error codes
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: July 2023
-* Support for [minor versions](./concepts-supported-versions.md) 15.3 (preview), 14.8, 13.11, 12.15, 11.20 <sup>$</sup>
+* Support for [minor versions](./concepts-supported-versions.md) 15.3, 14.8, 13.11, 12.15, 11.20 <sup>$</sup>
* General Availability of PostgreSQL 15 for Azure Database for PostgreSQL - Flexible Server. * Public preview of [Automation Tasks](./create-automation-tasks.md) for Azure Database for PostgreSQL - Flexible Server.
postgresql Troubleshooting Networking And Connectivity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/troubleshooting-networking-and-connectivity-issues.md
If both Single and Flexible server are in public access, you are unlikely to hit
Let us look at these scenarios in detail.
+The following table can help to jump start troubleshooting connectivity issues.
+
+| Single Server | Flexible Server | Troubleshooting Tips |
+| :--- | :--- | :--- |
+| Public Access | Public access | No action needed. Connectivity should be established automatically. |
+| Private Access | Public access | Unsupported network configuration. [Visit this section to learn more](#private-access-in-source-and-public-access-in-target) |
+| Public Access in source without private end point | Private access | [Visit this section for troubleshooting](#public-access-in-source-without-private-end-points) |
+| Public Access in source with private end point | Private access | [Visit this section for troubleshooting](#public-access-in-source-with-private-end-points) |
+| Private Access | Private access | [Visit this section for troubleshooting](#private-access-in-source-and-private-access-in-target) |
+ ## Private access in source and public access in target
-This network configuration is not supported by Single to Flex migration tooling. In this case, you can opt for other migration tools to perform migration from Single Server to Flexible server.
+This network configuration is not supported by the Single to Flex migration tooling. In this case, you can use other migration tools, such as [pg_dump/pg_restore](../single-server/how-to-upgrade-using-dump-and-restore.md), to migrate from Single Server to Flexible Server, as sketched below.
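As a rough sketch of that alternative path, you can dump the database from Single Server with the standard PostgreSQL client tools and restore it into Flexible Server. The host, user, and database names below are placeholders; see the linked dump-and-restore guide for the full procedure.

```bash
# Sketch only: dump a database from Single Server and restore it to
# Flexible Server. Host, user, and database names are placeholders.
pg_dump -Fc -v \
  --host=mysingleserver.postgres.database.azure.com \
  --username=myadmin@mysingleserver \
  --dbname=mydb \
  -f mydb.dump

pg_restore -v --no-owner \
  --host=myflexserver.postgres.database.azure.com \
  --username=myadmin \
  --dbname=mydb \
  mydb.dump
```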
## Public access in source and private access in target
+There are two possible configurations for your source server in this scenario.
+- Public access in source without private end points.
+- Public access in source with private end points.
+
+Let us look into the details of setting network connectivity between the target and source in the above scenarios.
+
+### Public access in source without private end points
In this case, single server needs to allowlist connections from the subnet in which flexible server is deployed. You can perform the following steps to set up connectivity between single and flexible server. 1. Go to the VNet rules sections in the Connection Security blade of your single server and click on the option **Adding existing virtual network**.
In this case, single server needs to allowlist connections from the subnet in wh
Once the settings are applied, the connection from flexible server to single server will be established and you'll no longer hit this issue.
+### Public Access in source with private end points
+In this case, the connection is routed through the private endpoint. Refer to the steps in the following section on establishing connectivity when both the source and the target use private access.
## Private access in source and private access in target 1. If a single server is in private access, then it can be accessed only through private end points. Get the VNet and subnet details of the private end point by clicking on the private endpoint name.
private-5g-core Azure Private 5G Core Release Notes 2307 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2307.md
+
+ Title: Azure Private 5G Core 2307 release notes
+description: Discover what's new in the Azure Private 5G Core 2307 release
++++ Last updated : 07/31/2023++
+# Azure Private 5G Core 2307 release notes
+
+The following release notes identify the new features, critical open issues, and resolved issues for the 2307 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as they're discovered. Before deploying this new version, review the information contained in these release notes.
+
+This article applies to the AP5GC 2307 release (PMN-2307-0). This release is compatible with the ASE Pro 1 GPU and ASE Pro 2 running the ASE 2303 release, and supports the 2023-06-01, 2022-11-01-preview and 2022-04-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
+
+## Support lifetime
+
+Packet core versions are supported until two subsequent versions have been released (unless otherwise noted). This is typically two months after the release date. You should plan to upgrade your packet core in this time frame to avoid losing support.
+
+## What's new
+### UE usage tracking
+The UE usage tracking messages in Azure Event Hubs are now encoded in AVRO file container format, which enables you to consume these events via Power BI or Azure Stream Analytics (ASA). If you want to enable this feature for your deployment, contact your support representative.
+
+### Unknown User cause code mapping in 4G deployments
+In this release, the 4G NAS EMM cause code for "unknown user" (subscriber not provisioned on AP5GC) changes to "no-suitable-cells-in-ta-15" by default. This provides better interworking in scenarios where a single PLMN is used for multiple, independent mobile networks.
+
+## Issues fixed in the AP5GC 2307 release
+
+The following table provides a summary of issues fixed in this release.
+
+ |No. |Feature | Issue |
+ |--|--|--|
 | 1 | Local distributed tracing | The distributed tracing web GUI fails to display and decode some fields of 4G NAS messages, specifically the 'Initial Context Setup Request' and 'Attach Accept' message information elements. |
 | 2 | 4G/5G Signaling | Removal of a static or dynamic UE IP pool as part of attached data network modification on an existing AP5GC setup still requires a reinstall of the packet core. |
 | 3 | Install/Upgrade | In some cases, the packet core reports successful installation even when the underlying platform or networking is misconfigured. |
 | 4 | 4G/5G Signaling | AP5GC may intermittently fail to recover after the underlying platform is rebooted and may require another reboot to recover. |
+ | 5 | Packet Forwarding | Azure Private 5G Core may not forward buffered packets if NAT is enabled |
+
+## Known issues in the AP5GC 2307 release
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
+ | 1 | Azure Active Directory | When a web proxy is enabled on the Azure Stack Edge appliance that the packet core is running on and Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, the traffic to Azure Active Directory does not transmit via the web proxy. If there's a firewall blocking traffic that does not go via the web proxy then enabling Azure Active Directory causes the packet core install to fail. | Disable Azure Active Directory and use password based authentication to authenticate access to AP5GC Local Dashboards instead. |
+ | 2 | Install/Upgrade | Transient issues with the Arc Connected Kubernetes Cluster resource might trigger errors in packet core operations such as upgrade, rollback or reinstall. | Check the availability of the Kubernetes cluster resource: navigate to the resource in the Portal and check the Resource Health. Ensure it's available and retry the operation. |
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
+ | 1 | Local Dashboards | When a web proxy is enabled on the Azure Stack Edge appliance that the packet core is running on and Azure Active Directory is used to authenticate access to AP5GC Local Dashboards, the traffic to Azure Active Directory doesn't transmit via the web proxy. If there's a firewall blocking traffic that doesn't go via the web proxy then enabling Azure Active Directory causes the packet core install to fail. | Disable Azure Active Directory and use password based authentication to authenticate access to AP5GC Local Dashboards instead. |
+
+
+## Next steps
+
+- [Upgrade the packet core instance in a site - Azure portal](upgrade-packet-core-azure-portal.md)
+- [Upgrade the packet core instance in a site - ARM template](upgrade-packet-core-arm-template.md)
private-5g-core Data Plane Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/data-plane-packet-capture.md
Title: Perform data plane packet capture for a packet core instance
+ Title: Perform data plane packet capture on a packet core instance
-description: In this how-to guide, you'll learn how to perform data plane packet capture for a packet core instance.
+description: In this how-to guide, you'll learn how to perform data plane packet capture on a packet core instance.
-+ Last updated 12/13/2022
-# Perform data plane packet capture for a packet core instance
+# Perform data plane packet capture on a packet core instance
Packet capture for data plane packets is performed using the **UPF Trace (UPFT)** tool. UPFT is similar to **tcpdump**, a data-network packet analyzer computer program that runs on a command line interface. You can use this tool to monitor and record packets on any user plane interface on the access network (N3 interface) or data network (N6 interface) on your device.
-Data plane packet capture works by mirroring packets to a Linux kernel interface, which can then be monitored using tcpdump. In this how-to guide, you'll learn how to perform data plane packet capture for a packet core instance.
+Data plane packet capture works by mirroring packets to a Linux kernel interface, which can then be monitored using tcpdump. In this how-to guide, you'll learn how to perform data plane packet capture on a packet core instance.
> [!IMPORTANT] > Performing packet capture will reduce the performance of your system and the throughput of your data plane. It is therefore only recommended to use this tool at low scale during initial testing.
Data plane packet capture works by mirroring packets to a Linux kernel interface
## Prerequisites - Identify the **Kubernetes - Azure Arc** resource representing the Azure Arc-enabled Kubernetes cluster on which your packet core instance is running.-- Ensure you have [Contributor](../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the **Kubernetes - Azure Arc** resource. - Ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access). ## Performing packet capture
Data plane packet capture works by mirroring packets to a Linux kernel interface
kubectl exec -it -n core core-upf-pp-0 -c troubleshooter -- bash ```
-1. View the list of interfaces that can be monitored:
+1. View the list of configured user plane interfaces:
```azurecli upft list ```
+ This should report a single interface on the access network (N3) and an interface for each attached data network (N6). For example:
+
+ ```azurecli
+ n6trace1 (Data Network: enterprise)
+ n6trace2 (Data Network: test)
+ n3trace
+ n6trace0 (Data Network: internet)
+ ```
+ 1. Run `upftdump` with any parameters that you would usually pass to tcpdump. In particular, `-i` to specify the interface, and `-w` to specify where to write to. Close the UPFT tool when done by pressing <kbd>Ctrl + C</kbd>. The following examples are common use cases: - To run capture packets on all interfaces run `upftdump -i any -w any.pcap` - To run capture packets for the N3 interface and the N6 interface for a single data network, enter the UPF-PP troubleshooter pod in two separate windows. In one window run `upftdump -i n3trace -w n3.pcap` and in the other window run `upftdump -i <N6 interface> -w n6.pcap` (use the N6 interface for the data network as identified in step 2).
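Once a capture completes, you can copy the output file from the troubleshooter container to your local machine before deleting it. A sketch, assuming the capture was written to `/tmp/n3.pcap` (a hypothetical path):

```azurecli
# Sketch only: copy a capture file from the UPF-PP troubleshooter container.
# Replace /tmp/n3.pcap with the path you passed to upftdump.
kubectl cp core/core-upf-pp-0:/tmp/n3.pcap ./n3.pcap -c troubleshooter
```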
Data plane packet capture works by mirroring packets to a Linux kernel interface
1. Remove the output files: ```azurecli
- kubectl exec -it -n core core-upf-pp-0 -c troubleshooter -- rm <path to output file>`
+ kubectl exec -it -n core core-upf-pp-0 -c troubleshooter -- rm <path to output file>
``` ## Next steps
Data plane packet capture works by mirroring packets to a Linux kernel interface
For more options to monitor your deployment and view analytics: - [Learn more about monitoring Azure Private 5G Core using Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md)
+- If you have identified a problem and don't know how to resolve it, you can [Get support for your Azure Private 5G Core service](open-support-request.md)
private-5g-core Ping Traceroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/ping-traceroute.md
+
+ Title: Use ping and traceroute on a packet core instance
+
+description: In this how-to guide, you'll learn how to use the ping and traceroute utilities to check a packet core instance's network connectivity.
++++ Last updated : 07/31/2023+++
+# Use ping and traceroute on a packet core instance
+
+Azure Private 5G Core supports the standard **ping** and **traceroute** diagnostic tools, enhanced with an option to select a specific network interface. You can use ping and traceroute to help diagnose network connectivity problems. In this how-to guide, you'll learn how to use ping and traceroute to check connectivity to the access or data networks over the user plane interfaces on your device.
+
+## Prerequisites
+
+- Identify the **Kubernetes - Azure Arc** resource representing the Azure Arc-enabled Kubernetes cluster on which your packet core instance is running.
+- Ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).
+
+## Choose the IP address to test
+
+You can use the ping and traceroute tools to check the reachability of any IP address over the specified interface. A common example is the default gateway. If you don't know the default gateway address for the interface you want to test, you can find it on the **Advanced Networking** blade on the Azure Stack Edge (ASE) local UI.
+
+To access the local UI, see [Tutorial: Connect to Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-connect.md).
+
+## Run the ping and traceroute tools
+
+1. In a command line with kubectl access to the Azure Arc-enabled Kubernetes cluster, enter the UPF-PP troubleshooter pod:
+
+ ```azurecli
+ kubectl exec -it -n core core-upf-pp-0 -c troubleshooter -- bash
+ ```
+
+1. View the list of configured user plane interfaces:
+
+ ```azurecli
+ upft list
+ ```
+
+ This should report a single interface on the access network (N3) and an interface for each attached data network (N6). For example:
+
+ ```azurecli
+ n6trace1 (Data Network: enterprise)
+ n6trace2 (Data Network: test)
+ n3trace
+ n6trace0 (Data Network: internet)
+ ```
+
+1. Run the ping command, specifying the network and IP address to test. You can specify `access` for the access network or the network name for a data network.
+
+ ```azurecli
+ ping --net <network name> <IP address>
+ ```
+
+ For example:
+
+ ```azurecli
+ ping --net enterprise 10.0.0.1
+ ```
+
+ The tool should report a list of packets transmitted and received with 0% packet loss.
+
+1. Run the traceroute command, specifying the network and IP address to test. You can specify `access` for the access network or the network name for a data network.
+
+ ```azurecli
+ traceroute --net <network name> <IP address>
+ ```
+
+ For example:
+
+ ```azurecli
+ traceroute --net enterprise 10.0.0.1
+ ```
+
+ The tool should report a series of hops, with the specified IP address as the final hop.
+
+## Next steps
+
+- For more detailed diagnostics, you can [Perform data plane packet capture on a packet core instance](data-plane-packet-capture.md)
+- If you have identified a connectivity issue and don't know how to resolve it, you can [Get support for your Azure Private 5G Core service](open-support-request.md)
private-5g-core Ue Usage Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/ue-usage-event-hub.md
You can monitor UE usage based on the monitoring data generated by Azure Event H
UE usage monitoring can be configured during site creation or at a later stage. If you want to configure UE usage monitoring for a site, please contact your support representative.
-Once configured for the site, you must add the [UE usage schema](#ue-usage-schema) to a Schema Registry in order to use monitor UE usage in your deployment - see [Azure Schema Registry in Azure Event Hubs](/azure/event-hubs/schema-registry-overview).
- Once Event Hubs is receiving data from your AP5GC deployment you can write an application, using SDKs [such as .NET](/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send?tabs=passwordless%2Croles-azure-portal), to consume event data and produce useful metric data. ## Reported UE usage data
When configured, AP5GC will send data usage reports per QoS flow level for all P
|- **Preemption Capability**|String |See **ARP** above.| |- **Preemption Vulnerability**|String |See **ARP** above.|
+## Azure Stream Analytics
+
+Azure Stream Analytics allows you to process and analyze streaming data from Event Hubs. See [Process data from your event hub using Azure Stream Analytics](/azure/event-hubs/process-data-azure-stream-analytics) for more information.
+ ## UE usage schema
-The following schema is used by Event Hubs to validate the UE usage messages. You must add this schema to a Schema Registry in order to monitor UE usage in your deployment - see [Validate schemas for Apache Kafka applications using Avro (Java)](/azure/event-hubs/schema-registry-kafka-java-send-receive-quickstart).
+The following schema is used by Event Hubs to validate the UE usage messages.
```json {
private-5g-core Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/whats-new.md
To help you stay up to date with the latest developments, this article covers:
This page is updated regularly with the latest developments in Azure Private 5G Core. ## July 2023
+### Packet core 2307
+
+**Type:** New release
+
+**Date available:** July 31, 2023
+
+The 2307 release for the Azure Private 5G Core packet core is now available. For more information, see [Azure Private 5G Core 2307 release notes](azure-private-5g-core-release-notes-2307.md).
### 2023-06-01 API
This page is updated regularly with the latest developments in Azure Private 5G
**Date available:** July 19, 2023
-The 2023-06-01 ARM API release introduces the ability to configure several upcoming Azure Private 5G Core features. From July 19th, 2023-06-01 is the default API version for Azure Private 5G Core deployments.
+The 2023-06-01 ARM API release introduces the ability to configure several upcoming Azure Private 5G Core features. From July 19, 2023-06-01 is the default API version for Azure Private 5G Core deployments.
If you use the Azure portal to manage your deployment and all your resources were created using the 2022-04-01-preview API or 2022-11-01, you don't need to do anything. Your portal will use the new API.
The 2305 release for the Azure Private 5G Core packet core is now available. For
**Date available:** May 31, 2023
-New-MobileNetworkSite now supports an additional parameter that makes it easier to create a site and its dependant resources.
+New-MobileNetworkSite now supports a parameter that makes it easier to create a site and its dependent resources.
-For details, see [Create additional Packet Core instances for a site using the Azure portal](create-additional-packet-core.md).
+For details, see [Create more Packet Core instances for a site using the Azure portal](create-additional-packet-core.md).
### Multiple Packet Cores under the same Site
The Azure Private 5G Core online service now reports the provisioning status of
**Date available:** January 31, 2023
-You can now gather diagnostics for a site remotely using the Azure portal. Diagnostics packages will be collected from the edge site and uploaded to an Azure storage account, which can be shared with AP5GC support or others for assistance with issues. Follow [Gather diagnostics using the Azure portal](gather-diagnostics.md) to gather a remote diagnostics package for an Azure Private 5G Core site using the Azure portal.
+You can now gather diagnostics for a site remotely using the Azure portal. Diagnostics packages are collected from the edge site and uploaded to an Azure storage account, which can be shared with AP5GC support or others for assistance with issues. Follow [Gather diagnostics using the Azure portal](gather-diagnostics.md) to gather a remote diagnostics package for an Azure Private 5G Core site using the Azure portal.
### West Europe region
The **Diagnose and solve problems** option in the left content menu can now prov
**Date available:** December 16, 2022
-If you're experiencing issues with your packet core deployment, you can now reinstall the packet core to return it to a known state. Reinstalling the packet core deletes the existing packet core deployment and attempts to deploy the packet core at the edge with the existing site configuration. Already created **Site**-dependent resources such as the **Packet Core Control Plane**, **Packet Core Data Plane** and **Attached Data Network** will continue to be used in the deployment.
+If you're experiencing issues with your packet core deployment, you can now reinstall the packet core to return it to a known state. Reinstalling the packet core deletes the existing packet core deployment and attempts to deploy the packet core at the edge with the existing site configuration. Already created **Site**-dependent resources such as the **Packet Core Control Plane**, **Packet Core Data Plane** and **Attached Data Network** continue to be used in the deployment.
-You can check the installation state on the **Packet Core Control Plane** resource's overview page. Upon successful redeployment, the installation state will change from **Reinstalling** to either **Installed** or **Failed**, depending on the outcome. You can reinstall the packet core if the installation state is **Installed** or **Failed**.
+You can check the installation state on the **Packet Core Control Plane** resource's overview page. Upon successful redeployment, the installation state changes from **Reinstalling** to either **Installed** or **Failed**, depending on the outcome. You can reinstall the packet core if the installation state is **Installed** or **Failed**.
If you attempt a reinstall after an upgrade, redeployment will be attempted with the upgraded packet core version. The reinstall is done using the latest packet core version currently defined in the ARM API version.
You can add a custom certificate to secure access to your local monitoring tools
The 2022-11-01 ARM API release introduces the ability to configure several upcoming Azure Private 5G Core features. From December 12, 2022-11-01 is the default API version for Azure Private 5G Core deployments.
-If you use the Azure portal to manage your deployment and all your resources were created using the 2022-04-01-preview API, you don't need to do anything. Your portal will use the new API and any differences between the APIs are handled automatically.
+If you use the Azure portal to manage your deployment and all your resources were created using the 2022-04-01-preview API, you don't need to do anything. Your portal uses the new API and any differences between the APIs are handled automatically.
If you use ARM templates and want to keep using your existing templates, follow [Upgrade your ARM templates to the 2022-11-01 API](#upgrade-your-arm-templates-to-the-2022-11-01-api) to upgrade your 2022-04-01-preview API templates to the 2022-11-01 API.
role-based-access-control Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/elevate-access-global-admin.md
Last updated 03/21/2023-+ # Elevate access to manage all Azure subscriptions and management groups
Follow these steps to elevate access for a Global Administrator using the Azure
> [!NOTE] > If you're using [Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md), deactivating your role assignment does not change the **Access management for Azure resources** toggle to **No**. To maintain least privileged access, we recommend that you set this toggle to **No** before you deactivate your role assignment.
-
+ 1. Click **Save** to save your setting. This setting is not a global property and applies only to the currently signed in user. You can't elevate access for all members of the Global Administrator role.
When you call `elevateAccess`, you create a role assignment for yourself, so to
```http GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01&$filter=principalId+eq+'{objectid}' ```
-
- >[!NOTE]
- >A directory administrator should not have many assignments, if the previous query returns too many assignments, you can also query for all assignments just at directory scope level, then filter the results:
+
+ >[!NOTE]
+ >A directory administrator should not have many assignments. If the previous query returns too many assignments, you can also query for all assignments at the directory scope level only, and then filter the results:
> `GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01&$filter=atScope()`
-
-1. The previous calls return a list of role assignments. Find the role assignment where the scope is `"/"` and the `roleDefinitionId` ends with the role name ID you found in step 1 and `principalId` matches the objectId of the directory administrator.
-
+
+1. The previous calls return a list of role assignments. Find the role assignment where the scope is `"/"` and the `roleDefinitionId` ends with the role name ID you found in step 1 and `principalId` matches the objectId of the directory administrator.
+ Sample role assignment:
-
+ ```json { "value": [
When you call `elevateAccess`, you create a role assignment for yourself, so to
"nextLink": null } ```
-
+ Again, save the ID from the `name` parameter, in this case 11111111-1111-1111-1111-111111111111. 1. Finally, Use the role assignment ID to remove the assignment added by `elevateAccess`:
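The exact request isn't shown in this excerpt; a minimal sketch using `az rest` with the sample assignment ID above would look like the following (substitute the role assignment ID you saved):

```azurecli
# Sketch only: delete the root-scope role assignment created by elevateAccess.
# Replace the GUID with the role assignment ID saved from the previous step.
az rest --method delete \
  --url "https://management.azure.com/providers/Microsoft.Authorization/roleAssignments/11111111-1111-1111-1111-111111111111?api-version=2022-04-01"
```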
When access is elevated, an entry is added to the logs. As a Global Administrato
az rest --url "https://management.azure.com/providers/Microsoft.Insights/eventtypes/management/values?api-version=2015-04-01&$filter=eventTimestamp ge '2021-09-10T20:00:00Z'" > output.txt ```
-1. In the output file, search for `elevateAccess`.
+1. In the output file, search for `elevateAccess`.
The log will resemble the following where you can see the timestamp of when the action occurred and who called it.
sap Large Instance High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/large-instance-high-availability-rhel.md
Before you can begin configuring the cluster, set up SSH key exchange to establi
1. Use the following commands to create identical `/etc/hosts` on both nodes.
- ```
- root@sollabdsm35 ~]# cat /etc/hosts
- 27.0.0.1 localhost localhost.azlinux.com
- 10.60.0.35 sollabdsm35.azlinux.com sollabdsm35 node1
- 10.60.0.36 sollabdsm36.azlinux.com sollabdsm36 node2
- 10.20.251.150 sollabdsm36-st
- 10.20.251.151 sollabdsm35-st
- 10.20.252.151 sollabdsm36-back
- 10.20.252.150 sollabdsm35-back
- 10.20.253.151 sollabdsm36-node
- 10.20.253.150 sollabdsm35-node
- ```
-
-2. Create and exchange the SSH keys.
+ ```
+ root@sollabdsm35 ~]# cat /etc/hosts
+ 127.0.0.1 localhost localhost.azlinux.com
+ 10.60.0.35 sollabdsm35.azlinux.com sollabdsm35 node1
+ 10.60.0.36 sollabdsm36.azlinux.com sollabdsm36 node2
+ 10.20.251.150 sollabdsm36-st
+ 10.20.251.151 sollabdsm35-st
+ 10.20.252.151 sollabdsm36-back
+ 10.20.252.150 sollabdsm35-back
+ 10.20.253.151 sollabdsm36-node
+ 10.20.253.150 sollabdsm35-node
+ ```
+
+2. Create and exchange the SSH keys.
1. Generate ssh keys.
- ```
- [root@sollabdsm35 ~]# ssh-keygen -t rsa -b 1024
- [root@sollabdsm36 ~]# ssh-keygen -t rsa -b 1024
- ```
+ ```
+ [root@sollabdsm35 ~]# ssh-keygen -t rsa -b 1024
+ [root@sollabdsm36 ~]# ssh-keygen -t rsa -b 1024
+ ```
2. Copy keys to the other hosts for passwordless ssh.
-
+ ``` [root@sollabdsm35 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub sollabdsm35 [root@sollabdsm35 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub sollabdsm36
Before you can begin configuring the cluster, set up SSH key exchange to establi
[root@sollabdsm36 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub sollabdsm36 ```
-3. Disable selinux on both nodes.
- ```
- [root@sollabdsm35 ~]# vi /etc/selinux/config
+3. Disable selinux on both nodes.
+ ```
+ [root@sollabdsm35 ~]# vi /etc/selinux/config
- ...
+ ...
- SELINUX=disabled
+ SELINUX=disabled
- [root@sollabdsm36 ~]# vi /etc/selinux/config
+ [root@sollabdsm36 ~]# vi /etc/selinux/config
- ...
+ ...
- SELINUX=disabled
+ SELINUX=disabled
- ```
+ ```
4. Reboot the servers and then use the following command to verify the status of selinux.
- ```
- [root@sollabdsm35 ~]# sestatus
+ ```
+ [root@sollabdsm35 ~]# sestatus
- SELinux status: disabled
+ SELinux status: disabled
- [root@sollabdsm36 ~]# sestatus
+ [root@sollabdsm36 ~]# sestatus
- SELinux status: disabled
- ```
+ SELinux status: disabled
+ ```
5. Configure NTP (Network Time Protocol). The time and time zones for both cluster nodes must match. Use the following command to open `chrony.conf` and verify the contents of the file. 1. The following contents should be added to config file. Change the actual values as per your environment.
- ```
- vi /etc/chrony.conf
-
- Use public servers from the pool.ntp.org project.
-
- Please consider joining the pool (http://www.pool.ntp.org/join.html).
-
- server 0.rhel.pool.ntp.org iburst
+ ```
+ vi /etc/chrony.conf
+
+ # Use public servers from the pool.ntp.org project.
+
+ # Please consider joining the pool (http://www.pool.ntp.org/join.html).
+
+ server 0.rhel.pool.ntp.org iburst
```
-
- 2. Enable chrony service.
-
+
+ 2. Enable chrony service.
+
+ ```
+ systemctl enable chronyd
+
+ systemctl start chronyd
+
+ chronyc tracking
+
+ Reference ID : CC0BC90A (voipmonitor.wci.com)
+
+ Stratum : 3
+
+ Ref time (UTC) : Thu Jan 28 18:46:10 2021
+
+ chronyc sources
+
+ 210 Number of sources = 8
+
+ MS Name/IP address Stratum Poll Reach LastRx Last sample
+
+ ===============================================================================
+
+ ^+ time.nullroutenetworks.c> 2 10 377 1007 -2241us[-2238us] +/- 33ms
+
+ ^* voipmonitor.wci.com 2 10 377 47 +956us[ +958us] +/- 15ms
+
+ ^- tick.srs1.ntfo.org 3 10 177 801 -3429us[-3427us] +/- 100ms
```
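   Because the time and time zones of both nodes must match, a quick cross-check (a minimal sketch, not part of the original procedure) is to compare the output of `timedatectl` on both hosts:

   ```
   # Run on both nodes and compare the "Time zone" and "NTP synchronized" fields
   [root@sollabdsm35 ~]# timedatectl
   [root@sollabdsm36 ~]# timedatectl
   ```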
6. Update the system.
   1. First, install the latest updates on the system before you start to install the SBD device.
+ 1. Customers must make sure that they have at least version 4.1.1-12.el7_6.26 of the resource-agents-sap-hana package installed, as documented in [Support Policies for RHEL High Availability Clusters - Management of SAP HANA in a Cluster](https://access.redhat.com/articles/3397471)
1. If you don't want a complete update of the system, even if it is recommended, update the following packages at a minimum: `resource-agents-sap-hana`, `selinux-policy`, `iscsi-initiator-utils`.
+
+ ```
+ node1:~ # yum update
+ ```
7. Install the SAP HANA and RHEL-HA repositories.
+   ```
+   subscription-manager repos --list
+
+   subscription-manager repos --enable=rhel-sap-hana-for-rhel-7-server-rpms
+
+   subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
+   ```
+
8. Install the Pacemaker, SBD, OpenIPMI, ipmitool, and fencing_sbd tools on all nodes.
+ ```
+ yum install pcs sbd fence-agent-sbd.x86_64 OpenIPMI
+ ipmitool
+ ```
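   To confirm that the minimum package versions called out above are met (in particular `resource-agents-sap-hana`), you can query the installed versions. This is a minimal verification sketch, not part of the original procedure:

   ```
   # Verify installed versions on both nodes; resource-agents-sap-hana must be
   # at least 4.1.1-12.el7_6.26 per the Red Hat support policy referenced above
   rpm -q resource-agents-sap-hana pcs sbd
   ```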
## Configure Watchdog

In this section, you learn how to configure Watchdog. This section uses the same two hosts, `sollabdsm35` and `sollabdsm36`, referenced at the beginning of this article.

1. Make sure that the watchdog daemon is not running on any systems.
+   ```
+   [root@sollabdsm35 ~]# systemctl disable watchdog
+   [root@sollabdsm36 ~]# systemctl disable watchdog
+   [root@sollabdsm35 ~]# systemctl stop watchdog
+   [root@sollabdsm36 ~]# systemctl stop watchdog
+   [root@sollabdsm35 ~]# systemctl status watchdog
+
+   ● watchdog.service - watchdog daemon
+
+   Loaded: loaded (/usr/lib/systemd/system/watchdog.service; disabled;
+   vendor preset: disabled)
+
+   Active: inactive (dead)
+
+   Nov 28 23:02:40 sollabdsm35 systemd[1]: Collecting watchdog.service
+   ```
2. The default Linux watchdog that is installed during the installation is the iTCO watchdog, which is not supported by UCS and HPE SDFlex systems. Therefore, this watchdog must be disabled.
   1. The wrong watchdog is installed and loaded on the system:
+ ```
+ sollabdsm35:~ # lsmod |grep iTCO
+
+ iTCO_wdt 13480 0
+
+ iTCO_vendor_support 13718 1 iTCO_wdt
+ ```
2. Unload the wrong driver from the environment:
+ ```
+ sollabdsm35:~ # modprobe -r iTCO_wdt iTCO_vendor_support
+
+ sollabdsm36:~ # modprobe -r iTCO_wdt iTCO_vendor_support
+ ```
+ 3. To make sure the driver is not loaded during the next system boot, the driver must be blocklisted. To blocklist the iTCO modules, add the following to the end of the `50-blacklist.conf` file:
+ ```
+ sollabdsm35:~ # vi /etc/modprobe.d/50-blacklist.conf
+
+ unload the iTCO watchdog modules
+
+ blacklist iTCO_wdt
+
+ blacklist iTCO_vendor_support
    ```

   4. Copy the file to the secondary host.

+   ```
+   sollabdsm35:~ # scp /etc/modprobe.d/50-blacklist.conf sollabdsm36:
+   /etc/modprobe.d/50-blacklist.conf
+   ```
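   After the next reboot you can confirm that the blocklist is effective. As a minimal sketch (not part of the original procedure), the following should return no output on either node:

   ```
   # No output means the iTCO modules are no longer loaded
   lsmod | grep iTCO
   ```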
5. Test if the ipmi service is started. It is important that the IPMI timer is not running. The timer management will be done from the SBD pacemaker service.
+    ```
+    sollabdsm35:~ # ipmitool mc watchdog get
+
+    Watchdog Timer Use: BIOS FRB2 (0x01)
+
+    Watchdog Timer Is: Stopped
+
+    Watchdog Timer Actions: No action (0x00)
+
+    Pre-timeout interval: 0 seconds
+
+    Timer Expiration Flags: 0x00
+
+    Initial Countdown: 0 sec
+
+    Present Countdown: 0 sec
+
+    ```
+
+3. By default, the required device /dev/watchdog is not created.
+
+ ```
+ sollabdsm35:~ # ls -l /dev/watchdog
+
+ ls: cannot access /dev/watchdog: No such file or directory
+ ```
+
+4. Configure the IPMI watchdog.
+
+ ```
+ sollabdsm35:~ # mv /etc/sysconfig/ipmi /etc/sysconfig/ipmi.org
+
+ sollabdsm35:~ # vi /etc/sysconfig/ipmi
+
+ IPMI_SI=yes
+ DEV_IPMI=yes
+ IPMI_WATCHDOG=yes
+ IPMI_WATCHDOG_OPTIONS="timeout=20 action=reset nowayout=0
+ panic_wdt_timeout=15"
+ IPMI_POWEROFF=no
+ IPMI_POWERCYCLE=no
+ IPMI_IMB=no
    ```

5. Copy the watchdog config file to the secondary node.

    ```
+   sollabdsm35:~ # scp /etc/sysconfig/ipmi
+   sollabdsm36:/etc/sysconfig/ipmi
```
+6. Enable and start the ipmi service.
+   ```
+   [root@sollabdsm35 ~]# systemctl enable ipmi
+
+   Created symlink from
+   /etc/systemd/system/multi-user.target.wants/ipmi.service to
+   /usr/lib/systemd/system/ipmi.service.
+
+   [root@sollabdsm35 ~]# systemctl start ipmi
+
+   [root@sollabdsm36 ~]# systemctl enable ipmi
+
+   Created symlink from
+   /etc/systemd/system/multi-user.target.wants/ipmi.service to
+   /usr/lib/systemd/system/ipmi.service.
+
+   [root@sollabdsm36 ~]# systemctl start ipmi
+   ```
+   Now the IPMI service is started and the device /dev/watchdog is created, but the timer is still stopped. Later, SBD will manage the watchdog reset and enable the IPMI timer.
+7. Check that the /dev/watchdog exists but is not in use.
+ ```
+ [root@sollabdsm35 ~]# ipmitool mc watchdog get
+ Watchdog Timer Use: SMS/OS (0x04)
+ Watchdog Timer Is: Stopped
+ Watchdog Timer Actions: No action (0x00)
+ Pre-timeout interval: 0 seconds
+ Timer Expiration Flags: 0x10
+ Initial Countdown: 20 sec
+ Present Countdown: 20 sec
+
+ [root@sollabdsm35 ~]# ls -l /dev/watchdog
+ crw- 1 root root 10, 130 Nov 28 23:12 /dev/watchdog
+ [root@sollabdsm35 ~]# lsof /dev/watchdog
```
## SBD configuration

In this section, you learn how to configure SBD. This section uses the same two hosts, `sollabdsm35` and `sollabdsm36`, referenced at the beginning of this article.
+1. Make sure the iSCSI or FC disk is visible on both nodes. This example uses an FC-based SBD device. For more information about SBD fencing, see [Design Guidance for RHEL High Availability Clusters - SBD Considerations](https://access.redhat.com/articles/2941601) and [Support Policies for RHEL High Availability Clusters - sbd and fence_sbd](https://access.redhat.com/articles/2800691)
+2. The LUN-ID must be identical on all nodes.
+
+3. Check multipath status for the sbd device.
+ ```
+ multipath -ll
+ 3600a098038304179392b4d6c6e2f4b62 dm-5 NETAPP ,LUN C-Mode
+ size=1.0G features='4 queue_if_no_path pg_init_retries 50
+ retain_attached_hw_handle' hwhandler='1 alua' wp=rw
+ |-+- policy='service-time 0' prio=50 status=active
+ | |- 8:0:1:2 sdi 8:128 active ready running
+ | `- 10:0:1:2 sdk 8:160 active ready running
+ `-+- policy='service-time 0' prio=10 status=enabled
+ |- 8:0:3:2 sdj 8:144 active ready running
+ `- 10:0:3:2 sdl 8:176 active ready running
+ ```
+
+4. Create the SBD disc and set up the cluster fencing primitive. This step must be executed on the first node.
+ ```
+ sbd -d /dev/mapper/3600a098038304179392b4d6c6e2f4b62 -4 20 -1 10 create
+
+ Initializing device /dev/mapper/3600a098038304179392b4d6c6e2f4b62
+ Creating version 2.1 header on device 4 (uuid:
+ ae17bd40-2bf9-495c-b59e-4cb5ecbf61ce)
+
+ Initializing 255 slots on device 4
+
+ Device /dev/mapper/3600a098038304179392b4d6c6e2f4b62 is initialized.
+ ```
+
+5. Copy the SBD config over to node2.
+ ```
+ vi /etc/sysconfig/sbd
+
+ SBD_DEVICE="/dev/mapper/3600a09803830417934d6c6e2f4b62"
+ SBD_PACEMAKER=yes
+ SBD_STARTMODE=always
+ SBD_DELAY_START=no
+ SBD_WATCHDOG_DEV=/dev/watchdog
+ SBD_WATCHDOG_TIMEOUT=15
+ SBD_TIMEOUT_ACTION=flush,reboot
+ SBD_MOVE_TO_ROOT_CGROUP=auto
+ SBD_OPTS=
+
+ scp /etc/sysconfig/sbd node2:/etc/sysconfig/sbd
+ ```
+
+6. Check that the SBD disk is visible from both nodes.
+ ```
+ sbd -d /dev/mapper/3600a098038304179392b4d6c6e2f4b62 dump
+
+ ==Dumping header on disk /dev/mapper/3600a098038304179392b4d6c6e2f4b62
+
+ Header version : 2.1
+
+ UUID : ae17bd40-2bf9-495c-b59e-4cb5ecbf61ce
+
+ Number of slots : 255
+ Sector size : 512
+ Timeout (watchdog) : 5
+ Timeout (allocate) : 2
+ Timeout (loop) : 1
+ Timeout (msgwait) : 10
+
+ ==Header on disk /dev/mapper/3600a098038304179392b4d6c6e2f4b62 is dumped
+ ```
+
+7. Add the SBD device in the SBD config file.
+
+ ```
+ # SBD_DEVICE specifies the devices to use for exchanging sbd messages
+ # and to monitor. If specifying more than one path, use ";" as
+ # separator.
+ #
+
+ SBD_DEVICE="/dev/mapper/3600a098038304179392b4d6c6e2f4b62"
+ ## Type: yesno
+ Default: yes
+ # Whether to enable the pacemaker integration.
+ SBD_PACEMAKER=yes
+ ```
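   Before initializing the cluster, it can be useful to confirm that both nodes share the same SBD configuration and can see the SBD device. This is a minimal sketch using standard tools and the example device from this article; it is not part of the original procedure:

   ```
   # The checksums should match on both nodes
   md5sum /etc/sysconfig/sbd
   ssh sollabdsm36 md5sum /etc/sysconfig/sbd

   # The multipath device referenced in SBD_DEVICE must exist on both nodes
   ls -l /dev/mapper/3600a098038304179392b4d6c6e2f4b62
   ssh sollabdsm36 ls -l /dev/mapper/3600a098038304179392b4d6c6e2f4b62
   ```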
## Cluster initialization

In this section, you initialize the cluster. This section uses the same two hosts, `sollabdsm35` and `sollabdsm36`, referenced at the beginning of this article.
+1. Set up the cluster user password (all nodes).
+ ```
+ passwd hacluster
+ ```
+2. Start PCS on all systems.
+ ```
+ systemctl enable pcsd
+ ```
+3. Stop the firewall and disable it (on all nodes).
+   ```
+   systemctl disable firewalld
+   systemctl mask firewalld
+   systemctl stop firewalld
+   ```
+4. Start the pcsd service.
+   ```
+   systemctl start pcsd
+   ```
+5. Run the cluster authentication only from node1.
+   ```
+   pcs cluster auth sollabdsm35 sollabdsm36
+
+   Username: hacluster
+   Password:
+   sollabdsm35.localdomain: Authorized
+   sollabdsm36.localdomain: Authorized
+   ```
+6. Create the cluster.
+   ```
+   pcs cluster setup --start --name hana sollabdsm35 sollabdsm36
+   ```
+7. Check the cluster status.
+   ```
+   pcs cluster status
+
+   Cluster name: hana
+
+   WARNINGS:
+
+   No stonith devices and `stonith-enabled` is not false
+
+   Stack: corosync
+
+   Current DC: sollabdsm35 (version 1.1.20-5.el7_7.2-3c4c782f70) -
+   partition with quorum
+
+   Last updated: Sat Nov 28 20:56:57 2020
+
+   Last change: Sat Nov 28 20:54:58 2020 by hacluster via crmd on
+   sollabdsm35
+
+   2 nodes configured
+
+   0 resources configured
+
+   Online: [ sollabdsm35 sollabdsm36 ]
+
+   No resources
+
+   Daemon Status:
+
+   corosync: active/disabled
+
+   pacemaker: active/disabled
+
+   pcsd: active/disabled
+   ```
8. If one node is not joining the cluster, check whether the firewall is still running.

9. Create and enable the SBD device.
+ ```
+ pcs stonith create SBD fence_sbd devices=/dev/mapper/3600a098038303f4c467446447a
+ ```
10. Stop the cluster (on all nodes).
+ ```
+ pcs cluster stop --all
+ ```
11. Restart the cluster services (on all nodes).
+ ```
+ systemctl stop pcsd
+ systemctl stop pacemaker
+    systemctl stop corosync
+ systemctl enable sbd
+ systemctl start corosync
+ systemctl start pacemaker
+ systemctl start pcsd
+ ```
12. Corosync must start the SBD service.
+    ```
+    systemctl status sbd
+
+    ● sbd.service - Shared-storage based fencing daemon
+
+    Loaded: loaded (/usr/lib/systemd/system/sbd.service; enabled; vendor
+    preset: disabled)
+
+    Active: active (running) since Wed 2021-01-20 01:43:41 EST; 9min ago
+    ```
13. Restart the cluster (if not automatically started from pcsd).
+    ```
+    pcs cluster start --all
+
+    sollabdsm35: Starting Cluster (corosync)...
+
+    sollabdsm36: Starting Cluster (corosync)...
+
+    sollabdsm35: Starting Cluster (pacemaker)...
+
+    sollabdsm36: Starting Cluster (pacemaker)...
+    ```
+
14. Enable fencing device settings.
+ ```
+ pcs stonith enable SBD --device=/dev/mapper/3600a098038304179392b4d6c6e2f4d65
+ pcs property set stonith-watchdog-timeout=20
+ pcs property set stonith-action=reboot
+ ```
+ 15. Check the new cluster status with now one resource.
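    As a quick sanity check (a sketch, not part of the original procedure, and subcommand names can differ slightly between pcs versions), you can confirm that the stonith resource and the cluster properties were applied:

    ```
    # Show the SBD stonith resource and the configured cluster properties
    pcs stonith show SBD
    pcs property list
    ```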
- ```
- pcs status
+ ```
+ pcs status
+
+ Cluster name: hana
- Cluster name: hana
+ Stack: corosync
- Stack: corosync
+ Current DC: sollabdsm35 (version 1.1.16-12.el7-94ff4df) - partition
+ with quorum
- Current DC: sollabdsm35 (version 1.1.16-12.el7-94ff4df) - partition
- with quorum
+ Last updated: Tue Oct 16 01:50:45 2018
- Last updated: Tue Oct 16 01:50:45 2018
+ Last change: Tue Oct 16 01:48:19 2018 by root via cibadmin on
+ sollabdsm35
- Last change: Tue Oct 16 01:48:19 2018 by root via cibadmin on
- sollabdsm35
+ 2 nodes configured
- 2 nodes configured
+ 1 resource configured
- 1 resource configured
+ Online: [ sollabdsm35 sollabdsm36 ]
- Online: [ sollabdsm35 sollabdsm36 ]
+ Full list of resources:
- Full list of resources:
+ SBD (stonith:fence_sbd): Started sollabdsm35
- SBD (stonith:fence_sbd): Started sollabdsm35
+ Daemon Status:
- Daemon Status:
+ corosync: active/disabled
- corosync: active/disabled
+ pacemaker: active/disabled
- pacemaker: active/disabled
+ pcsd: active/enabled
- pcsd: active/enabled
+ sbd: active/enabled
- sbd: active/enabled
+ [root@node1 ~]#
+ ```
- [root@node1 ~]#
- ```
-
16. Now the IPMI timer must run and the /dev/watchdog device must be opened by sbd.
+    ```
+    ipmitool mc watchdog get
+
+    Watchdog Timer Use: SMS/OS (0x44)
+
+    Watchdog Timer Is: Started/Running
+
+    Watchdog Timer Actions: Hard Reset (0x01)
+
+    Pre-timeout interval: 0 seconds
+
+    Timer Expiration Flags: 0x10
+
+    Initial Countdown: 20 sec
+
+    Present Countdown: 19 sec
+
+    [root@sollabdsm35 ~] lsof /dev/watchdog
+
+    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
+
+    sbd 117569 root 5w CHR 10,130 0t0 323812 /dev/watchdog
+    ```
17. Check the SBD status.
+    ```
+    sbd -d /dev/mapper/3600a098038304445693f4c467446447a list
+
+    0 sollabdsm35 clear
+
+    1 sollabdsm36 clear
+    ```
+
18. Test the SBD fencing by crashing the kernel.

    * Trigger the kernel crash.
+      ```
+      echo c > /proc/sysrq-trigger
+
+      System must reboot after 5 minutes (BMC timeout) or the value which is
+      set as panic_wdt_timeout in the /etc/sysconfig/ipmi config file.
+      ```
+
* Second test to run is to fence a node using PCS commands.
+      ```
+      pcs stonith fence sollabdsm36
+      ```
+
+ 19. For the rest of the SAP HANA clustering you can disable fencing by setting:
The default and supported way is to create a performance optimized scenario wher
| **Synchronous** | Synchronous (mode=sync) means the log write is considered as successful when the log entry has been written to the log volume of the primary and the secondary instance. When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk. No data loss occurs in this scenario as long as the secondary system is connected. Data loss can occur, when a takeover is executed while the secondary system is disconnected. Additionally, this replication mode can run with a full sync option. This means that log write is successful when the log buffer has been written to the log file of the primary and the secondary instance. In addition, when the secondary system is disconnected (for example, because of network failure) the primary systems suspends transaction processing until the connection to the secondary system is reestablished. No data loss occurs in this scenario. You can set the full sync option for system replication only with the parameter \[system\_replication\]/enable\_full\_sync). For more information on how to enable the full sync option, see Enable Full Sync Option for System Replication. | | **Asynchronous** | Asynchronous (mode=async) means the primary system sends redo log buffers to the secondary system asynchronously. The primary system commits a transaction when it has been written to the log file of the primary system and sent to the secondary system through the network. It does not wait for confirmation from the secondary system. This option provides better performance because it is not necessary to wait for log I/O on the secondary system. Database consistency across all services on the secondary system is guaranteed. However, it is more vulnerable to data loss. Data changes may be lost on takeover. |
+1. These are the actions to execute on node1 (primary).
1. Make sure that the database log mode is set to normal.
    ```
+
+   * su - hr2adm
+
+   * hdbsql -u system -p $YourPass -i 00 "select value from
+   "SYS"."M_INIFILE_CONTENTS" where key='log_mode'"
+
+   VALUE
+
+   "normal"
+   ```
+   2. SAP HANA system replication will only work after an initial backup has been performed. The following command creates an initial backup in the `/tmp/` directory. Select a proper backup filesystem for the database.
+   ```
+   * hdbsql -i 00 -u system -p $YourPass "BACKUP DATA USING FILE
+   ('/tmp/backup')"
+
+   Backup files were created
+
+   ls -l /tmp
+
+   total 2031784
+
+   -rw-r-- 1 hr2adm sapsys 155648 Oct 26 23:31 backup_databackup_0_1
+
+   -rw-r-- 1 hr2adm sapsys 83894272 Oct 26 23:31 backup_databackup_2_1
+
+   -rw-r-- 1 hr2adm sapsys 1996496896 Oct 26 23:31 backup_databackup_3_1
+
+   ```
+
+   3. Back up all database containers of this database.
+ ```
+ * hdbsql -i 00 -u system -p $YourPass -d SYSTEMDB "BACKUP DATA USING
+ FILE ('/tmp/sydb')"
+
+ * hdbsql -i 00 -u system -p $YourPass -d SYSTEMDB "BACKUP DATA FOR HR2
+ USING FILE ('/tmp/rh2')"
+ ```
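   If you want to confirm that the backups were recorded, you can query the backup catalog with hdbsql. This is an illustrative sketch only (column names can vary slightly between HANA revisions), not part of the original procedure:

   ```
   * hdbsql -i 00 -u system -p $YourPass -d SYSTEMDB "SELECT TOP 5
   ENTRY_TYPE_NAME, STATE_NAME, UTC_START_TIME FROM M_BACKUP_CATALOG
   ORDER BY UTC_START_TIME DESC"
   ```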
+
+ 4. Enable the HSR process on the source system.
```
+   hdbnsutil -sr_enable --name=DC1
+
+   nameserver is active, proceeding ...
+
+   successfully enabled system as system replication source site
+
+   done.
```
+ 5. Check the status of the primary system.
+ ```
+ hdbnsutil -sr_state
+
+ System Replication State
+
+ online: true
+
+ mode: primary
+
+ operation mode: primary
+
+ site id: 1
+
+ site name: DC1
+
+ is source system: true
+
+ is secondary/consumer system: false
+
+ has secondaries/consumers attached: false
+
+ is a takeover active: false
+
+ Host Mappings:
+
+ ~~~~~~~~~~~~~~
+
+ Site Mappings:
+
+ ~~~~~~~~~~~~~~
+
+ DC1 (primary/)
+
+ Tier of DC1: 1
+
+ Replication mode of DC1: primary
+
+ Operation mode of DC1:
+
+ done.
```
2. These are the actions to execute on node2 (secondary).
   1. Stop the database.
```
+   su - hr2adm
+
+ sapcontrol -nr 00 -function StopSystem
+ ```
+
+ 2. For SAP HANA2.0 only, copy the SAP HANA system `PKI SSFS_HR2.KEY` and `SSFS_HR2.DAT` files from primary node to secondary node.
+ ```
+ scp
+ root@node1:/usr/sap/HR2/SYS/global/security/rsecssfs/key/SSFS_HR2.KEY
+ /usr/sap/HR2/SYS/global/security/rsecssfs/key/SSFS_HR2.KEY
+
+ scp
+ root@node1:/usr/sap/HR2/SYS/global/security/rsecssfs/data/SSFS_HR2.DAT
+ /usr/sap/HR2/SYS/global/security/rsecssfs/data/SSFS_HR2.DAT
   ```

   3. Enable the secondary as the replication site.
    ```
+ su - hr2adm
+
+ hdbnsutil -sr_register --remoteHost=node1 --remoteInstance=00
+ --replicationMode=syncmem --name=DC2
+
+ adding site ...
+
+ --operationMode not set; using default from global.ini/[system_replication]/operation_mode: logreplay
+
+ nameserver node2:30001 not responding.
+
+ collecting information ...
+
+ updating local ini files ...
+
+ done.
+
+ ```
+
+ 4. Start the database.
+ ```
+ sapcontrol -nr 00 -function StartSystem
```
-
+ 5. Check the database state. ```
+ hdbnsutil -sr_state
+
+ ~~~~~~~~~
+ System Replication State
+
+ online: true
+
+ mode: syncmem
+
+ operation mode: logreplay
+
+ site id: 2
+
+ site name: DC2
+
+ is source system: false
+
+ is secondary/consumer system: true
+
+ has secondaries/consumers attached: false
+
+ is a takeover active: false
+
+ active primary site: 1
+
+ primary primarys: node1
+
+ Host Mappings:
+
+   node2 -> [DC2] node2
+
+   node2 -> [DC1] node1
+
+   Site Mappings:
+
+   DC1 (primary/primary)
+
+   |DC2 (syncmem/logreplay)
+
+   Tier of DC1: 1
+
+   Tier of DC2: 2
+
+ Replication mode of DC1: primary
+
+ Replication mode of DC2: syncmem
+
+ Operation mode of DC1: primary
+
+ Operation mode of DC2: logreplay
+
+ Mapping: DC1 -> DC2
+
+ done.
+ ~~~~~~~~~~~~~~
+ ```
3. It is also possible to get more information on the replication status:
+   ```
+   ~~~~~
+   hr2adm@node1:/usr/sap/HR2/HDB00> python
+   /usr/sap/HR2/HDB00/exe/python_support/systemReplicationStatus.py
+
+   | Database | Host | Port | Service Name | Volume ID | Site ID | Site
+   Name | Secondary | Secondary | Secondary | Secondary | Secondary |
+   Replication | Replication | Replication |
+
+   | | | | | | | | Host | Port | Site ID | Site Name | Active Status |
+   Mode | Status | Status Details |
+
+   | SYSTEMDB | node1 | 30001 | nameserver | 1 | 1 | DC1 | node2 | 30001
+   | 2 | DC2 | YES | SYNCMEM | ACTIVE | |
+
+   | HR2 | node1 | 30007 | xsengine | 2 | 1 | DC1 | node2 | 30007 | 2 |
+   DC2 | YES | SYNCMEM | ACTIVE | |
+
+   | HR2 | node1 | 30003 | indexserver | 3 | 1 | DC1 | node2 | 30003 | 2
+   | DC2 | YES | SYNCMEM | ACTIVE | |
+
+   status system replication site "2": ACTIVE
+
+   overall system replication status: ACTIVE
+
+   Local System Replication State
+
+   mode: PRIMARY
+
+   site id: 1
+
+   site name: DC1
+   ```
+
#### Log Replication Mode Description

For more information about log replication mode, see the [official SAP documentation](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/627bd11e86c84ec2b9fcdf585d24011c.html).
-
+ #### Network Setup for HANA System Replication
In the first example, the `[system_replication_communication]listeninterface` pa
In the following example, the `[system_replication_communication]listeninterface` parameter has been set to `.internal` and all hosts of both sites are specified.
-
+ For more information, see [Network Configuration for SAP HANA System Replication](https://www.sap.com/documents/2016/06/18079a1c-767c-0010-82c7-eda71af511fa.html).
-
+ For system replication, it is not necessary to edit the `/etc/hosts` file; instead, internal ('virtual') host names must be mapped to IP addresses in the `global.ini` file to create a dedicated network for system replication. The syntax for this is as follows:
global.ini
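The concrete mapping block is not shown here. Purely as an illustration (the section name and the example replication-network IPs below are assumptions based on the host list at the beginning of this article), such a mapping in `global.ini` typically looks like the following:

```
[system_replication_hostname_resolution]
10.20.253.150 = sollabdsm35
10.20.253.151 = sollabdsm36
```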
## Configure SAP HANA in a Pacemaker cluster

In this section, you learn how to configure SAP HANA in a Pacemaker cluster. This section uses the same two hosts, `sollabdsm35` and `sollabdsm36`, referenced at the beginning of this article.
+Ensure you have met the following prerequisites:
* Pacemaker cluster is configured according to documentation and has proper and working fencing
Ensure you have met the following prerequisites:
* Both nodes are subscribed to 'High-availability' and 'RHEL for SAP HANA' (RHEL 6,RHEL 7) channels
-
+ * In general, execute all pcs commands only from one node, because the CIB will be automatically updated from the pcs shell.
+ * [More info on quorum policy](https://access.redhat.com/solutions/645843)
+### Steps to configure
1. Configure pcs.
+ ```
+   [root@node1 ~]# pcs property unset no-quorum-policy (optional - only if it was set before)
+ [root@node1 ~]# pcs resource defaults resource-stickiness=1000
+ [root@node1 ~]# pcs resource defaults migration-threshold=5000
+ ```
+2. Configure corosync.
+ For more information, see [How can I configure my RHEL 7 High Availability Cluster with pacemaker and corosync](https://access.redhat.com/solutions/1293523).
+ ```
+ cat /etc/corosync/corosync.conf
+   totem {
+   version: 2
+   secauth: off
+   cluster_name: hana
+   transport: udpu
+   }
+
+   nodelist {
+   node {
+   ring0_addr: node1.localdomain
+   nodeid: 1
+   }
+
+   node {
+   ring0_addr: node2.localdomain
+   nodeid: 2
+   }
+   }
+
+   quorum {
+   provider: corosync_votequorum
+   two_node: 1
+   }
+
+   logging {
+   to_logfile: yes
+   logfile: /var/log/cluster/corosync.log
+   to_syslog: yes
+   }
+
+   ```
+3. Create the cloned SAPHanaTopology resource.
+
+   The SAPHanaTopology resource gathers the status and configuration of SAP
+   HANA System Replication on each node. SAPHanaTopology requires the
+   following attributes to be configured.
+
+   ```
+   pcs resource create SAPHanaTopology_HR2_00 SAPHanaTopology SID=HR2 op start timeout=600 \
+   op stop timeout=300 \
+   op monitor interval=10 timeout=600 \
+   clone clone-max=2 clone-node-max=1 interleave=true
+   ```
+
+   | Attribute Name | Description |
+   |||
| SID | SAP System Identifier (SID) of SAP HANA installation. Must be the same for all nodes. |
+ | InstanceNumber | 2-digit SAP Instance Identifier.|
+
+ * Resource status
+
+ ```
+ pcs resource show SAPHanaTopology_HR2_00
+
+ Clone: SAPHanaTopology_HR2_00-clone
+ Meta Attrs: clone-max=2 clone-node-max=1 interleave=true
+ Resource: SAPHanaTopology_HR2_00 (class=ocf provider=heartbeat type=SAPHanaTopology)
+ Attributes: InstanceNumber=00 SID=HR2
+ Operations: monitor interval=60 timeout=60 (SAPHanaTopology_HR2_00-monitor-interval-60)
+ start interval=0s timeout=180 (SAPHanaTopology_HR2_00-start-interval-0s)
+ stop interval=0s timeout=60 (SAPHanaTopology_HR2_00-stop-interval-0s)
+ ```
+
+4. Create Primary/Secondary SAPHana resource.
+ * SAPHana resource is responsible for starting, stopping, and relocating the SAP HANA database. This resource must be run as a Primary/Secondary cluster resource. The resource has the following attributes.
+
+ | Attribute Name | Required? | Default value | Description |
+ ||--||-|
+ | SID | Yes | None | SAP System Identifier (SID) of SAP HANA installation. Must be same for all nodes. |
+ | InstanceNumber | Yes | none | 2-digit SAP Instance identifier. |
+ | PREFER_SITE_TAKEOVER | no | yes | Should cluster prefer to switchover to secondary instance instead of restarting primary locally? ("no": Do prefer restart locally; "yes": Do prefer takeover to remote site) |
+ | | | | |
+ | AUTOMATED_REGISTER | no | FALSE | Should the former SAP HANA primary be registered as secondary after takeover and DUPLICATE_PRIMARY_TIMEOUT? ("false": no, manual intervention will be needed; "true": yes, the former primary will be registered by resource agent as secondary) |
+ | DUPLICATE_PRIMARY_TIMEOUT | no | 7200 | Time difference (in seconds) needed between primary time stamps, if a dual-primary situation occurs. If the time difference is less than the time gap, then the cluster holds one or both instances in a "WAITING" status. This is to give an admin a chance to react on a failover. A failed former primary will be registered after the time difference is passed. After this registration to the new primary, all data will be overwritten by the system replication. |
+
+5. Create the HANA resource.
+
+ ```
+ pcs resource create SAPHana_HR2_00 SAPHana SID=HR2 InstanceNumber=00 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true op start timeout=3600 \
op stop timeout=3600 \ op monitor interval=61 role="Slave" timeout=700 \ op monitor interval=59 role="Master" timeout=700 \
Ensure you have met the following prerequisites:
op demote timeout=3600 \ master meta notify=true clone-max=2 clone-node-max=1 interleave=true -
+ pcs resource show SAPHana_HR2_00-primary
Primary: SAPHana_HR2_00-primary Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true
Ensure you have met the following prerequisites:
promote interval=0s timeout=320 (SAPHana_HR2_00-promote-interval-0s) start interval=0s timeout=180 (SAPHana_HR2_00-start-interval-0s) stop interval=0s timeout=240 (SAPHana_HR2_00-stop-interval-0s)
+
+   crm_mon -A1
+
+   ....
+
+   2 nodes configured
+
+   5 resources configured
+
+   Online: [ node1.localdomain node2.localdomain ]
+
+   Active resources:
+
+   .....
+
+   Node Attributes:
+
+   * Node node1.localdomain:
+   + hana_hr2_clone_state : PROMOTED
+   + hana_hr2_remoteHost : node2
+   + hana_hr2_roles : 4:P:primary1:primary:worker:primary
+   + hana_hr2_site : DC1
+   + hana_hr2_srmode : syncmem
+   + hana_hr2_sync_state : PRIM
+   + hana_hr2_version : 2.00.033.00.1535711040
+   + hana_hr2_vhost : node1
+   + lpa_hr2_lpt : 1540866498
+   + primary-SAPHana_HR2_00 : 150
+   * Node node2.localdomain:
+   + hana_hr2_clone_state : DEMOTED
+   + hana_hr2_op_mode : logreplay
+   + hana_hr2_remoteHost : node1
+   + hana_hr2_roles : 4:S:primary1:primary:worker:primary
+   + hana_hr2_site : DC2
+   + hana_hr2_srmode : syncmem
+   + hana_hr2_sync_state : SOK
+   + hana_hr2_version : 2.00.033.00.1535711040
+   + hana_hr2_vhost : node2
+   + lpa_hr2_lpt : 30
+   + primary-SAPHana_HR2_00 : 100
+   ```
+6. Create Virtual IP address resource.
+   The cluster will contain a virtual IP address in order to reach the primary instance of SAP HANA. Below is an example command to create an IPaddr2 resource with IP 10.7.0.84/24.
+   ```
+   pcs resource create vip_HR2_00 IPaddr2 ip="10.7.0.84"
+   pcs resource show vip_HR2_00
+
+   Resource: vip_HR2_00 (class=ocf provider=heartbeat type=IPaddr2)
+   Attributes: ip=10.7.0.84
+   Operations: monitor interval=10s timeout=20s
+   (vip_HR2_00-monitor-interval-10s)
+   start interval=0s timeout=20s (vip_HR2_00-start-interval-0s)
+   stop interval=0s timeout=20s (vip_HR2_00-stop-interval-0s)
+   ```
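   The text above mentions 10.7.0.84/24, but the example command does not pass a netmask. If you want to set it explicitly, the IPaddr2 agent accepts a `cidr_netmask` parameter; a minimal sketch (not part of the original command) would be:

   ```
   pcs resource create vip_HR2_00 IPaddr2 ip="10.7.0.84" cidr_netmask=24
   ```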
+7. Create constraints.
+ * For correct operation, we need to ensure that SAPHanaTopology resources are started before starting the SAPHana resources, and also that the virtual IP address is present on the node where the Primary resource of SAPHana is running. To achieve this, the following 2 constraints need to be created.
+ ```
+ pcs constraint order SAPHanaTopology_HR2_00-clone then SAPHana_HR2_00-primary symmetrical=false
+ pcs constraint colocation add vip_HR2_00 with primary SAPHana_HR2_00-primary 2000
+ ```
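   To verify that both constraints were created as expected, you can list them (a quick check, not part of the original procedure):

   ```
   # Shows the order and colocation constraints created above
   pcs constraint
   ```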
### Testing the manual move of SAPHana resource to another node

#### (SAP Hana takeover by cluster)
+To test out the move of the SAPHana resource from one node to another, use the command below. Note that the option `--primary` should not be used when running the following command because of how the SAPHana resource works internally.
`pcs resource move SAPHana_HR2_00-primary`
Node Attributes:
+ lpa_hr2_lpt : 1540867311 + primary-SAPHana_HR2_00 : 100 ```
-
* Login to HANA as verification. * demoted host:
+      ```
+      hdbsql -i 00 -u system -p $YourPass -n 10.7.0.82
+
+      result:
+
+      * -10709: Connection failed (RTE:[89006] System call 'connect'
+      failed, rc=111:Connection refused (10.7.0.82:30015))
+      ```
+
* Promoted host:
+      ```
+      hdbsql -i 00 -u system -p $YourPass -n 10.7.0.84
+
+      Welcome to the SAP HANA Database interactive terminal.
+
+      Type: \h for help with commands
+
+      \q to quit
+
+      hdbsql HR2=>
+
+      DB is online
+      ```
With the option `AUTOMATED_REGISTER=false`, you cannot switch back and forth. If this option is set to false, you must re-register the node:

   ```
   hdbnsutil -sr_register --remoteHost=node2 --remoteInstance=00 --replicationMode=syncmem --name=DC1
   ```
hdbnsutil -sr_register --remoteHost=node2 --remoteInstance=00 --replicationMode=
Now node2, which was the primary, acts as the secondary host. Consider setting this option to true to automate the registration of the demoted host.
-
+ ```
+ pcs resource update SAPHana_HR2_00-primary AUTOMATED_REGISTER=true
+ pcs cluster node clear node1
+ ```
search Cognitive Search Tutorial Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob.md
Title: 'REST Tutorial: AI on Azure blobs'
-description: Step through an example of text extraction and natural language processing over content in Blob Storage using Postman and the Azure Cognitive Search REST APIs.
+description: Step through an example of text extraction and natural language processing over content in Blob Storage using Postman and the Azure Cognitive Search REST APIs.
If you don't have an Azure subscription, open a [free account](https://azure.mic
## Overview
-This tutorial uses Postman and the [Azure Cognitive Search REST APIs](/rest/api/searchservice/) to create a data source, index, indexer, and skillset.
+This tutorial uses Postman and the [Azure Cognitive Search REST APIs](/rest/api/searchservice/) to create a data source, index, indexer, and skillset.
-The indexer connects to Azure Blob Storage and retrieves the content, which you must load in advance. The indexer then invokes a [skillset](cognitive-search-working-with-skillsets.md) for specialized processing, and ingests the enriched content into a [search index](search-what-is-an-index.md).
+The indexer connects to Azure Blob Storage and retrieves the content, which you must load in advance. The indexer then invokes a [skillset](cognitive-search-working-with-skillsets.md) for specialized processing, and ingests the enriched content into a [search index](search-what-is-an-index.md).
The skillset is attached to an [indexer](search-indexer-overview.md). It uses built-in skills from Microsoft to find and extract information. Steps in the pipeline include Optical Character Recognition (OCR) on images, language detection, key phrase extraction, and entity recognition (organizations, locations, people). New information created by the pipeline is stored in new fields in an index. Once the index is populated, you can use those fields in queries, facets, and filters.
The skillset is attached to an [indexer](search-indexer-overview.md). It uses bu
The sample data consists of 14 files of mixed content type that you'll upload to Azure Blob Storage in a later step.
-1. Get the files from [azure-search-sample-data/ai-enrichment-mixed-media/](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/ai-enrichment-mixed-media) and copy them to your local computer.
+1. Get the files from [azure-search-sample-data/ai-enrichment-mixed-media/](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/ai-enrichment-mixed-media) and copy them to your local computer.
1. Next, get the source code, a Postman collection file, for this tutorial. Source code can be found at [azure-search-postman-samples/tree/master/Tutorial](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/Tutorial).
If possible, create both in the same region and resource group for proximity and
+ **Resource group**. Select an existing one or create a new one, but use the same group for all services so that you can manage them collectively.
- + **Storage account name**. If you think you might have multiple resources of the same type, use the name to disambiguate by type and region, for example *blobstoragewestus*.
+ + **Storage account name**. If you think you might have multiple resources of the same type, use the name to disambiguate by type and region, for example *blobstoragewestus*.
+ **Location**. If possible, choose the same location used for Azure Cognitive Search and Azure AI services. A single location voids bandwidth charges.
If possible, create both in the same region and resource group for proximity and
:::image type="content" source="media/cognitive-search-tutorial-blob/sample-files.png" alt-text="Screenshot of the files in File Explorer." border="true":::
-1. Before you leave Azure Storage, get a connection string so that you can formulate a connection in Azure Cognitive Search.
+1. Before you leave Azure Storage, get a connection string so that you can formulate a connection in Azure Cognitive Search.
- 1. Browse back to the Overview page of your storage account (we used *blobstragewestus* as an example).
+ 1. Browse back to the Overview page of your storage account (we used *blobstragewestus* as an example).
- 1. In the left navigation pane, select **Access keys** and copy one of the connection strings.
+ 1. In the left navigation pane, select **Access keys** and copy one of the connection strings.
The connection string is a URL similar to the following example:
For this exercise, however, you can skip resource provisioning because Azure Cog
The third component is Azure Cognitive Search, which you can [create in the portal](search-create-service-portal.md) or [find an existing search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your subscription.
-You can use the Free tier to complete this walkthrough.
+You can use the Free tier to complete this walkthrough.
### Copy an admin api-key and URL for Azure Cognitive Search
All HTTP requests to a search service require an API key. A valid key establishe
## 2 - Set up Postman
-1. Start Postman, import the collection, and set up the environment variables. If you're unfamiliar with this tool, see [Explore Azure Cognitive Search REST APIs](search-get-started-rest.md).
+1. Start Postman, import the collection, and set up the environment variables. If you're unfamiliar with this tool, see [Explore Azure Cognitive Search REST APIs](search-get-started-rest.md).
1. You'll need to provide a search service name, an admin API key, an index name, a connection string to your Azure Storage account, and the container name.
The request methods used in this collection are **PUT** and **GET**. You'll use
## 3 - Create the pipeline
-In Azure Cognitive Search, enrichment occurs during indexing (or data ingestion). This part of the walkthrough creates four objects: data source, index definition, skillset, indexer.
+In Azure Cognitive Search, enrichment occurs during indexing (or data ingestion). This part of the walkthrough creates four objects: data source, index definition, skillset, indexer.
### Step 1: Create a data source
Call [Create Data Source](/rest/api/searchservice/create-data-source) to set the
1. The body of the request is JSON and includes properties of an indexer data source object. The connection string includes credentials for accessing the service. ```json
- {
- "description" : "Demo files to demonstrate cognitive search capabilities.",
+ {
+ "description" : "Demo files to demonstrate cognitive search capabilities.",
"type" : "azureblob",
- "credentials" : {
+ "credentials" : {
"connectionString": "{{azure_storage_connection_string}}"
- },
- "container" : {
+ },
+ "container" : {
"name" : "{{container_name}}" } } ```
-1. Send the request. You should see a status code of 201 confirming success.
+1. Send the request. You should see a status code of 201 confirming success.
If you got a 403 or 404 error, check the search admin API key and the Azure Storage connection string. ### Step 2: Create a skillset
-Call [Create Skillset](/rest/api/searchservice/create-skillset) to specify which enrichment steps are applied to your content.
+Call [Create Skillset](/rest/api/searchservice/create-skillset) to specify which enrichment steps are applied to your content.
1. Select the "Create a skillset" request.
Call [Create Skillset](/rest/api/searchservice/create-skillset) to specify which
"insertPostTag": " ", "inputs": [ {
- "name":"text",
+ "name":"text",
"source": "/document/content" }, {
- "name": "itemsToInsert",
+ "name": "itemsToInsert",
"source": "/document/normalized_images/*/text" }, {
- "name":"offsets",
- "source": "/document/normalized_images/*/contentOffset"
+ "name":"offsets",
+ "source": "/document/normalized_images/*/contentOffset"
} ], "outputs": [ {
- "name": "mergedText",
+ "name": "mergedText",
"targetName" : "merged_text" } ]
Call [Create Skillset](/rest/api/searchservice/create-skillset) to specify which
} ```
- A graphical representation of a portion of the skillset is shown below.
+ A graphical representation of a portion of the skillset is shown below.
![Understand a skillset](media/cognitive-search-tutorial-blob/skillset.png "Understand a skillset")
-1. Send the request. Postman should return a status code of 201 confirming success.
+1. Send the request. Postman should return a status code of 201 confirming success.
> [!NOTE] > Outputs can be mapped to an index, used as input to a downstream skill, or both as is the case with language code. In the index, a language code is useful for filtering. For more information about skillset fundamentals, see [How to define a skillset](cognitive-search-defining-skillset.md).
Call [Create Index](/rest/api/searchservice/create-index) to provide the schema
1. Select the "Create an index" request. 1. The body of the request defines the schema of the search index. A fields collection requires one field to be designated as the key. For blob content, this field is often the "metadata_storage_path" that uniquely identifies each blob in the container.
-
+ In this schema, the "text" field receives OCR output, "content" receives merged output, "language" receives language detection output. Key phrases, entities, and several fields lifted from blob storage comprise the remaining entries. ```json
Call [Create Index](/rest/api/searchservice/create-index) to provide the schema
} ```
-1. Send the request. Postman should return a status code of 201 confirming success.
+1. Send the request. Postman should return a status code of 201 confirming success.
### Step 4: Create and run an indexer
Call [Create Indexer](/rest/api/searchservice/create-indexer) to drive the pipel
"mappingFunction" : { "name" : "base64Encode" } }, {
- "sourceFieldName": "metadata_storage_name",
- "targetFieldName": "metadata_storage_name"
+ "sourceFieldName": "metadata_storage_name",
+ "targetFieldName": "metadata_storage_name"
} ],
- "outputFieldMappings" :
- [
- {
- "sourceFieldName": "/document/merged_text",
- "targetFieldName": "content"
+ "outputFieldMappings" :
+ [
+ {
+ "sourceFieldName": "/document/merged_text",
+ "targetFieldName": "content"
}, { "sourceFieldName" : "/document/normalized_images/*/text", "targetFieldName" : "text" },
- {
- "sourceFieldName" : "/document/organizations",
+ {
+ "sourceFieldName" : "/document/organizations",
"targetFieldName" : "organizations" }, {
- "sourceFieldName": "/document/language",
- "targetFieldName": "language"
+ "sourceFieldName": "/document/language",
+ "targetFieldName": "language"
},
- {
- "sourceFieldName" : "/document/persons",
+ {
+ "sourceFieldName" : "/document/persons",
"targetFieldName" : "persons" },
- {
- "sourceFieldName" : "/document/locations",
+ {
+ "sourceFieldName" : "/document/locations",
"targetFieldName" : "locations" }, {
- "sourceFieldName" : "/document/pages/*/keyPhrases/*",
+ "sourceFieldName" : "/document/pages/*/keyPhrases/*",
"targetFieldName" : "keyPhrases" } ], "parameters": {
- "batchSize": 1,
- "maxFailedItems":-1,
- "maxFailedItemsPerBatch":-1,
- "configuration":
- {
- "dataToExtract": "contentAndMetadata",
- "imageAction": "generateNormalizedImages"
- }
+ "batchSize": 1,
+ "maxFailedItems":-1,
+ "maxFailedItemsPerBatch":-1,
+ "configuration":
+ {
+ "dataToExtract": "contentAndMetadata",
+ "imageAction": "generateNormalizedImages"
+ }
} } ```
-1. Send the request. Postman should return a status code of 201 confirming successful processing.
+1. Send the request. Postman should return a status code of 201 confirming successful processing.
- Expect this step to take several minutes to complete. Even though the data set is small, analytical skills are computation-intensive.
+ Expect this step to take several minutes to complete. Even though the data set is small, analytical skills are computation-intensive.
> [!NOTE] > Creating an indexer invokes the pipeline. If there are problems reaching the data, mapping inputs and outputs, or order of operations, they appear at this stage. To re-run the pipeline with code or script changes, you might need to drop objects first. For more information, see [Reset and re-run](#reset).
Call [Create Indexer](/rest/api/searchservice/create-indexer) to drive the pipel
The script sets ```"maxFailedItems"``` to -1, which instructs the indexing engine to ignore errors during data import. This is acceptable because there are so few documents in the demo data source. For a larger data source, you would set the value to greater than 0.
-The ```"dataToExtract":"contentAndMetadata"``` statement tells the indexer to automatically extract the values from the blob's content property and the metadata of each object.
+The ```"dataToExtract":"contentAndMetadata"``` statement tells the indexer to automatically extract the values from the blob's content property and the metadata of each object.
When content is extracted, you can set ```imageAction``` to extract text from images found in the data source. The ```"imageAction":"generateNormalizedImages"``` configuration, combined with the OCR Skill and Text Merge Skill, tells the indexer to extract text from the images (for example, the word "stop" from a traffic Stop sign), and embed it as part of the content field. This behavior applies to both embedded images (think of an image inside a PDF) and standalone image files, for instance a JPG file. ## 4 - Monitor indexing
-Indexing and enrichment commence as soon as you submit the Create Indexer request. Depending on which cognitive skills you defined, indexing can take a while.
+Indexing and enrichment commence as soon as you submit the Create Indexer request. Depending on which cognitive skills you defined, indexing can take a while.
To find out whether the indexer is still running, call [Get Indexer Status](/rest/api/searchservice/get-indexer-status) to check the indexer status. 1. Select and then send the "Check indexer status" request.
-1. Review the response to learn whether the indexer is running, or to view error and warning information.
+1. Review the response to learn whether the indexer is running, or to view error and warning information.
Warnings are common in some scenarios and do not always indicate a problem. For example, if a blob container includes image files, and the pipeline doesn't handle images, you'll get a warning stating that images were not processed.
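If you want to poll the status outside Postman, a minimal sketch using the Python `requests` package follows; the service, key, and indexer names are placeholders, not values from this walkthrough.

```python
import requests

service_name = "my-search-service"    # placeholder: your search service name
admin_key = "<admin api-key>"         # placeholder: an admin key for the service
indexer_name = "cog-search-demo-idxr" # placeholder: the indexer created earlier
api_version = "2020-06-30"

url = (f"https://{service_name}.search.windows.net/indexers/"
       f"{indexer_name}/status?api-version={api_version}")
status = requests.get(url, headers={"api-key": admin_key}).json()

print("Indexer status:", status["status"])
last_run = status.get("lastResult")   # not populated until the indexer has run at least once
if last_run:
    print("Last run:", last_run["status"])
    print("Errors:", last_run["errors"])
    print("Warnings:", last_run["warnings"])
```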
This tutorial demonstrates the basic steps for building an enriched indexing pip
[Built-in skills](cognitive-search-predefined-skills.md) were introduced, along with skillset definition and the mechanics of chaining skills together through inputs and outputs. You also learned that `outputFieldMappings` in the indexer definition is required for routing enriched values from the pipeline into a searchable index on an Azure Cognitive Search service.
-Finally, you learned how to test results and reset the system for further iterations. You learned that issuing queries against the index returns the output created by the enriched indexing pipeline.
+Finally, you learned how to test results and reset the system for further iterations. You learned that issuing queries against the index returns the output created by the enriched indexing pipeline.
## Clean up resources
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Previously updated : 07/10/2023 Last updated : 07/28/2023 # Vector search within Azure Cognitive Search
Last updated 07/10/2023
> [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [alpha SDKs](https://github.com/Azure/cognitive-search-vector-pr#readme).
-This article is a high-level introduction to vector search in Azure Cognitive Search. It also explains integration with other Azure services and covers the core concepts you should know for vector search development.
+This article is a high-level introduction to vector search in Azure Cognitive Search. It also explains integration with other Azure services and covers [terms and concepts](#vector-search-concepts) related to vector search development.
We recommend this article for background, but if you'd rather get started, follow these steps:
Scenarios for vector search include:
+ **Filtered vector search**. A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to text and numeric fields, and are useful for including or excluding search documents based on filter criteria. Although a vector field isn't filterable itself, you can set up a filterable text or numeric field. The search engine processes the filter first, reducing the surface area of the search corpus before running the vector query.
-+ **Vector database**. Use Cognitive Search as a vector store to serve as long-term memory or an external knowledge base for Large Language Models (LLMs), or other applications.
++ **Vector database**. Use Cognitive Search as a vector store to serve as long-term memory or an external knowledge base for Large Language Models (LLMs), or other applications. For example, you can use Azure Cognitive Search as a [*vector index* in an Azure Machine Learning prompt flow](/azure/machine-learning/concept-vector-stores) for Retrieval Augmented Generation (RAG) applications. ## Azure integration and related services
If you're new to vectors, this section explains some core concepts.
### About vector search
-Vector search is a method of information retrieval that aims to overcome the limitations of traditional keyword-based search. Rather than relying solely on lexical analysis and matching of individual query terms, vector search uses machine learning models to capture the meaning of words and phrases in context. This is done by representing documents and queries as vectors in a high-dimensional space, called an embedding. By capturing the intent of the query with the embedding, vector search can return more relevant results that match the user's needs, even if the exact terms aren't present in the document. Additionally, vector search can be applied to different types of content, such as images and videos, not just text. This enables new search experiences such as multi-modal search or cross-language search.
+Vector search is a method of information retrieval where documents and queries are represented as vectors instead of plain text. In vector search, machine learning models generate the vector representations of source inputs, which can be text, images, audio, or video content. Having a mathematic representation of content provides a common basis for search scenarios. If everything is a vector, a query can find a match in vector space, even if the associated original content is in different media or in a different language than the query.
+
+### Why use vector search
+
+Vectors can overcome the limitations of traditional keyword-based search by using machine learning models to capture the meaning of words and phrases in context, rather than relying solely on lexical analysis and matching of individual query terms. By capturing the intent of the query, vector search can return more relevant results that match the user's needs, even if the exact terms aren't present in the document. Additionally, vector search can be applied to different types of content, such as images and videos, not just text. This enables new search experiences such as multi-modal search or cross-language search.
### Embeddings and vectorization
-*Embeddings* are a specific type of vector representation created by machine learning models that capture the semantic meaning of text, or representations of other content such as images. Natural language machine learning models are trained on large amounts of data to identify patterns and relationships between words. During training, they learn to represent any input as a vector of real numbers in an intermediary step called the *encoder*. After training is complete, these language models can be modified so the intermediary vector representation becomes the model's output. The resulting embeddings are high-dimensional vectors, where words with similar meanings are closer together in the vector space, as explained in [this Azure OpenAI Service article](/azure/ai-services/openai/concepts/understand-embeddings).
+*Embeddings* are a specific type of vector representation of content or a query, created by machine learning models that capture the semantic meaning of text or representations of other content such as images. Natural language machine learning models are trained on large amounts of data to identify patterns and relationships between words. During training, they learn to represent any input as a vector of real numbers in an intermediary step called the *encoder*. After training is complete, these language models can be modified so the intermediary vector representation becomes the model's output. The resulting embeddings are high-dimensional vectors, where words with similar meanings are closer together in the vector space, as explained in [this Azure OpenAI Service article](/azure/ai-services/openai/concepts/understand-embeddings).
The effectiveness of vector search in retrieving relevant information depends on the effectiveness of the embedding model in distilling the meaning of documents and queries into the resulting vector. The best models are well-trained on the types of data they're representing. You can evaluate existing models such as Azure OpenAI text-embedding-ada-002, bring your own model that's trained directly on the problem space, or fine-tune a general-purpose model. Azure Cognitive Search doesn't impose constraints on which model you choose, so pick the best one for your data.
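To make "closer together in the vector space" concrete, similarity between two embeddings is commonly scored with cosine similarity. The sketch below uses tiny hand-made vectors purely for illustration; embeddings from a real model have hundreds or thousands of dimensions, but the computation is the same.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings"; a real embedding model produces these from text or images.
dog = np.array([0.90, 0.10, 0.05, 0.30])
puppy = np.array([0.85, 0.15, 0.05, 0.35])
invoice = np.array([0.05, 0.80, 0.60, 0.10])

print(cosine_similarity(dog, puppy))    # high score: related concepts sit close together
print(cosine_similarity(dog, invoice))  # low score: unrelated concepts are far apart
```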
sentinel Connect Cef Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md
This example collects events for:
## Next steps
-In this article, you learned how to set up the Windows CEF via AMA connector to upload data from appliances that support CEF over Syslog. To learn more about Microsoft Sentinel, see the following articles:
+In this article, you learned how to set up the CEF via AMA connector to upload data from appliances that support CEF over Syslog. To learn more about Microsoft Sentinel, see the following articles:
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md). - [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Logstash Data Connection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash-data-connection-rules.md
Title: Use Logstash to stream logs with pipeline transformations via DCR-based API
-description: Use Logstash to forward logs from external data sources into custom and standard tables in Microsoft Sentinel, and to configure the output with DCRs.
+description: Use Logstash to forward logs from external data sources into custom and standard tables in Microsoft Sentinel, and to configure the output with DCRs.
Last updated 11/07/2022
> [!IMPORTANT] > Data ingestion using the Logstash output plugin with Data Collection Rules (DCRs) is currently in public preview. This feature is provided without a service level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Microsoft Sentinel's new Logstash output plugin supports pipeline transformations and advanced configuration via Data Collection Rules (DCRs). The plugin forwards any type of logs from external data sources into custom or standard tables in Microsoft Sentinel.
+Microsoft Sentinel's new Logstash output plugin supports pipeline transformations and advanced configuration via Data Collection Rules (DCRs). The plugin forwards any type of logs from external data sources into custom or standard tables in Microsoft Sentinel.
In this article, you learn how to set up the new Logstash plugin to stream the data into Microsoft Sentinel using DCRs, with full control over the output schema. Learn how to **[deploy the plugin](#deploy-the-microsoft-sentinel-output-plugin-in-logstash)**. > [!NOTE]
-> A [previous version of the Logstash plugin](connect-logstash.md) allows you to connect data sources through Logstash via the Data Collection API.
+> A [previous version of the Logstash plugin](connect-logstash.md) allows you to connect data sources through Logstash via the Data Collection API.
With the new plugin, you can: - Control the configuration of the column names and types.-- Perform ingestion-time transformations like filtering or enrichment.
+- Perform ingestion-time transformations like filtering or enrichment.
- Ingest custom logs into a custom table, or ingest a Syslog input stream into the Microsoft Sentinel Syslog table. Ingestion into standard tables is limited only to [standard tables supported for custom logs ingestion](data-transformation.md#data-transformation-support-for-custom-data-connectors).
The Logstash engine is composed of three components:
The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to your Log Analytics workspace, using the Log Analytics HTTP Data Collector REST API. The data is ingested into custom logs. - Learn more about the [Log Analytics REST API](/rest/api/loganalytics/create-request).-- Learn more about [custom logs](../azure-monitor/agents/data-sources-custom-logs.md).
+- Learn more about [custom logs](../azure-monitor/agents/data-sources-custom-logs.md).
## Deploy the Microsoft Sentinel output plugin in Logstash
The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to y
1. Review the [prerequisites](#prerequisites) 1. [Install the plugin](#install-the-plugin)
-1. [Create a sample file](#create-a-sample-file)
+1. [Create a sample file](#create-a-sample-file)
1. [Create the required DCR-related resources](#create-the-required-dcr-resources)
-1. [Configure Logstash configuration file](#configure-logstash-configuration-file)
+1. [Configure Logstash configuration file](#configure-logstash-configuration-file)
1. [Restart Logstash](#restart-logstash) 1. [View incoming logs in Microsoft Sentinel](#view-incoming-logs-in-microsoft-sentinel) 1. [Monitor output plugin audit logs](#monitor-output-plugin-audit-logs) ### Prerequisites -- Install a supported version of Logstash. The plugin supports:
+- Install a supported version of Logstash. The plugin supports:
- Logstash version 7.0 to 7.17.10.
- - Logstash version 8.0 to 8.8.1.
-
+ - Logstash version 8.0 to 8.8.1.
> [!NOTE] > If you use Logstash 8, we recommend that you [disable ECS in the pipeline](https://www.elastic.co/guide/en/logstash/8.4/ecs-ls.html).
The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to y
The Microsoft Sentinel output plugin is available in the Logstash collection. -- Follow the instructions in the Logstash [Working with plugins](https://www.elastic.co/guide/en/logstash/current/working-with-plugins.html) document to install the **[microsoft-sentinel-logstash-output-plugin](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-sentinel-logstash-output-plugin)** plugin.
+- Follow the instructions in the Logstash [Working with plugins](https://www.elastic.co/guide/en/logstash/current/working-with-plugins.html) document to install the **[microsoft-sentinel-logstash-output-plugin](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-sentinel-logstash-output-plugin)** plugin.
- If your Logstash system does not have Internet access, follow the instructions in the Logstash [Offline Plugin Management](https://www.elastic.co/guide/en/logstash/current/offline-plugins.html) document to prepare and use an offline plugin pack. (This will require you to build another Logstash system with Internet access.)
-
+ ### Create a sample file In this section, you create a sample file in one of these scenarios:
In this section, you create a sample file in one of these scenarios:
- [Create a sample file to ingest logs into the Syslog table](#create-a-sample-file-to-ingest-logs-into-the-syslog-table) #### Create a sample file for custom logs
-
-In this scenario, you configure the Logstash input plugin to send events to Microsoft Sentinel. For this example, we use the generator input plugin to simulate events. You can use any other input plugin.
+
+In this scenario, you configure the Logstash input plugin to send events to Microsoft Sentinel. For this example, we use the generator input plugin to simulate events. You can use any other input plugin.
In this example, the Logstash configuration file looks like this: ``` input {
- generator {
- lines => [
- "This is a test log message"
- ]
- count => 10
- }
+ generator {
+ lines => [
+ "This is a test log message"
+ ]
+ count => 10
+ }
} ```
-1. Copy the output plugin configuration below to your Logstash configuration file.
+1. Copy the output plugin configuration below to your Logstash configuration file.
``` output { microsoft-sentinel-logstash-output-plugin { create_sample_file => true
- sample_file_path => "<enter the path to the file in which the sample data will be written>" #for example: "c:\\temp" (for windows) or "/tmp" for Linux.
+ sample_file_path => "<enter the path to the file in which the sample data will be written>" #for example: "c:\\temp" (for windows) or "/tmp" for Linux.
} } ```
-1. To make sure that the referenced file path exists before creating the sample file, start Logstash.
-
- The plugin writes ten records to a sample file named `sampleFile<epoch seconds>.json` in the configured path. For example: *c:\temp\sampleFile1648453501.json*.
+1. To make sure that the referenced file path exists before creating the sample file, start Logstash.
+
+ The plugin writes ten records to a sample file named `sampleFile<epoch seconds>.json` in the configured path. For example: *c:\temp\sampleFile1648453501.json*.
Here is part of a sample file that the plugin creates:
-
- ```
+
+ ```json
[
- {
- "host": "logstashMachine",
- "sequence": 0,
- "message": "This is a test log message",
- "ls_timestamp": "2022-03-28T17:45:01.690Z",
- "ls_version": "1"
- },
- {
- "host": "logstashMachine",
- "sequence": 1
- ...
-
- ]
+ {
+ "host": "logstashMachine",
+ "sequence": 0,
+ "message": "This is a test log message",
+ "ls_timestamp": "2022-03-28T17:45:01.690Z",
+ "ls_version": "1"
+ },
+ {
+ "host": "logstashMachine",
+ "sequence": 1
+ ...
+ }
+ ]
```
- The plugin automatically adds these properties to every record:
+ The plugin automatically adds these properties to every record:
- `ls_timestamp`: The time when the record is received from the input plugin - `ls_version`: The Logstash pipeline version.
-
- You can remove these fields when you [create the DCR](#create-the-required-dcr-resources).
-#### Create a sample file to ingest logs into the Syslog table
+ You can remove these fields when you [create the DCR](#create-the-required-dcr-resources).
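Before you define the DCR, it can help to confirm which fields the plugin actually wrote, because the `streamDeclarations` columns you define later should match those field names and types. The following minimal sketch only loads and lists the sample file; the path is a placeholder for whatever you set in `sample_file_path`.

```python
import json

# Placeholder path: use the file the plugin wrote, for example
# "/tmp/sampleFile1648453501.json" or "c:\\temp\\sampleFile1648453501.json".
sample_path = "/tmp/sampleFile1648453501.json"

with open(sample_path, encoding="utf-8") as sample_file:
    records = json.load(sample_file)

print(f"{len(records)} sample records found")
# The field names of the first record are candidates for streamDeclarations columns.
for name, value in records[0].items():
    print(f"{name}: {type(value).__name__}")
```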
+
+#### Create a sample file to ingest logs into the Syslog table
-In this scenario, you configure the Logstash input plugin to send syslog events to Microsoft Sentinel.
+In this scenario, you configure the Logstash input plugin to send syslog events to Microsoft Sentinel.
1. If you don't already have syslog messages forwarded into your Logstash machine, you can use the logger command to generate messages. For example (for Linux):
In this scenario, you configure the Logstash input plugin to send syslog events
} } ```
-1. Copy the output plugin configuration below to your Logstash configuration file.
+1. Copy the output plugin configuration below to your Logstash configuration file.
``` output { microsoft-sentinel-logstash-output-plugin { create_sample_file => true
- sample_file_path => "<enter the path to the file in which the sample data will be written>" #for example: "c:\\temp" (for windows) or "/tmp" for Linux.
+ sample_file_path => "<enter the path to the file in which the sample data will be written>" #for example: "c:\\temp" (for windows) or "/tmp" for Linux.
} } ```
-1. To make sure that the file path exists before creating the sample file, start Logstash.
+1. To make sure that the file path exists before creating the sample file, start Logstash.
- The plugin writes ten records to a sample file named `sampleFile<epoch seconds>.json` in the configured path. For example: *c:\temp\sampleFile1648453501.json*.
+ The plugin writes ten records to a sample file named `sampleFile<epoch seconds>.json` in the configured path. For example: *c:\temp\sampleFile1648453501.json*.
Here is part of a sample file that the plugin creates:+
+ ```json
+ [
+ {
+ "logsource": "logstashMachine",
+ "facility": 20,
+ "severity_label": "Warning",
+ "severity": 4,
+ "timestamp": "Apr 7 08:26:04",
+ "program": "CEF:",
+ "host": "127.0.0.1",
+ "facility_label": "local4",
+ "priority": 164,
+ "message": "0|Microsoft|Device|cef-test|example|data|1|here is some more data for the example",
+ "ls_timestamp": "2022-04-07T08:26:04.000Z",
+ "ls_version": "1"
+ }
+ ]
```
- [
- {
- "logsource": "logstashMachine",
- "facility": 20,
- "severity_label": "Warning",
- "severity": 4,
- "timestamp": "Apr 7 08:26:04",
- "program": "CEF:",
- "host": "127.0.0.1",
- "facility_label": "local4",
- "priority": 164,
- "message": 0|Microsoft|Device|cef-test|example|data|1|here is some more data for the example",
- "ls_timestamp": "2022-04-07T08:26:04.000Z",
- "ls_version": "1"
- }
- ]
-
- ```
- The plugin automatically adds these properties to every record:
+
+ The plugin automatically adds these properties to every record:
- `ls_timestamp`: The time when the record is received from the input plugin - `ls_version`: The Logstash pipeline version.
-
+ You can remove these fields when you [create the DCR](#create-the-required-dcr-resources). ### Create the required DCR resources
To configure the Microsoft Sentinel DCR-based Logstash plugin, you first need to
In this section, you create resources to use for your DCR, in one of these scenarios: - [Create DCR resources for ingestion into a custom table](#create-dcr-resources-for-ingestion-into-a-custom-table)-- [Create DCR resources for ingestion into a standard table](#create-dcr-resources-for-ingestion-into-a-standard-table)
+- [Create DCR resources for ingestion into a standard table](#create-dcr-resources-for-ingestion-into-a-standard-table)
#### Create DCR resources for ingestion into a custom table
-To ingest the data to a custom table, follow these steps (based on the [Send data to Azure Monitor Logs using REST API (Azure portal) tutorial](../azure-monitor/logs/tutorial-logs-ingestion-portal.md)):
+To ingest the data to a custom table, follow these steps (based on the [Send data to Azure Monitor Logs using REST API (Azure portal) tutorial](../azure-monitor/logs/tutorial-logs-ingestion-portal.md)):
1. Review the [prerequisites](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#prerequisites). 1. [Configure the application](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-azure-ad-application). 1. [Create a data collection endpoint](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-data-collection-endpoint).
-1. [Add a custom log table](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-new-table-in-log-analytics-workspace).
+1. [Add a custom log table](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#create-new-table-in-log-analytics-workspace).
1. [Parse and filter sample data](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#parse-and-filter-sample-data) using [the sample file you created in the previous section](#create-a-sample-file). 1. [Collect information from the DCR](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#collect-information-from-the-dcr). 1. [Assign permissions to the DCR](../azure-monitor/logs/tutorial-logs-ingestion-portal.md#assign-permissions-to-the-dcr).
If you come across any issues, see the [troubleshooting steps](../azure-monitor/
#### Create DCR resources for ingestion into a standard table
-To ingest the data to a standard table like Syslog or CommonSecurityLog, you use a process based on the [Send data to Azure Monitor Logs using REST API (Resource Manager templates) tutorial](../azure-monitor/logs/tutorial-logs-ingestion-api.md). While the tutorial explains how to ingest data into a custom table, you can easily adjust the process to ingest data into a standard table. The steps below indicate relevant changes in the steps.
-
+To ingest the data into a standard table like Syslog or CommonSecurityLog, you use a process based on the [Send data to Azure Monitor Logs using REST API (Resource Manager templates) tutorial](../azure-monitor/logs/tutorial-logs-ingestion-api.md). While the tutorial explains how to ingest data into a custom table, you can easily adjust the process to ingest data into a standard table. The steps below call out the relevant changes.
+ 1. Review the [prerequisites](../azure-monitor/logs/tutorial-logs-ingestion-api.md#prerequisites). 1. [Collect workspace details](../azure-monitor/logs/tutorial-logs-ingestion-api.md#collect-workspace-details).
-1. [Configure an application](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-azure-ad-application).
-
+1. [Configure an application](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-azure-ad-application).
+ Skip the Create new table in Log Analytics workspace step. This step isn't relevant when ingesting data into a standard table, because the table is already defined in Log Analytics. 1. [Create data collection endpoint](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-data-collection-endpoint).
-1. [Create the DCR](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-data-collection-rule). In this step:
- - Provide [the sample file you created in the previous section](#create-a-sample-file).
- - Use the sample file you created to define the `streamDeclarations` property. Each of the fields in the sample file should have a corresponding column with the same name and the appropriate type (see the [example](#example-dcr-that-ingests-data-into-the-syslog-table) below).
- - Configure the value of the `outputStream` property with the name of the standard table instead of the custom table. Unlike custom tables, standard table names don't have the `_CL` suffix.
- - The prefix of the table name should be `Microsoft-` instead of `Custom-`. In our example, the `outputStream` property value is `Microsoft-Syslog`.
+1. [Create the DCR](../azure-monitor/logs/tutorial-logs-ingestion-api.md#create-data-collection-rule). In this step:
+ - Provide [the sample file you created in the previous section](#create-a-sample-file).
+ - Use the sample file you created to define the `streamDeclarations` property. Each of the fields in the sample file should have a corresponding column with the same name and the appropriate type (see the [example](#example-dcr-that-ingests-data-into-the-syslog-table) below).
+ - Configure the value of the `outputStream` property with the name of the standard table instead of the custom table. Unlike custom tables, standard table names don't have the `_CL` suffix.
+ - The prefix of the table name should be `Microsoft-` instead of `Custom-`. In our example, the `outputStream` property value is `Microsoft-Syslog`.
1. [Assign permissions to a DCR](../azure-monitor/logs/tutorial-logs-ingestion-api.md#assign-permissions-to-a-dcr). Skip the Send sample data step.
If you come across any issues, see the [troubleshooting steps](../azure-monitor/
##### Example: DCR that ingests data into the Syslog table
-Note that:
-- The `streamDeclarations` column names and types should be the same as the sample file fields, but you do not have to specify all of them. For example, in the DCR below, the `PRI`, `type` and `ls_version` fields are omitted from the `streamDeclarations` column.
+Note that:
+- The `streamDeclarations` column names and types should be the same as the sample file fields, but you do not have to specify all of them. For example, in the DCR below, the `PRI`, `type` and `ls_version` fields are omitted from the `streamDeclarations` column.
- The `dataflows` property transforms the input to the Syslog table format, and sets the `outputStream` to `Microsoft-Syslog`.
-```
+```json
{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "dataCollectionRuleName": {
- "type": "String",
- "metadata": {
- "description": "Specifies the name of the Data Collection Rule to create."
- }
- },
- "location": {
- "defaultValue": "westus2",
- "allowedValues": [
- "westus2",
- "eastus2",
- "eastus2euap"
- ],
- "type": "String",
- "metadata": {
- "description": "Specifies the location in which to create the Data Collection Rule."
- }
- },
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionRuleName": {
+ "type": "String",
+ "metadata": {
+ "description": "Specifies the name of the Data Collection Rule to create."
+ }
+ },
+ "location": {
+ "defaultValue": "westus2",
+ "allowedValues": [
+ "westus2",
+ "eastus2",
+ "eastus2euap"
+ ],
+ "type": "String",
+ "metadata": {
+ "description": "Specifies the location in which to create the Data Collection Rule."
+ }
+ },
"location": {
- "defaultValue": "[resourceGroup().location]",
- "type": "String",
+ "defaultValue": "[resourceGroup().location]",
+ "type": "String",
"metadata": {
- "description": "Specifies the location in which to create the Data Collection Rule."
- }
+ "description": "Specifies the location in which to create the Data Collection Rule."
+ }
},
- "workspaceResourceId": {
- "type": "String",
- "metadata": {
- "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
- }
- },
- "endpointResourceId": {
- "type": "String",
- "metadata": {
- "description": "Specifies the Azure resource ID of the Data Collection Endpoint to use."
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Insights/dataCollectionRules",
- "apiVersion": "2021-09-01-preview",
- "name": "[parameters('dataCollectionRuleName')]",
- "location": "[parameters('location')]",
- "properties": {
- "dataCollectionEndpointId": "[parameters('endpointResourceId')]",
- "streamDeclarations": {
- "Custom-SyslogStream": {
- "columns": [
- {
+ "workspaceResourceId": {
+ "type": "String",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
+ }
+ },
+ "endpointResourceId": {
+ "type": "String",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Data Collection Endpoint to use."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "apiVersion": "2021-09-01-preview",
+ "name": "[parameters('dataCollectionRuleName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "dataCollectionEndpointId": "[parameters('endpointResourceId')]",
+ "streamDeclarations": {
+ "Custom-SyslogStream": {
+ "columns": [
+ {
"name": "ls_timestamp", "type": "datetime"
- }, {
+ }, {
"name": "timestamp", "type": "datetime" }, { "name": "message", "type": "string"
- },
- {
+ },
+ {
"name": "facility_label", "type": "string" },
- {
+ {
"name": "severity_label", "type": "string" },
Note that:
"name": "logsource", "type": "string" }
- ]
- }
- },
- "destinations": {
- "logAnalytics": [
- {
- "workspaceResourceId": "[parameters('workspaceResourceId')]",
- "name": "clv2ws1"
- }
- ]
- },
- "dataFlows": [
- {
- "streams": [
- "Custom-SyslogStream"
- ],
- "destinations": [
- "clv2ws1"
- ],
- "transformKql": "source | project TimeGenerated = ls_timestamp, EventTime = todatetime(timestamp), Computer = logsource, HostName = logsource, HostIP = host, SyslogMessage = message, Facility = facility_label, SeverityLevel = severity_label",
- "outputStream": "Microsoft-Syslog"
- }
- ]
- }
- }
- ],
- "outputs": {
- "dataCollectionRuleId": {
- "type": "String",
- "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
- }
- }
+ ]
+ }
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "[parameters('workspaceResourceId')]",
+ "name": "clv2ws1"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Custom-SyslogStream"
+ ],
+ "destinations": [
+ "clv2ws1"
+ ],
+ "transformKql": "source | project TimeGenerated = ls_timestamp, EventTime = todatetime(timestamp), Computer = logsource, HostName = logsource, HostIP = host, SyslogMessage = message, Facility = facility_label, SeverityLevel = severity_label",
+ "outputStream": "Microsoft-Syslog"
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionRuleId": {
+ "type": "String",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
+ }
+ }
} ```
To configure the Logstash configuration file to ingest the logs into a custom ta
|`dcr_immutable_id` |The value of the DCR `immutableId` in step 6 when you [create the DCR resources](#create-the-required-dcr-resources), according to the tutorial you used in this section. | |`dcr_stream_name` |For custom tables, as explained in step 6 when you [create the DCR resources](#create-dcr-resources-for-ingestion-into-a-custom-table), go to the JSON view of the DCR, and copy the `dataFlows` > `streams` property. See the `dcr_stream_name` in the [example](#example-output-plugin-configuration-section) below.<br><br>For standard tables, the value is `Custom-SyslogStream`. |
-After you retrieve the required values:
+After you retrieve the required values:
-1. Replace the output section of the [Logstash configuration file](#create-a-sample-file) you created in the previous step with the example below.
-1. Replace the placeholder strings in the [example](#example-output-plugin-configuration-section) below with the values you retrieved.
-1. Make sure you change the `create_sample_file` attribute to `false`.
+1. Replace the output section of the [Logstash configuration file](#create-a-sample-file) you created in the previous step with the example below.
+1. Replace the placeholder strings in the [example](#example-output-plugin-configuration-section) below with the values you retrieved.
+1. Make sure you change the `create_sample_file` attribute to `false`.
#### Optional configuration
output {
To set other parameters for the Microsoft Sentinel Logstash output plugin, see the output plugin's readme file. > [!NOTE]
-> For security reasons, we recommend that you don't implicitly state the `client_app_Id`, `client_app_secret`, `tenant_id`, `data_collection_endpoint`, and `dcr_immutable_id` attributes in your Logstash configuration file. We recommend that you store this sensitive information in a [Logstash KeyStore](https://www.elastic.co/guide/en/logstash/current/keystore.html#keystore).
+> For security reasons, we recommend that you don't explicitly state the `client_app_Id`, `client_app_secret`, `tenant_id`, `data_collection_endpoint`, and `dcr_immutable_id` attributes in your Logstash configuration file. We recommend that you store this sensitive information in a [Logstash KeyStore](https://www.elastic.co/guide/en/logstash/current/keystore.html#keystore).
### Restart Logstash
If you are not seeing any data in this log file, generate and send some events l
- Ingestion into standard tables is limited only to [standard tables supported for custom logs ingestion](data-transformation.md#data-transformation-support-for-custom-data-connectors). - The columns of the input stream in the `streamDeclarations` property must start with a letter. If you start a column with other characters (for example `@` or `_`), the operation fails. - The `TimeGenerated` datetime field is required. You must include this field in the KQL transform.-- For additional possible issues, review the [troubleshooting section](../azure-monitor/logs/tutorial-logs-ingestion-code.md#troubleshooting) in the tutorial.
+- For additional possible issues, review the [troubleshooting section](../azure-monitor/logs/tutorial-logs-ingestion-code.md#troubleshooting) in the tutorial.
## Next steps
sentinel Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization.md
Microsoft Sentinel ingests data from many sources. Working with various data typ
Sometimes, you'll need separate rules, workbooks, and queries, even when data types share common elements, such as firewall devices. Correlating between different types of data during an investigation and hunting can also be challenging.
-The Advanced Security Information Model (ASIM) is a layer that is located between these diverse sources and the user. ASIM follows the [robustness principle](https://en.wikipedia.org/wiki/Robustness_principle): **"Be strict in what you send, be flexible in what you accept"**. Using the robustness principle as design pattern, ASIM transforms Microsoft Sentinel's inconsistent and hard to use source telemetry to user friendly data.
+The Advanced Security Information Model (ASIM) is a layer located between these diverse sources and the user. ASIM follows the [robustness principle](https://en.wikipedia.org/wiki/Robustness_principle): **"Be strict in what you send, be flexible in what you accept"**. Using the robustness principle as a design pattern, ASIM transforms the proprietary source telemetry collected by Microsoft Sentinel into user-friendly data that facilitates exchange and integration.
This article provides an overview of the Advanced Security Information Model (ASIM), its use cases and major components. Refer to the [next steps](#next-steps) section for more details.
service-bus-messaging Service Bus Java How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-queues.md
> * [JavaScript](service-bus-nodejs-how-to-use-queues.md) > * [Python](service-bus-python-how-to-use-queues.md)
-In this quickstart, you create a Java app to send messages to and receive messages from an Azure Service Bus queue.
+In this quickstart, you create a Java app to send messages to and receive messages from an Azure Service Bus queue.
> [!NOTE]
-> This quick start provides step-by-step instructions for a simple scenario of sending messages to a Service Bus queue and receiving them. You can find pre-built Java samples for Azure Service Bus in the [Azure SDK for Java repository on GitHub](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/servicebus/azure-messaging-servicebus/src/samples).
+> This quickstart provides step-by-step instructions for a simple scenario of sending messages to a Service Bus queue and receiving them. You can find pre-built Java samples for Azure Service Bus in the [Azure SDK for Java repository on GitHub](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/servicebus/azure-messaging-servicebus/src/samples).
> [!TIP] > If you're working with Azure Service Bus resources in a Spring application, we recommend that you consider [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Service Bus, see [Spring Cloud Stream with Azure Service Bus](/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-with-service-bus). ## Prerequisites - An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).-- Install [Azure SDK for Java][Azure SDK for Java]. If you're using Eclipse, you can install the [Azure Toolkit for Eclipse][Azure Toolkit for Eclipse] that includes the Azure SDK for Java. You can then add the **Microsoft Azure Libraries for Java** to your project. If you're using IntelliJ, see [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/installation).
+- Install [Azure SDK for Java][Azure SDK for Java]. If you're using Eclipse, you can install the [Azure Toolkit for Eclipse][Azure Toolkit for Eclipse] that includes the Azure SDK for Java. You can then add the **Microsoft Azure Libraries for Java** to your project. If you're using IntelliJ, see [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/installation).
[!INCLUDE [service-bus-create-namespace-portal](./includes/service-bus-create-namespace-portal.md)]
In this quickstart, you create a Java app to send messages to and receive messag
## Send messages to a queue
-In this section, you create a Java console project, and add code to send messages to the queue that you created earlier.
+In this section, you create a Java console project, and add code to send messages to the queue that you created earlier.
### Create a Java console project
-Create a Java project using Eclipse or a tool of your choice.
+Create a Java project using Eclipse or a tool of your choice.
### Configure your application to use Service Bus
-Add references to Azure Core and Azure Service Bus libraries.
+Add references to Azure Core and Azure Service Bus libraries.
If you're using Eclipse and created a Java console application, convert your Java project to a Maven project: right-click the project in the **Package Explorer** window, select **Configure** -> **Convert to Maven project**. Then, add dependencies to these two libraries as shown in the following example. ### [Passwordless (Recommended)](#tab/passwordless)
-Update the `pom.xml` file to add dependencies to Azure Service Bus and Azure Identity packages.
+Update the `pom.xml` file to add dependencies to Azure Service Bus and Azure Identity packages.
```xml <dependencies>
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-messaging-servicebus</artifactId>
- <version>7.13.3</version>
- </dependency>
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.8.0</version>
- <scope>compile</scope>
- </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-servicebus</artifactId>
+ <version>7.13.3</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.8.0</version>
+ <scope>compile</scope>
+ </dependency>
</dependencies> ``` ### [Connection String](#tab/connection-string)
-Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
+Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-messaging-servicebus</artifactId>
- <version>7.13.3</version>
- </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-servicebus</artifactId>
+ <version>7.13.3</version>
+ </dependency>
``` ### Add code to send messages to the queue
-1. Add the following `import` statements at the topic of the Java file.
+1. Add the following `import` statements at the top of the Java file.
### [Passwordless (Recommended)](#tab/passwordless)
-
+ ```java import com.azure.messaging.servicebus.*; import com.azure.identity.*;
-
+ import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.Arrays; import java.util.List; ```
-
+ ### [Connection String](#tab/connection-string)
-
+ ```java import com.azure.messaging.servicebus.*;
-
+ import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.Arrays; import java.util.List;
- ```
+ ```
-2. In the class, define variables to hold connection string and queue name.
+2. In the class, define variables to hold connection string and queue name.
### [Passwordless (Recommended)](#tab/passwordless)
-
+ ```java
- static String queueName = "<QUEUE NAME>";
+ static String queueName = "<QUEUE NAME>";
``` > [!IMPORTANT]
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
```java static String connectionString = "<NAMESPACE CONNECTION STRING>";
- static String queueName = "<QUEUE NAME>";
+ static String queueName = "<QUEUE NAME>";
``` > [!IMPORTANT] > Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace, and `<QUEUE NAME>` with the name of the queue.
-3. Add a method named `sendMessage` in the class to send one message to the queue.
+3. Add a method named `sendMessage` in the class to send one message to the queue.
### [Passwordless (Recommended)](#tab/passwordless)
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
> Replace `NAMESPACENAME` with the name of your Service Bus namespace. ```java
- static void sendMessage()
- {
- // create a token using the default Azure credential
+ static void sendMessage()
+ {
+ // create a token using the default Azure credential
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() .build();
-
- ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
- .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
- .credential(credential)
- .sender()
- .queueName(queueName)
- .buildClient();
-
- // send one message to the queue
- senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
- System.out.println("Sent a single message to the queue: " + queueName);
- }
-
+
+ ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .sender()
+ .queueName(queueName)
+ .buildClient();
+
+ // send one message to the queue
+ senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
+ System.out.println("Sent a single message to the queue: " + queueName);
+ }
+ ``` ### [Connection String](#tab/connection-string)
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
```java static void sendMessage() {
- // create a Service Bus Sender client for the queue
+ // create a Service Bus Sender client for the queue
ServiceBusSenderClient senderClient = new ServiceBusClientBuilder() .connectionString(connectionString) .sender() .queueName(queueName) .buildClient();
-
+ // send one message to the queue senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
- System.out.println("Sent a single message to the queue: " + queueName);
+ System.out.println("Sent a single message to the queue: " + queueName);
} ```
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
{ // create a list of messages and return it to the caller ServiceBusMessage[] messages = {
- new ServiceBusMessage("First message"),
- new ServiceBusMessage("Second message"),
- new ServiceBusMessage("Third message")
+ new ServiceBusMessage("First message"),
+ new ServiceBusMessage("Second message"),
+ new ServiceBusMessage("Third message")
}; return Arrays.asList(messages); } ```
-5. Add a method named `sendMessageBatch` method to send messages to the queue you created. This method creates a `ServiceBusSenderClient` for the queue, invokes the `createMessages` method to get the list of messages, prepares one or more batches, and sends the batches to the queue.
+5. Add a method named `sendMessageBatch` to send messages to the queue you created. This method creates a `ServiceBusSenderClient` for the queue, invokes the `createMessages` method to get the list of messages, prepares one or more batches, and sends the batches to the queue.
### [Passwordless (Recommended)](#tab/passwordless)
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
```java
- static void sendMessageBatch()
- {
- // create a token using the default Azure credential
+ static void sendMessageBatch()
+ {
+ // create a token using the default Azure credential
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() .build();
-
- ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
- .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
- .credential(credential)
- .sender()
- .queueName(queueName)
- .buildClient();
-
- // Creates an ServiceBusMessageBatch where the ServiceBus.
- ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();
-
- // create a list of messages
- List<ServiceBusMessage> listOfMessages = createMessages();
-
- // We try to add as many messages as a batch can fit based on the maximum size and send to Service Bus when
- // the batch can hold no more messages. Create a new batch for next set of messages and repeat until all
- // messages are sent.
- for (ServiceBusMessage message : listOfMessages) {
- if (messageBatch.tryAddMessage(message)) {
- continue;
- }
-
- // The batch is full, so we create a new batch and send the batch.
- senderClient.sendMessages(messageBatch);
- System.out.println("Sent a batch of messages to the queue: " + queueName);
-
- // create a new batch
- messageBatch = senderClient.createMessageBatch();
-
- // Add that message that we couldn't before.
- if (!messageBatch.tryAddMessage(message)) {
- System.err.printf("Message is too large for an empty batch. Skipping. Max size: %s.", messageBatch.getMaxSizeInBytes());
- }
- }
-
- if (messageBatch.getCount() > 0) {
- senderClient.sendMessages(messageBatch);
- System.out.println("Sent a batch of messages to the queue: " + queueName);
- }
-
- //close the client
- senderClient.close();
- }
+
+ ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .sender()
+ .queueName(queueName)
+ .buildClient();
+
+ // Create a ServiceBusMessageBatch that holds messages until the maximum batch size is reached.
+ ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();
+
+ // create a list of messages
+ List<ServiceBusMessage> listOfMessages = createMessages();
+
+ // We try to add as many messages as a batch can fit based on the maximum size and send to Service Bus when
+ // the batch can hold no more messages. Create a new batch for next set of messages and repeat until all
+ // messages are sent.
+ for (ServiceBusMessage message : listOfMessages) {
+ if (messageBatch.tryAddMessage(message)) {
+ continue;
+ }
+
+ // The batch is full, so we create a new batch and send the batch.
+ senderClient.sendMessages(messageBatch);
+ System.out.println("Sent a batch of messages to the queue: " + queueName);
+
+ // create a new batch
+ messageBatch = senderClient.createMessageBatch();
+
+ // Add the message that didn't fit in the previous batch.
+ if (!messageBatch.tryAddMessage(message)) {
+ System.err.printf("Message is too large for an empty batch. Skipping. Max size: %s.", messageBatch.getMaxSizeInBytes());
+ }
+ }
+
+ if (messageBatch.getCount() > 0) {
+ senderClient.sendMessages(messageBatch);
+ System.out.println("Sent a batch of messages to the queue: " + queueName);
+ }
+
+ //close the client
+ senderClient.close();
+ }
``` ### [Connection String](#tab/connection-string)
-
+ ```java static void sendMessageBatch() {
- // create a Service Bus Sender client for the queue
+ // create a Service Bus Sender client for the queue
ServiceBusSenderClient senderClient = new ServiceBusClientBuilder() .connectionString(connectionString) .sender()
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
.buildClient(); // Creates an ServiceBusMessageBatch where the ServiceBus.
- ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();
-
- // create a list of messages
+ ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();
+
+ // create a list of messages
List<ServiceBusMessage> listOfMessages = createMessages();
-
+ // We try to add as many messages as a batch can fit based on the maximum size and send to Service Bus when // the batch can hold no more messages. Create a new batch for next set of messages and repeat until all
- // messages are sent.
+ // messages are sent.
for (ServiceBusMessage message : listOfMessages) { if (messageBatch.tryAddMessage(message)) { continue;
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
// The batch is full, so we create a new batch and send the batch. senderClient.sendMessages(messageBatch); System.out.println("Sent a batch of messages to the queue: " + queueName);
-
+ // create a new batch messageBatch = senderClient.createMessageBatch();
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
System.err.printf("Message is too large for an empty batch. Skipping. Max size: %s.", messageBatch.getMaxSizeInBytes()); } }
-
+ if (messageBatch.getCount() > 0) { senderClient.sendMessages(messageBatch); System.out.println("Sent a batch of messages to the queue: " + queueName);
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
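For reference, a minimal sketch of the complete connection-string `sendMessageBatch` method shown only in part above, assuming the `connectionString` and `queueName` variables and the `createMessages` helper defined earlier in the quickstart:

```java
static void sendMessageBatch()
{
    // create a Service Bus sender client for the queue using the namespace connection string
    ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .sender()
        .queueName(queueName)
        .buildClient();

    // create a batch that respects the maximum allowed batch size
    ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();

    for (ServiceBusMessage message : createMessages()) {
        // when the current batch is full, send it and start a new one
        if (!messageBatch.tryAddMessage(message)) {
            senderClient.sendMessages(messageBatch);
            System.out.println("Sent a batch of messages to the queue: " + queueName);
            messageBatch = senderClient.createMessageBatch();

            // a message that doesn't fit in an empty batch is too large to send
            if (!messageBatch.tryAddMessage(message)) {
                System.err.printf("Message is too large for an empty batch. Skipping. Max size: %s.",
                    messageBatch.getMaxSizeInBytes());
            }
        }
    }

    // send whatever remains in the final batch
    if (messageBatch.getCount() > 0) {
        senderClient.sendMessages(messageBatch);
        System.out.println("Sent a batch of messages to the queue: " + queueName);
    }

    // close the client
    senderClient.close();
}
```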
## Receive messages from a queue
-In this section, you add code to retrieve messages from the queue.
+In this section, you add code to retrieve messages from the queue.
1. Add a method named `receiveMessages` to receive messages from the queue. This method creates a `ServiceBusProcessorClient` for the queue by specifying a handler for processing messages and another one for handling errors. Then, it starts the processor, waits for a few seconds, prints the messages that are received, and then stops and closes the processor.
In this section, you add code to retrieve messages from the queue.
> - Replace `QueueTest` in `QueueTest::processMessage` in the code with the name of your class. ```java
- // handles received messages
- static void receiveMessages() throws InterruptedException
- {
- CountDownLatch countdownLatch = new CountDownLatch(1);
+ // handles received messages
+ static void receiveMessages() throws InterruptedException
+ {
+ CountDownLatch countdownLatch = new CountDownLatch(1);
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() .build();
-
- ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
- .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
- .credential(credential)
- .processor()
- .queueName(queueName)
- .processMessage(QueueTest::processMessage)
- .processError(context -> processError(context, countdownLatch))
- .buildProcessorClient();
-
- System.out.println("Starting the processor");
- processorClient.start();
-
- TimeUnit.SECONDS.sleep(10);
- System.out.println("Stopping and closing the processor");
- processorClient.close();
- }
+
+ ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .processor()
+ .queueName(queueName)
+ .processMessage(QueueTest::processMessage)
+ .processError(context -> processError(context, countdownLatch))
+ .buildProcessorClient();
+
+ System.out.println("Starting the processor");
+ processorClient.start();
+
+ TimeUnit.SECONDS.sleep(10);
+ System.out.println("Stopping and closing the processor");
+ processorClient.close();
+ }
``` ### [Connection String](#tab/connection-string) > [!IMPORTANT]
- > Replace `QueueTest` in `QueueTest::processMessage` in the code with the name of your class.
+ > Replace `QueueTest` in `QueueTest::processMessage` in the code with the name of your class.
```java // handles received messages
In this section, you add code to retrieve messages from the queue.
TimeUnit.SECONDS.sleep(10); System.out.println("Stopping and closing the processor");
- processorClient.close();
- }
+ processorClient.close();
+ }
```
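Likewise, a minimal sketch of the connection-string `receiveMessages` method, assuming the same `connectionString` and `queueName` variables, a class named `QueueTest`, and the `processMessage` and `processError` handlers added in the next steps:

```java
// handles received messages
static void receiveMessages() throws InterruptedException
{
    CountDownLatch countdownLatch = new CountDownLatch(1);

    // create a processor client for the queue using the namespace connection string
    ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .processor()
        .queueName(queueName)
        .processMessage(QueueTest::processMessage)
        .processError(context -> processError(context, countdownLatch))
        .buildProcessorClient();

    System.out.println("Starting the processor");
    processorClient.start();

    // let the processor run for a few seconds, then shut it down
    TimeUnit.SECONDS.sleep(10);
    System.out.println("Stopping and closing the processor");
    processorClient.close();
}
```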
-2. Add the `processMessage` method to process a message received from the Service Bus subscription.
+2. Add the `processMessage` method to process a message received from the Service Bus subscription.
```java private static void processMessage(ServiceBusReceivedMessageContext context) { ServiceBusReceivedMessage message = context.getMessage(); System.out.printf("Processing message. Session: %s, Sequence #: %s. Contents: %s%n", message.getMessageId(), message.getSequenceNumber(), message.getBody());
- }
+ }
``` 3. Add the `processError` method to handle error messages.
In this section, you add code to retrieve messages from the queue.
System.out.printf("Error source %s, reason %s, message: %s%n", context.getErrorSource(), reason, context.getException()); }
- }
+ }
```
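The fragments above show only the final `printf` of the error handler. Here's a minimal sketch of a `processError` handler with the signature used in the processor setup; the handling of specific `ServiceBusFailureReason` values is an assumption rather than a verbatim copy of the article:

```java
private static void processError(ServiceBusErrorContext context, CountDownLatch countdownLatch) {
    System.out.printf("Error when receiving messages from namespace: '%s'. Entity: '%s'%n",
        context.getFullyQualifiedNamespace(), context.getEntityPath());

    // anything other than a ServiceBusException is unexpected; just log it
    if (!(context.getException() instanceof ServiceBusException)) {
        System.out.printf("Non-ServiceBusException occurred: %s%n", context.getException());
        return;
    }

    ServiceBusException exception = (ServiceBusException) context.getException();
    ServiceBusFailureReason reason = exception.getReason();

    if (reason == ServiceBusFailureReason.MESSAGING_ENTITY_DISABLED
        || reason == ServiceBusFailureReason.MESSAGING_ENTITY_NOT_FOUND
        || reason == ServiceBusFailureReason.UNAUTHORIZED) {
        // unrecoverable error: release the latch so the app can exit
        System.out.printf("An unrecoverable error occurred. Stopping processing with reason %s: %s%n",
            reason, exception.getMessage());
        countdownLatch.countDown();
    } else {
        System.out.printf("Error source %s, reason %s, message: %s%n",
            context.getErrorSource(), reason, context.getException());
    }
}
```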
-2. Update the `main` method to invoke `sendMessage`, `sendMessageBatch`, and `receiveMessages` methods and to throw `InterruptedException`.
+2. Update the `main` method to invoke `sendMessage`, `sendMessageBatch`, and `receiveMessages` methods and to throw `InterruptedException`.
```java
- public static void main(String[] args) throws InterruptedException {
+ public static void main(String[] args) throws InterruptedException {
sendMessage();
- sendMessageBatch();
- receiveMessages();
- }
+ sendMessageBatch();
+ receiveMessages();
+ }
``` ## Run the app ### [Passwordless (Recommended)](#tab/passwordless)
-1. If you're using Eclipse, right-click the project, select **Export**, expand **Java**, select **Runnable JAR file**, and follow the steps to create a runnable JAR file.
-1. If you are signed into the machine using a user account that's different from the user account added to the **Azure Service Bus Data Owner** role, follow these steps. Otherwise, skip this step and move on to run the Jar file in the next step.
+1. If you're using Eclipse, right-click the project, select **Export**, expand **Java**, select **Runnable JAR file**, and follow the steps to create a runnable JAR file.
+1. If you're signed in to the machine using a user account that's different from the user account added to the **Azure Service Bus Data Owner** role, follow these steps. Otherwise, skip this step and move on to run the JAR file in the next step.
- 1. [Install Azure CLI](/cli/azure/install-azure-cli-windows) on your machine.
- 1. Run the following CLI command to sign in to Azure. Use the same user account that you added to the **Azure Service Bus Data Owner** role.
+ 1. [Install Azure CLI](/cli/azure/install-azure-cli-windows) on your machine.
+ 1. Run the following CLI command to sign in to Azure. Use the same user account that you added to the **Azure Service Bus Data Owner** role.
```azurecli az login
In this section, you add code to retrieve messages from the queue.
```console
java -jar <JAR FILE NAME>
```
-1. You see the following output in the console window.
+1. You see the following output in the console window.
```console Sent a single message to the queue: myqueue
In this section, you add code to retrieve messages from the queue.
``` ### [Connection String](#tab/connection-string)
-When you run the application, you see the following messages in the console window.
+When you run the application, you see the following messages in the console window.
```console Sent a single message to the queue: myqueue
Stopping and closing the processor
```
-On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
+On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
:::image type="content" source="./media/service-bus-java-how-to-use-queues/overview-incoming-outgoing-messages.png" alt-text="Incoming and outgoing message count" lightbox="./media/service-bus-java-how-to-use-queues/overview-incoming-outgoing-messages.png":::
-Select the queue on this **Overview** page to navigate to the **Service Bus Queue** page. You see the **incoming** and **outgoing** message count on this page too. You also see other information such as the **current size** of the queue, **maximum size**, **active message count**, and so on.
+Select the queue on this **Overview** page to navigate to the **Service Bus Queue** page. You see the **incoming** and **outgoing** message count on this page too. You also see other information such as the **current size** of the queue, **maximum size**, **active message count**, and so on.
:::image type="content" source="./media/service-bus-java-how-to-use-queues/queue-details.png" alt-text="Queue details" lightbox="./media/service-bus-java-how-to-use-queues/queue-details.png":::
service-bus-messaging Service Bus Java How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions.md
In this quickstart, you write Java code using the azure-messaging-servicebus package to send messages to an Azure Service Bus topic and then receive messages from subscriptions to that topic. > [!NOTE]
-> This quick start provides step-by-step instructions for a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. You can find pre-built Java samples for Azure Service Bus in the [Azure SDK for Java repository on GitHub](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/servicebus/azure-messaging-servicebus/src/samples).
+> This quickstart provides step-by-step instructions for a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. You can find pre-built Java samples for Azure Service Bus in the [Azure SDK for Java repository on GitHub](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/servicebus/azure-messaging-servicebus/src/samples).
> [!TIP] > If you're working with Azure Service Bus resources in a Spring application, we recommend that you consider [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Service Bus, see [Spring Cloud Stream with Azure Service Bus](/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-with-service-bus).
In this quickstart, you write Java code using the azure-messaging-servicebus pac
## Prerequisites - An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [Visual Studio or MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A85619ABF) or sign-up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).-- Install [Azure SDK for Java][Azure SDK for Java]. If you're using Eclipse, you can install the [Azure Toolkit for Eclipse][Azure Toolkit for Eclipse] that includes the Azure SDK for Java. You can then add the **Microsoft Azure Libraries for Java** to your project. If you're using IntelliJ, see [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/installation).
+- Install [Azure SDK for Java][Azure SDK for Java]. If you're using Eclipse, you can install the [Azure Toolkit for Eclipse][Azure Toolkit for Eclipse] that includes the Azure SDK for Java. You can then add the **Microsoft Azure Libraries for Java** to your project. If you're using IntelliJ, see [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/installation).
[!INCLUDE [service-bus-create-namespace-portal](./includes/service-bus-create-namespace-portal.md)]
In this quickstart, you write Java code using the azure-messaging-servicebus pac
[!INCLUDE [service-bus-passwordless-template-tabbed](../../includes/passwordless/service-bus/service-bus-passwordless-template-tabbed.md)] ## Send messages to a topic
-In this section, you create a Java console project, and add code to send messages to the topic you created.
+In this section, you create a Java console project, and add code to send messages to the topic you created.
### Create a Java console project
-Create a Java project using Eclipse or a tool of your choice.
+Create a Java project using Eclipse or a tool of your choice.
### Configure your application to use Service Bus
-Add references to Azure Core and Azure Service Bus libraries.
+Add references to Azure Core and Azure Service Bus libraries.
If you're using Eclipse and created a Java console application, convert your Java project to a Maven: right-click the project in the **Package Explorer** window, select **Configure** -> **Convert to Maven project**. Then, add dependencies to these two libraries as shown in the following example. ### [Passwordless (Recommended)](#tab/passwordless)
-Update the `pom.xml` file to add dependencies to Azure Service Bus and Azure Identity packages.
+Update the `pom.xml` file to add dependencies to Azure Service Bus and Azure Identity packages.
```xml <dependencies>
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-messaging-servicebus</artifactId>
- <version>7.13.3</version>
- </dependency>
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-identity</artifactId>
- <version>1.8.0</version>
- <scope>compile</scope>
- </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-servicebus</artifactId>
+ <version>7.13.3</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.8.0</version>
+ <scope>compile</scope>
+ </dependency>
</dependencies> ``` ### [Connection String](#tab/connection-string)
-Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
+Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
```xml
- <dependency>
- <groupId>com.azure</groupId>
- <artifactId>azure-messaging-servicebus</artifactId>
- <version>7.13.3</version>
- </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-servicebus</artifactId>
+ <version>7.13.3</version>
+ </dependency>
``` ### Add code to send messages to the topic
-1. Add the following `import` statements at the topic of the Java file.
+1. Add the following `import` statements at the top of the Java file.
### [Passwordless (Recommended)](#tab/passwordless)
-
+ ```java import com.azure.messaging.servicebus.*; import com.azure.identity.*;
-
+ import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.Arrays; import java.util.List; ```
-
+ ### [Connection String](#tab/connection-string)
-
+ ```java import com.azure.messaging.servicebus.*;
-
+ import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.Arrays; import java.util.List;
- ```
-
+ ```
+
2. In the class, define variables to hold connection string (not needed for passwordless scenario), topic name, and subscription name. ### [Passwordless (Recommended)](#tab/passwordless)
-
+ ```java
- static String topicName = "<TOPIC NAME>";
+ static String topicName = "<TOPIC NAME>";
static String subName = "<SUBSCRIPTION NAME>"; ```
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
```java static String connectionString = "<NAMESPACE CONNECTION STRING>";
- static String topicName = "<TOPIC NAME>";
+ static String topicName = "<TOPIC NAME>";
static String subName = "<SUBSCRIPTION NAME>"; ```
-
+ > [!IMPORTANT] > Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace. Replace `<TOPIC NAME>` with the name of the topic, and `<SUBSCRIPTION NAME>` with the name of the > [!IMPORTANT] > Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace. Replace `<TOPIC NAME>` with the name of the topic, and `<SUBSCRIPTION NAME>` with the name of the topic's subscription.
-3. Add a method named `sendMessage` in the class to send one message to the topic.
+3. Add a method named `sendMessage` in the class to send one message to the topic.
### [Passwordless (Recommended)](#tab/passwordless)
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
> Replace `NAMESPACENAME` with the name of your Service Bus namespace. ```java
- static void sendMessage()
- {
- // create a token using the default Azure credential
+ static void sendMessage()
+ {
+ // create a token using the default Azure credential
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() .build();
-
- ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
- .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
- .credential(credential)
- .sender()
- .topicName(topicName)
- .buildClient();
-
- // send one message to the topic
- senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
- System.out.println("Sent a single message to the topic: " + topicName);
- }
-
+
+ ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .sender()
+ .topicName(topicName)
+ .buildClient();
+
+ // send one message to the topic
+ senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
+ System.out.println("Sent a single message to the topic: " + topicName);
+ }
+ ``` ### [Connection String](#tab/connection-string) ```java static void sendMessage() {
- // create a Service Bus Sender client for the topic
+ // create a Service Bus Sender client for the topic
ServiceBusSenderClient senderClient = new ServiceBusClientBuilder() .connectionString(connectionString) .sender() .topicName(topicName) .buildClient();
-
+ // send one message to the topic senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
- System.out.println("Sent a single message to the topic: " + topicName);
+ System.out.println("Sent a single message to the topic: " + topicName);
} ```
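Reformatted for readability, a minimal sketch of the connection-string `sendMessage` method for the topic, assuming the `connectionString` and `topicName` variables defined earlier:

```java
static void sendMessage()
{
    // create a Service Bus sender client for the topic
    ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .sender()
        .topicName(topicName)
        .buildClient();

    // send one message to the topic
    senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
    System.out.println("Sent a single message to the topic: " + topicName);
}
```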
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
{ // create a list of messages and return it to the caller ServiceBusMessage[] messages = {
- new ServiceBusMessage("First message"),
- new ServiceBusMessage("Second message"),
- new ServiceBusMessage("Third message")
+ new ServiceBusMessage("First message"),
+ new ServiceBusMessage("Second message"),
+ new ServiceBusMessage("Third message")
}; return Arrays.asList(messages); } ```
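Based on the fragments above, the complete `createMessages` helper looks like the following sketch:

```java
static List<ServiceBusMessage> createMessages()
{
    // create a list of messages and return it to the caller
    ServiceBusMessage[] messages = {
        new ServiceBusMessage("First message"),
        new ServiceBusMessage("Second message"),
        new ServiceBusMessage("Third message")
    };
    return Arrays.asList(messages);
}
```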
-1. Add a method named `sendMessageBatch` method to send messages to the topic you created. This method creates a `ServiceBusSenderClient` for the topic, invokes the `createMessages` method to get the list of messages, prepares one or more batches, and sends the batches to the topic.
+1. Add a method named `sendMessageBatch` to send messages to the topic you created. This method creates a `ServiceBusSenderClient` for the topic, invokes the `createMessages` method to get the list of messages, prepares one or more batches, and sends the batches to the topic.
### [Passwordless (Recommended)](#tab/passwordless)
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
> Replace `NAMESPACENAME` with the name of your Service Bus namespace. ```java
- static void sendMessageBatch()
- {
- // create a token using the default Azure credential
+ static void sendMessageBatch()
+ {
+ // create a token using the default Azure credential
DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() .build();
-
- ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
- .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
- .credential(credential)
- .sender()
- .topicName(topicName)
- .buildClient();
-
- // Creates an ServiceBusMessageBatch where the ServiceBus.
- ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();
-
- // create a list of messages
- List<ServiceBusMessage> listOfMessages = createMessages();
-
- // We try to add as many messages as a batch can fit based on the maximum size and send to Service Bus when
- // the batch can hold no more messages. Create a new batch for next set of messages and repeat until all
- // messages are sent.
- for (ServiceBusMessage message : listOfMessages) {
- if (messageBatch.tryAddMessage(message)) {
- continue;
- }
-
- // The batch is full, so we create a new batch and send the batch.
- senderClient.sendMessages(messageBatch);
- System.out.println("Sent a batch of messages to the topic: " + topicName);
-
- // create a new batch
- messageBatch = senderClient.createMessageBatch();
-
- // Add that message that we couldn't before.
- if (!messageBatch.tryAddMessage(message)) {
- System.err.printf("Message is too large for an empty batch. Skipping. Max size: %s.", messageBatch.getMaxSizeInBytes());
- }
- }
-
- if (messageBatch.getCount() > 0) {
- senderClient.sendMessages(messageBatch);
- System.out.println("Sent a batch of messages to the topic: " + topicName);
- }
-
- //close the client
- senderClient.close();
- }
+
+ ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .sender()
+ .topicName(topicName)
+ .buildClient();
+
+ // Creates a ServiceBusMessageBatch that holds as many messages as fit within the maximum batch size.
+ ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();
+
+ // create a list of messages
+ List<ServiceBusMessage> listOfMessages = createMessages();
+
+ // We try to add as many messages as a batch can fit based on the maximum size and send to Service Bus when
+ // the batch can hold no more messages. Create a new batch for next set of messages and repeat until all
+ // messages are sent.
+ for (ServiceBusMessage message : listOfMessages) {
+ if (messageBatch.tryAddMessage(message)) {
+ continue;
+ }
+
+ // The batch is full, so we create a new batch and send the batch.
+ senderClient.sendMessages(messageBatch);
+ System.out.println("Sent a batch of messages to the topic: " + topicName);
+
+ // create a new batch
+ messageBatch = senderClient.createMessageBatch();
+
+ // Add the message that couldn't fit in the previous batch.
+ if (!messageBatch.tryAddMessage(message)) {
+ System.err.printf("Message is too large for an empty batch. Skipping. Max size: %s.", messageBatch.getMaxSizeInBytes());
+ }
+ }
+
+ if (messageBatch.getCount() > 0) {
+ senderClient.sendMessages(messageBatch);
+ System.out.println("Sent a batch of messages to the topic: " + topicName);
+ }
+
+ //close the client
+ senderClient.close();
+ }
``` ### [Connection String](#tab/connection-string)
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
```java static void sendMessageBatch() {
- // create a Service Bus Sender client for the topic
+ // create a Service Bus Sender client for the topic
ServiceBusSenderClient senderClient = new ServiceBusClientBuilder() .connectionString(connectionString) .sender()
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
.buildClient(); // Creates an ServiceBusMessageBatch where the ServiceBus.
- ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();
-
- // create a list of messages
+ ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();
+
+ // create a list of messages
List<ServiceBusMessage> listOfMessages = createMessages();
-
+ // We try to add as many messages as a batch can fit based on the maximum size and send to Service Bus when // the batch can hold no more messages. Create a new batch for next set of messages and repeat until all
- // messages are sent.
+ // messages are sent.
for (ServiceBusMessage message : listOfMessages) { if (messageBatch.tryAddMessage(message)) { continue;
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
// The batch is full, so we create a new batch and send the batch. senderClient.sendMessages(messageBatch); System.out.println("Sent a batch of messages to the topic: " + topicName);
-
+ // create a new batch messageBatch = senderClient.createMessageBatch();
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
## Receive messages from a subscription
-In this section, you add code to retrieve messages from a subscription to the topic.
+In this section, you add code to retrieve messages from a subscription to the topic.
1. Add a method named `receiveMessages` to receive messages from the subscription. This method creates a `ServiceBusProcessorClient` for the subscription by specifying a handler for processing messages and another one for handling errors. Then, it starts the processor, waits for a few seconds, prints the messages that are received, and then stops and closes the processor.
In this section, you add code to retrieve messages from a subscription to the to
> [!IMPORTANT] > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
- > - Replace `ServiceBusTopicTest` in `ServiceBusTopicTest::processMessage` in the code with the name of your class.
+ > - Replace `ServiceBusTopicTest` in `ServiceBusTopicTest::processMessage` in the code with the name of your class.
```java // handles received messages
In this section, you add code to retrieve messages from a subscription to the to
.build(); // Create an instance of the processor through the ServiceBusClientBuilder
- ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
+ ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
.fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
- .credential(credential)
+ .credential(credential)
.processor() .topicName(topicName) .subscriptionName(subName)
In this section, you add code to retrieve messages from a subscription to the to
TimeUnit.SECONDS.sleep(10); System.out.println("Stopping and closing the processor");
- processorClient.close();
- }
+ processorClient.close();
+ }
``` ### [Connection String](#tab/connection-string) > [!IMPORTANT]
- > Replace `ServiceBusTopicTest` in `ServiceBusTopicTest::processMessage` in the code with the name of your class.
+ > Replace `ServiceBusTopicTest` in `ServiceBusTopicTest::processMessage` in the code with the name of your class.
```java // handles received messages
In this section, you add code to retrieve messages from a subscription to the to
TimeUnit.SECONDS.sleep(10); System.out.println("Stopping and closing the processor");
- processorClient.close();
- }
+ processorClient.close();
+ }
```
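The connection-string processor for a subscription differs from the queue version only in the entity it targets. A minimal sketch, assuming the `connectionString`, `topicName`, and `subName` variables and a class named `ServiceBusTopicTest`:

```java
// handles received messages
static void receiveMessages() throws InterruptedException
{
    CountDownLatch countdownLatch = new CountDownLatch(1);

    // create a processor client for the topic's subscription using the connection string
    ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .processor()
        .topicName(topicName)
        .subscriptionName(subName)
        .processMessage(ServiceBusTopicTest::processMessage)
        .processError(context -> processError(context, countdownLatch))
        .buildProcessorClient();

    System.out.println("Starting the processor");
    processorClient.start();

    TimeUnit.SECONDS.sleep(10);
    System.out.println("Stopping and closing the processor");
    processorClient.close();
}
```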
-2. Add the `processMessage` method to process a message received from the Service Bus subscription.
+2. Add the `processMessage` method to process a message received from the Service Bus subscription.
```java private static void processMessage(ServiceBusReceivedMessageContext context) { ServiceBusReceivedMessage message = context.getMessage(); System.out.printf("Processing message. Session: %s, Sequence #: %s. Contents: %s%n", message.getMessageId(), message.getSequenceNumber(), message.getBody());
- }
+ }
``` 3. Add the `processError` method to handle error messages.
In this section, you add code to retrieve messages from a subscription to the to
System.out.printf("Error source %s, reason %s, message: %s%n", context.getErrorSource(), reason, context.getException()); }
- }
+ }
```
-1. Update the `main` method to invoke `sendMessage`, `sendMessageBatch`, and `receiveMessages` methods and to throw `InterruptedException`.
+1. Update the `main` method to invoke `sendMessage`, `sendMessageBatch`, and `receiveMessages` methods and to throw `InterruptedException`.
```java
- public static void main(String[] args) throws InterruptedException {
- sendMessage();
- sendMessageBatch();
- receiveMessages();
- }
+ public static void main(String[] args) throws InterruptedException {
+ sendMessage();
+ sendMessageBatch();
+ receiveMessages();
+ }
``` ## Run the app
Run the program to see the output similar to the following output:
### [Passwordless (Recommended)](#tab/passwordless)
-1. If you're using Eclipse, right-click the project, select **Export**, expand **Java**, select **Runnable JAR file**, and follow the steps to create a runnable JAR file.
-1. If you are signed into the machine using a user account that's different from the user account added to the **Azure Service Bus Data Owner** role, follow these steps. Otherwise, skip this step and move on to run the Jar file in the next step.
+1. If you're using Eclipse, right-click the project, select **Export**, expand **Java**, select **Runnable JAR file**, and follow the steps to create a runnable JAR file.
+1. If you're signed in to the machine using a user account that's different from the user account added to the **Azure Service Bus Data Owner** role, follow these steps. Otherwise, skip this step and move on to run the JAR file in the next step.
- 1. [Install Azure CLI](/cli/azure/install-azure-cli-windows) on your machine.
- 1. Run the following CLI command to sign in to Azure. Use the same user account that you added to the **Azure Service Bus Data Owner** role.
+ 1. [Install Azure CLI](/cli/azure/install-azure-cli-windows) on your machine.
+ 1. Run the following CLI command to sign in to Azure. Use the same user account that you added to the **Azure Service Bus Data Owner** role.
```azurecli az login
Run the program to see the output similar to the following output:
```console
java -jar <JAR FILE NAME>
```
-1. You see the following output in the console window.
+1. You see the following output in the console window.
```console Sent a single message to the topic: mytopic
Run the program to see the output similar to the following output:
Processing message. Session: 7bd3bd3e966a40ebbc9b29b082da14bb, Sequence #: 4. Contents: Third message ``` ### [Connection String](#tab/connection-string)
-When you run the application, you see the following messages in the console window.
+When you run the application, you see the following messages in the console window.
```console Sent a single message to the topic: mytopic
Processing message. Session: 7bd3bd3e966a40ebbc9b29b082da14bb, Sequence #: 4. Co
```
-On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
+On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
:::image type="content" source="./media/service-bus-java-how-to-use-queues/overview-incoming-outgoing-messages.png" alt-text="Incoming and outgoing message count" lightbox="./media/service-bus-java-how-to-use-queues/overview-incoming-outgoing-messages.png":::
-Switch to the **Topics** tab in the middle-bottom pane, and select the topic to see the **Service Bus Topic** page for your topic. On this page, you should see four incoming and four outgoing messages in the **Messages** chart.
+Switch to the **Topics** tab in the middle-bottom pane, and select the topic to see the **Service Bus Topic** page for your topic. On this page, you should see four incoming and four outgoing messages in the **Messages** chart.
:::image type="content" source="./media/service-bus-java-how-to-use-topics-subscriptions/topic-page-portal.png" alt-text="Incoming and outgoing messages" lightbox="./media/service-bus-java-how-to-use-topics-subscriptions/topic-page-portal.png":::
-If you comment out the `receiveMessages` call in the `main` method and run the app again, on the **Service Bus Topic** page, you see 8 incoming messages (4 new) but four outgoing messages.
+If you comment out the `receiveMessages` call in the `main` method and run the app again, on the **Service Bus Topic** page, you see eight incoming messages (four new) but only four outgoing messages.
:::image type="content" source="./media/service-bus-java-how-to-use-topics-subscriptions/updated-topic-page.png" alt-text="Updated topic page" lightbox="./media/service-bus-java-how-to-use-topics-subscriptions/updated-topic-page.png":::
-On this page, if you select a subscription, you get to the **Service Bus Subscription** page. You can see the active message count, dead-letter message count, and more on this page. In this example, there are four active messages that the receiver hasn't received yet.
+On this page, if you select a subscription, you get to the **Service Bus Subscription** page. You can see the active message count, dead-letter message count, and more on this page. In this example, there are four active messages that the receiver hasn't received yet.
:::image type="content" source="./media/service-bus-java-how-to-use-topics-subscriptions/active-message-count.png" alt-text="Active message count" lightbox="./media/service-bus-java-how-to-use-topics-subscriptions/active-message-count.png":::
service-bus-messaging Service Bus Nodejs How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-nodejs-how-to-use-queues.md
> * [JavaScript](service-bus-nodejs-how-to-use-queues.md) > * [Python](service-bus-python-how-to-use-queues.md)
-In this tutorial, you complete the following steps:
+In this tutorial, you complete the following steps:
1. Create a Service Bus namespace, using the Azure portal. 2. Create a Service Bus queue, using the Azure portal.
In this tutorial, you complete the following steps:
1. Receive those messages from the queue. > [!NOTE]
-> This quick start provides step-by-step instructions for a simple scenario of sending messages to a Service Bus queue and receiving them. You can find pre-built JavaScript and TypeScript samples for Azure Service Bus in the [Azure SDK for JavaScript repository on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/servicebus/service-bus/samples/v7).
+> This quickstart provides step-by-step instructions for a simple scenario of sending messages to a Service Bus queue and receiving them. You can find pre-built JavaScript and TypeScript samples for Azure Service Bus in the [Azure SDK for JavaScript repository on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/servicebus/service-bus/samples/v7).
## Prerequisites
If you're new to the service, see [Service Bus overview](service-bus-messaging-o
To use this quickstart with your own Azure account, you need: * Install [Azure CLI](/cli/azure/install-azure-cli), which provides the passwordless authentication to your developer machine.
-* Sign in with your Azure account at the terminal or command prompt with `az login`.
+* Sign in with your Azure account at the terminal or command prompt with `az login`.
* Use the same account when you add the appropriate data role to your resource. * Run the code in the same terminal or command prompt.
-* Note down your **queue** name for your Service Bus namespace. You'll need that in the code.
+* Note down your **queue** name for your Service Bus namespace. You'll need that in the code.
### [Connection string](#tab/connection-string) Note down the following, which you'll use in the code below:
-* Service Bus namespace **connection string**
+* Service Bus namespace **connection string**
* Service Bus namespace **queue** you created
Note down the following, which you'll use in the code below:
1. To install the required npm package(s) for Service Bus, open a command prompt that has `npm` in its path, change the directory to the folder where you want to have your samples and then run this command.
-1. Install the following packages:
+1. Install the following packages:
```bash npm install @azure/service-bus @azure/identity
Note down the following, which you'll use in the code below:
1. To install the required npm package(s) for Service Bus, open a command prompt that has `npm` in its path, change the directory to the folder where you want to have your samples and then run this command.
-1. Install the following package:
+1. Install the following package:
```bash npm install @azure/service-bus
Note down the following, which you'll use in the code below:
## Send messages to a queue
-The following sample code shows you how to send a message to a queue.
+The following sample code shows you how to send a message to a queue.
### [Passwordless](#tab/passwordless)
-You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
+You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/). 1. Create a file called `send.js` and paste the below code into it. This code sends the names of scientists as messages to your queue.
-
+ The passwordless credential is provided with the [**DefaultAzureCredential**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/identity/identity#defaultazurecredential).
-
+ ```javascript const { ServiceBusClient } = require("@azure/service-bus"); const { DefaultAzureCredential } = require("@azure/identity");
-
+ // Replace `<SERVICE-BUS-NAMESPACE>` with your namespace const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
-
+ // Passwordless credential const credential = new DefaultAzureCredential();
-
+ // name of the queue const queueName = "<QUEUE NAME>"
-
+ const messages = [
- { body: "Albert Einstein" },
- { body: "Werner Heisenberg" },
- { body: "Marie Curie" },
- { body: "Steven Hawking" },
- { body: "Isaac Newton" },
- { body: "Niels Bohr" },
- { body: "Michael Faraday" },
- { body: "Galileo Galilei" },
- { body: "Johannes Kepler" },
- { body: "Nikolaus Kopernikus" }
- ];
-
+ { body: "Albert Einstein" },
+ { body: "Werner Heisenberg" },
+ { body: "Marie Curie" },
+ { body: "Steven Hawking" },
+ { body: "Isaac Newton" },
+ { body: "Niels Bohr" },
+ { body: "Michael Faraday" },
+ { body: "Galileo Galilei" },
+ { body: "Johannes Kepler" },
+ { body: "Nikolaus Kopernikus" }
+ ];
+ async function main() {
- // create a Service Bus client using the passwordless authentication to the Service Bus namespace
- const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
-
- // createSender() can also be used to create a sender for a topic.
- const sender = sbClient.createSender(queueName);
-
- try {
- // Tries to send all messages in a single batch.
- // Will fail if the messages cannot fit in a batch.
- // await sender.sendMessages(messages);
-
- // create a batch object
- let batch = await sender.createMessageBatch();
- for (let i = 0; i < messages.length; i++) {
- // for each message in the array
-
- // try to add the message to the batch
- if (!batch.tryAddMessage(messages[i])) {
- // if it fails to add the message to the current batch
- // send the current batch as it is full
- await sender.sendMessages(batch);
-
- // then, create a new batch
- batch = await sender.createMessageBatch();
-
- // now, add the message failed to be added to the previous batch to this batch
- if (!batch.tryAddMessage(messages[i])) {
- // if it still can't be added to the batch, the message is probably too big to fit in a batch
- throw new Error("Message too big to fit in a batch");
- }
- }
- }
-
- // Send the last created batch of messages to the queue
- await sender.sendMessages(batch);
-
- console.log(`Sent a batch of messages to the queue: ${queueName}`);
-
- // Close the sender
- await sender.close();
- } finally {
- await sbClient.close();
- }
+ // create a Service Bus client using the passwordless authentication to the Service Bus namespace
+ const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
+
+ // createSender() can also be used to create a sender for a topic.
+ const sender = sbClient.createSender(queueName);
+
+ try {
+ // Tries to send all messages in a single batch.
+ // Will fail if the messages cannot fit in a batch.
+ // await sender.sendMessages(messages);
+
+ // create a batch object
+ let batch = await sender.createMessageBatch();
+ for (let i = 0; i < messages.length; i++) {
+ // for each message in the array
+
+ // try to add the message to the batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it fails to add the message to the current batch
+ // send the current batch as it is full
+ await sender.sendMessages(batch);
+
+ // then, create a new batch
+ batch = await sender.createMessageBatch();
+
+ // now, add the message that failed to fit in the previous batch to this new batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it still can't be added to the batch, the message is probably too big to fit in a batch
+ throw new Error("Message too big to fit in a batch");
+ }
+ }
+ }
+
+ // Send the last created batch of messages to the queue
+ await sender.sendMessages(batch);
+
+ console.log(`Sent a batch of messages to the queue: ${queueName}`);
+
+ // Close the sender
+ await sender.close();
+ } finally {
+ await sbClient.close();
+ }
}
-
+ // call the main function main().catch((err) => {
- console.log("Error occurred: ", err);
- process.exit(1);
- });
+ console.log("Error occurred: ", err);
+ process.exit(1);
+ });
``` 3. Replace `<SERVICE-BUS-NAMESPACE>` with your Service Bus namespace.
-4. Replace `<QUEUE NAME>` with the name of the queue.
+4. Replace `<QUEUE NAME>` with the name of the queue.
5. Then run the command in a command prompt to execute this file. ```console
- node send.js
+ node send.js
``` 6. You should see the following output.
You must have signed in with the Azure CLI's `az login` in order for your local
```javascript const { ServiceBusClient } = require("@azure/service-bus");
-
+ // connection string to your Service Bus namespace const connectionString = "<CONNECTION STRING TO SERVICE BUS NAMESPACE>" // name of the queue const queueName = "<QUEUE NAME>"
-
+ const messages = [
- { body: "Albert Einstein" },
- { body: "Werner Heisenberg" },
- { body: "Marie Curie" },
- { body: "Steven Hawking" },
- { body: "Isaac Newton" },
- { body: "Niels Bohr" },
- { body: "Michael Faraday" },
- { body: "Galileo Galilei" },
- { body: "Johannes Kepler" },
- { body: "Nikolaus Kopernikus" }
+ { body: "Albert Einstein" },
+ { body: "Werner Heisenberg" },
+ { body: "Marie Curie" },
+ { body: "Steven Hawking" },
+ { body: "Isaac Newton" },
+ { body: "Niels Bohr" },
+ { body: "Michael Faraday" },
+ { body: "Galileo Galilei" },
+ { body: "Johannes Kepler" },
+ { body: "Nikolaus Kopernikus" }
];
-
+ async function main() {
- // create a Service Bus client using the connection string to the Service Bus namespace
- const sbClient = new ServiceBusClient(connectionString);
-
- // createSender() can also be used to create a sender for a topic.
- const sender = sbClient.createSender(queueName);
-
- try {
- // Tries to send all messages in a single batch.
- // Will fail if the messages cannot fit in a batch.
- // await sender.sendMessages(messages);
-
- // create a batch object
- let batch = await sender.createMessageBatch();
- for (let i = 0; i < messages.length; i++) {
- // for each message in the array
-
- // try to add the message to the batch
- if (!batch.tryAddMessage(messages[i])) {
- // if it fails to add the message to the current batch
- // send the current batch as it is full
- await sender.sendMessages(batch);
-
- // then, create a new batch
- batch = await sender.createMessageBatch();
-
- // now, add the message failed to be added to the previous batch to this batch
- if (!batch.tryAddMessage(messages[i])) {
- // if it still can't be added to the batch, the message is probably too big to fit in a batch
- throw new Error("Message too big to fit in a batch");
- }
- }
- }
-
- // Send the last created batch of messages to the queue
- await sender.sendMessages(batch);
-
- console.log(`Sent a batch of messages to the queue: ${queueName}`);
-
- // Close the sender
- await sender.close();
- } finally {
- await sbClient.close();
- }
+ // create a Service Bus client using the connection string to the Service Bus namespace
+ const sbClient = new ServiceBusClient(connectionString);
+
+ // createSender() can also be used to create a sender for a topic.
+ const sender = sbClient.createSender(queueName);
+
+ try {
+ // Tries to send all messages in a single batch.
+ // Will fail if the messages cannot fit in a batch.
+ // await sender.sendMessages(messages);
+
+ // create a batch object
+ let batch = await sender.createMessageBatch();
+ for (let i = 0; i < messages.length; i++) {
+ // for each message in the array
+
+ // try to add the message to the batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it fails to add the message to the current batch
+ // send the current batch as it is full
+ await sender.sendMessages(batch);
+
+ // then, create a new batch
+ batch = await sender.createMessageBatch();
+
+ // now, add the message that failed to fit in the previous batch to this new batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it still can't be added to the batch, the message is probably too big to fit in a batch
+ throw new Error("Message too big to fit in a batch");
+ }
+ }
+ }
+
+ // Send the last created batch of messages to the queue
+ await sender.sendMessages(batch);
+
+ console.log(`Sent a batch of messages to the queue: ${queueName}`);
+
+ // Close the sender
+ await sender.close();
+ } finally {
+ await sbClient.close();
+ }
}
-
+ // call the main function main().catch((err) => {
- console.log("Error occurred: ", err);
- process.exit(1);
+ console.log("Error occurred: ", err);
+ process.exit(1);
}); ``` 3. Replace `<CONNECTION STRING TO SERVICE BUS NAMESPACE>` with the connection string to your Service Bus namespace.
-4. Replace `<QUEUE NAME>` with the name of the queue.
+4. Replace `<QUEUE NAME>` with the name of the queue.
5. Then run the command in a command prompt to execute this file. ```console
- node send.js
+ node send.js
``` 6. You should see the following output.
You must have signed in with the Azure CLI's `az login` in order for your local
### [Passwordless](#tab/passwordless)
-You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
+You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/) 2. Create a file called `receive.js` and paste the following code into it.
You must have signed in with the Azure CLI's `az login` in order for your local
```javascript const { delay, ServiceBusClient, ServiceBusMessage } = require("@azure/service-bus"); const { DefaultAzureCredential } = require("@azure/identity");
-
+ // Replace `<SERVICE-BUS-NAMESPACE>` with your namespace const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
-
+ // Passwordless credential const credential = new DefaultAzureCredential();
You must have signed in with the Azure CLI's `az login` in order for your local
const queueName = "<QUEUE NAME>" async function main() {
- // create a Service Bus client using the passwordless authentication to the Service Bus namespace
- const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
-
- // createReceiver() can also be used to create a receiver for a subscription.
- const receiver = sbClient.createReceiver(queueName);
-
- // function to handle messages
- const myMessageHandler = async (messageReceived) => {
- console.log(`Received message: ${messageReceived.body}`);
- };
-
- // function to handle any errors
- const myErrorHandler = async (error) => {
- console.log(error);
- };
-
- // subscribe and specify the message and error handlers
- receiver.subscribe({
- processMessage: myMessageHandler,
- processError: myErrorHandler
- });
-
- // Waiting long enough before closing the sender to send messages
- await delay(20000);
-
- await receiver.close();
- await sbClient.close();
- }
+ // create a Service Bus client using the passwordless authentication to the Service Bus namespace
+ const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
+
+ // createReceiver() can also be used to create a receiver for a subscription.
+ const receiver = sbClient.createReceiver(queueName);
+
+ // function to handle messages
+ const myMessageHandler = async (messageReceived) => {
+ console.log(`Received message: ${messageReceived.body}`);
+ };
+
+ // function to handle any errors
+ const myErrorHandler = async (error) => {
+ console.log(error);
+ };
+
+ // subscribe and specify the message and error handlers
+ receiver.subscribe({
+ processMessage: myMessageHandler,
+ processError: myErrorHandler
+ });
+
+ // Wait long enough for messages to be received before closing the receiver
+ await delay(20000);
+
+ await receiver.close();
+ await sbClient.close();
+ }
// call the main function main().catch((err) => {
- console.log("Error occurred: ", err);
- process.exit(1);
+ console.log("Error occurred: ", err);
+ process.exit(1);
}); ``` 3. Replace `<SERVICE-BUS-NAMESPACE>` with your Service Bus namespace.
-4. Replace `<QUEUE NAME>` with the name of the queue.
+4. Replace `<QUEUE NAME>` with the name of the queue.
5. Then run the command in a command prompt to execute this file. ```console
- node receive.js
+ node receive.js
``` ### [Connection string](#tab/connection-string)
You must have signed in with the Azure CLI's `az login` in order for your local
const queueName = "<QUEUE NAME>" async function main() {
- // create a Service Bus client using the connection string to the Service Bus namespace
- const sbClient = new ServiceBusClient(connectionString);
-
- // createReceiver() can also be used to create a receiver for a subscription.
- const receiver = sbClient.createReceiver(queueName);
-
- // function to handle messages
- const myMessageHandler = async (messageReceived) => {
- console.log(`Received message: ${messageReceived.body}`);
- };
-
- // function to handle any errors
- const myErrorHandler = async (error) => {
- console.log(error);
- };
-
- // subscribe and specify the message and error handlers
- receiver.subscribe({
- processMessage: myMessageHandler,
- processError: myErrorHandler
- });
-
- // Waiting long enough before closing the sender to send messages
- await delay(20000);
-
- await receiver.close();
- await sbClient.close();
- }
+ // create a Service Bus client using the connection string to the Service Bus namespace
+ const sbClient = new ServiceBusClient(connectionString);
+
+ // createReceiver() can also be used to create a receiver for a subscription.
+ const receiver = sbClient.createReceiver(queueName);
+
+ // function to handle messages
+ const myMessageHandler = async (messageReceived) => {
+ console.log(`Received message: ${messageReceived.body}`);
+ };
+
+ // function to handle any errors
+ const myErrorHandler = async (error) => {
+ console.log(error);
+ };
+
+ // subscribe and specify the message and error handlers
+ receiver.subscribe({
+ processMessage: myMessageHandler,
+ processError: myErrorHandler
+ });
+
+ // Wait long enough for messages to be received before closing the receiver
+ await delay(20000);
+
+ await receiver.close();
+ await sbClient.close();
+ }
// call the main function main().catch((err) => {
- console.log("Error occurred: ", err);
- process.exit(1);
+ console.log("Error occurred: ", err);
+ process.exit(1);
}); ``` 3. Replace `<CONNECTION STRING TO SERVICE BUS NAMESPACE>` with the connection string to your Service Bus namespace.
-4. Replace `<QUEUE NAME>` with the name of the queue.
+4. Replace `<QUEUE NAME>` with the name of the queue.
5. Then run the command in a command prompt to execute this file. ```console
- node receive.js
+ node receive.js
```
Received message: Johannes Kepler
Received message: Nikolaus Kopernikus ```
-On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
+On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
:::image type="content" source="./media/service-bus-java-how-to-use-queues/overview-incoming-outgoing-messages.png" alt-text="Incoming and outgoing message count":::
-Select the queue on this **Overview** page to navigate to the **Service Bus Queue** page. You see the **incoming** and **outgoing** message count on this page too. You also see other information such as the **current size** of the queue, **maximum size**, **active message count**, and so on.
+Select the queue on this **Overview** page to navigate to the **Service Bus Queue** page. You see the **incoming** and **outgoing** message count on this page too. You also see other information such as the **current size** of the queue, **maximum size**, **active message count**, and so on.
:::image type="content" source="./media/service-bus-java-how-to-use-queues/queue-details.png" alt-text="Queue details":::
If you receive one of the following errors when running the **passwordless** ver
Navigate to your Service Bus namespace in the Azure portal, and select **Delete** on the Azure portal to delete the namespace and the queue in it. ## Next steps
-See the following documentation and samples:
+See the following documentation and samples:
- [Azure Service Bus client library for JavaScript](https://www.npmjs.com/package/@azure/service-bus) - [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/servicebus/service-bus/samples/v7/javascript)
service-bus-messaging Service Bus Nodejs How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-nodejs-how-to-use-topics-subscriptions.md
> * [JavaScript](service-bus-nodejs-how-to-use-topics-subscriptions.md) > * [Python](service-bus-python-how-to-use-topics-subscriptions.md)
-In this tutorial, you complete the following steps:
+In this tutorial, you complete the following steps:
1. Create a Service Bus namespace, using the Azure portal. 2. Create a Service Bus topic, using the Azure portal. 3. Create a Service Bus subscription to that topic, using the Azure portal.
-4. Write a JavaScript application to use the [@azure/service-bus](https://www.npmjs.com/package/@azure/service-bus) package to:
+4. Write a JavaScript application to use the [@azure/service-bus](https://www.npmjs.com/package/@azure/service-bus) package to:
* Send a set of messages to the topic. * Receive those messages from the subscription. > [!NOTE]
-> This quick start provides step-by-step instructions for a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. You can find pre-built JavaScript and TypeScript samples for Azure Service Bus in the [Azure SDK for JavaScript repository on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/servicebus/service-bus/samples/v7).
+> This quickstart provides step-by-step instructions for a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. You can find pre-built JavaScript and TypeScript samples for Azure Service Bus in the [Azure SDK for JavaScript repository on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/servicebus/service-bus/samples/v7).
## Prerequisites - An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF). - [Node.js LTS](https://nodejs.org/en/download/)-- Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscriptions to the topic](service-bus-quickstart-topics-subscriptions-portal.md). You will use only one subscription for this quickstart.
+- Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscriptions to the topic](service-bus-quickstart-topics-subscriptions-portal.md). You will use only one subscription for this quickstart.
### [Passwordless](#tab/passwordless) To use this quickstart with your own Azure account, you need: * Install [Azure CLI](/cli/azure/install-azure-cli), which provides the passwordless authentication to your developer machine.
-* Sign in with your Azure account at the terminal or command prompt with `az login`.
+* Sign in with your Azure account at the terminal or command prompt with `az login`.
* Use the same account when you add the appropriate role to your resource. * Run the code in the same terminal or command prompt.
-* Note down your **topic** name and **subscription** for your Service Bus namespace. You'll need that in the code.
+* Note down your **topic** name and **subscription** for your Service Bus namespace. You'll need that in the code.
### [Connection string](#tab/connection-string) Note down the following, which you'll use in the code below:
-* Service Bus namespace **connection string**
+* Service Bus namespace **connection string**
* Service Bus namespace **topic** name you created
-* Service Bus namespace **subscription**
+* Service Bus namespace **subscription**
Note down the following, which you'll use in the code below:
1. To install the required npm package(s) for Service Bus, open a command prompt that has `npm` in its path, change the directory to the folder where you want to have your samples and then run this command.
-1. Install the following packages:
+1. Install the following packages:
```bash npm install @azure/service-bus @azure/identity
Note down the following, which you'll use in the code below:
1. To install the required npm package(s) for Service Bus, open a command prompt that has `npm` in its path, change the directory to the folder where you want to have your samples and then run this command.
-1. Install the following package:
+1. Install the following package:
```bash npm install @azure/service-bus
Note down the following, which you'll use in the code below:
## Send messages to a topic
-The following sample code shows you how to send a batch of messages to a Service Bus topic. See code comments for details.
+The following sample code shows you how to send a batch of messages to a Service Bus topic. See code comments for details.
### [Passwordless](#tab/passwordless)
-You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
+You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/) 2. Create a file called `sendtotopic.js` and paste the below code into it. This code will send a message to your topic.
You must have signed in with the Azure CLI's `az login` in order for your local
```javascript const { ServiceBusClient } = require("@azure/service-bus"); const { DefaultAzureCredential } = require("@azure/identity");
-
+ // Replace `<SERVICE-BUS-NAMESPACE>` with your namespace const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
You must have signed in with the Azure CLI's `az login` in order for your local
const credential = new DefaultAzureCredential(); const topicName = "<TOPIC NAME>";
-
+ const messages = [
- { body: "Albert Einstein" },
- { body: "Werner Heisenberg" },
- { body: "Marie Curie" },
- { body: "Steven Hawking" },
- { body: "Isaac Newton" },
- { body: "Niels Bohr" },
- { body: "Michael Faraday" },
- { body: "Galileo Galilei" },
- { body: "Johannes Kepler" },
- { body: "Nikolaus Kopernikus" }
+ { body: "Albert Einstein" },
+ { body: "Werner Heisenberg" },
+ { body: "Marie Curie" },
+ { body: "Steven Hawking" },
+ { body: "Isaac Newton" },
+ { body: "Niels Bohr" },
+ { body: "Michael Faraday" },
+ { body: "Galileo Galilei" },
+ { body: "Johannes Kepler" },
+ { body: "Nikolaus Kopernikus" }
];
-
+ async function main() {
- // create a Service Bus client using the passwordless authentication to the Service Bus namespace
- const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
-
- // createSender() can also be used to create a sender for a queue.
- const sender = sbClient.createSender(topicName);
-
- try {
- // Tries to send all messages in a single batch.
- // Will fail if the messages cannot fit in a batch.
- // await sender.sendMessages(messages);
-
- // create a batch object
- let batch = await sender.createMessageBatch();
- for (let i = 0; i < messages.length; i++) {
- // for each message in the arry
-
- // try to add the message to the batch
- if (!batch.tryAddMessage(messages[i])) {
- // if it fails to add the message to the current batch
- // send the current batch as it is full
- await sender.sendMessages(batch);
-
- // then, create a new batch
- batch = await sender.createMessageBatch();
-
- // now, add the message failed to be added to the previous batch to this batch
- if (!batch.tryAddMessage(messages[i])) {
- // if it still can't be added to the batch, the message is probably too big to fit in a batch
- throw new Error("Message too big to fit in a batch");
- }
- }
- }
-
- // Send the last created batch of messages to the topic
- await sender.sendMessages(batch);
-
- console.log(`Sent a batch of messages to the topic: ${topicName}`);
-
- // Close the sender
- await sender.close();
- } finally {
- await sbClient.close();
- }
+ // create a Service Bus client using the passwordless authentication to the Service Bus namespace
+ const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
+
+ // createSender() can also be used to create a sender for a queue.
+ const sender = sbClient.createSender(topicName);
+
+ try {
+ // Tries to send all messages in a single batch.
+ // Will fail if the messages cannot fit in a batch.
+ // await sender.sendMessages(messages);
+
+ // create a batch object
+ let batch = await sender.createMessageBatch();
+ for (let i = 0; i < messages.length; i++) {
+ // for each message in the array
+
+ // try to add the message to the batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it fails to add the message to the current batch
+ // send the current batch as it is full
+ await sender.sendMessages(batch);
+
+ // then, create a new batch
+ batch = await sender.createMessageBatch();
+
+ // now, add the message failed to be added to the previous batch to this batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it still can't be added to the batch, the message is probably too big to fit in a batch
+ throw new Error("Message too big to fit in a batch");
+ }
+ }
+ }
+
+ // Send the last created batch of messages to the topic
+ await sender.sendMessages(batch);
+
+ console.log(`Sent a batch of messages to the topic: ${topicName}`);
+
+ // Close the sender
+ await sender.close();
+ } finally {
+ await sbClient.close();
+ }
}
-
+ // call the main function main().catch((err) => {
- console.log("Error occurred: ", err);
- process.exit(1);
- });
+ console.log("Error occurred: ", err);
+ process.exit(1);
+ });
``` 3. Replace `<SERVICE-BUS-NAMESPACE>` with the name of your Service Bus namespace.
-1. Replace `<TOPIC NAME>` with the name of the topic.
+1. Replace `<TOPIC NAME>` with the name of the topic.
1. Then run the command in a command prompt to execute this file. ```console
- node sendtotopic.js
+ node sendtotopic.js
``` 1. You should see the following output.
You must have signed in with the Azure CLI's `az login` in order for your local
```javascript const { ServiceBusClient } = require("@azure/service-bus");
-
+ const connectionString = "<SERVICE BUS NAMESPACE CONNECTION STRING>" const topicName = "<TOPIC NAME>";
-
+ const messages = [
- { body: "Albert Einstein" },
- { body: "Werner Heisenberg" },
- { body: "Marie Curie" },
- { body: "Steven Hawking" },
- { body: "Isaac Newton" },
- { body: "Niels Bohr" },
- { body: "Michael Faraday" },
- { body: "Galileo Galilei" },
- { body: "Johannes Kepler" },
- { body: "Nikolaus Kopernikus" }
+ { body: "Albert Einstein" },
+ { body: "Werner Heisenberg" },
+ { body: "Marie Curie" },
+ { body: "Steven Hawking" },
+ { body: "Isaac Newton" },
+ { body: "Niels Bohr" },
+ { body: "Michael Faraday" },
+ { body: "Galileo Galilei" },
+ { body: "Johannes Kepler" },
+ { body: "Nikolaus Kopernikus" }
];
-
+ async function main() {
- // create a Service Bus client using the connection string to the Service Bus namespace
- const sbClient = new ServiceBusClient(connectionString);
-
- // createSender() can also be used to create a sender for a queue.
- const sender = sbClient.createSender(topicName);
-
- try {
- // Tries to send all messages in a single batch.
- // Will fail if the messages cannot fit in a batch.
- // await sender.sendMessages(messages);
-
- // create a batch object
- let batch = await sender.createMessageBatch();
- for (let i = 0; i < messages.length; i++) {
- // for each message in the arry
-
- // try to add the message to the batch
- if (!batch.tryAddMessage(messages[i])) {
- // if it fails to add the message to the current batch
- // send the current batch as it is full
- await sender.sendMessages(batch);
-
- // then, create a new batch
- batch = await sender.createMessageBatch();
-
- // now, add the message failed to be added to the previous batch to this batch
- if (!batch.tryAddMessage(messages[i])) {
- // if it still can't be added to the batch, the message is probably too big to fit in a batch
- throw new Error("Message too big to fit in a batch");
- }
- }
- }
-
- // Send the last created batch of messages to the topic
- await sender.sendMessages(batch);
-
- console.log(`Sent a batch of messages to the topic: ${topicName}`);
-
- // Close the sender
- await sender.close();
- } finally {
- await sbClient.close();
- }
+ // create a Service Bus client using the connection string to the Service Bus namespace
+ const sbClient = new ServiceBusClient(connectionString);
+
+ // createSender() can also be used to create a sender for a queue.
+ const sender = sbClient.createSender(topicName);
+
+ try {
+ // Tries to send all messages in a single batch.
+ // Will fail if the messages cannot fit in a batch.
+ // await sender.sendMessages(messages);
+
+ // create a batch object
+ let batch = await sender.createMessageBatch();
+ for (let i = 0; i < messages.length; i++) {
+ // for each message in the array
+
+ // try to add the message to the batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it fails to add the message to the current batch
+ // send the current batch as it is full
+ await sender.sendMessages(batch);
+
+ // then, create a new batch
+ batch = await sender.createMessageBatch();
+
+ // now, add the message failed to be added to the previous batch to this batch
+ if (!batch.tryAddMessage(messages[i])) {
+ // if it still can't be added to the batch, the message is probably too big to fit in a batch
+ throw new Error("Message too big to fit in a batch");
+ }
+ }
+ }
+
+ // Send the last created batch of messages to the topic
+ await sender.sendMessages(batch);
+
+ console.log(`Sent a batch of messages to the topic: ${topicName}`);
+
+ // Close the sender
+ await sender.close();
+ } finally {
+ await sbClient.close();
+ }
}
-
+ // call the main function main().catch((err) => {
- console.log("Error occurred: ", err);
- process.exit(1);
- });
+ console.log("Error occurred: ", err);
+ process.exit(1);
+ });
``` 3. Replace `<SERVICE BUS NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace.
-1. Replace `<TOPIC NAME>` with the name of the topic.
+1. Replace `<TOPIC NAME>` with the name of the topic.
1. Then run the command in a command prompt to execute this file. ```console
- node sendtotopic.js
+ node sendtotopic.js
``` 1. You should see the following output.
You must have signed in with the Azure CLI's `az login` in order for your local
### [Passwordless](#tab/passwordless)
-You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
+You must have signed in with the Azure CLI's `az login` in order for your local machine to provide the passwordless authentication required in this code.
1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
-2. Create a file called **receivefromsubscription.js** and paste the following code into it. See code comments for details.
+2. Create a file called **receivefromsubscription.js** and paste the following code into it. See code comments for details.
```javascript const { delay, ServiceBusClient, ServiceBusMessage } = require("@azure/service-bus"); const { DefaultAzureCredential } = require("@azure/identity");
-
+ // Replace `<SERVICE-BUS-NAMESPACE>` with your namespace const fullyQualifiedNamespace = "<SERVICE-BUS-NAMESPACE>.servicebus.windows.net";
You must have signed in with the Azure CLI's `az login` in order for your local
const topicName = "<TOPIC NAME>"; const subscriptionName = "<SUBSCRIPTION NAME>";
-
+ async function main() {
- // create a Service Bus client using the passwordless authentication to the Service Bus namespace
- const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
-
- // createReceiver() can also be used to create a receiver for a queue.
- const receiver = sbClient.createReceiver(topicName, subscriptionName);
-
- // function to handle messages
- const myMessageHandler = async (messageReceived) => {
- console.log(`Received message: ${messageReceived.body}`);
- };
-
- // function to handle any errors
- const myErrorHandler = async (error) => {
- console.log(error);
- };
-
- // subscribe and specify the message and error handlers
- receiver.subscribe({
- processMessage: myMessageHandler,
- processError: myErrorHandler
- });
-
- // Waiting long enough before closing the sender to send messages
- await delay(5000);
-
- await receiver.close();
- await sbClient.close();
+ // create a Service Bus client using the passwordless authentication to the Service Bus namespace
+ const sbClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
+
+ // createReceiver() can also be used to create a receiver for a queue.
+ const receiver = sbClient.createReceiver(topicName, subscriptionName);
+
+ // function to handle messages
+ const myMessageHandler = async (messageReceived) => {
+ console.log(`Received message: ${messageReceived.body}`);
+ };
+
+ // function to handle any errors
+ const myErrorHandler = async (error) => {
+ console.log(error);
+ };
+
+ // subscribe and specify the message and error handlers
+ receiver.subscribe({
+ processMessage: myMessageHandler,
+ processError: myErrorHandler
+ });
+
+    // Wait long enough to receive messages before closing the receiver
+ await delay(5000);
+
+ await receiver.close();
+ await sbClient.close();
}
-
+ // call the main function main().catch((err) => {
- console.log("Error occurred: ", err);
- process.exit(1);
- });
+ console.log("Error occurred: ", err);
+ process.exit(1);
+ });
```
-3. Replace `<SERVICE BUS NAMESPACE CONNECTION STRING>` with the connection string to the namespace.
-4. Replace `<TOPIC NAME>` with the name of the topic.
-5. Replace `<SUBSCRIPTION NAME>` with the name of the subscription to the topic.
+3. Replace `<SERVICE-BUS-NAMESPACE>` with the name of your Service Bus namespace.
+4. Replace `<TOPIC NAME>` with the name of the topic.
+5. Replace `<SUBSCRIPTION NAME>` with the name of the subscription to the topic.
6. Then run the command in a command prompt to execute this file. ```console
You must have signed in with the Azure CLI's `az login` in order for your local
### [Connection string](#tab/connection-string) 1. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/)
-2. Create a file called **receivefromsubscription.js** and paste the following code into it. See code comments for details.
+2. Create a file called **receivefromsubscription.js** and paste the following code into it. See code comments for details.
```javascript const { delay, ServiceBusClient, ServiceBusMessage } = require("@azure/service-bus");
-
+ const connectionString = "<SERVICE BUS NAMESPACE CONNECTION STRING>" const topicName = "<TOPIC NAME>"; const subscriptionName = "<SUBSCRIPTION NAME>";
-
+ async function main() {
- // create a Service Bus client using the connection string to the Service Bus namespace
- const sbClient = new ServiceBusClient(connectionString);
-
- // createReceiver() can also be used to create a receiver for a queue.
- const receiver = sbClient.createReceiver(topicName, subscriptionName);
-
- // function to handle messages
- const myMessageHandler = async (messageReceived) => {
- console.log(`Received message: ${messageReceived.body}`);
- };
-
- // function to handle any errors
- const myErrorHandler = async (error) => {
- console.log(error);
- };
-
- // subscribe and specify the message and error handlers
- receiver.subscribe({
- processMessage: myMessageHandler,
- processError: myErrorHandler
- });
-
- // Waiting long enough before closing the sender to send messages
- await delay(5000);
-
- await receiver.close();
- await sbClient.close();
+ // create a Service Bus client using the connection string to the Service Bus namespace
+ const sbClient = new ServiceBusClient(connectionString);
+
+ // createReceiver() can also be used to create a receiver for a queue.
+ const receiver = sbClient.createReceiver(topicName, subscriptionName);
+
+ // function to handle messages
+ const myMessageHandler = async (messageReceived) => {
+ console.log(`Received message: ${messageReceived.body}`);
+ };
+
+ // function to handle any errors
+ const myErrorHandler = async (error) => {
+ console.log(error);
+ };
+
+ // subscribe and specify the message and error handlers
+ receiver.subscribe({
+ processMessage: myMessageHandler,
+ processError: myErrorHandler
+ });
+
+    // Wait long enough to receive messages before closing the receiver
+ await delay(5000);
+
+ await receiver.close();
+ await sbClient.close();
}
-
+ // call the main function main().catch((err) => {
- console.log("Error occurred: ", err);
- process.exit(1);
- });
+ console.log("Error occurred: ", err);
+ process.exit(1);
+ });
```
-3. Replace `<SERVICE BUS NAMESPACE CONNECTION STRING>` with the connection string to the namespace.
-4. Replace `<TOPIC NAME>` with the name of the topic.
-5. Replace `<SUBSCRIPTION NAME>` with the name of the subscription to the topic.
+3. Replace `<SERVICE BUS NAMESPACE CONNECTION STRING>` with the connection string to the namespace.
+4. Replace `<TOPIC NAME>` with the name of the topic.
+5. Replace `<SUBSCRIPTION NAME>` with the name of the subscription to the topic.
6. Then run the command in a command prompt to execute this file. ```console
Received message: Johannes Kepler
Received message: Nikolaus Kopernikus ```
-In the Azure portal, navigate to your Service Bus namespace, switch to **Topics** in the bottom pane, and select your topic to see the **Service Bus Topic** page for your topic. On this page, you should see 10 incoming and 10 outgoing messages in the **Messages** chart.
+In the Azure portal, navigate to your Service Bus namespace, switch to **Topics** in the bottom pane, and select your topic to see the **Service Bus Topic** page for your topic. On this page, you should see 10 incoming and 10 outgoing messages in the **Messages** chart.
:::image type="content" source="./media/service-bus-nodejs-how-to-use-topics-subscriptions/topic-page-portal.png" alt-text="Incoming and outgoing messages":::
-If you run only the send app next time, on the **Service Bus Topic** page, you see 20 incoming messages (10 new) but 10 outgoing messages.
+If you run only the send app next time, on the **Service Bus Topic** page, you see 20 incoming messages (10 new) but 10 outgoing messages.
:::image type="content" source="./media/service-bus-nodejs-how-to-use-topics-subscriptions/updated-topic-page.png" alt-text="Updated topic page":::
-On this page, if you select a subscription in the bottom pane, you get to the **Service Bus Subscription** page. You can see the active message count, dead-letter message count, and more on this page. In this example, there are 10 active messages that haven't been received by a receiver yet.
+On this page, if you select a subscription in the bottom pane, you get to the **Service Bus Subscription** page. You can see the active message count, dead-letter message count, and more on this page. In this example, there are 10 active messages that haven't been received by a receiver yet.
:::image type="content" source="./media/service-bus-nodejs-how-to-use-topics-subscriptions/active-message-count.png" alt-text="Active message count":::
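To inspect the same counts programmatically rather than in the portal, a minimal sketch with the administration client in the `@azure/service-bus` package looks like the following; the topic and subscription names are placeholders.

```javascript
const { ServiceBusAdministrationClient } = require("@azure/service-bus");

// Placeholders for illustration; replace with your own values.
const connectionString = "<SERVICE BUS NAMESPACE CONNECTION STRING>";
const topicName = "<TOPIC NAME>";
const subscriptionName = "<SUBSCRIPTION NAME>";

async function checkSubscription() {
  // The administration client reads entity metadata; it doesn't receive messages.
  const adminClient = new ServiceBusAdministrationClient(connectionString);
  const runtime = await adminClient.getSubscriptionRuntimeProperties(topicName, subscriptionName);
  console.log(`Active messages: ${runtime.activeMessageCount}`);
  console.log(`Dead-lettered messages: ${runtime.deadLetterMessageCount}`);
}

checkSubscription().catch((err) => {
  console.log("Error occurred: ", err);
  process.exit(1);
});
```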
If you receive an error when running the **passwordless** version of the JavaScr
Navigate to your Service Bus namespace in the Azure portal, and select **Delete** to delete the namespace and the topic in it. ## Next steps
-See the following documentation and samples:
+See the following documentation and samples:
- [Azure Service Bus client library for JavaScript](https://www.npmjs.com/package/@azure/service-bus) - [JavaScript samples](/samples/azure/azure-sdk-for-js/service-bus-javascript/)
service-fabric Service Fabric Cluster Upgrade Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-upgrade-windows-server.md
For usage details, see the [Start-ServiceFabricClusterConfigurationUpgrade](/pow
```powershell ###### Get list of all upgrade compatible packages Get-ServiceFabricRuntimeUpgradeVersion -BaseVersion <TargetCodeVersion as noted in Step 1>
- ```
+ ```
3. Connect to the cluster from any machine that has administrator access to all the machines that are listed as nodes in the cluster. The machine that this script is run on doesn't have to be part of the cluster.
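   A minimal sketch of that connection step, assuming the default client connection port 19000; the endpoint and thumbprint values are placeholders.

   ```powershell
   # Unsecured cluster (prototyping only):
   Connect-ServiceFabricCluster -ConnectionEndpoint "<ClusterFQDNorIP>:19000"

   # Certificate-secured cluster:
   # Connect-ServiceFabricCluster -ConnectionEndpoint "<ClusterFQDNorIP>:19000" `
   #     -X509Credential -ServerCertThumbprint "<thumbprint>" `
   #     -FindType FindByThumbprint -FindValue "<thumbprint>" `
   #     -StoreLocation CurrentUser -StoreName My
   ```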
service-fabric Service Fabric Cluster Windows Server Add Remove Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-windows-server-add-remove-nodes.md
Last updated 07/14/2022
# Add or remove nodes to a standalone Service Fabric cluster running on Windows Server+ After you have [created your standalone Service Fabric cluster on Windows Server machines](service-fabric-cluster-creation-for-windows-server.md), your business needs may change and you may need to add nodes to or remove nodes from your cluster, as described in this article. > [!NOTE]
After you have [created your standalone Service Fabric cluster on Windows Server
Unsecure (prototyping):
- ```
+ ```powershell
.\AddNode.ps1 -NodeName VM5 -NodeType NodeType0 -NodeIPAddressorFQDN 182.17.34.52 -ExistingClientConnectionEndpoint 182.17.34.50:19000 -UpgradeDomain UD1 -FaultDomain fd:/dc1/r0 -AcceptEULA ``` Secure (certificate-based):
- ```
+ ```powershell
$CertThumbprint= "***********************"
-
- .\AddNode.ps1 -NodeName VM5 -NodeType NodeType0 -NodeIPAddressorFQDN 182.17.34.52 -ExistingClientConnectionEndpoint 182.17.34.50:19000 -UpgradeDomain UD1 -FaultDomain fd:/dc1/r0 -X509Credential -ServerCertThumbprint $CertThumbprint -AcceptEULA
+ .\AddNode.ps1 -NodeName VM5 -NodeType NodeType0 -NodeIPAddressorFQDN 182.17.34.52 -ExistingClientConnectionEndpoint 182.17.34.50:19000 -UpgradeDomain UD1 -FaultDomain fd:/dc1/r0 -X509Credential -ServerCertThumbprint $CertThumbprint -AcceptEULA
``` When the script finishes running, you can check whether the new node has been added by running the [Get-ServiceFabricNode](/powershell/module/servicefabric/get-servicefabricnode) cmdlet. 7. To ensure consistency across different nodes in the cluster, you must initiate a configuration upgrade. Run [Get-ServiceFabricClusterConfiguration](/powershell/module/servicefabric/get-servicefabricclusterconfiguration) to get the latest configuration file and add the newly added node to the "Nodes" section. It is also recommended to always have the latest cluster configuration available in case you need to redeploy a cluster that has the same configuration.
- ```
- {
- "nodeName": "vm5",
- "iPAddress": "182.17.34.52",
- "nodeTypeRef": "NodeType0",
- "faultDomain": "fd:/dc1/r0",
- "upgradeDomain": "UD1"
- }
+ ```json
+ {
+ "nodeName": "vm5",
+ "iPAddress": "182.17.34.52",
+ "nodeTypeRef": "NodeType0",
+ "faultDomain": "fd:/dc1/r0",
+ "upgradeDomain": "UD1"
+ }
``` 8. Run [Start-ServiceFabricClusterConfigurationUpgrade](/powershell/module/servicefabric/start-servicefabricclusterconfigurationupgrade) to begin the upgrade.
- ```
+ ```powershell
Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath <Path to Configuration File> ```
- You can monitor the progress of the upgrade on Service Fabric Explorer. Alternatively, you can run [Get-ServiceFabricClusterUpgrade](/powershell/module/servicefabric/get-servicefabricclusterupgrade).
+ You can monitor the progress of the upgrade on Service Fabric Explorer. Alternatively, you can run [Get-ServiceFabricClusterUpgrade](/powershell/module/servicefabric/get-servicefabricclusterupgrade).
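   For example, a minimal polling sketch from the same connected PowerShell session; the `UpgradeState` values checked here are assumptions based on the cmdlet's typical output.

   ```powershell
   # Poll the configuration upgrade until it leaves the in-progress/pending states.
   while ($true) {
       $upgrade = Get-ServiceFabricClusterUpgrade
       Write-Output $upgrade.UpgradeState
       if ($upgrade.UpgradeState -ne "RollingForwardInProgress" -and
           $upgrade.UpgradeState -ne "RollingForwardPending") { break }
       Start-Sleep -Seconds 30
   }
   ```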
### Add nodes to clusters configured with Windows Security using gMSA+ For clusters configured with [Group Managed Service Accounts (gMSA)](https://technet.microsoft.com/library/hh831782.aspx), a new node can be added using a configuration upgrade:+ 1. Run [Get-ServiceFabricClusterConfiguration](/powershell/module/servicefabric/get-servicefabricclusterconfiguration) on any of the existing nodes to get the latest configuration file and add details about the new node you want to add in the "Nodes" section. Make sure the new node is part of the same group managed account. This account should be an Administrator on all machines.
- ```
- {
- "nodeName": "vm5",
- "iPAddress": "182.17.34.52",
- "nodeTypeRef": "NodeType0",
- "faultDomain": "fd:/dc1/r0",
- "upgradeDomain": "UD1"
- }
- ```
+ ```json
+ {
+ "nodeName": "vm5",
+ "iPAddress": "182.17.34.52",
+ "nodeTypeRef": "NodeType0",
+ "faultDomain": "fd:/dc1/r0",
+ "upgradeDomain": "UD1"
+ }
+ ```
+ 2. Run [Start-ServiceFabricClusterConfigurationUpgrade](/powershell/module/servicefabric/start-servicefabricclusterconfigurationupgrade) to begin the upgrade.
- ```
- Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath <Path to Configuration File>
- ```
- You can monitor the progress of the upgrade on Service Fabric Explorer. Alternatively, you can run [Get-ServiceFabricClusterUpgrade](/powershell/module/servicefabric/get-servicefabricclusterupgrade)
+ ```powershell
+ Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath <Path to Configuration File>
+ ```
+
+ You can monitor the progress of the upgrade on Service Fabric Explorer. Alternatively, you can run [Get-ServiceFabricClusterUpgrade](/powershell/module/servicefabric/get-servicefabricclusterupgrade)
### Add node types to your cluster In order to add a new node type, modify your configuration to include the new node type in the "NodeTypes" section under "Properties" and begin a configuration upgrade using [Start-ServiceFabricClusterConfigurationUpgrade](/powershell/module/servicefabric/start-servicefabricclusterconfigurationupgrade). Once the upgrade completes, you can add new nodes to your cluster with this node type.
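As a rough illustration only, a new entry in the "NodeTypes" array might look like the following sketch. The property names mirror the standalone cluster configuration samples and should be checked against your own ClusterConfig.json; the name "NodeType1" and the port values are placeholders.

```json
{
  "name": "NodeType1",
  "clientConnectionEndpointPort": "19000",
  "clusterConnectionEndpointPort": "19001",
  "leaseDriverEndpointPort": "19002",
  "serviceConnectionEndpointPort": "19003",
  "httpGatewayEndpointPort": "19080",
  "applicationPorts": {
    "startPort": "20001",
    "endPort": "20031"
  },
  "isPrimary": false
}
```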
In order to add a new node type, modify your configuration to include the new no
## Remove nodes from your cluster A node can be removed from a cluster using a configuration upgrade, in the following manner:
-1. Run [Get-ServiceFabricClusterConfiguration](/powershell/module/servicefabric/get-servicefabricclusterconfiguration) to get the latest configuration file and *remove* the node from "Nodes" section.
-Add the "NodesToBeRemoved" parameter to "Setup" section inside "FabricSettings" section. The "value" should be a comma-separated list of node names of nodes that need to be removed.
-
- ```
- "fabricSettings": [
- {
- "name": "Setup",
- "parameters": [
- {
- "name": "FabricDataRoot",
- "value": "C:\\ProgramData\\SF"
- },
- {
- "name": "FabricLogRoot",
- "value": "C:\\ProgramData\\SF\\Log"
- },
- {
- "name": "NodesToBeRemoved",
- "value": "vm0, vm1"
- }
- ]
- }
- ]
- ```
+1. Run [Get-ServiceFabricClusterConfiguration](/powershell/module/servicefabric/get-servicefabricclusterconfiguration) to get the latest configuration file and *remove* the node from the "Nodes" section. Add the "NodesToBeRemoved" parameter to the "Setup" section inside the "FabricSettings" section. The "value" should be a comma-separated list of the names of the nodes to be removed.
+
+ ```json
+ "fabricSettings": [
+ {
+ "name": "Setup",
+ "parameters": [
+ {
+ "name": "FabricDataRoot",
+ "value": "C:\\ProgramData\\SF"
+ },
+ {
+ "name": "FabricLogRoot",
+ "value": "C:\\ProgramData\\SF\\Log"
+ },
+ {
+ "name": "NodesToBeRemoved",
+ "value": "vm0, vm1"
+ }
+ ]
+ }
+ ]
+ ```
+ 2. Run [Start-ServiceFabricClusterConfigurationUpgrade](/powershell/module/servicefabric/start-servicefabricclusterconfigurationupgrade) to begin the upgrade.
- ```
- Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath <Path to Configuration File>
+ ```powershell
+ Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath <Path to Configuration File>
+ ```
- ```
- You can monitor the progress of the upgrade on Service Fabric Explorer. Alternatively, you can run [Get-ServiceFabricClusterUpgrade](/powershell/module/servicefabric/get-servicefabricclusterupgrade).
+ You can monitor the progress of the upgrade on Service Fabric Explorer. Alternatively, you can run [Get-ServiceFabricClusterUpgrade](/powershell/module/servicefabric/get-servicefabricclusterupgrade).
> [!NOTE] > Removal of nodes may initiate multiple upgrades. Some nodes are marked with the `IsSeedNode="true"` tag and can be identified by querying the cluster manifest using `Get-ServiceFabricClusterManifest`. Removal of such nodes may take longer than others since the seed nodes will have to be moved around in such scenarios. The cluster must maintain a minimum of 3 primary node type nodes.
->
->
### Remove node types from your cluster
-Before removing a node type, check if there are any nodes referencing the node type. Remove these nodes before removing the corresponding node type. Once all corresponding nodes are removed, you can remove the NodeType from the cluster configuration and begin a configuration upgrade using [Start-ServiceFabricClusterConfigurationUpgrade](/powershell/module/servicefabric/start-servicefabricclusterconfigurationupgrade).
+Before removing a node type, check if there are any nodes referencing the node type. Remove these nodes before removing the corresponding node type. Once all corresponding nodes are removed, you can remove the NodeType from the cluster configuration and begin a configuration upgrade using [Start-ServiceFabricClusterConfigurationUpgrade](/powershell/module/servicefabric/start-servicefabricclusterconfigurationupgrade).
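A minimal sketch of that check, assuming a connected cluster session and a hypothetical node type name:

```powershell
# List any nodes that still reference the node type you plan to remove.
Get-ServiceFabricNode | Where-Object { $_.NodeType -eq "NodeType1" } |
    Select-Object NodeName, NodeType, NodeStatus
```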
### Replace primary nodes of your cluster
-The replacement of primary nodes should be performed one node after another, instead of removing and then adding in batches.
+The replacement of primary nodes should be performed one node after another, instead of removing and then adding in batches.
## Next steps+ * [Configuration settings for standalone Windows cluster](service-fabric-cluster-manifest.md) * [Secure a standalone cluster on Windows using X509 certificates](service-fabric-windows-cluster-x509-security.md) * [Create a standalone Service Fabric cluster with Azure VMs running Windows](./service-fabric-cluster-creation-via-arm.md)
service-fabric Service Fabric Java Rest Api Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-java-rest-api-usage.md
Follow the steps mentioned below to generate Service Fabric Java client code usi
1. Install nodejs and NPM on your machine
- If you are using Linux then:
- ```bash
- sudo apt-get install npm
- sudo apt install nodejs
- ```
- If you are using Mac OS X then:
- ```bash
- brew install node
- ```
+ If you are using Linux then:
+ ```bash
+ sudo apt-get install npm
+ sudo apt install nodejs
+ ```
+ If you are using Mac OS X then:
+ ```bash
+ brew install node
+ ```
2. Install AutoRest using NPM.
- ```bash
- npm install -g autorest
- ```
+ ```bash
+ npm install -g autorest
+ ```
3. Fork and clone [azure-rest-api-specs](https://github.com/Azure/azure-rest-api-specs) repository in your local machine and go to the cloned location from the terminal of your machine. 4. Go to the location mentioned below in your cloned repo.
- ```bash
- cd specification\servicefabric\data-plane\Microsoft.ServiceFabric\stable\6.0
- ```
+ ```bash
+ cd specification\servicefabric\data-plane\Microsoft.ServiceFabric\stable\6.0
+ ```
- > [!NOTE]
- > If your cluster version is not 6.0.* then go to the appropriate directory in the stable folder.
- >
+ > [!NOTE]
+ > If your cluster version is not 6.0.* then go to the appropriate directory in the stable folder.
5. Run the following autorest command to generate the Java client code.
-
- ```bash
- autorest --input-file= servicefabric.json --java --output-folder=[output-folder-name] --namespace=[namespace-of-generated-client]
- ```
+
+ ```bash
+    autorest --input-file=servicefabric.json --java --output-folder=[output-folder-name] --namespace=[namespace-of-generated-client]
+ ```
+ Below is an example demonstrating the usage of autorest.
-
- ```bash
- autorest --input-file=servicefabric.json --java --output-folder=java-rest-api-code --namespace=servicefabricrest
- ```
-
- The following command takes ``servicefabric.json`` specification file as input and generates Java client code in ``java-rest-api- code`` folder and encloses the code in ``servicefabricrest`` namespace. After this step you would find two folders ``models``, ``implementation`` and two files ``ServiceFabricClientAPIs.java`` and ``package-info.java`` generated in the ``java-rest-api-code`` folder.
+ ```bash
+ autorest --input-file=servicefabric.json --java --output-folder=java-rest-api-code --namespace=servicefabricrest
+ ```
+
+    The preceding command takes the ``servicefabric.json`` specification file as input, generates Java client code in the ``java-rest-api-code`` folder, and encloses the code in the ``servicefabricrest`` namespace. After this step, you'll find two folders, ``models`` and ``implementation``, and two files, ``ServiceFabricClientAPIs.java`` and ``package-info.java``, generated in the ``java-rest-api-code`` folder.
## Include and use the generated client in your project 1. Add the generated code appropriately into your project. We recommend that you create a library using the generated code and include this library in your project.+ 2. If you are creating a library then include the following dependency in your library's project. If you are following a different approach then include the dependency appropriately.
- ```
- GroupId: com.microsoft.rest
- Artifactid: client-runtime
- Version: 1.2.1
- ```
- For example, if you are using Maven build system include the following in your ``pom.xml`` file:
-
- ```xml
- <dependency>
- <groupId>com.microsoft.rest</groupId>
- <artifactId>client-runtime</artifactId>
- <version>1.2.1</version>
- </dependency>
- ```
+ ```
+ GroupId: com.microsoft.rest
+    ArtifactId: client-runtime
+ Version: 1.2.1
+ ```
+
+ For example, if you are using Maven build system include the following in your ``pom.xml`` file:
+
+ ```xml
+ <dependency>
+ <groupId>com.microsoft.rest</groupId>
+ <artifactId>client-runtime</artifactId>
+ <version>1.2.1</version>
+ </dependency>
+ ```
3. Create a RestClient using the following code:
- ```java
- RestClient simpleClient = new RestClient.Builder()
- .withBaseUrl("http://<cluster-ip or name:port>")
- .withResponseBuilderFactory(new ServiceResponseBuilder.Factory())
- .withSerializerAdapter(new JacksonAdapter())
- .build();
- ServiceFabricClientAPIs client = new ServiceFabricClientAPIsImpl(simpleClient);
- ```
+ ```java
+ RestClient simpleClient = new RestClient.Builder()
+ .withBaseUrl("http://<cluster-ip or name:port>")
+ .withResponseBuilderFactory(new ServiceResponseBuilder.Factory())
+ .withSerializerAdapter(new JacksonAdapter())
+ .build();
+ ServiceFabricClientAPIs client = new ServiceFabricClientAPIsImpl(simpleClient);
+ ```
4. Use the client object and make the appropriate calls as required. Here are some examples that demonstrate the usage of the client object. We assume that the application package is built and uploaded into the image store before using the APIs below.
- * Provision an application
-
- ```java
- ApplicationTypeImageStorePath imageStorePath = new ApplicationTypeImageStorePath();
- imageStorePath.withApplicationTypeBuildPath("<application-path-in-image-store>");
- client.provisionApplicationType(imageStorePath);
- ```
- * Create an application
-
- ```java
- ApplicationDescription applicationDescription = new ApplicationDescription();
- applicationDescription.withName("<application-uri>");
- applicationDescription.withTypeName("<application-type>");
- applicationDescription.withTypeVersion("<application-version>");
- client.createApplication(applicationDescription);
- ```
+
+ * Provision an application
+
+ ```java
+ ApplicationTypeImageStorePath imageStorePath = new ApplicationTypeImageStorePath();
+ imageStorePath.withApplicationTypeBuildPath("<application-path-in-image-store>");
+ client.provisionApplicationType(imageStorePath);
+ ```
+
+ * Create an application
+
+ ```java
+ ApplicationDescription applicationDescription = new ApplicationDescription();
+ applicationDescription.withName("<application-uri>");
+ applicationDescription.withTypeName("<application-type>");
+ applicationDescription.withTypeVersion("<application-version>");
+ client.createApplication(applicationDescription);
+ ```
## Understanding the generated code+ For every API, you will find four overloaded implementations. If there are optional parameters, you will find four more variations that include those optional parameters. For example, consider the ``removeReplica`` API; a usage sketch follows the list below.
- 1. **public void removeReplica(String nodeName, UUID partitionId, String replicaId, Boolean forceRemove, Long timeout)**
- * This is the synchronous variant of the removeReplica API call
- 2. **public ServiceFuture\<Void> removeReplicaAsync(String nodeName, UUID partitionId, String replicaId, Boolean forceRemove, Long timeout, final ServiceCallback\<Void> serviceCallback)**
- * This variant of API call can be used if you want to use future based asynchronous programming and use callbacks
- 3. **public Observable\<Void> removeReplicaAsync(String nodeName, UUID partitionId, String replicaId)**
- * This variant of API call can be used if you want to use reactive asynchronous programming
- 4. **public Observable\<ServiceResponse\<Void>> removeReplicaWithServiceResponseAsync(String nodeName, UUID partitionId, String replicaId)**
- * This variant of API call can be used if you want to use reactive asynchronous programming and deal with RAW rest response
+
+1. **public void removeReplica(String nodeName, UUID partitionId, String replicaId, Boolean forceRemove, Long timeout)**
+ * This is the synchronous variant of the removeReplica API call
+2. **public ServiceFuture\<Void> removeReplicaAsync(String nodeName, UUID partitionId, String replicaId, Boolean forceRemove, Long timeout, final ServiceCallback\<Void> serviceCallback)**
+ * This variant of API call can be used if you want to use future based asynchronous programming and use callbacks
+3. **public Observable\<Void> removeReplicaAsync(String nodeName, UUID partitionId, String replicaId)**
+ * This variant of API call can be used if you want to use reactive asynchronous programming
+4. **public Observable\<ServiceResponse\<Void>> removeReplicaWithServiceResponseAsync(String nodeName, UUID partitionId, String replicaId)**
+ * This variant of API call can be used if you want to use reactive asynchronous programming and deal with RAW rest response
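As a rough usage sketch of the first (synchronous) overload, using the `client` object created earlier and hypothetical identifiers for the node, partition, and replica:

```java
import java.util.UUID;

// Hypothetical identifiers for illustration only; look these up in your own cluster.
String nodeName = "_Node_0";
UUID partitionId = UUID.fromString("00000000-0000-0000-0000-000000000000");
String replicaId = "132587450060149269";

// Synchronous call: don't force-remove, and time out after 60 seconds.
client.removeReplica(nodeName, partitionId, replicaId, false, 60L);
```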
## Next steps+ * Learn about [Service Fabric REST APIs](/rest/api/servicefabric/)
service-health Service Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/service-notifications.md
For more information on the various classes of service health notifications, see
1. In the [Azure portal](https://portal.azure.com), select **Monitor**.
- ![Screenshot of Azure portal menu, with Monitor selected](./media/service-notifications/home-monitor.png)
- Azure Monitor brings together all your monitoring settings and data into one consolidated view. It first opens to the **Activity log** section.
-1. Select **Alerts**.
-
- ![Screenshot of Monitor Activity log, with Alerts selected](./media/service-notifications/service-health-summary.png)
+1. Select **Service health**.
-1. Select **+Add activity log alert**, and set up an alert to ensure you are notified for future service notifications. For more information, see [Create activity log alerts on service notifications](./alerts-activity-log-service-notifications-portal.md).
+1. Select **+Create/Add activity log alert**, and set up an alert to ensure you are notified for future service notifications. For more information, see [Create activity log alerts on service notifications](./alerts-activity-log-service-notifications-portal.md).
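If you prefer to script the alert instead of creating it in the portal, a rough Azure CLI sketch looks like the following; the alert name, resource group, subscription ID, and action group ID are placeholders, and the exact parameters should be checked against `az monitor activity-log alert create --help`.

```azurecli
az monitor activity-log alert create \
  --name "ServiceHealthAlert" \
  --resource-group "<resource-group>" \
  --scope "/subscriptions/<subscription-id>" \
  --condition "category=ServiceHealth" \
  --action-group "<action-group-resource-id>"
```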
## Next steps
-* Learn more about [activity log alerts](../azure-monitor/alerts/activity-log-alerts.md).
+* Learn more about [activity log alerts](https://learn.microsoft.com/azure/azure-monitor/alerts/alerts-types).
site-recovery Azure To Azure How To Enable Replication Ade Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms.md
Title: Enable replication for encrypted Azure VMs in Azure Site Recovery
+ Title: Enable replication for encrypted Azure VMs in Azure Site Recovery
description: This article describes how to configure replication for Azure Disk Encryption-enabled VMs from one Azure region to another by using Site Recovery.
Site Recovery requires the user to have permissions to create the key vault in t
To enable replication of Disk Encryption-enabled VMs from the Azure portal, the user needs the following permissions on both the **source region and target region** key vaults. - Key vault permissions
- - List, Create and Get
-
+ - List, Create and Get
+ - Key vault secret permissions
- - Secret Management Operations
- - Get, List and Set
-
+ - Secret Management Operations
+ - Get, List and Set
+ - Key vault key permissions (required only if the VMs use key encryption key to encrypt disk encryption keys)
- - Key Management Operations
- - Get, List and Create
- - Cryptographic Operations
- - Decrypt and Encrypt
+ - Key Management Operations
+ - Get, List and Create
+ - Cryptographic Operations
+ - Decrypt and Encrypt
To manage permissions, go to the key vault resource in the portal. Add the required permissions for the user. The following example shows how to enable permissions to the key vault *ContosoWeb2Keyvault*, which is in the source region.
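If you prefer to script this step, the same permissions can typically be granted with the Az PowerShell module when the vault uses access policies (not Azure RBAC). A minimal sketch, reusing the example vault and user names from this article:

```powershell
# Grant the key vault permissions listed above to the user who enables replication.
Set-AzKeyVaultAccessPolicy -VaultName "ContosoWeb2Keyvault" `
    -UserPrincipalName "dradmin@contoso.com" `
    -PermissionsToKeys get,list,create,decrypt,encrypt `
    -PermissionsToSecrets get,list,set
```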
Use the following procedure to replicate Azure Disk Encryption-enabled VMs to an
- **Resource group**: Select the resource group to which your source virtual machines belong. All the VMs in the selected resource group are listed for protection in the next step. - **Virtual machine deployment model**: Select the Azure deployment model of the source machines. - **Disaster recovery between availability zones**: Select **Yes** if you want to perform zonal disaster recovery on virtual machines.
-
+    :::image type="content" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/source.png" alt-text="Screenshot that highlights the fields needed to configure replication.":::
-1. Select **Next**.
+1. Select **Next**.
1. In **Virtual machines**, select each VM that you want to replicate. You can only select machines for which replication can be enabled. You can select up to ten VMs. Then, select **Next**. :::image type="content" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/virtual-machine-selection.png" alt-text="Screenshot that highlights where you select virtual machines.":::
Use the following procedure to replicate Azure Disk Encryption-enabled VMs to an
- If the resource group created by Site Recovery already exists, it's reused. - You can customize the resource group settings. - The location of the target resource group can be any Azure region, except the region in which the source VMs are hosted.
-
+ >[!Note]
- > You can also create a new target resource group by selecting **Create new**.
-
- :::image type="Location and resource group" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/resource-group.png" alt-text="Screenshot of Location and resource group.":::
+ > You can also create a new target resource group by selecting **Create new**.
+
+    :::image type="content" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/resource-group.png" alt-text="Screenshot of Location and resource group.":::
1. Under **Network**, - **Failover virtual network**: Select the failover virtual network. >[!Note] > You can also create a new failover virtual network by selecting **Create new**. - **Failover subnet**: Select the failover subnet.
-
- :::image type="Network" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/network.png" alt-text="Screenshot of Network.":::
+
+    :::image type="content" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/network.png" alt-text="Screenshot of Network.":::
1. **Storage**: Select **View/edit storage configuration**. **Customize target settings** page opens.
-
- :::image type="Storage" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/storage.png" alt-text="Screenshot of Storage.":::
-
+
+    :::image type="content" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/storage.png" alt-text="Screenshot of Storage.":::
+ - **Replica-managed disk**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks with the same storage type (Standard or premium) as the source VM's managed disk.
- - **Cache storage**: Site Recovery needs extra storage account called cache storage in the source region. All the changes happening on the source VMs are tracked and sent to cache storage account before replicating them to the target location. This storage account should be Standard.
-
+    - **Cache storage**: Site Recovery needs an extra storage account, called the cache storage account, in the source region. All changes on the source VMs are tracked and sent to the cache storage account before they're replicated to the target location. This storage account should be Standard.
+ 1. **Availability options**: Select appropriate availability option for your VM in the target region. If an availability set that was created by Site Recovery already exists, it's reused. Select **View/edit availability options** to view or edit the availability options. >[!NOTE] >- While configuring the target availability sets, configure different availability sets for differently sized VMs.
- >- You cannot change the availability type - single instance, availability set or availability zone, after you enable replication. You must disable and enable replication to change the availability type.
+    >- You cannot change the availability type (single instance, availability set, or availability zone) after you enable replication. You must disable and re-enable replication to change the availability type.
+
+    :::image type="content" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/availability-option.png" alt-text="Screenshot of availability option.":::
- :::image type="Availability option" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/availability-option.png" alt-text="Screenshot of availability option.":::
-
1. **Capacity reservation**: Capacity Reservation lets you purchase capacity in the recovery region, and then fail over to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](../virtual-machines/capacity-reservation-overview.md). Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. When failover is triggered, the new VM is created in the assigned Capacity Reservation Group.
-
+    :::image type="content" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/capacity-reservation.png" alt-text="Screenshot of capacity reservation.":::
-
+ 1. **Encryption settings**: Select **View/edit configuration** to configure the Disk Encryption and Key Encryption key Vaults. - **Disk encryption key vaults**: By default, Site Recovery creates a new key vault in the target region. It has an *asr* suffix that's based on the source VM disk encryption keys. If a key vault that was created by Azure Site Recovery already exists, it's reused. - **Key encryption key vaults**: By default, Site Recovery creates a new key vault in the target region. The name has an *asr* suffix that's based on the source VM key encryption keys. If a key vault created by Azure Site Recovery already exists, it's reused.
-
+    :::image type="content" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/encryption-settings.png" alt-text="Screenshot of encryption settings."::: 1. Select **Next**.
Use the following procedure to replicate Azure Disk Encryption-enabled VMs to an
1. Under **Replication policy**, - **Replication policy**: Select the replication policy. Defines the settings for recovery point retention history and app-consistent snapshot frequency. By default, Site Recovery creates a new replication policy with default settings of 24 hours for recovery point retention. - **Replication group**: Create replication group to replicate VMs together to generate Multi-VM consistent recovery points. Note that enabling multi-VM consistency can impact workload performance and should only be used if machines are running the same workload and you need consistency across multiple machines.
- 1. Under **Extension settings**,
+ 1. Under **Extension settings**,
- Select **Update settings** and **Automation account**.
-
+    :::image type="content" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/manage.png" alt-text="Screenshot that displays the manage tab.":::
You can use [a script](#copy-disk-encryption-keys-to-the-dr-region-by-using-the-
## <a id="trusted-root-certificates-error-code-151066"></a>Troubleshoot key vault permission issues during Azure-to-Azure VM replication
-Azure Site Recovery requires at least read permission on the Source region Key vault and write permission on the target region key vault to read the secret and copy it to the target region key vault.
+Azure Site Recovery requires at least read permission on the source region key vault and write permission on the target region key vault, so that it can read the secret and copy it to the target region key vault.
**Cause 1:** You don't have "GET" permission on the **source region Key vault** to read the keys. </br> **How to fix:** Regardless of whether you are a subscription admin or not, it is important that you have get permission on the key vault.
-1. Go to source region Key vault which in this example is "ContososourceKeyvault" > **Access policies**
+1. Go to the source region key vault, which in this example is "ContososourceKeyvault" > **Access policies**
2. Under **Select Principal** add your user name for example: "dradmin@contoso.com"
-3. Under **Key permissions** select GET
-4. Under **Secret Permission** select GET
+3. Under **Key permissions** select GET
+4. Under **Secret Permission** select GET
5. Save the access policy **Cause 2:** You don't have required permission on the **Target region Key vault** to write the keys. </br>
site-recovery How To Enable Replication Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-enable-replication-proximity-placement-groups.md
You can easily update your selection of a proximity placement group in the DR re
- Make sure that you have the Azure PowerShell Az module. If you need to install or upgrade Azure PowerShell, follow the [guide to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). - The minimum Azure PowerShell Az version should be 4.1.0. To check the current version, use the following command:
- ```
- Get-InstalledModule -Name Az
- ```
+ ```powershell
+ Get-InstalledModule -Name Az
+ ```
> [!NOTE] > Make sure that you have the unique ID of the target proximity placement group handy. The command that you use depends on whether you're [creating a new proximity placement group](../virtual-machines/windows/proximity-placement-groups.md#create-a-proximity-placement-group) or [using an existing proximity placement group](../virtual-machines/windows/proximity-placement-groups.md#list-proximity-placement-groups).
You can easily update your selection of a proximity placement group in the DR re
5. [Install the provider and agent](./hyper-v-azure-powershell-resource-manager.md#step-5-install-the-provider-and-agent). 6. [Create a replication policy](./hyper-v-azure-powershell-resource-manager.md#step-6-create-a-replication-policy). 7. Enable replication by using the following steps:
-
- a. Retrieve the protectable item that corresponds to the VM you want to protect:
+
+ 1. Retrieve the protectable item that corresponds to the VM you want to protect:
```azurepowershell $VMFriendlyName = "Fabrikam-app" #Name of the VM $ProtectableItem = Get-AzRecoveryServicesAsrProtectableItem -ProtectionContainer $protectionContainer -FriendlyName $VMFriendlyName ```
- b. Protect the VM. If the VM you're protecting has more than one disk attached to it, specify the operating system disk by using the `OSDiskName` parameter:
-
+
+ 1. Protect the VM. If the VM you're protecting has more than one disk attached to it, specify the operating system disk by using the `OSDiskName` parameter:
+ ```azurepowershell $OSType = "Windows" # "Windows" or "Linux" $DRjob = New-AzRecoveryServicesAsrReplicationProtectedItem -ProtectableItem $VM -Name $VM.Name -ProtectionContainerMapping $ProtectionContainerMapping -RecoveryAzureStorageAccountId $StorageAccountID -OSDiskName $OSDiskNameList[$i] -OS $OSType -RecoveryResourceGroupId $ResourceGroupID -RecoveryProximityPlacementGroupId $targetPpg.Id ```
- c. Wait for the VMs to reach a protected state after the initial replication. This process can take a while, depending on factors like the amount of data to be replicated and the available upstream bandwidth to Azure.
+
+ 1. Wait for the VMs to reach a protected state after the initial replication. This process can take a while, depending on factors like the amount of data to be replicated and the available upstream bandwidth to Azure.
When a protected state is in place, `State` and `StateDescription` for the job are updated as follows:
-
+ ```azurepowershell $DRjob = Get-AzRecoveryServicesAsrJob -Job $DRjob $DRjob | Select-Object -ExpandProperty State $DRjob | Select-Object -ExpandProperty StateDescription ```
- d. Update recovery properties (such as the VM role size) and the Azure network to which to attach the VM NIC after failover:
+
+ 1. Update recovery properties (such as the VM role size) and the Azure network to which to attach the VM NIC after failover:
```azurepowershell $nw1 = Get-AzVirtualNetwork -Name "FailoverNw" -ResourceGroupName "MyRG"
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
When a script runs, it injects a recovery plan context to the runbook. The conte
The following example shows a context variable:
-```
-{"RecoveryPlanName":"hrweb-recovery",
+```json
+{
+"RecoveryPlanName":"hrweb-recovery",
"FailoverType":"Test", "FailoverDirection":"PrimaryToSecondary", "GroupId":"1", "VmMap":{"7a1069c6-c1d6-49c5-8c5d-33bfce8dd183":
- { "SubscriptionId":"7a1111111-c1d6-49c5-8c5d-111ce8dd183",
- "ResourceGroupName":"ContosoRG",
- "CloudServiceName":"pod02hrweb-Chicago-test",
- "RoleName":"Fabrikam-Hrweb-frontend-test",
- "RecoveryPointId":"TimeStamp"}
- }
+ { "SubscriptionId":"7a1111111-c1d6-49c5-8c5d-111ce8dd183",
+ "ResourceGroupName":"ContosoRG",
+ "CloudServiceName":"pod02hrweb-Chicago-test",
+ "RoleName":"Fabrikam-Hrweb-frontend-test",
+ "RecoveryPointId":"TimeStamp"}
+ }
} ```
In this example, a script takes the input of a Network Security Group (NSG) and
1. So that the script can detect which recovery plan is running, use this recovery plan context:
- ```
+ ```powershell
workflow AddPublicIPAndNSG { param ( [parameter(Mandatory=$false)]
In this example, a script takes the input of a Network Security Group (NSG) and
    )
    $RPName = $RecoveryPlanContext.RecoveryPlanName
+ }
```
-2. Note the NSG name and resource group. You use these variables as inputs for recovery plan scripts.
+1. Note the NSG name and resource group. You use these variables as inputs for recovery plan scripts.
+ 1. In the Automation account assets, create a variable to store the NSG name. Add a prefix to the variable name with the name of the recovery plan.
- ![Create an NSG name variable](media/site-recovery-runbook-automation-new/var1.png)
+ ![Create an NSG name variable](media/site-recovery-runbook-automation-new/var1.png)
2. Create a variable to store the resource group name for the NSG resource. Add a prefix to the variable name with the name of the recovery plan.
- ![Create an NSG resource group name](media/site-recovery-runbook-automation-new/var2.png)
-
+ ![Create an NSG resource group name](media/site-recovery-runbook-automation-new/var2.png)
-3. In the script, use this reference code to get the variable values:
- ```
- $NSGValue = $RecoveryPlanContext.RecoveryPlanName + "-NSG"
- $NSGRGValue = $RecoveryPlanContext.RecoveryPlanName + "-NSGRG"
+3. In the script, use this reference code to get the variable values:
- $NSGnameVar = Get-AutomationVariable -Name $NSGValue
- $RGnameVar = Get-AutomationVariable -Name $NSGRGValue
- ```
+ ```powershell
+ $NSGValue = $RecoveryPlanContext.RecoveryPlanName + "-NSG"
+ $NSGRGValue = $RecoveryPlanContext.RecoveryPlanName + "-NSGRG"
-4. Use the variables in the runbook to apply the NSG to the network interface of the failed-over VM:
+ $NSGnameVar = Get-AutomationVariable -Name $NSGValue
+ $RGnameVar = Get-AutomationVariable -Name $NSGRGValue
+ ```
- ```
- InlineScript {
- if (($Using:NSGname -ne $Null) -And ($Using:NSGRGname -ne $Null)) {
- $NSG = Get-AzureRmNetworkSecurityGroup -Name $Using:NSGname -ResourceGroupName $Using:NSGRGname
- Write-output $NSG.Id
- #Apply the NSG to a network interface
- #$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName TestRG -Name TestVNet
- #Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name FrontEnd `
- # -AddressPrefix 192.168.1.0/24 -NetworkSecurityGroup $NSG
- }
- }
- ```
+4. Use the variables in the runbook to apply the NSG to the network interface of the failed-over VM:
+
+ ```powershell
+ InlineScript {
+ if (($Using:NSGname -ne $Null) -And ($Using:NSGRGname -ne $Null)) {
+ $NSG = Get-AzureRmNetworkSecurityGroup -Name $Using:NSGname -ResourceGroupName $Using:NSGRGname
+ Write-output $NSG.Id
+ #Apply the NSG to a network interface
+ #$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName TestRG -Name TestVNet
+ #Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name FrontEnd `
+ # -AddressPrefix 192.168.1.0/24 -NetworkSecurityGroup $NSG
+ }
+ }
+ ```
For each recovery plan, create independent variables so that you can reuse the script. Add a prefix by using the recovery plan name.
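For example, here's a minimal sketch of creating those prefixed variables for the *hrweb-recovery* plan shown earlier. The Automation account name (*ContosoAA*) and the NSG values are placeholders, not values from this article:

```powershell
# Placeholder names: the variable names follow the "<RecoveryPlanName>-NSG" and
# "<RecoveryPlanName>-NSGRG" pattern that the runbook resolves at run time.
New-AzureRmAutomationVariable -ResourceGroupName "ContosoRG" -AutomationAccountName "ContosoAA" -Name "hrweb-recovery-NSG" -Value "MyNSG" -Encrypted $false
New-AzureRmAutomationVariable -ResourceGroupName "ContosoRG" -AutomationAccountName "ContosoAA" -Name "hrweb-recovery-NSGRG" -Value "MyNSG-RG" -Encrypted $false
```

Repeat the two commands with a different prefix for each additional recovery plan.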
We do this by specifying multiple values, using Azure PowerShell.
1. In PowerShell, sign in to your Azure subscription:
- ```
- Connect-AzureRmAccount
- $sub = Get-AzureRmSubscription -Name <SubscriptionName>
- $sub | Select-AzureRmSubscription
- ```
+ ```powershell
+ Connect-AzureRmAccount
+ $sub = Get-AzureRmSubscription -Name <SubscriptionName>
+ $sub | Select-AzureRmSubscription
+ ```
2. To store the parameters, create the complex variable using the name of the recovery plan:
- ```
- $VMDetails = @{"VMGUID"=@{"ResourceGroupName"="RGNameOfNSG";"NSGName"="NameOfNSG"};"VMGUID2"=@{"ResourceGroupName"="RGNameOfNSG";"NSGName"="NameOfNSG"}}
- New-AzureRmAutomationVariable -ResourceGroupName <RG of Automation Account> -AutomationAccountName <AA Name> -Name <RecoveryPlanName> -Value $VMDetails -Encrypted $false
- ```
+ ```powershell
+ $VMDetails = @{"VMGUID"=@{"ResourceGroupName"="RGNameOfNSG";"NSGName"="NameOfNSG"};"VMGUID2"=@{"ResourceGroupName"="RGNameOfNSG";"NSGName"="NameOfNSG"}}
+ New-AzureRmAutomationVariable -ResourceGroupName <RG of Automation Account> -AutomationAccountName <AA Name> -Name <RecoveryPlanName> -Value $VMDetails -Encrypted $false
+ ```
3. In this complex variable, each key (shown as **VMGUID**) is the VM ID of a protected VM. To get the VM ID, in the Azure portal, view the VM properties. The following screenshot shows a variable that stores the details of two VMs:
- ![Use the VM ID as the GUID](media/site-recovery-runbook-automation-new/vmguid.png)
+ ![Use the VM ID as the GUID](media/site-recovery-runbook-automation-new/vmguid.png)
4. Use this variable in your runbook. If the indicated VM GUID is found in the recovery plan context, apply the NSG on the VM:
- ```
- $VMDetailsObj = (Get-AutomationVariable -Name $RecoveryPlanContext.RecoveryPlanName).ToObject([hashtable])
- ```
+ ```powershell
+ $VMDetailsObj = (Get-AutomationVariable -Name $RecoveryPlanContext.RecoveryPlanName).ToObject([hashtable])
+ ```
5. In your runbook, loop through the VMs of the recovery plan context. Check whether the VM exists in **$VMDetailsObj**. If it exists, access the properties of the variable to apply the NSG:
- ```
- $VMinfo = $RecoveryPlanContext.VmMap | Get-Member | Where-Object MemberType -EQ NoteProperty | select -ExpandProperty Name
- $vmMap = $RecoveryPlanContext.VmMap
-
- foreach($VMID in $VMinfo) {
- $VMDetails = $VMDetailsObj[$VMID].ToObject([hashtable]);
- Write-output $VMDetails
- if ($VMDetails -ne $Null) { #If the VM exists in the context, this will not be Null
- $VM = $vmMap.$VMID
- # Access the properties of the variable
- $NSGname = $VMDetails.NSGName
- $NSGRGname = $VMDetails.NSGResourceGroupName
-
- # Add code to apply the NSG properties to the VM
- }
- }
- ```
+ ```powershell
+ $VMinfo = $RecoveryPlanContext.VmMap | Get-Member | Where-Object MemberType -EQ NoteProperty | select -ExpandProperty Name
+ $vmMap = $RecoveryPlanContext.VmMap
+
+ foreach ($VMID in $VMinfo) {
+ $VMDetails = $VMDetailsObj[$VMID].ToObject([hashtable]);
+ Write-output $VMDetails
+ if ($VMDetails -ne $Null) { #If the VM exists in the context, this will not be Null
+ $VM = $vmMap.$VMID
+ # Access the properties of the variable
+ $NSGname = $VMDetails.NSGName
+ $NSGRGname = $VMDetails.NSGResourceGroupName
+
+ # Add code to apply the NSG properties to the VM
+ }
+ }
+ ```
You can use the same script for different recovery plans. Enter different parameters by storing the value that corresponds to a recovery plan in different variables.
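A sketch of that reuse, assuming a hypothetical second recovery plan named *payroll-recovery* (the Automation account, resource group, and NSG names are placeholders):

```powershell
# One complex variable per recovery plan; the variable name matches the plan name,
# so the runbook above resolves it through $RecoveryPlanContext.RecoveryPlanName.
$PayrollVMDetails = @{"<VMGUID>"=@{"ResourceGroupName"="RGNameOfNSG";"NSGName"="NameOfNSG"}}
New-AzureRmAutomationVariable -ResourceGroupName "ContosoRG" -AutomationAccountName "ContosoAA" -Name "payroll-recovery" -Value $PayrollVMDetails -Encrypted $false
```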
site-recovery Vmware Azure Install Linux Master Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-linux-master-target.md
Post comments or questions at the end of this article or on the [Microsoft Q&A q
## Prerequisites * To choose the host on which to deploy the master target, determine if the failback is going to be to an existing on-premises virtual machine or to a new virtual machine.
- * For an existing virtual machine, the host of the master target should have access to the data stores of the virtual machine.
- * If the on-premises virtual machine does not exist (in case of Alternate Location Recovery), the failback virtual machine is created on the same host as the master target. You can choose any ESXi host to install the master target.
+ * For an existing virtual machine, the host of the master target should have access to the data stores of the virtual machine.
+ * If the on-premises virtual machine does not exist (in case of Alternate Location Recovery), the failback virtual machine is created on the same host as the master target. You can choose any ESXi host to install the master target.
* The master target should be on a network that can communicate with the process server and the configuration server. * The version of the master target must be equal to or earlier than the versions of the process server and the configuration server. For example, if the version of the configuration server is 9.4, the version of the master target can be 9.4 or 9.3 but not 9.5. * The master target can only be a VMware virtual machine and not a physical server.
To apply custom configuration changes, use the following steps as a ROOT user:
``` 3. Run the following command to run the script.
-
+
   ```bash
   sudo ./ApplyCustomChanges.sh
   ```
Use the following steps to create a retention disk:
![Multipath ID](./media/vmware-azure-install-linux-master-target/image27.png) 3. Format the drive, and then create a file system on the new drive: **mkfs.ext4 /dev/mapper/\<Retention disk's multipath id>**.
-
+ ![File system](./media/vmware-azure-install-linux-master-target/image23-centos.png) 4. After you create the file system, mount the retention disk.
Use the following steps to create a retention disk:
``` 5. Create the **fstab** entry to mount the retention drive every time the system starts.
-
- ```bash
- sudo vi /etc/fstab
- ```
-
- Select **Insert** to begin editing the file. Create a new line, and then insert the following text. Edit the disk multipath ID based on the highlighted multipath ID from the previous command.
+
+ ```bash
+ sudo vi /etc/fstab
+ ```
- **/dev/mapper/\<Retention disks multipath id> /mnt/retention ext4 rw 0 0**
+ Select **Insert** to begin editing the file. Create a new line, and then insert the following text. Edit the disk multipath ID based on the highlighted multipath ID from the previous command.
- Select **Esc**, and then type **:wq** (write and quit) to close the editor window.
+ **/dev/mapper/\<Retention disks multipath id> /mnt/retention ext4 rw 0 0**
+
+ Select **Esc**, and then type **:wq** (write and quit) to close the editor window.
### Install the master target
Use the following steps to create a retention disk:
2. Copy the passphrase from **C:\ProgramData\Microsoft Azure Site Recovery\private\connection.passphrase** on the configuration server. Then save it as **passphrase.txt** in the same local directory by running the following command:
- ```bash
- sudo echo <passphrase> >passphrase.txt
- ```
+ ```bash
+ sudo echo <passphrase> >passphrase.txt
+ ```
Example:
Use the following steps to create a retention disk:
3. Note down the configuration server's IP address. Run the following command to register the server with the configuration server. ```bash
- sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <ConfigurationServer IP Address> -P passphrase.txt
+ sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <ConfigurationServer IP Address> -P passphrase.txt
```
- Example:
-
+ Example:
+
```bash
- sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i 104.40.75.37 -P passphrase.txt
+ sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i 104.40.75.37 -P passphrase.txt
``` Wait until the script finishes. If the master target registers successfully, the master target is listed on the **Site Recovery Infrastructure** page of the portal.
Wait until the script finishes. If the master target registers successfully, the
1. Run the following command to install the master target. For the agent role, choose **master target**. ```bash
- sudo ./install
+ sudo ./install
``` 2. Choose the default location for installation, and then select **Enter** to continue.
- ![Choosing a default location for installation of master target](./media/vmware-azure-install-linux-master-target/image17.png)
+ ![Choosing a default location for installation of master target](./media/vmware-azure-install-linux-master-target/image17.png)
After the installation has finished, register the configuration server by using the command line.
After the installation has finished, register the configuration server by using
2. Run the following command to register the server with the configuration server. ```bash
- sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh
+ sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh
``` Wait until the script finishes. If the master target is registered successfully, the master target is listed on the **Site Recovery Infrastructure** page of the portal.
From 9.42 version, ASR supports Linux master target server on Ubuntu 20.04. To u
* The master target should not have any snapshots on the virtual machine. If there are snapshots, failback fails. * Due to some custom NIC configurations, the network interface is disabled during startup, and the master target agent cannot initialize. Make sure that the following properties are correctly set. Check these properties in the Ethernet card file's /etc/network/interfaces.
- * auto eth0
- * iface eth0 inet dhcp <br>
+ * auto eth0
+ * iface eth0 inet dhcp <br>
Restart the networking service using the following command: <br>
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Locate the installer files for the server's operating system using the followi
```cmd
- .\UnifiedAgentInstaller.exe /Platform vmware /Silent /Role MS /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery"
+ .\UnifiedAgentInstaller.exe /Platform vmware /Silent /Role MS /CSType CSPrime /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery"
``` Once the installation is complete, copy the string that is generated alongside the parameter *Agent Config Input*. This string is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file).
storage Blob V11 Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-dotnet.md
description: View code samples that use the Azure Blob Storage client library for .NET version 11.x. -+ Last updated 04/03/2023
storage Blob V11 Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-javascript.md
description: View code samples that use the Azure Blob Storage client library for JavaScript version 11.x. -+ Last updated 04/03/2023
storage Blob V2 Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v2-samples-python.md
description: View code samples that use the Azure Blob Storage client library for Python version 2.1. -+ Last updated 04/03/2023
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md
description: The Blob Storage client library supports client-side encryption and
-+ Last updated 12/12/2022
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
Create a directory on your Linux system and then mount the container in the stor
- For a temporary mount that doesn't persist across reboots, run the following command: ```
- mount -t aznfs -o sec=sys,vers=3,nolock,proto=tcp <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> /nfsdatain
+ mount -t aznfs -o sec=sys,vers=3,nolock,proto=tcp <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> /nfsdata
``` > [!TIP]
storage Quickstart Blobs C Plus Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-blobs-c-plus-plus.md
Last updated 06/21/2021-+ ms.devlang: cpp
storage Sas Service Create Dotnet Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for .NET. -+ Last updated 06/22/2023
storage Sas Service Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for .NET. -+ Last updated 06/22/2023
storage Sas Service Create Java Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for Java. -+ Last updated 06/23/2023
storage Sas Service Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for Java. -+ Last updated 06/23/2023
storage Sas Service Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-javascript.md
description: Learn how to create a service shared access signature (SAS) for a container or blob using the Azure Blob Storage client library for JavaScript. -+ Last updated 01/19/2023
storage Sas Service Create Python Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for Python. -+ Last updated 06/09/2023
storage Sas Service Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for Python. -+ Last updated 06/09/2023
storage Simulate Primary Region Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/simulate-primary-region-failure.md
description: Simulate an error in reading data from the primary region when the storage account is configured for read-access geo-zone-redundant storage (RA-GZRS). -+ Last updated 09/06/2022
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
description: Learn how to use the .NET client library to create a read-only snap
-+ Last updated 08/27/2020 ms.devlang: csharp
storage Storage Blob Account Delegation Sas Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-account-delegation-sas-create-javascript.md
description: Create and use account SAS tokens in a JavaScript application that
-+ Last updated 11/30/2022
storage Storage Blob Append https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md
Last updated 03/28/2022-+ ms.devlang: csharp, python
storage Storage Blob Client Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-client-management.md
description: Learn how to create and manage clients that interact with data reso
-+ Last updated 02/08/2023
storage Storage Blob Container Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-java.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 11/16/2022
storage Storage Blob Container Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md
description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library. -+ Last updated 11/30/2022
storage Storage Blob Container Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-python.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 01/24/2023
storage Storage Blob Container Create Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-typescript.md
description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library using TypeScript. -+ Last updated 03/21/2023
storage Storage Blob Container Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 07/25/2022
storage Storage Blob Container Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-java.md
description: Learn how to delete and restore a blob container in your Azure Stor
-+ Last updated 11/15/2022
storage Storage Blob Container Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md
-+ Last updated 11/30/2022 ms.devlang: javascript
storage Storage Blob Container Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-python.md
description: Learn how to delete and restore a blob container in your Azure Stor
-+ Last updated 01/24/2023
storage Storage Blob Container Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-typescript.md
-+ Last updated 03/21/2023 ms.devlang: TypeScript
storage Storage Blob Container Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md
-+ Last updated 03/28/2022
storage Storage Blob Container Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md
-+ Last updated 11/15/2022 ms.devlang: java
storage Storage Blob Container Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-javascript.md
-+ Last updated 05/01/2023 ms.devlang: javascript
storage Storage Blob Container Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-python.md
-+ Last updated 01/24/2023 ms.devlang: python
storage Storage Blob Container Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-typescript.md
-+ Last updated 05/01/2023 ms.devlang: typescript
storage Storage Blob Container Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md
-+ Last updated 04/10/2023 ms.devlang: csharp
storage Storage Blob Container Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md
description: Learn how to set and retrieve system properties and store custom me
-+ Last updated 12/22/2022
storage Storage Blob Container Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md
-+ Last updated 11/30/2022
storage Storage Blob Container Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-python.md
description: Learn how to set and retrieve system properties and store custom me
-+ Last updated 01/24/2023
storage Storage Blob Container Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-typescript.md
-+ Last updated 03/21/2023
storage Storage Blob Container Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md
-+ Last updated 03/28/2022 ms.devlang: csharp
storage Storage Blob Container User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-dotnet.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/22/2023
storage Storage Blob Container User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-java.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/12/2023
storage Storage Blob Container User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-python.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/09/2023
storage Storage Blob Containers List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md
description: Learn how to list blob containers in your Azure Storage account usi
-+ Last updated 11/16/2022
storage Storage Blob Containers List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md
-+ Last updated 11/30/2022
storage Storage Blob Containers List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-python.md
description: Learn how to list blob containers in your Azure Storage account usi
-+ Last updated 01/24/2023
storage Storage Blob Containers List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-typescript.md
-+ Last updated 03/21/2023
storage Storage Blob Containers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md
-+ Last updated 03/28/2022 ms.devlang: csharp
storage Storage Blob Copy Async Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-dotnet.md
Last updated 04/11/2023-+ ms.devlang: csharp
storage Storage Blob Copy Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md
Last updated 04/18/2023-+ ms.devlang: java
storage Storage Blob Copy Async Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Async Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-python.md
Last updated 04/28/2023-+ ms.devlang: python
storage Storage Blob Copy Async Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-java.md
Last updated 04/18/2023-+ ms.devlang: java
storage Storage Blob Copy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-python.md
Last updated 04/28/2023-+ ms.devlang: python
storage Storage Blob Copy Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy Url Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-dotnet.md
Last updated 04/11/2023-+ ms.devlang: csharp
storage Storage Blob Copy Url Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md
Last updated 04/18/2023-+ ms.devlang: java
storage Storage Blob Copy Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Url Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-python.md
Last updated 04/28/2023-+ ms.devlang: python
storage Storage Blob Copy Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
Last updated 04/14/2023-+ ms.devlang: csharp
storage Storage Blob Create User Delegation Sas Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-create-user-delegation-sas-javascript.md
-+ Last updated 07/15/2022
storage Storage Blob Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md
Last updated 05/11/2023-+ ms.devlang: java
storage Storage Blob Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-python.md
Last updated 05/11/2023-+ ms.devlang: python
storage Storage Blob Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md
Last updated 05/11/2023-+ ms.devlang: csharp
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
-+ Last updated 07/12/2023 ms.devlang: csharp
storage Storage Blob Download Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md
Last updated 11/16/2022-+ ms.devlang: java
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
Last updated 04/21/2023-+ ms.devlang: javascript
storage Storage Blob Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-python.md
Last updated 06/02/2023-+ ms.devlang: python
storage Storage Blob Download Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-typescript.md
Last updated 06/21/2023-+ ms.devlang: typescript
storage Storage Blob Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md
Last updated 05/23/2023-+ ms.devlang: csharp
storage Storage Blob Get Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md
Last updated 09/13/2022-+ ms.devlang: javascript
storage Storage Blob Get Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
-+ Last updated 07/12/2023
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
-+ Last updated 11/30/2022
storage Storage Blob Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md
-+ Last updated 12/13/2022 ms.devlang: java
storage Storage Blob Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-javascript.md
-+ Last updated 05/01/2023 ms.devlang: javascript
storage Storage Blob Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-python.md
-+ Last updated 01/25/2023 ms.devlang: python
storage Storage Blob Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-typescript.md
-+ Last updated 05/01/2023 ms.devlang: typescript
storage Storage Blob Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease.md
-+ Last updated 04/10/2023 ms.devlang: csharp
storage Storage Blob Object Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-object-model.md
-+ Last updated 03/07/2023
storage Storage Blob Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md
Last updated 12/22/2022-+ ms.devlang: java
storage Storage Blob Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-python.md
Last updated 01/25/2023-+ ms.devlang: python
storage Storage Blob Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md
Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md
-+ Last updated 07/12/2023
storage Storage Blob Query Endpoint Srp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-query-endpoint-srp.md
-+ Last updated 06/07/2023
storage Storage Blob Tags Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md
Last updated 11/16/2022-+ ms.devlang: java
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Tags Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-python.md
Last updated 01/25/2023-+ ms.devlang: python
storage Storage Blob Tags Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md
Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Typescript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-typescript-get-started.md
-+ Last updated 03/21/2023
storage Storage Blob Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md
Last updated 06/16/2023-+ ms.devlang: java
storage Storage Blob Upload Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md
Last updated 06/20/2023-+ ms.devlang: javascript
storage Storage Blob Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-python.md
Last updated 07/07/2023-+ ms.devlang: python
storage Storage Blob Upload Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-typescript.md
Last updated 06/21/2023-+ ms.devlang: typescript
storage Storage Blob Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md
Last updated 07/07/2023-+ ms.devlang: csharp
storage Storage Blob Use Access Tier Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-dotnet.md
-+ Last updated 07/03/2023 ms.devlang: csharp
storage Storage Blob Use Access Tier Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md
-+ Last updated 07/11/2023 ms.devlang: java
storage Storage Blob Use Access Tier Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md
-+ Last updated 06/28/2023 ms.devlang: javascript
storage Storage Blob Use Access Tier Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-python.md
-+ Last updated 07/05/2023 ms.devlang: python
storage Storage Blob Use Access Tier Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-typescript.md
-+ Last updated 06/28/2023 ms.devlang: typescript
storage Storage Blob User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-dotnet.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/22/2023
storage Storage Blob User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-java.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/12/2023
storage Storage Blob User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-python.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/06/2023
storage Storage Blobs List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-java.md
description: Learn how to list blobs in your storage account using the Azure Sto
-+ Last updated 11/16/2022
storage Storage Blobs List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md
-+ Last updated 11/30/2022
storage Storage Blobs List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-python.md
description: Learn how to list blobs in your storage account using the Azure Sto
-+ Last updated 01/25/2023
storage Storage Blobs List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-typescript.md
-+ Last updated 03/21/2023
storage Storage Blobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md
-+ Last updated 02/14/2023 ms.devlang: csharp
storage Storage Blobs Tune Upload Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-python.md
description: Learn how to tune your uploads and downloads for better performance
-+ Last updated 07/07/2023 ms.devlang: python
storage Storage Blobs Tune Upload Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download.md
description: Learn how to tune your uploads and downloads for better performance
-+ Last updated 12/09/2022 ms.devlang: csharp
storage Storage Create Geo Redundant Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-create-geo-redundant-storage.md
description: Use read-access geo-zone-redundant (RA-GZRS) storage to make your a
-+ Last updated 09/02/2022
storage Storage Encrypt Decrypt Blobs Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md
Title: Encrypt and decrypt blobs using Azure Key Vault
description: Learn how to encrypt and decrypt a blob using client-side encryption with Azure Key Vault. -+ Last updated 11/2/2022
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
description: In this quickstart, you will learn how to use the Azure Blob Storag
Last updated 11/09/2022-+ ms.devlang: csharp
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
description: In this quickstart, you learn how to use the Azure Blob Storage cli
Last updated 02/13/2023-+ ms.devlang: golang
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
Last updated 10/24/2022-+ ms.devlang: java
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
description: In this quickstart, you learn how to use the Azure Blob Storage for
Last updated 10/28/2022-+ ms.devlang: javascript
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Last updated 10/24/2022 -+ ms.devlang: python
storage Storage Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy.md
description: Learn about retry policies and how to implement them for Blob Storage. This article helps you set up a retry policy for Blob Storage requests using the Azure Storage client library for .NET. -+ Last updated 12/14/2022
storage Versioning Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-enable.md
To enable blob versioning for a storage account in the Azure portal:
:::image type="content" source="media/versioning-enable/portal-enable-versioning.png" alt-text="Screenshot showing how to enable blob versioning in Azure portal":::
+> [!IMPORTANT]
+> If you set the **Delete versions after** option, a rule is automatically added to the lifecycle management policy of the storage account. Once that rule is added, the **Delete versions after** option no longer appears on the **Data protection** configuration page.
+>
+> You can make that option reappear in the **Data protection** page by removing the rule. If your lifecycle management policy contains other rules that delete versions, then you'll have to remove those rules as well before the **Delete versions after** option can reappear.
+ # [PowerShell](#tab/powershell) To enable blob versioning for a storage account with PowerShell, first install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module version 2.3.0 or later. Then call the [Update-AzStorageBlobServiceProperty](/powershell/module/az.storage/update-azstorageblobserviceproperty) command to enable versioning, as shown in the following example. Remember to replace the values in angle brackets with your own values:
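A minimal sketch of such a call, with placeholder values in angle brackets:

```powershell
# Enable blob versioning on an existing storage account (placeholder names).
Update-AzStorageBlobServiceProperty `
    -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>" `
    -IsVersioningEnabled $true
```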
For more information about deploying resources with templates in the Azure porta
-> [!IMPORTANT]
-> Currently, once you configure the retention, there will be a rule created in the lifecycle management policy to delete the older version based on the retention period set. Thereafter, the settings shall not be visible in the data protection options. In case you want to change the retention period, you will have to delete the rule, which shall make the settings visible for editing again. In case you have any other rule already to delete the versions, then also this setting shall not appear.
- ## List blob versions To display a blob's versions, use the Azure portal, PowerShell, or Azure CLI. You can also list a blob's versions using one of the Blob Storage SDKs.
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Previously updated : 07/20/2023 Last updated : 07/30/2023
Two important exceptions to consider are:
#### Classic storage accounts
-Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as *classic*, is not supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](../../virtual-machines/migration-classic-resource-manager-overview.md#migration-of-storage-accounts). Your storage account must be accessible to perform the upgrade, so the primary region cannot currently be in a failed state.
+Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as *classic*, is not supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region cannot currently be in a failed state.
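After migration, a customer-managed failover can be initiated with Azure PowerShell; a minimal sketch, with placeholder names:

```powershell
# Initiate a customer-managed (account) failover to the secondary region (placeholder names).
Invoke-AzStorageAccountFailover -ResourceGroupName "<resource-group>" -Name "<storage-account>"
```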
#### Azure Data Lake Storage Gen2
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Previously updated : 07/20/2023 Last updated : 07/30/2023
Two important exceptions to consider are:
#### Classic storage accounts
-Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as *classic*, is not supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](../../virtual-machines/migration-classic-resource-manager-overview.md#migration-of-storage-accounts). Your storage account must be accessible to perform the upgrade, so the primary region cannot currently be in a failed state.
+Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as *classic*, is not supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region cannot currently be in a failed state.
#### Azure Data Lake Storage Gen2
stream-analytics Job Config Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/job-config-json.md
The following fields are supported in the *JobConfig.json* file used to [create
"EventsLateArrivalMaxDelayInSeconds": "integer", "EventsOutOfOrderMaxDelayInSeconds": "integer", "EventsOutOfOrderPolicy": "string",
- "StreamingUnits": "integer",
+ "Sku": {
+ "Name": "string",
+ "StreamingUnits": "integer"
+ },
"CompatibilityLevel": "string", "UseSystemAssignedIdentity": "boolean", "GlobalStorage": {
The following fields are supported in the *JobConfig.json* file used to [create
|EventsLateArrivalMaxDelayInSeconds|integer|No|The maximum tolerable delay in seconds where events arriving late could be included. Supported range is -1 to 1814399 (20.23:59:59 days) and -1 is used to specify indefinite time. If the property is absent, it's interpreted to have a value of -1.|
|EventsOutOfOrderMaxDelayInSeconds|integer|No|The maximum tolerable delay in seconds where out-of-order events can be adjusted to be back in order.|
|EventsOutOfOrderPolicy|string|No|Indicates the policy to apply to events that arrive out of order in the input event stream. - Adjust or Drop|
-|StreamingUnits|integer|Yes|Specifies the number of streaming units that the streaming job uses.|
+|Sku.Name|string|No|Specifies the SKU name of the job. Acceptable values are "Standard" and "StandardV2".|
+|Sku.StreamingUnits|integer|Yes|Specifies the number of streaming units that the streaming job uses. [Learn more](stream-analytics-streaming-unit-consumption.md).|
|CompatibilityLevel|string|No|Controls certain runtime behaviors of the streaming job. - Acceptable values are "1.0", "1.1", "1.2"|
|UseSystemAssignedIdentity|boolean|No|Set true to enable this job to communicate with other Azure services as itself using a Managed Azure Active Directory Identity.|
|GlobalStorage.AccountName|string|No|Global storage account is used for storing content related to your stream analytics job, such as SQL reference data snapshots.|
stream-analytics Stream Analytics Real Time Fraud Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-real-time-fraud-detection.md
When you use a join with streaming data, the join must provide some limits on ho
1. Paste the following query in the query editor:
- ```SQL
- SELECT System.Timestamp AS WindowEnd, COUNT(*) AS FraudulentCalls
- INTO "MyPBIoutput"
- FROM "CallStream" CS1 TIMESTAMP BY CallRecTime
- JOIN "CallStream" CS2 TIMESTAMP BY CallRecTime
- ON CS1.CallingIMSI = CS2.CallingIMSI
- AND DATEDIFF(ss, CS1, CS2) BETWEEN 1 AND 5
- WHERE CS1.SwitchNum != CS2.SwitchNum
- GROUP BY TumblingWindow(Duration(second, 1))
- ```
+ ```sql
+ SELECT System.Timestamp AS WindowEnd, COUNT(*) AS FraudulentCalls
+ INTO "MyPBIoutput"
+ FROM "CallStream" CS1 TIMESTAMP BY CallRecTime
+ JOIN "CallStream" CS2 TIMESTAMP BY CallRecTime
+ ON CS1.CallingIMSI = CS2.CallingIMSI
+ AND DATEDIFF(ss, CS1, CS2) BETWEEN 1 AND 5
+ WHERE CS1.SwitchNum != CS2.SwitchNum
+ GROUP BY TumblingWindow(Duration(second, 1))
+ ```
This query is like any SQL join except for the `DATEDIFF` function in the join. This version of `DATEDIFF` is specific to Stream Analytics, and it must appear in the `ON...BETWEEN` clause. The parameters are a time unit (seconds in this example) and the aliases of the two sources for the join. This function is different from the standard SQL `DATEDIFF` function.
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/5-minimize-sql-issues.md
There are some SQL DML syntax differences between Oracle SQL and Azure Synapse T
- Oracle outer join syntax: although more recent versions of Oracle support ANSI outer join syntax, older Oracle systems use a proprietary syntax for outer joins that uses a plus sign (`+`) within the SQL statement. If you're migrating an older Oracle environment, you might encounter the older syntax. For example:
- ```SQL
+ ```sql
SELECT d.deptno, e.job FROM
There are some SQL DML syntax differences between Oracle SQL and Azure Synapse T
AND e.job (+) = 'CLERK' GROUP BY d.deptno, e.job;
- ```
+ ```
The equivalent ANSI standard syntax is:
- ```SQL
+ ```sql
SELECT d.deptno, e.job FROM
synapse-analytics Load Data From Azure Blob Storage Using Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/load-data-from-azure-blob-storage-using-copy.md
-+ Last updated 11/23/2020
This tutorial uses the [COPY statement](/sql/t-sql/statements/copy-into-transact
> [!div class="checklist"] > > * Create a user designated for loading data
-> * Create the tables for the sample dataset
+> * Create the tables for the sample dataset
> * Use the COPY T-SQL statement to load data into your data warehouse > * View the progress of data as it is loading
If you don't have an Azure subscription, [create a free Azure account](https://a
## Before you begin
-Before you begin this tutorial, download and install the newest version of [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) (SSMS).
+Before you begin this tutorial, download and install the newest version of [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) (SSMS).
This tutorial assumes you have already created a SQL dedicated pool from the following [tutorial](./create-data-warehouse-portal.md#connect-to-the-server-as-server-admin).
Connect as the server admin so you can create logins and users. Use these steps
3. Select **Execute**.
-4. Right-click **mySampleDataWarehouse**, and choose **New Query**. A new query Window opens.
+4. Right-click **mySampleDataWarehouse**, and choose **New Query**. A new query window opens.
![New query on sample data warehouse](./media/load-data-from-azure-blob-storage-using-polybase/create-loading-user.png)
Connect as the server admin so you can create logins and users. Use these steps
## Connect to the server as the loading user
-The first step toward loading data is to login as LoaderRC20.
+The first step toward loading data is to log in as LoaderRC20.
1. In Object Explorer, select the **Connect** drop down menu and select **Database Engine**. The **Connect to Server** dialog box appears.
Run the following SQL scripts and specify information about the data you wish to
DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX );
-
+ CREATE TABLE [dbo].[Geography] ( [GeographyID] int NOT NULL,
Run the following SQL scripts and specify information about the data you wish to
DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX );
-
+ CREATE TABLE [dbo].[HackneyLicense] ( [HackneyLicenseID] int NOT NULL,
Run the following SQL scripts and specify information about the data you wish to
DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX );
-
+ CREATE TABLE [dbo].[Medallion] ( [MedallionID] int NOT NULL,
Run the following SQL scripts and specify information about the data you wish to
DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX );
-
+ CREATE TABLE [dbo].[Time] ( [TimeID] int NOT NULL,
Run the following SQL scripts and specify information about the data you wish to
DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX );
-
+ CREATE TABLE [dbo].[Trip] ( [DateID] int NOT NULL,
Run the following SQL scripts and specify information about the data you wish to
DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX );
-
+ CREATE TABLE [dbo].[Weather] ( [DateID] int NOT NULL,
Run the following SQL scripts and specify information about the data you wish to
CLUSTERED COLUMNSTORE INDEX ); ```
-
+ ## Load the data into your data warehouse
-This section uses the [COPY statement to load](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) the sample data from Azure Storage Blob.
+This section uses the [COPY statement to load](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) the sample data from Azure Blob Storage.
> [!NOTE]
-> This tutorial loads the data directly into the final table. You would typically load into a staging table for your production workloads. While data is in the staging table you can perform any necessary transformations.
+> This tutorial loads the data directly into the final table. You would typically load into a staging table for your production workloads. While data is in the staging table you can perform any necessary transformations.
1. Run the following statements to load the data:
This section uses the [COPY statement to load](/sql/t-sql/statements/copy-into-t
WITH ( FILE_TYPE = 'CSV',
- FIELDTERMINATOR = ',',
- FIELDQUOTE = ''
+ FIELDTERMINATOR = ',',
+ FIELDQUOTE = ''
) OPTION (LABEL = 'COPY : Load [dbo].[Date] - Taxi dataset');
-
-
++ COPY INTO [dbo].[Geography] FROM 'https://nytaxiblob.blob.core.windows.net/2013/Geography' WITH ( FILE_TYPE = 'CSV',
- FIELDTERMINATOR = ',',
- FIELDQUOTE = ''
+ FIELDTERMINATOR = ',',
+ FIELDQUOTE = ''
) OPTION (LABEL = 'COPY : Load [dbo].[Geography] - Taxi dataset');
-
+ COPY INTO [dbo].[HackneyLicense] FROM 'https://nytaxiblob.blob.core.windows.net/2013/HackneyLicense' WITH ( FILE_TYPE = 'CSV',
- FIELDTERMINATOR = ',',
- FIELDQUOTE = ''
+ FIELDTERMINATOR = ',',
+ FIELDQUOTE = ''
) OPTION (LABEL = 'COPY : Load [dbo].[HackneyLicense] - Taxi dataset');
-
+ COPY INTO [dbo].[Medallion] FROM 'https://nytaxiblob.blob.core.windows.net/2013/Medallion' WITH ( FILE_TYPE = 'CSV',
- FIELDTERMINATOR = ',',
- FIELDQUOTE = ''
+ FIELDTERMINATOR = ',',
+ FIELDQUOTE = ''
) OPTION (LABEL = 'COPY : Load [dbo].[Medallion] - Taxi dataset');
-
+ COPY INTO [dbo].[Time] FROM 'https://nytaxiblob.blob.core.windows.net/2013/Time' WITH ( FILE_TYPE = 'CSV',
- FIELDTERMINATOR = ',',
- FIELDQUOTE = ''
+ FIELDTERMINATOR = ',',
+ FIELDQUOTE = ''
) OPTION (LABEL = 'COPY : Load [dbo].[Time] - Taxi dataset');
-
+ COPY INTO [dbo].[Weather] FROM 'https://nytaxiblob.blob.core.windows.net/2013/Weather' WITH ( FILE_TYPE = 'CSV',
- FIELDTERMINATOR = ',',
- FIELDQUOTE = '',
- ROWTERMINATOR='0X0A'
+ FIELDTERMINATOR = ',',
+ FIELDQUOTE = '',
+ ROWTERMINATOR='0X0A'
) OPTION (LABEL = 'COPY : Load [dbo].[Weather] - Taxi dataset');
-
+ COPY INTO [dbo].[Trip] FROM 'https://nytaxiblob.blob.core.windows.net/2013/Trip2013' WITH ( FILE_TYPE = 'CSV',
- FIELDTERMINATOR = '|',
- FIELDQUOTE = '',
- ROWTERMINATOR='0X0A',
- COMPRESSION = 'GZIP'
+ FIELDTERMINATOR = '|',
+ FIELDQUOTE = '',
+ ROWTERMINATOR='0X0A',
+ COMPRESSION = 'GZIP'
) OPTION (LABEL = 'COPY : Load [dbo].[Trip] - Taxi dataset'); ```
This section uses the [COPY statement to load](/sql/t-sql/statements/copy-into-t
2. View your data as it loads. You're loading several GBs of data and compressing it into highly performant clustered columnstore indexes. Run the following query that uses dynamic management views (DMVs) to show the status of the load. ```sql
- SELECT r.[request_id]
- , r.[status]
- , r.resource_class
+ SELECT r.[request_id]
+ , r.[status]
+ , r.resource_class
, r.command , sum(bytes_processed) AS bytes_processed , sum(rows_processed) AS rows_processed
This section uses the [COPY statement to load](/sql/t-sql/statements/copy-into-t
[label] = 'COPY : Load [dbo].[Medallion] - Taxi dataset' OR [label] = 'COPY : Load [dbo].[Time] - Taxi dataset' OR [label] = 'COPY : Load [dbo].[Weather] - Taxi dataset' OR
- [label] = 'COPY : Load [dbo].[Trip] - Taxi dataset'
+ [label] = 'COPY : Load [dbo].[Trip] - Taxi dataset'
and session_id <> session_id() and type = 'WRITER'
- GROUP BY r.[request_id]
- , r.[status]
- , r.resource_class
+ GROUP BY r.[request_id]
+ , r.[status]
+ , r.resource_class
, r.command; ```
-
+ 3. View all system queries. ```sql
update-center Dynamic Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/dynamic-scope-overview.md
The criteria will be evaluated at the scheduled run time, which will be the fina
1. In search, enter and select **Subscriptions**. 1. In **Subscriptions** home page, select your subscription from the list. 1. In the **Subscription | Preview features** page, under **Settings**, select **Preview features**.
- 1. Search for **Dynamic Scope (preview)**.
+ 1. Search for **Dynamic scoping**.
1. Select **Register** and then select **OK** to get started with Dynamic scope (preview). #### [Arc-enabled VMs](#tab/arcvms)
virtual-desktop Windows 11 Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/windows-11-language-packs.md
You can create a custom image by following these steps:
"Language.TextToSpeech~~~$sourceLanguage~0.0.1.0" )
- ##Install all FODs or fonts from the CSV file###
+ ##Install all FODs or fonts from the CSV file###
Dism /Online /Add-Package /PackagePath:$LIPContent\Microsoft-Windows-Client-Language-Pack_x64_$sourceLanguage.cab Dism /Online /Add-Package /PackagePath:$LIPContent\Microsoft-Windows-Lip-Language-Pack_x64_$sourceLanguage.cab foreach($capability in $additionalCapabilityList){ Dism /Online /Add-Capability /CapabilityName:$capability /Source:$LIPContent
- }
+ }
- foreach($feature in $additionalFODList){
+ foreach($feature in $additionalFODList){
Dism /Online /Add-Package /PackagePath:$feature }
- if($langGroup){
+ if($langGroup){
Dism /Online /Add-Capability /CapabilityName:Language.Fonts.$langGroup~~~und-$langGroup~0.0.1.0 }
- ##Add installed language to language list##
- $LanguageList = Get-WinUserLanguageList
- $LanguageList.Add("$targetlanguage")
- Set-WinUserLanguageList $LanguageList -force
- ```
+ ##Add installed language to language list##
+ $LanguageList = Get-WinUserLanguageList
+ $LanguageList.Add("$targetlanguage")
+ Set-WinUserLanguageList $LanguageList -force
+ ```
>[!NOTE]
- >This example script uses the Spanish (es-es) language code. To automatically install the appropriate files for a different language change the *$targetLanguage* parameter to the correct language code. For a list of language codes, see [Available language packs for Windows](/windows-hardware/manufacture/desktop/available-language-packs-for-windows).
+ >This example script uses the Spanish (es-es) language code. To automatically install the appropriate files for a different language, change the *$targetLanguage* parameter to the correct language code. For a list of language codes, see [Available language packs for Windows](/windows-hardware/manufacture/desktop/available-language-packs-for-windows).
The script might take a while to finish depending on the number of languages you need to install. You can also install additional languages after initial setup by running the script again with a different *$targetLanguage* parameter.
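To confirm the result after the script completes, a minimal check (a sketch, not part of the original script) is to list the languages currently registered for the user and verify that the new language appears:

```powershell
# List the user's language list; the newly added language (for example, es-es) should appear
Get-WinUserLanguageList | Select-Object LanguageTag, EnglishName
```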
virtual-machines Bsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/bsv2-series.md
Bsv2-series virtual machines offer a balance of compute, memory, and network res
|-||--|--||||--|--|-||-| | Standard_B2ts_v2 | 2 | 1 | 20% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 | | Standard_B2ls_v2 | 2 | 4 | 30% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.50 | 2 |
-| Standard_B2s_v2 | 2 | 8 | 40% | 600 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.50 | 2 |
+| Standard_B2s_v2 | 2 | 8 | 40% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.50 | 2 |
| Standard_B4ls_v2 | 4 | 8 | 30% | 120 | 48 | 1152 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 | | Standard_B4s_v2 | 4 | 16 | 40% | 120 | 48 | 1150 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 | | Standard_B8ls_v2 | 8 | 16 | 30% | 240 | 96 | 2304 | 12,800/290 | 20,000/960 | 16 | 3.250 | 2 |
virtual-machines Capture Image Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capture-image-resource.md
To create a VM image, follow these steps:
1. Create some variables. ```azurepowershell-interactive
- $vmName = "myVM"
- $rgName = "myResourceGroup"
- $location = "EastUS"
- $imageName = "myImage"
- ```
+ $vmName = "myVM"
+ $rgName = "myResourceGroup"
+ $location = "EastUS"
+ $imageName = "myImage"
+ ```
+ 2. Make sure the VM has been deallocated. ```azurepowershell-interactive
- Stop-AzVM -ResourceGroupName $rgName -Name $vmName -Force
- ```
-
+ Stop-AzVM -ResourceGroupName $rgName -Name $vmName -Force
+ ```
+
3. Set the status of the virtual machine to **Generalized**. ```azurepowershell-interactive Set-AzVm -ResourceGroupName $rgName -Name $vmName -Generalized
- ```
-
+ ```
+
4. Get the virtual machine. ```azurepowershell-interactive
- $vm = Get-AzVM -Name $vmName -ResourceGroupName $rgName
- ```
+ $vm = Get-AzVM -Name $vmName -ResourceGroupName $rgName
+ ```
5. Create the image configuration. ```azurepowershell-interactive
- $image = New-AzImageConfig -Location $location -SourceVirtualMachineId $vm.Id
- ```
+ $image = New-AzImageConfig -Location $location -SourceVirtualMachineId $vm.Id
+ ```
6. Create the image. ```azurepowershell-interactive New-AzImage -Image $image -ImageName $imageName -ResourceGroupName $rgName
- ```
+ ```
## PowerShell: Create a legacy managed image from a managed disk If you want to create an image of only the OS disk, specify the managed disk ID as the OS disk:
-
+
1. Create some variables. ```azurepowershell-interactive
- $vmName = "myVM"
- $rgName = "myResourceGroup"
- $location = "EastUS"
- $imageName = "myImage"
- ```
+ $vmName = "myVM"
+ $rgName = "myResourceGroup"
+ $location = "EastUS"
+ $imageName = "myImage"
+ ```
2. Get the VM.
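   A minimal sketch of this step, mirroring the `Get-AzVM` call used in the previous procedure (variable names are assumed from step 1):

   ```azurepowershell-interactive
   # Get the VM object so that its OS disk ID can be read in the next step (sketch)
   $vm = Get-AzVM -Name $vmName -ResourceGroupName $rgName
   ```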
If you want to create an image of only the OS disk, specify the managed disk ID
3. Get the ID of the managed disk. ```azurepowershell-interactive
- $diskID = $vm.StorageProfile.OsDisk.ManagedDisk.Id
- ```
+ $diskID = $vm.StorageProfile.OsDisk.ManagedDisk.Id
+ ```
3. Create the image configuration. ```azurepowershell-interactive
- $imageConfig = New-AzImageConfig -Location $location
- $imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsState Generalized -OsType Windows -ManagedDiskId $diskID
- ```
-
+ $imageConfig = New-AzImageConfig -Location $location
+ $imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsState Generalized -OsType Windows -ManagedDiskId $diskID
+ ```
+
4. Create the image. ```azurepowershell-interactive New-AzImage -ImageName $imageName -ResourceGroupName $rgName -Image $imageConfig
- ```
+ ```
## PowerShell: Create a legacy managed image from a snapshot You can create a managed image from a snapshot of a generalized VM by following these steps:
-
+
1. Create some variables. ```azurepowershell-interactive
- $rgName = "myResourceGroup"
- $location = "EastUS"
- $snapshotName = "mySnapshot"
- $imageName = "myImage"
- ```
+ $rgName = "myResourceGroup"
+ $location = "EastUS"
+ $snapshotName = "mySnapshot"
+ $imageName = "myImage"
+ ```
2. Get the snapshot.
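   A minimal sketch of this step (assuming the `$rgName` and `$snapshotName` variables from step 1):

   ```azurepowershell-interactive
   # Get the snapshot object so that its ID can be referenced in the image configuration (sketch)
   $snapshot = Get-AzSnapshot -ResourceGroupName $rgName -SnapshotName $snapshotName
   ```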
You can create a managed image from a snapshot of a generalized VM by following
3. Create the image configuration. ```azurepowershell-interactive
- $imageConfig = New-AzImageConfig -Location $location
- $imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsState Generalized -OsType Windows -SnapshotId $snapshot.Id
- ```
+ $imageConfig = New-AzImageConfig -Location $location
+ $imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsState Generalized -OsType Windows -SnapshotId $snapshot.Id
+ ```
4. Create the image. ```azurepowershell-interactive New-AzImage -ImageName $imageName -ResourceGroupName $rgName -Image $imageConfig
- ```
+ ```
## PowerShell: Create a legacy managed image from a VM that uses a storage account
To create a managed image from a VM that doesn't use managed disks, you need the
1. Create some variables. ```azurepowershell-interactive
- $vmName = "myVM"
- $rgName = "myResourceGroup"
- $location = "EastUS"
- $imageName = "myImage"
- $osVhdUri = "https://mystorageaccount.blob.core.windows.net/vhdcontainer/vhdfilename.vhd"
+ $vmName = "myVM"
+ $rgName = "myResourceGroup"
+ $location = "EastUS"
+ $imageName = "myImage"
+ $osVhdUri = "https://mystorageaccount.blob.core.windows.net/vhdcontainer/vhdfilename.vhd"
``` 2. Stop/deallocate the VM. ```azurepowershell-interactive
- Stop-AzVM -ResourceGroupName $rgName -Name $vmName -Force
- ```
-
+ Stop-AzVM -ResourceGroupName $rgName -Name $vmName -Force
+ ```
+
3. Mark the VM as generalized. ```azurepowershell-interactive
- Set-AzVm -ResourceGroupName $rgName -Name $vmName -Generalized
- ```
+ Set-AzVm -ResourceGroupName $rgName -Name $vmName -Generalized
+ ```
4. Create the image by using your generalized OS VHD. ```azurepowershell-interactive
- $imageConfig = New-AzImageConfig -Location $location
- $imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri $osVhdUri
- $image = New-AzImage -ImageName $imageName -ResourceGroupName $rgName -Image $imageConfig
+ $imageConfig = New-AzImageConfig -Location $location
+ $imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri $osVhdUri
+ $image = New-AzImage -ImageName $imageName -ResourceGroupName $rgName -Image $imageConfig
```
The following example creates a VM named *myVMFromImage*, in the *myResourceGrou
New-AzVm ` -ResourceGroupName "myResourceGroup" ` -Name "myVMfromImage" `
- -ImageName "myImage" `
+ -ImageName "myImage" `
-Location "East US" ` -VirtualNetworkName "myImageVnet" ` -SubnetName "myImageSubnet" `
New-AzVm `
-PublicIpAddressName "myImagePIP" ``` -
-
## Next steps - Learn more about using an [Azure Compute Gallery](shared-image-galleries.md) (formerly known as Shared Image Gallery)
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processo
| Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 | | Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 | | Standard_E8bds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 22000/625 | 40000/1200 |29480/625 |60000/1200 | 4 | 12500 |
-| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 8 | 12500 |
+| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 4 | 12500 |
| Standard_E32bds_v5 | 32 | 256 | 1200 | 32 | 150000/2000 | 88000/2500 | 120000/4000 | 117920/2500|160000/4000| 8 | 16000 | | Standard_E48bds_v5 | 48 | 384 | 1800 | 32 | 225000/3000 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 16000 | | Standard_E64bds_v5 | 64 | 512 | 2400 | 32 | 300000/4000 | 120000/4000 | 120000/4000 |160000/4000 | 160000/4000| 8 | 20000 |
virtual-machines Hpccompute Gpu Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-linux.md
vm-linux Previously updated : 07/19/2023 Last updated : 07/28/2023
This extension installs NVIDIA GPU drivers on Linux N-series virtual machines (VMs). Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers by using this extension, you're accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://www.nvidia.com/en-us/data-center/products/nvidia-ai-enterprise/eula/). During the installation process, the VM might reboot to complete the driver setup.
-Instructions on manual installation of the drivers and the current supported versions are available. For more information including secure boot enabled setting, see [Azure N-series GPU driver setup for Linux](../linux/n-series-driver-setup.md).
-An extension is also available to install NVIDIA GPU drivers on [Windows N-series VMs](hpccompute-gpu-windows.md).
+Instructions on manual installation of the drivers and the current supported versions are available. An extension is also available to install NVIDIA GPU drivers on [Windows N-series VMs](hpccompute-gpu-windows.md).
> [!NOTE]
-> With Secure Boot enabled, all OS boot components (boot loader, kernel, kernel drivers) must be signed by trusted publishers (key trusted by the system). Both Windows and select Linux distributions support Secure Boot.
+> With Secure Boot enabled, all OS boot components (boot loader, kernel, kernel drivers) must be signed by trusted publishers (a key trusted by the system). Secure Boot isn't supported when using the Windows or Linux GPU driver extensions. For more information on manually installing GPU drivers with Secure Boot enabled, see [Azure N-series GPU driver setup for Linux](../linux/n-series-driver-setup.md).
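As a rough illustration, the extension can be deployed with Azure PowerShell along the lines of the following sketch (the resource names and type handler version here are placeholders, not values taken from this article):

```azurepowershell-interactive
# Deploy the NVIDIA GPU driver extension to an existing Linux N-series VM (sketch; names are placeholders)
Set-AzVMExtension `
    -ResourceGroupName "myResourceGroup" `
    -VMName "myNSeriesVM" `
    -Location "eastus" `
    -Publisher "Microsoft.HpcCompute" `
    -ExtensionType "NvidiaGpuDriverLinux" `
    -Name "NvidiaGpuDriverLinux" `
    -TypeHandlerVersion "1.6"
```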
## Prerequisites
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generalize.md
To generalize your Windows VM, follow these steps:
6. The VM will shut down when Sysprep is finished generalizing the VM. Do not restart the VM. -
-> [!TIP]
-> **Optional** Use [DISM](/windows-hardware/manufacture/desktop/dism-optimize-image-command-line-options) to optimize your image and reduce your VM's first boot time.
->
-> To optimize your image, mount your VHD by double-clicking on it in Windows explorer, and then run DISM with the `/optimize-image` parameter.
->
-> ```cmd
-> DISM /image:D:\ /optimize-image /boot
-> ```
-> Where D: is the mounted VHD's path.
->
-> Running `DISM /optimize-image` should be the last modification you make to your VHD. If you make any changes to your VHD prior to deployment, you'll have to run `DISM /optimize-image` again.
- Once Sysprep has finished, set the status of the virtual machine to **Generalized**. ```azurepowershell-interactive
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-ps-findimage.md
# Find Azure Marketplace image information using the Azure CLI
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
This topic describes how to use the Azure CLI to find VM images in the Azure Marketplace. Use this information to specify a Marketplace image when you create a VM programmatically with the CLI, Resource Manager templates, or other tools.
virtual-machines Create Upload Centos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-centos.md
This article assumes that you've already installed a CentOS (or similar derivati
9. Add the following line to /etc/yum.conf:
- ```config
- http_caching=packages
- ```
+ ```config
+ http_caching=packages
+ ```
10. Run the following command to clear the current yum metadata and update the system with the latest packages:
- ```bash
- sudo yum clean all
- ```
+ ```bash
+ sudo yum clean all
+ ```
Unless you're creating an image for an older version of CentOS, it's recommended to update all the packages to the latest:
- ```bash
- sudo yum -y update
- ```
+ ```bash
+ sudo yum -y update
+ ```
A reboot may be required after running this command.
This article assumes that you've already installed a CentOS (or similar derivati
12. Install the Azure Linux Agent and dependencies. Start and enable waagent service:
- ```bash
- sudo yum install python-pyasn1 WALinuxAgent
- sudo service waagent start
- sudo chkconfig waagent on
- ```
+ ```bash
+ sudo yum install python-pyasn1 WALinuxAgent
+ sudo service waagent start
+ sudo chkconfig waagent on
+ ```
The WALinuxAgent package removes the NetworkManager and NetworkManager-gnome packages if they were not already removed as described in step 3. 13. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open `/boot/grub/menu.lst` in a text editor and ensure that the default kernel includes the following parameters:
- ```config
- console=ttyS0 earlyprintk=ttyS0 rootdelay=300
- ```
+ ```config
+ console=ttyS0 earlyprintk=ttyS0 rootdelay=300
+ ```
This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues. In addition to the above, it's recommended to *remove* the following parameters:
- ```config
- rhgb quiet crashkernel=auto
- ```
+ ```config
+ rhgb quiet crashkernel=auto
+ ```
Graphical and `quiet boot` aren't useful in a cloud environment where we want all the logs to be sent to the serial port. The `crashkernel` option may be left configured if desired, but note that this parameter will reduce the amount of available memory in the VM by 128 MB or more, which may be problematic on the smaller VM sizes.
This article assumes that you've already installed a CentOS (or similar derivati
The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk, and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in `/etc/waagent.conf` appropriately:
- ```config
- ResourceDisk.Format=y
- ResourceDisk.Filesystem=ext4
- ResourceDisk.MountPoint=/mnt/resource
- ResourceDisk.EnableSwap=y
- ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
- ```
+ ```config
+ ResourceDisk.Format=y
+ ResourceDisk.Filesystem=ext4
+ ResourceDisk.MountPoint=/mnt/resource
+ ResourceDisk.EnableSwap=y
+ ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
+ ```
16. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
* XFS is now the default file system. The ext4 file system can still be used if desired. * Since CentOS 8 Stream and newer no longer include `network.service` by default, you need to install it manually:
- ```bash
- sudo yum install network-scripts
- sudo systemctl enable network.service
- ```
+ ```bash
+ sudo yum install network-scripts
+ sudo systemctl enable network.service
+ ```
**Configuration Steps**
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
3. Create or edit the file `/etc/sysconfig/network` and add the following text:
- ```config
- NETWORKING=yes
- HOSTNAME=localhost.localdomain
- ```
+ ```config
+ NETWORKING=yes
+ HOSTNAME=localhost.localdomain
+ ```
4. Create or edit the file `/etc/sysconfig/network-scripts/ifcfg-eth0` and add the following text:
- ```config
- DEVICE=eth0
- ONBOOT=yes
- BOOTPROTO=dhcp
- TYPE=Ethernet
- USERCTL=no
- PEERDNS=yes
- IPV6INIT=no
- NM_CONTROLLED=no
- ```
+ ```config
+ DEVICE=eth0
+ ONBOOT=yes
+ BOOTPROTO=dhcp
+ TYPE=Ethernet
+ USERCTL=no
+ PEERDNS=yes
+ IPV6INIT=no
+ NM_CONTROLLED=no
+ ```
5. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
- ```bash
- sudo ln -s /etc/udev/rules.d/75-persistent-net-generator.rules
- ```
+ ```bash
+ sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
+ ```
6. If you would like to use the OpenLogic mirrors that are hosted within the Azure datacenters, then replace the `/etc/yum.repos.d/CentOS-Base.repo` file with the following repositories. This will also add the **[openlogic]** repository that includes packages for the Azure Linux agent:
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
baseurl=http://olcentgbl.trafficmanager.net/openlogic/$releasever/openlogic/$basearch/ enabled=1 gpgcheck=0
-
+
[base] name=CentOS-$releasever - Base #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-
+
#released updates [updates] name=CentOS-$releasever - Updates
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/updates/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-
+
#additional packages that may be useful [extras] name=CentOS-$releasever - Extras
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/extras/$basearch/ gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-
+
#additional packages that extend functionality of existing packages [centosplus] name=CentOS-$releasever - Plus
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
enabled=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 ```
-
+
> [!Note] > The rest of this guide will assume you're using at least the `[openlogic]` repo, which will be used to install the Azure Linux agent below. 7. Run the following command to clear the current yum metadata and install any updates:
- ```bash
- sudo yum clean all
- ```
+ ```bash
+ sudo yum clean all
+ ```
Unless you're creating an image for an older version of CentOS, it's recommended to update all the packages to the latest:
- ```bash
- sudo yum -y update
- ```
+ ```bash
+ sudo yum -y update
+ ```
A reboot may be required after running this command. 8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open `/etc/default/grub` in a text editor and edit the `GRUB_CMDLINE_LINUX` parameter, for example:
- ```config
- GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
- ```
+ ```config
+ GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
+ ```
This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues. It also turns off the new CentOS 7 naming conventions for NICs. In addition to the above, it's recommended to *remove* the following parameters:
- ```config
- rhgb quiet crashkernel=auto
- ```
+ ```config
+ rhgb quiet crashkernel=auto
+ ```
Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port. The `crashkernel` option may be left configured if desired, but note that this parameter will reduce the amount of available memory in the VM by 128 MB or more, which may be problematic on the smaller VM sizes. 9. Once you're done editing `/etc/default/grub` per above, run the following command to rebuild the grub configuration:
- ```bash
- sudo grub2-mkconfig -o /boot/grub2/grub.cfg
- ```
+ ```bash
+ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
+ ```
> [!NOTE] > If uploading an UEFI enabled VM, the command to update grub is `grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg`. Also, the vfat kernel module must be enabled in the kernel otherwise provisioning will fail.
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
Edit `/etc/dracut.conf`, add content:
- ```config
- add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
- ```
+ ```config
+ add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
+ ```
Rebuild the initramfs:
- ```bash
- sudo dracut -f -v
- ```
+ ```bash
+ sudo dracut -f -v
+ ```
11. Install the Azure Linux Agent and dependencies for Azure VM Extensions:
- ```bash
- sudo yum install python-pyasn1 WALinuxAgent
- sudo systemctl enable waagent
- ```
+ ```bash
+ sudo yum install python-pyasn1 WALinuxAgent
+ sudo systemctl enable waagent
+ ```
12. Install cloud-init to handle the provisioning
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
* Use a cloud-init directive baked into the image that will do this every time the VM is created: ```bash
- sudo echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
+ sudo echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
sudo cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF #cloud-config # Generated by Azure cloud image build
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
> [!NOTE] > If you are migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
- ```bash
- sudo rm -f /var/log/waagent.log
- sudo cloud-init clean
- sudo waagent -force -deprovision+user
- sudo rm -f ~/.bash_history
- sudo export HISTSIZE=0
- ```
+ ```bash
+ sudo rm -f /var/log/waagent.log
+ sudo cloud-init clean
+ sudo waagent -force -deprovision+user
+ sudo rm -f ~/.bash_history
+ export HISTSIZE=0
+ ```
15. Click **Action -> Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be [uploaded to Azure](./upload-vhd.md#option-1-upload-a-vhd).
virtual-machines Create Upload Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-ubuntu.md
This article assumes that you've already installed an Ubuntu Linux operating sys
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak ```
- Ubuntu 18.04 and Ubuntu 20.04:
+ Ubuntu 18.04 and Ubuntu 20.04:
```bash sudo sed -i 's/http:\/\/archive\.ubuntu\.com\/ubuntu\//http:\/\/azure\.archive\.ubuntu\.com\/ubuntu\//g' /etc/apt/sources.list
This article assumes that you've already installed an Ubuntu Linux operating sys
5. Modify the kernel boot line for Grub to include additional kernel parameters for Azure. To do this open `/etc/default/grub` in a text editor, find the variable called `GRUB_CMDLINE_LINUX_DEFAULT` (or add it if needed) and edit it to include the following parameters:
- ```config
- GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 rootdelay=300 quiet splash"
- ```
+ ```config
+ GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 rootdelay=300 quiet splash"
+ ```
Save and close this file, and then run `sudo update-grub`. This will ensure all console messages are sent to the first serial port, which can assist Azure technical support with debugging issues.
This article assumes that you've already installed an Ubuntu Linux operating sys
9. Configure cloud-init to provision the system using the Azure datasource: ```bash
- sudo cat > /etc/cloud/cloud.cfg.d/90_dpkg.cfg << EOF
- datasource_list: [ Azure ]
+ sudo cat > /etc/cloud/cloud.cfg.d/90_dpkg.cfg << EOF
+ datasource_list: [ Azure ]
EOF
- cat > /etc/cloud/cloud.cfg.d/90-azure.cfg << EOF
+ cat > /etc/cloud/cloud.cfg.d/90-azure.cfg << EOF
system_info: package_mirrors: - arches: [i386, amd64]
This article assumes that you've already installed an Ubuntu Linux operating sys
security: http://ports.ubuntu.com/ubuntu-ports EOF
- cat > /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg << EOF
+ cat > /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg << EOF
reporting: logging: type: log
This article assumes that you've already installed an Ubuntu Linux operating sys
12. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
- > [!NOTE]
- > The `sudo waagent -force -deprovision+user` command generalizes the image by attempting to clean the system and make it suitable for re-provisioning. The `+user` option deletes the last provisioned user account and associated data.
+ > [!NOTE]
+ > The `sudo waagent -force -deprovision+user` command generalizes the image by attempting to clean the system and make it suitable for re-provisioning. The `+user` option deletes the last provisioned user account and associated data.
- > [!WARNING]
- > Deprovisioning using the command above does not guarantee that the image is cleared of all sensitive information and is suitable for redistribution.
+ > [!WARNING]
+ > Deprovisioning using the command above does not guarantee that the image is cleared of all sensitive information and is suitable for redistribution.
```bash sudo waagent -force -deprovision+user
This article assumes that you've already installed an Ubuntu Linux operating sys
``` 2. Copy the ubuntu directory to a new directory named boot:
-
+
```bash sudo cp -r ubuntu/ boot ```
This article assumes that you've already installed an Ubuntu Linux operating sys
```bash cd boot ```
-
+
4. Rename the shimx64.efi file: ```bash
virtual-machines Image Builder Gallery Update Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-gallery-update-image-version.md
You can review the JSON example you're about to use at [helloImageTemplateforSIG
1. Configure the JSON with your variables:
- ```console
- curl https://raw.githubusercontent.com/azure/azvmimagebuilder/master/quickquickstarts/8_Creating_a_Custom_Linux_Shared_Image_Gallery_Image_from_SIG/helloImageTemplateforSIGfromSIG.json -o helloImageTemplateforSIGfromSIG.json
- sed -i -e "s/<subscriptionID>/$subscriptionID/g" helloImageTemplateforSIGfromSIG.json
- sed -i -e "s/<rgName>/$sigResourceGroup/g" helloImageTemplateforSIGfromSIG.json
- sed -i -e "s/<imageDefName>/$imageDefName/g" helloImageTemplateforSIGfromSIG.json
- sed -i -e "s/<sharedImageGalName>/$sigName/g" helloImageTemplateforSIGfromSIG.json
- sed -i -e "s%<sigDefImgVersionId>%$sigDefImgVersionId%g" helloImageTemplateforSIGfromSIG.json
- sed -i -e "s/<region1>/$location/g" helloImageTemplateforSIGfromSIG.json
- sed -i -e "s/<region2>/$additionalregion/g" helloImageTemplateforSIGfromSIG.json
- sed -i -e "s/<runOutputName>/$runOutputName/g" helloImageTemplateforSIGfromSIG.json
- sed -i -e "s%<imgBuilderId>%$imgBuilderId%g" helloImageTemplateforSIGfromSIG.json
- ```
+ ```console
+ curl https://raw.githubusercontent.com/azure/azvmimagebuilder/master/quickquickstarts/8_Creating_a_Custom_Linux_Shared_Image_Gallery_Image_from_SIG/helloImageTemplateforSIGfromSIG.json -o helloImageTemplateforSIGfromSIG.json
+ sed -i -e "s/<subscriptionID>/$subscriptionID/g" helloImageTemplateforSIGfromSIG.json
+ sed -i -e "s/<rgName>/$sigResourceGroup/g" helloImageTemplateforSIGfromSIG.json
+ sed -i -e "s/<imageDefName>/$imageDefName/g" helloImageTemplateforSIGfromSIG.json
+ sed -i -e "s/<sharedImageGalName>/$sigName/g" helloImageTemplateforSIGfromSIG.json
+ sed -i -e "s%<sigDefImgVersionId>%$sigDefImgVersionId%g" helloImageTemplateforSIGfromSIG.json
+ sed -i -e "s/<region1>/$location/g" helloImageTemplateforSIGfromSIG.json
+ sed -i -e "s/<region2>/$additionalregion/g" helloImageTemplateforSIGfromSIG.json
+ sed -i -e "s/<runOutputName>/$runOutputName/g" helloImageTemplateforSIGfromSIG.json
+ sed -i -e "s%<imgBuilderId>%$imgBuilderId%g" helloImageTemplateforSIGfromSIG.json
+ ```
## Create the image 1. Submit the image configuration to the VM Image Builder service:
- ```azurecli-interactive
- az resource create \
- --resource-group $sigResourceGroup \
- --properties @helloImageTemplateforSIGfromSIG.json \
- --is-full-object \
- --resource-type Microsoft.VirtualMachineImages/imageTemplates \
- -n helloImageTemplateforSIGfromSIG01
- ```
+ ```azurecli-interactive
+ az resource create \
+ --resource-group $sigResourceGroup \
+ --properties @helloImageTemplateforSIGfromSIG.json \
+ --is-full-object \
+ --resource-type Microsoft.VirtualMachineImages/imageTemplates \
+ -n helloImageTemplateforSIGfromSIG01
+ ```
1. Start the image build:
- ```azurecli-interactive
- az resource invoke-action \
- --resource-group $sigResourceGroup \
- --resource-type Microsoft.VirtualMachineImages/imageTemplates \
- -n helloImageTemplateforSIGfromSIG01 \
- --action Run
- ```
+ ```azurecli-interactive
+ az resource invoke-action \
+ --resource-group $sigResourceGroup \
+ --resource-type Microsoft.VirtualMachineImages/imageTemplates \
+ -n helloImageTemplateforSIGfromSIG01 \
+ --action Run
+ ```
Wait for the image to be built and replicated before you move along to the next step.
Wait for the image to be built and replicated before you move along to the next
1. Create the VM by doing the following:
- ```azurecli-interactive
- az vm create \
- --resource-group $sigResourceGroup \
- --name aibImgVm001 \
- --admin-username azureuser \
- --location $location \
- --image "/subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup/providers/Microsoft.Compute/galleries/$sigName/images/$imageDefName/versions/latest" \
- --generate-ssh-keys
- ```
+ ```azurecli-interactive
+ az vm create \
+ --resource-group $sigResourceGroup \
+ --name aibImgVm001 \
+ --admin-username azureuser \
+ --location $location \
+ --image "/subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup/providers/Microsoft.Compute/galleries/$sigName/images/$imageDefName/versions/latest" \
+ --generate-ssh-keys
+ ```
1. Create a Secure Shell (SSH) connection to the VM by using the public IP address of the VM.
- ```console
- ssh azureuser@<pubIp>
- ```
+ ```console
+ ssh azureuser@<pubIp>
+ ```
- After the SSH connection is established, you should receive a "Message of the Day" saying that the image was customized:
+ After the SSH connection is established, you should receive a "Message of the Day" saying that the image was customized:
- ```output
- *******************************************************
- ** This VM was built from the: **
- ** !! AZURE VM IMAGE BUILDER Custom Image !! **
- ** You have just been Customized :-) **
- *******************************************************
- ```
+ ```output
+ *******************************************************
+ ** This VM was built from the: **
+ ** !! AZURE VM IMAGE BUILDER Custom Image !! **
+ ** You have just been Customized :-) **
+ *******************************************************
+ ```
1. Type `exit` to close the SSH connection. 1. To list the image versions that are now available in your gallery, run:
- ```azurecli-interactive
- az sig image-version list -g $sigResourceGroup -r $sigName -i $imageDefName -o table
- ```
+ ```azurecli-interactive
+ az sig image-version list -g $sigResourceGroup -r $sigName -i $imageDefName -o table
+ ```
## Next steps
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
EOF
15. Deprovision
- Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
+ Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
> [!CAUTION] > If you are migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step. Running the command `waagent -force -deprovision+user` will render the source machine unusable, this step is intended only to create a generalized image.
- ```bash
- sudo rm -f /var/log/waagent.log
- sudo cloud-init clean
- sudo waagent -force -deprovision+user
- sudo rm -f ~/.bash_history
- sudo export HISTSIZE=0
- ```
+
+ ```bash
+ sudo rm -f /var/log/waagent.log
+ sudo cloud-init clean
+ sudo waagent -force -deprovision+user
+ sudo rm -f ~/.bash_history
+ export HISTSIZE=0
+ ```
16. Click **Action** > **Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be [**uploaded to Azure**](./upload-vhd.md#option-1-upload-a-vhd).
EOF
13. Deprovision
- Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
+ Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
- ```bash
- sudo cloud-init clean
- sudo waagent -force -deprovision+user
- sudo rm -f ~/.bash_history
- sudo sudo rm -f /var/log/waagent.log
- sudo export HISTSIZE=0
- ```
+ ```bash
+ sudo cloud-init clean
+ sudo waagent -force -deprovision+user
+ sudo rm -f ~/.bash_history
+ sudo rm -f /var/log/waagent.log
+ export HISTSIZE=0
+ ```
> [!CAUTION] > If you are migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step. Running the command `waagent -force -deprovision+user` will render the source machine unusable, this step is intended only to create a generalized image.
EOF
sudo qemu-img convert -f raw -o subformat=fixed,force_size -O vpc rhel-6.9.raw rhel-6.9.vhd ```
-
### RHEL 7 using KVM 1. Download the KVM image of RHEL 7 from the Red Hat website. This procedure uses RHEL 7 as the example.
This section shows you how to prepare a RHEL 7 distro from an ISO using a kickst
4. Open the virtual machine settings:
- a. Attach a new virtual hard disk to the virtual machine. Make sure to select **VHD Format** and **Fixed Size**.
+ 1. Attach a new virtual hard disk to the virtual machine. Make sure to select **VHD Format** and **Fixed Size**.
- b. Attach the installation ISO to the DVD drive.
+ 1. Attach the installation ISO to the DVD drive.
- c. Set the BIOS to boot from CD.
+ 1. Set the BIOS to boot from CD.
5. Start the virtual machine. When the installation guide appears, press **Tab** to configure the boot options.
virtual-machines Manage Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/manage-restore-points.md
Use the following steps:
To copy an existing VM restore point from one region to another, your first step is to create a restore point collection in the target or destination region. To do this, reference the restore point collection from the source region as detailed in [Create a VM restore point collection](create-restore-points.md#step-1-create-a-vm-restore-point-collection).
+```azurepowershell-interactive
+New-AzRestorePointCollection `
+ -ResourceGroupName 'myResourceGroup' `
+ -Name 'myRPCollection' `
+ -Location 'WestUS' `
+ -RestorePointCollectionId '/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RG>/providers/Microsoft.Compute/restorePointCollections/<SOURCE RESTORE POINT COLLECTION>'
+```
+ ### Step 2: Create the destination VM restore point After the restore point collection is created, trigger the creation of a restore point in the target restore point collection. Ensure that you've referenced the restore point in the source region that you want to copy and specified the source restore point's identifier in the request body. The source VM's location is inferred from the target restore point collection in which the restore point is being created. See the [Restore Points - Create](/rest/api/compute/restore-points/create) API documentation to create a `RestorePoint`.
+```azurepowershell-interactive
+New-AzRestorePoint `
+ -ResourceGroupName 'myResourceGroup' `
+ -RestorePointCollectionName 'myRPCollection' `
+ -Name 'myRestorePoint'
+```
+ ### Step 3: Track copy status To track the status of the copy operation, follow the guidance in the [Get restore point copy or replication status](#get-restore-point-copy-or-replication-status) section below. This is only applicable for scenarios where the restore points are copied to a different region than the source VM.
+```azurepowershell-interactive
+Get-AzRestorePoint `
+ -ResourceGroupName 'myResourceGroup' `
+ -RestorePointCollectionName 'myRPCollection' `
+ -Name 'myRestorePoint'
+```
+ ## Get restore point copy or replication status Copying the first VM restore point to another region is a long running operation. The VM restore point can be used to restore a VM only after the operation is completed for all disk restore points. To track the operation's status, call the [Restore Point - Get](/rest/api/compute/restore-points/get) API on the target VM restore point and include the `instanceView` parameter. The return will include the percentage of data that has been copied at the time of the request.
virtual-machines Prepare For Upload Vhd Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/prepare-for-upload-vhd-image.md
configured them.
interfere and block the Windows Provisioning Agent scripts executed when you deploy a new VM from your image.
+> [!TIP]
+> **Optional** Use [DISM](/windows-hardware/manufacture/desktop/dism-optimize-image-command-line-options) to optimize your image and reduce your VM's first boot time.
+>
+> To optimize your image, mount your VHD by double-clicking on it in Windows explorer, and then run DISM with the `/optimize-image` parameter.
+>
+> ```cmd
+> DISM /image:D:\ /optimize-image /boot
+> ```
+> Where D: is the mounted VHD's path.
+>
+> Running `DISM /optimize-image` should be the last modification you make to your VHD. If you make any changes to your VHD prior to deployment, you'll have to run `DISM /optimize-image` again.
+ ## Next steps - [Upload a Windows VM image to Azure for Resource Manager deployments](upload-generalized-managed.md)
virtual-machines Scheduled Event Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-event-service.md
We now want to connect a Log Analytics Workspace to the collector VM. The Log An
1. Open the page for the workspace you created. 1. Under **Connect to a data source** select **Azure virtual machines (VMs)**.
- ![Connect to a VM as a data source](./media/notifications/connect-to-data-source.png)
+ ![Connect to a VM as a data source](./media/notifications/connect-to-data-source.png)
1. Search for and select **myCollectorVM**. 1. On the new page for **myCollectorVM**, select **Connect**.
This will install the [Microsoft Monitoring agent](../extensions/oms-windows.md)
1. Select **Data** from the left menu, then select **Windows Event Logs**. 1. In **Collect from the following event logs**, start typing *application* and then select **Application** from the list.
- ![Select Advanced settings](./media/notifications/advanced.png)
+ ![Select Advanced settings](./media/notifications/advanced.png)
1. Leave **ERROR**, **WARNING**, and **INFORMATION** selected and then select **Save** to save the settings.
This will install the [Microsoft Monitoring agent](../extensions/oms-windows.md)
## Creating an alert rule with Azure Monitor - Once the events are pushed to Log Analytics, you can run the following [query](../../azure-monitor/logs/log-analytics-tutorial.md) to look for the scheduled events. 1. At the top of the page, select **Logs** and paste the following into the text box:
- ```
- Event
- | where EventLog == "Application" and Source contains "AzureScheduledEvents" and RenderedDescription contains "Scheduled" and RenderedDescription contains "EventStatus"
- | project TimeGenerated, RenderedDescription
- | extend ReqJson= parse_json(RenderedDescription)
- | extend EventId = ReqJson["EventId"]
- ,EventStatus = ReqJson["EventStatus"]
- ,EventType = ReqJson["EventType"]
- ,NotBefore = ReqJson["NotBefore"]
- ,ResourceType = ReqJson["ResourceType"]
- ,Resources = ReqJson["Resources"]
- | project-away RenderedDescription,ReqJson
- ```
+ ```
+ Event
+ | where EventLog == "Application" and Source contains "AzureScheduledEvents" and RenderedDescription contains "Scheduled" and RenderedDescription contains "EventStatus"
+ | project TimeGenerated, RenderedDescription
+ | extend ReqJson= parse_json(RenderedDescription)
+ | extend EventId = ReqJson["EventId"]
+ ,EventStatus = ReqJson["EventStatus"]
+ ,EventType = ReqJson["EventType"]
+ ,NotBefore = ReqJson["NotBefore"]
+ ,ResourceType = ReqJson["ResourceType"]
+ ,Resources = ReqJson["Resources"]
+ | project-away RenderedDescription,ReqJson
+ ```
1. Select **Save**, and then type `ogQuery` for the name, leave **Query** as the type, type `VMLogs` as the **Category**, and then select **Save**.
- ![Save the query](./media/notifications/save-query.png)
+ ![Save the query](./media/notifications/save-query.png)
1. Select **New alert rule**. 1. In the **Create rule** page, leave `collectorworkspace` as the **Resource**.
virtual-machines Install Ibm Z Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/install-ibm-z-environment.md
The installation file for the web server is **ZDT\_Install\_EE\_V12.0.0.1.tgz**.
1. From the command line, enter the following command to make sure everything is up to date in the newly created image:
- ```
- sudo apt-get update
- ```
+ ```
+ sudo apt-get update
+ ```
2. Create the directory to install to:
- ```
+ ```
mkdir ZDT
- ```
+ ```
3. Copy the file from your local machine to the VM:
- ```
+ ```
scp ZDT_Install_EE_V12.0.0.1.tgz your_userid@<IP Address /ZDT> =>
- ```
-
+ ```
+
> [!NOTE] > This command copies the installation file to the ZDT directory in your Home directory, which varies depending on whether your client runs Windows or Linux.
The installation file for the web server is **ZDT\_Install\_EE\_V12.0.0.1.tgz**.
1. Go to the ZDT directory and decompress the ZDT\_Install\_EE\_V12.0.0.1.tgz file using the following commands:
- ```
- cd ZDT
- tar zxvf ZDT\_Install\_EE\_V12.0.0.0.tgz
- ```
+ ```
+ cd ZDT
+ tar zxvf ZDT_Install_EE_V12.0.0.0.tgz
+ ```
2. Run the installer:
- ```
- chmod 755 ZDT\_Install\_EE\_V12.0.0.0.x86_64
- ./ZDT_Install_EE_V12.0.0.0.x86_64
- ```
+ ```
+ chmod 755 ZDT_Install_EE_V12.0.0.0.x86_64
+ ./ZDT_Install_EE_V12.0.0.0.x86_64
+ ```
3. Select **1** to install Enterprise Server.
The installation file for the web server is **ZDT\_Install\_EE\_V12.0.0.1.tgz**.
6. To verify that the installation was successful, enter:
- ```
- dpkg -l | grep zdtapp
- ```
+ ```
+ dpkg -l | grep zdtapp
+ ```
7. Verify that the output contains the string **zdtapp 12.0.0.0**, indicating that the package has been installed successfully.
Keep in mind that when the web server starts, it runs under the zD&T user ID tha
1. To start the web server, use the root User ID to run the following command:
- ```
- sudo /opt/ibm/zDT/bin/startServer
- ```
+ ```
+ sudo /opt/ibm/zDT/bin/startServer
+ ```
2. Copy the URL output by the script, which looks like:
- ```
- https://<your IP address or domain name>:9443/ZDTMC/login.htm
- ```
+ ```
+ https://<your IP address or domain name>:9443/ZDTMC/login.htm
+ ```
3. Paste the URL into a web browser to open the management component for your zD&T installation.
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
Use the following procedure to lock a RHEL 8.x VM to a particular minor release.
1. Get the EUS repository `config` file. ```bash
- sudo wget https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8-eus.config
+ curl -O https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8-eus.config
``` 1. Add EUS repositories.
To remove the version lock, use the following commands. Run the commands as `roo
1. Get the regular repositories `config` file. ```bash
- sudo wget https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8.config
+ curl -O https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8.config
``` 1. Add non-EUS repository.
vpn-gateway Azure Vpn Client Optional Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/azure-vpn-client-optional-configurations.md
To add DNS suffixes, modify the downloaded profile XML file and add the **\<dnss
<dnssuffix>.xyz.com</dnssuffix> <dnssuffix>.etc.net</dnssuffix> </dnssuffixes>
-
+ </clientconfig> </azvpnprofile> ```
To add custom DNS servers, modify the downloaded profile XML file and add the **
<azvpnprofile> <clientconfig>
- <dnsservers>
- <dnsserver>x.x.x.x</dnsserver>
- <dnsserver>y.y.y.y</dnsserver>
- </dnsservers>
-
+ <dnsservers>
+ <dnsserver>x.x.x.x</dnsserver>
+ <dnsserver>y.y.y.y</dnsserver>
+ </dnsservers>
+ </clientconfig> </azvpnprofile> ```
You can configure forced tunneling in order to direct all traffic to the VPN tun
```xml <azvpnprofile> <clientconfig>
-
+ <includeroutes>
- <route>
- <destination>0.0.0.0</destination><mask>1</mask>
- </route>
- <route>
- <destination>128.0.0.0</destination><mask>1</mask>
- </route>
+ <route>
+ <destination>0.0.0.0</destination><mask>1</mask>
+ </route>
+ <route>
+ <destination>128.0.0.0</destination><mask>1</mask>
+ </route>
</includeroutes>
-
+ </clientconfig> </azvpnprofile> ```
-
+ > [!NOTE] > - The default status for the clientconfig tag is `<clientconfig i:nil="true" />`, which can be modified based on the requirement. > - A duplicate clientconfig tag is not supported on macOS, so make sure the clientconfig tag is not duplicated in the XML file.
You can add custom routes. Modify the downloaded profile XML file and add the **
<azvpnprofile> <clientconfig>
- <includeroutes>
- <route>
- <destination>x.x.x.x</destination><mask>24</mask>
- </route>
- <route>
- <destination>y.y.y.y</destination><mask>24</mask>
- </route>
- </includeroutes>
-
+ <includeroutes>
+ <route>
+ <destination>x.x.x.x</destination><mask>24</mask>
+ </route>
+ <route>
+ <destination>y.y.y.y</destination><mask>24</mask>
+ </route>
+ </includeroutes>
+ </clientconfig> </azvpnprofile> ```
The ability to completely block routes isn't supported by the Azure VPN Client.
<azvpnprofile> <clientconfig>
- <excluderoutes>
- <route>
- <destination>x.x.x.x</destination><mask>24</mask>
- </route>
- <route>
- <destination>y.y.y.y</destination><mask>24</mask>
- </route>
- </excluderoutes>
-
+ <excluderoutes>
+ <route>
+ <destination>x.x.x.x</destination><mask>24</mask>
+ </route>
+ <route>
+ <destination>y.y.y.y</destination><mask>24</mask>
+ </route>
+ </excluderoutes>
+ </clientconfig> </azvpnprofile> ```
vpn-gateway Ikev2 Openvpn From Sstp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ikev2-openvpn-from-sstp.md
description: Learn how to transition to OpenVPN protocol or IKEv2 from SSTP to o
Previously updated : 05/04/2022 Last updated : 07/28/2023
There may be cases when you want to support more than 128 concurrent P2S connect
This is the simplest option. SSTP and IKEv2 can coexist on the same gateway and give you a higher number of concurrent connections. You can simply enable IKEv2 on the existing gateway and redownload the client.
-Adding IKEv2 to an existing SSTP VPN gateway won't affect existing clients and you can configure them to use IKEv2 in small batches or just configure the new clients to use IKEv2. If a Windows client is configured for both SSTP and IKEv2, it will try to connect using IKEV2 first and if that fails, it will fall back to SSTP.
+Adding IKEv2 to an existing SSTP VPN gateway won't affect existing clients and you can configure them to use IKEv2 in small batches or just configure the new clients to use IKEv2. If a Windows client is configured for both SSTP and IKEv2, it tries to connect using IKEv2 first and if that fails, it falls back to SSTP.
**IKEv2 uses non-standard UDP ports so you need to ensure that these ports are not blocked on the user's firewall. The ports in use are UDP 500 and 4500.**
To add IKEv2 to an existing gateway, go to the "point-to-site configuration" tab
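If you prefer to script the change instead of using the portal, a minimal Azure PowerShell sketch (the gateway and resource group names are placeholders) looks like this:

```azurepowershell-interactive
# Enable both SSTP and IKEv2 client protocols on an existing gateway (sketch; names are placeholders)
$gw = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -VpnClientProtocol "SSTP","IkeV2"
```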
### Option 2 - Remove SSTP and enable OpenVPN on the Gateway
-Since SSTP and OpenVPN are both TLS-based protocol, they can't coexist on the same gateway. If you decide to move away from SSTP to OpenVPN, you'll have to disable SSTP and enable OpenVPN on the gateway. This operation will cause the existing clients to lose connectivity to the VPN gateway until the new profile has been configured on the client.
+Since SSTP and OpenVPN are both TLS-based protocols, they can't coexist on the same gateway. If you decide to move away from SSTP to OpenVPN, you'll have to disable SSTP and enable OpenVPN on the gateway. This operation causes the existing clients to lose connectivity to the VPN gateway until the new profile has been configured on the client.
You can enable OpenVPN alongside IKEv2 if you desire. OpenVPN is TLS-based and uses the standard TCP 443 port. To switch to OpenVPN, go to the "point-to-site configuration" tab under the Virtual Network Gateway in portal, and select **OpenVPN (SSL)** or **IKEv2 and OpenVPN (SSL)** from the drop-down box.
vpn-gateway Reset Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/reset-gateway.md
description: Learn how to reset a gateway or a gateway connection to reestablish
Previously updated : 06/13/2022 Last updated : 07/28/2023 # Reset a VPN gateway or a connection
Before you reset your gateway, verify the key items listed below for each IPsec
Verify the following items before resetting your gateway: * The Internet IP addresses (VIPs) for both the Azure VPN gateway and the on-premises VPN gateway are configured correctly in both the Azure and the on-premises VPN policies.
-* The pre-shared key must be the same on both Azure and on-premises VPN gateways.
+* The preshared key must be the same on both Azure and on-premises VPN gateways.
* If you apply specific IPsec/IKE configuration, such as encryption, hashing algorithms, and PFS (Perfect Forward Secrecy), ensure both the Azure and on-premises VPN gateways have the same configurations. ### <a name="portal"></a>Azure portal
-You can reset a Resource Manager VPN gateway using the Azure portal. If you want to reset a classic gateway, see the PowerShell steps for the [Classic deployment model](#resetclassic).
+You can reset a Resource Manager VPN gateway using the Azure portal.
[!INCLUDE [portal steps](../../includes/vpn-gateway-reset-gw-portal-include.md)] ### <a name="ps"></a>PowerShell
-#### Resource Manager deployment model
-- The cmdlet for resetting a gateway is **Reset-AzVirtualNetworkGateway**. Before performing a reset, make sure you have the latest version of the [PowerShell Az cmdlets](/powershell/module/az.network). The following example resets a virtual network gateway named VNet1GW in the TestRG1 resource group: ```azurepowershell-interactive
$gw = Get-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1
Reset-AzVirtualNetworkGateway -VirtualNetworkGateway $gw ```
-Result:
+When you receive a return result, you can assume the gateway reset was successful. However, there's nothing in the return result that indicates explicitly that the reset was successful. If you want to look closely at the history to see exactly when the gateway reset occurred, you can view that information in the [Azure portal](https://portal.azure.com). In the portal, navigate to **'GatewayName' -> Resource Health**.
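If you also want a quick programmatic sanity check (a sketch; the gateway and resource group names are placeholders), you can confirm that the gateway has returned to a **Succeeded** provisioning state:

```azurepowershell-interactive
# Check the gateway's provisioning state after the reset completes (sketch; names are placeholders)
(Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1").ProvisioningState
```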
+
+### <a name="cli"></a>Azure CLI
+
+To reset the gateway, use the [az network vnet-gateway reset](/cli/azure/network/vnet-gateway) command. The following example resets a virtual network gateway named VNet5GW in the TestRG5 resource group:
+
+```azurecli-interactive
+az network vnet-gateway reset -n VNet5GW -g TestRG5
+```
When you receive a return result, you can assume the gateway reset was successful. However, there's nothing in the return result that indicates explicitly that the reset was successful. If you want to look closely at the history to see exactly when the gateway reset occurred, you can view that information in the [Azure portal](https://portal.azure.com). In the portal, navigate to **'GatewayName' -> Resource Health**.
-#### <a name="resetclassic"></a>Classic deployment model
+### <a name="resetclassic"></a>Reset a classic gateway
-The cmdlet for resetting a gateway is **Reset-AzureVNetGateway**. The Azure PowerShell cmdlets for Service Management must be installed locally on your desktop. You can't use Azure Cloud Shell. Before performing a reset, make sure you have the latest version of the [Service Management (SM) PowerShell cmdlets](/powershell/azure/servicemanagement/install-azure-ps#azure-service-management-cmdlets). When using this command, make sure you're using the full name of the virtual network. Classic VNets that were created using the portal have a long name that is required for PowerShell. You can view the long name by using 'Get-AzureVNetConfig -ExportToFile C:\Myfoldername\NetworkConfig.xml'.
+The cmdlet for resetting a classic gateway is **Reset-AzureVNetGateway**. The Azure PowerShell cmdlets for Service Management must be installed locally on your desktop. You can't use Azure Cloud Shell. Before performing a reset, make sure you have the latest version of the [Service Management (SM) PowerShell cmdlets](/powershell/azure/servicemanagement/install-azure-ps#azure-service-management-cmdlets).
+
+When using this command, make sure you're using the full name of the virtual network. Classic VNets that were created using the portal have a long name that is required for PowerShell. You can view the long name by using 'Get-AzureVNetConfig -ExportToFile C:\Myfoldername\NetworkConfig.xml'.
The following example resets the gateway for a virtual network named "Group TestRG1 TestVNet1" (which shows as simply "TestVNet1" in the portal):
RequestId : 9ca273de2c4d01e986480ce1ffa4d6d9
StatusCode : OK ```
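For reference, a reset of the classic gateway in this example might be issued as shown in the following sketch. Verify the parameter name against your installed Service Management module before relying on it:

```powershell
# Classic (Service Management) reset; requires the full virtual network name
Reset-AzureVNetGateway -VNetName 'Group TestRG1 TestVNet1'
```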
-### <a name="cli"></a>Azure CLI
-
-To reset the gateway, use the [az network vnet-gateway reset](/cli/azure/network/vnet-gateway) command. The following example resets a virtual network gateway named VNet5GW in the TestRG5 resource group:
+## Next steps
-```azurecli-interactive
-az network vnet-gateway reset -n VNet5GW -g TestRG5
-```
-
-Result:
-
-When you receive a return result, you can assume the gateway reset was successful. However, there's nothing in the return result that indicates explicitly that the reset was successful. If you want to look closely at the history to see exactly when the gateway reset occurred, you can view that information in the [Azure portal](https://portal.azure.com). In the portal, navigate to **'GatewayName' -> Resource Health**.
+For more information about VPN Gateway, see the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md).
vpn-gateway Site To Site Vpn Private Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/site-to-site-vpn-private-peering.md
description: Learn how to configure site-to-site VPN connections over ExpressRou
Previously updated : 09/21/2022 Last updated : 07/28/2023
You can configure a Site-to-Site VPN to a virtual network gateway over an Expres
This feature is available for the following SKUs:
-* VpnGw1, VpnGw2, VpnGw3, VpnGw4, VpnGw5 with standard public IP with no zones
* VpnGw1AZ, VpnGw2AZ, VpnGw3AZ, VpnGw4AZ, VpnGw5AZ with standard public IP with one or more zones >[!NOTE]
vpn-gateway Vpn Gateway 3Rdparty Device Config Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-3rdparty-device-config-overview.md
description: Learn about partner VPN device configurations for connecting to Azu
Previously updated : 09/02/2020 Last updated : 07/28/2023
vpn-gateway Vpn Gateway About Point To Site Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-point-to-site-routing.md
description: Learn about Azure Point-to-Site VPN routing for different operating
Previously updated : 12/03/2021 Last updated : 07/28/2023
There are a number of different diagrams in this article. Each section shows a d
## <a name="isolatedvnet"></a>One isolated VNet
-The Point-to-Site VPN gateway connection in this example is for a VNet that is not connected or peered with any other virtual network (VNet1). In this example, clients can access VNet1.
+The Point-to-Site VPN gateway connection in this example is for a VNet that isn't connected or peered with any other virtual network (VNet1). In this example, clients can access VNet1.
:::image type="content" source="./media/vpn-gateway-about-point-to-site-routing/isolated.jpg" alt-text="Isolated VNet routing" lightbox="./media/vpn-gateway-about-point-to-site-routing/isolated.jpg":::
The Point-to-Site VPN gateway connection in this example is for a VNet that is n
In this example, the Point-to-Site VPN gateway connection is for VNet1. VNet1 is peered with VNet2. VNet2 is peered with VNet3. VNet1 is peered with VNet4. There is no direct peering between VNet1 and VNet3. VNet1 has "Allow gateway transit" enabled, and VNet2 and VNet4 have "Use remote gateways" enabled.
-Clients using Windows can access directly peered VNets, but the VPN client must be downloaded again if any changes are made to VNet peering or the network topology. Non-Windows clients can access directly peered VNets. Access is not transitive and is limited to only directly peered VNets.
+Clients using Windows can access directly peered VNets, but the VPN client must be downloaded again if any changes are made to VNet peering or the network topology. Non-Windows clients can access directly peered VNets. Access isn't transitive and is limited to only directly peered VNets.
:::image type="content" source="./media/vpn-gateway-about-point-to-site-routing/multiple.jpg" alt-text="Multiple peered VNets" lightbox="./media/vpn-gateway-about-point-to-site-routing/multiple.jpg":::
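If you manage the peering with Azure PowerShell, the two flags called out above map to switch parameters on `Add-AzVirtualNetworkPeering`. The following is a minimal sketch for the VNet1/VNet2 pair, assuming both VNets already exist in a resource group named TestRG1 (an illustrative name):

```azurepowershell
$vnet1 = Get-AzVirtualNetwork -Name VNet1 -ResourceGroupName TestRG1
$vnet2 = Get-AzVirtualNetwork -Name VNet2 -ResourceGroupName TestRG1

# VNet1 (the gateway VNet) allows gateway transit
Add-AzVirtualNetworkPeering -Name VNet1ToVNet2 -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id -AllowGatewayTransit

# VNet2 routes through VNet1's gateway
Add-AzVirtualNetworkPeering -Name VNet2ToVNet1 -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id -UseRemoteGateways -AllowForwardedTraffic
```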
Clients using Windows can access directly peered VNets, but the VPN client must
## <a name="multis2s"></a>Multiple VNets connected using an S2S VPN
-In this example, the Point-to-Site VPN gateway connection is for VNet1. VNet1 is connected to VNet2 using a Site-to-Site VPN connection. VNet2 is connected to VNet3 using a Site-to-Site VPN connection. There is no direct peering or Site-to-Site VPN connection between VNet1 and VNet3. All Site-to-Site connections are not running BGP for routing.
+In this example, the Point-to-Site VPN gateway connection is for VNet1. VNet1 is connected to VNet2 using a Site-to-Site VPN connection. VNet2 is connected to VNet3 using a Site-to-Site VPN connection. There is no direct peering or Site-to-Site VPN connection between VNet1 and VNet3. None of the Site-to-Site connections are running BGP for routing.
Clients using Windows, or another supported OS, can only access VNet1. To access additional VNets, BGP must be used.
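As a hedged sketch, enabling BGP on a VNet-to-VNet connection comes down to the `-EnableBgp` flag when the connection is created. The gateway objects and shared key are assumed to exist already, both gateways must have BGP (an ASN) configured, and the names are illustrative:

```azurepowershell
# Both ends of the VNet-to-VNet connection must be created with BGP enabled
New-AzVirtualNetworkGatewayConnection -Name VNet1ToVNet2 -ResourceGroupName TestRG1 `
  -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet2gw `
  -Location eastus -ConnectionType Vnet2Vnet -SharedKey 'abc123' -EnableBgp $true
```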
Clients using Windows, or another supported OS, can access all VNets that are co
## <a name="vnetbranch"></a>One VNet and a branch office
-In this example, the Point-to-Site VPN gateway connection is for VNet1. VNet1 is not connected/ peered with any other virtual network, but is connected to an on-premises site through a Site-to-Site VPN connection that is not running BGP.
+In this example, the Point-to-Site VPN gateway connection is for VNet1. VNet1 isn't connected or peered with any other virtual network, but is connected to an on-premises site through a Site-to-Site VPN connection that isn't running BGP.
Windows and non-Windows clients can only access VNet1.
Windows and non-Windows clients can only access VNet1.
## <a name="vnetbranchbgp"></a>One VNet and a branch office (BGP)
-In this example, the Point-to-Site VPN gateway connection is for VNet1. VNet1 is not connected or peered with any other virtual network, but is connected to an on-premises site (Site1) through a Site-to-Site VPN connection running BGP.
+In this example, the Point-to-Site VPN gateway connection is for VNet1. VNet1 isn't connected or peered with any other virtual network, but is connected to an on-premises site (Site1) through a Site-to-Site VPN connection running BGP.
-Windows clients can access the VNet and the branch office (Site1), but the routes to Site1 must be manually added to the client. Non-Windows clients can access the VNet as well as the on-premises branch office.
+Windows clients can access the VNet and the branch office (Site1), but the routes to Site1 must be manually added to the client. Non-Windows clients can access the VNet and the on-premises branch office.
:::image type="content" source="./media/vpn-gateway-about-point-to-site-routing/branch-bgp.jpg" alt-text="Routing with a VNet and a branch office - BGP" lightbox="./media/vpn-gateway-about-point-to-site-routing/branch-bgp.jpg":::
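On a Windows client, adding the missing Site1 routes mentioned above can be done with the built-in VpnClient cmdlets. The connection name and address prefix in this sketch are assumptions for illustration:

```powershell
# Run on the Windows point-to-site client; 10.51.0.0/16 stands in for the Site1 address space
Add-VpnConnectionRoute -ConnectionName "VNet1" -DestinationPrefix "10.51.0.0/16"
```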
vpn-gateway Vpn Gateway Certificates Point To Site Makecert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-certificates-point-to-site-makecert.md
description: Learn how to create a self-signed root certificate, export a public
Previously updated : 09/02/2020 Last updated : 07/28/2023 # Generate and export certificates for Point-to-Site connections using MakeCert
-Point-to-Site connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using MakeCert. If you are looking for different certificate instructions, see [Certificates - PowerShell](vpn-gateway-certificates-point-to-site.md) or [Certificates - Linux](vpn-gateway-certificates-point-to-site-linux.md).
+Point-to-Site connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using MakeCert. If you're looking for different certificate instructions, see [Certificates - PowerShell](vpn-gateway-certificates-point-to-site.md) or [Certificates - Linux](vpn-gateway-certificates-point-to-site-linux.md).
While we recommend using the [Windows 10 or later PowerShell steps](vpn-gateway-certificates-point-to-site.md) to create your certificates, we provide these MakeCert instructions as an optional method. The certificates that you generate using either method can be installed on [any supported client operating system](vpn-gateway-howto-point-to-site-resource-manager-portal.md#faq). However, MakeCert has the following limitation:
While we recommend using the [Windows 10 or later PowerShell steps](vpn-gateway-
## <a name="rootcert"></a>Create a self-signed root certificate
-The following steps show you how to create a self-signed certificate using MakeCert. These steps are not deployment-model specific. They are valid for both Resource Manager and classic.
+The following steps show you how to create a self-signed certificate using MakeCert. These steps aren't deployment-model specific. They're valid for both Resource Manager and classic.
1. Download and install [MakeCert](/windows/win32/seccrypto/makecert). 2. After installation, you can typically find the makecert.exe utility under this path: 'C:\Program Files (x86)\Windows Kits\10\bin\<arch>', although it's possible that it was installed to another location. Open a command prompt as administrator and navigate to the location of the MakeCert utility. You can use the following example, adjusting for the proper location:
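The exact command isn't reproduced in this change summary, but a representative MakeCert invocation for the self-signed root might look like the sketch below. The certificate name follows the article's P2SRootCert convention; confirm the switches against the MakeCert documentation before relying on them:

```powershell
# Creates an exportable, self-signed root certificate in the current user's Personal store
.\makecert.exe -sky exchange -r -n "CN=P2SRootCert" -pe -a sha256 -len 2048 -ss My
```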
The exported.cer file must be uploaded to Azure. For instructions, see [Configur
### Export the self-signed certificate and private key to store it (optional)
-You may want to export the self-signed root certificate and store it safely. If need be, you can later install it on another computer and generate more client certificates, or export another .cer file. To export the self-signed root certificate as a .pfx, select the root certificate and use the same steps as described in [Export a client certificate](#clientexport).
+You may want to export the self-signed root certificate and store it safely. You can later install it on another computer and generate more client certificates, or export another .cer file. To export the self-signed root certificate as a .pfx, select the root certificate and use the same steps as described in [Export a client certificate](#clientexport).
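If you'd rather script the export than use the certificates snap-in, a minimal PowerShell sketch looks like the following. It assumes the root certificate sits in the current user's Personal store and that only one certificate matches; the path and password are placeholders:

```powershell
# Locate the root certificate and export it, with its private key, as a password-protected .pfx
$rootCert = Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -eq "CN=P2SRootCert" }
$pfxPassword = ConvertTo-SecureString -String "ReplaceWithAStrongPassword" -Force -AsPlainText
Export-PfxCertificate -Cert $rootCert -FilePath "C:\certs\P2SRootCert.pfx" -Password $pfxPassword
```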
## Create and install client certificates
-You don't install the self-signed certificate directly on the client computer. You need to generate a client certificate from the self-signed certificate. You then export and install the client certificate to the client computer. The following steps are not deployment-model specific. They are valid for both Resource Manager and classic.
+You don't install the self-signed certificate directly on the client computer. You need to generate a client certificate from the self-signed certificate. You then export and install the client certificate to the client computer. The following steps aren't deployment-model specific. They're valid for both Resource Manager and classic.
### <a name="clientcert"></a>Generate a client certificate
-Each client computer that connects to a VNet using Point-to-Site must have a client certificate installed. You generate a client certificate from the self-signed root certificate, and then export and install the client certificate. If the client certificate is not installed, authentication fails.
+Each client computer that connects to a VNet using Point-to-Site must have a client certificate installed. You generate a client certificate from the self-signed root certificate, and then export and install the client certificate. If the client certificate isn't installed, authentication fails.
-The following steps walk you through generating a client certificate from a self-signed root certificate. You may generate multiple client certificates from the same root certificate. When you generate client certificates using the steps below, the client certificate is automatically installed on the computer that you used to generate the certificate. If you want to install a client certificate on another client computer, you can export the certificate.
+The following steps walk you through generating a client certificate from a self-signed root certificate. You may generate multiple client certificates from the same root certificate. When you generate client certificates using the following steps, the client certificate is automatically installed on the computer that you used to generate the certificate. If you want to install a client certificate on another client computer, you can export the certificate.
1. On the same computer that you used to create the self-signed certificate, open a command prompt as administrator. 2. Modify and run the sample to generate a client certificate.
- * Change *"P2SRootCert"* to the name of the self-signed root that you are generating the client certificate from. Make sure you are using the name of the root certificate, which is whatever the 'CN=' value was that you specified when you created the self-signed root.
+ * Change *"P2SRootCert"* to the name of the self-signed root that you're generating the client certificate from. Make sure you're using the name of the root certificate, which is whatever the 'CN=' value was that you specified when you created the self-signed root.
 * Change *P2SChildCert* to the name that you want for the generated client certificate. If you run the following example without modifying it, the result is a client certificate named P2SChildCert in your Personal certificate store that was generated from root certificate P2SRootCert.
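That sample isn't shown in this change summary; a representative MakeCert invocation for the client certificate might look like the following sketch (again, verify the switches against the MakeCert documentation):

```powershell
# Issues P2SChildCert from the P2SRootCert in the Personal store, valid for 96 months
.\makecert.exe -n "CN=P2SChildCert" -pe -sky exchange -m 96 -ss My -in "P2SRootCert" -is My -a sha256
```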
vpn-gateway Vpn Gateway Delete Vnet Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-portal.md
description: Learn how to delete a virtual network gateway using the Azure portal. Previously updated : 02/10/2021 Last updated : 07/28/2023
The following steps help you delete any resources that are no longer being used.
#### To delete the Public IP address resource for the gateway
-1. In **All resources**, locate the Public IP address resource that was associated to the gateway. If the virtual network gateway was active-active, you will see two Public IP addresses.
+1. In **All resources**, locate the Public IP address resource that was associated to the gateway. If the virtual network gateway was active-active, you'll see two Public IP addresses.
1. On the **Overview** page for the Public IP address, click **Delete**, then **Yes** to confirm. #### To delete the gateway subnet
The following steps help you delete any resources that are no longer being used.
## <a name="deleterg"></a>Delete a VPN gateway by deleting the resource group
-If you are not concerned about keeping any of your resources in the resource group and you just want to start over, you can delete an entire resource group. This is a quick way to remove everything. The following steps apply only to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md).
+If you aren't concerned about keeping any of your resources in the resource group and you just want to start over, you can delete an entire resource group. This is a quick way to remove everything. The following steps apply only to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md).
1. In **All resources**, locate the resource group and click to open the blade. 1. Click **Delete**. On the Delete blade, view the affected resources. Make sure that you want to delete all of these resources. If not, use the steps in Delete a VPN gateway at the top of this article.
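The same cleanup can be scripted. As a sketch, deleting the whole resource group from Azure PowerShell is a single call; the group name is illustrative, and everything inside it is removed:

```azurepowershell-interactive
# Deletes the resource group and every resource it contains
Remove-AzResourceGroup -Name TestRG1
```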
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
description: Learn about frequently asked questions for VPN Gateway cross-premis
Previously updated : 01/30/2023 Last updated : 07/28/2023
web-application-firewall Configure Waf Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/configure-waf-custom-rules.md
$poolSetting01 = New-AzApplicationGatewayBackendHttpSettings -Name "setting1" -P
-Protocol Http -CookieBasedAffinity Disabled $rule01 = New-AzApplicationGatewayRequestRoutingRule -Name "rule1" -RuleType basic `
- -BackendHttpSettings $poolSetting01 -HttpListener $listener01 -BackendAddressPool $pool
+ -BackendHttpSettings $poolSetting01 -HttpListener $listener01 -BackendAddressPool $pool -Priority 1000
$autoscaleConfig = New-AzApplicationGatewayAutoscaleConfiguration -MinCapacity 3
$sku = New-AzApplicationGatewaySku -Name WAF_v2 -Tier WAF_v2
### Create two custom rules and apply them to a WAF policy ```azurepowershell
-# Create WAF config
-$wafConfig = New-AzApplicationGatewayWebApplicationFirewallConfiguration -Enabled $true -FirewallMode "Prevention" -RuleSetType "OWASP" -RuleSetVersion "3.0"
# Create a User-Agent header custom rule $variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RequestHeaders -Selector User-Agent $condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator Contains -MatchValue "evilbot" -Transform Lowercase -NegationCondition $False
$condition2 = New-AzApplicationGatewayFirewallCondition -MatchVariable $var2 -Op
$rule2 = New-AzApplicationGatewayFirewallCustomRule -Name allowUS -Priority 14 -RuleType MatchRule -MatchCondition $condition2 -Action Allow -State Enabled # Create a firewall policy
-$wafPolicy = New-AzApplicationGatewayFirewallPolicy -Name wafpolicyNew -ResourceGroup $rgname -Location $location -CustomRule $rule,$rule2
+$policySetting = New-AzApplicationGatewayFirewallPolicySetting -Mode Prevention -State Enabled
+$wafPolicy = New-AzApplicationGatewayFirewallPolicy -Name wafpolicyNew -ResourceGroup $rgname -Location $location -PolicySetting $policySetting -CustomRule $rule,$rule2
``` ### Create the Application Gateway
$appgw = New-AzApplicationGateway -Name $appgwName -ResourceGroupName $rgname `
-GatewayIpConfigurations $gipconfig -FrontendIpConfigurations $fipconfig01 ` -FrontendPorts $fp01 -HttpListeners $listener01 ` -RequestRoutingRules $rule01 -Sku $sku -AutoscaleConfiguration $autoscaleConfig `
- -WebApplicationFirewallConfig $wafConfig `
-FirewallPolicy $wafPolicy ```
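After deployment, a quick way to confirm the custom rules landed on the policy is to read it back. This is a sketch and assumes the variables from the earlier snippets are still in scope:

```azurepowershell
# List the custom rules attached to the new WAF policy
$check = Get-AzApplicationGatewayFirewallPolicy -Name wafpolicyNew -ResourceGroupName $rgname
$check.CustomRules | Select-Object Name, Priority, Action
```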
web-application-firewall Tutorial Restrict Web Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/tutorial-restrict-web-traffic-powershell.md
$defaultlistener = New-AzApplicationGatewayHttpListener `
$frontendRule = New-AzApplicationGatewayRequestRoutingRule ` -Name rule1 ` -RuleType Basic `
+ -Priority 1000 `
-HttpListener $defaultlistener ` -BackendAddressPool $defaultPool ` -BackendHttpSettings $poolSettings