Updates from: 06/24/2023 01:14:41
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Join Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-windows-vm.md
Previously updated : 01/29/2023 Last updated : 06/22/2023 #Customer intent: As a server administrator, I want to learn how to join a Windows Server VM to an Azure Active Directory Domain Services managed domain to provide centralized identity and policy.
If you already have a VM that you want to domain-join, skip to the section to [j
| Virtual machine name | Enter a name for the VM, such as *myVM* |
| Region | Choose the region to create your VM in, such as *East US* |
| Username | Enter a username for the local administrator account to create on the VM, such as *azureuser* |
- | Password | Enter, and then confirm, a secure password for the local administrator to create on the VM. Don't specify a domain user account's credentials. |
+ | Password | Enter, and then confirm, a secure password for the local administrator to create on the VM. Don't specify a domain user account's credentials. [Windows LAPS](/windows-server/identity/laps/laps-overview) isn't supported. |
1. By default, VMs created in Azure are accessible from the Internet using RDP. When RDP is enabled, automated sign-in attacks are likely to occur, which may disable accounts with common names such as *admin* or *administrator* due to multiple failed successive sign-in attempts.
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
Previously updated : 04/17/2023 Last updated : 06/23/2023
To work properly, phone numbers must be in the format *+CountryCode PhoneNumber*, for example, *+1 4251234567*.
## Mobile phone verification
-For Azure AD Multi-Factor Authentication or SSPR, users can choose to receive a text message with a verification code to enter in the sign-in interface, or receive a phone call.
+For Azure AD Multi-Factor Authentication or SSPR, users can choose to receive an SMS message with a verification code to enter in the sign-in interface, or receive a phone call.
If users don't want their mobile phone number to be visible in the directory but want to use it for password reset, administrators shouldn't populate the phone number in the directory. Instead, users should populate their **Authentication Phone** attribute via the combined security info registration at [https://aka.ms/setupsecurityinfo](https://aka.ms/setupsecurityinfo). Administrators can see this information in the user's profile, but it's not published elsewhere.
Microsoft doesn't guarantee consistent SMS or voice-based Azure AD Multi-Factor Authentication prompt delivery by the same number. In the interest of our users, we may add or remove short codes at any time as we make route adjustments to improve SMS deliverability. Microsoft doesn't support short codes for countries/regions besides the United States and Canada.
-### Text message verification
+> [!NOTE]
+> Starting July 2023, we apply delivery method optimization such that tenants with a free or trial subscription may receive an SMS message or voice call.
+
+### SMS message verification
-With text message verification during SSPR or Azure AD Multi-Factor Authentication, a Short Message Service (SMS) text is sent to the mobile phone number containing a verification code. To complete the sign-in process, the verification code provided is entered into the sign-in interface.
+With SMS message verification during SSPR or Azure AD Multi-Factor Authentication, a Short Message Service (SMS) text is sent to the mobile phone number containing a verification code. To complete the sign-in process, the verification code provided is entered into the sign-in interface.
Android users can enable Rich Communication Services (RCS) on their devices. RCS offers encryption and other improvements over SMS. For Android, MFA text messages may be sent over RCS rather than SMS. The MFA text message is similar to SMS, but RCS messages have more Microsoft branding and a verified checkmark so users know they can trust the message.
active-directory Howto Convert App To Be Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-convert-app-to-be-multi-tenant.md
Multi-tenant applications can also get access tokens to call APIs that are prote
## Related content

* [Multi-tenant application sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-3-Multi-Tenant/README.md)
+* [Multi-tier multi-tenant application sample](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/blob/main/6-AdvancedScenarios/2-call-api-mt/README.md)
* [Branding guidelines for applications][AAD-App-Branding]
* [Application objects and service principal objects][AAD-App-SP-Objects]
* [Integrating applications with Azure Active Directory][AAD-Integrating-Apps]
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
The following samples illustrate web applications that sign in users. Some sampl
> | Language/<br/>Platform | Code sample(s)<br/> on GitHub | Auth<br/> libraries | Auth flow |
> | - | - | - | - |
> | ASP.NET Core| ASP.NET Core Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/1-WebApp-OIDC/README.md) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/1-WebApp-OIDC/1-5-B2C/README.md) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-1-Call-MSGraph/README.md) <br/> &#8226; [Customize token cache](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-2-TokenCache/README.md) <br/> &#8226; [Call Graph (multi-tenant)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-3-Multi-Tenant/README.md) <br/> &#8226; [Call Azure REST APIs](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/3-WebApp-multi-APIs/README.md) <br/> &#8226; [Protect web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-1-MyOrg/README.md) <br/> &#8226; [Protect web API (B2C)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-2-B2C/README.md) <br/> &#8226; [Protect multi-tenant web API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/4-WebApp-your-API/4-3-AnyOrg/Readme.md) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-1-Roles/README.md) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md) <br/> &#8226; [Deploy to Azure Storage and App Service](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/6-Deploy-to-Azure/README.md) | [Microsoft.Identity.Web](/dotnet/api/microsoft-authentication-library-dotnet/confidentialclient) | &#8226; OpenID connect <br/> &#8226; Authorization code <br/> &#8226; On-Behalf-Of|
-> | Blazor | Blazor Server Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/MyOrg) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/B2C) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-graph-user/Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/MyOrg) <br/> &#8226; [Call web API (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/B2C) | [MSAL.NET](/entra/msal/dotnet) | Implicit/Hybrid flow|
+> | Blazor | Blazor Server Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/MyOrg) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC/B2C) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-graph-user/Call-MSGraph) <br/> &#8226; [Call web API](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/MyOrg) <br/> &#8226; [Call web API (B2C)](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-your-API/B2C) | [MSAL.NET](/entra/msal/dotnet) | Hybrid flow |
> | ASP.NET Core|[Advanced Token Cache Scenarios](https://github.com/Azure-Samples/ms-identity-dotnet-advanced-token-cache) | [Microsoft.Identity.Web](/dotnet/api/microsoft-authentication-library-dotnet/confidentialclient) | On-Behalf-Of (OBO) |
> | ASP.NET Core|[Use the Conditional Access auth context to perform step\-up authentication](https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app/blob/main/README.md) | [Microsoft.Identity.Web](/dotnet/api/microsoft-authentication-library-dotnet/confidentialclient) | Authorization code |
> | ASP.NET Core|[Active Directory FS to Azure AD migration](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) | [MSAL.NET](/entra/msal/dotnet) | &#8226; SAML <br/> &#8226; OpenID connect |
> | ASP.NET | &#8226; [Microsoft Graph Training Sample](https://github.com/microsoftgraph/msgraph-training-aspnetmvcapp) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) <br/> &#8226; [Sign in users and call Microsoft Graph with admin restricted scope](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) <br/> &#8226; [Quickstart: Sign in users](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) | [MSAL.NET](/entra/msal/dotnet) | &#8226; OpenID connect <br/> &#8226; Authorization code |
> | Java </p> Spring |Azure AD Spring Boot Starter Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> &#8226; [Use Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) <br/> &#8226; [Protect a web API](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/4.%20Spring%20Framework%20Web%20App%20Tutorial/3-Authorization-II/protect-web-api) | &#8226; [MSAL Java](/java/api/com.microsoft.aad.msal4j) <br/> &#8226; Azure AD Boot Starter | Authorization code |
> | Java </p> Servlets | Spring-less Servlet Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/2-Authorization-I/call-graph) <br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/roles) <br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/3-Authorization-II/groups) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/3.%20Java%20Servlet%20Web%20App%20Tutorial/4-Deployment/deploy-to-azure-app-service) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Authorization code |
-> | Node.js </p> Express | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md)<br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) <br/> &#8226; [Web app that sign in users](https://github.com/Azure-Samples/ms-identity-node) | [MSAL Node](/javascript/api/@azure/msal-node) | Authorization code |
+> | Node.js </p> Express | Express web app series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/1-sign-in/README.md)<br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/1-Authentication/2-sign-in-b2c/README.md)<br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/2-Authorization/1-call-graph/README.md) <br/> &#8226; [Call Microsoft Graph via BFF proxy](https://github.com/Azure-Samples/ms-identity-node) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/3-Deployment/README.md)<br/> &#8226; [Use App Roles for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/1-app-roles/README.md)<br/> &#8226; [Use Security Groups for access control](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-tutorial/blob/main/4-AccessControl/2-security-groups/README.md) | [MSAL Node](/javascript/api/@azure/msal-node) | &#8226; Authorization code <br/>&#8226; Backend-for-Frontend (BFF) proxy |
> | Python </p> Flask | Flask Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/>&#8226; [A template to sign in AAD or B2C users, and optionally call a downstream API (Microsoft Graph)](https://github.com/Azure-Samples/ms-identity-python-webapp) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-flask-tutorial) | [MSAL Python](/python/api/msal/overview-msal) | Authorization code |
> | Python </p> Django | Django Series <br/> &#8226; [Sign in users](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in) <br/> &#8226; [Sign in users (B2C)](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/1-Authentication/sign-in-b2c) <br/> &#8226; [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/2-Authorization-I/call-graph) <br/> &#8226; [Deploy to Azure App Service](https://github.com/Azure-Samples/ms-identity-python-django-tutorial/tree/main/3-Deployment/deploy-to-azure-app-service)| [MSAL Python](/python/api/msal/overview-msal) | Authorization code |
> | Ruby | Graph Training <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/microsoftgraph/msgraph-training-rubyrailsapp) | OmniAuth OAuth2 | Authorization code |
The following samples show public client desktop applications that access the Mi
> | .NET | [Invoke protected API with integrated Windows authentication](https://github.com/azure-samples/active-directory-dotnet-iwa-v2) | [MSAL.NET](/entra/msal/dotnet) | Integrated Windows authentication |
> | Java | [Call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-msal-java-samples/tree/main/2.%20Client-Side%20Scenarios/Integrated-Windows-Auth-Flow) | [MSAL Java](/java/api/com.microsoft.aad.msal4j) | Integrated Windows authentication |
> | Node.js | [Sign in users](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-desktop) | [MSAL Node](/javascript/api/@azure/msal-node) | Authorization code with PKCE |
-> | PowerShell | [Call Microsoft Graph by signing in users using username/password](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | [MSAL.NET](/entra/msal/dotnet) | Resource owner password credentials |
+> | .NET Core | [Call Microsoft Graph by signing in users using username/password](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) | [MSAL.NET](/entra/msal/dotnet) | Resource owner password credentials |
> | Python | [Sign in users](https://github.com/Azure-Samples/ms-identity-python-desktop) | [MSAL Python](/python/api/msal/overview-msal) | Resource owner password credentials |
> | Universal Windows Platform (UWP) | [Call Microsoft Graph](https://github.com/Azure-Samples/active-directory-xamarin-native-v2/tree/main/2-With-broker) | [MSAL.NET](/entra/msal/dotnet) | Web account manager |
> | Windows Presentation Foundation (WPF) | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/2.%20Web%20API%20now%20calls%20Microsoft%20Graph) | [MSAL.NET](/entra/msal/dotnet) | Authorization code with PKCE |
-> | XAML | &#8226; [Sign in users and call ASP.NET Core web API](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/1.%20Desktop%20app%20calls%20Web%20API) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | [MSAL.NET](/entra/msal/dotnet) | Authorization code with PKCE |
+> | Windows Presentation Foundation (WPF) | &#8226; [Sign in users and call ASP.NET Core web API](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/tree/master/1.%20Desktop%20app%20calls%20Web%20API) <br/> &#8226; [Sign in users and call Microsoft Graph](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | [MSAL.NET](/entra/msal/dotnet) | Authorization code with PKCE |
### Mobile
The following samples show how to protect an Azure Function using HttpTrigger an
> | Node.js | [Call Microsoft Graph API on behalf of a user](https://github.com/Azure-Samples/ms-identity-nodejs-webapi-onbehalfof-azurefunctions) | [MSAL Node](/javascript/api/@azure/msal-node) | On-Behalf-Of (OBO)|
> | Python | [Python Azure function web API secured by Azure AD](https://github.com/Azure-Samples/ms-identity-python-webapi-azurefunctions) | [MSAL Python](/python/api/msal/overview-msal) | Authorization code |
-### Headless
+### Browserless (Headless)
The following sample shows a public client application running on a device without a web browser. The app can be a command-line tool, an app running on Linux or Mac, or an IoT application. The sample features an app accessing the Microsoft Graph API on behalf of a user who signs in interactively on another device (such as a mobile phone). This client application uses the Microsoft Authentication Library (MSAL).
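
As an illustration of the pattern these samples implement, here's a minimal device code flow sketch using MSAL Node (a sketch only: the client ID, authority, and scope are placeholder values, and the individual samples may use other MSAL languages):

```javascript
// Minimal device code flow sketch with MSAL Node (placeholder values).
const msal = require('@azure/msal-node');

const pca = new msal.PublicClientApplication({
  auth: {
    clientId: '<client-id>',
    authority: 'https://login.microsoftonline.com/<tenant-id>',
  },
});

pca.acquireTokenByDeviceCode({
  scopes: ['User.Read'],
  // MSAL supplies a verification URL and user code; display them so the user
  // can complete sign-in on another device, such as a mobile phone.
  deviceCodeCallback: (response) => console.log(response.message),
}).then((result) => {
  console.log(`Signed in as ${result.account.username}`);
});
```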
active-directory Licensing Powershell Graph Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-powershell-graph-examples.md
foreach ($userId in $skus.Keys) {
    Write-Host "SKU IDs:"
    foreach ($skuId in $skus[$userId].Keys) {
- $sku = Get-MgSubscribedSku -SkuId $skuId
+ $sku = Get-MgSubscribedSku -SubscribedSkuId $skuId
        Write-Host "- $($sku.DisplayName)"
        Write-Host "  Assigned directly: $($skus[$userId][$skuId].AssignedDirectly)"
        Write-Host "  Assigned through groups: $($skus[$userId][$skuId].AssignedThroughGroups)"
active-directory How To Define Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-define-custom-attributes.md
Previously updated : 06/06/2023 Last updated : 06/22/2023
You can choose the order in which the attributes are displayed on the sign-up pa
1. Under **Customize**, select **Page layouts**. The attributes you chose to collect appear.
- - To change the properties of an attribute, select a value under the **Label**, **Required**, or **Attribute type** columns, and then type or select a new value.
+ - To change the properties of an attribute, select a value under the **Label**, **Required**, or **Attribute type** columns, and then type or select a new value. (For security reasons, **Email Address** properties can't be changed).
- To change the order of display, select an attribute, and then select **Move up**, **Move down**, **Move to the top**, or **Move to the bottom**.
active-directory Sample Browserless App Dotnet Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-browserless-app-dotnet-sign-in.md
+
+ Title: Sign in users in a sample ASP.NET browserless app
+description: Use a sample to learn how to configure a sample ASP.NET browserless app.
+ Last updated : 06/23/2023
+#Customer intent: As a dev, devops, I want to learn about how to configure a sample ASP.NET browserless app to sign in users with my Azure Active Directory (Azure AD) for customers tenant
++
+# Sign in users in a sample ASP.NET browserless app using the device code flow
+
+This how-to guide uses a sample ASP.NET browserless app to show how to add authentication to the app. The sample app enables users to sign in. The sample ASP.NET browserless app uses the [Microsoft Authentication Library for .NET (MSAL.NET)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) to handle authentication.
+
+## Prerequisites
+
+- [.NET 7 SDK](https://dotnet.microsoft.com/download/dotnet/7.0).
+
+- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor.
+
+- Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.
+
+## Register the headless app
++
+## Enable public client flow
++
+## Grant API permissions
+
+Since this app signs in users, add delegated permissions:
++
+## Create a user flow
++
+## Associate the browserless app with the user flow
++
+## Clone or download sample browserless app
+
+To get the browserless app sample code, you can either [download the .zip file](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/archive/refs/heads/main.zip) or clone the sample web application from GitHub by running the following command:
+
+```console
+git clone https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial.git
+```
+If you choose to download the *.zip* file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters.
+
+## Configure the sample browserless app
+
+1. Open the project in your IDE (like Visual Studio or Visual Studio Code) to configure the code.
+
+1. In your code editor, open the *appsettings.json* file in the *1-Authentication* > *4-sign-in-device-code* folder.
+
+1. Replace `Enter_the_Application_Id_Here` with the Application (client) ID of the app you registered earlier.
+
+1. Replace `Enter_the_Tenant_Subdomain_Here` with the Directory (tenant) subdomain. For example, if your primary domain is *contoso.onmicrosoft.com*, replace `Enter_the_Tenant_Subdomain_Here` with *contoso*. If you don't have your primary domain, learn how to [read tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
+
+## Run and test sample browserless app
+
+1. Open a console window, and change to the directory that contains the ASP.NET browserless sample app:
+
+ ```console
+ cd 1-Authentication/4-sign-in-device-code
+ ```
+
+1. In your terminal, run the app by using the following command:
+
+ ```console
+ dotnet run
+ ```
+1. When the app launches, copy the suggested URL *https://microsoft.com/devicelogin* from the terminal and visit it in a browser. Then, copy the device code from the terminal and [follow the prompts](./how-to-browserless-app-dotnet-sign-in-sign-in.md#sign-in-to-your-app) on *https://microsoft.com/devicelogin*.
+
+## How it works
+
+The browserless app is initialized as a public client application. You acquire a token by using the device code grant flow. This flow allows users to sign in to input-constrained devices such as a smart TV, IoT device, or printer. You then pass a callback to the `AcquireTokenWithDeviceCode` method. This callback receives a `DeviceCodeResult` object that contains the verification URL the user navigates to and the code they enter to sign in. Once the user signs in, an `AuthenticationResult` is returned that contains an access token and some basic account information.
+
+```csharp
+var scopes = new string[] { }; // by default, MSAL attaches OIDC scopes to every token request
+var result = await app.AcquireTokenWithDeviceCode(scopes, async deviceCode => {
+    Console.WriteLine($"In a browser, navigate to the URL '{deviceCode.VerificationUrl}' and enter the code '{deviceCode.UserCode}'");
+ await Task.FromResult(0);
+}).ExecuteAsync();
+
+Console.WriteLine($"You signed in as {result.Account.Username}");
+```
+
+## Next steps
+
+Next, learn how to build your own ASP.NET browserless app and sign in users.
+
+> [!div class="nextstepaction"]
+> [Build your own ASP.NET browserless app and sign in users >](how-to-browserless-app-dotnet-sign-in-overview.md)
active-directory Sample Browserless App Node Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-browserless-app-node-sign-in.md
+
+ Title: Sign in users in a sample Node.js browserless application using the Device Code flow
+description: Learn how to configure a sample browserless application to sign in users in an Azure Active Directory (Azure AD) for customers tenant
+ Last updated : 06/23/2023
+#Customer intent: As a dev, devops, I want to learn about how to configure a sample Node.js browserless application to authenticate users with my Azure Active Directory (Azure AD) for customers tenant
++
+# Authenticate users in a sample Node.js browserless application using the Device Code flow
+
+This how-to guide uses a sample Node.js application to show how to sign in users in a browserless application. The sample application uses the device code flow to sign in users in an Azure Active Directory (Azure AD) for customers tenant.
+
+In this article, you complete the following tasks:
+
+- Register an application in the Microsoft Entra admin center.
+
+- Create a sign-in and sign-out user flow in Microsoft Entra admin center.
+
+- Associate your browserless application with the user flow.
+
+- Update a sample Node.js browserless application using your own Azure AD for customers tenant.
+
+- Run and test the sample browserless application.
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org).
+
+- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor.
+
+- Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.
+
+## Register the browserless app
++
+## Grant API permissions
++
+## Create a user flow
++
+## Associate the browserless application with the user flow
++
+## Clone or download sample browserless application
+
+To get the browserless app sample code, you can either [download the .zip file](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/refs/heads/main.zip) or clone the sample browserless application from GitHub by running the following command:
+
+```powershell
+ git clone https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial.git
+```
+
+## Install project dependencies
+
+1. Open a console window, and navigate to the directory that contains the Node.js sample app. For example:
+
+ ```powershell
+ cd 1-Authentication\4-sign-in-device-code\App
+ ```
+1. Run the following command to install app dependencies:
+
+ ```powershell
+ npm install
+ ```
+
+## Update the sample app to use its Azure app registration details
+
+1. In your code editor, open the `App\authConfig.js` file.
+
+1. Find the placeholder:
+
+ 1. `Enter_the_Application_Id_Here` and replace it with the Application (client) ID of the app you registered earlier.
+
+    1. `Enter_the_Tenant_Subdomain_Here` and replace it with the Directory (tenant) subdomain. For instance, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant domain name, [learn how to read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
+
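+After the replacements, the relevant part of *App\authConfig.js* might look like the following sketch (a minimal msal-node configuration shape; the exact property names in the sample may differ):
+
+```javascript
+// Hypothetical shape of App\authConfig.js after the replacements (illustrative).
+const msalConfig = {
+    auth: {
+        clientId: 'Enter_the_Application_Id_Here', // Application (client) ID from your app registration
+        authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // authority for your customer tenant
+    },
+};
+
+module.exports = { msalConfig };
+```
+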
+## Run and test sample browserless app
+
+You can now test the sample Node.js browserless app.
+
+1. In your terminal, run the following command:
+
+ ```powershell
+ npm start
+ ```
+
+1. Open your browser, then go to the URL shown in the terminal message, https://microsoft.com/devicelogin. You should see a page similar to the following screenshot:
+
+ :::image type="content" source="media/how-to-browserless-app-node-sample-sign-in/browserless-app-node-sign-in-enter-code.png" alt-text="Screenshot of the enter code prompt in a node browserless application using the device code flow.":::
+
+1. To authenticate, copy the device code from the message in the terminal, then paste it into the **Enter Code** prompt. After you enter the code, you're redirected to the sign-in page as follows:
+
+ :::image type="content" source="media/how-to-browserless-app-node-sample-sign-in/browserless-app-node-sign-in-page.png" alt-text="Screenshot showing the sign in page in a node browserless application.":::
+
+1. On the sign-in page, type your **Email address**. If you don't have an account, select **No account? Create one**, which starts the sign-up flow.
+
+1. If you choose the sign-up option, fill in your email, one-time passcode, new password, and more account details to complete the sign-up flow. After you sign up and sign in, you see a page similar to the following screenshot:
+
+ :::image type="content" source="media/how-to-browserless-app-node-sample-sign-in/browserless-app-node-signed-in-user.png" alt-text="Screenshot showing a signed-in user in a node browserless application.":::
+
+1. Move back to the terminal to see your authentication information, including the ID token claims returned by Microsoft Entra.
+
+## Next steps
+
+Learn how to:
+
+- [Sign in users in your own Node.js browserless application by using Microsoft Entra](how-to-browserless-app-node-sign-in-overview.md)
active-directory Sample Daemon Node Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-daemon-node-call-api.md
+
+ Title: Call an API in a sample Node.js daemon application
+description: Learn how to configure a sample Node.js daemon application that calls an API protected by Azure Active Directory (Azure AD) for customers
+ Last updated : 06/23/2023
+#Customer intent: As a dev, devops, I want to configure a sample Node.js daemon application that calls an API protected by Azure Active Directory (Azure AD) for customers tenant
++
+# Call an API in a sample Node.js daemon application
+
+This article uses a sample Node.js daemon application to show you how a daemon app acquires a token to call a web API. Azure Active Directory (Azure AD) for customers protects the web API.
+
+A daemon application acquires a token on behalf of itself (not on behalf of a user). Users can't interact with a daemon application because it runs without a signed-in user and uses its own identity. This type of application requests an access token by using its application identity and presenting its application ID, credential (password or certificate), and application ID URI to Azure AD.
+
+A daemon app uses the standard [OAuth 2.0 client credentials grant](../../develop/v2-oauth2-client-creds-grant-flow.md). To simplify the process of acquiring the token, the sample we use in this article uses [Microsoft Authentication Library for Node (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node).
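+
+As a minimal sketch of this grant with MSAL Node (the placeholder values here are illustrative; the sample's own configuration lives in *App\authConfig.js*):
+
+```javascript
+// Minimal client credentials sketch with MSAL Node (placeholder values).
+const msal = require('@azure/msal-node');
+
+const cca = new msal.ConfidentialClientApplication({
+    auth: {
+        clientId: '<daemon-app-client-id>',
+        authority: 'https://<tenant-subdomain>.ciamlogin.com/',
+        clientSecret: '<daemon-app-client-secret>',
+    },
+});
+
+// "/.default" requests the app roles (application permissions) granted to the daemon app.
+cca.acquireTokenByClientCredential({
+    scopes: ['api://<web-api-application-id>/.default'],
+}).then((result) => {
+    console.log(`Acquired app-only token that expires on ${result.expiresOn}`);
+});
+```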
++
+## Prerequisites
+
+- [Node.js](https://nodejs.org).
+
+- [.NET 7.0](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/install) or later.
+
+- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor.
+
+- Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.
+
+## Register a daemon application and a web API
+
+In this step, you create the daemon and the web API application registrations, and you specify the scopes of your web API.
+
+### Register a web API application
++
+### Configure app roles
++
+### Configure optional claims
++
+### Register the daemon app
++
+### Create a client secret
++
+### Grant API permissions to the daemon app
++
+## Clone or download sample daemon application and web API
+
+To get the web app sample code, you can either [download the .zip file](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/refs/heads/main.zip) or clone the sample web application from GitHub by running the following command:
+
+```console
+git clone https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial.git
+```
+
+If you choose to download the *.zip* file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters.
+
+## Install project dependencies
+
+1. Open a console window, and change to the directory that contains the Node.js sample app:
+
+ ```console
+ cd 2-Authorization\3-call-api-node-daemon\App
+ ```
+1. Run the following commands to install app dependencies:
+
+ ```console
+ npm install && npm update
+ ```
+
+## Configure the sample daemon app and API
+
+To use your app registration in the daemon app sample:
+
+1. In your code editor, open the `App\authConfig.js` file.
+
+1. Find the placeholder:
+
+ - `Enter_the_Application_Id_Here` and replace it with the Application (client) ID of the daemon app you registered earlier.
+
+ - `Enter_the_Tenant_Subdomain_Here` and replace it with the Directory (tenant) subdomain. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
+
+ - `Enter_the_Client_Secret_Here` and replace it with the daemon app secret value you copied earlier.
+
+ - `Enter_the_Web_Api_Application_Id_Here` and replace it with the Application (client) ID of the web API you copied earlier.
+
+To use your app registration in the web API sample:
+
+1. In your code editor, open the `API\ToDoListAPI\appsettings.json` file.
+
+1. Find the placeholder:
+
+ - `Enter_the_Application_Id_Here` and replace it with the Application (client) ID of the web API you copied.
+
+ - `Enter_the_Tenant_Id_Here` and replace it with the Directory (tenant) ID you copied earlier.
+
+ - `Enter_the_Tenant_Subdomain_Here` and replace it with the Directory (tenant) subdomain. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
+
+## Run and test sample daemon app and API
+
+1. Open a console window, then run the web API by using the following commands:
+
+ ```console
+ cd 2-Authorization\3-call-api-node-daemon\API\ToDoListAPI
+ dotnet run
+ ```
+1. Run the daemon app client by using the following commands:
+
+ ```console
+    cd 2-Authorization\3-call-api-node-daemon\App
+ node . --op getToDos
+ ```
+
+If your daemon app and web API successfully run, you should see something similar to the following JSON array in your console window:
+
+```json
+[
+  {
+    "id": 1,
+    "owner": "3e8....-db63-43a2-a767-5d7db...",
+    "description": "Pick up grocery"
+  },
+  {
+    "id": 2,
+    "owner": "c3cc....-c4ec-4531-a197-cb919ed.....",
+    "description": "Finish invoice report"
+  },
+  {
+    "id": 3,
+    "owner": "a35e....-3b8a-4632-8c4f-ffb840d.....",
+    "description": "Water plants"
+  }
+]
+```
+
+### How it works
+
+The Node.js app uses the [OAuth 2.0 client credentials grant](../../develop/v2-oauth2-client-creds-grant-flow.md) to acquire an access token for itself, not for a user. The access token that the app requests contains the permissions represented as roles. The client credential flow uses this set of permissions in place of user scopes for application tokens. You [exposed these application permissions](#configure-app-roles) in the web API earlier, then [granted them to the daemon app](#grant-api-permissions-to-the-daemon-app).
+
+On the API side, the web API must verify that the access token has the required permissions (application permissions). The web API can't accept an access token that doesn't have the required permissions.
+
+### Access to data
+
+A web API endpoint should be prepared to accept calls from both users and applications, and it should have a way to respond to each request accordingly. For example, a call from a user via delegated permissions/scopes receives only that user's to-do list, whereas a call from an application via application permissions/roles may receive the entire to-do list. However, in this article we only make an application call, so we didn't need to configure delegated permissions/scopes.
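+
+As a sketch of that idea (the sample's API is ASP.NET; this Node-style handler and the permission names in it are illustrative assumptions, not the sample's code):
+
+```javascript
+// Illustrative: branch on delegated scopes (scp) vs. application roles (roles)
+// after the token has been validated by earlier middleware.
+function getToDos(req, res) {
+    const claims = req.authInfo; // decoded token claims (assumed middleware output)
+
+    if (claims.roles && claims.roles.includes('ToDoList.Read.All')) {
+        // App-only call via application permissions: return the entire list.
+        return res.json(listAllToDos()); // hypothetical data-access helper
+    }
+    if (claims.scp && claims.scp.split(' ').includes('ToDoList.Read')) {
+        // Delegated call on behalf of a user: return only that user's items.
+        return res.json(listToDosForOwner(claims.oid)); // hypothetical helper
+    }
+    return res.status(403).json({ error: 'Missing required permissions' });
+}
+```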
+
+## Next steps
+
+- Learn how to [Acquire an access token, then call a web API in your own Node.js daemon app](how-to-daemon-node-call-api-overview.md).
+
+- Learn how to [Use a client certificate instead of a secret for authentication in your Node.js confidential app](how-to-web-app-node-use-certificate.md).
+
+- Learn about [permissions and consent](../../develop/permissions-consent-overview.md).
active-directory Sample Single Page App Vanillajs Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-single-page-app-vanillajs-sign-in.md
+
+ Title: Sign in users in a sample vanilla JavaScript single-page application
+description: Learn how to configure a sample JavaScript single-page application (SPA) to sign in and sign out users.
+ Last updated : 06/23/2023
+#Customer intent: As a dev, devops, I want to learn about how to configure a sample vanilla JS SPA to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant
++
+# Sign in users in a sample vanilla JavaScript single-page application
+
+This how-to guide uses a sample vanilla JavaScript single-page application (SPA) to demonstrate how to add authentication to a SPA. The SPA enables users to sign in and sign out by using their own Azure Active Directory (Azure AD) for customers tenant. The sample uses the [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) to handle authentication.
+
+## Prerequisites
+
+* Although any IDE that supports vanilla JS applications can be used, **Visual Studio Code** is recommended for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page.
+* [Node.js](https://nodejs.org/en/download/).
+* Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.
+
+## Register the SPA in the Microsoft Entra admin center
++
+## Grant API permissions
++
+## Create a user flow
++
+## Associate the SPA with the user flow
++
+## Clone or download sample SPA
+
+To get the sample SPA, you can choose one of the following options:
+
+* Clone the repository using Git:
+
+ ```powershell
+ git clone https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial.git
+ ```
+
+* [Download the sample](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/refs/heads/main.zip)
+
+If you choose to download the `.zip` file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters.
+
+## Install project dependencies
+
+1. Open a terminal window in the root directory of the sample project, and enter the following snippet to navigate to the project folder:
+
+ ```powershell
+ cd 1-Authentication\0-sign-in-vanillajs\App
+ ```
+
+1. Install the project dependencies:
+
+ ```powershell
+ npm install
+ ```
+
+## Configure the sample SPA
+
+1. Open `authConfig.js`.
+1. Find `Enter_the_Tenant_Name_Here` and replace it with the name of your tenant.
+1. In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is *caseyjensen.onmicrosoft.com*, the value you should enter is *caseyjensen*.
+1. Save the file.
+
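+After the replacements, the `auth` section of *authConfig.js* might look like the following sketch (a minimal msal-browser shape; property names beyond `auth`, including the client ID and redirect URI shown here, are illustrative):
+
+```javascript
+// Hypothetical shape of authConfig.js after the replacements (illustrative).
+const msalConfig = {
+    auth: {
+        clientId: 'Enter_the_Application_Id_Here', // Application (client) ID
+        authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // customer tenant authority
+        redirectUri: 'http://localhost:3000', // must match a redirect URI registered for the SPA
+    },
+};
+```
+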
+## Run your project and sign in
+
+1. Open a new terminal and run the following command to start your Express web server.
+
+ ```powershell
+ npm start
+ ```
+
+1. Open a web browser and navigate to `http://localhost:3000/`.
+1. Select **No account? Create one**, which starts the sign-up flow.
+1. In the **Create account** window, enter the email address registered to your customer tenant, which starts the sign-up flow as a user for your application.
+1. Enter the one-time passcode sent to your email address, then enter a new password and more account details to complete the sign-up flow.
+1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**.
+1. The SPA now displays a button saying **Request Profile Information**. Select it to display profile data.
+
+    :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of signing in to a vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png":::
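+
+One common way a SPA like this displays profile data is to read the cached account and its ID token claims from MSAL.js; here is a minimal msal-browser sketch of that pattern (an assumed pattern, not necessarily the sample's exact code; `msalConfig` is the configuration from *authConfig.js*):
+
+```javascript
+// Illustrative: read the signed-in account and its ID token claims with msal-browser.
+const msalInstance = new msal.PublicClientApplication(msalConfig);
+
+const account = msalInstance.getAllAccounts()[0];
+if (account) {
+    // ID token claims carry the basic profile data collected at sign-up.
+    console.log(account.username, account.idTokenClaims);
+}
+```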
+
+## Sign out of the application
+
+1. To sign out of the application, select **Sign out** in the navigation bar.
+1. A window appears asking which account to sign out of.
+1. Upon successful sign out, a final window appears advising you to close all browser windows.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Enable self-service password reset](./how-to-enable-password-reset-customers.md)
active-directory Sample Single Page Application Angular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-single-page-application-angular.md
+
+ Title: Sign in users in a sample Angular single-page application.
+description: Learn how to configure a sample Angular Single Page Application (SPA) using Azure Active Directory for Customers
+ Last updated : 06/23/2023
+#Customer intent: As a dev, devops, I want to learn about how to configure a sample Angular Single Page Application to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant
++
+# Sign in users in a sample Angular single-page application
+
+This how-to guide uses a sample Angular single-page application (SPA) to demonstrate how to add authentication to a SPA. The SPA enables users to sign in and sign out by using your Azure Active Directory (Azure AD) for customers tenant. The sample uses the [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) to handle authentication.
+
+## Prerequisites
+
+* Although any IDE that supports Angular applications can be used, **Visual Studio Code** is used for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page.
+* [Node.js](https://nodejs.org/en/download/).
+* Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.
++
+## Register the SPA in the Microsoft Entra admin center
++
+## Grant API permissions
++
+## Create a user flow
++
+## Associate the SPA with the user flow
++
+## Clone or download sample SPA
+
+To get the sample SPA, you can choose one of the following options:
+
+* Clone the repository using Git:
+
+ ```powershell
+ git clone https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial.git
+ ```
+
+* [Download the sample](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/refs/heads/main.zip)
+
+If you choose to download the `.zip` file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters.
+
+## Install project dependencies
+
+1. Open a terminal window in the root directory of the sample project, and enter the following snippet to navigate to the project folder:
+
+ ```powershell
+ cd 1-Authentication\2-sign-in-angular\SPA
+ ```
+
+1. Install the project dependencies:
+
+ ```powershell
+ npm install
+ ```
+
+## Configure the sample SPA
+
+1. Open `SPA\src\authConfig.js` and replace the following with the values obtained from the Microsoft Entra admin center:
+ * `clientId` - The identifier of the application, also referred to as the client. Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application.
+ * `authority` - The identity provider instance and sign-in audience for the app. Replace `Enter_the_Tenant_Name_Here` with the name of your CIAM tenant.
+ * The *Tenant ID* is the identifier of the tenant where the application is registered. Replace the `_Enter_the_Tenant_Info_Here` with the **Directory (tenant) ID** value that was recorded earlier from the overview page of the registered application.
+1. Save the file.
+
+## Run your project and sign in
+
+All the required code snippets have been added, so the application can now be called and tested in a web browser.
+
+1. Open a new terminal by selecting **Terminal** > **New Terminal**.
+1. Run the following command to start your web server.
+
+ ```powershell
+ cd 1-Authentication\2-sign-in-angular\SPA
+ npm start
+ ```
+
+1. Open a web browser and navigate to `http://localhost:4200/`.
+
+1. Sign in with an account registered to the Azure AD for customers tenant.
+
+1. Once you successfully sign in, the display name is shown next to the **Sign out** button.
+
+## Next steps
+
+Learn how to use the Microsoft Authentication Library (MSAL) for JavaScript to sign in users and acquire tokens to call Microsoft Graph.
active-directory Sample Single Page Application React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-single-page-application-react.md
+
+ Title: Sign in users in a sample React single-page application
+description: Learn how to configure a sample React single-page app (SPA) to sign in and sign out users.
+ Last updated : 06/23/2023
+#Customer intent: As a dev, devops, I want to learn about how to configure a sample React single-page app to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant
++
+# Sign in users in a sample React single-page app (SPA)
+
+This guide uses a sample React single-page application (SPA) to demonstrate how to add authentication to a SPA. This SPA enables users to sign in and sign out by using your Azure Active Directory (Azure AD) for customers tenant. The sample uses the [Microsoft Authentication Library for JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js) to handle authentication.
+
+## Prerequisites
+* Although any IDE that supports React applications can be used, **Visual Studio Code** is used for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page.
+* [Node.js](https://nodejs.org/en/download/).
+* Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.
++
+## Register the SPA in the Microsoft Entra admin center
++
+## Grant API permissions
++
+## Create a user flow
++
+## Associate the SPA with the user flow
++
+## Clone or download sample SPA
+
+To get the sample SPA, you can choose one of the following options:
+
+* Clone the repository using Git:
+
+ ```powershell
+ git clone https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial.git
+ ```
+
+* [Download the sample](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/refs/heads/main.zip)
+
+If you choose to download the `.zip` file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters.
+
+## Install project dependencies
+
+1. Open a terminal window in the root directory of the sample project, and enter the following snippet to navigate to the project folder:
+
+ ```powershell
+ cd 1-Authentication\1-sign-in-react\SPA
+ ```
+
+1. Install the project dependencies:
+
+ ```powershell
+ npm install
+ ```
+
+## Configure the sample SPA
+
+1. Open _SPA\src\authConfig.js_ and replace the following with the values obtained from the Microsoft Entra admin center:
+ * `clientId` - The identifier of the application, also referred to as the client. Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application.
+ * `authority` - The identity provider instance and sign-in audience for the app. Replace `Enter_the_Tenant_Name_Here` with the name of your Azure AD customer tenant.
+ * The *Tenant ID* is the identifier of the tenant where the application is registered. Replace the `_Enter_the_Tenant_Info_Here` with the **Directory (tenant) ID** value that was recorded earlier from the overview page of the registered application.
+1. Save the file.
+
+## Run your project and sign in
+All the required code snippets have been added, so the application can now be called and tested in a web browser.
+
+1. Open a new terminal by selecting **Terminal** > **New Terminal**.
+1. Run the following command to start your web server.
+
+ ```powershell
+ cd 1-Authentication\1-sign-in-react\SPA
+ npm start
+ ```
+
+1. Open a web browser and navigate to `http://localhost:3000/`.
+
+1. Sign in with an account registered to the Azure AD customer tenant.
+
+1. Once signed in, the display name is shown next to the **Sign out** button.
+
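+For context, a React component typically reads the signed-in account through the `useMsal` hook from `@azure/msal-react`; here is a minimal sketch (an assumed pattern, not necessarily the sample's exact code):
+
+```javascript
+// Illustrative: display the signed-in user's name with @azure/msal-react.
+import { useMsal } from '@azure/msal-react';
+
+export function WelcomeName() {
+    const { accounts } = useMsal();
+    const name = accounts[0] ? accounts[0].name : '';
+    return name ? `Welcome, ${name}` : null;
+}
+```
+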
+## Next steps
+> [!div class="nextstepaction"]
+> [Enable self-service password reset](./how-to-enable-password-reset-customers.md)
active-directory Sample Web App Dotnet Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-web-app-dotnet-sign-in.md
+
+ Title: Sign in users to a sample ASP.NET web application
+description: Learn how to configure a sample ASP.NET web app to sign in and sign out users by using an Azure AD for customers tenant.
+ Last updated : 06/23/2023
+#Customer intent: As a dev, devops, I want to learn about how to configure a sample ASP.NET web app to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant
++
+# Sign in users to a sample ASP.NET web app in an Azure AD for customers tenant
+
+This how-to guide uses a sample ASP.NET web application to show the fundamentals of modern authentication, using the [Microsoft Authentication Library for .NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) and [Microsoft Identity Web](https://github.com/AzureAD/microsoft-identity-web) for ASP.NET to handle authentication.
+
+In this article, you'll register a web application in the Microsoft Entra admin center and create a sign-in and sign-out user flow. You'll associate your web application with the user flow, then download and update a sample ASP.NET web application by using your own Azure Active Directory (Azure AD) for customers tenant details. Finally, you'll run and test the sample web application.
+
+## Prerequisites
+
+- Although any IDE that supports ASP.NET applications can be used, Visual Studio Code is used for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads/) page.
+- [.NET 7.0 SDK](https://dotnet.microsoft.com/download/dotnet).
+- Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.
+
+## Register the web app
++
+## Define the platform and URLs
++
+## Add app client secret
++
+## Grant API permissions
++
+## Create a user flow
++
+## Associate the web application with the user flow
++
+## Clone or download sample web application
+
+To get the web app sample code, you can do either of the following tasks:
+
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial/archive/refs/heads/main.zip). Extract the sample app file to a folder where the total length of the path is 260 or fewer characters.
+- Clone the sample web application from GitHub by running the following command:
+
+ ```powershell
+ git clone https://github.com/Azure-Samples/ms-identity-ciam-dotnet-tutorial.git
+ ```
+
+## Configure the application
+
+1. Navigate to the root folder of the sample you downloaded, then change to the directory that contains the ASP.NET sample app:
+
+ ```powershell
+ cd 1-Authentication\1-sign-in-aspnet-core-mvc
+ ```
+
+1. Open the *appsettings.json* file.
+1. In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is *caseyjensen.onmicrosoft.com*, the value you should enter is *caseyjensen*.
+1. Find the `Enter_the_Application_Id_Here` value and replace it with the application ID (clientId) of the app you registered in the Microsoft Entra admin center.
+1. Replace `Enter_the_Client_Secret_Here` with the client secret value you set up in [Add app client secret](#add-app-client-secret).
+
+## Run the code sample
+
+1. From your shell or command line, run the following command:
+
+ ```powershell
+ dotnet run
+ ```
+
+1. Open your web browser and navigate to `https://localhost:7274`.
+
+1. Sign in with an account registered to the customer tenant.
+
+1. Once signed in, the display name is shown next to the **Sign out** button, as shown in the following screenshot.
+
+    :::image type="content" source="media/how-to-web-app-dotnet-sign-in-sign-in-out/display-aspnet-welcome.png" alt-text="Screenshot of signing in to an ASP.NET web app.":::
+
+1. To sign out of the application, select the **Sign out** button.
+
+## Next steps
+
+- [Enable password reset](how-to-enable-password-reset-customers.md)
+- [Customize the default branding](how-to-customize-branding-customers.md)
+- [Configure sign-in with Google](how-to-google-federation-customers.md)
+- [Sign in users in your own ASP.NET web application by using an Azure AD for customers tenant](how-to-web-app-dotnet-sign-in-prepare-app.md)
active-directory Sample Web App Node Sign In Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-web-app-node-sign-in-call-api.md
+
+ Title: Sign in users and call an API in sample Node.js web application
+description: Learn how to configure a sample web app to sign in users and call an API.
+++++++++ Last updated : 06/23/2023++
+#Customer intent: As a dev, devops, I want to learn about how to configure a sample web app to sign in and sign out users with my CIAM tenant
++
+# Sign in users and call an API in sample Node.js web application
+
+This how-to guide uses a sample Node.js web application to show you how to add authentication and authorization. The sample application signs in users to a Node.js web app, which then calls a .NET API. You enable authentication and authorization by using your Azure Active Directory (Azure AD) for customers tenant details. The sample web application uses the [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) to handle authentication.
+
+In this article, you complete the following tasks:
+
+- Register and configure a web API in the Microsoft Entra admin center.
+
+- Register and configure a client web application in the Microsoft Entra admin center.
+
+- Create a sign-up and sign-in user flow in the Microsoft Entra admin center, and then associate a client web app with it.
+
+- Update a sample Node web application and ASP.NET web API to use your Azure AD for customers tenant details.
+
+- Run and test the sample web application and API.
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org).
+
+- [.NET 7.0](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/install) or later.
+
+- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor.
+
+- An Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.
+
+## Register a web application and a web API
+
+In this step, you create the web and the web API application registrations, and you specify the scopes of your web API.
+
+### Register a web API application
++
+### Configure API scopes
+
+This API needs to expose permissions that a client must acquire to call the API:
++
+### Configure app roles
++
+### Configure optional claims
++
+### Register the web app
++
+### Create a client secret
++
+### Grant API permissions to the web app
++
+## Create a user flow
++
+## Associate web application with the user flow
++
+## Clone or download sample web application and web API
+
+To get the web app and web API sample code, [download the .zip file](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/refs/heads/main.zip) or clone the sample web application from GitHub by running the following command:
+
+```powershell
+git clone https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial.git
+```
+
+If you choose to download the `.zip` file, extract the sample app file to a folder where the total length of the path is 260 or fewer characters.
+
+## Install project dependencies
+
+1. Open a console window, and change to the directory that contains the Node.js/Express sample app:
+
+ ```powershell
+ cd 2-Authorization\4-call-api-express\App
+ ```
+1. Run the following commands to install web app dependencies:
+
+ ```powershell
+ npm install && npm update
+ ```
+
+## Configure the sample web app and API
+
+To use your app registration in the client web app sample (a configuration sketch follows these steps):
+
+1. In your code editor, open the `App\authConfig.js` file.
+
+1. Find the following placeholders:
+
+ - `Enter_the_Application_Id_Here` and replace it with the Application (client) ID of the app you registered earlier.
+
+ - `Enter_the_Tenant_Subdomain_Here` and replace it with the Directory (tenant) subdomain. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
+
+ - `Enter_the_Client_Secret_Here` and replace it with the app secret value you copied earlier.
+
+ - `Enter_the_Web_Api_Application_Id_Here` and replace it with the Application (client) ID of the web API you copied earlier.
+
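+After these replacements, the relevant part of *App\authConfig.js* resembles the following TypeScript-style sketch. It's a rough illustration rather than the sample's exact file, and the authority URL format is an assumption:
+
+```typescript
+// Rough sketch of the MSAL Node configuration built from App/authConfig.js.
+// Every value is a placeholder from the steps above, not a working credential.
+const msalConfig = {
+  auth: {
+    clientId: "Enter_the_Application_Id_Here",
+    authority: "https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/", // assumed authority format
+    clientSecret: "Enter_the_Client_Secret_Here",
+  },
+};
+
+export default msalConfig;
+```
+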
+To use your app registration in the web API sample:
+
+1. In your code editor, open the `API\ToDoListAPI\appsettings.json` file.
+
+1. Find the following placeholders:
+
+ - `Enter_the_Application_Id_Here` and replace it with the Application (client) ID of the web API you copied.
+
+ - `Enter_the_Tenant_Id_Here` and replace it with the Directory (tenant) ID you copied earlier.
+
+ - `Enter_the_Tenant_Subdomain_Here` and replace it with the Directory (tenant) subdomain. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
++
+## Run and test sample web app and API
+
+1. Open a console window, then run the web API by using the following commands:
+
+ ```powershell
+ cd 2-Authorization\4-call-api-express\API\ToDoListAPI
+ dotnet run
+ ```
+
+1. Run the web app client by using the following commands:
+
+ ```powershell
+ cd 2-Authorization\4-call-api-express\App
+ npm start
+ ```
+
+1. Open your browser, then go to `http://localhost:3000`.
+
+1. Select the **Sign In** button. You're prompted to sign in.
+
+    :::image type="content" source="media/how-to-web-app-node-sample-sign-in-call-api/web-app-node-sign-in.png" alt-text="Screenshot of signing in to a Node.js web app.":::
+
+1. On the sign-in page, type your **Email address**, select **Next**, type your **Password**, then select **Sign in**. If you don't have an account, select the **No account? Create one** link, which starts the sign-up flow.
+
+1. If you choose the sign-up option, you complete the whole sign-up flow after filling in your email, one-time passcode, new password, and more account details. You then see a page similar to the following screenshot. You see a similar page if you choose the sign-in option.
+
+ :::image type="content" source="media/how-to-web-app-node-sample-sign-in-call-api/sign-in-call-api-view-to-do.png" alt-text="Screenshot of sign in into a node web app and call an API.":::
+
+### Call API
+
+1. To call the API, select the **View your todolist** link. You see a page similar to the following screenshot.
+
+    :::image type="content" source="media/how-to-web-app-node-sample-sign-in-call-api/sign-in-call-api-manipulate-to-do.png" alt-text="Screenshot of manipulating the to-do list through the API.":::
+
+1. Manipulate the to-do list by creating and removing items.
+
+### How it works
+
+You trigger an API call each time you view, add, or remove a task. Each time you trigger an API call, the client web app acquires an access token with the required permissions (scopes) to call an API endpoint. For example, to read a task, the client web app must acquire an access token with the `ToDoList.Read` permission/scope.
+
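+For illustration only, and not the sample's exact code, acquiring such a token with MSAL Node could look like the following sketch; the helper name and the scope URI format are assumptions:
+
+```typescript
+import { ConfidentialClientApplication, AccountInfo } from "@azure/msal-node";
+
+// cca: built from the authConfig.js values configured earlier.
+// account: the signed-in user's cached account (for example, from the session).
+async function getToDoListReadToken(
+  cca: ConfidentialClientApplication,
+  account: AccountInfo
+): Promise<string> {
+  const result = await cca.acquireTokenSilent({
+    account,
+    // Scope URI format assumed; the web API application ID is a placeholder.
+    scopes: ["api://Enter_the_Web_Api_Application_Id_Here/ToDoList.Read"],
+  });
+  if (!result) {
+    throw new Error("No cached token; fall back to interactive sign-in.");
+  }
+  return result.accessToken;
+}
+```
+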
+On the web API side, the endpoint must validate that the permissions/scopes present in the access token, which the client app presents, are valid. If the access token is valid, the endpoint responds to the HTTP request; otherwise, it responds with a `401 Unauthorized` HTTP error.
+
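+The sample's web API is ASP.NET, but the same check can be sketched in TypeScript/Express terms. In this rough illustration, the `req.authInfo` shape, route, and port are assumptions standing in for whatever token-validation layer decodes the token's claims:
+
+```typescript
+import express, { Request, Response, NextFunction } from "express";
+
+// Assumes earlier middleware already validated the access token's signature
+// and attached its decoded claims (including "scp") to the request.
+type AuthedRequest = Request & { authInfo?: { scp?: string } };
+
+function requireScope(requiredScope: string) {
+  return (req: AuthedRequest, res: Response, next: NextFunction) => {
+    // Delegated permissions arrive as a space-separated list in the "scp" claim.
+    const granted = (req.authInfo?.scp ?? "").split(" ");
+    if (granted.includes(requiredScope)) {
+      return next(); // the token carries the required permission
+    }
+    res.status(401).json({ error: "Unauthorized" }); // missing scope
+  };
+}
+
+const app = express();
+app.get("/api/todolist", requireScope("ToDoList.Read"), (_req, res) => {
+  res.json([{ id: 1, description: "sample task" }]); // illustrative payload
+});
+app.listen(5000);
+```
+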
+## Next steps
+
+Learn how to:
+
+- [Sign in users and call an API in your own Node.js web application](how-to-web-app-node-sign-in-call-api-overview.md). By completing these steps, you build a web app and web API similar to the sample you've run.
+
+- [Enable password reset](how-to-enable-password-reset-customers.md).
+
+- [Customize the default branding](how-to-customize-branding-customers.md).
+
+- [Configure sign-in with Google](how-to-google-federation-customers.md).
active-directory Sample Web App Node Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-web-app-node-sign-in.md
+
+ Title: Sign in users in a sample Node.js web application
+description: Learn how to configure a sample web app to sign in and sign out users.
+++++++++ Last updated : 06/23/2023++
+#Customer intent: As a dev, devops, I want to learn about how to configure a sample Node.js web app to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant
++
+# Sign in users in a sample Node.js web application
+
+This how-to guide uses a sample Node.js web application to show you how to add authentication to a web application. The sample application enables users to sign in and sign out. The sample web application uses the [Microsoft Authentication Library for Node (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) to handle authentication.
+
+In this article, you do the following tasks:
+
+- Register a web application in the Microsoft Entra admin center.
+
+- Create a sign-in and sign-out user flow in the Microsoft Entra admin center.
+
+- Associate your web application with the user flow.
+
+- Update a sample Node.js web application using your own Azure Active Directory (Azure AD) for customers tenant details.
+
+- Run and test the sample web application.
+
+## Prerequisites
+
+- [Node.js](https://nodejs.org).
+
+- [Visual Studio Code](https://code.visualstudio.com/download) or another code editor.
+
+- An Azure AD for customers tenant. If you don't already have one, <a href="https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl" target="_blank">sign up for a free trial</a>.
+
+<!--Awaiting this link http://developer.microsoft.com/identity/customers to go live on Developer hub-->
++
+## Register the web app
++
+## Add app client secret
++
+## Grant API permissions
+
+Since this app signs in users, add delegated permissions:
++
+## Create a user flow
++
+## Associate the web application with the user flow
++
+## Clone or download sample web application
+
+To get the web app sample code, you can do either of the following tasks:
+
+- [Download the .zip file](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/refs/heads/main.zip). Extract the sample app file to a folder where the total length of the path is 260 or fewer characters.
+- Clone the sample web application from GitHub by running the following command:
+
+    ```console
+    git clone https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial.git
+    ```
+
+## Install project dependencies
+
+1. Open a console window, and change to the directory that contains the Node.js sample app:
+
+ ```console
+ cd 1-Authentication\5-sign-in-express\App
+ ```
+1. Run the following commands to install app dependencies:
+
+ ```console
+ npm install && npm update
+ ```
+
+## Configure the sample web app
+
+1. In your code editor, open the *App\authConfig.js* file.
+
+1. Find the following placeholders:
+
+ 1. `Enter_the_Application_Id_Here` and replace it with the Application (client) ID of the app you registered earlier.
+
+ 1. `Enter_the_Tenant_Subdomain_Here` and replace it with the Directory (tenant) subdomain. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, learn how to [read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
+
+ 1. `Enter_the_Client_Secret_Here` and replace it with the app secret value you copied earlier.
+
+## Run and test sample web app
+
+You can now test the sample Node.js web app. You need to start the Node.js server and access it through your browser at `http://localhost:3000`.
+
+1. In your terminal, run the following command:
+
+ ```console
+ npm start
+ ```
+
+1. Open your browser, then go to `http://localhost:3000`. You should see a page similar to the following screenshot:
+
+    :::image type="content" source="media/how-to-web-app-node-sample-sign-in/web-app-node-sign-in.png" alt-text="Screenshot of signing in to a Node.js web app.":::
+
+1. After the page finishes loading, select the **Sign in** link. You're prompted to sign in.
+
+1. On the sign-in page, type your **Email address**, select **Next**, type your **Password**, then select **Sign in**. If you don't have an account, select the **No account? Create one** link, which starts the sign-up flow.
+
+1. If you choose the sign-up option, you complete the whole sign-up flow after filling in your email, one-time passcode, new password, and more account details. You then see a page similar to the following screenshot. You see a similar page if you choose the sign-in option.
+
+    :::image type="content" source="media/how-to-web-app-node-sample-sign-in/web-app-node-view-claims.png" alt-text="Screenshot of viewing ID token claims.":::
+
+1. Select **Sign out** to sign the user out of the web app or select **View ID token claims** to view ID token claims returned by Microsoft Entra.
+
+### How it works
+
+When users select the **Sign in** link, the app initiates an authentication request and redirects them to Azure AD for customers. Once the user successfully signs in or creates an account on the sign-in or sign-up page that appears, Azure AD for customers returns an ID token to the app. The app validates the ID token, reads the claims, and returns a secure page to the user.
+
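+Under the hood, the sample drives this round trip with MSAL Node's authorization-code flow. The following sketch shows the two calls involved; it isn't the sample's exact code, and the redirect route and placeholder values are assumptions:
+
+```typescript
+import { ConfidentialClientApplication } from "@azure/msal-node";
+
+// Placeholder values; the sample reads the real ones from App/authConfig.js.
+const cca = new ConfidentialClientApplication({
+  auth: {
+    clientId: "Enter_the_Application_Id_Here",
+    authority: "https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/", // assumed format
+    clientSecret: "Enter_the_Client_Secret_Here",
+  },
+});
+
+const redirectUri = "http://localhost:3000/auth/redirect"; // assumed route
+
+// 1) Build the URL the app redirects the user to for sign-in.
+async function buildSignInUrl(): Promise<string> {
+  return cca.getAuthCodeUrl({ scopes: ["openid", "profile"], redirectUri });
+}
+
+// 2) After sign-in, exchange the returned authorization code for tokens.
+async function completeSignIn(code: string) {
+  const result = await cca.acquireTokenByCode({
+    code,
+    scopes: ["openid", "profile"],
+    redirectUri,
+  });
+  // These claims back the "View ID token claims" page in the sample.
+  return result.idTokenClaims;
+}
+```
+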
+When users select the **Sign out** link, the app clears its session, then redirects the user to the Azure AD for customers sign-out endpoint to notify it that the user has signed out.
+
+If you want to build an app similar to the sample you've run, complete the steps in the [Sign in users in your own Node.js web application](how-to-web-app-node-sign-in-overview.md) article.
+
+## Next steps
+
+You may want to:
+
+- [Enable password reset](how-to-enable-password-reset-customers.md)
+
+- [Customize the default branding](how-to-customize-branding-customers.md)
+
+- [Configure sign-in with Google](how-to-google-federation-customers.md)
+
+- [Sign in users in your own Node.js web application](how-to-web-app-node-sign-in-overview.md)
active-directory What Is Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/what-is-deprecated.md
Use the following table to learn about changes including deprecations, retiremen
|||:|
|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Sometime after GA|
|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Jun 30, 2023|
-|[Azure AD Graph API](https://aka.ms/aadgraphupdate)|Retirement|Jun 30, 2023|
+|[Azure AD Graph API](https://aka.ms/aadgraphupdate)|Start of phased retirement|Jul 2023|
|[My Apps improvements](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jun 30, 2023|
|[Terms of Use experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jul 2023|
|[Azure AD PowerShell and MSOnline PowerShell](https://aka.ms/aadgraphupdate)|Deprecation|Mar 30, 2024|
active-directory Entitlement Management Verified Id Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-verified-id-settings.md
Title: Configure verified ID settings for an access package in entitlement management (Preview)
+ Title: Configure verified ID settings for an access package in entitlement management
description: Learn how to configure verified ID settings for an access package in entitlement management. documentationCenter: ''
-# Configure verified ID settings for an access package in entitlement management (Preview)
+# Configure verified ID settings for an access package in entitlement management
When setting up an access package policy, admins can specify whether it's for users in the directory, connected organizations, or any external user. Entitlement Management determines if the person requesting the access package is within the scope of the policy.
This article describes how to configure the verified ID requirement settings for
Before you begin, you must set up your tenant to use the [Microsoft Entra Verified ID service](../verifiable-credentials/decentralized-identifier-overview.md). You can find detailed instructions on how to do that here: [Configure your tenant for Microsoft Entra Verified ID](../verifiable-credentials/verifiable-credentials-configure-tenant.md).
-## Create an access package with verified ID requirements (Preview)
+## Create an access package with verified ID requirements
To add a verified ID requirement to an access package, you must start from the access package's requests tab. Follow these steps to add a verified ID requirement to a new access package.
To add a verified ID requirement to an access package, you must start from the a
:::image type="content" source="media/entitlement-management-verified-id-settings/verified-ids-list.png" alt-text="Screenshot of a list of verified IDs.":::
-## Request an access package with verified ID requirements (Preview)
+## Request an access package with verified ID requirements
Once an access package is configured with a verified ID requirement, end-users who are within the scope of the policy are able to request access using the My Access portal. Similarly, approvers are able to see the claims of the VCs presented by requestors when reviewing requests for approval.
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/what-is-cloud-sync.md
# What is Azure AD Connect cloud sync?+
+> [!VIDEO https://www.youtube.com/embed/9T6lKEloq0Q]
+ Azure AD Connect cloud sync is a new offering from Microsoft designed to meet and accomplish your hybrid identity goals for synchronization of users, groups, and contacts to Azure AD. It accomplishes this by using the Azure AD cloud provisioning agent instead of the Azure AD Connect application. However, it can be used alongside Azure AD Connect sync and it provides the following benefits: - Support for synchronizing to an Azure AD tenant from a multi-forest disconnected Active Directory forest environment: The common scenarios include merger & acquisition (where the acquired company's AD forests are isolated from the parent company's AD forests), and companies that have historically had multiple AD forests.
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/whatis-azure-ad-connect-v2.md
Azure AD Connect was released several years ago. Since then, several of the components that Azure AD Connect uses have been scheduled for deprecation and updated to newer versions. Attempting to update all of these components individually would take time and planning. -
-To address this, we've bundled as many of these newer components into a new, single release, so you only have to update once. This release is Azure AD Connect V2. This release is a new version of the same software used to accomplish your hybrid identity goals, built using the latest foundational components.
+To address this issue, we've bundled as many of these newer components into a new, single release, so you only have to update once. This release is Azure AD Connect V2. This release is a new version of the same software used to accomplish your hybrid identity goals, built using the latest foundational components.
>[!NOTE] >Azure AD Connect V1 has been retired as of August 31, 2022, and is no longer supported. Azure AD Connect V1 installations may **stop working unexpectedly**. If you are still using Azure AD Connect V1, you need to upgrade to Azure AD Connect V2 immediately.
+## Consider moving to Azure AD Connect cloud sync
+Azure AD Connect cloud sync is the future of synchronization for Microsoft. It replaces Azure AD Connect.
+
+> [!VIDEO https://www.youtube.com/embed/9T6lKEloq0Q]
+
+Before moving to Azure AD Connect V2.0, you should consider moving to cloud sync. You can see if cloud sync is right for you by accessing the [Check sync tool](https://aka.ms/M365Wizard) from the portal or via the link provided.
+
+For more information, see [What is cloud sync?](../cloud-sync/what-is-cloud-sync.md)
+++ ## What are the major changes? ### SQL Server 2019 LocalDB
The previous versions of Azure AD Connect shipped with the ADAL authentication l
### Visual C++ Redist 14
-SQL Server 2019 requires the Visual C++ Redist 14 runtime, so we're updating the C++ runtime library to use this version. This Redistributable will be installed with the Azure AD Connect V2 package, so you don't have to take any action for the C++ runtime update.
+SQL Server 2019 requires the Visual C++ Redist 14 runtime, so we're updating the C++ runtime library to use this version. This Redistributable is installed with the Azure AD Connect V2 package, so you don't have to take any action for the C++ runtime update.
### TLS 1.2
-TLS1.0 and TLS 1.1 are protocols that are deemed unsafe and are being deprecated by Microsoft. This release of Azure AD Connect will only support TLS 1.2.
+TLS 1.0 and TLS 1.1 are protocols that are deemed unsafe. Microsoft is deprecating them. This release of Azure AD Connect only supports TLS 1.2.
All versions of Windows Server that are supported for Azure AD Connect V2 already default to TLS 1.2. If your server doesn't support TLS 1.2, you need to enable it before you can deploy Azure AD Connect V2. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md). ### All binaries signed with SHA2
More details about PowerShell prerequisites can be found [here](/powershell/scri
## What else do I need to know? **Why is this upgrade important for me?** </br>
-Next year several of the components in your current Azure AD Connect server installations will go out of support. If you are using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. So we recommend all customers to upgrade to this newer version as soon as they can.
Next year, several of the components in your current Azure AD Connect server installations will no longer be supported. If you are using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. So we recommend that all customers upgrade to this newer version as soon as they can.
This upgrade is especially important since we've had to update our prerequisites for Azure AD Connect, and you may need additional time to plan and update your servers to the newer versions of these prerequisites.
active-directory Whatis Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/whatis-azure-ad-connect.md
Azure AD Connect is an on-premises Microsoft application that's designed to meet and accomplish your hybrid identity goals. If you're evaluating how to best meet your goals, you should also consider the cloud-managed solution [Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md). > [!div class="nextstepaction"] > [Install Microsoft Azure Active Directory Connect](https://www.microsoft.com/download/details.aspx?id=47594) >
-Azure AD Connect provides the following features:
++
+ >[!IMPORTANT]
+ >Azure AD Connect V1 has been retired as of August 31, 2022, and is no longer supported. Azure AD Connect V1 installations may **stop working unexpectedly**. If you are still using Azure AD Connect V1, you need to upgrade to Azure AD Connect V2 immediately.
+
+## Consider moving to Azure AD Connect cloud sync
+Azure AD Connect cloud sync is the future of synchronization for Microsoft. It will replace Azure AD Connect.
+
+> [!VIDEO https://www.youtube.com/embed/9T6lKEloq0Q]
+
+Before moving to Azure AD Connect V2.0, you should consider moving to cloud sync. You can see if cloud sync is right for you by accessing the [Check sync tool](https://aka.ms/M365Wizard) from the portal or via the link provided.
+
+For more information, see [What is cloud sync?](../cloud-sync/what-is-cloud-sync.md)
+
+## Azure AD Connect features
- [Password hash synchronization](whatis-phs.md) - A sign-in method that synchronizes a hash of a user's on-premises AD password with Azure AD.
- [Pass-through authentication](how-to-connect-pta.md) - A sign-in method that allows users to use the same password on-premises and in the cloud, but doesn't require the additional infrastructure of a federated environment.
active-directory Funnel Leasing Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/funnel-leasing-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A user account in Funnel Leasing with Admin permissions.
+* A live community in Funnel or at least a confirmation that all the required configuration is done on the Funnel side in preparation for a go-live date.
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Funnel Leasing](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Funnel Leasing to support provisioning with Azure AD
-Contact Funnel Leasing support to configure Funnel Leasing to support provisioning with Azure AD.
+Contact your Funnel Account Manager and let them know you want to enable Azure AD user provisioning; they'll provide you with an authentication Bearer token.
## Step 3. Add Funnel Leasing from the Azure AD application gallery
The Azure AD provisioning service allows you to scope who will be provisioned ba
## Step 5. Configure automatic user provisioning to Funnel Leasing
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD.
+This section guides you through connecting your Azure AD to Funnel's user account provisioning API, and configuring the provisioning service to create, update, and disable assigned user accounts in Funnel based on user assignment in Azure AD.
### To configure automatic user provisioning for Funnel Leasing in Azure AD:
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
-1. In the applications list, select **Funnel Leasing**.
+1. In the applications list, select **Funnel**.
![Screenshot of the Funnel Leasing link in the Applications list.](common/all-applications.png)
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-1. Under the **Admin Credentials** section, input your Funnel Leasing Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Funnel Leasing. If the connection fails, ensure your Funnel Leasing account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input `https://nestiolistings.com/scim/v2` as the **Tenant URL** and the **Secret Token** retrieved earlier from your Funnel Account Manager (the authentication Bearer token). Click **Test Connection** to ensure Azure AD can connect to Funnel. If the connection fails, confirm with your Funnel Account Manager that your authentication token is valid, and try again.
![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
Once you've configured provisioning, use the following resources to monitor your
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion * If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Role and group mappings
+To associate an Azure AD user with a Funnel role, or with a Funnel employee group, Funnel uses custom mapping functionality.
+
+- Which Azure fields are used?
+
+ For role mappings, Funnel looks at the SCIM `title` attribute by default. This SCIM attribute is mapped to the `jobTitle` Azure user attribute by default.
+
+ For group mappings, Funnel looks at the SCIM `userType` attribute by default. This SCIM attribute is mapped to the `department` Azure user attribute by default.
+
+ If you want to change which fields are used, you can edit the **Attribute Mappings** section and map your desired fields to `title` and `userType`.
+
+- Which values are used?
+
+ For initial setup, determine every value that you want to use for role and group mappings. Provide these values to your Funnel Account Manager to set up the configuration in Funnel.
+
+   For example, if you want to set the `jobTitle` field with an `agent` value, you will need to tell your Funnel Account Manager which Funnel role this value should be mapped to.
+
+ If you need to update or add new values in the future, you will need to notify your Funnel Account Manager.
+
+- How do I associate a user to several roles and groups?
+
+ It is not possible to associate a user to several Funnel roles, but it is possible to associate a user to several Funnel employee groups.
+
+   To associate a user with several Funnel employee groups, specify multiple values in the `department` user attribute (or whichever attribute you mapped to `userType`).
+   Each value must be separated by a delimiter. By default, the `-` character is used as the delimiter. To use another delimiter, notify your Funnel Account Manager. The sketch below illustrates the scheme.
+ ## More resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
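+As a hypothetical illustration (Funnel performs the actual parsing on its side, and the group names below are invented), a delimited `department` value splits into employee groups like this:
+
+```typescript
+// Hypothetical illustration only: the actual parsing happens on Funnel's side.
+const department = "Leasing-Marketing-Support"; // value synced via the userType mapping
+const delimiter = "-"; // default delimiter; confirm any custom one with your Funnel Account Manager
+
+const employeeGroups = department.split(delimiter);
+console.log(employeeGroups); // ["Leasing", "Marketing", "Support"]
+```
+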
active-directory Moqups Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/moqups-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). * An administrator account with Moqups.
+* SCIM-based user provisioning is available to Moqups customers on the Moqups [Unlimited Plan](https://moqups.com/pricing).
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Moqups](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Moqups to support provisioning with Azure AD
-Contact Moqups support to configure Moqups to support provisioning with Azure AD.
+To set up **SCIM** for **Azure**, you will first need to generate an **API Token** in Moqups, and then configure **Automatic Provisioning** in Azure itself.
+
+Generate an API Token:
+
+1. Go to the **Integrations** tab on your Moqups **Dashboard's Account** page.
+1. In the **SCIM Provisioning** section of your **Integrations** tab, click the **Generate token** button.
+
+ ![Screenshot of generate token.](media/moqups-provisioning-tutorial/generate-token.png)
+
+1. Copy the **API Token** to your clipboard. You'll need this to complete the process in **Azure**.
+
+ ![Screenshot of api token.](media/moqups-provisioning-tutorial/api-token.png)
## Step 3. Add Moqups from the Azure AD application gallery
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-1. Under the **Admin Credentials** section, input your Moqups Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Moqups. If the connection fails, ensure your Moqups account has Admin permissions and try again.
+1. In the **Admin Credentials** section, input your Moqups Tenant URL and Secret Token.
+ 1. Use `https://api.moqups.com/scim/v2` as the **Tenant URL**.
+ 1. Use the **API Token** generated in Step 2.1 as the **Secret Token**.
+    1. Click **Test Connection** so that Azure AD can confirm that the supplied credentials can be used for provisioning. If the connection fails, double-check the **Tenant URL** and make sure the **API Token** is correct.
![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
active-directory Zoom Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zoom-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). 1. Determine what data to [map between Azure AD and Zoom](../app-provisioning/customize-application-attributes.md).
-## Step 2. Configure Zoom to support provisioning with Azure AD
-
-1. Sign in to your [Zoom Admin Console](https://zoom.us/signin). Navigate to **ADMIN > Advanced > App Marketplace** in the left navigation pane.
-
- ![Screenshot of Zoom Integrations.](media/zoom-provisioning-tutorial/app-navigations.png)
-
-1. Navigate to **Manage** in the top-right corner of the page.
-
- ![Screenshot of the Zoom App Marketplace with the Manage option called out.](media/zoom-provisioning-tutorial/zoom-manage.png)
-
-1. Navigate to your created Azure AD app.
-
- ![Screenshot of the Created Apps section with the Azure A D app called out.](media/zoom-provisioning-tutorial/zoom03.png)
-
- > [!NOTE]
- > If you don't have an Azure AD app already created, then have a [JWT type Azure AD app](https://developers.zoom.us/docs/platform/build/jwt-app/) created.
-
-1. Select **App Credentials** in the left navigation pane.
-
- ![Screenshot of the left navigation pane with the App Credentials option highlighted.](media/zoom-provisioning-tutorial/zoom04.png)
-
-1. Copy and save the **JWT Token**. This value will be entered in the **Secret Token** field in the Provisioning tab of your Zoom application in the Azure portal. If you need a new non-expiring token, you will need to reconfigure the expiration time which will auto generate a new token.
-
- ![Screenshot of the App Credentials page.](media/zoom-provisioning-tutorial/zoom05.png)
-
-## Step 3. Add Zoom from the Azure AD application gallery
+## Step 2. Add Zoom from the Azure AD application gallery
Add Zoom from the Azure AD application gallery to start managing provisioning to Zoom. If you have previously setup Zoom for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-## Step 4. Define who will be in scope for provisioning
+## Step 3. Define who will be in scope for provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
The Azure AD provisioning service allows you to scope who will be provisioned ba
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-## Step 5. Configure automatic user provisioning to Zoom
+## Step 4. Configure automatic user provisioning to Zoom
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Zoom based on user and/or group assignments in Azure AD.
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-1. Under the **Admin Credentials** section, select desired **Authentication Method**.
-
- * If the Authentication Method is **OAuth2 Authorization Code Grant**, enter `https://api.zoom.us/scim` in **Tenant URL**, click on **Authorize**, make sure that you enter your Zoom account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Zoom. If the connection fails, ensure your Zoom account has Admin permissions and try again.
-
- ![Screenshot of the Zoom provisioning Token.](./media/zoom-provisioning-tutorial/provisioning-oauth.png)
-
- * If the Authentication Method is **Bearer Authentication**, enter `https://api.zoom.us/scim` in **Tenant URL**. Input the **JWT Token** value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Zoom. If the connection fails, ensure your Zoom account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, select **OAuth2 Authorization Code Grant**. Enter `https://api.zoom.us/scim` in **Tenant URL**, click **Authorize**, and make sure that you enter your Zoom account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Zoom. If the connection fails, ensure your Zoom account has Admin permissions and try again.
- ![Screenshot of the Zoom provisioning OAuth.](./media/zoom-provisioning-tutorial/provisioning-bearer-token.png)
+    ![Screenshot of the Zoom provisioning token.](./media/zoom-provisioning-tutorial/provisioning-oauth.png)
- > [!NOTE]
- > You will have two options for your Authentication Method: **Bearer Authentication** and **OAuth2 Authorization Code Grant**. Make sure that you select OAuth2 Authorization Code Grant.
+ > [!NOTE]
+    > You will have two options for your Authentication Method: **Bearer Authentication** and **OAuth2 Authorization Code Grant**. Make sure that you select **OAuth2 Authorization Code Grant**. Zoom no longer supports the **Bearer Authentication** method.
1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
This section guides you through the steps to configure the Azure AD provisioning
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
-## Step 6. Monitor your deployment
+## Step 5. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment: 1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
Once you've configured provisioning, use the following resources to monitor your
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md).
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Zoom Support article](https://support.zoom.us/hc/en-us/articles/115005887566-Configuring-Zoom-with-Azure).
## Next steps
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
resourceGroup="myResourceGroup"
location="westcentralus" az aks update --name $clusterName \group $resourceGroup \
+--resource-group $resourceGroup \
--network-plugin-mode overlay \ --pod-cidr 192.168.0.0/16 ```
aks Azure Hpc Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-hpc-cache.md
Title: Integrate Azure HPC Cache with Azure Kubernetes Service
-description: Learn how to integrate HPC Cache with Azure Kubernetes Service
+ Title: Integrate Azure HPC Cache with Azure Kubernetes Service (AKS)
+description: Learn how to integrate HPC Cache with Azure Kubernetes Service (AKS).
Previously updated : 09/08/2021 Last updated : 06/22/2023 #Customer intent: As a cluster operator or developer, I want to learn how to integrate HPC Cache with AKS
-# Integrate Azure HPC Cache with Azure Kubernetes Service
+# Integrate Azure HPC Cache with Azure Kubernetes Service (AKS)
[Azure HPC Cache][hpc-cache] speeds access to your data for high-performance computing (HPC) tasks. By caching files in Azure, Azure HPC Cache brings the scalability of cloud computing to your existing workflow. This article shows you how to integrate Azure HPC Cache with Azure Kubernetes Service (AKS). ## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+* This article assumes you have an existing AKS cluster. If you need an AKS cluster, you can create one using the [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal].
+
+ > [!IMPORTANT]
+ > Your AKS cluster must be [in a region that supports Azure HPC Cache][hpc-cache-regions].
+
+* You need Azure CLI version 2.7 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. For more information on using HPC Cache with Azure CLI, see the [HPC Cache CLI prerequisites][hpc-cache-cli-prerequisites].
+* Install the `hpc-cache` Azure CLI extension using the [`az extension add --upgrade -n hpc-cache`][az-extension-add] command.
+* Review the [HPC Cache prerequisites][hpc-cache-prereqs]. You need to satisfy these prerequisites before you can run an HPC Cache. Important prerequisites include the following:
+ * The cache requires a *dedicated* subnet with at least 64 IP addresses available.
+ * The subnet must not host other VMs or containers.
+ * The subnet must be accessible from the AKS nodes.
+
+## Create the Azure HPC Cache
+
+1. Get the node resource group using the [`az aks show`][az-aks-show] command with the `--query nodeResourceGroup` query parameter.
+
+ ```azurecli-interactive
+ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ MC_myResourceGroup_myAKSCluster_eastus
+ ```
+
+2. Create the dedicated HPC Cache subnet using the [`az network vnet subnet create`][az-network-vnet-subnet-create] command.
+
+ ```azurecli
+ RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+ VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
+ VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
+ SUBNET_NAME=MyHpcCacheSubnet
+
+ az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+ --name $SUBNET_NAME \
+ --address-prefixes 10.0.0.0/26
+ ```
+
+3. Register the *Microsoft.StorageCache* resource provider using the [`az provider register`][az-provider-register] command.
+
+ ```azurecli
+ az provider register --namespace Microsoft.StorageCache --wait
+ ```
+
+ > [!NOTE]
+ > The resource provider registration can take some time to complete.
+
+4. Create an HPC Cache in the same node resource group and region using the [`az hpc-cache create`][az-hpc-cache-create] command.
+
+ > [!NOTE]
+ > The HPC Cache takes approximately 20 minutes to be created.
+
+ ```azurecli
+ RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+ VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
+ VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
+ SUBNET_NAME=MyHpcCacheSubnet
+ SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name $SUBNET_NAME --query "id" -o tsv)
+
+ az hpc-cache create \
+ --resource-group $RESOURCE_GROUP \
+ --cache-size-gb "3072" \
+ --location eastus \
+ --subnet $SUBNET_ID \
+ --sku-name "Standard_2G" \
+ --name MyHpcCache
+ ```
+
+## Create and configure Azure storage
> [!IMPORTANT]
-> Your AKS cluster must be [in a region that supports Azure HPC Cache][hpc-cache-regions].
+> You need to select a unique storage account name. Replace `uniquestorageaccount` with something unique for you. Storage account names must be *between 3 and 24 characters in length* and *can contain only numbers and lowercase letters*.
-You also need to install and configure Azure CLI version 2.7 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. See [hpc-cache-cli-prerequisites] for more information about using Azure CLI with HPC Cache.
+1. Create a storage account using the [`az storage account create`][az-storage-account-create] command.
-You will also need to install the hpc-cache Azure CLI extension. Please do the following:
+ ```azurecli
+ RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+ STORAGE_ACCOUNT_NAME=uniquestorageaccount
-```azurecli
-az extension add --upgrade -n hpc-cache
-```
+ az storage account create \
+ -n $STORAGE_ACCOUNT_NAME \
+ -g $RESOURCE_GROUP \
+ -l eastus \
+ --sku Standard_LRS
+ ```
-## Set up Azure HPC Cache
+2. Assign yourself the **Storage Blob Data Contributor** role on the storage account using the [`az role assignment create`][az-role-assignment-create] command.
-This section explains the steps to create and configure your HPC Cache.
+ ```azurecli
+ STORAGE_ACCOUNT_NAME=uniquestorageaccount
+ STORAGE_ACCOUNT_ID=$(az storage account show --name $STORAGE_ACCOUNT_NAME --query "id" -o tsv)
+ AD_USER=$(az ad signed-in-user show --query objectId -o tsv)
+ CONTAINER_NAME=mystoragecontainer
-### 1. Find the AKS node resource group
+ az role assignment create --role "Storage Blob Data Contributor" --assignee $AD_USER --scope $STORAGE_ACCOUNT_ID
+ ```
-First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. You will create your HPC Cache in the same resource group.
+3. Create the Blob container within the storage account using the [`az storage container create`][az-storage-container-create] command.
-The following example gets the node resource group name for the AKS cluster named *myAKSCluster* in the resource group name *myResourceGroup*:
+ ```azurecli
+ az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --auth-mode login
+ ```
-```azurecli-interactive
-az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
-```
+4. Provide permissions to the Azure HPC Cache service account to access your storage account and Blob container using the following [`az role assignment create`][az-role-assignment-create] commands.
-```output
-MC_myResourceGroup_myAKSCluster_eastus
-```
+ ```azurecli
+ HPC_CACHE_USER="StorageCache Resource Provider"
+ HPC_CACHE_ID=$(az ad sp list --display-name "${HPC_CACHE_USER}" --query "[].objectId" -o tsv)
-### 2. Create the cache subnet
+ az role assignment create --role "Storage Account Contributor" --assignee $HPC_CACHE_ID --scope $STORAGE_ACCOUNT_ID
-There are a number of [prerequisites][hpc-cache-prereqs] that must be satisfied before running an HPC Cache. Most importantly, the cache requires a *dedicated* subnet with at least 64 IP addresses available. This subnet must not host other VMs or containers. This subnet must be accessible from the AKS nodes.
+ az role assignment create --role "Storage Blob Data Contributor" --assignee $HPC_CACHE_ID --scope $STORAGE_ACCOUNT_ID
+ ```
-Create the dedicated HPC Cache subnet:
+5. Add the blob container to your HPC Cache as a storage target using the [`az hpc-cache blob-storage-target add`][az-hpc-cache-blob-storage-target-add] command.
-```azurecli
-RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
-VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
-VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
-SUBNET_NAME=MyHpcCacheSubnet
-az network vnet subnet create \
- --resource-group $RESOURCE_GROUP \
- --vnet-name $VNET_NAME \
- --name $SUBNET_NAME \
- --address-prefixes 10.0.0.0/26
-```
+ ```azurecli
+ CONTAINER_NAME=mystoragecontainer
-Register the *Microsoft.StorageCache* resource provider:
+ az hpc-cache blob-storage-target add \
+ --resource-group $RESOURCE_GROUP \
+ --cache-name MyHpcCache \
+ --name MyStorageTarget \
+ --storage-account $STORAGE_ACCOUNT_ID \
+ --container-name $CONTAINER_NAME \
+ --virtual-namespace-path "/myfilepath"
+ ```
-```azurecli
-az provider register --namespace Microsoft.StorageCache --wait
-```
+## Set up client load balancing
-> [!NOTE]
-> The resource provider registration can take some time to complete.
+1. Create an Azure Private DNS Zone for the client-facing IP addresses using the [`az network private-dns zone create`][az-network-private-dns-zone-create] command.
-### 3. Create the HPC Cache
+ ```azurecli
+ PRIVATE_DNS_ZONE="myhpccache.local"
-Create an HPC Cache in the node resource group from step 1 and in the same region as your AKS cluster. Use [az hpc-cache create][az-hpc-cache-create].
+ az network private-dns zone create \
+ -g $RESOURCE_GROUP \
+ -n $PRIVATE_DNS_ZONE
+ ```
-> [!NOTE]
-> The HPC Cache takes approximately 20 minutes to be created.
+2. Create a DNS link between the Azure Private DNS Zone and the VNet using the [`az network private-dns link vnet create`][az-network-private-dns-link-vnet-create] command.
-```azurecli
-RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
-VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
-VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
-SUBNET_NAME=MyHpcCacheSubnet
-SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name $SUBNET_NAME --query "id" -o tsv)
-az hpc-cache create \
- --resource-group $RESOURCE_GROUP \
- --cache-size-gb "3072" \
- --location eastus \
- --subnet $SUBNET_ID \
- --sku-name "Standard_2G" \
- --name MyHpcCache
-```
+ ```azurecli
+ az network private-dns link vnet create \
+ -g $RESOURCE_GROUP \
+ -n MyDNSLink \
+ -z $PRIVATE_DNS_ZONE \
+ -v $VNET_NAME \
+ -e true
+ ```
-### 4. Create a storage account and new container
+3. Create the round-robin DNS name for the client-facing IP addresses using the following [`az network private-dns record-set a add-record`][az-network-private-dns-record-set-a-create] commands.
-Create the Azure Storage account for the Blob storage container. The HPC Cache will cache content that is stored in this Blob storage container.
+ ```azurecli
+ DNS_NAME="server"
+ HPC_MOUNTS0=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[0]" -o tsv | tr --delete '\r')
+ HPC_MOUNTS1=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[1]" -o tsv | tr --delete '\r')
+ HPC_MOUNTS2=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[2]" -o tsv | tr --delete '\r')
-> [!IMPORTANT]
-> You need to select a unique storage account name. Replace 'uniquestorageaccount' with something that will be unique for you.
-
-Check that the storage account name that you have selected is available.
-
-```azurecli
-STORAGE_ACCOUNT_NAME=uniquestorageaccount
-az storage account check-name --name $STORAGE_ACCOUNT_NAME
-```
-
-```azurecli
-RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
-STORAGE_ACCOUNT_NAME=uniquestorageaccount
-az storage account create \
- -n $STORAGE_ACCOUNT_NAME \
- -g $RESOURCE_GROUP \
- -l eastus \
- --sku Standard_LRS
-```
-
-Create the Blob container within the storage account.
-
-```azurecli
-STORAGE_ACCOUNT_NAME=uniquestorageaccount
-STORAGE_ACCOUNT_ID=$(az storage account show --name $STORAGE_ACCOUNT_NAME --query "id" -o tsv)
-AD_USER=$(az ad signed-in-user show --query objectId -o tsv)
-CONTAINER_NAME=mystoragecontainer
-az role assignment create --role "Storage Blob Data Contributor" --assignee $AD_USER --scope $STORAGE_ACCOUNT_ID
-az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --auth-mode login
-```
-
-Provide permissions to the Azure HPC Cache service account to access your storage account and Blob container.
-
-```azurecli
-HPC_CACHE_USER="StorageCache Resource Provider"
-STORAGE_ACCOUNT_NAME=uniquestorageaccount
-STORAGE_ACCOUNT_ID=$(az storage account show --name $STORAGE_ACCOUNT_NAME --query "id" -o tsv)
-$HPC_CACHE_ID=$(az ad sp list --display-name "${HPC_CACHE_USER}" --query "[].objectId" -o tsv)
-az role assignment create --role "Storage Account Contributor" --assignee $HPC_CACHE_ID --scope $STORAGE_ACCOUNT_ID
-az role assignment create --role "Storage Blob Data Contributor" --assignee $HPC_CACHE_ID --scope $STORAGE_ACCOUNT_ID
-```
-
-### 5. Configure the storage target
-
-Add the blob container to your HPC Cache as a storage target.
-
-```azurecli
-RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
-STORAGE_ACCOUNT_NAME=uniquestorageaccount
-STORAGE_ACCOUNT_ID=$(az storage account show --name $STORAGE_ACCOUNT_NAME --query "id" -o tsv)
-CONTAINER_NAME=mystoragecontainer
-az hpc-cache blob-storage-target add \
- --resource-group $RESOURCE_GROUP \
- --cache-name MyHpcCache \
- --name MyStorageTarget \
- --storage-account $STORAGE_ACCOUNT_ID \
- --container-name $CONTAINER_NAME \
- --virtual-namespace-path "/myfilepath"
-```
-
-### 6. Set up client load balancing
-
-Create a Azure Private DNS Zone for the client-facing IP addresses.
-
-```azurecli
-RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
-VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
-VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
-PRIVATE_DNS_ZONE="myhpccache.local"
-az network private-dns zone create \
- -g $RESOURCE_GROUP \
- -n $PRIVATE_DNS_ZONE
-az network private-dns link vnet create \
- -g $RESOURCE_GROUP \
- -n MyDNSLink \
- -z $PRIVATE_DNS_ZONE \
- -v $VNET_NAME \
- -e true
-```
-
-Create the round-robin DNS name.
-
-```azurecli
-DNS_NAME="server"
-PRIVATE_DNS_ZONE="myhpccache.local"
-RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
-HPC_MOUNTS0=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[0]" -o tsv | tr --delete '\r')
-HPC_MOUNTS1=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[1]" -o tsv | tr --delete '\r')
-HPC_MOUNTS2=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[2]" -o tsv | tr --delete '\r')
-az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS0
-az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS1
-az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS2
-```
-
-## Create the AKS persistent volume
-
-Create a `pv-nfs.yaml` file to define a [persistent volume][persistent-volume].
-
-```yaml
-
-apiVersion: v1
-kind: PersistentVolume
-metadata:
- name: pv-nfs
-spec:
- capacity:
- storage: 10000Gi
- accessModes:
- - ReadWriteMany
- mountOptions:
- - vers=3
- nfs:
- server: server.myhpccache.local
- path: /
-```
-
-First, ensure that you have credentials for your Kubernetes cluster.
-
-```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
-```
-
-Update the *server* and *path* to the values of your NFS (Network File System) volume you created in the previous step. Create the persistent volume with the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f pv-nfs.yaml
-```
-
-Verify that the status of the persistent volume is **Available** using the [kubectl describe][kubectl-describe] command:
-
-```console
-kubectl describe pv pv-nfs
-```
+ az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS0
+
+ az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS1
+
+ az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS2
+ ```
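+
+ To confirm that all three A records resolve, you can query the name from a VM or pod inside the virtual network (an optional check; the name combines the `server` record and the `myhpccache.local` zone created above):
+
+ ```console
+ nslookup server.myhpccache.local
+ ```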
+
+## Create a persistent volume
+
+1. Create a `pv-nfs.yaml` file to define a [persistent volume][persistent-volume].
+
+ ```yaml
+
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: pv-nfs
+ spec:
+ capacity:
+ storage: 10000Gi
+ accessModes:
+ - ReadWriteMany
+ mountOptions:
+ - vers=3
+ nfs:
+ server: server.myhpccache.local
+ path: /
+ ```
+
+2. Get the credentials for your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+3. Update the *server* and *path* values with those of the NFS (Network File System) volume you created in the previous step.
+4. Create the persistent volume using the [`kubectl apply`][kubectl-apply] command.
+
+ ```console
+ kubectl apply -f pv-nfs.yaml
+ ```
+
+5. Verify the status of the persistent volume is **Available** using the [`kubectl describe`][kubectl-describe] command.
+
+ ```console
+ kubectl describe pv pv-nfs
+ ```
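+
+ You can also check just the phase field directly (an optional shortcut):
+
+ ```console
+ kubectl get pv pv-nfs -o jsonpath='{.status.phase}'
+ ```
+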
## Create the persistent volume claim
-Create a `pvc-nfs.yaml` defining a [persistent volume claim][persistent-volume-claim]. For example:
+1. Create a `pvc-nfs.yaml` file to define a [persistent volume claim][persistent-volume-claim].
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: pvc-nfs
-spec:
- accessModes:
- - ReadWriteMany
- storageClassName: ""
- resources:
- requests:
- storage: 100Gi
-```
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: pvc-nfs
+ spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: ""
+ resources:
+ requests:
+ storage: 100Gi
+ ```
-Use the [kubectl apply][kubectl-apply] command to create the persistent volume claim:
+2. Create the persistent volume claim using the [`kubectl apply`][kubectl-apply] command.
-```console
-kubectl apply -f pvc-nfs.yaml
-```
+ ```console
+ kubectl apply -f pvc-nfs.yaml
+ ```
-Verify that the status of the persistent volume claim is **Bound** using the [kubectl describe][kubectl-describe] command:
+3. Verify the status of the persistent volume claim is **Bound** using the [`kubectl describe`][kubectl-describe] command.
-```console
-kubectl describe pvc pvc-nfs
-```
+ ```console
+ kubectl describe pvc pvc-nfs
+ ```
## Mount the HPC Cache with a pod
-Create a `nginx-nfs.yaml` file to define a pod that uses the persistent volume claim. For example:
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: nginx-nfs
-spec:
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- name: nginx-nfs
- command:
- - "/bin/sh"
- - "-c"
- - while true; do echo $(date) >> /mnt/azure/myfilepath/outfile; sleep 1; done
- volumeMounts:
- - name: disk01
- mountPath: /mnt/azure
- volumes:
- - name: disk01
- persistentVolumeClaim:
- claimName: pvc-nfs
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f nginx-nfs.yaml
-```
-
-Verify that the pod is running by using the [kubectl describe][kubectl-describe] command:
-
-```console
-kubectl describe pod nginx-nfs
-```
-
-Verify your volume has been mounted in the pod by using [kubectl exec][kubectl-exec] to connect to the pod then `df -h` to check if the volume is mounted.
-
-```console
-kubectl exec -it nginx-nfs -- sh
-```
-
-```output
-/ # df -h
-Filesystem Size Used Avail Use% Mounted on
-...
-server.myhpccache.local:/myfilepath 8.0E 0 8.0E 0% /mnt/azure/myfilepath
-...
-```
+1. Create a `nginx-nfs.yaml` file to define a pod that uses the persistent volume claim.
+
+ ```yaml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: nginx-nfs
+ spec:
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ name: nginx-nfs
+ command:
+ - "/bin/sh"
+ - "-c"
+ - while true; do echo $(date) >> /mnt/azure/myfilepath/outfile; sleep 1; done
+ volumeMounts:
+ - name: disk01
+ mountPath: /mnt/azure
+ volumes:
+ - name: disk01
+ persistentVolumeClaim:
+ claimName: pvc-nfs
+ ```
+
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
+
+ ```console
+ kubectl apply -f nginx-nfs.yaml
+ ```
+
+3. Verify the pod is running using the [`kubectl describe`][kubectl-describe] command.
+
+ ```console
+ kubectl describe pod nginx-nfs
+ ```
+
+4. Connect to the pod using the [`kubectl exec`][kubectl-exec] command, and then run `df -h` to verify the volume is mounted.
+
+ ```console
+ kubectl exec -it nginx-nfs -- sh
+ ```
+
+ ```output
+ / # df -h
+ Filesystem Size Used Avail Use% Mounted on
+ ...
+ server.myhpccache.local:/myfilepath 8.0E 0 8.0E 0% /mnt/azure/myfilepath
+ ...
+ ```
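+
+5. When you're finished testing, exit the shell and optionally remove the example pod.
+
+ ```console
+ kubectl delete pod nginx-nfs
+ ```
+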
## Frequently asked questions (FAQ) ### Running applications as non-root
-If you need to run an application as a non-root user, you may need to disable root squashing to chown a directory to another user. The non-root user will need to own a directory to access the file system. For the user to own a directory, the root user must chown a directory to that user, but if the HPC Cache is squashing root, this operation will be denied because the root user (UID 0) is being mapped to the anonymous user. More information about root squashing and client access policies is found [here][hpc-cache-access-policies].
-
-### Sending feedback
-
-We'd love to hear from you! Please send any feedback or questions to <aks-hpccache-feed@microsoft.com>.
+If you need to run an application as a non-root user, you may need to disable root squashing to chown a directory to another user. The non-root user needs to own a directory to access the file system. For the user to own a directory, the root user must chown a directory to that user, but if the HPC Cache is squashing root, this operation is denied because the root user (UID 0) is being mapped to the anonymous user. For more information about root squashing and client access policies, see [HPC Cache access policies][hpc-cache-access-policies].
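+
+As a concrete illustration, the following hypothetical client-side commands show the failure mode when the access policy squashes root:
+
+```console
+# Mount the cache, then try to hand a directory to a non-root user (UID 1000).
+sudo mount -t nfs -o vers=3 server.myhpccache.local:/ /mnt/hpc
+sudo chown 1000:1000 /mnt/hpc/myfilepath
+# With root squash enabled, the chown is denied because UID 0 maps to the anonymous user.
+```
+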
## Next steps
-* For more information on Azure HPC Cache, see [HPC Cache Overview][hpc-cache].
-* For more information on using NFS with AKS, see [Manually create and use an NFS (Network File System) Linux Server volume with Azure Kubernetes Service (AKS)][aks-nfs].
+* For more information on Azure HPC Cache, see [HPC Cache overview][hpc-cache].
+* For more information on using NFS with AKS, see [Manually create and use a Network File System (NFS) Linux Server volume with AKS][aks-nfs].
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
We'd love to hear from you! Please send any feedback or questions to <aks-hpcca
[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec [persistent-volume]: concepts-storage.md#persistent-volumes [persistent-volume-claim]: concepts-storage.md#persistent-volume-claims
+[az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_create
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-storage-account-create]: /cli/azure/storage/account#az_storage_account_create
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-storage-container-create]: /cli/azure/storage/container#az_storage_container_create
+[az-hpc-cache-blob-storage-target-add]: /cli/azure/hpc-cache/blob-storage-target#az_hpc_cache_blob_storage_target_add
+[az-network-private-dns-zone-create]: /cli/azure/network/private-dns/zone#az_network_private_dns_zone_create
+[az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az_network_private_dns_link_vnet_create
+[az-network-private-dns-record-set-a-create]: /cli/azure/network/private-dns/record-set/a#az_network_private_dns_record_set_a_create
aks Enable Host Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-host-encryption.md
Title: Enable host-based encryption on Azure Kubernetes Service (AKS)
-description: Learn how to configure a host-based encryption in an Azure Kubernetes Service (AKS) cluster
+description: Learn how to configure a host-based encryption in an Azure Kubernetes Service (AKS) cluster.
Previously updated : 04/26/2021 Last updated : 06/22/2023 ms.devlang: azurecli # Host-based encryption on Azure Kubernetes Service (AKS)
-With host-based encryption, the data stored on the VM host of your AKS agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service. This means the temp disks are encrypted at rest with platform-managed keys. The cache of OS and data disks is encrypted at rest with either platform-managed keys or customer-managed keys depending on the encryption type set on those disks.
+With host-based encryption, the data stored on the VM host of your AKS agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service. This means the temp disks are encrypted at rest with platform-managed keys. The cache of OS and data disks is encrypted at rest with either platform-managed keys or customer-managed keys depending on the encryption type set on those disks.
-By default, when using AKS, OS and data disks use server-side encryption with platform-managed keys. The caches for these disks are also encrypted at rest with platform-managed keys. You can specify your own managed keys following [Bring your own keys (BYOK) with Azure disks in Azure Kubernetes Service](azure-disk-customer-managed-keys.md). The cache for these disks will then also be encrypted using the key that you specify in this step.
+By default, when using AKS, OS and data disks use server-side encryption with platform-managed keys. The caches for these disks are encrypted at rest with platform-managed keys. You can specify your own managed keys following [Bring your own keys (BYOK) with Azure disks in Azure Kubernetes Service](azure-disk-customer-managed-keys.md). The caches for these disks are also encrypted using the key you specify.
Host-based encryption is different than server-side encryption (SSE), which is used by Azure Storage. Azure-managed disks use Azure Storage to automatically encrypt data at rest when saving data. Host-based encryption uses the host of the VM to handle encryption before the data flows through Azure Storage. ## Before you begin
-This feature can only be set at cluster creation or node pool creation time.
-
-> [!NOTE]
-> Host-based encryption is available in [Azure regions][supported-regions] that support server side encryption of Azure managed disks and only with specific [supported VM sizes][supported-sizes].
+Before you begin, review the following prerequisites and limitations.
### Prerequisites -- Ensure you have the CLI extension v2.23 or higher version installed.
+- Ensure you have the CLI extension v2.23 or higher installed.
### Limitations -- Can only be enabled on new node pools.-- Can only be enabled in [Azure regions][supported-regions] that support server-side encryption of Azure managed disks and only with specific [supported VM sizes][supported-sizes].-- Requires an AKS cluster and node pool based on Virtual Machine Scale Sets(VMSS) as *VM set type*.
+- This feature can only be set at cluster or node pool creation time.
+- This feature can only be enabled in [Azure regions][supported-regions] that support server-side encryption of Azure managed disks and only with specific [supported VM sizes][supported-sizes].
+- This feature requires an AKS cluster and node pool based on Virtual Machine Scale Sets as *VM set type*.
## Use host-based encryption on new clusters
-Configure the cluster agent nodes to use host-based encryption when the cluster is created.
+- Create a new cluster and configure the cluster agent nodes to use host-based encryption using the [`az aks create`][az-aks-create] command with the `--enable-encryption-at-host` flag.
-```azurecli-interactive
-az aks create --name myAKSCluster --resource-group myResourceGroup -s Standard_DS2_v2 -l westus2 --enable-encryption-at-host
-```
-
-If you want to create clusters without host-based encryption, you can do so by omitting the `--enable-encryption-at-host` parameter.
+ ```azurecli-interactive
+ az aks create --name myAKSCluster --resource-group myResourceGroup -s Standard_DS2_v2 -l westus2 --enable-encryption-at-host
+ ```
## Use host-based encryption on existing clusters
-You can enable host-based encryption on existing clusters by adding a new node pool to your cluster. Configure a new node pool to use host-based encryption by using the `--enable-encryption-at-host` parameter.
-
-```azurecli
-az aks nodepool add --name hostencrypt --cluster-name myAKSCluster --resource-group myResourceGroup -s Standard_DS2_v2 --enable-encryption-at-host
-```
+- Enable host-based encryption on an existing cluster by adding a new node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--enable-encryption-at-host` flag.
-If you want to create new node pools without the host-based encryption feature, you can do so by omitting the `--enable-encryption-at-host` parameter.
+ ```azurecli
+ az aks nodepool add --name hostencrypt --cluster-name myAKSCluster --resource-group myResourceGroup -s Standard_DS2_v2 --enable-encryption-at-host
+ ```
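+
+ As an optional sanity check, you can confirm the setting on the scale set that backs the new node pool. This sketch assumes the default AKS node resource group naming:
+
+ ```azurecli
+ # Find the auto-generated node resource group, then inspect its scale sets.
+ NODE_RG=$(az aks show --name myAKSCluster --resource-group myResourceGroup --query nodeResourceGroup -o tsv)
+ az vmss list --resource-group $NODE_RG --query "[].virtualMachineProfile.securityProfile.encryptionAtHost"
+ ```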
> [!NOTE] > After you enable host-based encryption on your cluster, make sure you provide the proper access to your Azure Key Vault to enable encryption at rest. For more information, see [Control access][control-keys] and [Azure built-in roles for Key Vault data plane operations][akv-built-in-roles].
If you want to create new node pools without the host-based encryption feature,
<!-- LINKS - external --> <!-- LINKS - internal -->
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
[best-practices-security]: ./operator-best-practices-cluster-security.md [supported-regions]: ../virtual-machines/disk-encryption.md#supported-regions [supported-sizes]: ../virtual-machines/disk-encryption.md#supported-vm-sizes
-[azure-cli-install]: /cli/azure/install-azure-cli
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
[control-keys]: ../key-vault/general/best-practices.md#control-access-to-your-vault [akv-built-in-roles]: ../key-vault/general/rbac-guide.md#azure-built-in-roles-for-key-vault-data-plane-operations
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
This article shows you how to create an Azure Kubernetes Service (AKS) cluster w
## Create an AKS cluster with a managed NAT gateway
-* Create an AKS cluster with a new managed NAT gateway using the [`az aks create`][az-aks-create] command with the `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` parameters. If you want the NAT gateway to operate out of availability zones, specify the zones using `--zones`.
+* Create an AKS cluster with a new managed NAT gateway using the [`az aks create`][az-aks-create] command with the `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` parameters. If you want the NAT gateway to operate out of a specific availability zone, specify the zones using `--zones`.
+* A single NAT gateway resource cannot be used across multiple availability zones. To ensure zone-resiliency, it is recommended to deploy a NAT gateway resource to each availability zone and assign to subnets containing AKS clusters in each zone. For more information on this deployment model, see [NAT gateway for each zone](/azure/nat-gateway/nat-availability-zones#zonal-nat-gateway-resource-for-each-zone-in-a-region-to-create-zone-resiliency).
```azurecli-interactive az aks create \
This article shows you how to create an Azure Kubernetes Service (AKS) cluster w
``` > [!IMPORTANT]
+ > If no zone is configured for NAT gateway, the default zone placement is "no zone", in which Azure places NAT gateway into a zone for you.
> If no value for the outbound IP address is specified, the default value is one. ### Update the number of outbound IP addresses
This configuration requires bring-your-own networking (via [Kubenet][byo-vnet-ku
--location southcentralus \ --public-ip-addresses myNatGatewayPip ```
+ > [!Important]
+ > A single NAT gateway resource cannot be used across multiple availability zones. To ensure zone-resiliency, it is recommended to deploy a NAT gateway resource to each availability zone and assign to subnets containing AKS clusters in each zone. For more information on this deployment model, see [NAT gateway for each zone](/azure/nat-gateway/nat-availability-zones#zonal-nat-gateway-resource-for-each-zone-in-a-region-to-create-zone-resiliency).
+ > If no zone is configured for NAT gateway, the default zone placement is "no zone", in which Azure places NAT gateway into a zone for you.
5. Create a virtual network using the [`az network vnet create`][az-network-vnet-create] command.
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
Title: Deploy an application with the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes
-description: Use the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes to deploy an application
+description: Use the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes to deploy an application.
Previously updated : 05/03/2022 Last updated : 06/22/2023 # Quickstart: Deploy an application using the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes
-In this quickstart, you get familiar with using the [Dapr cluster extension][dapr-overview] in an AKS or Arc-enabled Kubernetes cluster. You are deploying a hello world example, consisting of a Python application that generates messages and a Node application that consumes and persists them.
+In this quickstart, you use the [Dapr cluster extension][dapr-overview] in an AKS or Arc-enabled Kubernetes cluster. You deploy a `hello world` example, which consists of a Python application that generates messages and a Node.js application that consumes and persists the messages.
## Prerequisites
In this quickstart, you get familiar with using the [Dapr cluster extension][dap
## Clone the repository
-To obtain the files you are using to deploy the sample application, clone the [Quickstarts repository][hello-world-gh] and change to the `hello-kubernetes` directory:
+1. Clone the [Dapr quickstarts repository][hello-world-gh] using the `git clone` command.
-```bash
-git clone https://github.com/dapr/quickstarts.git
-cd quickstarts/tutorials/hello-kubernetes/
-```
+ ```bash
+ git clone https://github.com/dapr/quickstarts.git
+ ```
+
+2. Change to the `hello-kubernetes` directory using `cd`.
+
+ ```bash
+ cd quickstarts/tutorials/hello-kubernetes/
+ ```
## Create and configure a state store
Dapr can use many different state stores such as, Redis, Azure Cosmos DB, Dynamo
1. Open the [Azure portal][azure-portal-cache] to start the Azure Cache for Redis creation flow. 2. Fill out the necessary information.
-3. Click **Create** to kickoff deployment of your Redis instance.
+3. Select **Create** to start the Redis instance deployment.
4. Take note of the hostname of your Redis instance, which you can retrieve from the **Overview** section in Azure. The hostname might be similar to the following example: `xxxxxx.redis.cache.windows.net:6380`.
-5. Once your instance is created, you'll need to grab your access key. Navigate to **Access keys** under **Settings** and create a Kubernetes secret to store your Redis password:
+5. Under **Settings**, navigate to **Access keys** to get your access keys.
+6. Create a Kubernetes secret to store your Redis password using the `kubectl create secret generic redis` command.
-```bash
-kubectl create secret generic redis --from-literal=redis-password=<your-redis-password>
-```
+ ```bash
+ kubectl create secret generic redis --from-literal=redis-password=<your-redis-password>
+ ```
### Configure the Dapr components
-Once your store is created, you'll need to add the keys to the redis.yaml file in the deploy directory of the Hello World repository. Replace the `redisHost` value with your own Redis master address, and the `redisPassword` with your own Secret. You can learn more [here][dapr-component-secrets].
+Once your store is created, you need to add the keys to the `redis.yaml` file in the deploy directory of the *Hello World* repository. For more information, see [Dapr component secrets][dapr-component-secrets].
-You will also need to add the following two lines below `redisPassword` to enable connection over TLS:
+1. Replace the `redisHost` value with your own Redis master address.
+2. Replace the `redisPassword` with your own Secret.
+3. Add the following two lines below `redisPassword` to enable connection over TLS:
-```yml
-- name: redisPassword
- secretKeyRef:
- name: redis
- key: redis-password
-- name: enableTLS
- value: true
-```
+ ```yaml
+ - name: redisPassword
+ secretKeyRef:
+ name: redis
+ key: redis-password
+ - name: enableTLS
+ value: true
+ ```
### Apply the configuration
-Apply the `redis.yaml` file:
+1. Apply the `redis.yaml` file using the `kubectl apply` command.
+
+ ```bash
+ kubectl apply -f ./deploy/redis.yaml
+ ```
+
+2. Verify your state store was successfully configured in the output of the `kubectl apply` command from the previous step:
-```bash
-kubectl apply -f ./deploy/redis.yaml
-```
-And verify that your state store was successfully configured in the output:
-```output
-component.dapr.io/statestore created
-```
+
+ ```output
+ component.dapr.io/statestore created
+ ```
+
+ You can also inspect the component itself using the `kubectl get components.redis` command.
+
+ ```bash
+ kubectl get components.redis -o yaml
+ ```
## Deploy the Node.js app with the Dapr sidecar
-Apply the Node.js app's deployment to your cluster:
+1. Apply the Node.js app deployment to your cluster using the `kubectl apply` command.
-```bash
-kubectl apply -f ./deploy/node.yaml
-```
+ ```bash
+ kubectl apply -f ./deploy/node.yaml
+ ```
-> [!NOTE]
-> Kubernetes deployments are asynchronous. This means you'll need to wait for the deployment to complete before moving on to the next steps. You can do so with the following command:
-> ```bash
-> kubectl rollout status deploy/nodeapp
-> ```
+ > [!NOTE]
+ > Kubernetes deployments are asynchronous, which means you need to wait for the deployment to complete before moving on to the next steps. You can do so with the following command:
+ >
+ > ```bash
+ > kubectl rollout status deploy/nodeapp
+ > ```
-This deploys the Node.js app to Kubernetes. The Dapr control plane will automatically inject the Dapr sidecar to the Pod. If you take a look at the `node.yaml` file, you see how Dapr is enabled for that deployment:
+ This deploys the Node.js app to Kubernetes. The Dapr control plane automatically injects the Dapr sidecar to the Pod. If you take a look at the `node.yaml` file, you see how Dapr is enabled for that deployment:
-* `dapr.io/enabled: true` - this tells the Dapr control plane to inject a sidecar to this deployment.
+ * `dapr.io/enabled: true`: tells the Dapr control plane to inject a sidecar to this deployment.
+ * `dapr.io/app-id: nodeapp`: assigns a unique ID or name to the Dapr application so other Dapr apps can send messages to it and communicate with it.
-* `dapr.io/app-id: nodeapp` - this assigns a unique ID or name to the Dapr application, so it can be sent messages to and communicated with by other Dapr apps.
+2. Access your service using the `kubectl get svc` command.
-To access your service, obtain and make note of the `EXTERNAL-IP` via `kubectl`:
+ ```bash
+ kubectl get svc nodeapp
+ ```
-```bash
-kubectl get svc nodeapp
-```
+3. Make note of the `EXTERNAL-IP` in the output.
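+
+ Optionally, capture the IP into the `EXTERNAL_IP` variable used by the commands that follow (a sketch that assumes the service type is LoadBalancer and the IP has been assigned):
+
+ ```bash
+ EXTERNAL_IP=$(kubectl get svc nodeapp -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ ```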
### Verify the service
-To call the service, run:
+1. Call the service using `curl` with your `EXTERNAL-IP`.
-```bash
-curl $EXTERNAL_IP/ports
-```
+ ```bash
+ curl $EXTERNAL_IP/ports
+ ```
-You should see output similar to the following:
+ You should see output similar to the following example output:
-```bash
-{"DAPR_HTTP_PORT":"3500","DAPR_GRPC_PORT":"50001"}
-```
+ ```bash
+ {"DAPR_HTTP_PORT":"3500","DAPR_GRPC_PORT":"50001"}
+ ```
-Next, submit an order to the application:
+2. Submit an order to the application using `curl`.
-```bash
-curl --request POST --data "@sample.json" --header Content-Type:application/json $EXTERNAL_IP/neworder
-```
+ ```bash
+ curl --request POST --data "@sample.json" --header Content-Type:application/json $EXTERNAL_IP/neworder
+ ```
-Confirm the order has been persisted by requesting it:
+3. Confirm the order has persisted by requesting it using `curl`.
-```bash
-curl $EXTERNAL_IP/order
-```
+ ```bash
+ curl $EXTERNAL_IP/order
+ ```
-You should see output similar to the following:
+ You should see output similar to the following example output:
-```bash
-{ "orderId": "42" }
-```
+ ```bash
+ { "orderId": "42" }
+ ```
-> [!TIP]
-> This is a good time to get acquainted with the Dapr dashboard, a convenient interface to check status, information, and logs of applications running on Dapr. To access the dashboard at `http://localhost:8080/`, run the following command:
-> ```bash
-> kubectl port-forward svc/dapr-dashboard -n dapr-system 8080:8080
-> ```
+ > [!TIP]
+ > This is a good time to get familiar with the Dapr dashboard, a convenient interface to check status, information, and logs of applications running on Dapr. To access the dashboard at `http://localhost:8080/`, run the following command:
+ >
+ > ```bash
+ > kubectl port-forward svc/dapr-dashboard -n dapr-system 8080:8080
+ > ```
## Deploy the Python app with the Dapr sidecar
-Take a quick look at the Python app. Navigate to the Python app directory in the `hello-kubernetes` quickstart and open `app.py`.
+1. Navigate to the Python app directory in the `hello-kubernetes` quickstart and open `app.py`.
-This example is a basic Python app that posts JSON messages to `localhost:3500`, which is the default listening port for Dapr. You can invoke the Node.js application's `neworder` endpoint by posting to `v1.0/invoke/nodeapp/method/neworder`. The message contains some data with an `orderId` that increments once per second:
+ This example is a basic Python app that posts JSON messages to `localhost:3500`, which is the default listening port for Dapr. You can invoke the Node.js application's `neworder` endpoint by posting to `v1.0/invoke/nodeapp/method/neworder`. The message contains some data with an `orderId` that increments once per second:
-```python
-n = 0
-while True:
- n += 1
- message = {"data": {"orderId": n}}
+ ```python
+ n = 0
+ while True:
+ n += 1
+ message = {"data": {"orderId": n}}
- try:
- response = requests.post(dapr_url, json=message)
- except Exception as e:
- print(e)
+ try:
+ response = requests.post(dapr_url, json=message)
+ except Exception as e:
+ print(e)
- time.sleep(1)
-```
+ time.sleep(1)
+ ```
-Deploy the Python app to your Kubernetes cluster:
+2. Deploy the Python app to your Kubernetes cluster using the `kubectl apply` command.
-```bash
-kubectl apply -f ./deploy/python.yaml
-```
+ ```bash
+ kubectl apply -f ./deploy/python.yaml
+ ```
-> [!NOTE]
-> As with the previous command, the following command will wait for the deployment to complete:
-> ```bash
-> kubectl rollout status deploy/pythonapp
-> ```
+ > [!NOTE]
+ > As with the previous command, you need to wait for the deployment to complete before moving on to the next steps. You can do so with the following command:
+ >
+ > ```bash
+ > kubectl rollout status deploy/pythonapp
+ > ```
## Observe messages and confirm persistence
-Now that both the Node.js and Python applications are deployed, watch messages come through.
+Now that both the Node.js and Python applications are deployed, you can watch messages come through.
-Get the logs of the Node.js app:
+1. Get the logs of the Node.js app using the `kubectl logs` command.
-```bash
-kubectl logs --selector=app=node -c node --tail=-1
-```
+ ```bash
+ kubectl logs --selector=app=node -c node --tail=-1
+ ```
-If the deployments were successful, you should see logs like this:
+ If the deployments were successful, you should see logs like the following example logs:
-```ouput
-Got a new order! Order ID: 1
-Successfully persisted state
-Got a new order! Order ID: 2
-Successfully persisted state
-Got a new order! Order ID: 3
-Successfully persisted state
-```
+ ```output
+ Got a new order! Order ID: 1
+ Successfully persisted state
+ Got a new order! Order ID: 2
+ Successfully persisted state
+ Got a new order! Order ID: 3
+ Successfully persisted state
+ ```
-Call the Node.js app's order endpoint to get the latest order. Grab the external IP address that you saved before and, append "/order" and perform a GET request against it (enter it into your browser, use Postman, or `curl` it!):
+2. Call the Node.js app's order endpoint to get the latest order using `curl`.
-```bash
-curl $EXTERNAL_IP/order
-{"orderID":"42"}
-```
+ ```bash
+ curl $EXTERNAL_IP/order
+ {"orderID":"42"}
+ ```
-You should see the latest JSON in the response.
+ You should see the latest JSON in the response.
## Clean up resources ### [Azure CLI](#tab/azure-cli)
-Use the [az group delete][az-group-delete] command to remove the resource group, the cluster, the namespace, and all related resources.
+* Remove the resource group, cluster, namespace, and all related resources using the [`az group delete`][az-group-delete] command.
-```azurecli-interactive
-az group delete --name MyResourceGroup
-```
+ ```azurecli-interactive
+ az group delete --name MyResourceGroup
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-Use the [Remove-AzResourceGroup][remove-azresourcegroup] command to remove the resource group, the cluster, the namespace, and all related resources.
+* Remove the resource group, cluster, namespace, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] command.
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name MyResourceGroup
-```
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup -Name MyResourceGroup
+ ```
## Next steps
-After successfully deploying this sample application:
> [!div class="nextstepaction"]
-> [Learn more about other cluster extensions][cluster-extensions]
+> [Learn more about other cluster extensions][cluster-extensions].
<!-- LINKS --> <!-- INTERNAL -->
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
For more information on using the KMS plugin, see [Encrypting Secret Data at Res
> [!WARNING] > KMS supports Konnectivity or [API Server Vnet Integration][api-server-vnet-integration].
-> You can use `kubectl get po -n kube-system` to verify the results show that a konnectivity-agent-xxx pod is running. If there is, it means the AKS cluster is using Konnectivity. When using VNet integration, you can run the command `az aks cluster show -g -n` to verify the setting `enableVnetIntegration` is set to **true**.
+> You can use `kubectl get po -n kube-system` to verify that a konnectivity-agent-xxx pod is running. If one is running, the AKS cluster is using Konnectivity. When using VNet integration, you can run the command `az aks show -g <resource-group> -n <cluster-name>` to verify that the setting `enableVnetIntegration` is set to **true**.
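+
+For example, a VNet integration check looks like the following sketch (resource group and cluster names are illustrative):
+
+```azurecli
+az aks show -g myResourceGroup -n myAKSCluster --query "apiServerAccessProfile.enableVnetIntegration"
+```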
## Limitations
app-service Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/language-support-policy.md
Title: Language Support Policy description: App Service language runtime support policies --+ Last updated 01/23/2023
Those who receive notifications include account administrators, service administ
## Language runtime version support timelines To learn more about specific language support policy timelines, visit the following resources: -- [ASP.NET](https://aka.ms/aspnetrelease)-- [.NET](https://aka.ms/dotnetrelease)
+- [.NET and ASP.NET Core](https://aka.ms/dotnetrelease)
+- [.NET Framework and ASP.NET](https://aka.ms/aspnetrelease)
- [Node](https://aka.ms/noderelease) - [Java](https://aka.ms/javarelease) - [Python](https://aka.ms/pythonrelease)
To learn more about how to update your App Service application language versions
- [Node](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md#node-on-linux-app-service) - [Java](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/java_support.md#java-on-app-service) - [Python](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/python_support.md#how-to-update-your-app-to-target-a-different-version-of-python)-- [PHP](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#how-to-update-your-app-to-target-a-different-version-of-php)
+- [PHP](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#how-to-update-your-app-to-target-a-different-version-of-php)
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
The following are the current limitations and known issues with PowerShell runbo
if($item) { write-output "File Created" } ``` 1. You can also upgrade your runbooks to PowerShell 7.1 or PowerShell 7.2 where the same runbook will work as expected.
-* Ensure to import the **Newtonsoft.Json** v10 module explicitly if PowerShell 5.1 runbooks have a dependency on this version of the module.
+* If you import the Az.Accounts module with version 2.12.3 or newer, ensure that you also explicitly import the **Newtonsoft.Json** v10 module if your PowerShell 5.1 runbooks depend on that version of the module. To work around this issue, use PowerShell 7.2 runbooks.
# [PowerShell 7.1 (preview)](#tab/lps71)
The following are the current limitations and known issues with PowerShell runbo
- When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when PowerShell 7.1 version of the dependent module is installed. For example, Az PowerShell module.Compute version 4.20.0, has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, 5.1 version of Az.Accounts were < 2.6.0. - When you start PowerShell 7 runbook using the webhook, it auto-converts the webhook input parameter to an invalid JSON. - We recommend that you use [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or lower because version: 3.0.0 or higher may lead to job failures.-- Ensure to import the **Newtonsoft.Json** v10 module explicitly if PowerShell 7.1 runbooks have a dependency on this version of the module.
+- If you import the Az.Accounts module with version 2.12.3 or newer, ensure that you also explicitly import the **Newtonsoft.Json** v10 module if your PowerShell 7.1 runbooks depend on that version of the module. To work around this issue, use PowerShell 7.2 runbooks.
# [PowerShell 7.2 (preview)](#tab/lps72)
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
### Hitachi |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|
+|Hitachi Virtual Storage Software Block software-defined storage (VSSB) | 1.24.12 | 1.20.0_2023-06-13 | 16.0.5100.7242 | 14.5 (Ubuntu 20.04)|
+|Hitachi Virtual Storage Platform (VSP) | 1.24.12 | 1.19.0_2023-05-09 | 16.0.937.6221 | 14.5 (Ubuntu 20.04)|
|[Hitachi UCP with RedHat OpenShift](https://www.hitachivantara.com/en-us/solutions/modernize-digital-core/infrastructure-modernization/hybrid-cloud-infrastructure.html) | 1.23.12 | 1.16.0_2023-02-14 | 16.0.937.6221 | 14.5 (Ubuntu 20.04)| |[Hitachi UCP with VMware Tanzu](https://www.hitachivantara.com/en-us/solutions/modernize-digital-core/infrastructure-modernization/hybrid-cloud-infrastructure.html) | 1.23.8 | 1.16.0_2023-02-14 | 16.0.937.6221 | 14.5 (Ubuntu 20.04)| ++ ### HPE |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
azure-cache-for-redis Cache Azure Active Directory For Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md
Previously updated : 05/31/2023 Last updated : 06/23/2023
The following table includes links to code samples, which demonstrate how to con
| **Client library** | **Language** | **Link to sample code**| |-|-|-|
-| StackExchange.Redis | C#/.NET | [.NET code sample](https://github.com/Azure/Microsoft.Azure.StackExchangeRedis) |
-| Python | Python | [Python code Sample](https://aka.ms/redis/aad/sample-code/python) |
+| StackExchange.Redis | .NET | [StackExchange.Redis code sample](https://github.com/Azure/Microsoft.Azure.StackExchangeRedis) |
+| redis-py | Python | [redis-py code sample](https://aka.ms/redis/aad/sample-code/python) |
| Jedis | Java | [Jedis code sample](https://aka.ms/redis/aad/sample-code/java-jedis) | | Lettuce | Java | [Lettuce code sample](https://aka.ms/redis/aad/sample-code/java-lettuce) | | Redisson | Java | [Redisson code sample](https://aka.ms/redis/aad/sample-code/java-redisson) | | ioredis | Node.js | [ioredis code sample](https://aka.ms/redis/aad/sample-code/js-ioredis) |
-| Node-redis | Node.js | [noredis code sample](https://aka.ms/redis/aad/sample-code/js-noderedis) |
+| node-redis | Node.js | [node-redis code sample](https://aka.ms/redis/aad/sample-code/js-noderedis) |
### Best practices for Azure AD authentication
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
Title: How to configure Azure Functions with a virtual network description: Article that shows you how to perform certain virtual networking tasks for Azure Functions. Previously updated : 03/24/2023 Last updated : 06/23/2023
This article shows you how to perform tasks related to configuring your function
## Restrict your storage account to a virtual network
-When you create a function app, you must either create a new storage account or link to an existing storage account. The storage account linked to your app must be a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. During function app creation, you can secure a new storage account behind a virtual network and integrate the function app with this network. At this time, you can't secure an existing storage account being used by your function app in the same way.
+When you create a function app, you either create a new storage account or link to an existing storage account. During function app creation, you can secure a new storage account behind a virtual network and integrate the function app with this network. At this time, you can't secure an existing storage account being used by your function app in the same way.
> [!NOTE] > Securing your storage account is supported for all tiers in both Dedicated (App Service) and Elastic Premium plans. Consumption plans currently don't support virtual networks.
+For a list of all restrictions on storage accounts, see [Storage account requirements](storage-considerations.md#storage-account-requirements).
+ ### During function app creation You can create a new function app along with a new storage account secured behind a virtual network. The following links show you how to create these resources by using either the Azure portal or by using deployment templates:
To secure the storage for an existing function app:
| Setting name | Value | Comment | |-|-|-| | `AzureWebJobsStorage`| Storage connection string | This is the connection string for a secured storage account. |
- | `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Storage connection string | This is the connection string for a secured storage account.This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
- | `WEBSITE_CONTENTSHARE` | File share | The name of the file share created in the secured storage account where the project deployment files reside.This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
+ | `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Storage connection string | This is the connection string for a secured storage account. This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
+ | `WEBSITE_CONTENTSHARE` | File share | The name of the file share created in the secured storage account where the project deployment files reside. This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
| `WEBSITE_CONTENTOVERVNET` | 1 | A value of 1 enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. | 1. Select **Save** to save the application settings. Changing app settings causes the app to restart.
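+
+These settings can also be applied from the command line. The following is a minimal sketch using the Azure CLI (app and resource group names are illustrative; only the scaling setting is shown):
+
+```azurecli
+az functionapp config appsettings set \
+    --name myFunctionApp \
+    --resource-group myResourceGroup \
+    --settings WEBSITE_CONTENTOVERVNET=1
+```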
azure-functions Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/consumption-plan.md
Title: Azure Functions Consumption plan hosting description: Learn about how Azure Functions Consumption plan hosting lets you run your code in an environment that scales dynamically, but you only pay for resources used during execution. Previously updated : 8/31/2020 Last updated : 06/06/2023 # Customer intent: As a developer, I want to understand the benefits of using the Consumption plan so I can get the scalability benefits of Azure Functions without having to pay for resources I don't need.
azure-functions Create Function App Linux App Service Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-function-app-linux-app-service-plan.md
- Title: Create a function app on Linux from the Azure portal
-description: Learn how to create your first Azure Function on Linux using the Azure portal.
- Previously updated : 04/29/2020
-#Customer intent: As a developer, learn how to use the Azure portal so that I can create a function app that runs on Linux in App Service plan so that I can have more control over how my functions are scaled.
--
-# Create a function app on Linux in an Azure App Service plan
-
-Azure Functions lets you host your functions on Linux in a default Azure App Service container. This article walks you through how to use the [Azure portal](https://portal.azure.com) to create a Linux-hosted function app that runs in an [App Service plan](dedicated-plan.md). You can also [bring your own custom container](functions-create-function-linux-custom-image.md).
---
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com) using your Azure account.
-
-## Create a function app
-
-You must have a function app to host the execution of your functions on Linux. The function app provides an environment for execution of your function code. It lets you group functions as a logical unit for easier management, deployment, scaling, and sharing of resources. In this article, you create an App Service plan when you create your function app.
-
-1. From the Azure portal menu or the **Home** page, select **Create a resource**.
-
-1. In the **New** page, select **Compute** > **Function App**.
-
- :::image type="content" source="./media/create-function-app-linux-app-service-plan/function-app-create-flow.png" alt-text="Create a function app in the Azure portal":::
-
-1. On the **Basics** page, use the function app settings as specified in the following table.
-
- | Setting | Suggested value | Description |
- | | - | -- |
- | **Subscription** | Your subscription | The subscription under which this new function app is created. |
- | **[Resource Group](../azure-resource-manager/management/overview.md)** | *myResourceGroup* | Name for the new resource group in which to create your function app. |
- | **Function App name** | Globally unique name | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. |
- |**Publish**| **Code** (default) | Option to publish code files or a Docker container. |
- | **Runtime stack** | Preferred language | Choose a runtime that supports your favorite function programming language. Choose **.NET Core** for C# and F# functions. |
- |**Version**| Version number | Choose the version of your installed runtime. |
- |**Region**| Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services your functions access. |
-
- :::image type="content" source="./media/create-function-app-linux-app-service-plan/function-app-create-basics-linux.png" alt-text="Basics page":::
-
-1. Select **Next : Hosting**. On the **Hosting** page, enter the following settings.
-
- | Setting | Suggested value | Description |
- | | - | -- |
- | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters in length and can contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](../azure-functions/storage-considerations.md#storage-account-requirements). |
- |**Operating system**| **Linux** | An operating system is pre-selected for you based on your runtime stack selection, but you can change the setting if necessary. |
- | **[Plan](../azure-functions/functions-scale.md)** | **Consumption (Serverless)** | Hosting plan that defines how resources are allocated to your function app. In the default **Consumption** plan, resources are added dynamically as required by your functions. In this [serverless](https://azure.microsoft.com/overview/serverless-computing/) hosting, you pay only for the time your functions run. When you run in an App Service plan, you must manage the [scaling of your function app](../azure-functions/functions-scale.md). |
-
- :::image type="content" source="./media/create-function-app-linux-app-service-plan/function-app-create-hosting-linux.png" alt-text="Hosting page":::
-
-1. Select **Next : Monitoring**. On the **Monitoring** page, enter the following settings.
-
- | Setting | Suggested value | Description |
- | | - | -- |
- | **[Application Insights](../azure-functions/functions-monitoring.md)** | **Yes** (default) | Creates an Application Insights resource of the same *App name* in the nearest supported region. By expanding this setting or selecting **Create new**, you can change the Application Insights name or choose a different region in an [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) where you want to store your data. |
-
- :::image type="content" source="./media/create-function-app-linux-app-service-plan/function-app-create-monitoring-linux.png" alt-text="Monitoring page":::
-
-1. Select **Review + create** to review the app configuration selections.
-
-1. On the **Review + create** page, review your settings, and then select **Create** to provision and deploy the function app.
-
-1. Select the **Notifications** icon in the upper-right corner of the portal and watch for the **Deployment succeeded** message.
-
-1. Select **Go to resource** to view your new function app. You can also select **Pin to dashboard**. Pinning makes it easier to return to this function app resource from your dashboard.
-
- ![Deployment notification](./media/create-function-app-linux-app-service-plan/function-app-create-notification2.png)
-
- Even after your function app is available, it may take a few minutes to be fully initialized.
-
-Next, you create a function in the new function app.
-
-## <a name="create-function"></a>Create an HTTP trigger function
-
-This section shows you how to create a function in your new function app in the portal.
-
-> [!NOTE]
-> The portal development experience can be useful for trying out Azure Functions. For most scenarios, consider developing your functions locally and publishing the project to your function app using either [Visual Studio Code](./create-first-function-vs-code-csharp.md#create-an-azure-functions-project) or the [Azure Functions Core Tools](functions-run-local.md#create-a-local-functions-project).
-
-1. From the left menu of the **Functions** window, select **Functions**, then select **Add** from the top menu.
-
-1. From the **New Function** window, select **Http trigger**.
-
- ![Choose HTTP trigger function](./media/create-function-app-linux-app-service-plan/function-app-select-http-trigger.png)
-
-1. In the **New Function** window, accept the default name for **New Function**, or enter a new name.
-
-1. Choose **Anonymous** from the **Authorization level** drop-down list, and then select **Create Function**.
-
- Azure creates the HTTP trigger function. Now, you can run the new function by sending an HTTP request.
-
-## Test the function
-
-1. In your new HTTP trigger function, select **Code + Test** from the left menu, then select **Get function URL** from the top menu.
-
- ![Select Get function URL](./media/create-function-app-linux-app-service-plan/function-app-select-get-function-url.png)
-
-1. In the **Get function URL** dialog box, select **default** from the drop-down list, and then select the **Copy to clipboard** icon.
-
- ![Copy the function URL from the Azure portal](./media/create-function-app-linux-app-service-plan/function-app-develop-tab-testing.png)
-
-1. Paste the function URL into your browser's address bar. Add the query string value `?name=<your_name>` to the end of this URL and press Enter to run the request.
-
- The following example shows the response in the browser:
-
- ![Function response in the browser.](./media/create-function-app-linux-app-service-plan/function-app-browser-testing.png)
-
- The request URL includes a key that is required, by default, to access your function over HTTP.
-
-1. When your function runs, trace information is written to the logs. To see the trace output, return to the **Code + Test** page in the portal and expand the **Logs** arrow at the bottom of the page.
-
- ![Functions log viewer in the Azure portal.](./media/create-function-app-linux-app-service-plan/function-view-logs.png)
-
-## Clean up resources
--
-## Next steps
-
-You have created a function app with a simple HTTP trigger function.
--
-For more information, see [Azure Functions HTTP bindings](functions-bindings-http-webhook.md).
azure-functions Create Premium Plan Function App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-premium-plan-function-app-portal.md
- Title: Create an Azure Functions Premium plan in the portal
-description: Learn how to use the Azure portal to create a function app that runs in the Premium plan.
- Previously updated : 10/30/2020--
-# Create a Premium plan function app in the Azure portal
-
-Azure Functions offers a scalable Premium plan that provides virtual network connectivity, no cold start, and premium hardware. To learn more, see [Azure Functions Premium plan](functions-premium-plan.md).
-
-In this article, you learn how to use the Azure portal to create a function app in a Premium plan.
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-
-## Create a function app
-
-You must have a function app to host the execution of your functions. A function app lets you group functions as a logical unit for easier management, deployment, scaling, and sharing of resources.
--
-At this point, you can create functions in the new function app. These functions can take advantage of the benefits of the [Premium plan](functions-premium-plan.md).
-
-## Clean up resources
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Add an HTTP triggered function](./functions-create-function-app-portal.md#create-function)
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
Because you're using an Azure Cosmos DB output binding, you must have the corres
Except for HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Azure Cosmos DB extension package to your project. # [In-process](#tab/in-process)
-```bash
-dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB
+```command
+dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 3.0.10
``` # [Isolated process](#tab/isolated-process)
-```bash
-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB
+```command
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB --version 3.0.9
``` ::: zone-end
Extension bundles usage is enabled in the *host.json* file at the root of the pr
Now, you can add the Azure Cosmos DB output binding to your project. ## Add an output binding-
-In Functions, each type of binding requires a `direction`, `type`, and a unique `name` to be defined in the *function.json* file. The way you define these attributes depends on the language of your function app.
- ::: zone pivot="programming-language-csharp"-
-In a C# class library project, the bindings are defined as binding attributes on the function method. The *function.json* file required by Functions is then auto-generated based on these attributes.
--
+In a C# class library project, the bindings are defined as binding attributes on the function method.
# [In-process](#tab/in-process)

Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition:
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
Use the Event Grid output binding to write events to a custom topic. You must ha
For information on setup and configuration details, see [How to work with Event Grid triggers and bindings in Azure Functions](event-grid-how-tos.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+---
+
+This article supports both programming models.
+ > [!IMPORTANT] > The Event Grid output binding is only available for Functions 2.x and higher.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
::: zone-end

::: zone pivot="programming-language-python"
-The following example shows a trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. It then sends in an event to the custom topic, as specified by the `topicEndpointUri`.
+The following example shows a trigger binding and a Python function that uses the binding. It then sends in an event to the custom topic, as specified by the `topicEndpointUri`. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+Here's the function in the function_app.py file:
+
+```python
+import logging
+import azure.functions as func
+import datetime
+
+app = func.FunctionApp()
+
+@app.function_name(name="eventgrid_output")
+@app.event_grid_trigger(arg_name="eventGridEvent")
+@app.event_grid_output(
+    arg_name="outputEvent",
+    topic_endpoint_uri="MyEventGridTopicUriSetting",
+    topic_key_setting="MyEventGridTopicKeySetting")
+def eventgrid_output(eventGridEvent: func.EventGridEvent,
+                     outputEvent: func.Out[func.EventGridOutputEvent]) -> None:
+
+    # Log the incoming event, then send a new event to the custom topic.
+    logging.info("eventGridEvent: %s", eventGridEvent)
+
+    outputEvent.set(
+        func.EventGridOutputEvent(
+            id="test-id",
+            data={"tag1": "value1", "tag2": "value2"},
+            subject="test-subject",
+            event_type="test-event-1",
+            event_time=datetime.datetime.utcnow(),
+            data_version="1.0"))
+
+```
+# [v1](#tab/python-v1)
Here's the binding data in the *function.json* file:

```json
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
Use the function trigger to respond to an event sent by an [Event Grid source](.
> [!NOTE] > Event Grid triggers aren't natively supported in an internal load balancer App Service Environment (ASE). The trigger uses an HTTP request that can't reach the function app without a gateway into the virtual network.
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+---
+
+This article supports both programming models.
+
## Example

::: zone pivot="programming-language-csharp"
$eventGridEvent | Out-String | Write-Host
```

::: zone-end

::: zone pivot="programming-language-python"
-The following example shows a trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding.
+The following example shows an Event Grid trigger binding and a Python function that uses the binding. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import json
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="eventGridTrigger")
+@app.event_grid_trigger(arg_name="event")
+def eventGridTest(event: func.EventGridEvent):
+    result = json.dumps({
+        'id': event.id,
+        'data': event.get_json(),
+        'topic': event.topic,
+        'subject': event.subject,
+        'event_type': event.event_type,
+    })
+
+    logging.info('Python EventGrid trigger processed an event: %s', result)
+```
+
+# [v1](#tab/python-v1)
Here's the binding data in the *function.json* file:
def main(event: func.EventGridEvent):
    logging.info('Python EventGrid trigger processed an event: %s', result)
```
+
+---
+
::: zone-end
::: zone pivot="programming-language-csharp"

## Attributes
azure-functions Functions Compare Logic Apps Ms Flow Webjobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md
Title: Integration and automation platform options in Azure description: "Compare Microsoft cloud services that are optimized for integration tasks: Power Automate, Logic Apps, Functions, and WebJobs." Previously updated : 04/09/2018 Last updated : 06/09/2023 #Customer intent: As a developer, I want to understand the choices that Azure offers for hosting and executing my business logic so that I can choose the right set of Azure services.
This article compares the following Microsoft cloud
All of these services can solve integration problems and automate business processes. They can all define input, actions, conditions, and output. You can run each of them on a schedule or trigger. Each service has unique advantages, and this article explains the differences.
-If you're looking for a more general comparison between Azure Functions and other Azure compute options, see [Criteria for choosing an Azure compute service](/azure/architecture/guide/technology-choices/compute-comparison) and [Choosing an Azure compute option for microservices](/azure/architecture/microservices/design/compute-options).
-
-For a good summary and comparison of automation service options in Azure, see [Choose the Automation services in Azure](../automation/automation-services.md).
+>[!NOTE]
+>If you're looking for a more general comparison between Azure Functions and other Azure compute options:
+>+ [Criteria for choosing an Azure compute service](/azure/architecture/guide/technology-choices/compute-comparison)
+>+ [Choosing an Azure compute option for microservices](/azure/architecture/microservices/design/compute-options)
+>
+>For a summary and comparison of automation service options in Azure:
+>+ [Choose the Automation services in Azure](../automation/automation-services.md)
## Compare Microsoft Power Automate and Azure Logic Apps
The following table helps you determine whether Power Automate or Logic Apps is
| **Users** |Office workers, business users, SharePoint administrators |Pro integrators and developers, IT pros |
| **Scenarios** |Self-service |Advanced integrations |
| **Design tool** |In-browser and mobile app, UI only |In-browser, [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md), and [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) with code view available |
-| **Application lifecycle management (ALM)** |Design and test in non-production environments, promote to production when ready |Azure DevOps: source control, testing, support, automation, and manageability in [Azure Resource Manager](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md) |
+| **Application lifecycle management (ALM)** |Power Platform [provides tools](/power-platform/alm/tools-apps-used-alm) that integrate with DevOps and [GitHub Actions](/power-platform/alm/devops-github-actions) to let you build automated pipelines in the ALM cycle. |Azure DevOps: source control, testing, support, automation, and manageability in [Azure Resource Manager](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md) |
| **Admin experience** |Manage Power Automate environments and data loss prevention (DLP) policies, track licensing: [Admin center](https://admin.powerplatform.microsoft.com) |Manage resource groups, connections, access management, and logging: [Azure portal](https://portal.azure.com) |
| **Security** |Microsoft 365 security audit logs, DLP, [encryption at rest](https://wikipedia.org/wiki/Data_at_rest#Encryption) for sensitive data |Security assurance of Azure: [Azure security](https://www.microsoft.com/en-us/trustcenter/Security/AzureSecurity), [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/), [audit logs](https://azure.microsoft.com/blog/azure-audit-logs-ux-refresh/) |
Azure Functions is built on the WebJobs SDK, so it shares many of the same event
|**[Integration with Logic Apps](functions-twitter-email.md)**|✔||
| **Trigger events** |[Timer](functions-bindings-timer.md)<br>[Azure Storage queues and blobs](functions-bindings-storage-blob.md)<br>[Azure Service Bus queues and topics](functions-bindings-service-bus.md)<br>[Azure Cosmos DB](functions-bindings-cosmosdb.md)<br>[Azure Event Hubs](functions-bindings-event-hubs.md)<br>[HTTP/WebHook (GitHub, Slack)](functions-bindings-http-webhook.md)<br>[Azure Event Grid](functions-bindings-event-grid.md)|[Timer](functions-bindings-timer.md)<br>[Azure Storage queues and blobs](functions-bindings-storage-blob.md)<br>[Azure Service Bus queues and topics](functions-bindings-service-bus.md)<br>[Azure Cosmos DB](functions-bindings-cosmosdb.md)<br>[Azure Event Hubs](functions-bindings-event-hubs.md)<br>[File system](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Files/FileTriggerAttribute.cs)|
| **Supported languages** |C#<br>F#<br>JavaScript<br>Java<br>Python<br>PowerShell |C#<sup>1</sup>|
-|**Package managers**|NPM and NuGet|NuGet<sup>2</sup>|
+|**Package managers**|npm and NuGet|NuGet<sup>2</sup>|
<sup>1</sup> WebJobs (without the WebJobs SDK) supports languages such as C#, Java, JavaScript, Bash, .cmd, .bat, PowerShell, PHP, TypeScript, Python, and more. A WebJob can run any program or script that can run in the App Service sandbox.
-<sup>2</sup> WebJobs (without the WebJobs SDK) supports NPM and NuGet.
+<sup>2</sup> WebJobs (without the WebJobs SDK) supports npm and NuGet.
### Summary
For other scenarios where you want to run code snippets for integrating Azure or
## Power Automate, Logic Apps, Functions, and WebJobs together
-You don't have to choose just one of these services. They integrate with each other as well as with external services.
+You don't have to choose just one of these services. They integrate with each other and with external services.
A Power Automate flow can call an Azure Logic Apps workflow. An Azure Logic Apps workflow can call a function in Azure Functions, and vice versa. For example, see [Create a function that integrates with Azure Logic Apps](functions-twitter-email.md).
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md
func start
| **`--cors-credentials`** | Allow cross-origin authenticated requests using cookies and the Authentication header. |
| **`--dotnet-isolated-debug`** | When set to `true`, pauses the .NET worker process until a debugger is attached from the .NET isolated project being debugged. |
| **`--enable-json-output`** | Emits console logs as JSON, when possible. |
-| **`--enableAuth`** | Enable full authentication handling pipeline. |
+| **`--enableAuth`** | Enable full authentication handling pipeline, with authorization requirements. |
| **`--functions`** | A space-separated list of functions to load. |
| **`--language-worker`** | Arguments to configure the language worker. For example, you may enable debugging for a language worker by providing [debug port and other required arguments](https://github.com/Azure/azure-functions-core-tools/wiki/Enable-Debugging-for-language-workers). |
| **`--no-build`** | Don't build the current project before running. For .NET class projects only. The default is `false`. |
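For example, the following sketch runs the host locally with authorization enforced and loads only one function; the function name and port shown here are placeholders:

```command
func start --enableAuth --functions HttpExample --port 7071
```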
Installs Functions extensions in a non-C# class library project.
When possible, you should instead use extension bundles. To learn more, see [Extension bundles](functions-bindings-register.md#extension-bundles).
-For C# class library and .NET isolated projects, instead use standard NuGet package installation methods, such as `dotnet add package`.
+For compiled C# projects (both in-process and isolated worker process), instead use standard NuGet package installation methods, such as `dotnet add package`.
The `install` action supports the following options:
The `install` action supports the following options:
| **`--source`** | NuGet feed source when not using NuGet.org. |
| **`--version`** | Extension package version. |
-No action is taken when an extension bundle is defined in your host.json file.
+No action is taken when an extension bundle is defined in your host.json file. When you need to manually install extensions, you must first remove the bundle definition. For more information, see [Install extensions](functions-run-local.md#install-extensions).
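For example, after removing the `extensionBundle` section from *host.json*, you can install a single extension at a pinned version. The package and version shown here are only illustrative:

```command
func extensions install --package Microsoft.Azure.WebJobs.Extensions.ServiceBus --version 5.9.0
```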
## func extensions sync
azure-functions Functions Create Function App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-app-portal.md
Title: Create your first function in the Azure portal description: Learn how to create your first Azure Function for serverless execution using the Azure portal. Previously updated : 06/10/2022 Last updated : 06/10/2023
Next, create a function in the new function app.
## Test the function
+> [!TIP]
+> The **Code + Test** functionality in the portal works even for functions that are read-only and can't be edited in the portal.
+
1. In your new HTTP trigger function, select **Code + Test** from the left menu, and then select **Get function URL** from the top menu.

   :::image type="content" source="./media/functions-create-first-azure-function/function-app-http-example-get-function-url.png" alt-text="Screenshot of Get function URL window.":::
azure-functions Functions Deployment Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md
Title: Deployment technologies in Azure Functions
description: Learn the different ways you can deploy code to Azure Functions. Previously updated : 05/18/2022 Last updated : 06/22/2023 # Deployment technologies in Azure Functions
Each plan has different behaviors. Not all deployment technologies are available
| Local Git<sup>1</sup> |✔|✔|✔| |✔|✔|
| Cloud sync<sup>1</sup> |✔|✔|✔| |✔|✔|
| FTP<sup>1</sup> |✔|✔|✔| |✔|✔|
-| Portal editing |✔|✔|✔| |✔<sup>2</sup>|✔<sup>2</sup>|
+| In-portal editing<sup>2</sup> |✔|✔|✔|✔|✔<sup>3</sup>|✔<sup>3</sup>|
-<sup>1</sup> Deployment technology that requires [manual trigger syncing](#trigger-syncing).
-<sup>2</sup> Portal editing is enabled only for HTTP and Timer triggers for Functions on Linux using Premium and Dedicated plans.
+<sup>1</sup> Deployment technology that requires [manual trigger syncing](#trigger-syncing).
+<sup>2</sup> In-portal editing is disabled when code is deployed to your function app from outside the portal. For more information, including language support details for in-portal editing, see [Language support details](supported-languages.md#language-support-details).
+<sup>3</sup> In-portal editing is enabled only for HTTP and Timer triggered functions running on Linux in Premium and Dedicated plans.
## Key concepts
You can use FTP to directly transfer files to Azure Functions.
In the portal-based editor, you can directly edit the files that are in your function app (essentially deploying every time you save your changes).
->__How to use it:__ To be able to edit your functions in the Azure portal, you must have [created your functions in the portal](./functions-get-started.md). To preserve a single source of truth, using any other deployment method makes your function read-only and prevents continued portal editing. To return to a state in which you can edit your files in the Azure portal, you can manually turn the edit mode back to `Read/Write` and remove any deployment-related application settings (like [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package).
+>__How to use it:__ To be able to edit your functions in the [Azure portal](https://portal.azure.com), you must have [created your functions in the portal](./functions-get-started.md). To preserve a single source of truth, using any other deployment method makes your function read-only and prevents continued portal editing. To return to a state in which you can edit your files in the Azure portal, you can manually turn the edit mode back to `Read/Write` and remove any deployment-related application settings (like [`WEBSITE_RUN_FROM_PACKAGE`](functions-app-settings.md#website_run_from_package)).
->__When to use it:__ The portal is a good way to get started with Azure Functions. For more intense development work, we recommend that you use one of the following client tools:
+>__When to use it:__ The portal is a good way to get started with Azure Functions. For more advanced development work, we recommend that you use one of the following client tools:
>
>+ [Visual Studio Code](./create-first-function-vs-code-csharp.md)
>+ [Azure Functions Core Tools (command line)](functions-run-local.md)
In the portal-based editor, you can directly edit the files that are in your fun
>__Where app content is stored:__ App content is stored on the file system, which may be backed by Azure Files from the storage account specified when the function app was created.
-The following table shows the operating systems and languages that support portal editing:
+The following table shows the operating systems and languages that support in-portal editing:
| Language | Windows Consumption | Windows Premium | Windows Dedicated | Linux Consumption | Linux Premium | Linux Dedicated |
|-|:--:|:-:|:--:|:--:|:-:|:-:|
The following table shows the operating systems and languages that support porta
| C# Script |✔|✔|✔| |✔<sup>\*</sup> |✔<sup>\*</sup>|
| F# | | | | | | |
| Java | | | | | | |
-| JavaScript (Node.js) |✔|✔|✔| |✔<sup>\*</sup>|✔<sup>\*</sup>|
-| Python | | | | | | |
+| JavaScript (Node.js) |✔|✔|✔| |✔<sup>1</sup>|✔<sup>1</sup>|
+| Python<sup>2</sup> | | | |✔ |✔<sup>1</sup> |✔<sup>1</sup> |
| PowerShell |✔|✔|✔| | | |
| TypeScript (Node.js) | | | | | | |
-<sup>*</sup> Portal editing is enabled only for HTTP and Timer triggers for Functions on Linux using Premium and Dedicated plans.
+<sup>1</sup> In-portal editing is enabled only for HTTP and Timer triggers for Functions on Linux using Premium and Dedicated plans.
+<sup>2</sup> In-portal editing is only supported for the [v1 Python programming model](functions-reference-python.md?pivots=python-mode-configuration).
## Deployment behaviors
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
Title: Node.js developer reference for Azure Functions description: Understand how to develop functions by using Node.js.- ms.assetid: 45dedd78-3ff9-411f-bb4b-16d29a11384c Previously updated : 02/24/2022 Last updated : 04/17/2023 ms.devlang: javascript, typescript zone_pivot_groups: functions-nodejs-model
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
Title: Work with Azure Functions Core Tools
description: Learn how to code and test Azure Functions from the command prompt or terminal on your local computer before you run them on Azure Functions. ms.assetid: 242736be-ec66-4114-924b-31795fd18884 Previously updated : 10/05/2021 Last updated : 06/23/2023
+zone_pivot_groups: programming-languages-set-functions
# Work with Azure Functions Core Tools
-Azure Functions Core Tools lets you develop and test your functions on your local computer from the command prompt or terminal. Your local functions can connect to live Azure services, and you can debug your functions on your local computer using the full Functions runtime. You can even deploy a function app to your Azure subscription.
+Azure Functions Core Tools lets you develop and test your functions on your local computer. Core Tools includes a version of the same runtime that powers Azure Functions, which means your local functions run as they would in Azure and can connect to live Azure services during local development and debugging. You can even deploy your code project to Azure using Core Tools.
[!INCLUDE [Don't mix development environments](../../includes/functions-mixed-dev-environments.md)]
-Developing functions on your local computer and publishing them to Azure using Core Tools follows these basic steps:
-
-> [!div class="checklist"]
-> * [Install the Core Tools and dependencies.](#v2)
-> * [Create a function app project from a language-specific template.](#create-a-local-functions-project)
-> * [Register trigger and binding extensions.](#register-extensions)
-> * [Define Storage and other connections.](#local-settings)
-> * [Create a function from a trigger and language-specific template.](#create-func)
-> * [Run the function locally.](#start)
-> * [Publish the project to Azure.](#publish)
+Core Tools can be used with all [supported languages](supported-languages.md). Select your language at the top of the article.
+
+If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-csharp.md).
+If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-java.md).
+If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-node.md).
+If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-powershell.md).
+If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-python.md).
+If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-typescript.md).
+
+Core Tools enables the integrated local development and debugging experience for your functions provided by both Visual Studio and Visual Studio Code.
## Prerequisites
-The specific prerequisites for Core Tools depend on the features you plan to use:
+To be able to publish to Azure from Core Tools, you must have one of the following Azure tools installed locally:
-**[Publish](#publish)**: Core Tools currently depends on either the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) for authenticating with your Azure account. This means that you must install one of these tools to be able to [publish to Azure](#publish) from Azure Functions Core Tools.
++ [Azure CLI](/cli/azure/install-azure-cli)
++ [Azure PowerShell](/powershell/azure/install-azure-powershell)
-**[Install extensions](#install-extensions)**: To manually install extensions by using Core Tools, you must have the [.NET 6.0 SDK](https://dotnet.microsoft.com/download) installed. The .NET SDK is used by Core Tools to install extensions from NuGet. You don't need to know .NET to use Azure Functions extensions.
+These tools are required to authenticate with your Azure account from your local computer.
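For example, with the Azure CLI you authenticate by signing in before you publish; the Azure PowerShell equivalent is `Connect-AzAccount`:

```command
az login
```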
## <a name="v2"></a>Core Tools versions
-There are four versions of Azure Functions Core Tools. The version you use depends on your local development environment, [choice of language](supported-languages.md), and level of support required.
-
-Choose one of the following version tabs to learn about each specific version and for detailed installation instructions:
-
-# [Version 4.x](#tab/v4)
-
-Supports [version 4.x](functions-versions.md) of the Functions runtime. This version supports Windows, macOS, and Linux, and uses platform-specific package managers or npm for installation. This is the recommended version of the Functions runtime and Core Tools.
+Major versions of Azure Functions Core Tools are linked to specific major versions of the Azure Functions runtime. For example, version 4.x of Core Tools supports version 4.x of the Functions runtime. This is the recommended major version of both the Functions runtime and Core Tools. You can find the latest Core Tools release version on [this release page](https://github.com/Azure/azure-functions-core-tools/releases/latest).
-# [Version 3.x](#tab/v3)
+Run the following command to determine the version of your current Core Tools installation:
-Supports [version 3.x](functions-versions.md) of the Azure Functions runtime, which reached end of life (EOL) for extended support on December 13, 2022. Use version 4.x instead.
-
-# [Version 2.x](#tab/v2)
+```command
+func --version
+```
-Supports [version 3.x](functions-versions.md) of the Azure Functions runtime, which reached end of life (EOL) for extended support on December 13, 2022. Use version 4.x instead.
+Unless otherwise noted, the examples in this article are for version 4.x.
-# [Version 1.x](#tab/v1)
+The following considerations apply to Core Tools versions:
-Supports version 1.x of the Azure Functions runtime. This version of the tools is only supported on Windows computers and is installed from an [npm package](https://www.npmjs.com/package/azure-functions-core-tools).
++ You can only install one version of Core Tools on a given computer.
-You can only install one version of Core Tools on a given computer. Unless otherwise noted, the examples in this article are for version 4.x.
++ Version 2.x and 3.x of Core Tools were used with versions 2.x and 3.x of the Functions runtime, which have reached their end of life (EOL). For more information, see [Azure Functions runtime versions overview](functions-versions.md).
++ Version 1.x of Core Tools is required when using version 1.x of the Functions runtime, which is still supported. This version of Core Tools can only be run locally on Windows computers. If you're currently running on version 1.x, you should consider [migrating your app to version 4.x](migrate-version-1-version-4.md) today.

## Install the Azure Functions Core Tools
-[Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools) includes a version of the same runtime that powers Azure Functions runtime that you can run on your local development computer. It also provides commands to create functions, connect to Azure, and deploy function projects.
-
-Starting with version 2.x, Core Tools runs on [Windows](?tabs=windows#v2), [macOS](?tabs=macos#v2), and [Linux](?tabs=linux#v2).
+The recommended way to install Core Tools depends on the operating system of your local development computer.
-# [Windows](#tab/windows/v4)
+# [Windows](#tab/windows)
The following steps use a Windows installer (MSI) to install Core Tools v4.x. For more information about other package-based installers, see the [Core Tools readme](https://github.com/Azure/azure-functions-core-tools/blob/v4.x/README.md#windows).
Download and run the Core Tools installer, based on your version of Windows:
- [v4.x - Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2174087) (Recommended. [Visual Studio Code debugging](functions-develop-vs-code.md#debugging-functions-locally) requires 64-bit.)
- [v4.x - Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2174159)
-# [Windows](#tab/windows/v3)
-
-The following steps use a Windows installer (MSI) to install Core Tools v3.x. For more information about other package-based installers, see the [Core Tools readme](https://github.com/Azure/azure-functions-core-tools/blob/master/README.md#windows).
-
-Download and run the Core Tools installer, based on your version of Windows:
--- [v3.x - Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2135274) (Recommended. [Visual Studio Code debugging](functions-develop-vs-code.md#debugging-functions-locally) requires 64-bit.)-- [v3.x - Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2135275)-
-# [Windows](#tab/windows/v2)
-
-Installing version 2.x of the Core Tools requires npm. You can also [use Chocolatey to install the package](https://github.com/Azure/azure-functions-core-tools/blob/master/README.md#azure-functions-core-tools).
-
-1. If you haven't already done so, [install Node.js with npm](https://nodejs.org/en/download/).
-
-1. Run the following npm command to install the Core Tools package:
-
- ```
- npm install -g azure-functions-core-tools@2 --unsafe-perm true
- ```
-
-# [Windows](#tab/windows/v1)
+If you previously used the Windows installer (MSI) to install Core Tools on Windows, you should uninstall the old version from **Add or remove programs** before installing the latest version.
If you need to install version 1.x of the Core Tools, see the [GitHub repository](https://github.com/Azure/azure-functions-core-tools/blob/v1.x/README.md#installing) for more information.
-# [macOS](#tab/macos/v4)
+# [macOS](#tab/macos)
[!INCLUDE [functions-x86-emulation-on-arm64-note](../../includes/functions-x86-emulation-on-arm64-note.md)]
The following steps use Homebrew to install the Core Tools on macOS.
# if upgrading on a machine that has 2.x or 3.x installed:
brew link --overwrite azure-functions-core-tools@4
```
+# [Linux](#tab/linux)
-# [macOS](#tab/macos/v3)
--
-The following steps use Homebrew to install the Core Tools on macOS.
+The following steps use [APT](https://wiki.debian.org/Apt) to install Core Tools on your Ubuntu/Debian Linux distribution. For other Linux distributions, see the [Core Tools readme](https://github.com/Azure/azure-functions-core-tools/blob/v4.x/README.md#linux).
-1. Install [Homebrew](https://brew.sh/), if it's not already installed.
-
-1. Install the Core Tools package:
+1. Install the Microsoft package repository GPG key, to validate package integrity:
```bash
- brew tap azure/functions
- brew install azure-functions-core-tools@3
- # if upgrading on a machine that has 2.x installed:
- brew link --overwrite azure-functions-core-tools@3
+ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
+ sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg
```
-# [macOS](#tab/macos/v2)
+1. Set up the APT source list before doing an APT update.
-
-The following steps use Homebrew to install the Core Tools on macOS.
-
-1. Install [Homebrew](https://brew.sh/), if it's not already installed.
-
-1. Install the Core Tools package:
+ ##### Ubuntu
```bash
- brew tap azure/functions
- brew install azure-functions-core-tools@2
+ sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-$(lsb_release -cs)-prod $(lsb_release -cs) main" > /etc/apt/sources.list.d/dotnetdev.list'
```
-# [macOS](#tab/macos/v1)
-
-Version 1.x of the Core Tools isn't supported on macOS. Use version 2.x or a later version on macOS.
-
-# [Linux](#tab/linux/v4)
--
-5. Install the Core Tools package:
+ ##### Debian
```bash
- sudo apt-get install azure-functions-core-tools-4
+ sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/debian/$(lsb_release -rs | cut -d'.' -f 1)/prod $(lsb_release -cs) main" > /etc/apt/sources.list.d/dotnetdev.list'
```
-# [Linux](#tab/linux/v3)
+1. Check the `/etc/apt/sources.list.d/dotnetdev.list` file for one of the appropriate Linux version strings in the following table:
+ | Linux distribution | Version |
+ | -- | - |
+ | Debian 11 | `bullseye` |
+ | Debian 10 | `buster` |
+ | Debian 9 | `stretch` |
+ | Ubuntu 22.04 | `jammy` |
+ | Ubuntu 20.04 | `focal` |
+ | Ubuntu 19.04 | `disco` |
+ | Ubuntu 18.10 | `cosmic` |
+ | Ubuntu 18.04 | `bionic` |
+ | Ubuntu 17.04 | `zesty` |
+ | Ubuntu 16.04/Linux Mint 18 | `xenial` |
-5. Install the Core Tools package:
+1. Start the APT source update:
```bash
- sudo apt-get install azure-functions-core-tools-3
+ sudo apt-get update
```
-# [Linux](#tab/linux/v2)
--
-5. Install the Core Tools package:
+1. Install the Core Tools package:
```bash
- sudo apt-get install azure-functions-core-tools-2
+ sudo apt-get install azure-functions-core-tools-4
```
-# [Linux](#tab/linux/v1)
-
-Version 1.x of the Core Tools isn't supported on Linux. Use version 2.x or a later version on Linux.
-
-## Changing Core Tools versions
-
-When changing to a different version of Core Tools, you should use the same package manager as the original installation to move to a different package version. For example, if you installed Core Tools version 3.x using npm, you should use the following command to upgrade to version 4.x:
-
-```bash
-npm install -g azure-functions-core-tools@4 --unsafe-perm true
-```
-
-If you used Windows installer (MSI) to install Core Tools on Windows, you should uninstall the old version from Add Remove Programs before installing a different version.
+When upgrading to the latest version of Core Tools, you should use the same package manager as the original installation to perform the upgrade. Visual Studio and Visual Studio Code may also install Azure Functions Core Tools, depending on your specific tools installation.
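For example, if you originally installed Core Tools by using npm, run the same npm command to upgrade to the latest 4.x release:

```command
npm install -g azure-functions-core-tools@4 --unsafe-perm true
```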
## Create a local Functions project
The following considerations apply to project initialization:
+ When you don't provide a project name, the current folder is initialized.
-+ If you plan to publish your project to a custom Linux container, use the `--docker` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function on Linux using a custom image](functions-create-function-linux-custom-image.md).
-
-Certain languages may have more considerations:
-
-# [C\#](#tab/csharp)
++ If you plan to deploy your project as a function app in a Linux container, use the `--docker` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function on Linux using a custom image](functions-create-function-linux-custom-image.md).
-+ Core Tools lets you create function app projects for the .NET runtime as both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# class library projects (.csproj). These projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure.
++ Core Tools lets you create function app projects for the .NET runtime as either [in-process](functions-dotnet-class-library.md) or [isolated worker process](dotnet-isolated-process-guide.md) C# class library projects (.csproj). These projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure.
+ Use the `--csx` parameter if you want to work locally with C# script (.csx) files. These files are the same ones you get when you create functions in the Azure portal and when using version 1.x of Core Tools. To learn more, see the [func init reference](functions-core-tools-reference.md#func-init).
-# [Java](#tab/java)
- + Java uses a Maven archetype to create the local Functions project, along with your first HTTP triggered function. Instead of using `func init` and `func new`, you should follow the steps in the [Command line quickstart](./create-first-function-cli-java.md). -
-# [JavaScript](#tab/node)
- + To use a `--worker-runtime` value of `node`, specify the `--language` as `javascript`. -
-# [PowerShell](#tab/powershell)
-
-There are no other considerations for PowerShell.
-
-# [Python](#tab/python)
- + You should run all commands, including `func init`, from inside a virtual environment. To learn more, see [Create and activate a virtual environment](create-first-function-cli-python.md#create-venv).-
-# [TypeScript](#tab/ts)
- + To use a `--worker-runtime` value of `node`, specify the `--language` as `typescript`.
-
-
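Putting these options together, the following sketch initializes a TypeScript project in a new folder; the folder name *MyFunctionProj* is just an example:

```command
func init MyFunctionProj --worker-runtime node --language typescript
```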
-## Register extensions
-
-Starting with runtime version 2.x, [Functions triggers and bindings](functions-triggers-bindings.md) are implemented as .NET extension (NuGet) packages. For compiled C# projects, you simply reference the NuGet extension packages for the specific triggers and bindings you're using. HTTP bindings and timer triggers don't require extensions.
-
-To improve the development experience for non-C# projects, Functions lets you reference a versioned extension bundle in your host.json project file. [Extension bundles](functions-bindings-register.md#extension-bundles) makes all extensions available to your app and removes the chance of having package compatibility issues between extensions. Extension bundles also removes the requirement of installing the .NET SDK and having to deal with the extensions.csproj file.
+## Binding extensions
-Extension bundles is the recommended approach for functions projects other than C# complied projects, and for C# script. For these projects, the extension bundle setting is generated in the _host.json_ file during initialization. If bundles aren't enabled, you need to update the project's host.json file.
+[Functions triggers and bindings](functions-triggers-bindings.md) are implemented as .NET extension (NuGet) packages. To be able to use a specific binding extension, that extension must be installed in the project.
+This section doesn't apply to version 1.x of the Functions runtime. In version 1.x, supported bindings were included in the core product extension.
-To learn more, see [Register Azure Functions binding extensions](functions-bindings-register.md#extension-bundles).
+For compiled C# projects, add references to the specific NuGet packages for the binding extensions required by your functions. C# script (.csx) projects should use [extension bundles](functions-bindings-register.md#extension-bundles).
+Functions provides _extension bundles_ to make it easy to work with binding extensions in your project. Extension bundles, which are versioned and defined in the host.json file, install a complete set of compatible binding extension packages for your app. Your host.json should already have extension bundles enabled. If for some reason you need to add or update the extension bundle in the host.json file, see [Extension bundles](functions-bindings-register.md#extension-bundles).
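For reference, a bundle definition in *host.json* looks like the following sketch; the version range shown is illustrative, and you should keep the range that was generated for your project:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
```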
-There may be cases in a non-.NET project when you can't use extension bundles, such as when you need to target a specific version of an extension not in the bundle. In these rare cases, you can use Core Tools to locally install the specific extension packages required by your project. To learn more, see [Install extensions](#install-extensions).
+If you must use a binding extension or an extension version not in a supported bundle, you need to manually install the extension. For these rare scenarios, see [Install extensions](#install-extensions).
[!INCLUDE [functions-local-settings-file](../../includes/functions-local-settings-file.md)]

By default, these settings aren't migrated automatically when the project is published to Azure. Use the [`--publish-local-settings` option][func azure functionapp publish] when you publish to make sure these settings are added to the function app in Azure. Values in the `ConnectionStrings` section are never published.
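For example, the following sketch publishes your project and migrates your local settings, where `<FunctionAppName>` is the name of your function app in Azure:

```command
func azure functionapp publish <FunctionAppName> --publish-local-settings
```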
-The function app settings values can also be read in your code as environment variables. For more information, see the Environment variables section of these language-specific reference articles:
-
-* [C# precompiled](functions-dotnet-class-library.md#environment-variables)
-* [C# script (.csx)](functions-reference-csharp.md#environment-variables)
-* [Java](functions-reference-java.md#environment-variables)
-* [JavaScript](functions-reference-node.md#environment-variables)
-* [PowerShell](functions-reference-powershell.md#environment-variables)
-* [Python](functions-reference-python.md#environment-variables)
+The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-dotnet-class-library.md#environment-variables).
+The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-java.md#environment-variables).
+The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-node.md#environment-variables).
+The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-powershell.md#environment-variables).
+The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-python.md#environment-variables).
When no valid storage connection string is set for [`AzureWebJobsStorage`] and a local storage emulator isn't being used, the following error message is shown:
To learn more, see the [`func new` command](functions-core-tools-reference.md#fu
## <a name="start"></a>Run functions locally
-To run a Functions project, you run the Functions host from the root directory of your project. The host enables triggers for all functions in the project. The [`start` command](functions-core-tools-reference.md#func-start) varies depending on your project language.
-
-# [C\#](#tab/csharp)
-
-```
-func start
-```
-
-# [Java](#tab/java)
+To run a Functions project, you run the Functions host from the root directory of your project. The host enables triggers for all functions in the project. Use the following command to run your functions locally:
```
mvn clean package
mvn azure-functions:run
```
-# [JavaScript](#tab/node)
-
```
func start
```
-# [PowerShell](#tab/powershell)
-
-```
-func start
-```
-
-# [Python](#tab/python)
-
-```
-func start
-```
-This command must be [run in a virtual environment](./create-first-function-cli-python.md).
-
-# [TypeScript](#tab/ts)
-
```
npm install
npm start
```
+This command must be [run in a virtual environment](./create-first-function-cli-python.md).
>[!NOTE]
> Version 1.x of the Functions runtime instead requires `func host start`. To learn more, see [Azure Functions Core Tools reference](functions-core-tools-reference.md?tabs=v1#func-start).

When the Functions host starts, it outputs the URL of HTTP-triggered functions, like in the following example:
Http Function MyHttpTrigger: http://localhost:7071/api/MyHttpTrigger
</pre>

>[!IMPORTANT]
->When running locally, authorization isn't enforced for HTTP endpoints. This means that all local HTTP requests are handled as `authLevel = "anonymous"`. For more information, see the [HTTP binding article](functions-bindings-http-webhook-trigger.md#authorization-keys).
+>By default, authorization isn't enforced for HTTP endpoints when running locally. This means that all local HTTP requests are handled as `authLevel = "anonymous"`. For more information, see the [HTTP binding article](functions-bindings-http-webhook-trigger.md#authorization-keys). You can use the `--enableAuth` option to require authorization when running locally. For more information, see [`func start`](./functions-core-tools-reference.md?tabs=v2#func-start).
### Passing test data to a function
You call the following endpoint to locally run HTTP and webhook triggered functi
http://localhost:{port}/api/{function_name}
```
-Make sure to use the same server name and port that the Functions host is listening on. You see this in the output generated when starting the Function host. You can call this URL using any HTTP method supported by the trigger.
+Make sure to use the same server name and port that the Functions host is listening on. You see an endpoint like this in the output generated when starting the Function host. You can call this URL using any HTTP method supported by the trigger.
The following cURL command triggers the `MyHttpTrigger` quickstart function from a GET request with the _name_ parameter passed in the query string.
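A sketch of that request, assuming the Functions host is listening on the default port of 7071:

```command
curl --get "http://localhost:7071/api/MyHttpTrigger?name=Azure%20Rocks"
```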
You can make GET requests from a browser passing data in the query string. For a
For all functions other than HTTP and Event Grid triggers, you can test your functions locally using REST by calling a special endpoint called an _administration endpoint_. Calling this endpoint with an HTTP POST request on the local server triggers the function. You can call the `functions` administrator endpoint (`http://localhost:{port}/admin/functions/`) to get URLs for all available functions, both HTTP triggered and non-HTTP triggered.
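For example, the following sketch triggers a function named `QueueTrigger` through the administration endpoint; the function name, port, and input payload are placeholders:

```command
curl --request POST -H "Content-Type: application/json" --data '{"input":"sample queue data"}' http://localhost:7071/admin/functions/QueueTrigger
```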
-When running locally, authentication and authorization is bypassed. However, when you try to call the same administrator endpoints on your function app in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
+When running your functions in Core Tools, authentication and authorization is bypassed. However, when you try to call the same administrator endpoints on your function app in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
>[!IMPORTANT]
>Access keys are valuable shared secrets. When used locally, they must be securely stored outside of source control. Because authentication and authorization isn't required by Functions when running locally, you should avoid using and storing access keys unless your scenarios require it.
The Azure Functions Core Tools supports two types of deployment:
A project folder may contain language-specific files and directories that shouldn't be published. Excluded items are listed in a .funcignore file in the root project folder.
-You must have already [created a function app in your Azure subscription](functions-cli-samples.md#create), to which you'll deploy your code. Projects that require compilation should be built so that the binaries can be deployed.
+You must have already [created a function app in your Azure subscription](functions-cli-samples.md#create), to which you can deploy your code. Projects that require compilation should be built so that the binaries can be deployed.
To learn how to create a function app from the command prompt or terminal window using the Azure CLI or Azure PowerShell, see [Create a Function App for serverless execution](./scripts/functions-cli-create-serverless.md).
The following considerations apply to this kind of deployment:
+ Java uses Maven to publish your local project to Azure. Instead, use the following command to publish to Azure: `mvn azure-functions:deploy`. Azure resources are created during initial deployment.
-+ You'll get an error if you try to publish to a `<FunctionAppName>` that doesn't exist in your subscription.
++ You get an error when you try to publish to a `<FunctionAppName>` that doesn't exist in your subscription.

### Kubernetes cluster
To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes
## Install extensions
-If you aren't able to use [extension bundles](functions-bindings-register.md#extension-bundles), you can use Azure Functions Core Tools locally to install the specific extension packages required by your project.
-
-> [!IMPORTANT]
-> You can't explicitly install extensions in a function app with extension bundles enabled. First, remove the `extensionBundle` section in *host.json* before explicitly installing extensions.
+> [!NOTE]
+> This section only applies to C# script (.csx) projects, which also rely on extension bundles. Compiled C# projects use NuGet extension packages in the regular way.
-The following items describe some reasons you might need to install extensions manually:
+In the rare event you aren't able to use [extension bundles](functions-bindings-register.md#extension-bundles), you can use Core Tools to install the specific extension packages required by your project. The following are some reasons why you might need to install extensions manually:
* You need to access a specific version of an extension not available in a bundle. * You need to access a custom extension not available in a bundle. * You need to access a specific combination of extensions not available in a single bundle.
-When you explicitly install extensions, a .NET project file named extensions.csproj is added to the root of your project. This file defines the set of NuGet packages required by your functions. While you can work with the [NuGet package references](/nuget/consume-packages/package-references-in-project-files) in this file, Core Tools lets you install extensions without having to manually edit this C# project file.
-
-There are several ways to use Core Tools to install the required extensions in your local project.
-
-### Install all extensions
-
-Use the following command to automatically add all extension packages used by the bindings in your local project:
-
-```command
-func extensions install
-```
+The following considerations apply when manually installing extensions:
-The command reads the *function.json* file to see which packages you need, installs them, and rebuilds the extensions project (extensions.csproj). It adds any new bindings at the current version but doesn't update existing bindings. Use the `--force` option to update existing bindings to the latest version when installing new ones. To learn more, see the [`func extensions install` command](functions-core-tools-reference.md#func-extensions-install).
++ To manually install extensions by using Core Tools, you must have the [.NET 6.0 SDK](https://dotnet.microsoft.com/download) installed.
-If your function app uses bindings or NuGet packages that Core Tools doesn't recognize, you must manually install the specific extension.
++ You can't explicitly install extensions in a function app with extension bundles enabled. First, remove the `extensionBundle` section in *host.json* before explicitly installing extensions.
-### Install a specific extension
++ The first time you explicitly install an extension, a .NET project file named extensions.csproj is added to the root of your app project. This file defines the set of NuGet packages required by your functions. While you can work with the [NuGet package references](/nuget/consume-packages/package-references-in-project-files) in this file, Core Tools lets you install extensions without having to manually edit this C# project file.

Use the following command to install a specific extension package at a specific version, in this case the Storage extension:
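```command
# The version shown here is illustrative; pin the extension version your project requires.
func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
```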
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/set-runtime-version.md
The function app restarts after the change is made to the site config.
## Next steps
-> [!div class="nextstepaction"]
-> [Target the correct runtime during local dev environment](functions-run-local.md#changing-core-tools-versions)
- > [!div class="nextstepaction"] > [See Release notes for runtime versions](https://github.com/Azure/azure-webjobs-sdk-script/releases)
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Title: Storage considerations for Azure Functions description: Learn about the storage requirements of Azure Functions and about encrypting stored data. Previously updated : 03/21/2023 Last updated : 06/13/2023 # Storage considerations for Azure Functions
Azure Functions requires an Azure Storage account when you create a function app
<sup>2</sup> Azure Files is set up by default, but you can [create an app without Azure Files](#create-an-app-without-azure-files) under certain conditions.
-> [!IMPORTANT]
-> Access to storage accounts used by function apps should be carefully managed, as the account may store function code and other important data. You should audit what apps and users have access to the storage account and limit access as appropriate. Note that permissions can come from [data actions in the assigned role](../role-based-access-control/role-definitions.md#control-and-data-actions) or through permission to perform the [listKeys operation]. In addition, you can configure [logging for data plane operations](#storage-logs).
+## Important considerations
-[listKeys operation]: /rest/api/storagerp/storage-accounts/list-keys
+Consider the following important facts about the storage accounts used by your function apps:
-## Storage account requirements
++ When your function app is hosted on the Consumption plan or Premium plan, your function code and configuration files are stored in Azure Files in the linked storage account. When you delete this storage account, the content is deleted and can't be recovered. For more information, see [Storage account was deleted](functions-recover-storage-account.md#storage-account-was-deleted).
-When creating a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. This requirement exists because Functions relies on Azure Storage for operations such as managing triggers and logging function executions. Some storage accounts don't support queues and tables. These accounts include blob-only storage accounts and Azure Premium Storage.
++ Important data, such as function code, [access keys](functions-bindings-http-webhook-trigger.md#authorization-keys), and other important service-related data, may be persisted in the storage account. You must carefully manage access to the storage accounts used by function apps in the following ways:
-To learn more about storage account types, see [Storage account overview](../storage/common/storage-account-overview.md).
+ + Audit and limit the access of apps and users to the storage account based on a least-privilege model. Permissions to the storage account can come from [data actions in the assigned role](../role-based-access-control/role-definitions.md#control-and-data-actions) or through permission to perform the [listKeys operation].
-While you can use an existing storage account with your function app, you must make sure that it meets these requirements. Storage accounts created as part of the function app create flow in the Azure portal are guaranteed to meet these storage account requirements. In the portal, unsupported accounts are filtered out when choosing an existing storage account while creating a function app. In this flow, you're only allowed to choose existing storage accounts in the same region as the function app you're creating. To learn more, see [Storage account location](#storage-account-location).
+ + Monitor both control plane activity (such as retrieving keys) and data plane operations (such as writing to a blob) in your storage account. Consider maintaining storage logs in a location other than Azure Storage. For more information, see [Storage logs](#storage-logs).
-Storage accounts secured by using firewalls or virtual private networks also can't be used in the portal creation flow. For more information, see [Restrict your storage account to a virtual network](functions-networking-options.md#restrict-your-storage-account-to-a-virtual-network).
+## Storage account requirements
-<!-- JH: Does using a Premium Storage account improve perf? -->
+Storage accounts created as part of the function app create flow in the Azure portal are guaranteed to work with the new function app. In the portal, unsupported accounts are filtered out when choosing an existing storage account while creating a function app. You can also use an existing storage account with your function app. The following restrictions apply to storage accounts used by your function app, so you must make sure an existing storage account meets these requirements:
-> [!IMPORTANT]
-> When using the Consumption/Premium hosting plan, your function code and binding configuration files are stored in Azure Files in the main storage account. When you delete the main storage account, this content is deleted and cannot be recovered.
++ The account type must support Blob, Queue, and Table storage. Some storage accounts don't support queues and tables. These accounts include blob-only storage accounts and Azure Premium Storage. To learn more about storage account types, see [Storage account overview](../storage/common/storage-account-overview.md).
++ Storage accounts already secured by using firewalls or virtual private networks can't be used in the portal creation flow. For more information, see [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network).
++ When creating your function app in the portal, you're only allowed to choose an existing storage account in the same region as the function app you're creating. This is a performance optimization and not a strict limitation. To learn more, see [Storage account location](#storage-account-location).

## Storage account guidance
The storage account must be accessible to the function app. If you need to use a
### Storage account connection setting
-By default, Functions clients will configure the AzureWebJobsStorage connection as a connection string stored in the [AzureWebJobsStorage application setting](./functions-app-settings.md#azurewebjobsstorage), but you can also [configure AzureWebJobsStorage to use an identity-based connection](functions-reference.md#connecting-to-host-storage-with-an-identity) without a secret.
+By default, function apps configure the `AzureWebJobsStorage` connection as a connection string stored in the [AzureWebJobsStorage application setting](./functions-app-settings.md#azurewebjobsstorage), but you can also [configure AzureWebJobsStorage to use an identity-based connection](functions-reference.md#connecting-to-host-storage-with-an-identity) without a secret.
Function apps are configured to use Azure Files by storing a connection string in the [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING application setting](./functions-app-settings.md#website_contentazurefileconnectionstring) and providing the name of the file share in the [WEBSITE_CONTENTSHARE application setting](./functions-app-settings.md#website_contentshare).
You may need to use separate storage accounts to [avoid host ID collisions](#avo
### Lifecycle management policy considerations
-Functions uses Blob storage to persist important information, such as [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). When you apply a [lifecycle management policy](../storage/blobs/lifecycle-management-overview.md) to your Blob Storage account, the policy may remove blobs needed by the Functions host. Because of this fact, you shouldn't apply such policies to the storage account used by Functions. If you do need to apply such a policy, remember to exclude containers used by Functions, which are prefixed with `azure-webjobs` or `scm`.
-
+You shouldn't apply [lifecycle management policies](../storage/blobs/lifecycle-management-overview.md) to your Blob Storage account used by your function app. Functions uses Blob storage to persist important information, such as [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys), and policies may remove blobs (such as keys) needed by the Functions host. If you must use policies, exclude containers used by Functions, which are prefixed with `azure-webjobs` or `scm`.
### Storage logs
-Azure Monitor resource logs can be used to track events against the storage data plane. See [Monitoring Azure Storage](../storage/blobs/monitor-blob-storage.md) for details on how to configure and examine these logs.
+Because function code and keys may be persisted in the storage account, logging of activity against the storage account is a good way to monitor for unauthorized access. Azure Monitor resource logs can be used to track events against the storage data plane. See [Monitoring Azure Storage](../storage/blobs/monitor-blob-storage.md) for details on how to configure and examine these logs.
-> [!IMPORTANT]
-> Important data such as function code and keys may be persisted in the storage account, and while you should limit access to prevent modification or deletion of this data, you may wish to additionally monitor for unintended access. The [Azure Monitor activity log](../azure-monitor/essentials/activity-log.md) will only show data plane events, including the [listKeys operation], but later use of the key or any identity-based data plane operations will only be visible if you have configured resource logs for the storage account. Having at least the [StorageWrite log category](../storage/blobs/monitor-blob-storage.md#collection-and-routing) enabled can help you identify modifications to the data outside of normal Functions operation. To limit the potential impact of any broadly scoped storage permissions, consider using a non-Storage destination for these logs, such as Log Analytics.
+The [Azure Monitor activity log](../azure-monitor/essentials/activity-log.md) shows control plane events, including the [listKeys operation]. However, you should also configure resource logs for the storage account to track subsequent use of keys or other identity-based data plane operations. You should have at least the [StorageWrite log category](../storage/blobs/monitor-blob-storage.md#collection-and-routing) enabled to be able to identify modifications to the data outside of normal Functions operations.
+
+To limit the potential impact of any broadly scoped storage permissions, consider using a nonstorage destination for these logs, such as Log Analytics. For more information, see [Monitoring Azure Blob Storage](../storage/blobs/monitor-blob-storage.md).
### Optimize storage performance
There are several ways to execute your function code based on changes to blobs i
| Filters | [Blob name pattern](./functions-bindings-storage-blob-trigger.md#blob-name-patterns) | [Event filters](../storage/blobs/storage-blob-event-overview.md#filtering-events) | n/a | [Event filters](../storage/blobs/storage-blob-event-overview.md#filtering-events) |
| Requires [event subscription](../event-grid/concepts.md#event-subscriptions) | No | Yes | No | Yes |
| Supports high-scale² | No | Yes | Yes | Yes |
-| Description | Default trigger behavior, which relies on polling the container for updates. For more information, see the examples in the [Blob storage trigger reference](./functions-bindings-storage-blob-trigger.md#example). | Consumes blob storage events from an event subscription. Requires a `Source` parameter value of `EventGrid`. For more information, see [Tutorial: Trigger Azure Functions on blob containers using an event subscription](./functions-event-grid-blob-trigger.md). | Blob name string is manually added to a storage queue when a blob is added to the container. This value is passed directly by a Queue Storage trigger to a Blob Storage input binding on the same function. | Provides the flexibility of triggering on events besides those coming from a storage container. Use when need to also have non-storage events trigger your function. For more information, see [How to work with Event Grid triggers and bindings in Azure Functions](event-grid-how-tos.md). |
+| Description | Default trigger behavior, which relies on polling the container for updates. For more information, see the examples in the [Blob storage trigger reference](./functions-bindings-storage-blob-trigger.md#example). | Consumes blob storage events from an event subscription. Requires a `Source` parameter value of `EventGrid`. For more information, see [Tutorial: Trigger Azure Functions on blob containers using an event subscription](./functions-event-grid-blob-trigger.md). | Blob name string is manually added to a storage queue when a blob is added to the container. This value is passed directly by a Queue Storage trigger to a Blob Storage input binding on the same function (see the sketch after the footnotes below). | Provides the flexibility of triggering on events besides those coming from a storage container. Use when you need to have nonstorage events also trigger your function. For more information, see [How to work with Event Grid triggers and bindings in Azure Functions](event-grid-how-tos.md). |
<sup>1</sup> Blob Storage input and output bindings support blob-only accounts.

<sup>2</sup> High scale can be loosely defined as containers that have more than 100,000 blobs in them or storage accounts that have more than 100 blob updates per second.
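As a concrete illustration of the "Queue trigger + blob input" column in the preceding table, here's a minimal sketch using the Node.js v4 programming model. This is a hedged sketch rather than the article's own sample: the queue name, container path, and `MyStorageConnection` app setting are hypothetical placeholders, and it assumes the `@azure/functions` v4 `input.storageBlob`/`extraInputs` API.

```javascript
const { app, input } = require('@azure/functions');

// Blob input binding: {queueTrigger} resolves to the queue message text,
// which is expected to be the name of the newly added blob.
// 'samples-workitems' and 'MyStorageConnection' are illustrative placeholders.
const blobInput = input.storageBlob({
    path: 'samples-workitems/{queueTrigger}',
    connection: 'MyStorageConnection'
});

app.storageQueue('processNewBlob', {
    queueName: 'blob-names', // placeholder queue that receives blob names
    connection: 'MyStorageConnection',
    extraInputs: [blobInput],
    handler: (queueItem, context) => {
        // The blob content arrives through the extra input binding.
        const blob = context.extraInputs.get(blobInput);
        context.log(`Processing blob "${queueItem}" (${blob.length} bytes)`);
    }
});
```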
Other platform-managed customer data is only stored within the region when hosti
## Host ID considerations
-Functions uses a host ID value as a way to uniquely identify a particular function app in stored artifacts. By default, this ID is auto-generated from the name of the function app, truncated to the first 32 characters. This ID is then used when storing per-app correlation and tracking information in the linked storage account. When you have function apps with names longer than 32 characters and when the first 32 characters are identical, this truncation can result in duplicate host ID values. When two function apps with identical host IDs use the same storage account, you get a host ID collision because stored data can't be uniquely linked to the correct function app.
+Functions uses a host ID value as a way to uniquely identify a particular function app in stored artifacts. By default, this ID is autogenerated from the name of the function app, truncated to the first 32 characters. This ID is then used when storing per-app correlation and tracking information in the linked storage account. When you have function apps with names longer than 32 characters and when the first 32 characters are identical, this truncation can result in duplicate host ID values. When two function apps with identical host IDs use the same storage account, you get a host ID collision because stored data can't be uniquely linked to the correct function app.
>[!NOTE]
>This same kind of host ID collision can occur between a function app in a production slot and the same function app in a staging slot, when both slots use the same storage account.
Learn more about Azure Functions hosting options.
> [!div class="nextstepaction"]
> [Azure Functions scale and hosting](functions-scale.md)
+[listKeys operation]: /rest/api/storagerp/storage-accounts/list-keys
azure-maps Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md
Title: Web SDK supported browsers | Microsoft Azure Maps
+ Title: Web SDK supported browsers
+ description: Find out how to check whether the Azure Maps Web SDK supports a browser. View a list of supported browsers. Learn how to use map services with legacy browsers. Previously updated : 03/25/2019 Last updated : 06/22/2023
The following Web SDK modules are also supported in Node.js:
## <a name="Target-Legacy-Browsers"></a>Target legacy browsers
-You might want to target older browsers that don't support WebGL or that have only limited support for it. In such cases, we recommend that you use Azure Maps services together with an open-source map control like [Leaflet](https://leafletjs.com/). Here's an example that makes use of the open source [Azure Maps Leaflet plugin](https://github.com/azure-samples/azure-maps-leaflet).
+You might want to target older browsers that don't support WebGL or that have only limited support for it. In such cases, you can use Azure Maps services together with an open-source map control like [Leaflet](https://leafletjs.com/).
-<br/>
+The [Render Azure Maps in Leaflet] Azure Maps sample shows how to render Azure Maps Raster Tiles in the Leaflet JS map control. This sample uses the open source [Azure Maps Leaflet plugin]. For the source code for this sample, see [Render Azure Maps in Leaflet sample source code].
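For orientation, the following is a minimal sketch of the plugin's typical usage, assuming the plugin exposes the `L.tileLayer.azureMaps` factory with an `authOptions` object as its README describes; the subscription key is a placeholder.

```javascript
// Assumes both Leaflet and the azure-maps-leaflet plugin scripts are loaded.
var map = L.map('myMap', {
    center: [0, 0],
    zoom: 2
});

// Create a tile layer that renders Azure Maps raster tiles in Leaflet.
// '<Your Azure Maps Key>' is a placeholder for your subscription key.
L.tileLayer.azureMaps({
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Key>'
    }
}).addTo(map);
```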
+<!--
<iframe height="500" scrolling="no" title="Azure Maps + Leaflet" src="//codepen.io/azuremaps/embed/GeLgyx/?height=500&theme-id=0&default-tab=html,result" frameborder="no" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/GeLgyx/'>Azure Maps + Leaflet</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
-For code samples using Azure Maps in Leaflet, see [Azure Maps Samples](https://samples.azuremaps.com/?search=leaflet).
+For more code samples using Azure Maps in Leaflet, see [Azure Maps Samples].
-For a list of third-party map control plug-ins, see [Azure Maps community - Open-source projects](open-source-projects.md#third-part-map-control-plugins).
+For a list of third-party map control plug-ins, see [Azure Maps community - Open-source projects].
## Next steps

Learn more about the Azure Maps Web SDK:
-[Map control](how-to-use-map-control.md)
+> [!div class="nextstepaction"]
+> [Map control](how-to-use-map-control.md)
-[Services module](how-to-use-services-module.md)
+> [!div class="nextstepaction"]
+> [Services module](how-to-use-services-module.md)
+
+[Render Azure Maps in Leaflet]: https://samples.azuremaps.com/third-party-map-controls/render-azure-maps-in-leaflet
+[Render Azure Maps in Leaflet sample source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Third%20Party%20Map%20Controls/Render%20Azure%20Maps%20in%20Leaflet/Render%20Azure%20Maps%20in%20Leaflet.html
+[Azure Maps Leaflet plugin]: https://github.com/azure-samples/azure-maps-leaflet
+[Azure Maps Samples]: https://samples.azuremaps.com/?search=leaflet
+[Azure Maps community - Open-source projects]: open-source-projects.md#third-part-map-control-plugins
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 01/24/2023 Last updated : 06/23/2023 ms.devlang: csharp, java, javascript, vb
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
Title: Filtering and preprocessing in the Application Insights SDK | Microsoft Docs description: Write telemetry processors and telemetry initializers for the SDK to filter or add properties to the data before the telemetry is sent to the Application Insights portal. Previously updated : 11/14/2022 Last updated : 06/23/2023 ms.devlang: csharp, javascript, python
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
This section lists all supported platforms and frameworks.
### Logging frameworks

* [ILogger](./ilogger.md)
* [Log4Net, NLog, or System.Diagnostics.Trace](./asp-net-trace-logs.md)
-* [Log4J, Logback, or java.util.logging](./opentelemetry-enable.md?tabs=java#logs)
+* [Log4J, Logback, or java.util.logging](./opentelemetry-add-modify.md?tabs=java#logs)
* [LogStash plug-in](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-output-applicationinsights) * [Azure Monitor](/archive/blogs/msoms/application-insights-connector-in-oms)
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Title: Application Map in Azure Application Insights | Microsoft Docs description: Monitor complex application topologies with Application Map and Intelligent view. Previously updated : 11/15/2022 Last updated : 06/23/2023 ms.devlang: csharp, java, javascript, python
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Below is the currently supported list of dependency calls that are automatically
### Java

See the list of Application Insights Java's
-[autocollected dependencies](opentelemetry-enable.md?tabs=java#included-instrumentation-libraries).
+[autocollected dependencies](opentelemetry-add-modify.md?tabs=java#included-instrumentation-libraries).
### Node.js
A list of the latest [currently supported modules](https://github.com/microsoft/
* [Exceptions](./asp-net-exceptions.md)
* [User and page data](./javascript.md)
* [Availability](./availability-overview.md)
-* Set up custom dependency tracking for [Java](opentelemetry-enable.md?tabs=java#add-custom-spans).
+* Set up custom dependency tracking for [Java](opentelemetry-add-modify.md?tabs=java#add-custom-spans).
* Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md).
* [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency)
* See [data model](./data-model-complete.md) for Application Insights types and data model.
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Title: Azure AD authentication for Application Insights description: Learn how to enable Azure Active Directory (Azure AD) authentication to ensure that only authenticated telemetry is ingested in your Application Insights resources. Previously updated : 02/14/2023 Last updated : 06/23/2023 ms.devlang: csharp, java, javascript, python
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
Monitoring of your Java web applications running on [Azure App Services](../../a
The recommended way to enable application monitoring for Java applications running on Azure App Services is through Azure portal. Turning on application monitoring in Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes.
-You can apply extra configurations, and then based on your specific scenario you [add your own custom telemetry](./opentelemetry-enable.md?tabs=java#modify-telemetry) if needed.
+You can apply extra configurations, and then based on your specific scenario you [add your own custom telemetry](./opentelemetry-add-modify.md?tabs=java#modify-telemetry) if needed.
### Auto-instrumentation through Azure portal
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
description: Learn how to track custom operations with the Application Insights
ms.devlang: csharp Previously updated : 11/26/2019 Last updated : 06/23/2023
azure-monitor Data Model Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md
ibiza Previously updated : 03/17/2023 Last updated : 06/23/2023

# Application Insights telemetry data model
To learn more:
- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
- Check out standard context properties collection [configuration](./configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet).
- Explore [.NET trace logs in Application Insights](./asp-net-trace-logs.md).
-- Explore [Java trace logs in Application Insights](./opentelemetry-enable.md?tabs=java#logs).
+- Explore [Java trace logs in Application Insights](./opentelemetry-add-modify.md?tabs=java#logs).
- Learn about the [Azure Functions built-in integration with Application Insights](../../azure-functions/functions-monitoring.md?toc=/azure/azure-monitor/toc.json) to monitor functions executions.
- Learn how to [configure an ASP.NET Core](./asp-net.md) application with Application Insights.
- Learn how to [diagnose exceptions in your web apps with Application Insights](./asp-net-exceptions.md).
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
# Data collection, retention, and storage in Application Insights
-When you install the [Application Insights][start] SDK in your app, it sends telemetry about your app to the cloud. As a responsible developer, you want to know exactly what data is sent, what happens to the data, and how you can keep control of it. In particular, could sensitive data be sent, where is it stored, and how secure is it?
+When you install the [Application Insights][start] SDK in your app, it sends telemetry about your app to the [cloud](create-workspace-resource.md). As a responsible developer, you want to know exactly what data is sent, what happens to the data, and how you can keep control of it. In particular, could sensitive data be sent, where is it stored, and how secure is it?
First, the short answer:
The rest of this article discusses these points more fully. The article is self-
[Application Insights][start] is a service provided by Microsoft that helps you improve the performance and usability of your live application. It monitors your application all the time it's running, both during testing and after you've published or deployed it. Application Insights creates charts and tables that show you informative metrics. For example, you might see what times of day you get most users, how responsive the app is, and how well it's served by any external services that it depends on. If there are failures or performance issues, you can search through the telemetry data to diagnose the cause. The service sends you emails if there are any changes in the availability and performance of your app.
-To get this functionality, you install an Application Insights SDK in your application, which becomes part of its code. When your app is running, the SDK monitors its operation and sends telemetry to Application Insights, which is a cloud service hosted by [Microsoft Azure](https://azure.com). Application Insights also works for any applications, not just applications that are hosted in Azure.
+To get this functionality, you install an Application Insights SDK in your application, which becomes part of its code. When your app is running, the SDK monitors its operation and sends telemetry to an [Application Insights Log Analytics workspace](create-workspace-resource.md), which is a cloud service hosted by [Microsoft Azure](https://azure.com). Application Insights also works for any applications, not just applications that are hosted in Azure.
Application Insights stores and analyzes the telemetry. To see the analysis or search through the stored telemetry, you sign in to your Azure account and open the Application Insights resource for your application. You can also share access to the data with other members of your team, or with specified Azure subscribers.
You'll need to write a [telemetry processor plug-in](./api-filtering-sampling.md
## How long is the data kept?
-Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table) of 30, 60, 90, 120, 180, 270, 365, 550, or 730 days. If you need to keep data longer than 730 days, you can use [Continuous Export](./export-telemetry.md) to copy it to a storage account during data ingestion.
+Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table) of 30, 60, 90, 120, 180, 270, 365, 550, or 730 days. If you need to keep data longer than 730 days, you can use [diagnostic settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
Data kept longer than 90 days incurs extra charges. For more information about Application Insights pricing, see the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
The first time you do this step, you're asked to configure a link to your Azure
In addition to the out-of-the-box telemetry sent by Application Insights SDK, you can:
-* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./opentelemetry-enable.md?tabs=java#logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events.
+* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./opentelemetry-add-modify.md?tabs=java#logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events.
* [Write code](./api-custom-events-metrics.md) to send custom events, page views, and exceptions.
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
Title: Application Insights IP address collection | Microsoft Docs description: Understand how Application Insights handles IP addresses and geolocation. Previously updated : 04/06/2023 Last updated : 06/23/2023
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
and then at the beginning of each request, call:
Span.current().setAttribute("mycustomer", "xyz");
```
-Also see: [Add a custom property to a Span](./opentelemetry-enable.md?tabs=java#add-a-custom-property-to-a-span).
+Also see: [Add a custom property to a Span](./opentelemetry-add-modify.md?tabs=java#add-a-custom-property-to-a-span).
## Connection string overrides (preview)
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
description: Learn how to install and use JavaScript feature extensions (Click A
ibiza Previously updated : 02/13/2023 Last updated : 06/23/2023 ms.devlang: javascript
appInsights.loadAppInsights();
-### 2. (Optional) Add a framework extension
+> [!TIP]
+> If you want to add a framework extension or you've already added one, see the [React, React Native, and Angular code samples for how to add the Click Analytics plug-in](./javascript-framework-extensions.md#2-add-the-extension-to-your-code).
-Add a framework extension, if needed.
+### 2. (Optional) Set the authenticated user context
-#### [React](#tab/react)
+If you want to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext).
-```javascript
-import React from 'react';
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
-
-var browserHistory = createBrowserHistory({ basename: '' });
-var reactPlugin = new ReactPlugin();
-var clickPluginInstance = new ClickAnalyticsPlugin();
-var clickPluginConfig = {
- autoCapture: true
-};
-var appInsights = new ApplicationInsights({
- config: {
- connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- extensions: [reactPlugin, clickPluginInstance],
- extensionConfig: {
- [reactPlugin.identifier]: { history: browserHistory },
- [clickPluginInstance.identifier]: clickPluginConfig
- }
- }
-});
-appInsights.loadAppInsights();
-```
-
-> [!NOTE]
-> To add React configuration, see [React configuration](./javascript-framework-extensions.md?tabs=react#add-configuration). For more information on the React plug-in, see [React plug-in](./javascript-framework-extensions.md?tabs=react).
-
-#### [React Native](#tab/reactnative)
-
-```typescript
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
-import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
-
-var clickPluginInstance = new ClickAnalyticsPlugin();
-var clickPluginConfig = {
- autoCapture: true
-};
-var RNPlugin = new ReactNativePlugin();
-var appInsights = new ApplicationInsights({
- config: {
- connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- extensions: [RNPlugin, clickPluginInstance],
- extensionConfig: {
- [clickPluginInstance.identifier]: clickPluginConfig
- }
- }
-});
-appInsights.loadAppInsights();
-```
-
-> [!NOTE]
-> For more information on the React Native plug-in, see [React Native plug-in](./javascript-framework-extensions.md?tabs=reactnative).
-
-#### [Angular](#tab/angular)
-
-```javascript
-import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js';
-import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
-import { Component } from '@angular/core';
-import { Router } from '@angular/router';
-
-@Component({
- selector: 'app-root',
- templateUrl: './app.component.html',
- styleUrls: ['./app.component.css']
-})
-export class AppComponent {
- constructor(
- private router: Router
- ){
- var angularPlugin = new AngularPlugin();
- var clickPluginInstance = new ClickAnalyticsPlugin();
- var clickPluginConfig = {
- autoCapture: true
- };
- const appInsights = new ApplicationInsights({ config: {
- connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- extensions: [angularPlugin, clickPluginInstance],
- extensionConfig: {
- [angularPlugin.identifier]: { router: this.router },
- [clickPluginInstance.identifier]: clickPluginConfig
- }
- } });
- appInsights.loadAppInsights();
- }
-}
-```
-
-> [!NOTE]
-> For more information on the Angular plug-in, see [Angular plug-in](./javascript-framework-extensions.md?tabs=angular).
---
-### 3. (Optional) Set the authenticated user context
-
-If you need to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext). This setting isn't required to use the Click Analytics plug-in.
+> [!NOTE]
+> If you're using a HEART workbook with the Click Analytics plug-in, you don't need to set the authenticated user context to see telemetry data. For more information, see the [HEART workbook documentation](./usage-heart.md#confirm-that-data-is-flowing).
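For orientation, here's a minimal sketch of setting and clearing the authenticated user context after sign-in and sign-out. The ID values are placeholders; the `setAuthenticatedUserContext` and `clearAuthenticatedUserContext` methods are documented in the API reference linked above.

```javascript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
    config: { connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE' }
});
appInsights.loadAppInsights();

// After sign-in: associate subsequent telemetry with a stable, non-PII user ID
// and (optionally) an account ID; pass true to persist across sessions via cookie.
appInsights.setAuthenticatedUserContext('user123', 'account456', true);

// On sign-out: stop associating telemetry with the authenticated user.
appInsights.clearAuthenticatedUserContext();
```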
## Use the plug-in
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
description: Learn how to install and use JavaScript framework extensions for th
ibiza Previously updated : 02/13/2023 Last updated : 06/23/2023 ms.devlang: javascript
Initialize a connection to Application Insights:
```javascript
import React from 'react';
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { ReactPlugin, withAITracking } from '@microsoft/applicationinsights-react-js';
+import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
import { createBrowserHistory } from "history";

const browserHistory = createBrowserHistory({ basename: '' });
var reactPlugin = new ReactPlugin();
To use this plugin, you need to construct the plugin and add it as an `extension
```typescript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
+// Add the Click Analytics plug-in.
+// import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
var RNPlugin = new ReactNativePlugin();
// Add the Click Analytics plug-in.
/* var clickPluginInstance = new ClickAnalyticsPlugin();
Set up an instance of Application Insights in the entry component in your app:
import { Component } from '@angular/core';
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js';
+// Add the Click Analytics plug-in.
+// import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
import { Router } from '@angular/router';

@Component({
extensionConfig: {
### [React](#tab/react)
-### Configuration
+### React router configuration
-| Name | Default | Description |
-|||-|
-| history | null | React router history. For more information, see the [React router package documentation](https://reactrouter.com/en/main). |
+| Name | Type | Required? | Default | Description |
+|------|------|-----------|---------|-------------|
+| history | object | Optional | null | Track router history. For more information, see the [React router package documentation](https://reactrouter.com/en/main).<br><br>To track router history, most users can use the `enableAutoRouteTracking` field in the [JavaScript SDK configuration](./javascript-sdk-configuration.md#sdk-configuration). This field collects the same data for page views as the `history` object. Use the `history` object when you're using a router implementation that doesn't update the browser URL, which is what the configuration listens to. You shouldn't enable both the `enableAutoRouteTracking` field and `history` object, because you'll get multiple page view events. |
-#### React components usage tracking
+### React components usage tracking
To instrument various React components usage tracking, apply the `withAITracking` higher-order component function.
azure-monitor Javascript Sdk Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-configuration.md
These configuration fields are optional and default to false unless otherwise st
| namePrefix | string | undefined | An optional value that is used as name postfix for localStorage and session cookie name.
| sessionCookiePostfix | string | undefined | An optional value that is used as name postfix for session cookie name. If undefined, namePrefix is used as name postfix for session cookie name.
| userCookiePostfix | string | undefined | An optional value that is used as name postfix for user cookie name. If undefined, no postfix is added on user cookie name.
-| enableAutoRouteTracking | boolean | false | Automatically track route changes in Single Page Applications (SPA). If true, each route change sends a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.
+| enableAutoRouteTracking | boolean | false | Automatically track route changes in Single Page Applications (SPA). If true, each route change sends a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.<br>***Note***: If you enable this field, you shouldn't also enable the `history` object for [React router configuration](./javascript-framework-extensions.md?tabs=react#react-router-configuration) because you'll get multiple page view events.<br>A minimal configuration sketch follows this table.
| enableRequestHeaderTracking | boolean | false | If true, AJAX & Fetch request headers are tracked. If ignoreHeaders isn't configured, Authorization and X-API-Key headers aren't logged.
| enableResponseHeaderTracking | boolean | false | If true, AJAX & Fetch request response headers are tracked. If ignoreHeaders isn't configured, the WWW-Authenticate header isn't logged.
| ignoreHeaders | string[] | ["Authorization", "X-API-Key", "WWW-Authenticate"] | AJAX & Fetch request and response headers to be ignored in log data. To override or discard the default, add an array with all headers to be excluded or an empty array to the configuration.
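As a minimal configuration sketch for the `enableAutoRouteTracking` row above (the connection string is a placeholder), and per the note in that row, don't combine this field with the React plug-in's `history` object:

```javascript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
    config: {
        connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
        // Each SPA route change (including hash changes) sends a new Pageview.
        enableAutoRouteTracking: true
    }
});
appInsights.loadAppInsights();
```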
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
Title: Microsoft Azure Monitor Application Insights JavaScript SDK description: Microsoft Azure Monitor Application Insights JavaScript SDK is a powerful tool for monitoring and analyzing web application performance. Previously updated : 03/07/2023 Last updated : 06/23/2023 ms.devlang: javascript
azure-monitor Kubernetes Codeless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md
Title: Monitor applications on AKS with Application Insights - Azure Monitor | M
description: Azure Monitor integrates seamlessly with your application running on Azure Kubernetes Service and allows you to spot the problems with your apps quickly. Previously updated : 11/15/2022 Last updated : 06/23/2023
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Title: Diagnose with Live Metrics - Application Insights - Azure Monitor description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events. Previously updated : 02/14/2023 Last updated : 06/23/2023 ms.devlang: csharp
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Title: Monitor applications running on Azure Functions with Application Insights
description: Azure Monitor integrates with your Azure Functions application, allowing performance monitoring and quickly identifying problems. Previously updated : 04/24/2023 Last updated : 06/23/2023
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
Title: Incoming request tracking in Application Insights with OpenCensus Python | Microsoft Docs description: Monitor request calls for your Python apps via OpenCensus Python. Previously updated : 03/22/2023 Last updated : 06/23/2023 ms.devlang: python
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
+
+ Title: Add and modify Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications
+description: This article provides guidance on how to add and modify OpenTelemetry for applications using Azure Monitor.
+ Last updated : 06/22/2023
+ms.devlang: csharp, javascript, typescript, python
++++
+# Add and modify OpenTelemetry
+
+This article provides guidance on how to add and modify OpenTelemetry for applications using [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview).
+
+To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+
+<!-- NOTE TO CONTRIBUTORS: PLEASE DO NOT SEPARATE OUT JAVASCRIPT AND TYPESCRIPT INTO DIFFERENT TABS. -->
+
+## Automatic data collection
+
+The distros automatically collect data by bundling OpenTelemetry "instrumentation libraries".
+
+### Included instrumentation libraries
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+Requests
+- [ASP.NET
+ Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+
+Dependencies
+- [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+- [SqlClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.SqlClient/README.md) <sup>[1](#FOOTNOTEONE)</sup>
+
+Logging
+- ILogger
+
+For more information about ILogger, see [Logging in C# and .NET](/dotnet/core/extensions/logging) and [code examples](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/logs).
+
+#### [.NET](#tab/net)
+
+The Azure Monitor Exporter doesn't include any instrumentation libraries.
+
+#### [Java](#tab/java)
+
+Requests
+* JMS consumers
+* Kafka consumers
+* Netty
+* Quartz
+* RabbitMQ
+* Servlets
+* Spring scheduling
+
+> [!NOTE]
+> Servlet and Netty autoinstrumentation covers the majority of Java HTTP services, including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut.
+
+Dependencies (plus downstream distributed trace propagation):
+* Apache HttpClient
+* Apache HttpAsyncClient
+* AsyncHttpClient
+* Google HttpClient
+* gRPC
+* java.net.HttpURLConnection
+* Java 11 HttpClient
+* JAX-RS client
+* Jetty HttpClient
+* JMS
+* Kafka
+* Netty client
+* OkHttp
+* RabbitMQ
+
+Dependencies (without downstream distributed trace propagation):
+* Cassandra
+* JDBC
+* MongoDB (async and sync)
+* Redis (Lettuce and Jedis)
+
+Metrics
+
+* Micrometer Metrics, including Spring Boot Actuator metrics
+* JMX Metrics
+
+Logs
+* Logback (including MDC properties) <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
+* Log4j (including MDC/Thread Context properties) <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
+* JBoss Logging (including MDC properties) <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
+* java.util.logging <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
+
+Telemetry emitted by these Azure SDKs is automatically collected by default:
+
+* [Azure App Configuration](/java/api/overview/azure/data-appconfiguration-readme) 1.1.10+
+* [Azure Cognitive Search](/java/api/overview/azure/search-documents-readme) 11.3.0+
+* [Azure Communication Chat](/java/api/overview/azure/communication-chat-readme) 1.0.0+
+* [Azure Communication Common](/java/api/overview/azure/communication-common-readme) 1.0.0+
+* [Azure Communication Identity](/java/api/overview/azure/communication-identity-readme) 1.0.0+
+* [Azure Communication Phone Numbers](/java/api/overview/azure/communication-phonenumbers-readme) 1.0.0+
+* [Azure Communication SMS](/java/api/overview/azure/communication-sms-readme) 1.0.0+
+* [Azure Cosmos DB](/java/api/overview/azure/cosmos-readme) 4.22.0+
+* [Azure Digital Twins - Core](/java/api/overview/azure/digitaltwins-core-readme) 1.1.0+
+* [Azure Event Grid](/java/api/overview/azure/messaging-eventgrid-readme) 4.0.0+
+* [Azure Event Hubs](/java/api/overview/azure/messaging-eventhubs-readme) 5.6.0+
+* [Azure Event Hubs - Azure Blob Storage Checkpoint Store](/java/api/overview/azure/messaging-eventhubs-checkpointstore-blob-readme) 1.5.1+
+* [Azure Form Recognizer](/java/api/overview/azure/ai-formrecognizer-readme) 3.0.6+
+* [Azure Identity](/java/api/overview/azure/identity-readme) 1.2.4+
+* [Azure Key Vault - Certificates](/java/api/overview/azure/security-keyvault-certificates-readme) 4.1.6+
+* [Azure Key Vault - Keys](/java/api/overview/azure/security-keyvault-keys-readme) 4.2.6+
+* [Azure Key Vault - Secrets](/java/api/overview/azure/security-keyvault-secrets-readme) 4.2.6+
+* [Azure Service Bus](/java/api/overview/azure/messaging-servicebus-readme) 7.1.0+
+* [Azure Storage - Blobs](/java/api/overview/azure/storage-blob-readme) 12.11.0+
+* [Azure Storage - Blobs Batch](/java/api/overview/azure/storage-blob-batch-readme) 12.9.0+
+* [Azure Storage - Blobs Cryptography](/java/api/overview/azure/storage-blob-cryptography-readme) 12.11.0+
+* [Azure Storage - Common](/java/api/overview/azure/storage-common-readme) 12.11.0+
+* [Azure Storage - Files Data Lake](/java/api/overview/azure/storage-file-datalake-readme) 12.5.0+
+* [Azure Storage - Files Shares](/java/api/overview/azure/storage-file-share-readme) 12.9.0+
+* [Azure Storage - Queues](/java/api/overview/azure/storage-queue-readme) 12.9.0+
+* [Azure Text Analytics](/java/api/overview/azure/ai-textanalytics-readme) 5.0.4+
+
+[//]: # "Azure Cosmos DB 4.22.0+ due to https://github.com/Azure/azure-sdk-for-java/pull/25571"
+
+[//]: # "the remaining above names and links scraped from https://azure.github.io/azure-sdk/releases/latest/java.html"
+[//]: # "and version synched manually against the oldest version in maven central built on azure-core 1.14.0"
+[//]: # ""
+[//]: # "var table = document.querySelector('#tg-sb-content > div > table')"
+[//]: # "var str = ''"
+[//]: # "for (var i = 1, row; row = table.rows[i]; i++) {"
+[//]: # " var name = row.cells[0].getElementsByTagName('div')[0].textContent.trim()"
+[//]: # " var stableRow = row.cells[1]"
+[//]: # " var versionBadge = stableRow.querySelector('.badge')"
+[//]: # " if (!versionBadge) {"
+[//]: # " continue"
+[//]: # " }"
+[//]: # " var version = versionBadge.textContent.trim()"
+[//]: # " var link = stableRow.querySelectorAll('a')[2].href"
+[//]: # " str += '* [' + name + '](' + link + ') ' + version + '\n'"
+[//]: # "}"
+[//]: # "console.log(str)"
+
+#### [Node.js](#tab/nodejs)
+
+The following OpenTelemetry Instrumentation libraries are included as part of Azure Monitor Application Insights Distro.
+
+Requests
+- [HTTP/HTTPS](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http) <sup>[2](#FOOTNOTETWO)</sup>
+
+Dependencies
+- [MongoDB](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mongodb)
+- [MySQL](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mysql)
+- [Postgres](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-pg)
+- [Redis](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-redis)
+- [Redis-4](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-redis-4)
+- [Azure SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/instrumentation/opentelemetry-instrumentation-azure-sdk)
+
+Logs
+- [Node.js console](https://nodejs.org/api/console.html)
+- [Bunyan](https://github.com/trentm/node-bunyan#readme)
+- [Winston](https://github.com/winstonjs/winston#readme)
++
+#### [Python](#tab/python)
+
+Requests
+- [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+- [FastApi](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-fastapi) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+- [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+
+Dependencies
+- [Psycopg2](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-psycopg2)
+- [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+- [Urllib](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+- [Urllib3](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib3) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
+
+Logs
+- [Python logging library](https://docs.python.org/3/howto/logging.html) <sup>[4](#FOOTNOTEFOUR)</sup>
+
+Examples of using the Python logging library can be found on [GitHub](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry/samples/logging).
+++
+**Footnotes**
+- <a name="FOOTNOTEONE">1</a>: Supports automatic reporting of *unhandled/uncaught* exceptions
+- <a name="FOOTNOTETWO">2</a>: Supports OpenTelemetry Metrics
+- <a name="FOOTNOTETHREE">3</a>: By default, logging is only collected at INFO level or higher. To change this setting, see the [configuration options](./java-standalone-config.md#autocollected-logging).
+- <a name="FOOTNOTEFOUR">4</a>: By default, logging is only collected at WARNING level or higher.
+
+> [!NOTE]
+> The Azure Monitor OpenTelemetry Distros include custom mapping and logic to automatically emit [Application Insights standard metrics](standard-metrics.md).
+
+> [!TIP]
+> The OpenTelemetry-based offerings currently emit all OpenTelemetry metrics as [Custom Metrics](opentelemetry-add-modify.md#add-custom-metrics) and [Performance Counters](standard-metrics.md#performance-counters) in Metrics Explorer. For .NET, Node.js, and Python, whatever you set as the meter name becomes the metrics namespace.
+
+### Add a community instrumentation library
+
+You can collect more data automatically when you include instrumentation libraries from the OpenTelemetry community.
+
+> [!NOTE]
+> We don't support and cannot guarantee the quality of community instrumentation libraries. If you would like to suggest a community instrumentation library for us to include in our distro, post or up-vote an idea in our [feedback community](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).
+
+> [!CAUTION]
+> Some instrumentation libraries are based on experimental OpenTelemetry semantic specifications. Adding them may leave you vulnerable to future breaking changes.
+
+### [ASP.NET Core](#tab/aspnetcore)
+
+To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTraceProvider` methods.
+
+The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics.
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddRuntimeInstrumentation());
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+var app = builder.Build();
+
+app.Run();
+```
+
+### [.NET](#tab/net)
+
+The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics.
+
+```csharp
+var metricsProvider = Sdk.CreateMeterProviderBuilder()
+ .AddRuntimeInstrumentation()
+ .AddAzureMonitorMetricExporter();
+```
+
+### [Java](#tab/java)
+You can't extend the Java Distro with community instrumentation libraries. To request that we include another instrumentation library, open an issue on our GitHub page. You can find a link to our GitHub page in [Next Steps](#next-steps).
+
+### [Node.js](#tab/nodejs)
+
+Other OpenTelemetry instrumentations are available [here](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node) and can be added by using the TraceHandler in ApplicationInsightsClient:
+
+ ```javascript
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');
+
+ const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+ const traceHandler = appInsights.getTraceHandler();
+ traceHandler.addInstrumentation(new ExpressInstrumentation());
+```
+
+### [Python](#tab/python)
+Currently unavailable.
+++
+## Collect custom telemetry
+
+This section explains how to collect custom telemetry from your application.
+
+Depending on your language and signal type, there are different ways to collect custom telemetry, including:
+
+- OpenTelemetry API
+- Language-specific logging/metrics libraries
+- Application Insights [Classic API](api-custom-events-metrics.md)
+
+The following table represents the currently supported custom telemetry types:
+
+| Language | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces |
+|----------|---------------|----------------|--------------|------------|------------|----------|--------|
+| **ASP.NET Core** | | | | | | | |
+| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
+| &nbsp;&nbsp;&nbsp;iLogger API | | | | | | | Yes |
+| &nbsp;&nbsp;&nbsp;AI Classic API | | | | | | | |
+| | | | | | | | |
+| **Java** | | | | | | | |
+| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
+| &nbsp;&nbsp;&nbsp;Logback, Log4j, JUL | | | | Yes | | | Yes |
+| &nbsp;&nbsp;&nbsp;Micrometer Metrics | | Yes | | | | | |
+| &nbsp;&nbsp;&nbsp;AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+| | | | | | | | |
+| **Node.js** | | | | | | | |
+| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
+| &nbsp;&nbsp;&nbsp;Console, Winston, Bunyan| | | | | | | Yes |
+| &nbsp;&nbsp;&nbsp;AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+| | | | | | | | |
+| **Python** | | | | | | | |
+| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
+| &nbsp;&nbsp;&nbsp;Python Logging Module | | | | | | | Yes |
+
+> [!NOTE]
+> Application Insights Java 3.x listens for telemetry that's sent to the Application Insights [Classic API](api-custom-events-metrics.md). Similarly, Application Insights Node.js 3.x collects events created with the Application Insights [Classic API](api-custom-events-metrics.md). This makes upgrading easier and fills a gap in our custom telemetry support until all custom telemetry types are supported via the OpenTelemetry API.
+
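For example, a Node.js app can keep sending custom events through the classic API shape while running the 3.x package. This is a minimal sketch, assuming the classic `setup`/`defaultClient` surface; the event name and properties are placeholders.

```javascript
const appInsights = require('applicationinsights');

// Classic API setup; per the note above, 3.x continues to collect events sent this way.
appInsights.setup('YOUR_CONNECTION_STRING_GOES_HERE').start();

appInsights.defaultClient.trackEvent({
    name: 'OrderSubmitted',          // placeholder event name
    properties: { orderId: '12345' } // placeholder properties
});
```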
+### Add custom metrics
+
+> [!NOTE]
+> Custom Metrics are under preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to [opt-in](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
+
+Consider collecting more metrics beyond what's provided by the instrumentation libraries.
+
+The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios, and you need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
+
+The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
+
+| OpenTelemetry Instrument | Azure Monitor Aggregation Type |
+|||
+| Counter | Sum |
+| Asynchronous Counter | Sum |
+| Histogram | Min, Max, Average, Sum and Count |
+| Asynchronous Gauge | Average |
+| UpDownCounter | Sum |
+| Asynchronous UpDownCounter | Sum |
+
+> [!CAUTION]
+> Aggregation types beyond what's shown in the table typically aren't meaningful.
+
+The [OpenTelemetry Specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#instrument)
+describes the instruments and provides examples of when you might use each one.
+
+> [!TIP]
+> The histogram is the most versatile and most closely equivalent to the Application Insights GetMetric [Classic API](api-custom-events-metrics.md). Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance.
+
+#### Histogram example
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+Application startup must subscribe to a Meter by name.
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+var app = builder.Build();
+
+app.Run();
+```
+
+The `Meter` must be initialized using that same name.
+
+```csharp
+var meter = new Meter("OTel.AzureMonitor.Demo");
+Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice");
+
+var rand = new Random();
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green"));
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
+myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+```
+
+#### [.NET](#tab/net)
+
+```csharp
+public class Program
+{
+ private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+
+ public static void Main()
+ {
+ using var meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddMeter("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorMetricExporter()
+ .Build();
+
+ Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice");
+
+ var rand = new Random();
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
+ myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+}
+```
+
+#### [Java](#tab/java)
+
+```java
+import io.opentelemetry.api.GlobalOpenTelemetry;
+import io.opentelemetry.api.metrics.DoubleHistogram;
+import io.opentelemetry.api.metrics.Meter;
+
+public class Program {
+
+ public static void main(String[] args) {
+ Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+ DoubleHistogram histogram = meter.histogramBuilder("histogram").build();
+ histogram.record(1.0);
+ histogram.record(100.0);
+ histogram.record(30.0);
+ }
+}
+```
+
+#### [Node.js](#tab/nodejs)
+
+```javascript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
+const meter = customMetricsHandler.getMeter();
+let histogram = meter.createHistogram("histogram");
+histogram.record(1, { "testKey": "testValue" });
+histogram.record(30, { "testKey": "testValue2" });
+histogram.record(100, { "testKey2": "testValue" });
+```
+
+#### [Python](#tab/python)
+
+```python
+from azure.monitor.opentelemetry import configure_azure_monitor
+from opentelemetry import metrics
+
+configure_azure_monitor(
+ connection_string="<your-connection-string>",
+)
+meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_histogram_demo")
+
+histogram = meter.create_histogram("histogram")
+histogram.record(1.0, {"test_key": "test_value"})
+histogram.record(100.0, {"test_key2": "test_value"})
+histogram.record(30.0, {"test_key": "test_value2"})
+
+input()
+```
+++
+#### Counter example
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+Application startup must subscribe to a Meter by name.
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+var app = builder.Build();
+
+app.Run();
+```
+
+The `Meter` must be initialized using that same name.
+
+```csharp
+var meter = new Meter("OTel.AzureMonitor.Demo");
+Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter");
+
+myFruitCounter.Add(1, new("name", "apple"), new("color", "red"));
+myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow"));
+myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow"));
+myFruitCounter.Add(2, new("name", "apple"), new("color", "green"));
+myFruitCounter.Add(5, new("name", "apple"), new("color", "red"));
+myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow"));
+```
+
+#### [.NET](#tab/net)
+
+```csharp
+public class Program
+{
+ private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+
+ public static void Main()
+ {
+ using var meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddMeter("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorMetricExporter()
+ .Build();
+
+ Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter");
+
+ myFruitCounter.Add(1, new("name", "apple"), new("color", "red"));
+ myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow"));
+ myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow"));
+ myFruitCounter.Add(2, new("name", "apple"), new("color", "green"));
+ myFruitCounter.Add(5, new("name", "apple"), new("color", "red"));
+ myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow"));
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+}
+```
+
+#### [Java](#tab/java)
+
+```java
+import io.opentelemetry.api.GlobalOpenTelemetry;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.common.Attributes;
+import io.opentelemetry.api.metrics.LongCounter;
+import io.opentelemetry.api.metrics.Meter;
+
+public class Program {
+
+ public static void main(String[] args) {
+ Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+
+ LongCounter myFruitCounter = meter
+ .counterBuilder("MyFruitCounter")
+ .build();
+
+ myFruitCounter.add(1, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "red"));
+ myFruitCounter.add(2, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
+ myFruitCounter.add(1, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
+ myFruitCounter.add(2, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "green"));
+ myFruitCounter.add(5, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "red"));
+ myFruitCounter.add(4, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
+ }
+}
+```
+
+#### [Node.js](#tab/nodejs)
+
+```javascript
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+ const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
+ const meter = customMetricsHandler.getMeter();
+ let counter = meter.createCounter("counter");
+ counter.add(1, { "testKey": "testValue" });
+ counter.add(5, { "testKey2": "testValue" });
+ counter.add(3, { "testKey": "testValue2" });
+```
+
+#### [Python](#tab/python)
+
+```python
+from azure.monitor.opentelemetry import configure_azure_monitor
+from opentelemetry import metrics
+
+configure_azure_monitor(
+ connection_string="<your-connection-string>",
+)
+meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_counter_demo")
+
+counter = meter.create_counter("counter")
+counter.add(1.0, {"test_key": "test_value"})
+counter.add(5.0, {"test_key2": "test_value"})
+counter.add(3.0, {"test_key": "test_value2"})
+
+input()
+```
+++
+#### Gauge example
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+Application startup must subscribe to a Meter by name.
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+var app = builder.Build();
+
+app.Run();
+```
+
+The `Meter` must be initialized using that same name.
+
+```csharp
+var process = Process.GetCurrentProcess();
+
+var meter = new Meter("OTel.AzureMonitor.Demo");
+ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process));
+
+static IEnumerable<Measurement<int>> GetThreadState(Process process)
+{
+ foreach (ProcessThread thread in process.Threads)
+ {
+ yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id));
+ }
+}
+```
+
+#### [.NET](#tab/net)
+
+```csharp
+public class Program
+{
+ private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
+
+ public static void Main()
+ {
+ using var meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddMeter("OTel.AzureMonitor.Demo")
+ .AddAzureMonitorMetricExporter()
+ .Build();
+
+ var process = Process.GetCurrentProcess();
+
+ ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process));
+
+ System.Console.WriteLine("Press Enter key to exit.");
+ System.Console.ReadLine();
+ }
+
+ private static IEnumerable<Measurement<int>> GetThreadState(Process process)
+ {
+ foreach (ProcessThread thread in process.Threads)
+ {
+ yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id));
+ }
+ }
+}
+```
+
+#### [Java](#tab/java)
+
+```java
+import io.opentelemetry.api.GlobalOpenTelemetry;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.common.Attributes;
+import io.opentelemetry.api.metrics.Meter;
+
+public class Program {
+
+ public static void main(String[] args) {
+ Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+
+ meter.gaugeBuilder("gauge")
+ .buildWithCallback(
+ observableMeasurement -> {
+ double randomNumber = Math.floor(Math.random() * 100);
+ observableMeasurement.record(randomNumber, Attributes.of(AttributeKey.stringKey("testKey"), "testValue"));
+ });
+ }
+}
+```
+
+#### [Node.js](#tab/nodejs)
+
+```typescript
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+ const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
+ const meter = customMetricsHandler.getMeter();
+ let gauge = meter.createObservableGauge("gauge");
+ gauge.addCallback((observableResult: ObservableResult) => {
+ let randomNumber = Math.floor(Math.random() * 100);
+ observableResult.observe(randomNumber, {"testKey": "testValue"});
+ });
+```
+
+#### [Python](#tab/python)
+
+```python
+from typing import Iterable
+
+from azure.monitor.opentelemetry import configure_azure_monitor
+from opentelemetry import metrics
+from opentelemetry.metrics import CallbackOptions, Observation
+
+configure_azure_monitor(
+ connection_string="<your-connection-string>",
+)
+meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_gauge_demo")
+
+def observable_gauge_generator(options: CallbackOptions) -> Iterable[Observation]:
+ yield Observation(9, {"test_key": "test_value"})
+
+def observable_gauge_sequence(options: CallbackOptions) -> Iterable[Observation]:
+ observations = []
+ for i in range(10):
+ observations.append(
+ Observation(9, {"test_key": i})
+ )
+ return observations
+
+gauge = meter.create_observable_gauge("gauge", [observable_gauge_generator])
+gauge2 = meter.create_observable_gauge("gauge2", [observable_gauge_sequence])
+
+input()
+```
+++
+### Add custom exceptions
+
+Some instrumentation libraries automatically report exceptions to Application Insights.
+However, you might want to manually report exceptions beyond what instrumentation libraries report.
+For instance, exceptions caught by your code aren't ordinarily reported. You might want to report them
+to draw attention in relevant experiences, including the failures section and end-to-end transaction views.
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+- To log an Exception using an Activity:
+ ```csharp
+ using (var activity = activitySource.StartActivity("ExceptionExample"))
+ {
+ try
+ {
+ throw new Exception("Test exception");
+ }
+ catch (Exception ex)
+ {
+ activity?.SetStatus(ActivityStatusCode.Error);
+ activity?.RecordException(ex);
+ }
+ }
+ ```
+- To log an Exception using ILogger:
+ ```csharp
+ var logger = loggerFactory.CreateLogger(logCategoryName);
+
+ try
+ {
+ throw new Exception("Test Exception");
+ }
+ catch (Exception ex)
+ {
+ logger.Log(
+ logLevel: LogLevel.Error,
+ eventId: 0,
+ exception: ex,
+ message: "Hello {name}.",
+ args: new object[] { "World" });
+ }
+ ```
+
+#### [.NET](#tab/net)
+
+- To log an Exception using an Activity:
+ ```csharp
+ using (var activity = activitySource.StartActivity("ExceptionExample"))
+ {
+ try
+ {
+ throw new Exception("Test exception");
+ }
+ catch (Exception ex)
+ {
+ activity?.SetStatus(ActivityStatusCode.Error);
+ activity?.RecordException(ex);
+ }
+ }
+ ```
+- To log an Exception using ILogger:
+ ```csharp
+ var logger = loggerFactory.CreateLogger("ExceptionExample");
+
+ try
+ {
+ throw new Exception("Test Exception");
+ }
+ catch (Exception ex)
+ {
+ logger.Log(
+ logLevel: LogLevel.Error,
+ eventId: 0,
+ exception: ex,
+ message: "Hello {name}.",
+ args: new object[] { "World" });
+ }
+ ```
+
+#### [Java](#tab/java)
+
+You can use `opentelemetry-api` to update the status of a span and record exceptions.
+
+1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
+
+ ```xml
+ <dependency>
+      <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
+
+1. Set status to `error` and record an exception in your code:
+
+ ```java
+ import io.opentelemetry.api.trace.Span;
+ import io.opentelemetry.api.trace.StatusCode;
+
+ Span span = Span.current();
+ span.setStatus(StatusCode.ERROR, "errorMessage");
+ span.recordException(e);
+ ```
+
+#### [Node.js](#tab/nodejs)
+
+```javascript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+
+const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+const tracer = appInsights.getTraceHandler().getTracer();
+let span = tracer.startSpan("hello");
+try{
+ throw new Error("Test Error");
+}
+catch(error){
+ span.recordException(error);
+}
+```
+
+#### [Python](#tab/python)
+
+The OpenTelemetry Python SDK automatically captures and records exceptions thrown inside a span. See the following code sample for an example of this behavior.
+
+```python
+from azure.monitor.opentelemetry import configure_azure_monitor
+from opentelemetry import trace
+
+configure_azure_monitor(
+ connection_string="<your-connection-string>",
+)
+tracer = trace.get_tracer("otel_azure_monitor_exception_demo")
+
+# Exception events
+try:
+ with tracer.start_as_current_span("hello") as span:
+ # This exception will be automatically recorded
+ raise Exception("Custom exception message.")
+except Exception:
+ print("Exception raised")
+
+```
+
+If you would like to record exceptions manually, you can disable that option
+within the context manager and use `record_exception()` directly as shown in the following example:
+
+```python
+...
+with tracer.start_as_current_span("hello", record_exception=False) as span:
+ try:
+ raise Exception("Custom exception message.")
+ except Exception as ex:
+ # Manually record exception
+ span.record_exception(ex)
+...
+
+```
+++
+### Add custom spans
+
+You might want to add a custom span in two scenarios: when there's a dependency request that's not already collected by an instrumentation library, and when you want to model an application process as a span in the end-to-end transaction view.
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+> [!NOTE]
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
++
+```csharp
+internal static readonly ActivitySource activitySource = new("ActivitySourceName");
+
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddSource("ActivitySourceName"));
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+var app = builder.Build();
+
+app.MapGet("/", () =>
+{
+ using (var activity = activitySource.StartActivity("CustomActivity"))
+ {
+ // your code here
+ }
+
+ return $"Hello World!";
+});
+
+app.Run();
+```
+
+`StartActivity` defaults to `ActivityKind.Internal`, but you can provide any other `ActivityKind`.
+`ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal` are mapped to Application Insights `dependencies`.
+`ActivityKind.Server` and `ActivityKind.Consumer` are mapped to Application Insights `requests`.
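+
+For example, a minimal sketch (the activity name `CustomRequest` is illustrative) of starting an activity that lands in the `requests` table:
+
+```csharp
+// ActivityKind.Server maps to the Application Insights requests table.
+using (var activity = activitySource.StartActivity("CustomRequest", ActivityKind.Server))
+{
+    // your code here
+}
+```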
+
+#### [.NET](#tab/net)
+
+> [!NOTE]
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
+
+```csharp
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("ActivitySourceName")
+ .AddAzureMonitorTraceExporter()
+ .Build();
+
+var activitySource = new ActivitySource("ActivitySourceName");
+
+using (var activity = activitySource.StartActivity("CustomActivity"))
+{
+ // your code here
+}
+```
+
+`StartActivity` defaults to `ActivityKind.Internal`, but you can provide any other `ActivityKind`.
+`ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal` are mapped to Application Insights `dependencies`.
+`ActivityKind.Server` and `ActivityKind.Consumer` are mapped to Application Insights `requests`.
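+
+For example, a minimal sketch (the activity name `CustomRequest` is illustrative) of starting an activity that lands in the `requests` table:
+
+```csharp
+// ActivityKind.Server maps to the Application Insights requests table.
+using (var activity = activitySource.StartActivity("CustomRequest", ActivityKind.Server))
+{
+    // your code here
+}
+```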
+
+#### [Java](#tab/java)
+
+##### Use the OpenTelemetry annotation
+
+The simplest way to add your own spans is by using OpenTelemetry's `@WithSpan` annotation.
+
+Spans populate the `requests` and `dependencies` tables in Application Insights.
+
+1. Add `opentelemetry-instrumentation-annotations-1.21.0.jar` (or later) to your application:
+
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry.instrumentation</groupId>
+ <artifactId>opentelemetry-instrumentation-annotations</artifactId>
+ <version>1.21.0</version>
+ </dependency>
+ ```
+
+1. Use the `@WithSpan` annotation to emit a span each time your method is executed:
+
+ ```java
+ import io.opentelemetry.instrumentation.annotations.WithSpan;
+
+ @WithSpan(value = "your span name")
+ public void yourMethod() {
+ }
+ ```
+
+By default, the span ends up in the `dependencies` table with dependency type `InProc`.
+
+For methods representing a background job not captured by autoinstrumentation, we recommend applying the attribute `kind = SpanKind.SERVER` to the `@WithSpan` annotation to ensure they appear in the Application Insights `requests` table.
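+
+For example, a minimal sketch (the method name is illustrative):
+
+```java
+import io.opentelemetry.api.trace.SpanKind;
+import io.opentelemetry.instrumentation.annotations.WithSpan;
+
+@WithSpan(value = "your span name", kind = SpanKind.SERVER)
+public void yourBackgroundJob() {
+}
+```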
+
+##### Use the OpenTelemetry API
+
+If the preceding OpenTelemetry `@WithSpan` annotation doesn't meet your needs,
+you can add your spans by using the OpenTelemetry API.
+
+1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
+
+ ```xml
+ <dependency>
+      <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
+
+1. Use the `GlobalOpenTelemetry` class to create a `Tracer`:
+
+ ```java
+ import io.opentelemetry.api.GlobalOpenTelemetry;
+ import io.opentelemetry.api.trace.Tracer;
+
+ static final Tracer tracer = GlobalOpenTelemetry.getTracer("com.example");
+ ```
+
+1. Create a span, make it current, and then end it:
+
+    ```java
+    import io.opentelemetry.api.trace.Span;
+    import io.opentelemetry.context.Scope;
+
+ Span span = tracer.spanBuilder("my first span").startSpan();
+ try (Scope ignored = span.makeCurrent()) {
+ // do stuff within the context of this
+ } catch (Throwable t) {
+ span.recordException(t);
+ } finally {
+ span.end();
+ }
+ ```
+
+#### [Node.js](#tab/nodejs)
+
+```javascript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+
+const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+const tracer = appInsights.getTraceHandler().getTracer();
+let span = tracer.startSpan("hello");
+span.end();
+```
++
+#### [Python](#tab/python)
+
+The OpenTelemetry API can be used to add your own spans, which appear in the `requests` and `dependencies` tables in Application Insights.
+
+The code example shows how to use the `tracer.start_as_current_span()` method to start, make the span current, and end the span within its context.
+
+```python
+...
+from opentelemetry import trace
+
+tracer = trace.get_tracer(__name__)
+
+# The "with" context manager starts, makes the span current, and ends the span within it's context
+with tracer.start_as_current_span("my first span") as span:
+ try:
+ # Do stuff within the context of this
+ except Exception as ex:
+ span.record_exception(ex)
+...
+
+```
+
+By default, the span is in the `dependencies` table with a dependency type of `InProc`.
+
+If your method represents a background job not already captured by autoinstrumentation, we recommend setting the attribute `kind = SpanKind.SERVER` to ensure it appears in the Application Insights `requests` table.
+
+```python
+...
+from opentelemetry import trace
+from opentelemetry.trace import SpanKind
+
+tracer = trace.get_tracer(__name__)
+with tracer.start_as_current_span("my request span", kind=SpanKind.SERVER) as span:
+...
+```
+++
+<!--
+
+### Add Custom Events
+
+#### Span Events
+
+The OpenTelemetry Logs/Events API is still under development. In the meantime, you can use the OpenTelemetry Span API to create "Span Events", which populate the traces table in Application Insights. The string passed in to addEvent() is saved to the message field within the trace.
+
+> [!CAUTION]
+> Span Events are only recommended for when you need additional diagnostic metadata associated with your span. For other scenarios, such as describing business events, we recommend you wait for the release of the OpenTelemetry Events API.
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+Currently unavailable.
+
+#### [.NET](#tab/net)
+
+Currently unavailable.
+
+#### [Java](#tab/java)
+
+You can use `opentelemetry-api` to create span events, which populate the `traces` table in Application Insights. The string passed in to `addEvent()` is saved to the `message` field within the trace.
+
+1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
+
+ ```xml
+ <dependency>
+      <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
+
+1. Add span events in your code:
+
+ ```java
+ import io.opentelemetry.api.trace.Span;
+
+ Span.current().addEvent("eventName");
+ ```
+
+#### [Node.js](#tab/nodejs)
+
+Currently unavailable.
+
+#### [Python](#tab/python)
+
+Currently unavailable.
+++
+-->
+
+### Send custom telemetry using the Application Insights Classic API
+
+We recommend you use the OpenTelemetry APIs whenever possible, but there might be some scenarios when you have to use the Application Insights [Classic API](api-custom-events-metrics.md).
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+Not available in .NET.
+
+#### [.NET](#tab/net)
+
+Not available in .NET.
+
+#### [Java](#tab/java)
+
+1. Add `applicationinsights-core` to your application:
+
+ ```xml
+ <dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>applicationinsights-core</artifactId>
+ <version>3.4.14</version>
+ </dependency>
+ ```
+
+1. Create a `TelemetryClient` instance:
+
+ ```java
+ static final TelemetryClient telemetryClient = new TelemetryClient();
+ ```
+
+1. Use the client to send custom telemetry:
+
+ ##### Events
+
+ ```java
+ telemetryClient.trackEvent("WinGame");
+ ```
+
+ ##### Metrics
+
+ ```java
+ telemetryClient.trackMetric("queueLength", 42.0);
+ ```
+
+ ##### Dependencies
+
+ ```java
+ boolean success = false;
+ long startTime = System.currentTimeMillis();
+ try {
+ success = dependency.call();
+ } finally {
+ long endTime = System.currentTimeMillis();
+ RemoteDependencyTelemetry telemetry = new RemoteDependencyTelemetry();
+ telemetry.setSuccess(success);
+ telemetry.setTimestamp(new Date(startTime));
+ telemetry.setDuration(new Duration(endTime - startTime));
+ telemetryClient.trackDependency(telemetry);
+ }
+ ```
+
+ ##### Logs
+
+ ```java
+ telemetryClient.trackTrace(message, SeverityLevel.Warning, properties);
+ ```
+
+ ##### Exceptions
+
+ ```java
+ try {
+ ...
+ } catch (Exception e) {
+ telemetryClient.trackException(e);
+ }
+ ```
+
+#### [Node.js](#tab/nodejs)
++
+First, get the `LogHandler`:
+
+```javascript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+const logHandler = appInsights.getLogHandler();
+```
+
+Then use the `LogHandler` to send custom telemetry:
+
+##### Events
+
+```javascript
+let eventTelemetry = {
+ name: "testEvent"
+};
+logHandler.trackEvent(eventTelemetry);
+```
+
+##### Logs
+
+```javascript
+let traceTelemetry = {
+ message: "testMessage",
+ severity: "Information"
+};
+logHandler.trackTrace(traceTelemetry);
+```
+
+##### Exceptions
+
+```javascript
+try {
+ ...
+} catch (error) {
+ let exceptionTelemetry = {
+ exception: error,
+ severity: "Critical"
+ };
+ logHandler.trackException(exceptionTelemetry);
+}
+```
+
+#### [Python](#tab/python)
+
+Not available in Python.
+++
+## Modify telemetry
+
+This section explains how to modify telemetry.
+
+### Add span attributes
+
+Span attributes let you add custom properties to your telemetry. You can also use attributes to set optional fields in the Application Insights schema, like Client IP.
+
+#### Add a custom property to a Span
+
+Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
+
+##### [ASP.NET Core](#tab/aspnetcore)
+
+To add span attributes, use either of the following approaches:
+
+* Use options provided by [instrumentation libraries](opentelemetry-enable.md#install-the-client-library).
+* Add a custom span processor.
+
+> [!TIP]
+> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the [HttpRequestMessage](/dotnet/api/system.net.http.httprequestmessage) and the [HttpResponseMessage](/dotnet/api/system.net.http.httpresponsemessage) itself. They can select anything from it and store it as an attribute.
+
+1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries:
+ - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
+ - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
+
+1. Use a custom processor:
+
+> [!TIP]
+> Add the processor shown here *before* adding Azure Monitor.
+
+```csharp
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddProcessor(new ActivityEnrichingProcessor()));
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+var app = builder.Build();
+
+app.Run();
+```
+
+Add `ActivityEnrichingProcessor.cs` to your project with the following code:
+
+```csharp
+public class ActivityEnrichingProcessor : BaseProcessor<Activity>
+{
+ public override void OnEnd(Activity activity)
+ {
+ // The updated activity will be available to all processors which are called after this processor.
+ activity.DisplayName = "Updated-" + activity.DisplayName;
+ activity.SetTag("CustomDimension1", "Value1");
+ activity.SetTag("CustomDimension2", "Value2");
+ }
+}
+```
+
+##### [.NET](#tab/net)
+
+To add span attributes, use either of the following approaches:
+
+* Use options provided by instrumentation libraries.
+* Add a custom span processor.
+
+> [!TIP]
+> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the httpRequestMessage itself. They can select anything from it and store it as an attribute.
+
+1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries (a sketch of the HttpClient enrich option follows the custom processor example):
+ - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich)
+ - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
+ - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
+
+1. Use a custom processor:
+
+> [!TIP]
+> Add the processor shown here *before* the Azure Monitor Exporter.
+
+```csharp
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("OTel.AzureMonitor.Demo")
+ .AddProcessor(new ActivityEnrichingProcessor())
+ .AddAzureMonitorTraceExporter()
+ .Build();
+```
+
+Add `ActivityEnrichingProcessor.cs` to your project with the following code:
+
+```csharp
+public class ActivityEnrichingProcessor : BaseProcessor<Activity>
+{
+ public override void OnEnd(Activity activity)
+ {
+ // The updated activity will be available to all processors which are called after this processor.
+ activity.DisplayName = "Updated-" + activity.DisplayName;
+ activity.SetTag("CustomDimension1", "Value1");
+ activity.SetTag("CustomDimension2", "Value2");
+ }
+}
+```
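+
+As an illustration of the first approach, here's a sketch (assuming the rc9-era `Enrich` callback documented in the linked HttpClient readme) that stores the outgoing request method as a custom dimension:
+
+```csharp
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+    .AddHttpClientInstrumentation(options =>
+    {
+        // Copy the HTTP method of the outgoing request onto the activity.
+        options.Enrich = (activity, eventName, rawObject) =>
+        {
+            if (eventName == "OnStartActivity" && rawObject is HttpRequestMessage request)
+            {
+                activity.SetTag("RequestMethod", request.Method.Method);
+            }
+        };
+    })
+    .AddAzureMonitorTraceExporter()
+    .Build();
+```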
+
+##### [Java](#tab/java)
+
+You can use `opentelemetry-api` to add attributes to spans.
+
+Adding one or more span attributes populates the `customDimensions` field in the `requests`, `dependencies`, `traces`, or `exceptions` table.
+
+1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
+
+ ```xml
+ <dependency>
+      <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
+
+1. Add custom dimensions in your code:
+
+ ```java
+ import io.opentelemetry.api.trace.Span;
+ import io.opentelemetry.api.common.AttributeKey;
+
+ AttributeKey attributeKey = AttributeKey.stringKey("mycustomdimension");
+ Span.current().setAttribute(attributeKey, "myvalue1");
+ ```
+
+##### [Node.js](#tab/nodejs)
+
+```typescript
+const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base");
+const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
+
+const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
+
+class SpanEnrichingProcessor implements SpanProcessor{
+ forceFlush(): Promise<void>{
+ return Promise.resolve();
+ }
+ shutdown(): Promise<void>{
+ return Promise.resolve();
+ }
+ onStart(_span: Span): void{}
+ onEnd(span: ReadableSpan){
+ span.attributes["CustomDimension1"] = "value1";
+ span.attributes["CustomDimension2"] = "value2";
+ }
+}
+
+appInsights.getTraceHandler().addSpanProcessor(new SpanEnrichingProcessor());
+```
+
+##### [Python](#tab/python)
+
+Use a custom processor:
+
+```python
+...
+from azure.monitor.opentelemetry import configure_azure_monitor
+from opentelemetry import trace
+
+configure_azure_monitor(
+ connection_string="<your-connection-string>",
+)
+span_enrich_processor = SpanEnrichingProcessor()
+# Add the processor shown below to the current `TracerProvider`
+trace.get_tracer_provider().add_span_processor(span_enrich_processor)
+...
+```
+
+Add `SpanEnrichingProcessor.py` to your project with the following code:
+
+```python
+from opentelemetry.sdk.trace import SpanProcessor
+
+class SpanEnrichingProcessor(SpanProcessor):
+
+ def on_end(self, span):
+ span._name = "Updated-" + span.name
+ span._attributes["CustomDimension1"] = "Value1"
+ span._attributes["CustomDimension2"] = "Value2"
+```
+++
+#### Set the user IP
+
+You can populate the _client_IP_ field for requests by setting the `http.client_ip` attribute on the span. Application Insights uses the IP address to generate user location attributes and then [discards it by default](ip-collection.md#default-behavior).
+
+##### [ASP.NET Core](#tab/aspnetcore)
+
+Use the [add custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
+
+```csharp
+// only applicable in case of activity.Kind == Server
+activity.SetTag("http.client_ip", "<IP Address>");
+```
+
+##### [.NET](#tab/net)
+
+Use the [add custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
+
+```csharp
+// only applicable in case of activity.Kind == Server
+activity.SetTag("http.client_ip", "<IP Address>");
+```
+
+##### [Java](#tab/java)
+
+Java automatically populates this field.
+
+##### [Node.js](#tab/nodejs)
+
+Use the [add custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
+
+```typescript
+...
+const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
+
+class SpanEnrichingProcessor implements SpanProcessor{
+ ...
+
+ onEnd(span){
+ span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
+ }
+}
+```
+
+##### [Python](#tab/python)
+
+Use the [add custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `SpanEnrichingProcessor.py`:
+
+```python
+span._attributes["http.client_ip"] = "<IP Address>"
+```
+++
+#### Set the user ID or authenticated user ID
+
+You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the following guidance. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
+
+> [!IMPORTANT]
+> Consult applicable privacy laws before you set the Authenticated User ID.
+
+##### [ASP.NET Core](#tab/aspnetcore)
+
+Use the [add custom property example](#add-a-custom-property-to-a-span).
+
+```csharp
+activity?.SetTag("enduser.id", "<User Id>");
+```
+
+##### [.NET](#tab/net)
+
+Use the [add custom property example](#add-a-custom-property-to-a-span).
+
+```csharp
+activity?.SetTag("enduser.id", "<User Id>");
+```
+
+##### [Java](#tab/java)
+
+Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions` table.
+
+1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
+
+ ```xml
+ <dependency>
+      <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
+
+1. Set `user_Id` in your code:
+
+ ```java
+ import io.opentelemetry.api.trace.Span;
+
+ Span.current().setAttribute("enduser.id", "myuser");
+ ```
+
+##### [Node.js](#tab/nodejs)
+
+Use the [add custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
+
+```typescript
+...
+import { SemanticAttributes } from "@opentelemetry/semantic-conventions";
+
+class SpanEnrichingProcessor implements SpanProcessor{
+ ...
+
+ onEnd(span: ReadableSpan){
+ span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
+ }
+}
+```
+
+##### [Python](#tab/python)
+
+Use the [add custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
+
+```python
+span._attributes["enduser.id"] = "<User ID>"
+```
+++
+### Add log attributes
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+OpenTelemetry uses .NET's ILogger.
+Attaching custom dimensions to logs can be accomplished using a [message template](/dotnet/core/extensions/logging?tabs=command-line#log-message-template).
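+
+For example, a minimal sketch, assuming an `ILogger` instance named `logger` (the placeholder names and values are illustrative):
+
+```csharp
+// The "FoodName" and "FoodPrice" placeholders become custom dimensions on the log record.
+logger.LogInformation("Food {FoodName} costs {FoodPrice}.", "artichoke", 3.99);
+```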
+
+#### [.NET](#tab/net)
+
+OpenTelemetry uses .NET's ILogger.
+Attaching custom dimensions to logs can be accomplished using a [message template](/dotnet/core/extensions/logging?tabs=command-line#log-message-template).
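+
+For example, a minimal sketch, assuming an `ILogger` instance named `logger` (the placeholder names and values are illustrative):
+
+```csharp
+// The "FoodName" and "FoodPrice" placeholders become custom dimensions on the log record.
+logger.LogInformation("Food {FoodName} costs {FoodPrice}.", "artichoke", 3.99);
+```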
+
+#### [Java](#tab/java)
+
+Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). You can attach custom dimensions to your logs in the following ways (a sketch follows the list):
+
+* [Log4j 2 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` is captured as the log message)
+* [Log4j 2 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html)
+* [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html)
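+
+For example, a minimal sketch using the Log4j 2 Thread Context (the key and value are illustrative):
+
+```java
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.apache.logging.log4j.ThreadContext;
+
+public class Example {
+    private static final Logger logger = LogManager.getLogger(Example.class);
+
+    public static void main(String[] args) {
+        // Thread Context entries are captured as custom dimensions on the log record.
+        ThreadContext.put("customerId", "12345");
+        logger.warn("Order processing delayed");
+        ThreadContext.clearMap();
+    }
+}
+```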
+
+#### [Node.js](#tab/nodejs)
+
+Attributes can be added only when calling manual track APIs. Log attributes for the console, Bunyan, and Winston logging libraries aren't currently supported.
+
+```javascript
+const config = new ApplicationInsightsConfig();
+const appInsights = new ApplicationInsightsClient(config);
+const logHandler = appInsights.getLogHandler();
+const attributes = {
+ "testAttribute1": "testValue1",
+ "testAttribute2": "testValue2",
+ "testAttribute3": "testValue3"
+};
+logHandler.trackEvent({
+ name: "testEvent",
+ properties: attributes
+});
+```
+
+#### [Python](#tab/python)
+
+The Python [logging](https://docs.python.org/3/howto/logging.html) library is [autoinstrumented](#logs). You can attach custom dimensions to your logs by passing a dictionary into the `extra` argument of your logs.
+
+```python
+...
+logger.warning("WARNING: Warning log with properties", extra={"key1": "value1"})
+...
+
+```
+++
+### Filter telemetry
+
+You can use the following approaches to filter out telemetry before it leaves your application.
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries:
+ - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
+ - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
+
+1. Use a custom processor:
+
+ > [!TIP]
+ > Add the processor shown here *before* adding Azure Monitor.
+
+ ```csharp
+ var builder = WebApplication.CreateBuilder(args);
+
+ builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddProcessor(new ActivityFilteringProcessor()));
+ builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddSource("ActivitySourceName"));
+ builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+ var app = builder.Build();
+
+ app.Run();
+ ```
+
+ Add `ActivityFilteringProcessor.cs` to your project with the following code:
+
+ ```csharp
+ public class ActivityFilteringProcessor : BaseProcessor<Activity>
+ {
+ public override void OnStart(Activity activity)
+ {
+ // prevents all exporters from exporting internal activities
+ if (activity.Kind == ActivityKind.Internal)
+ {
+ activity.IsAllDataRequested = false;
+ }
+ }
+ }
+ ```
+
+1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported.
+
+#### [.NET](#tab/net)
+
+1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries:
+ - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
+ - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
+ - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
+
+1. Use a custom processor:
+
+ ```csharp
+ using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("OTel.AzureMonitor.Demo")
+ .AddProcessor(new ActivityFilteringProcessor())
+ .AddAzureMonitorTraceExporter()
+ .Build();
+ ```
+
+ Add `ActivityFilteringProcessor.cs` to your project with the following code:
+
+ ```csharp
+ public class ActivityFilteringProcessor : BaseProcessor<Activity>
+ {
+ public override void OnStart(Activity activity)
+ {
+ // prevents all exporters from exporting internal activities
+ if (activity.Kind == ActivityKind.Internal)
+ {
+ activity.IsAllDataRequested = false;
+ }
+ }
+ }
+ ```
+
+1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported.
++
+#### [Java](#tab/java)
+
+See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) and [telemetry processors](java-standalone-telemetry-processors.md).
+
+#### [Node.js](#tab/nodejs)
+
+1. Use the exclude URL option provided by many HTTP instrumentation libraries.
+
+ The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http):
+
+ ```typescript
+ const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
+ const { IncomingMessage } = require("http");
+ const { RequestOptions } = require("https");
+ const { HttpInstrumentationConfig }= require("@opentelemetry/instrumentation-http");
+
+ const httpInstrumentationConfig: HttpInstrumentationConfig = {
+ enabled: true,
+ ignoreIncomingRequestHook: (request: IncomingMessage) => {
+ // Ignore OPTIONS incoming requests
+ if (request.method === 'OPTIONS') {
+ return true;
+ }
+ return false;
+ },
+ ignoreOutgoingRequestHook: (options: RequestOptions) => {
+ // Ignore outgoing requests with /test path
+ if (options.path === '/test') {
+ return true;
+ }
+ return false;
+ }
+ };
+ const config = new ApplicationInsightsConfig();
+ config.instrumentations.http = httpInstrumentationConfig;
+ const appInsights = new ApplicationInsightsClient(config);
+ ```
+
+2. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans so they aren't exported, set `traceFlags` to `TraceFlags.NONE`.
+Use the [add custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
+
+ ```typescript
+ const { SpanKind, TraceFlags } = require("@opentelemetry/api");
+
+ class SpanEnrichingProcessor {
+ ...
+
+ onEnd(span) {
+ if(span.kind == SpanKind.INTERNAL){
+ span.spanContext().traceFlags = TraceFlags.NONE;
+ }
+ }
+ }
+ ```
+
+#### [Python](#tab/python)
+
+1. Exclude URLs with the `OTEL_PYTHON_EXCLUDED_URLS` environment variable:
+ ```
+ export OTEL_PYTHON_EXCLUDED_URLS="http://localhost:8080/ignore"
+ ```
+ Doing so excludes the endpoint shown in the following Flask example:
+
+ ```python
+ ...
+ import flask
+ from azure.monitor.opentelemetry import configure_azure_monitor
+
+ # Configure Azure monitor collection telemetry pipeline
+ configure_azure_monitor(
+ connection_string="<your-connection-string>",
+ )
+ app = flask.Flask(__name__)
+
+    # Requests sent to this endpoint won't be tracked due to the
+    # OTEL_PYTHON_EXCLUDED_URLS configuration
+ @app.route("/ignore")
+ def ignore():
+ return "Request received but not tracked."
+ ...
+ ```
+
+1. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans so they aren't exported, set `TraceFlags` to `DEFAULT`.
+
+ ```python
+ ...
+ from azure.monitor.opentelemetry import configure_azure_monitor
+ from opentelemetry import trace
+
+ configure_azure_monitor(
+ connection_string="<your-connection-string>",
+ )
+ trace.get_tracer_provider().add_span_processor(SpanFilteringProcessor())
+ ...
+ ```
+
+ Add `SpanFilteringProcessor.py` to your project with the following code:
+
+ ```python
+ from opentelemetry.trace import SpanContext, SpanKind, TraceFlags
+ from opentelemetry.sdk.trace import SpanProcessor
+
+ class SpanFilteringProcessor(SpanProcessor):
+
+        # prevents exporting spans from internal activities
+        def on_start(self, span, parent_context=None):
+            if span._kind is SpanKind.INTERNAL:
+                span._context = SpanContext(
+                    span.context.trace_id,
+                    span.context.span_id,
+                    span.context.is_remote,
+                    TraceFlags.DEFAULT,
+                    span.context.trace_state,
+                )
+
+ ```
++
+
+<!-- For more information, see [GitHub Repo](link). -->
+
+### Get the trace ID or span ID
+
+You might want to get the trace ID or span ID. If you send logs to a destination other than Application Insights, consider adding the trace ID or span ID to enable better correlation when you debug and diagnose issues.
+
+#### [ASP.NET Core](#tab/aspnetcore)
+
+> [!NOTE]
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
+
+```csharp
+Activity activity = Activity.Current;
+string traceId = activity?.TraceId.ToHexString();
+string spanId = activity?.SpanId.ToHexString();
+```
+
+#### [.NET](#tab/net)
+
+> [!NOTE]
+> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
+
+```csharp
+Activity activity = Activity.Current;
+string traceId = activity?.TraceId.ToHexString();
+string spanId = activity?.SpanId.ToHexString();
+```
+
+#### [Java](#tab/java)
+
+You can use `opentelemetry-api` to get the trace ID or span ID.
+
+1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
+
+ ```xml
+ <dependency>
+      <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
+
+1. Get the request trace ID and the span ID in your code:
+
+ ```java
+ import io.opentelemetry.api.trace.Span;
+
+ Span span = Span.current();
+ String traceId = span.getSpanContext().getTraceId();
+ String spanId = span.getSpanContext().getSpanId();
+ ```
+
+#### [Node.js](#tab/nodejs)
+
+Get the request trace ID and the span ID in your code:
+
+ ```javascript
+ const { trace } = require("@opentelemetry/api");
+
+ let spanId = trace.getActiveSpan().spanContext().spanId;
+ let traceId = trace.getActiveSpan().spanContext().traceId;
+ ```
+
+#### [Python](#tab/python)
+
+Get the request trace ID and the span ID in your code:
+
+ ```python
+ from opentelemetry import trace
+
+ trace_id = trace.get_current_span().get_span_context().trace_id
+ span_id = trace.get_current_span().get_span_context().span_id
+ ```
+++
+## Next steps
+
+### [ASP.NET Core](#tab/aspnetcore)
+
+- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md).
+- To review the source code, see the [Azure Monitor AspNetCore GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore).
+- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor AspNetCore NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore) page.
+- To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo).
+- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet).
+- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
+
+### [.NET](#tab/net)
+
+- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md).
+- To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter).
+- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor Exporter NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) page.
+- To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo).
+- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet).
+- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
+
+### [Java](#tab/java)
+
+- Review [Java autoinstrumentation configuration options](java-standalone-config.md).
+- To review the source code, see the [Azure Monitor Java autoinstrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java).
+- To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation).
+- To enable usage experiences, see [Enable web or browser user monitoring](javascript.md).
+- See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub.
+
+### [Node.js](#tab/nodejs)
+
+- To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta).
+- To install the npm package and check for updates, see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.
+- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js).
+- To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js).
+- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
+
+### [Python](#tab/python)
+
+- To review the source code and extra documentation, see the [Azure Monitor Distro GitHub repository](https://github.com/microsoft/ApplicationInsights-Python/blob/main/azure-monitor-opentelemetry/README.md).
+- To see extra samples and use cases, see [Azure Monitor Distro samples](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry/samples).
+- See the [release notes](https://github.com/microsoft/ApplicationInsights-Python/releases) on GitHub.
+- To install the PyPI package, check for updates, or view release notes, see the [Azure Monitor Distro PyPI Package](https://pypi.org/project/azure-monitor-opentelemetry/) page.
+- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-python).
+- To learn more about OpenTelemetry and its community, see the [OpenTelemetry Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python).
+- To see available OpenTelemetry instrumentations and components, see the [OpenTelemetry Contributor Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python-contrib).
+- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
export OTEL_TRACES_SAMPLER_ARG=0.1
> [!TIP]
-> When using fixed-rate/percentage sampling and you aren't sure what to set the sampling rate as, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](opentelemetry-enable.md#metrics), which are unaffected by sampling.
+> When using fixed-rate/percentage sampling and you aren't sure what to set the sampling rate as, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](opentelemetry-add-modify.md#metrics), which are unaffected by sampling.
## Enable Azure AD authentication
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 06/19/2023 Last updated : 06/22/2023 ms.devlang: csharp, javascript, typescript, python
# Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python, and Java applications
-This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We will walk through how to install the "Azure Monitor OpenTelemetry Distro." To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We walk through how to install the "Azure Monitor OpenTelemetry Distro". To learn more about OpenTelemetry concepts, see the [OpenTelemetry overview](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
## OpenTelemetry Release Status
pip install azure-monitor-opentelemetry --pre
### Enable Azure Monitor Application Insights
-To enable Azure Monitor Application Insights, you will make a minor modification to your application and set your "Connection String". The Connection String tells your application where to send the telemetry the Distro collects, and it's unique to you.
+To enable Azure Monitor Application Insights, you make a minor modification to your application and set your "Connection String". The Connection String tells your application where to send the telemetry the Distro collects, and it's unique to you.
#### Modify your Application ##### [ASP.NET Core](#tab/aspnetcore)
-Add `UseAzureMonitor()` to your application startup. Depending on your version of .NET, this will be in either your `startup.cs` or `program.cs` class.
+Add `UseAzureMonitor()` to your application startup. Depending on your version of .NET, it is in either your `startup.cs` or `program.cs` class.
```csharp var builder = WebApplication.CreateBuilder(args);
app.Run();
##### [.NET](#tab/net)
-Add the Azure Monitor Exporter to each OpenTelemetry signal in application startup. Depending on your version of .NET, this will be in either your `startup.cs` or `program.cs` class.
+Add the Azure Monitor Exporter to each OpenTelemetry signal in application startup. Depending on your version of .NET, it is in either your `startup.cs` or `program.cs` class.
```csharp var tracerProvider = Sdk.CreateTracerProviderBuilder() .AddAzureMonitorTraceExporter();
To copy your unique Connection String:
#### Paste the Connection String in your environment
-To paste your Connection String, select from the options below:
+To paste your Connection String, select from the following options:
A. Set via Environment Variable (Recommended)
Run your application and open your **Application Insights Resource** tab in the
:::image type="content" source="media/opentelemetry/server-requests.png" alt-text="Screenshot of the Application Insights Overview tab with server requests and server response time highlighted.":::
-That's it. Your application is now being monitored by Application Insights. Everything else below is optional and available for further customization.
+That's it. Your application is now monitored by Application Insights. Everything else below is optional and available for further customization.
Not working? Check out the troubleshooting page for [ASP.NET Core](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-dotnet), [Java](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-java), [Node.js](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-nodejs), or [Python](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-python).
Not working? Check out the troubleshooting page for [ASP.NET Core](/troubleshoot
As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md).
-
-## Automatic data collection
-
-The distros automatically collect data by bundling in OpenTelemetry "instrumentation libraries".
-
-### Included instrumentation libraries
-
-#### [ASP.NET Core](#tab/aspnetcore)
-
-Requests
-- [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
-
-Dependencies
-- [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
-- [SqlClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.SqlClient/README.md) <sup>[1](#FOOTNOTEONE)</sup>
-
-Logging
-- ILogger
-
-For more information about ILogger, see [Logging in C# and .NET](/dotnet/core/extensions/logging) and [code examples](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/logs).
-
-#### [.NET](#tab/net)
-
-The Azure Monitor Exporter does not include any instrumentation libraries.
-
-#### [Java](#tab/java)
-
-Requests
-* JMS consumers
-* Kafka consumers
-* Netty
-* Quartz
-* RabbitMQ
-* Servlets
-* Spring scheduling
-
-> [!NOTE]
-> Servlet and Netty autoinstrumentation covers the majority of Java HTTP services, including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut.
-
-Dependencies (plus downstream distributed trace propagation):
-* Apache HttpClient
-* Apache HttpAsyncClient
-* AsyncHttpClient
-* Google HttpClient
-* gRPC
-* java.net.HttpURLConnection
-* Java 11 HttpClient
-* JAX-RS client
-* Jetty HttpClient
-* JMS
-* Kafka
-* Netty client
-* OkHttp
-* RabbitMQ
-
-Dependencies (without downstream distributed trace propagation):
-* Cassandra
-* JDBC
-* MongoDB (async and sync)
-* Redis (Lettuce and Jedis)
-
-Metrics
-
-* Micrometer Metrics, including Spring Boot Actuator metrics
-* JMX Metrics
-
-Logs
-* Logback (including MDC properties) <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
-* Log4j (including MDC/Thread Context properties) <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
-* JBoss Logging (including MDC properties) <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
-* java.util.logging <sup>[1](#FOOTNOTEONE)</sup> <sup>[3](#FOOTNOTETHREE)</sup>
-
-Telemetry emitted by these Azure SDKs is automatically collected by default:
-
-* [Azure App Configuration](/java/api/overview/azure/data-appconfiguration-readme) 1.1.10+
-* [Azure Cognitive Search](/java/api/overview/azure/search-documents-readme) 11.3.0+
-* [Azure Communication Chat](/java/api/overview/azure/communication-chat-readme) 1.0.0+
-* [Azure Communication Common](/java/api/overview/azure/communication-common-readme) 1.0.0+
-* [Azure Communication Identity](/java/api/overview/azure/communication-identity-readme) 1.0.0+
-* [Azure Communication Phone Numbers](/java/api/overview/azure/communication-phonenumbers-readme) 1.0.0+
-* [Azure Communication SMS](/java/api/overview/azure/communication-sms-readme) 1.0.0+
-* [Azure Cosmos DB](/java/api/overview/azure/cosmos-readme) 4.22.0+
-* [Azure Digital Twins - Core](/java/api/overview/azure/digitaltwins-core-readme) 1.1.0+
-* [Azure Event Grid](/java/api/overview/azure/messaging-eventgrid-readme) 4.0.0+
-* [Azure Event Hubs](/java/api/overview/azure/messaging-eventhubs-readme) 5.6.0+
-* [Azure Event Hubs - Azure Blob Storage Checkpoint Store](/java/api/overview/azure/messaging-eventhubs-checkpointstore-blob-readme) 1.5.1+
-* [Azure Form Recognizer](/java/api/overview/azure/ai-formrecognizer-readme) 3.0.6+
-* [Azure Identity](/java/api/overview/azure/identity-readme) 1.2.4+
-* [Azure Key Vault - Certificates](/java/api/overview/azure/security-keyvault-certificates-readme) 4.1.6+
-* [Azure Key Vault - Keys](/java/api/overview/azure/security-keyvault-keys-readme) 4.2.6+
-* [Azure Key Vault - Secrets](/java/api/overview/azure/security-keyvault-secrets-readme) 4.2.6+
-* [Azure Service Bus](/java/api/overview/azure/messaging-servicebus-readme) 7.1.0+
-* [Azure Storage - Blobs](/java/api/overview/azure/storage-blob-readme) 12.11.0+
-* [Azure Storage - Blobs Batch](/java/api/overview/azure/storage-blob-batch-readme) 12.9.0+
-* [Azure Storage - Blobs Cryptography](/java/api/overview/azure/storage-blob-cryptography-readme) 12.11.0+
-* [Azure Storage - Common](/java/api/overview/azure/storage-common-readme) 12.11.0+
-* [Azure Storage - Files Data Lake](/java/api/overview/azure/storage-file-datalake-readme) 12.5.0+
-* [Azure Storage - Files Shares](/java/api/overview/azure/storage-file-share-readme) 12.9.0+
-* [Azure Storage - Queues](/java/api/overview/azure/storage-queue-readme) 12.9.0+
-* [Azure Text Analytics](/java/api/overview/azure/ai-textanalytics-readme) 5.0.4+
-
-[//]: # "Azure Cosmos DB 4.22.0+ due to https://github.com/Azure/azure-sdk-for-java/pull/25571"
-
-[//]: # "the remaining above names and links scraped from https://azure.github.io/azure-sdk/releases/latest/java.html"
-[//]: # "and version synched manually against the oldest version in maven central built on azure-core 1.14.0"
-[//]: # ""
-[//]: # "var table = document.querySelector('#tg-sb-content > div > table')"
-[//]: # "var str = ''"
-[//]: # "for (var i = 1, row; row = table.rows[i]; i++) {"
-[//]: # " var name = row.cells[0].getElementsByTagName('div')[0].textContent.trim()"
-[//]: # " var stableRow = row.cells[1]"
-[//]: # " var versionBadge = stableRow.querySelector('.badge')"
-[//]: # " if (!versionBadge) {"
-[//]: # " continue"
-[//]: # " }"
-[//]: # " var version = versionBadge.textContent.trim()"
-[//]: # " var link = stableRow.querySelectorAll('a')[2].href"
-[//]: # " str += '* [' + name + '](' + link + ') ' + version + '\n'"
-[//]: # "}"
-[//]: # "console.log(str)"
-
-#### [Node.js](#tab/nodejs)
-
-The following OpenTelemetry Instrumentation libraries are included as part of Azure Monitor Application Insights Distro.
-
-Requests
-- [HTTP/HTTPS](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http) <sup>[2](#FOOTNOTETWO)</sup>
-
-Dependencies
-- [MongoDB](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mongodb)
-- [MySQL](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mysql)
-- [Postgres](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-pg)
-- [Redis](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-redis)
-- [Redis-4](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-redis-4)
-- [Azure SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/instrumentation/opentelemetry-instrumentation-azure-sdk)
-
-Logs
-- [Node.js console](https://nodejs.org/api/console.html)
-- [Bunyan](https://github.com/trentm/node-bunyan#readme)
-- [Winston](https://github.com/winstonjs/winston#readme)
-
-
-#### [Python](#tab/python)
-
-Requests
-- [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
-- [FastApi](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-fastapi) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
-- [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
-
-Dependencies
-- [Psycopg2](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-psycopg2)
-- [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
-- [Urllib](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
-- [Urllib3](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib3) <sup>[1](#FOOTNOTEONE)</sup> <sup>[2](#FOOTNOTETWO)</sup>
-
-Logs
-- [Python logging library](https://docs.python.org/3/howto/logging.html) <sup>[4](#FOOTNOTEFOUR)</sup>
-
-Examples of using the Python logging library can be found on [GitHub](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry/samples/logging).
---
-**Footnotes**
-- <a name="FOOTNOTEONE">1</a>: Supports automatic reporting of *unhandled/uncaught* exceptions-- <a name="FOOTNOTETWO">2</a>: Supports OpenTelemetry Metrics-- <a name="FOOTNOTETHREE">3</a>: By default, logging is only collected at INFO level or higher. To change this setting, see the [configuration options](./java-standalone-config.md#autocollected-logging).-- <a name="FOOTNOTEFOUR">4</a>: By default, logging is only collected at WARNING level or higher..-
-> [!NOTE]
-> The Azure Monitor OpenTelemetry Distros include custom mapping and logic to automatically emit [Application Insights standard metrics](standard-metrics.md).
-
-> [!TIP]
-> The OpenTelemetry-based offerings currently emit all OpenTelemetry metrics as [Custom Metrics](opentelemetry-enable.md#add-custom-metrics) and [Performance Counters](standard-metrics.md#performance-counters) in Metrics Explorer. For .NET, Node.js, and Python, whatever you set as the meter name becomes the metrics namespace.
-
-### Add a community instrumentation library
-
-You can collect more data automatically when you include instrumentation libraries from the OpenTelemetry community.
-
-> [!NOTE]
-> We don't support and cannot guarantee the quality of community instrumentation libraries. If you would like to suggest a community instrumentation library for us to include in our distro, post or up-vote an idea in our [feedback community](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).
-
-> [!CAUTION]
-> Some instrumentation libraries are based on experimental OpenTelemetry semantic specifications. Adding them may leave you vulnerable to future breaking changes.
-
-### [ASP.NET Core](#tab/aspnetcore)
-
-To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTraceProvider` methods.
-
-The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect additional metrics.
-
-```csharp
-var builder = WebApplication.CreateBuilder(args);
-
-builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddRuntimeInstrumentation());
-builder.Services.AddOpenTelemetry().UseAzureMonitor();
-
-var app = builder.Build();
-
-app.Run();
-```
-
-### [.NET](#tab/net)
-
-The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect additional metrics.
-
-```csharp
-var metricsProvider = Sdk.CreateMeterProviderBuilder()
- .AddRuntimeInstrumentation()
- .AddAzureMonitorMetricExporter();
-```
-
-### [Java](#tab/java)
-You cannot extend the Java Distro with community instrumentation libraries. To request that we include another instrumentation library, please open an issue on our GitHub page. You can find a link to our GitHub page in [Next Steps](#next-steps).
-
-### [Node.js](#tab/nodejs)
-
-Other OpenTelemetry instrumentations are available [here](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node) and can be added by using the TraceHandler in ApplicationInsightsClient.
-
- ```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');
-
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const traceHandler = appInsights.getTraceHandler();
- traceHandler.addInstrumentation(new ExpressInstrumentation());
-```
-
-### [Python](#tab/python)
-Currently unavailable.
---
-## Collect custom telemetry
-
-This section explains how to collect custom telemetry from your application.
-
-Depending on your language and signal type, there are different ways to collect custom telemetry, including:
-
-- OpenTelemetry API
-- Language-specific logging/metrics libraries
-- Application Insights [Classic API](api-custom-events-metrics.md)
-
-The following table represents the currently supported custom telemetry types:
-
-| Language | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces |
|----------|---------------|----------------|--------------|------------|------------|----------|--------|
-| **ASP.NET Core** | | | | | | | |
-| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
| &nbsp;&nbsp;&nbsp;ILogger API | | | | | | | Yes |
-| &nbsp;&nbsp;&nbsp;AI Classic API | | | | | | | |
-| | | | | | | | |
-| **Java** | | | | | | | |
-| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
-| &nbsp;&nbsp;&nbsp;Logback, Log4j, JUL | | | | Yes | | | Yes |
-| &nbsp;&nbsp;&nbsp;Micrometer Metrics | | Yes | | | | | |
-| &nbsp;&nbsp;&nbsp;AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
-| | | | | | | | |
-| **Node.js** | | | | | | | |
-| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
-| &nbsp;&nbsp;&nbsp;Console, Winston, Bunyan| | | | | | | Yes |
-| &nbsp;&nbsp;&nbsp;AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
-| | | | | | | | |
-| **Python** | | | | | | | |
-| &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
-| &nbsp;&nbsp;&nbsp;Python Logging Module | | | | | | | Yes |
-
-> [!NOTE]
-> Application Insights Java 3.x listens for telemetry that's sent to the Application Insights [Classic API](api-custom-events-metrics.md). Similarly, Application Insights Node.js 3.x collects events created with the Application Insights [Classic API](api-custom-events-metrics.md). This makes upgrading easier and fills a gap in our custom telemetry support until all custom telemetry types are supported via the OpenTelemetry API.
-
-### Add Custom Metrics
-
-> [!NOTE]
-> Custom Metrics are in preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to [opt in](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
-
-Consider collecting more metrics beyond what's provided by the instrumentation libraries.
-
-The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios, and you need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library.
-
-The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
-
-| OpenTelemetry Instrument | Azure Monitor Aggregation Type |
-|||
-| Counter | Sum |
-| Asynchronous Counter | Sum |
-| Histogram | Min, Max, Average, Sum and Count |
-| Asynchronous Gauge | Average |
-| UpDownCounter | Sum |
-| Asynchronous UpDownCounter | Sum |
-
-> [!CAUTION]
-> Aggregation types beyond what's shown in the table typically aren't meaningful.
-
-The [OpenTelemetry Specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#instrument)
-describes the instruments and provides examples of when you might use each one.
-
-> [!TIP]
-> The histogram is the most versatile and most closely equivalent to the Application Insights GetMetric [Classic API](api-custom-events-metrics.md). Azure Monitor currently flattens the histogram instrument into our five supported aggregation types, and support for percentiles is underway. Although less versatile, other OpenTelemetry instruments have a lesser impact on your application's performance.
-
-#### Histogram Example
-
-#### [ASP.NET Core](#tab/aspnetcore)
-
-Application startup must subscribe to a Meter by name.
-
-```csharp
-var builder = WebApplication.CreateBuilder(args);
-
-builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));
-builder.Services.AddOpenTelemetry().UseAzureMonitor();
-
-var app = builder.Build();
-
-app.Run();
-```
-
-The `Meter` must be initialized using that same name.
-
-```csharp
-var meter = new Meter("OTel.AzureMonitor.Demo");
-Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice");
-
-var rand = new Random();
-myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
-myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
-myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
-myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green"));
-myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
-myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
-```
-
-#### [.NET](#tab/net)
-
-```csharp
-public class Program
-{
- private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
-
- public static void Main()
- {
- using var meterProvider = Sdk.CreateMeterProviderBuilder()
- .AddMeter("OTel.AzureMonitor.Demo")
- .AddAzureMonitorMetricExporter()
- .Build();
-
- Histogram<long> myFruitSalePrice = meter.CreateHistogram<long>("FruitSalePrice");
-
- var rand = new Random();
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "green"));
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "red"));
- myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow"));
-
- System.Console.WriteLine("Press Enter key to exit.");
- System.Console.ReadLine();
- }
-}
-```
-
-#### [Java](#tab/java)
-
-```java
-import io.opentelemetry.api.GlobalOpenTelemetry;
-import io.opentelemetry.api.metrics.DoubleHistogram;
-import io.opentelemetry.api.metrics.Meter;
-
-public class Program {
-
- public static void main(String[] args) {
- Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
- DoubleHistogram histogram = meter.histogramBuilder("histogram").build();
- histogram.record(1.0);
- histogram.record(100.0);
- histogram.record(30.0);
- }
-}
-```
-
-#### [Node.js](#tab/nodejs)
-
- ```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
- const meter = customMetricsHandler.getMeter();
- let histogram = meter.createHistogram("histogram");
- histogram.record(1, { "testKey": "testValue" });
- histogram.record(30, { "testKey": "testValue2" });
- histogram.record(100, { "testKey2": "testValue" });
-```
-
-#### [Python](#tab/python)
-
-```python
-from azure.monitor.opentelemetry import configure_azure_monitor
-from opentelemetry import metrics
-
-configure_azure_monitor(
- connection_string="<your-connection-string>",
-)
-meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_histogram_demo")
-
-histogram = meter.create_histogram("histogram")
-histogram.record(1.0, {"test_key": "test_value"})
-histogram.record(100.0, {"test_key2": "test_value"})
-histogram.record(30.0, {"test_key": "test_value2"})
-
-input()
-```
---
-#### Counter Example
-
-#### [ASP.NET Core](#tab/aspnetcore)
-
-Application startup must subscribe to a Meter by name.
-
-```csharp
-var builder = WebApplication.CreateBuilder(args);
-
-builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));
-builder.Services.AddOpenTelemetry().UseAzureMonitor();
-
-var app = builder.Build();
-
-app.Run();
-```
-
-The `Meter` must be initialized using that same name.
-
-```csharp
-var meter = new Meter("OTel.AzureMonitor.Demo");
-Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter");
-
-myFruitCounter.Add(1, new("name", "apple"), new("color", "red"));
-myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow"));
-myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow"));
-myFruitCounter.Add(2, new("name", "apple"), new("color", "green"));
-myFruitCounter.Add(5, new("name", "apple"), new("color", "red"));
-myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow"));
-```
-
-#### [.NET](#tab/net)
-
-```csharp
-public class Program
-{
- private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
-
- public static void Main()
- {
- using var meterProvider = Sdk.CreateMeterProviderBuilder()
- .AddMeter("OTel.AzureMonitor.Demo")
- .AddAzureMonitorMetricExporter()
- .Build();
-
- Counter<long> myFruitCounter = meter.CreateCounter<long>("MyFruitCounter");
-
- myFruitCounter.Add(1, new("name", "apple"), new("color", "red"));
- myFruitCounter.Add(2, new("name", "lemon"), new("color", "yellow"));
- myFruitCounter.Add(1, new("name", "lemon"), new("color", "yellow"));
- myFruitCounter.Add(2, new("name", "apple"), new("color", "green"));
- myFruitCounter.Add(5, new("name", "apple"), new("color", "red"));
- myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow"));
-
- System.Console.WriteLine("Press Enter key to exit.");
- System.Console.ReadLine();
- }
-}
-```
-
-#### [Java](#tab/java)
-
-```Java
-import io.opentelemetry.api.GlobalOpenTelemetry;
-import io.opentelemetry.api.common.AttributeKey;
-import io.opentelemetry.api.common.Attributes;
-import io.opentelemetry.api.metrics.LongCounter;
-import io.opentelemetry.api.metrics.Meter;
-
-public class Program {
-
- public static void main(String[] args) {
- Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
-
- LongCounter myFruitCounter = meter
- .counterBuilder("MyFruitCounter")
- .build();
-
- myFruitCounter.add(1, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "red"));
- myFruitCounter.add(2, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
- myFruitCounter.add(1, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
- myFruitCounter.add(2, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "green"));
- myFruitCounter.add(5, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "red"));
- myFruitCounter.add(4, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
- }
-}
-```
-
-#### [Node.js](#tab/nodejs)
-
-```javascript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
- const meter = customMetricsHandler.getMeter();
- let counter = meter.createCounter("counter");
- counter.add(1, { "testKey": "testValue" });
- counter.add(5, { "testKey2": "testValue" });
- counter.add(3, { "testKey": "testValue2" });
-```
-
-#### [Python](#tab/python)
-
-```python
-from azure.monitor.opentelemetry import configure_azure_monitor
-from opentelemetry import metrics
-
-configure_azure_monitor(
- connection_string="<your-connection-string>",
-)
-meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_counter_demo")
-
-counter = meter.create_counter("counter")
-counter.add(1.0, {"test_key": "test_value"})
-counter.add(5.0, {"test_key2": "test_value"})
-counter.add(3.0, {"test_key": "test_value2"})
-
-input()
-```
---
-#### Gauge Example
-
-#### [ASP.NET Core](#tab/aspnetcore)
-
-Application startup must subscribe to a Meter by name.
-
-```csharp
-var builder = WebApplication.CreateBuilder(args);
-
-builder.Services.ConfigureOpenTelemetryMeterProvider((sp, builder) => builder.AddMeter("OTel.AzureMonitor.Demo"));
-builder.Services.AddOpenTelemetry().UseAzureMonitor();
-
-var app = builder.Build();
-
-app.Run();
-```
-
-The `Meter` must be initialized using that same name.
-
-```csharp
-var process = Process.GetCurrentProcess();
-
-var meter = new Meter("OTel.AzureMonitor.Demo");
-ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process));
-
-// Note: a local function in top-level statements can be marked 'static' but not 'private'.
-static IEnumerable<Measurement<int>> GetThreadState(Process process)
-{
- foreach (ProcessThread thread in process.Threads)
- {
- yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id));
- }
-}
-```
-
-#### [.NET](#tab/net)
-
-```csharp
-public class Program
-{
- private static readonly Meter meter = new("OTel.AzureMonitor.Demo");
-
- public static void Main()
- {
- using var meterProvider = Sdk.CreateMeterProviderBuilder()
- .AddMeter("OTel.AzureMonitor.Demo")
- .AddAzureMonitorMetricExporter()
- .Build();
-
- var process = Process.GetCurrentProcess();
-
- ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process));
-
- System.Console.WriteLine("Press Enter key to exit.");
- System.Console.ReadLine();
- }
-
- private static IEnumerable<Measurement<int>> GetThreadState(Process process)
- {
- foreach (ProcessThread thread in process.Threads)
- {
- yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id));
- }
- }
-}
-```
-
-#### [Java](#tab/java)
-
-```Java
-import io.opentelemetry.api.GlobalOpenTelemetry;
-import io.opentelemetry.api.common.AttributeKey;
-import io.opentelemetry.api.common.Attributes;
-import io.opentelemetry.api.metrics.Meter;
-
-public class Program {
-
- public static void main(String[] args) {
- Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
-
- meter.gaugeBuilder("gauge")
- .buildWithCallback(
- observableMeasurement -> {
- double randomNumber = Math.floor(Math.random() * 100);
- observableMeasurement.record(randomNumber, Attributes.of(AttributeKey.stringKey("testKey"), "testValue"));
- });
- }
-}
-```
-
-#### [Node.js](#tab/nodejs)
-
-```typescript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
- const customMetricsHandler = appInsights.getMetricHandler().getCustomMetricsHandler();
- const meter = customMetricsHandler.getMeter();
- let gauge = meter.createObservableGauge("gauge");
- gauge.addCallback((observableResult) => {
- let randomNumber = Math.floor(Math.random() * 100);
- observableResult.observe(randomNumber, {"testKey": "testValue"});
- });
-```
-
-#### [Python](#tab/python)
-
-```python
-from typing import Iterable
-
-from azure.monitor.opentelemetry import configure_azure_monitor
-from opentelemetry import metrics
-from opentelemetry.metrics import CallbackOptions, Observation
-
-configure_azure_monitor(
- connection_string="<your-connection-string>",
-)
-meter = metrics.get_meter_provider().get_meter("otel_azure_monitor_gauge_demo")
-
-def observable_gauge_generator(options: CallbackOptions) -> Iterable[Observation]:
- yield Observation(9, {"test_key": "test_value"})
-
-def observable_gauge_sequence(options: CallbackOptions) -> Iterable[Observation]:
- observations = []
- for i in range(10):
- observations.append(
- Observation(9, {"test_key": i})
- )
- return observations
-
-gauge = meter.create_observable_gauge("gauge", [observable_gauge_generator])
-gauge2 = meter.create_observable_gauge("gauge2", [observable_gauge_sequence])
-
-input()
-```
---
-### Add Custom Exceptions
-
-Some instrumentation libraries automatically report exceptions to Application Insights.
-However, you may want to manually report exceptions beyond what instrumentation libraries report.
-For instance, exceptions caught by your code aren't ordinarily reported. You may wish to report them
-to draw attention in relevant experiences including the failures section and end-to-end transaction views.
-
-#### [ASP.NET Core](#tab/aspnetcore)
--- To log an Exception using an Activity:
- ```csharp
- using (var activity = activitySource.StartActivity("ExceptionExample"))
- {
- try
- {
- throw new Exception("Test exception");
- }
- catch (Exception ex)
- {
- activity?.SetStatus(ActivityStatusCode.Error);
- activity?.RecordException(ex);
- }
- }
- ```
-- To log an Exception using ILogger:
- ```csharp
- var logger = loggerFactory.CreateLogger(logCategoryName);
-
- try
- {
- throw new Exception("Test Exception");
- }
- catch (Exception ex)
- {
- logger.Log(
- logLevel: LogLevel.Error,
- eventId: 0,
- exception: ex,
- message: "Hello {name}.",
- args: new object[] { "World" });
- }
- ```
-
-#### [.NET](#tab/net)
--- To log an Exception using an Activity:
- ```csharp
- using (var activity = activitySource.StartActivity("ExceptionExample"))
- {
- try
- {
- throw new Exception("Test exception");
- }
- catch (Exception ex)
- {
- activity?.SetStatus(ActivityStatusCode.Error);
- activity?.RecordException(ex);
- }
- }
- ```
-- To log an Exception using ILogger:
- ```csharp
- var logger = loggerFactory.CreateLogger("ExceptionExample");
-
- try
- {
- throw new Exception("Test Exception");
- }
- catch (Exception ex)
- {
- logger.Log(
- logLevel: LogLevel.Error,
- eventId: 0,
- exception: ex,
- message: "Hello {name}.",
- args: new object[] { "World" });
- }
- ```
-
-#### [Java](#tab/java)
-
-You can use `opentelemetry-api` to update the status of a span and record exceptions.
-
-1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
-
- ```xml
- <dependency>
- <groupId>io.opentelemetry.instrumentation</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
-
-1. Set status to `error` and record an exception in your code:
-
- ```java
- import io.opentelemetry.api.trace.Span;
- import io.opentelemetry.api.trace.StatusCode;
-
- Span span = Span.current();
- span.setStatus(StatusCode.ERROR, "errorMessage");
- span.recordException(e);
- ```
-
-#### [Node.js](#tab/nodejs)
-
-```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-const tracer = appInsights.getTraceHandler().getTracer();
-let span = tracer.startSpan("hello");
-try{
- throw new Error("Test Error");
-}
-catch(error){
- span.recordException(error);
-}
-```
-
-#### [Python](#tab/python)
-
-The OpenTelemetry Python SDK automatically captures and records raised exceptions. See the following code sample for an example of this behavior.
-
-```python
-from azure.monitor.opentelemetry import configure_azure_monitor
-from opentelemetry import trace
-
-configure_azure_monitor(
- connection_string="<your-connection-string>",
-)
-tracer = trace.get_tracer("otel_azure_monitor_exception_demo")
-
-# Exception events
-try:
- with tracer.start_as_current_span("hello") as span:
- # This exception will be automatically recorded
- raise Exception("Custom exception message.")
-except Exception:
- print("Exception raised")
-
-```
-
-If you would like to record exceptions manually, you can disable that option
-within the context manager and use `record_exception()` directly as shown in the following example:
-
-```python
-...
-with tracer.start_as_current_span("hello", record_exception=False) as span:
- try:
- raise Exception("Custom exception message.")
- except Exception as ex:
- # Manually record exception
- span.record_exception(ex)
-...
-
-```
---
-### Add Custom Spans
-
-You may want to add a custom span in two scenarios. First, when there's a dependency request not already collected by an instrumentation library. Second, when you wish to model an application process as a span on the end-to-end transaction view.
-
-#### [ASP.NET Core](#tab/aspnetcore)
-
-> [!NOTE]
-> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
--
-```csharp
-internal static readonly ActivitySource activitySource = new("ActivitySourceName");
-
-var builder = WebApplication.CreateBuilder(args);
-
-builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddSource("ActivitySourceName"));
-builder.Services.AddOpenTelemetry().UseAzureMonitor();
-
-var app = builder.Build();
-
-app.MapGet("/", () =>
-{
- using (var activity = activitySource.StartActivity("CustomActivity"))
- {
- // your code here
- }
-
- return $"Hello World!";
-});
-
-app.Run();
-```
-
-When calling `StartActivity`, the activity defaults to `ActivityKind.Internal`, but you can provide any other `ActivityKind`.
-`ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal` are mapped to Application Insights `dependencies`.
-`ActivityKind.Server` and `ActivityKind.Consumer` are mapped to Application Insights `requests`.
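-For example, a span for a hypothetical outgoing call can be marked as a dependency by passing the kind explicitly (a minimal sketch; `activitySource` is the source created above):
-
-```csharp
-// ActivityKind.Client maps to an Application Insights dependency.
-using (var activity = activitySource.StartActivity("MyOutgoingCall", ActivityKind.Client))
-{
-    // your code here
-}
-```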
-
-#### [.NET](#tab/net)
-
-> [!NOTE]
-> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
-
-```csharp
-using var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .AddSource("ActivitySourceName")
- .AddAzureMonitorTraceExporter()
- .Build();
-
-var activitySource = new ActivitySource("ActivitySourceName");
-
-using (var activity = activitySource.StartActivity("CustomActivity"))
-{
- // your code here
-}
-```
-
-When calling `StartActivity`, the activity defaults to `ActivityKind.Internal`, but you can provide any other `ActivityKind`.
-`ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal` are mapped to Application Insights `dependencies`.
-`ActivityKind.Server` and `ActivityKind.Consumer` are mapped to Application Insights `requests`.
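-As a sketch, a hypothetical background job can be surfaced as a request by passing `ActivityKind.Server` explicitly:
-
-```csharp
-// ActivityKind.Server maps to an Application Insights request.
-using (var activity = activitySource.StartActivity("MyBackgroundJob", ActivityKind.Server))
-{
-    // your code here
-}
-```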
-
-#### [Java](#tab/java)
-
-##### Use the OpenTelemetry annotation
-
-The simplest way to add your own spans is by using OpenTelemetry's `@WithSpan` annotation.
-
-Spans populate the `requests` and `dependencies` tables in Application Insights.
-
-1. Add `opentelemetry-instrumentation-annotations-1.21.0.jar` (or later) to your application:
-
- ```xml
- <dependency>
- <groupId>io.opentelemetry.instrumentation</groupId>
- <artifactId>opentelemetry-instrumentation-annotations</artifactId>
- <version>1.21.0</version>
- </dependency>
- ```
-
-1. Use the `@WithSpan` annotation to emit a span each time your method is executed:
-
- ```java
- import io.opentelemetry.instrumentation.annotations.WithSpan;
-
- @WithSpan(value = "your span name")
- public void yourMethod() {
- }
- ```
-
-By default, the span ends up in the `dependencies` table with dependency type `InProc`.
-
-For methods representing a background job not captured by autoinstrumentation, we recommend applying the attribute `kind = SpanKind.SERVER` to the `@WithSpan` annotation to ensure they appear in the Application Insights `requests` table.
-
-##### Use the OpenTelemetry API
-
-If the preceding OpenTelemetry `@WithSpan` annotation doesn't meet your needs,
-you can add your spans by using the OpenTelemetry API.
-
-1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
-
- ```xml
- <dependency>
- <groupId>io.opentelemetry.instrumentation</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
-
-1. Use the `GlobalOpenTelemetry` class to create a `Tracer`:
-
- ```java
- import io.opentelemetry.api.GlobalOpenTelemetry;
- import io.opentelemetry.api.trace.Tracer;
-
- static final Tracer tracer = GlobalOpenTelemetry.getTracer("com.example");
- ```
-
-1. Create a span, make it current, and then end it:
-
- ```java
- Span span = tracer.spanBuilder("my first span").startSpan();
- try (Scope ignored = span.makeCurrent()) {
- // do stuff within the context of this
- } catch (Throwable t) {
- span.recordException(t);
- } finally {
- span.end();
- }
- ```
-
-#### [Node.js](#tab/nodejs)
-
-```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-const tracer = appInsights.getTraceHandler().getTracer();
-let span = tracer.startSpan("hello");
-span.end();
-```
--
-#### [Python](#tab/python)
-
-The OpenTelemetry API can be used to add your own spans, which appear in the `requests` and `dependencies` tables in Application Insights.
-
-The code example shows how to use the `tracer.start_as_current_span()` method to start, make the span current, and end the span within its context.
-
-```python
-...
-from opentelemetry import trace
-
-tracer = trace.get_tracer(__name__)
-
-# The "with" context manager starts, makes the span current, and ends the span within it's context
-with tracer.start_as_current_span("my first span") as span:
- try:
- # Do stuff within the context of this
- except Exception as ex:
- span.record_exception(ex)
-...
-
-```
-
-By default, the span is in the `dependencies` table with a dependency type of `InProc`.
-
-If your method represents a background job not already captured by autoinstrumentation, we recommend setting the attribute `kind = SpanKind.SERVER` to ensure it appears in the Application Insights `requests` table.
-
-```python
-...
-from opentelemetry import trace
-from opentelemetry.trace import SpanKind
-
-tracer = trace.get_tracer(__name__)
-with tracer.start_as_current_span("my request span", kind=SpanKind.SERVER) as span:
-...
-```
---
-<!--
-
-### Add Custom Events
-
-#### Span Events
-
-The OpenTelemetry Logs/Events API is still under development. In the meantime, you can use the OpenTelemetry Span API to create "Span Events", which populate the traces table in Application Insights. The string passed in to addEvent() is saved to the message field within the trace.
-
-> [!CAUTION]
-> Span Events are only recommended for when you need additional diagnostic metadata associated with your span. For other scenarios, such as describing business events, we recommend you wait for the release of the OpenTelemetry Events API.
-
-#### [ASP.NET Core](#tab/aspnetcore)
-
-Currently unavailable.
-
-#### [.NET](#tab/net)
-
-Currently unavailable.
-
-#### [Java](#tab/java)
-
-You can use `opentelemetry-api` to create span events, which populate the `traces` table in Application Insights. The string passed in to `addEvent()` is saved to the `message` field within the trace.
-
-1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
-
- ```xml
- <dependency>
- <groupId>io.opentelemetry.instrumentation</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
-
-1. Add span events in your code:
-
- ```java
- import io.opentelemetry.api.trace.Span;
-
- Span.current().addEvent("eventName");
- ```
-
-#### [Node.js](#tab/nodejs)
-
-Currently unavailable.
-
-#### [Python](#tab/python)
-
-Currently unavailable.
--->
-
-### Send custom telemetry using the Application Insights Classic API
-
-We recommend you use the OpenTelemetry APIs whenever possible, but there might be some scenarios when you have to use the Application Insights [Classic API](api-custom-events-metrics.md).
-
-#### [ASP.NET Core](#tab/aspnetcore)
-
-This isn't available in .NET.
-
-#### [.NET](#tab/net)
-
-This isn't available in .NET.
-
-#### [Java](#tab/java)
-
-1. Add `applicationinsights-core` to your application:
-
- ```xml
- <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>applicationinsights-core</artifactId>
- <version>3.4.14</version>
- </dependency>
- ```
-
-1. Create a `TelemetryClient` instance:
-
- ```java
- static final TelemetryClient telemetryClient = new TelemetryClient();
- ```
-
-1. Use the client to send custom telemetry:
-
- ##### Events
-
- ```java
- telemetryClient.trackEvent("WinGame");
- ```
-
- ##### Metrics
-
- ```java
- telemetryClient.trackMetric("queueLength", 42.0);
- ```
-
- ##### Dependencies
-
- ```java
- boolean success = false;
- long startTime = System.currentTimeMillis();
- try {
- success = dependency.call();
- } finally {
- long endTime = System.currentTimeMillis();
- RemoteDependencyTelemetry telemetry = new RemoteDependencyTelemetry();
- telemetry.setSuccess(success);
- telemetry.setTimestamp(new Date(startTime));
- telemetry.setDuration(new Duration(endTime - startTime));
- telemetryClient.trackDependency(telemetry);
- }
- ```
-
- ##### Logs
-
- ```java
- telemetryClient.trackTrace(message, SeverityLevel.Warning, properties);
- ```
-
- ##### Exceptions
-
- ```java
- try {
- ...
- } catch (Exception e) {
- telemetryClient.trackException(e);
- }
- ```
-
-#### [Node.js](#tab/nodejs)
--
-1. Get `LogHandler`:
-
-```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-const logHandler = appInsights.getLogHandler();
-```
-
-1. Use the `LogHandler` to send custom telemetry:
-
- ##### Events
-
- ```javascript
- let eventTelemetry = {
- name: "testEvent"
- };
- logHandler.trackEvent(eventTelemetry);
- ```
-
- ##### Logs
-
- ```javascript
- let traceTelemetry = {
- message: "testMessage",
- severity: "Information"
- };
- logHandler.trackTrace(traceTelemetry);
- ```
-
- ##### Exceptions
-
- ```javascript
- try {
- ...
- } catch (error) {
- let exceptionTelemetry = {
- exception: error,
- severity: "Critical"
- };
- logHandler.trackException(exceptionTelemetry);
- }
- ```
-
-#### [Python](#tab/python)
-
-It isn't available in Python.
---
-## Modify telemetry
-
-This section explains how to modify telemetry.
-
-### Add span attributes
-
-These attributes might include adding a custom property to your telemetry. You might also use attributes to set optional fields in the Application Insights schema, like Client IP.
-
-#### Add a custom property to a Span
-
-Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
-
-##### [ASP.NET Core](#tab/aspnetcore)
-
-To add span attributes, use either of the following two ways:
-
-* Use options provided by [instrumentation libraries](#install-the-client-library).
-* Add a custom span processor.
-
-> [!TIP]
-> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the [HttpRequestMessage](/dotnet/api/system.net.http.httprequestmessage) and the [HttpResponseMessage](/dotnet/api/system.net.http.httpresponsemessage) itself. They can select anything from it and store it as an attribute.
-
-1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
- - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
-
-1. Use a custom processor:
-
-> [!TIP]
-> Add the processor shown here *before* adding Azure Monitor.
-
-```csharp
-var builder = WebApplication.CreateBuilder(args);
-
-builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddProcessor(new ActivityEnrichingProcessor()));
-builder.Services.AddOpenTelemetry().UseAzureMonitor();
-
-var app = builder.Build();
-
-app.Run();
-```
-
-Add `ActivityEnrichingProcessor.cs` to your project with the following code:
-
-```csharp
-public class ActivityEnrichingProcessor : BaseProcessor<Activity>
-{
- public override void OnEnd(Activity activity)
- {
- // The updated activity will be available to all processors which are called after this processor.
- activity.DisplayName = "Updated-" + activity.DisplayName;
- activity.SetTag("CustomDimension1", "Value1");
- activity.SetTag("CustomDimension2", "Value2");
- }
-}
-```
-
-#### [.NET](#tab/net)
-
-To add span attributes, use either of the following two ways:
-
-* Use options provided by instrumentation libraries.
-* Add a custom span processor.
-
-> [!TIP]
-> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the httpRequestMessage itself. They can select anything from it and store it as an attribute.
-
-1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich)
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
- - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
-
-1. Use a custom processor:
-
-> [!TIP]
-> Add the processor shown here *before* the Azure Monitor Exporter.
-
-```csharp
-using var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .AddSource("OTel.AzureMonitor.Demo")
- .AddProcessor(new ActivityEnrichingProcessor())
- .AddAzureMonitorTraceExporter()
- .Build();
-```
-
-Add `ActivityEnrichingProcessor.cs` to your project with the following code:
-
-```csharp
-public class ActivityEnrichingProcessor : BaseProcessor<Activity>
-{
- public override void OnEnd(Activity activity)
- {
- // The updated activity will be available to all processors which are called after this processor.
- activity.DisplayName = "Updated-" + activity.DisplayName;
- activity.SetTag("CustomDimension1", "Value1");
- activity.SetTag("CustomDimension2", "Value2");
- }
-}
-```
-
-##### [Java](#tab/java)
-
-You can use `opentelemetry-api` to add attributes to spans.
-
-Adding one or more span attributes populates the `customDimensions` field in the `requests`, `dependencies`, `traces`, or `exceptions` table.
-
-1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
-
- ```xml
- <dependency>
- <groupId>io.opentelemetry.instrumentation</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
-
-1. Add custom dimensions in your code:
-
- ```java
- import io.opentelemetry.api.trace.Span;
- import io.opentelemetry.api.common.AttributeKey;
-
- AttributeKey attributeKey = AttributeKey.stringKey("mycustomdimension");
- Span.current().setAttribute(attributeKey, "myvalue1");
- ```
-
-##### [Node.js](#tab/nodejs)
-
-```typescript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-const { ReadableSpan, Span, SpanProcessor } = require("@opentelemetry/sdk-trace-base");
-const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
-
-const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig());
-
-class SpanEnrichingProcessor implements SpanProcessor{
- forceFlush(): Promise<void>{
- return Promise.resolve();
- }
- shutdown(): Promise<void>{
- return Promise.resolve();
- }
- onStart(_span: Span): void{}
- onEnd(span: ReadableSpan){
- span.attributes["CustomDimension1"] = "value1";
- span.attributes["CustomDimension2"] = "value2";
- }
-}
-
-appInsights.getTraceHandler().addSpanProcessor(new SpanEnrichingProcessor());
-```
-
-##### [Python](#tab/python)
-
-Use a custom processor:
-
-```python
-...
-from azure.monitor.opentelemetry import configure_azure_monitor
-from opentelemetry import trace
-
-configure_azure_monitor(
- connection_string="<your-connection-string>",
-)
-span_enrich_processor = SpanEnrichingProcessor()
-# Add the processor shown below to the current `TracerProvider`
-trace.get_tracer_provider().add_span_processor(span_enrich_processor)
-...
-```
-
-Add `SpanEnrichingProcessor.py` to your project with the following code:
-
-```python
-from opentelemetry.sdk.trace import SpanProcessor
-
-class SpanEnrichingProcessor(SpanProcessor):
-
- def on_end(self, span):
- span._name = "Updated-" + span.name
- span._attributes["CustomDimension1"] = "Value1"
- span._attributes["CustomDimension2"] = "Value2"
-```
---
-#### Set the user IP
-
-You can populate the _client_IP_ field for requests by setting the `http.client_ip` attribute on the span. Application Insights uses the IP address to generate user location attributes and then [discards it by default](ip-collection.md#default-behavior).
-
-##### [ASP.NET Core](#tab/aspnetcore)
-
-Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
-
-```C#
-// only applicable in case of activity.Kind == Server
-activity.SetTag("http.client_ip", "<IP Address>");
-```
-
-#### [.NET](#tab/net)
-
-Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
-
-```C#
-// only applicable in case of activity.Kind == Server
-activity.SetTag("http.client_ip", "<IP Address>");
-```
-
-##### [Java](#tab/java)
-
-Java automatically populates this field.
-
-##### [Node.js](#tab/nodejs)
-
-Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
-
-```typescript
-...
-const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
-
-class SpanEnrichingProcessor implements SpanProcessor{
- ...
-
- onEnd(span){
- span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
- }
-}
-```
-
-##### [Python](#tab/python)
-
-Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `SpanEnrichingProcessor.py`:
-
-```python
-span._attributes["http.client_ip"] = "<IP Address>"
-```
---
-#### Set the user ID or authenticated user ID
-
-You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by using the following guidance. User ID is an anonymous user identifier. Authenticated User ID is a known user identifier.
-
-> [!IMPORTANT]
-> Consult applicable privacy laws before you set the Authenticated User ID.
-
-##### [ASP.NET Core](#tab/aspnetcore)
-
-Use the add [custom property example](#add-a-custom-property-to-a-span).
-
-```csharp
-activity?.SetTag("enduser.id", "<User Id>");
-```
-
-##### [.NET](#tab/net)
-
-Use the add [custom property example](#add-a-custom-property-to-a-span).
-
-```csharp
-activity?.SetTag("enduser.id", "<User Id>");
-```
-
-##### [Java](#tab/java)
-
-Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions` table.
-
-1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
-
- ```xml
- <dependency>
- <groupId>io.opentelemetry.instrumentation</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
-
-1. Set `user_Id` in your code:
-
- ```java
- import io.opentelemetry.api.trace.Span;
-
- Span.current().setAttribute("enduser.id", "myuser");
- ```
-
-#### [Node.js](#tab/nodejs)
-
-Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
-
-```typescript
-...
-import { SemanticAttributes } from "@opentelemetry/semantic-conventions";
-
-class SpanEnrichingProcessor implements SpanProcessor{
- ...
-
- onEnd(span: ReadableSpan){
- span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
- }
-}
-```
-
-##### [Python](#tab/python)
-
-Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
-
-```python
-span._attributes["enduser.id"] = "<User ID>"
-```
---
-### Add Log Attributes
-
-#### [ASP.NET Core](#tab/aspnetcore)
-
-OpenTelemetry uses .NET's ILogger.
-Attaching custom dimensions to logs can be accomplished using a [message template](/dotnet/core/extensions/logging?tabs=command-line#log-message-template).
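-For example, a hypothetical log written with a message template surfaces its placeholders as custom dimensions (a minimal sketch; `logger` is any `ILogger` instance):
-
-```csharp
-// "name" and "price" become custom dimensions on the resulting trace telemetry.
-logger.LogInformation("Sold {name} for {price}.", "apple", 1.99);
-```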
-
-#### [.NET](#tab/net)
-
-OpenTelemetry uses .NET's ILogger.
-Attaching custom dimensions to logs can be accomplished using a [message template](/dotnet/core/extensions/logging?tabs=command-line#log-message-template).
-
-#### [Java](#tab/java)
-
-Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching custom dimensions to your logs can be accomplished in these ways:
-
-* [Log4j 2 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` is captured as the log message)
-* [Log4j 2 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html)
-* [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html)
-
-#### [Node.js](#tab/nodejs)
-
-Attributes can be added only when calling the manual track APIs. Log attributes for console, bunyan, and winston are currently not supported.
-
-```javascript
-const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
-
-const config = new ApplicationInsightsConfig();
-const appInsights = new ApplicationInsightsClient(config);
-const logHandler = appInsights.getLogHandler();
-const attributes = {
- "testAttribute1": "testValue1",
- "testAttribute2": "testValue2",
- "testAttribute3": "testValue3"
-};
-logHandler.trackEvent({
- name: "testEvent",
- properties: attributes
-});
-```
-
-#### [Python](#tab/python)
-
-The Python [logging](https://docs.python.org/3/howto/logging.html) library is [autoinstrumented](#logs). You can attach custom dimensions to your logs by passing a dictionary into the `extra` argument of your logs.
-
-```python
-...
-logger.warning("WARNING: Warning log with properties", extra={"key1": "value1"})
-...
-
-```
---
-### Filter telemetry
-
-You can use the following ways to filter out telemetry before it leaves your application.
-
-#### [ASP.NET Core](#tab/aspnetcore)
-
-1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
- - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
-
-1. Use a custom processor:
-
- > [!TIP]
- > Add the processor shown here *before* adding Azure Monitor.
-
- ```csharp
- var builder = WebApplication.CreateBuilder(args);
-
- builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddProcessor(new ActivityFilteringProcessor()));
- builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddSource("ActivitySourceName"));
- builder.Services.AddOpenTelemetry().UseAzureMonitor();
-
- var app = builder.Build();
-
- app.Run();
- ```
-
- Add `ActivityFilteringProcessor.cs` to your project with the following code:
-
- ```csharp
- public class ActivityFilteringProcessor : BaseProcessor<Activity>
- {
- public override void OnStart(Activity activity)
- {
- // prevents all exporters from exporting internal activities
- if (activity.Kind == ActivityKind.Internal)
- {
- activity.IsAllDataRequested = false;
- }
- }
- }
- ```
-
-1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported.
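As a minimal sketch of that relationship (the source and operation names are carried over from the snippet above; the tag is an illustrative assumption):

```csharp
using System.Diagnostics;

// The source name must match the name passed to AddSource("ActivitySourceName");
// activities created from an unregistered source are dropped instead of exported.
var activitySource = new ActivitySource("ActivitySourceName");

using (Activity? activity = activitySource.StartActivity("CustomOperation"))
{
    activity?.SetTag("custom.tag", "value");
}
```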
-
-#### [.NET](#tab/net)
-
-1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
- - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
-
-1. Use a custom processor:
-
- ```csharp
- using var tracerProvider = Sdk.CreateTracerProviderBuilder()
- .AddSource("OTel.AzureMonitor.Demo")
- .AddProcessor(new ActivityFilteringProcessor())
- .AddAzureMonitorTraceExporter()
- .Build();
- ```
-
- Add `ActivityFilteringProcessor.cs` to your project with the following code:
-
- ```csharp
- public class ActivityFilteringProcessor : BaseProcessor<Activity>
- {
- public override void OnStart(Activity activity)
- {
- // prevents all exporters from exporting internal activities
- if (activity.Kind == ActivityKind.Internal)
- {
- activity.IsAllDataRequested = false;
- }
- }
- }
- ```
-
-1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported.
--
-#### [Java](#tab/java)
-
-See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) and [telemetry processors](java-standalone-telemetry-processors.md).
-
-#### [Node.js](#tab/nodejs)
-
-1. Use the exclude URL option provided by many HTTP instrumentation libraries.
-
- The following example shows how to exclude a certain URL from being tracked by using the [HTTP/HTTPS instrumentation library](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http):
-
- ```typescript
- const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");
- const { IncomingMessage } = require("http");
- const { RequestOptions } = require("https");
- const { HttpInstrumentationConfig }= require("@opentelemetry/instrumentation-http");
-
- const httpInstrumentationConfig: HttpInstrumentationConfig = {
- enabled: true,
- ignoreIncomingRequestHook: (request: IncomingMessage) => {
- // Ignore OPTIONS incoming requests
- if (request.method === 'OPTIONS') {
- return true;
- }
- return false;
- },
- ignoreOutgoingRequestHook: (options: RequestOptions) => {
- // Ignore outgoing requests with /test path
- if (options.path === '/test') {
- return true;
- }
- return false;
- }
- };
- const config = new ApplicationInsightsConfig();
- config.instrumentations.http = httpInstrumentationConfig;
- const appInsights = new ApplicationInsightsClient(config);
- ```
-
-2. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans so that they aren't exported, set `TraceFlags` to `NONE`.
-Use the [add custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
-
- ```typescript
- const { SpanKind, TraceFlags } = require("@opentelemetry/api");
-
- class SpanEnrichingProcessor {
- ...
-
- onEnd(span) {
- if(span.kind == SpanKind.INTERNAL){
- span.spanContext().traceFlags = TraceFlags.NONE;
- }
- }
- }
- ```
-
-#### [Python](#tab/python)
-
-1. Exclude the URL with the `OTEL_PYTHON_EXCLUDED_URLS` environment variable:
- ```
- export OTEL_PYTHON_EXCLUDED_URLS="http://localhost:8080/ignore"
- ```
- Doing so excludes the endpoint shown in the following Flask example:
-
- ```python
- ...
- import flask
- from azure.monitor.opentelemetry import configure_azure_monitor
-
- # Configure Azure monitor collection telemetry pipeline
- configure_azure_monitor(
- connection_string="<your-connection-string>",
- )
- app = flask.Flask(__name__)
-
-    # Requests sent to this endpoint aren't tracked due to the
-    # OTEL_PYTHON_EXCLUDED_URLS configuration
- @app.route("/ignore")
- def ignore():
- return "Request received but not tracked."
- ...
- ```
-
-1. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans so that they aren't exported, set `TraceFlags` to `DEFAULT`.
-
- ```python
- ...
- from azure.monitor.opentelemetry import configure_azure_monitor
- from opentelemetry import trace
-
- configure_azure_monitor(
- connection_string="<your-connection-string>",
- )
- trace.get_tracer_provider().add_span_processor(SpanFilteringProcessor())
- ...
- ```
-
- Add `SpanFilteringProcessor.py` to your project with the following code:
-
- ```python
- from opentelemetry.trace import SpanContext, SpanKind, TraceFlags
- from opentelemetry.sdk.trace import SpanProcessor
-
- class SpanFilteringProcessor(SpanProcessor):
-
- # prevents exporting spans from internal activities
- def on_start(self, span):
- if span._kind is SpanKind.INTERNAL:
- span._context = SpanContext(
- span.context.trace_id,
- span.context.span_id,
- span.context.is_remote,
- TraceFlags.DEFAULT,
- span.context.trace_state,
- )
-
- ```
--
-
-<!-- For more information, see [GitHub Repo](link). -->
-
-### Get the trace ID or span ID
-
-You might want to get the trace ID or span ID. If you have logs sent to a destination other than Application Insights, consider adding the trace ID or span ID. Doing so enables better correlation when debugging and diagnosing issues.
-
-#### [ASP.NET Core](#tab/aspnetcore)
-
-> [!NOTE]
-> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
-
-```csharp
-Activity activity = Activity.Current;
-string traceId = activity?.TraceId.ToHexString();
-string spanId = activity?.SpanId.ToHexString();
-```
-
-#### [.NET](#tab/net)
-
-> [!NOTE]
-> The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
-
-```csharp
-Activity activity = Activity.Current;
-string traceId = activity?.TraceId.ToHexString();
-string spanId = activity?.SpanId.ToHexString();
-```
-
-#### [Java](#tab/java)
-
-You can use `opentelemetry-api` to get the trace ID or span ID.
-
-1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
-
- ```xml
- <dependency>
-      <groupId>io.opentelemetry</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
-
-1. Get the request trace ID and the span ID in your code:
-
- ```java
- import io.opentelemetry.api.trace.Span;
-
- Span span = Span.current();
- String traceId = span.getSpanContext().getTraceId();
- String spanId = span.getSpanContext().getSpanId();
- ```
-
-#### [Node.js](#tab/nodejs)
-
-Get the request trace ID and the span ID in your code:
-
- ```javascript
- const { trace } = require("@opentelemetry/api");
-
- let spanId = trace.getActiveSpan().spanContext().spanId;
- let traceId = trace.getActiveSpan().spanContext().traceId;
- ```
-
-#### [Python](#tab/python)
-
-Get the request trace ID and the span ID in your code:
-
- ```python
- from opentelemetry import trace
-
- trace_id = trace.get_current_span().get_span_context().trace_id
- span_id = trace.get_current_span().get_span_context().span_id
- ```
--- ## Support ### [ASP.NET Core](#tab/aspnetcore)
To provide feedback:
### [ASP.NET Core](#tab/aspnetcore)
+- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md)
- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md) - To review the source code, see the [Azure Monitor AspNetCore GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore). - To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor AspNetCore NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore) page.
To provide feedback:
#### [.NET](#tab/net)
+- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md)
- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md) - To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter). - To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor Exporter NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) page.
To provide feedback:
### [Java](#tab/java)
+- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md)
- Review [Java autoinstrumentation configuration options](java-standalone-config.md). - To review the source code, see the [Azure Monitor Java autoinstrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation).
To provide feedback:
### [Node.js](#tab/nodejs)
+- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md)
- To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta). - To install the npm package and check for updates see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page. - To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js).
To provide feedback:
### [Python](#tab/python)
+- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md)
- To review the source code and extra documentation, see the [Azure Monitor Distro GitHub repository](https://github.com/microsoft/ApplicationInsights-Python/blob/main/azure-monitor-opentelemetry/README.md). - To see extra samples and use cases, see [Azure Monitor Distro samples](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry/samples). - See the [release notes](https://github.com/microsoft/ApplicationInsights-Python/releases) on GitHub.
To provide feedback:
- To enable usage experiences, [enable web or browser user monitoring](javascript.md). -
-<!-- PR for Hector-->
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
The collection endpoint pre-aggregates events before ingestion sampling. For thi
|-|--|-|--| | ASP.NET | Supported <sup>1</sup> | Not supported | Not supported | | ASP.NET Core | Supported <sup>2</sup> | Not supported | Not supported |
-| Java | Not supported | Not supported | [Supported](opentelemetry-enable.md?tabs=java#metrics) |
+| Java | Not supported | Not supported | [Supported](opentelemetry-add-modify.md?tabs=java#metrics) |
| Node.js | Not supported | Not supported | Not supported | 1. ASP.NET codeless attach on virtual machines/virtual machine scale sets and on-premises emits standard metrics without dimensions. The same is true for Azure App Service, but the collection level must be set to recommended. The SDK is required for all dimensions.
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Title: Telemetry sampling in Azure Application Insights | Microsoft Docs description: How to keep the volume of telemetry under control. Previously updated : 03/22/2023 Last updated : 06/23/2023
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Title: Connection strings in Application Insights | Microsoft Docs description: This article shows how to use connection strings. Previously updated : 11/15/2022 Last updated : 06/23/2023
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
Title: Application Insights transaction diagnostics | Microsoft Docs description: This article explains Application Insights end-to-end transaction diagnostics. Previously updated : 11/15/2022 Last updated : 06/23/2023
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
You only have to interact with the main workbook, **HEART Analytics - All Sectio
To validate that data is flowing as expected to light up the metrics accurately, select the **Development Requirements** tab. > [!IMPORTANT]
-> Unless you [set the authenticated user context](./javascript-feature-extensions.md#3-optional-set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data.
+> Unless you [set the authenticated user context](./javascript-feature-extensions.md#2-optional-set-the-authenticated-user-context), you must select **Anonymous Users** from the **ConversionScope** dropdown to see telemetry data.
:::image type="content" source="media/usage-overview/development-requirements-1.png" alt-text="Screenshot that shows the Development Requirements tab of the HEART Analytics - All Sections workbook.":::
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
Title: Usage analysis with Application Insights | Azure Monitor description: Understand your users and what they do with your app. Previously updated : 02/14/2023 Last updated : 06/23/2023
The **Users** and **Sessions** reports filter your data by pages or custom event
Insights on the right point out interesting patterns in the set of data.
-* The **Users** report counts the numbers of unique users that access your pages within your chosen time periods. For web apps, users are counted by using cookies. If someone accesses your site with different browsers or client machines, or clears their cookies, they'll be counted more than once.
-* The **Sessions** report counts the number of user sessions that access your site. A session is a period of activity by a user. It's terminated by a period of inactivity of more than half an hour.
+* The **Users** report counts the numbers of unique users that access your pages within your chosen time periods. For web apps, users are counted by using cookies. If someone accesses your site with different browsers or client machines, or clears their cookies, they're counted more than once.
+* The **Sessions** report tabulates the number of user sessions that access your site. A session represents a period of activity initiated by a user and concludes with a period of inactivity exceeding half an hour.
For more information about the Users, Sessions, and Events tools, see [Users, sessions, and events analysis in Application Insights](usage-segmentation.md).
For more information about the Retention workbook, see [User retention analysis
## Custom business events
-To get a clear understanding of what users do with your app, it's useful to insert lines of code to log custom events. These events can track anything from detailed user actions, such as selecting specific buttons, to more significant business events, such as making a purchase or winning a game.
+To understand user interactions in your app, insert lines of code to log custom events. These events track user actions, like button selections, as well as important business events, such as purchases or game victories.
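For example, a minimal sketch using the classic Application Insights .NET SDK's `TrackEvent` API; the event name and the client setup are illustrative assumptions:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Minimal sketch: send a custom event when a user completes a key business action.
var configuration = TelemetryConfiguration.CreateDefault(); // assumes a configured connection string
var telemetryClient = new TelemetryClient(configuration);
telemetryClient.TrackEvent("WonGame");
```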
-You can also use the [Click Analytics Auto-collection plug-in](javascript-feature-extensions.md) to collect custom events.
+You can also use the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) to collect custom events.
In some cases, page views can represent useful events, but that isn't true in general. A user can open a product page without buying the product.
In the Users, Sessions, and Events tools, you can slice and dice custom events b
:::image type="content" source="./media/usage-overview/events.png" alt-text="Screenshot that shows the Events tab filtered by AnalyticsItemsOperation and split by AppID." lightbox="./media/usage-overview/events.png":::
-Whenever youΓÇÖre in any usage experience, click the **Open the last run query** icon to take you back to the underlying query.
+Whenever you're in any usage experience, select the **Open the last run query** icon to take you back to the underlying query.
:::image type="content" source="./media/usage-overview/open-last-run-query-icon.png" alt-text="Screenshot of the Application Insights Session pane in the Azure portal. The Open the last run query icon is highlighted." lightbox="./media/usage-overview/open-last-run-query-icon.png":::
When you design each feature of your app, consider how you're going to measure i
## A | B testing
-If you don't know which variant of a feature will be more successful, release both and make each variant accessible to different users. Measure the success of each variant, and then move to a unified version.
+If you're unsure which feature variant is more successful, release both and let different users access each variant. Measure the success of each variant, and then transition to a unified version.
-For this technique, you attach distinct property values to all the telemetry that's sent by each version of your app. You can do that step by defining properties in the active TelemetryContext. These default properties are added to every telemetry message that the application sends. That means the properties are added to your custom messages and the standard telemetry.
+In this technique, you attach unique property values to all the telemetry sent by each version of your app. You can do it by defining properties in the active TelemetryContext. These default properties get included in every telemetry message the application sends, both custom messages and standard telemetry.
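As a rough sketch with the classic Application Insights .NET SDK, you might tag every telemetry item with its variant through global properties; the property name `AppVersion` and its value are assumptions:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Minimal sketch: a default property set on the context is attached to every
// telemetry item this client sends, so you can filter and split on it later.
var configuration = TelemetryConfiguration.CreateDefault();
var telemetryClient = new TelemetryClient(configuration);
telemetryClient.Context.GlobalProperties["AppVersion"] = "variant-B";
telemetryClient.TrackEvent("PageVisited");
```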
In the Application Insights portal, filter and split your data on the property values so that you can compare the different versions.
azure-monitor Usage Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-retention.md
Title: Analyze web app user retention with Application Insights description: This article shows you how to determine how many users return to your app. Previously updated : 07/30/2021 Last updated : 06/23/2023
Workbook capabilities:
## Use business events to track retention
-To get the most useful retention analysis, measure events that represent significant business activities.
+You should measure events that represent significant business activities to get the most useful retention analysis.
-For example, many users might open a page in your app without playing the game that it displays. Tracking only the page views would provide an inaccurate estimate of how many people returned to play the game after enjoying it previously. To get a clear picture of returning players, your app should send a custom event when a user actually plays.
+For more information and example code, see [Custom business events](usage-overview.md#custom-business-events).
-It's good practice to code custom events that represent key business actions. Then you can use these events for your retention analysis. To capture the game outcome, you need to write a line of code to send a custom event to Application Insights. If you write it in the webpage code or in Node.JS, it looks like this example:
-
-```JavaScript
- appinsights.trackEvent("won game");
-```
-
-Or in ASP.NET server code:
-
-```csharp
- telemetry.TrackEvent("won game");
-```
-
-Learn more about [writing custom events](./api-custom-events-metrics.md#trackevent).
+To learn more, see [writing custom events](./api-custom-events-metrics.md#trackevent).
## Next steps
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
description: Monitoring .NET Core/.NET Framework non-HTTP apps with Azure Monito
ms.devlang: csharp Previously updated : 04/24/2023 Last updated : 06/23/2023
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|PEBytesIn |Yes |Bytes In |Count |Total |Total number of Bytes Out |No Dimensions |
-|PEBytesOut |Yes |Bytes Out |Count |Total |Total number of Bytes Out |No Dimensions |
+|PEBytesIn |No |Bytes In |Count |Total |Total number of Bytes In |No Dimensions |
+|PEBytesOut |No |Bytes Out |Count |Total |Total number of Bytes Out |No Dimensions |
## Microsoft.Network/privateLinkServices <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
- [Read about metrics in Azure Monitor](../data-platform.md) - [Create alerts on metrics](../alerts/alerts-overview.md) - [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)--++ <!--Gen Date: Sun Jun 04 2023 10:14:09 GMT+0300 (Israel Daylight Time)-->
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Title: Azure Monitor Logs cost calculations and options
description: Cost details for data stored in a Log Analytics workspace in Azure Monitor, including commitment tiers and data size calculation. Previously updated : 04/06/2023 Last updated : 06/23/2023 ms.reviwer: dalek git
See the documentation for different services and solutions for any unique billin
## Commitment tiers
-In addition to the pay-as-you-go model, Log Analytics has *commitment tiers*, which can save you as much as 30 percent compared to the pay-as-you-go price. With commitment tier pricing, you can commit to buy data ingestion for a workspace, starting at 100 GB per day, at a lower price than pay-as-you-go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
+In addition to the pay-as-you-go model, Log Analytics has *commitment tiers*, which can save you as much as 30 percent compared to the pay-as-you-go price. With commitment tier pricing, you can commit to buy data ingestion for a workspace, starting at 100 GB per day, at a lower price than pay-as-you-go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. (Overage is billed using the same commitment tier billing meter. For example, if a workspace is in the 200 GB/day commitment tier and ingests 300 GB in a day, that usage is billed as 1.5 units of the 200 GB/day commitment tier.) The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
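As a back-of-the-envelope sketch of that overage arithmetic (illustrative only, not an official billing API):

```csharp
// Minimal sketch: overage is billed as fractional units of the commitment tier meter.
double commitmentTierGbPerDay = 200; // selected commitment tier
double ingestedGbPerDay = 300;       // actual ingestion for the day
double billedUnits = ingestedGbPerDay / commitmentTierGbPerDay;
System.Console.WriteLine($"Billed as {billedUnits} units of the {commitmentTierGbPerDay} GB/day tier."); // 1.5
```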
- During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period. - At the end of the commitment period, the workspace retains the selected commitment tier, and the workspace can be moved to Pay-As-You-Go or to a lower commitment tier at any time.
azure-netapp-files Cross Zone Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-introduction.md
The preview of cross-zone replication is available in the following regions:
* Australia East * Brazil South * Canada Central
+* Central India
* Central US * East Asia * East US
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
Support for Azure NetApp Files large volumes is available in the following regio
* Australia Southeast * Brazil South * Canada Central
+* Central India
* Central US * East US * East US 2
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
Latency is subject to availability zone latency for within availability zone acc
## Azure regions with availability zones
-For a list of regions that that currently support availability zones, refer to [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
+For a list of regions that currently support availability zones, see [Azure regions with availability zone support](../reliability/availability-zones-service-support.md).
## Next steps
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-github-actions.md
Create secrets for your Azure credentials, resource group, and subscriptions.
1. In [GitHub](https://github.com/), navigate to your repository.
-1. Select **Security > Secrets and variables > Actions > New repository secret**.
+1. Select **Settings > Secrets and variables > Actions > New repository secret**.
1. Paste the entire JSON output from the Azure CLI command into the secret's value field. Name the secret `AZURE_CREDENTIALS`.
azure-resource-manager All Files Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/all-files-test-cases.md
Title: All files test cases for Azure Resource Manager test toolkit
description: Describes the tests that are run for all files by the Azure Resource Manager template test toolkit. Previously updated : 07/16/2021-- Last updated : 06/22/2023 # Test cases for all files
azure-resource-manager Child Resource Name Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/child-resource-name-type.md
Title: Child resources in templates
description: Describes how to set the name and type for child resources in an Azure Resource Manager template (ARM template). Previously updated : 01/19/2022 Last updated : 06/22/2023 # Set name and type for child resources
The following example shows a virtual network with a subnet. Notice that the
"resources": [ { "type": "Microsoft.Network/virtualNetworks",
- "apiVersion": "2018-10-01",
+ "apiVersion": "2022-11-01",
"name": "VNet1", "location": "[parameters('location')]", "properties": {
The following example shows a virtual network and with a subnet. Notice that the
"resources": [ { "type": "subnets",
- "apiVersion": "2018-10-01",
+ "apiVersion": "2022-11-01",
"name": "Subnet1", "dependsOn": [ "VNet1"
The following example shows a virtual network and subnet that are both defined a
"resources": [ { "type": "Microsoft.Network/virtualNetworks",
- "apiVersion": "2018-10-01",
+ "apiVersion": "2022-11-01",
"name": "VNet1", "location": "[parameters('location')]", "properties": {
The following example shows a virtual network and subnet that are both defined a
}, { "type": "Microsoft.Network/virtualNetworks/subnets",
- "apiVersion": "2018-10-01",
+ "apiVersion": "2022-11-01",
"name": "VNet1/Subnet1", "dependsOn": [ "VNet1"
azure-resource-manager Copy Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-outputs.md
Title: Define multiple instances of an output value
description: Use copy operation in an Azure Resource Manager template (ARM template) to iterate multiple times when returning a value from a deployment. Previously updated : 05/07/2021 Last updated : 06/22/2023 # Output iteration in ARM templates
The following example creates a variable number of storage accounts and returns
"storageCount": { "type": "int", "defaultValue": 2
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
} }, "variables": {
- "baseName": "[concat('storage', uniqueString(resourceGroup().id))]"
+ "baseName": "[format('storage{0}', uniqueString(resourceGroup().id))]"
}, "resources": [ {
+ "copy": {
+ "name": "storagecopy",
+ "count": "[parameters('storageCount')]"
+ },
"type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-04-01",
- "name": "[concat(copyIndex(), variables('baseName'))]",
- "location": "[resourceGroup().location]",
+ "apiVersion": "2022-09-01",
+ "name": "[format('{0}{1}', copyIndex(), variables('baseName'))]",
+ "location": "[parameters('location')]",
"sku": { "name": "Standard_LRS" }, "kind": "Storage",
- "properties": {},
- "copy": {
- "name": "storagecopy",
- "count": "[parameters('storageCount')]"
- }
+ "properties": {}
} ], "outputs": {
The following example creates a variable number of storage accounts and returns
"type": "array", "copy": { "count": "[parameters('storageCount')]",
- "input": "[reference(concat(copyIndex(), variables('baseName'))).primaryEndpoints.blob]"
+ "input": "[reference(format('{0}{1}', copyIndex(), variables('baseName'))).primaryEndpoints.blob]"
} } }
The next example returns three properties from the new storage accounts.
"storageCount": { "type": "int", "defaultValue": 2
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
} }, "variables": {
- "baseName": "[concat('storage', uniqueString(resourceGroup().id))]"
+ "baseName": "[format('storage{0}', uniqueString(resourceGroup().id))]"
}, "resources": [ {
+ "copy": {
+ "name": "storagecopy",
+ "count": "[length(range(0, parameters('storageCount')))]"
+ },
"type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-04-01",
- "name": "[concat(copyIndex(), variables('baseName'))]",
- "location": "[resourceGroup().location]",
+ "apiVersion": "2022-09-01",
+ "name": "[format('{0}{1}', range(0, parameters('storageCount'))[copyIndex()], variables('baseName'))]",
+ "location": "[parameters('location')]",
"sku": { "name": "Standard_LRS" }, "kind": "Storage",
- "properties": {},
- "copy": {
- "name": "storagecopy",
- "count": "[parameters('storageCount')]"
- }
+ "properties": {}
} ], "outputs": { "storageInfo": { "type": "array", "copy": {
- "count": "[parameters('storageCount')]",
+ "count": "[length(range(0, parameters('storageCount')))]",
"input": {
- "id": "[reference(concat(copyIndex(), variables('baseName')), '2019-04-01', 'Full').resourceId]",
- "blobEndpoint": "[reference(concat(copyIndex(), variables('baseName'))).primaryEndpoints.blob]",
- "status": "[reference(concat(copyIndex(), variables('baseName'))).statusOfPrimary]"
+ "id": "[resourceId('Microsoft.Storage/storageAccounts', format('{0}{1}', copyIndex(), variables('baseName')))]",
+ "blobEndpoint": "[reference(format('{0}{1}', copyIndex(), variables('baseName'))).primaryEndpoints.blob]",
+ "status": "[reference(format('{0}{1}', copyIndex(), variables('baseName'))).statusOfPrimary]"
} } }
The preceding example returns an array with the following values:
```json [ {
- "id": "Microsoft.Storage/storageAccounts/0storagecfrbqnnmpeudi",
+ "id": "/subscriptions/00000000/resourceGroups/demoGroup/providers/Microsoft.Storage/storageAccounts/0storagecfrbqnnmpeudi",
"blobEndpoint": "https://0storagecfrbqnnmpeudi.blob.core.windows.net/", "status": "available" }, {
- "id": "Microsoft.Storage/storageAccounts/1storagecfrbqnnmpeudi",
+ "id": "/subscriptions/00000000/resourceGroups/demoGroup/providers/Microsoft.Storage/storageAccounts/1storagecfrbqnnmpeudi",
"blobEndpoint": "https://1storagecfrbqnnmpeudi.blob.core.windows.net/", "status": "available" }
azure-resource-manager Create Templates Use Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/create-templates-use-intellij.md
Title: Deploy template - IntelliJ IDEA description: Learn how to create your first Azure Resource Manager template (ARM template) using the IntelliJ IDEA, and how to deploy it. ms.devlang: java Previously updated : 08/01/2019 Last updated : 06/23/2023 #Customer intent: As a developer new to Azure deployment, I want to learn how to use the IntelliJ IDEA to create and edit Resource Manager templates, so I can use the templates to deploy Azure resources.
Learn how to deploy an Azure Resource Manager template (ARM template) to Azure using IntelliJ IDEA, and how to edit and update the template directly from the IDE. ARM templates are JSON files that define the resources you need to deploy for your solution. To understand the concepts associated with deploying and managing your Azure solutions, see the [template deployment overview](overview.md).
-![Resource Manager template portal diagram](./media/quickstart-create-templates-use-the-portal/azure-resource-manager-export-deploy-template-portal.png)
After completing the tutorial, you deploy an Azure Storage account. The same process can be used to deploy other Azure resources.
Instead of creating a template from scratch, you open a template from [Azure Qui
1. If the Azure Toolkit is properly installed and you're signed in, you should see Azure Explorer in the IntelliJ IDEA sidebar. Right-click **Resource Management** and select **Create Deployment**.
- ![Resource Manager template right click to create deployment](./media/create-templates-use-intellij/resource-manager-create-deployment-right-click.png)
+ :::image type="content" source="./media/create-templates-use-intellij/resource-manager-create-deployment-right-click.png" alt-text="Screenshot of Resource Manager template right click to create deployment.":::
1. Configure your **Deployment Name**, **Subscription**, **Resource Group**, and **Region**. Here we deploy the template into a new resource group `testRG`. Then, select the `azuredeploy.json` file you downloaded for **Resource Template** and `azuredeploy.parameters.json` for **Resource Parameters**.
- ![Resource Manager template select files to create deployment](./media/create-templates-use-intellij/resource-manager-create-deployment-select-files.png)
+ :::image type="content" source="./media/create-templates-use-intellij/resource-manager-create-deployment-select-files.png" alt-text="Screenshot of Resource Manager template select files to create deployment.":::
1. After you select **OK**, the deployment starts. Until the deployment completes, you can track its progress in the IntelliJ IDEA **status bar** at the bottom.
- ![Resource Manager template deployment status](./media/create-templates-use-intellij/resource-manager-create-deployment-status.png)
+ :::image type="content" source="./media/create-templates-use-intellij/resource-manager-create-deployment-status.png" alt-text="Resource Manager template deployment status.":::
## Browse an existing deployment 1. After the deployment is done, you can see the new resource group `testRG` and a new deployment created. Right-click on the deployment and you can see a list of possible actions. Now select **Show Properties**.
- ![Resource Manager template browse deployment](./media/create-templates-use-intellij/resource-manager-deployment-browse.png)
+ :::image type="content" source="./media/create-templates-use-intellij/resource-manager-deployment-browse.png" alt-text="Screenshot of Resource Manager template browse deployment.":::
1. A tab view opens to show useful properties like deployment status and template structure.
- ![Resource Manager template show deployment properties](./media/create-templates-use-intellij/resource-manager-deployment-show-properties.png)
+ :::image type="content" source="./media/create-templates-use-intellij/resource-manager-deployment-show-properties.png" alt-text="Screenshot of Resource Manager template show deployment properties.":::
## Edit and update an existing deployment 1. Select **Edit Deployment** from right-click menu or the show properties view before. Another tab view will be open, showing the template and parameter files for the deployment on Azure. To save those files to local, you could click **Export Template File** or **Export Parameter Files**.
- ![Resource Manager template edit deployment](./media/create-templates-use-intellij/resource-manager-edit-deployment.png)
+ :::image type="content" source="./media/create-templates-use-intellij/resource-manager-edit-deployment.png" alt-text="Screenshot of Resource Manager template edit deployment.":::
1. You can edit the two files on this page and deploy the changes to Azure. Here we modify the value of **storageAccountType** in the parameters file, from `Standard_LRS` to `Standard_GRS`. Then, select **Update Deployment** at the bottom and confirm the update.
- ![Screenshot shows the Resource Manager template with the Update Deployment prompt displayed.](./media/create-templates-use-intellij/resource-manager-edit-deployment-update.png)
+ :::image type="content" source="./media/create-templates-use-intellij/resource-manager-edit-deployment-update.png" alt-text="Screenshot shows the Resource Manager template with the Update Deployment prompt displayed.":::
1. After the deployment update completes, you can verify in the portal that the storage account changed to `Standard_GRS`.
Instead of creating a template from scratch, you open a template from [Azure Qui
1. When the Azure resources are no longer needed, clean up the resources you deployed by deleting the resource group. You can do it from the Azure portal or the Azure CLI. In Azure Explorer in IntelliJ IDEA, right-click your **resource group** and select **Delete**.
- ![Delete resource group in Azure Explorer from IntelliJ IDEA](./media/create-templates-use-intellij/delete-resource-group.png)
+ :::image type="content" source="./media/create-templates-use-intellij/delete-resource-group.png" alt-text="Screenshot of Delete resource group in Azure Explorer from IntelliJ IDEA.":::
> [!NOTE] > Deleting a deployment doesn't delete the resources created by that deployment. Delete the corresponding resource group or specific resources if you no longer need them.
azure-resource-manager Createuidefinition Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/createuidefinition-test-cases.md
Title: createUiDefinition.json test cases for Azure Resource Manager test toolki
description: Describes the createUiDefinition.json tests that are run by the Azure Resource Manager template test toolkit. Previously updated : 07/16/2021-- Last updated : 06/22/2023 # Test cases for createUiDefinition.json
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-github-actions.md
Title: Deploy Resource Manager templates by using GitHub Actions description: Describes how to deploy Azure Resource Manager templates (ARM templates) by using GitHub Actions. Previously updated : 05/10/2022 Last updated : 06/23/2023
The workflow file must be stored in the **.github/workflows** folder at the root
- **name**: The name of the workflow. - **on**: The name of the GitHub events that triggers the workflow. The workflow is trigger when there is a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the template file.
-
+ # [OpenID Connect](#tab/openid)
-
+ ```yml on: [push] name: Azure ARM
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/overview.md
Title: Templates overview
description: Describes the benefits using Azure Resource Manager templates (ARM templates) for deployment of resources. Previously updated : 05/26/2022 Last updated : 06/23/2023 # What are ARM templates?
When you deploy a template, Resource Manager converts the template into REST API
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-04-01",
+ "apiVersion": "2022-09-01",
"name": "mystorageaccount",
- "location": "westus",
+ "location": "centralus",
"sku": { "name": "Standard_LRS" },
- "kind": "StorageV2",
- "properties": {}
- }
+ "kind": "StorageV2"
+  }
] ```
azure-resource-manager Parameter File Test Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/parameter-file-test-cases.md
Title: Parameter file test cases for Azure Resource Manager test toolkit
description: Describes the parameter file tests that are run by the Azure Resource Manager template test toolkit. Previously updated : 07/16/2021-- Last updated : 06/23/2023 # Test cases for parameter files
azure-resource-manager Resource Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-extensions.md
Title: Post-deployment configuration with extensions description: Learn how to use Azure Resource Manager template (ARM template) extensions for post-deployment configurations.- Previously updated : 12/14/2018- Last updated : 06/23/2023 # Post-deployment configurations by using extensions
azure-resource-manager Scope Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/scope-functions.md
Title: Template functions in scoped deployments
description: Describes how template functions are resolved in scoped deployments. The scope can be a tenant, management groups, subscriptions, and resource groups. Previously updated : 10/22/2020 Last updated : 06/23/2023 # ARM template functions in deployment scopes
azure-resource-manager Template Functions Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-logical.md
Title: Template functions - logical
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to determine logical values. Previously updated : 02/11/2022 Last updated : 06/23/2023 # Logical functions for ARM templates
azure-resource-manager Template Spec Convert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-spec-convert.md
Title: Convert portal template to template spec description: Describes how to convert an existing template in the Azure portal gallery to a template specs. Previously updated : 05/25/2022-- Last updated : 06/22/2023 # Convert template gallery in portal to template specs
azure-resource-manager Template Tutorial Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-use-key-vault.md
Title: Use Azure Key Vault in templates description: Learn how to use Azure Key Vault to pass secure parameter values during Azure Resource Manager template (ARM template) deployment. Previously updated : 03/01/2021 Last updated : 06/23/2023
azure-resource-manager Update Visual Studio Deployment Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/update-visual-studio-deployment-script.md
Title: Update Visual Studio's template deployment script to use Az PowerShell description: Update the Visual Studio template deployment script from AzureRM to Az PowerShell- Previously updated : 01/31/2020- Last updated : 06/23/2023 + # Update Visual Studio template deployment script to use Az PowerShell module Visual Studio 16.4 supports using the Az PowerShell module in the template deployment script. However, Visual Studio doesn't automatically install that module. To use the Az module, you need to take four steps:
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
Title: Deploy vSAN stretched clusters
description: Learn how to deploy vSAN stretched clusters. Previously updated : 06/12/2023 Last updated : 06/23/2023
To protect against split-brain scenarios and help measure site health, a managed
The following diagram depicts a vSAN cluster stretched across two AZs. In summary, stretched clusters simplify protection needs by providing the same trusted controls and capabilities in addition to the scale and flexibility of the Azure infrastructure.
It's important to understand that stretched cluster private clouds only offer an
The following diagram shows the secondary site partitioning scenario.
- :::image type="content" source="media/stretch-clusters/diagram-2-secondary-site-power-off-workload.png" alt-text="Diagram shows vSphere high availability powering off the workload virtual machines on the secondary site.":::
+ :::image type="content" source="media/stretch-clusters/diagram-2-secondary-site-power-off-workload.png" alt-text="Diagram shows vSphere high availability powering off the workload virtual machines on the secondary site." border="false" lightbox="media/stretch-clusters/diagram-2-secondary-site-power-off-workload.png":::
- If the secondary site partitioning progressed into a failure of the primary site instead, or resulted in a complete partitioning, vSphere HA would attempt to restart the workload VMs on the secondary site, which would put them in an unsteady state. The following diagram shows the preferred site failure or complete partitioning scenario.
- :::image type="content" source="media/stretch-clusters/diagram-3-restart-workload-secondary-site.png" alt-text="Diagram shows vSphere high availability trying to restart the workload virtual machines on the secondary site when preferred site failure or complete partitioning occurs.":::
+ :::image type="content" source="media/stretch-clusters/diagram-3-restart-workload-secondary-site.png" alt-text="Diagram shows vSphere high availability trying to restart the workload virtual machines on the secondary site when preferred site failure occurs." border="false" lightbox="media/stretch-clusters/diagram-3-restart-workload-secondary-site.png":::
It should be noted that these types of failures, although rare, fall outside the scope of the protection offered by a stretched cluster private cloud. Because of those types of rare failures, a stretched cluster solution should be regarded as a multi-AZ high availability solution reliant upon vSphere HA. It's important you understand that a stretched cluster solution isn't meant to replace a comprehensive multi-region Disaster Recovery strategy that can be employed to ensure application availability. The reason is because a Disaster Recovery solution typically has separate management and control planes in separate Azure regions. Azure VMware Solution stretched clusters have a single management and control plane stretched across two availability zones within the same Azure region. For example, one vCenter Server, one NSX-T Manager cluster, one NSX-T Data Center Edge VM pair.
Customers will be charged based on the number of nodes deployed within the priva
### Will I be charged for the witness node and for inter-AZ traffic?
-No. Customers won't see a charge for the witness node and the inter-AZ traffic. The witness node is entirely service managed, and Azure VMware Solution provides the required lifecycle management of the witness node. As the entire solution is service managed, the customer only needs to identify the appropriate SPBM policy to set for the workload virtual machines. The rest is managed by Microsoft.
+No. Customers won't see a charge for the witness node and the inter-AZ traffic. The witness node is entirely service managed, and Azure VMware Solution provides the required lifecycle management of the witness node. As the entire solution is service managed, the customer only needs to identify the appropriate SPBM policy to set for the workload virtual machines. The rest is managed by Microsoft.
backup Backup Azure Vms Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-automation.md
Title: Back up and recover Azure VMs with PowerShell description: Describes how to back up and recover Azure VMs using Azure Backup with PowerShell- Previously updated : 04/25/2022-+ Last updated : 06/24/2023+
-# Back up and restore Azure VMs with PowerShell
+# Back up and restore Azure VMs using Azure PowerShell
-This article explains how to back up and restore an Azure VM in an [Azure Backup](backup-overview.md) Recovery Services vault using PowerShell cmdlets.
+This article describes how to back up and restore an Azure VM in an [Azure Backup](backup-overview.md) Recovery Services vault using PowerShell cmdlets.
-In this article you learn how to:
+Azure Backup provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Backups are stored in a Recovery Services vault with built-in management of recovery points. Configuration and scaling are simple, backups are optimized, and you can easily restore as needed.
-> [!div class="checklist"]
->
-> * Create a Recovery Services vault and set the vault context.
-> * Define a backup policy
-> * Apply the backup policy to protect multiple virtual machines
-> * Trigger an on-demand backup job for the protected virtual machines
Before you can back up (or protect) a virtual machine, you must complete the [prerequisites](backup-azure-arm-vms-prepare.md) to prepare your environment for protecting your VMs. ## Before you start
Before you can back up (or protect) a virtual machine, you must complete the [pr
The object hierarchy is summarized in the following diagram.
-![Recovery Services object hierarchy](./media/backup-azure-vms-arm-automation/recovery-services-object-hierarchy.png)
+![Diagram shows the Recovery Services object hierarchy.](./media/backup-azure-vms-arm-automation/recovery-services-object-hierarchy.png)
Review the **Az.RecoveryServices** [cmdlet reference](/powershell/module/az.recoveryservices/) reference in the Azure library.
To begin:
The aliases and cmdlets for Azure Backup, Azure Site Recovery, and the Recovery Services vault appear. The following image is an example of what you'll see. It isn't the complete list of cmdlets.
- ![list of Recovery Services](./media/backup-azure-vms-automation/list-of-recoveryservices-ps.png)
+ ![Screenshot shows the list of Recovery Services.](./media/backup-azure-vms-automation/list-of-recoveryservices-ps.png)
3. Sign in to your Azure account using **Connect-AzAccount**. This cmdlet brings up a web page that prompts you for your account credentials:
The following steps lead you through creating a Recovery Services vault. A Recov
New-AzRecoveryServicesVault -Name "testvault" -ResourceGroupName "test-rg" -Location "West US" ```
-3. Specify the type of storage redundancy to use. You can use [Locally Redundant Storage (LRS)](../storage/common/storage-redundancy.md#locally-redundant-storage), [Geo-redundant Storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage), or [Zone-redundant storage (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). The following example shows the **-BackupStorageRedundancy** option for *testvault* set to **GeoRedundant**.
+3. Specify the type of storage redundancy to use. You can use [Locally Redundant Storage (LRS)](../storage/common/storage-redundancy.md#locally-redundant-storage), [Geo-redundant Storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage), or [Zone-redundant storage (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). The following example shows the **-BackupStorageRedundancy** option for `testvault` set to **GeoRedundant**.
```powershell $vault1 = Get-AzRecoveryServicesVault -Name "testvault"
Use a Recovery Services vault to protect your virtual machines. Before you apply
### Set vault context
-Before enabling protection on a VM, use [Set-AzRecoveryServicesVaultContext](/powershell/module/az.recoveryservices/set-azrecoveryservicesvaultcontext) to set the vault context. Once the vault context is set, it applies to all subsequent cmdlets. The following example sets the vault context for the vault, *testvault*.
+Before enabling protection on a VM, use [Set-AzRecoveryServicesVaultContext](/powershell/module/az.recoveryservices/set-azrecoveryservicesvaultcontext) to set the vault context. Once the vault context is set, it applies to all subsequent cmdlets. The following example sets the vault context for the vault, `testvault`.
```powershell Get-AzRecoveryServicesVault -Name "testvault" -ResourceGroupName "Contoso-docs-rg" | Set-AzRecoveryServicesVaultContext
There's an important difference between restoring a VM using the Azure porta
> >
-The following graphic shows the object hierarchy from the RecoveryServicesVault down to the BackupRecoveryPoint.
+The following graphic shows the object hierarchy from the `RecoveryServicesVault` down to the `BackupRecoveryPoint`.
-![Recovery Services object hierarchy showing BackupContainer](./media/backup-azure-vms-arm-automation/backuprecoverypoint-only.png)
+![Screenshot shows the BackupContainer listed by Recovery Services object hierarchy.](./media/backup-azure-vms-arm-automation/backuprecoverypoint-only.png)
To restore backup data, identify the backed-up item and the recovery point that holds the point-in-time data. Use [Restore-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/restore-azrecoveryservicesbackupitem) to restore data from the vault to your account.
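As a sketch, a disk restore to a storage account might look like the following; the VM name, destination storage account, and resource groups here are hypothetical, and the vault context is assumed to be set as shown earlier:

```powershell
# Find the backup container and backed-up item for the VM (names are examples).
$container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -Status "Registered" -FriendlyName "myVM"
$item = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM

# List recovery points from the last seven days and pick the most recent.
$start = (Get-Date).AddDays(-7).ToUniversalTime()
$end = (Get-Date).ToUniversalTime()
$rp = Get-AzRecoveryServicesBackupRecoveryPoint -Item $item -StartDate $start -EndDate $end

# Restore the disks; TargetResourceGroupName receives the restored managed disks.
Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -StorageAccountName "mydestsa" -StorageAccountResourceGroupName "dest-rg" -TargetResourceGroupName "dest-rg"
```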
The template isn't directly accessible since it's under a customer's storage acc
### Create a VM using the config file
-The following section lists steps necessary to create a VM using _VMConfig_ file.
+The following section lists the steps necessary to create a VM using the `VMConfig` file.
> [!NOTE]
> It's highly recommended to use the deployment template detailed above to create a VM. This section (Points 1-6) will be deprecated soon.
backup Backup Azure Vms First Look Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-first-look-arm.md
Title: Back up an Azure VM from the VM settings description: In this article, learn how to back up either a singular Azure VM or multiple Azure VMs with the Azure Backup service.- Previously updated : 06/13/2019+ Last updated : 06/24/2023 + # Back up an Azure VM from the VM settings
-This article explains how to back up Azure VMs with the [Azure Backup](backup-overview.md) service. You can back up Azure VMs using a couple of methods:
+This article describes how to back up Azure VMs with the [Azure Backup](backup-overview.md) service.
+
+Azure Backup provides independent and isolated backups to guard against unintended destruction of the data on your VMs. Backups are stored in a Recovery Services vault with built-in management of recovery points. Configuration and scaling are simple, backups are optimized, and you can easily restore as needed. You can back up Azure VMs using a couple of methods:
- Single Azure VM: The instructions in this article describe how to back up an Azure VM directly from the VM settings.
- Multiple Azure VMs: You can set up a Recovery Services vault and configure backup for multiple Azure VMs. Follow the instructions in [this article](backup-azure-arm-vms-prepare.md) for this scenario.

## Before you start
-1. [Learn](backup-architecture.md#how-does-azure-backup-work) how backup works, and [verify](backup-support-matrix.md#azure-vm-backup-support) support requirements.
+1. [Learn](backup-architecture.md#how-does-azure-backup-work) how Azure Backup works, and [verify](backup-support-matrix.md#azure-vm-backup-support) support requirements.
2. [Get an overview](backup-azure-vms-introduction.md) of Azure VM backup.

### Azure VM agent installation
To back up Azure VMs, Azure Backup installs an extension on the VM agent running
## Back up from Azure VM settings
+Follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Select **All services** and in the Filter, type **Virtual machines**, and then select **Virtual machines**.
3. From the list of VMs, select the VM you want to back up.
To back up Azure VMs, Azure Backup installs an extension on the VM agent running
- If you already have a vault, select **Select existing**, and select a vault.
- If you don't have a vault, select **Create new**. Specify a name for the vault. It's created in the same region and resource group as the VM. You can't modify these settings when you enable backup directly from the VM settings.
- ![Enable Backup Wizard](./media/backup-azure-vms-first-look-arm/vm-menu-enable-backup-small.png)
+ ![Screenshot shows how to enable backup wizard.](./media/backup-azure-vms-first-look-arm/vm-menu-enable-backup-small.png)
6. In **Choose backup policy**, do one of the following:
To back up Azure VMs, Azure Backup installs an extension on the VM agent running
- Select an existing backup policy if you have one.
- Create a new policy, and define the policy settings.
- ![Select backup policy](./media/backup-azure-vms-first-look-arm/set-backup-policy.png)
+ ![Screenshot shows how to select a backup policy.](./media/backup-azure-vms-first-look-arm/set-backup-policy.png)
7. Select **Enable Backup**. This associates the backup policy with the VM.
- ![Enable Backup button](./media/backup-azure-vms-first-look-arm/vm-management-menu-enable-backup-button.png)
+ ![Screenshot shows the selection of Enable Backup.](./media/backup-azure-vms-first-look-arm/vm-management-menu-enable-backup-button.png)
8. You can track the configuration progress in the portal notifications.
9. After the job completes, in the VM menu, select **Backup**. The page shows backup status for the VM, information about recovery points, jobs running, and alerts issued.
- ![Backup status](./media/backup-azure-vms-first-look-arm/backup-item-view-update.png)
+ ![Screenshot shows the backup status.](./media/backup-azure-vms-first-look-arm/backup-item-view-update.png)
-10. After enabling backup, an initial backup runs. You can start the initial backup immediately, or wait until it starts in accordance with the backup schedule.
+10. After enabling backup, an initial backup runs. You can start the initial backup immediately, or wait until it starts in accordance with the backup schedule.
- Until the initial backup completes, the **Last backup status** shows as **Warning (Initial backup pending)**.
- To see when the next scheduled backup will run, select the backup policy name.

## Run a backup immediately
+Follow these steps:
+ 1. To run a backup immediately, in the VM menu, select **Backup** > **Backup now**.
- ![Run backup](./media/backup-azure-vms-first-look-arm/backup-now-update.png)
+ ![Screenshot shows how to run backup.](./media/backup-azure-vms-first-look-arm/backup-now-update.png)
2. In **Backup Now**, use the calendar control to select the date until which the recovery point will be retained, and then select **OK**.
- ![Backup retention day](./media/backup-azure-vms-first-look-arm/backup-now-blade-calendar.png)
+ ![Screenshot shows the backup retention day.](./media/backup-azure-vms-first-look-arm/backup-now-blade-calendar.png)
3. Portal notifications let you know the backup job has been triggered. To monitor backup progress, select **View all jobs**.
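If you prefer scripting, an on-demand backup can also be triggered with the Azure CLI; a minimal sketch, assuming hypothetical resource group, vault, and VM names:

```azurecli
az backup protection backup-now \
    --resource-group "test-rg" \
    --vault-name "testvault" \
    --container-name "myVM" \
    --item-name "myVM" \
    --backup-management-type AzureIaasVM \
    --retain-until 01-01-2030
```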
backup Backup Windows With Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-windows-with-mars-agent.md
Title: Back up Windows machines by using the MARS agent description: Use the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines.- Previously updated : 03/03/2020+ Last updated : 06/23/2023 + # Back up Windows Server files and folders to Azure
-This article explains how to back up Windows machines by using the [Azure Backup](backup-overview.md) service and the Microsoft Azure Recovery Services (MARS) agent. MARS is also known as the Azure Backup agent.
-
-In this article, you'll learn how to:
-
-> [!div class="checklist"]
->
-> * Verify the prerequisites
-> * Create a backup policy and schedule.
-> * Perform an on-demand backup.
+This article describes how to back up Windows machines by using the [Azure Backup](backup-overview.md) service and the Microsoft Azure Recovery Services (MARS) agent. MARS is also known as the Azure Backup agent.
## Before you start
The backup policy specifies when to take snapshots of the data to create recover
Azure Backup doesn't automatically take daylight saving time (DST) into account. This default could cause some discrepancy between the actual time and the scheduled backup time.
-To create a backup policy:
+To create a backup policy, follow these steps:
1. After you download and register the MARS agent, open the agent console. You can find it by searching your machine for **Microsoft Azure Backup**.
1. Under **Actions**, select **Schedule Backup**.
- ![Schedule a Windows Server backup](./media/backup-configure-vault/schedule-first-backup.png)
+ [ ![Screenshot shows how to schedule a Windows Server backup.](./media/backup-configure-vault/schedule-first-backup.png) ](./media/backup-configure-vault/schedule-first-backup.png#lightbox)
+
1. In the Schedule Backup Wizard, select **Getting started** > **Next**.
1. Under **Select Items to Back up**, select **Add Items**.
- ![Add items to back up](./media/backup-azure-manage-mars/select-item-to-backup.png)
+ [ ![Screenshot shows how to add items for back up.](./media/backup-azure-manage-mars/select-item-to-backup.png) ](./media/backup-azure-manage-mars/select-item-to-backup.png#lightbox)
1. In the **Select Items** box, select items to back up, and then select **OK**.
- ![Select items to back up](./media/backup-azure-manage-mars/selected-items-to-backup.png)
+ [ ![Screenshot shows how to select items to back up.](./media/backup-azure-manage-mars/selected-items-to-backup.png) ](./media/backup-azure-manage-mars/selected-items-to-backup.png#lightbox)
1. On the **Select Items to Back Up** page, select **Next**.
1. On the **Specify Backup Schedule** page, specify when to take daily or weekly backups. Then select **Next**.
To create a backup policy:
* The number of recovery points created in your environment depends on your backup schedule.
* You can schedule up to three daily backups per day. In the following example, two daily backups occur, one at midnight and one at 6:00 PM.
- ![Set up a daily backup schedule](./media/backup-configure-vault/day-schedule.png)
+ [ ![Screenshot shows how to set up a daily backup schedule.](./media/backup-configure-vault/day-schedule.png) ](./media/backup-configure-vault/day-schedule.png#lightbox)
* You can run weekly backups too. In the following example, backups are taken every alternate Sunday and Wednesday at 9:30 AM and 1:00 AM.
- ![Set up a weekly backup schedule](./media/backup-configure-vault/week-schedule.png)
+ [ ![Screenshot shows how to set up a weekly backup schedule.](./media/backup-configure-vault/week-schedule.png) ](./media/backup-configure-vault/week-schedule.png#lightbox)
1. On the **Select Retention Policy** page, specify how to store historical copies of your data. Then select **Next**.
   * Retention settings specify which recovery points to store and how long to store them.
   * For a daily retention setting, you indicate that at the time specified for the daily retention, the latest recovery point will be retained for the specified number of days. Or you could specify a monthly retention policy to indicate that the recovery point created on the 30th of every month should be stored for 12 months.
- * Retention for daily and weekly recovery points usually coincides with the backup schedule. So when the schedule triggers a backup, the recovery point that the backup creates is stored for the duration that the daily or weekly retention policy specifies.
+ * Retention for daily and weekly recovery points usually coincides with the backup schedule. So when the schedule triggers a backup, the recovery point that the backup operation creates is stored for the duration that the daily or weekly retention policy specifies.
* In the following example:
   * Daily backups at midnight and 6:00 PM are kept for seven days.
To create a backup policy:
   * Backups taken on the last Saturday of the month at midnight and 6:00 PM are kept for 12 months.
   * Backups taken on the last Saturday in March are kept for 10 years.
- ![Example of a retention policy](./media/backup-configure-vault/retention-example.png)
+ [ ![Screenshot shows the example of a retention policy.](./media/backup-configure-vault/retention-example.png) ](./media/backup-configure-vault/retention-example.png#lightbox)
1. On the **Choose Initial Backup Type** page, decide if you want to take the initial backup over the network or use offline backup. To take the initial backup over the network, select **Automatically over the network** > **Next**. For more information about offline backup, see [Use Azure Data Box for offline backup](offline-backup-azure-data-box.md).
- ![Choose an initial backup type](./media/backup-azure-manage-mars/choose-initial-backup-type.png)
+ [ ![Screenshot shows how to choose an initial backup type.](./media/backup-azure-manage-mars/choose-initial-backup-type.png) ](./media/backup-azure-manage-mars/choose-initial-backup-type.png#lightbox)
1. On the **Confirmation** page, review the information, and then select **Finish**.
- ![Confirm the backup type](./media/backup-azure-manage-mars/confirm-backup-type.png)
+ [ ![Screenshot shows how to confirm the backup type.](./media/backup-azure-manage-mars/confirm-backup-type.png) ](./media/backup-azure-manage-mars/confirm-backup-type.png#lightbox)
1. After the wizard finishes creating the backup schedule, select **Close**.
- ![View the backup schedule progress](./media/backup-azure-manage-mars/confirm-modify-backup-process.png)
+ [ ![Screenshot shows how to view the backup schedule progress.](./media/backup-azure-manage-mars/confirm-modify-backup-process.png) ](./media/backup-azure-manage-mars/confirm-modify-backup-process.png#lightbox)
Create a policy on each machine where the agent is installed.
-### Do the initial backup offline
+### Run the initial backup offline
-You can run an initial backup automatically over the network, or you can back up offline. Offline seeding for an initial backup is useful if you have large amounts of data that will require a lot of network bandwidth to transfer.
+You can run an initial backup automatically over the network, or you can back up offline. Offline seeding for an initial backup is useful if you have large amounts of data that would require a lot of network bandwidth to transfer.
-To do an offline transfer:
+To do an offline transfer, follow these steps:
1. Write the backup data to a staging location.
1. Use the AzureOfflineBackupDiskPrep tool to copy the data from the staging location to one or more SATA disks.
To enable network throttling:
1. In the MARS agent, select **Change Properties**.
1. On the **Throttling** tab, select **Enable internet bandwidth usage throttling for backup operations**.
- ![Set up network throttling for backup operations](./media/backup-configure-vault/throttling-dialog.png)
+ [ ![Screenshot shows how to set up network throttling for backup operations.](./media/backup-configure-vault/throttling-dialog.png) ](./media/backup-configure-vault/throttling-dialog.png#lightbox)
1. Specify the allowed bandwidth during work hours and nonwork hours. Bandwidth values begin at 512 Kbps and go up to 1,023 Mbps. Then select **OK**.

## Run an on-demand backup
+To run an on-demand backup, follow these steps:
+ 1. In the MARS agent, select **Back Up Now**.
- ![Back up now in Windows Server](./media/backup-configure-vault/backup-now.png)
+ [ ![Screenshot shows the Back up now option in Windows Server.](./media/backup-configure-vault/backup-now.png) ](./media/backup-configure-vault/backup-now.png#lightbox)
1. If the MARS agent version is 2.0.9169.0 or newer, then you can set a custom retention date. In the **Retain Backup Till** section, choose a date from the calendar.
- ![Use the calendar to customize a retention date](./media/backup-configure-vault/mars-ondemand.png)
+ [ ![Screenshot shows how to use the calendar to customize a retention date.](./media/backup-configure-vault/mars-ondemand.png) ](./media/backup-configure-vault/mars-ondemand.png#lightbox)
1. On the **Confirmation** page, review the settings, and select **Back Up**.
1. Select **Close** to close the wizard. If you close the wizard before the backup finishes, the wizard continues to run in the background.
After the initial backup finishes, the **Job completed** status appears in the B
## Set up on-demand backup policy retention behavior
-> [!NOTE]
-> This information applies only to MARS agent versions that are older than 2.0.9169.0.
->
+The following table shows the data retention duration for various backup schedules:
| Backup-schedule option | Duration of data retention |
| -- | -- |
-| Day | **Default retention**: Equivalent to the "retention in days for daily backups." <br/><br/> **Exception**: If a daily scheduled backup that's set for long-term retention (weeks, months, or years) fails, an on-demand backup that's triggered right after the failure is considered for long-term retention. Otherwise, the next scheduled backup is considered for long-term retention.<br/><br/> **Example scenario**: The scheduled backup on Thursday at 8:00 AM failed. This backup was to be considered for weekly, monthly, or yearly retention. So the first on-demand backup triggered before the next scheduled backup on Friday at 8:00 AM is automatically tagged for weekly, monthly, or yearly retention. This backup substitutes for the Thursday 8:00 AM backup.
-| Week | **Default retention**: One day. On-demand backups that are taken for a data source that has a weekly backup policy are deleted the next day. They're deleted even if they're the most recent backups for the data source. <br/><br/> **Exception**: If a weekly scheduled backup that's set for long-term retention (weeks, months, or years) fails, an on-demand backup that's triggered right after the failure is considered for long-term retention. Otherwise, the next scheduled backup is considered for long-term retention. <br/><br/> **Example scenario**: The scheduled backup on Thursday at 8:00 AM failed. This backup was to be considered for monthly or yearly retention. So the first on-demand backup that's triggered before the next scheduled backup on Thursday at 8:00 AM is automatically tagged for monthly or yearly retention. This backup substitutes for the Thursday 8:00 AM backup.
+| Day | **Default retention**: Equivalent to the "retention in days for daily backups." <br/><br/> **Exception**: If a daily scheduled backup that's set for long-term retention (weeks, months, or years) fails, an on-demand backup that's triggered right after the failure is considered for long-term retention. Otherwise, the next scheduled backup is considered for long-term retention.<br/><br/> **Example scenario**: The scheduled backup on Thursday at 8:00 AM failed. This backup was to be considered for weekly, monthly, or yearly retention. So the first on-demand backup triggered before the next scheduled backup on Friday at 8:00 AM is automatically tagged for weekly, monthly, or yearly retention. This backup substitutes for the Thursday 8:00 AM backup.
+| Week | **Default retention**: One day. On-demand backups that are taken for a data source that has a weekly backup policy are deleted the next day. They're deleted even if they're the most recent backups for the data source. <br/><br/> **Exception**: If a weekly scheduled backup that's set for long-term retention (weeks, months, or years) fails, an on-demand backup that's triggered right after the failure is considered for long-term retention. Otherwise, the next scheduled backup is considered for long-term retention. <br/><br/> **Example scenario**: The scheduled backup on Thursday at 8:00 AM failed. This backup was to be considered for monthly or yearly retention. So the first on-demand backup that's triggered before the next scheduled backup on Thursday at 8:00 AM is automatically tagged for monthly or yearly retention. This backup substitutes for the Thursday 8:00 AM backup.
For more information, see [Create a backup policy](#create-a-backup-policy).
+> [!NOTE]
+> This information applies only to MARS agent versions that are older than 2.0.9169.0.
## Next steps

* Learn how to [Restore files in Azure](backup-azure-restore-windows-server.md).
bastion Bastion Connect Vm Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-scale-set.md
description: Learn how to connect to an Azure virtual machine scale set using Az
Previously updated : 05/24/2022 Last updated : 06/23/2023
This section shows you the basic steps to connect to your virtual machine scale
1. On the **Bastion** page, fill in the required settings. The settings you can select depend on the virtual machine to which you're connecting, and the [Bastion SKU](configuration-settings.md#skus) tier that you're using. The Standard SKU gives you more connection options than the Basic SKU. For more information about settings, see [Bastion configuration settings](configuration-settings.md).
- :::image type="content" source="./media/bastion-connect-vm-scale-set/connection-settings.png" alt-text="Screenshot shows connection settings options with Open in new browser tab selected." lightbox="./media/bastion-connect-vm-scale-set/connection-settings.png":::
-
1. After filling in the values on the Bastion page, select **Connect** to connect to the instance.

## Next steps
-Read the [Bastion FAQ](bastion-faq.md).
+Read the [Bastion FAQ](bastion-faq.md).
bastion Bastion Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-nsg.md
description: Learn about using network security groups with Azure Bastion.
Previously updated : 06/21/2021 Last updated : 06/23/2023 # Working with NSG access and Azure Bastion
bastion Connect Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md
description: Learn how to connect to your virtual machines using a specified pri
Previously updated : 04/26/2022 Last updated : 06/26/2023
bastion Connect Vm Native Client Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-linux.md
description: Learn how to connect to a VM from a Linux computer by using Bastion
Previously updated : 06/12/2023 Last updated : 06/23/2023 # Connect to a VM using Bastion and a Linux native client
-This article helps you connect to a VM in the VNet using the native client (SSH or RDP) on your local computer using the **az network bastion tunnel** command. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). For more information and steps to configure Bastion for native client connections, see [Configure Bastion for native client connections](native-client.md). Connections via native client require the Bastion Standard SKU.
+This article helps you connect via Azure Bastion to a VM in a VNet using the native client on your local Linux computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). For more information and steps to configure Bastion for native client connections, see [Configure Bastion for native client connections](native-client.md). Connections via native client require the Bastion Standard SKU.
:::image type="content" source="./media/native-client/native-client-architecture.png" alt-text="Diagram shows a connection via native client." lightbox="./media/native-client/native-client-architecture.png":::
-After you've configured Bastion for native client support, you can connect to a VM using the **az network bastion tunnel** command. When you use this command, you can do the following:
+After you've configured Bastion for native client support, you can connect to a VM using a native Linux client. The method you use to connect depends on both the client you're connecting from, and the VM you're connecting to. The following list shows some of the available ways you can connect from a Linux native client. See [Connect to VMs](native-client.md#connect) for the full list showing available client connection/feature combinations.
- * Use native clients on *non*-Windows local computers (example: a Linux computer).
- * Use the native client of your choice. (This includes the Windows native client.)
- * Connect using SSH or RDP. (The bastion tunnel doesn't relay web servers or hosts.)
- * Set up concurrent VM sessions with Bastion.
- * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
+* Connect to a Linux VM using **az network bastion ssh**.
+* Connect to a Windows VM using **az network bastion tunnel**.
+* Connect to any VM using **az network bastion tunnel**.
+* [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM over SSH using **az network bastion tunnel**. File download from the target VM to the local client is currently not supported for this command.
-Limitations:
-
-* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine.
-* This feature isn't supported on Cloud Shell.
-
-## <a name="prereq"></a>Prerequisites
+## Prerequisites
[!INCLUDE [VM connect prerequisites](../../includes/bastion-native-pre-vm-connect.md)]
-## <a name="verify"></a>Verify roles and ports
+## Verify roles and ports
Verify that the following roles and ports are configured in order to connect to the VM. [!INCLUDE [roles and ports](../../includes/bastion-native-roles-ports.md)]
-## <a name="connect-tunnel"></a>Connect to a VM
+## <a name="ssh"></a>Connect to a Linux VM
-This section helps you connect to your virtual machine from native clients on *non*-Windows local computers (example: Linux) using the **az network bastion tunnel** command. You can also connect using this method from a Windows computer. This is helpful when you require an SSH connection and want to upload files to your VM. The bastion tunnel supports RDP/SSH connection, but doesn't relay web servers or hosts.
+The steps in the following sections help you connect to a Linux VM from a Linux native client using the **az network bastion ssh** command. The required SSH extension can be installed by running `az extension add --name ssh`.
-This connection supports file upload from the local computer to the target VM. For more information, see [Upload files](vm-upload-download-native.md).
+When you connect using this command, file transfers aren't supported. If you want to upload files, connect using the [az network bastion tunnel](#tunnel) command instead.
+This command lets you do the following:
-## <a name="connect-IP"></a>Connect to VM via IP Address
+* Connect to a Linux VM using SSH.
+* Authenticate via Azure Active Directory.
+* Connect to concurrent VM sessions within the virtual network.
+To sign in, use one of the following examples. Once you sign in to your target VM, the native client on your computer opens up with your VM session.
+
+**SSH key pair**
+
+To sign in to your VM using an SSH key pair, use the following example.
+
+```azurecli
+az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+```
+
+**Azure AD authentication**
+
+If you're signing in to an Azure AD login-enabled VM, use the following example. For more information, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).
+
+```azurecli
+az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "AAD"
+```
+
+**Username/password**
-Use the following command as an example:
-
- **Tunnel:**
-
- ```azurecli
- az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
- ```
+If you're signing in to your VM using a local username and password, use the following example. You'll then be prompted for the password for the target VM.
+
+```azurecli
+az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "password" --username "<Username>"
+```
+
+#### <a name="VM-IP"></a>SSH to a Linux VM IP address
+
+You can connect to a VM private IP address instead of the resource ID. Be aware that Azure AD authentication and custom ports and protocols aren't supported when using this type of connection. For more information about IP-based connections, see [Connect to a VM - IP address](connect-ip-address.md).
+
+Using the `az network bastion` command, replace `--target-resource-id` with `--target-ip-address` and the specified IP address to connect to your VM. The following example uses `--ssh-key` as the authentication method.
+
+```azurecli
+az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+```
+
+## <a name="tunnel"></a>Connect to a VM - tunnel command
++
+### <a name="tunnel-IP"></a>Tunnel to a VM IP address
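For example, a tunnel to a VM's private IP address can be opened with a command like the following. Your native client then connects to the chosen local port, for instance with `ssh <username>@127.0.0.1 -p <LocalMachinePort>` from a second terminal.

```azurecli
az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>"
```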
+

## Next steps
bastion Connect Vm Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-windows.md
description: Learn how to connect to a VM from a Windows computer by using Basti
Previously updated : 06/12/2023 Last updated : 06/23/2023
This article helps you connect to a VM in the VNet using the native client (SSH
:::image type="content" source="./media/native-client/native-client-architecture.png" alt-text="Diagram shows a connection via native client." lightbox="./media/native-client/native-client-architecture.png":::
-After you've configured Bastion for native client support, you can connect to a VM using the native Windows client. This lets you do the following:
+After you've configured Bastion for native client support, you can connect to a VM using a native Windows client. The method you use to connect depends on both the client you're connecting from, and the VM you're connecting to. The following list shows some of the available ways you can connect from a Windows native client. See [Connect to VMs](native-client.md#connect) for the full list showing available client connection/feature combinations.
- * Connect using SSH or RDP.
- * [Upload and download files](vm-upload-download-native.md#rdp) over RDP.
- * If you want to connect using SSH and need to upload files to your target VM, you can use the instructions for the [az network bastion tunnel](connect-vm-native-client-linux.md) command instead.
-
-Limitations:
-
-* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine.
-* This feature isn't supported on Cloud Shell.
+* Connect to a Windows VM using **az network bastion rdp**.
+* Connect to a Linux VM using **az network bastion ssh**.
+* Connect to a VM using **az network bastion tunnel**.
+* [Upload and download files](vm-upload-download-native.md#rdp) over RDP.
+* Upload files over SSH using **az network bastion tunnel**.
## <a name="prereq"></a>Prerequisites
Verify that the following roles and ports are configured in order to connect to
[!INCLUDE [roles and ports](../../includes/bastion-native-roles-ports.md)]
-## <a name="connect-windows"></a>Connect to a Windows VM
+## Connect to a VM
-1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
-
- ```azurecli
- az login
- az account list
- az account set --subscription "<subscription ID>"
- ```
+The steps in the following sections help you connect to a VM from a Windows native client using the **az network bastion** command.
-1. Sign in to your target Windows VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
+### <a name="connect-windows"></a>RDP to a Windows VM
- **RDP:**
+1. Sign in to your Azure account using `az login`. If you have more than one subscription, you can view them using `az account list` and select the subscription containing your Bastion resource using `az account set --subscription "<subscription ID>"`.
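   For reference, the sign-in sequence looks like this:

   ```azurecli
   az login
   az account list
   az account set --subscription "<subscription ID>"
   ```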
- To connect via RDP, use the following command. You'll then be prompted to input your credentials. You can use either a local username and password, or your Azure AD credentials. For more information, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
+1. To connect via RDP, use the following example.
```azurecli
az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>"
```
+1. After running the command, you're prompted to input your credentials. You can use either a local username and password, or your Azure AD credentials. Once you sign in to your target VM, the native client on your computer opens up with your VM session via **MSTSC**.
+ > [!IMPORTANT]
- > Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM.
+ > Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM.
- **SSH:**
+#### Specify authentication method
- The extension can be installed by running, ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example.
+Optionally, you can also specify the authentication method as part of the command.
- ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
- ```
+* **Azure AD authentication:** `--auth-type "AAD"`. For more information, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
- Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+* **User name and password:** `--auth-type "password" --username "<Username>"`
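For example, to require Azure AD authentication for the RDP session, append the flag to the command shown earlier:

```azurecli
az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "AAD"
```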
-## <a name="connect-linux"></a>Connect to a Linux VM
+#### Specify a custom port
-1. Sign in to your Azure account. If you have more than one subscription, select the subscription containing your Bastion resource.
+You can specify a custom port when you connect to a Windows VM via RDP.
- ```azurecli
- az login
- az account list
- az account set --subscription "<subscription ID>"
- ```
+One scenario where a custom port is especially useful is connecting to a Windows VM via port 22. This approach works around a limitation of the *az network bastion ssh* command, which a Windows native client can't use to connect to a Windows VM.
+
+To specify a custom port, include the field **--resource-port** in the sign-in command, as shown in the following example.
+
+```azurecli
+az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --resource-port "22"
+```
+
+#### RDP to a Windows VM IP address
+
+You can also connect to a VM private IP address instead of the resource ID. Azure AD authentication and custom ports and protocols aren't supported when using this type of connection. For more information about IP-based connections, see [Connect to a VM - IP address](connect-ip-address.md).
-1. Sign in to your target Linux VM using one of the following example options. If you want to specify a custom port value, you should also include the field **--resource-port** in the sign-in command.
+Using the `az network bastion` command, replace `--target-resource-id` with `--target-ip-address` and the specified IP address to connect to your VM.
+
+```azurecli
+az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>"
+```
+
+### <a name="connect-linux"></a>SSH to a Linux VM
+
+1. Sign in to your Azure account using `az login`. If you have more than one subscription, you can view them using `az account list` and select the subscription containing your Bastion resource using `az account set --subscription "<subscription ID>"`.
+
+1. Sign in to your target Linux VM using one of the following example options. If you want to specify a custom port value, include the field **--resource-port** in the sign-in command.
**Azure AD:** If you're signing in to an Azure AD login-enabled VM, use the following command. For more information, see [Azure Linux VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md).

   ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "AAD"
+ az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "AAD"
```
- **SSH:**
+ **SSH key pair:**
The extension can be installed by running `az extension add --name ssh`. To sign in using an SSH key pair, use the following example.
Verify that the following roles and ports are configured in order to connect to
az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId or VMSSInstanceResourceId>" --auth-type "password" --username "<Username>"
```
- 1. Once you sign in to your target VM, the native client on your computer opens up with your VM session; **MSTSC** for RDP sessions, and **SSH CLI extension (az ssh)** for SSH sessions.
+1. Once you sign in to your target VM, the native client on your computer opens up with your VM session using **SSH CLI extension (az ssh)**.
-## <a name="connect-IP"></a>Connect to VM via IP Address
+#### SSH to a Linux VM IP address
+You can also connect to a VM private IP address instead of the resource ID. Azure AD authentication and custom ports and protocols aren't supported when using this type of connection. For more information about IP-based connections, see [Connect to a VM - IP address](connect-ip-address.md).
-Use the following commands as examples:
+Using the `az network bastion` command, replace `--target-resource-id` with `--target-ip-address` and the specified IP address to connect to your VM.
- **RDP:**
-
- ```azurecli
- az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>
- ```
-
- **SSH:**
-
- ```azurecli
- az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-addres "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
- ```
+```azurecli
+az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
+```
+
+## Connect to a VM - tunnel command
++
+### <a name="tunnel-IP"></a>Tunnel to a VM IP address
+

## Next steps
bastion Kerberos Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/kerberos-authentication-portal.md
-# Configure Bastion for Kerberos authentication using the Azure portal (Preview)
+# Configure Bastion for Kerberos authentication using the Azure portal
This article shows you how to configure Azure Bastion to use Kerberos authentication. Kerberos authentication can be used with both the Basic and the Standard Bastion SKUs. For more information about Kerberos authentication, see the [Kerberos authentication overview](/windows-server/security/kerberos/kerberos-authentication-overview). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md) ## Considerations
-* During Preview, the Kerberos setting for Azure Bastion can be configured in the Azure portal only and not with native client.
+* The Kerberos setting for Azure Bastion can be configured in the Azure portal only and not with native client.
* VMs migrated from on-premises to Azure are not currently supported for Kerberos.
* Cross-realm authentication is not currently supported for Kerberos.
* Changes to DNS server are not currently supported for Kerberos. After making any changes to DNS server, you will need to delete and re-create the Bastion resource.
bastion Native Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/native-client.md
description: Learn how to configure Bastion for native client connections.
Previously updated : 06/19/2023 Last updated : 06/23/2023
If you've already deployed Bastion to your VNet, modify the following configurat
:::image type="content" source="./media/native-client/update-host.png" alt-text="Screenshot that shows settings for updating an existing host with Native Client Support box selected." lightbox="./media/native-client/update-host.png":::
-## <a name="secure "></a>Secure your native client connection
+## <a name="secure"></a>Secure your native client connection
If you want to further secure your native client connection, you can limit port access by only providing access to port 22/3389. To restrict port access, you must deploy the following NSG rules on your AzureBastionSubnet to allow access to select ports and deny access from any other ports.

:::image type="content" source="./media/native-client/network-security-group.png" alt-text="Screenshot that shows NSG configurations." lightbox="./media/native-client/network-security-group.png":::
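As an illustration only (the exact rules depend on your deployment; the NSG name, subnet prefix, and priority here are hypothetical), a rule allowing just SSH and RDP from the AzureBastionSubnet could be created with the Azure CLI like this:

```azurecli
az network nsg rule create --resource-group "<ResourceGroupName>" --nsg-name "<NsgName>" --name "AllowBastionSshRdpOnly" --priority 100 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes "<AzureBastionSubnetPrefix>" --destination-port-ranges 22 3389
```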
-## Connecting to VMs
+## <a name="connect"></a>Connecting to VMs
-After you deploy this feature, there are different connection instructions, depending on the host computer you're connecting from.
+After you deploy this feature, there are different connection instructions, depending on the host computer you're connecting from, and the client VM to which you're connecting.
-* [Connect from the native client on a Windows computer](connect-vm-native-client-windows.md). This lets you do the following:
+Use the following table to understand how to connect from native clients. Notice that different supported combinations of native client and target VMs allow for different features and require specific commands.
- * Connect using SSH or RDP.
- * [Upload and download files](vm-upload-download-native.md#rdp) over RDP.
- * If you want to connect using SSH and need to upload files to your target VM, you can use the instructions for the [az network bastion tunnel](connect-vm-native-client-linux.md) command instead.
+| Client | Target VM | Method | Azure Active Directory authentication | File transfer | Concurrent VM sessions | Custom port |
+|---|---|---|---|---|---|---|
+| Windows native client | Windows VM | [RDP](connect-vm-native-client-windows.md) | Yes | [Upload/Download](vm-upload-download-native.md#rdp) | Yes | Yes |
+| | Linux VM | [SSH](connect-vm-native-client-windows.md) | Yes | No | Yes | Yes |
+| | Any VM | [az network bastion tunnel](connect-vm-native-client-windows.md) | No | [Upload](vm-upload-download-native.md#tunnel-command) | No | No |
+| Linux native client | Linux VM | [SSH](connect-vm-native-client-linux.md#ssh) | Yes | No | Yes | Yes |
+| | Windows or any VM | [az network bastion tunnel](connect-vm-native-client-linux.md) | No | [Upload](vm-upload-download-native.md#tunnel-command) | No | No |
+| Other native client (putty) | Any VM | [az network bastion tunnel](connect-vm-native-client-linux.md) | No | [Upload](vm-upload-download-native.md#tunnel-command) | No | No |
-* [Connect using the **az network bastion tunnel** command](connect-vm-native-client-linux.md). This lets you do the following:
-
- * Use native clients on *non*-Windows local computers (example: a Linux PC).
- * Use the native client of your choice. (This includes the Windows native client.)
- * Connect using SSH or RDP. (The bastion tunnel doesn't relay web servers or hosts.)
- * Set up concurrent VM sessions with Bastion.
- * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
-
-### Limitations
+**Limitations:**
* Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to a Linux VM using an SSH key pair, download your private key to a file on your local machine.
* Connecting using a native client isn't supported on Cloud Shell.
bastion Vnet Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vnet-peering.md
description: Learn how VNet peering and Azure Bastion can be used together to co
Previously updated : 12/06/2021 Last updated : 06/23/2023
bastion Work Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/work-remotely-support.md
Title: Enable remote work by using Azure Bastion description: Learn how to use Azure Bastion to enable remote access to virtual machines. -+ Previously updated : 03/25/2020- Last updated : 06/23/2023+
cloud-shell Private Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/private-vnet.md
-description: Deploy Cloud Shell into an Azure virtual network
+description: This article describes a scenario for using Azure Cloud Shell in a private virtual network.
ms.contributor: jahelmic Previously updated : 11/14/2022 Last updated : 06/21/2023
-tags: azure-resource-manager
Title: Cloud Shell in an Azure virtual network
+ Title: Using Cloud Shell in an Azure virtual network
-# Deploy Cloud Shell into an Azure virtual network
+# Using Cloud Shell in an Azure virtual network
-A regular Cloud Shell session runs in a container in a Microsoft network separate from your
-resources. Commands running inside the container can't access resources that can only be accessed
-from a specific virtual network. For example, you can't use SSH to connect from Cloud Shell to a
-virtual machine that only has a private IP address, or use `kubectl` to connect to a Kubernetes
-cluster that has locked down access.
+By default, Cloud Shell sessions run in a container in a Microsoft network that's separate from your
+resources. Commands run inside the container can't access resources in a private virtual network.
+For example, you can't use SSH to connect from Cloud Shell to a virtual machine that only has a
+private IP address, or use `kubectl` to connect to a Kubernetes cluster that has locked down access.
-This optional feature addresses these limitations and allows you to deploy Cloud Shell into an Azure
-virtual network that you control. From there, the container is able to interact with resources
-within the virtual network you select.
+To provide access to your private resources, you can deploy Cloud Shell into an Azure Virtual
+Network that you control. This is referred to as _VNET isolation_.
-The following diagram shows the resource architecture that's deployed and used in this scenario.
+## Benefits to VNET isolation with Azure Cloud Shell
-![Illustrates the Cloud Shell isolated VNET architecture.][06]
+Deploying Azure Cloud Shell in a private VNET offers several benefits:
-Before you can use Cloud Shell in your own Azure Virtual Network, you need to create several
-resources. This article shows how to set up the required resources using an ARM template.
+- The resources you want to manage don't have to have public IP addresses.
+- You can use command line tools, SSH, and PowerShell remoting from the Cloud Shell container to
+ manage your resources.
+- The storage account that Cloud Shell uses doesn't have to be publicly accessible.
-> [!NOTE]
-> These resources only need to be set up once for the virtual network. They can then be shared by
-> all administrators with access to the virtual network.
-
-## Required network resources
-
-### Virtual network
-
-A virtual network defines the address space in which one or more subnets are created.
-
-You need to identify the virtual network to be used for Cloud Shell. Usually, you want to use an
-existing virtual network that contains resources you want to manage or a network that peers with
-networks that contain your resources.
-
-### Subnet
-
-Within the selected virtual network, a dedicated subnet must be used for Cloud Shell containers.
-This subnet is delegated to the Azure Container Instances (ACI) service. When a user requests a
-Cloud Shell container in a virtual network, Cloud Shell uses ACI to create a container that's in
-this delegated subnet. No other resources can be created in this subnet.
-
-### Network profile
-
-A network profile is a network configuration template for Azure resources that specifies certain
-network properties for the resource.
-
-### Azure Relay
-
-An [Azure Relay][01] allows two endpoints that aren't directly reachable to communicate. In this
-case, it's used to allow the administrator's browser to communicate with the container in the
-private network.
-
-The Azure Relay instance used for Cloud Shell can be configured to control which networks can access
-container resources:
--- Accessible from the public internet: In this configuration, Cloud Shell provides a way to reach
- the internal resources from outside.
-- Accessible from specified networks: In this configuration, administrators must access the Azure
- portal from a computer running in the appropriate network to be able to use Cloud Shell.
-- Disabled: When the networking for relay is set to disabled, the computer running Azure Cloud Shell
- must be able to reach the private endpoint connected to the relay.
-
-## Storage requirements
-
-As in standard Cloud Shell, a storage account is required while using Cloud Shell in a virtual
-network. Each administrator needs a fileshare to store their files. The storage account needs to be
-accessible from the virtual network that's used by Cloud Shell.
-
-> [!NOTE]
-> Secondary storage regions are currently not supported in Cloud Shell VNET scenarios.
-
-## Virtual network deployment limitations
+## Things to consider before deploying Azure Cloud Shell in a VNET
- Starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.
-- All Cloud Shell primary regions, except Central India, are supported.
-- [Azure Relay][01] is a paid service. See the [pricing][04] guide. In the Cloud Shell scenario, one
- hybrid connection is used for each administrator while they're using Cloud Shell. The connection
- is automatically shut down after the Cloud Shell session is ended.
-
-## Register the resource provider
-
-The Microsoft.ContainerInstances resource provider needs to be registered in the subscription that
-holds the virtual network you want to use. Select the appropriate subscription with
-`Set-AzContext -Subscription {subscriptionName}`, and then run:
-
-```powershell
-PS> Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance | select ResourceTypes,RegistrationState
-
-ResourceTypes RegistrationState
-- --
-{containerGroups} Registered
-...
-```
-
-If **RegistrationState** is `Registered`, no action is required. If it's `NotRegistered`, run
-`Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance`.
-
-## Deploy network resources
-
-### Create a resource group and virtual network
-
-If you already have a desired VNET that you would like to connect to, skip this section.
-
-In the Azure portal, or using Azure CLI, Azure PowerShell, etc. create a resource group and a
-virtual network in the new resource group, **the resource group and virtual network need to be in
-the same region**.
-
-### ARM templates
-
-Use the [Azure Quickstart Template][03] for creating Cloud Shell resources in a virtual network,
-and the [Azure Quickstart Template][05] for creating necessary storage. Take note of your resource
-names, primarily your fileshare name.
-
-### Open relay firewall
-
-By default the relay is only accessible from the virtual network where it was created. To open
-access, navigate to the relay created using the previous template, select "Networking" in settings,
-allow access from your browser network to the relay.
-
-### Configuring Cloud Shell to use a virtual network.
-
-> [!NOTE]
-> This step must be completed for each administrator that uses Cloud Shell.
+- VNET isolation requires you to use [Azure Relay][01], which is a paid service. In the Cloud Shell
+ scenario, one hybrid connection is used for each administrator while they're using Cloud Shell.
+ The connection is automatically closed when the Cloud Shell session ends.
-After deploying and completing the previous steps, open Cloud Shell. One of these experiences must
-be used each time you want to connect to an isolated Cloud Shell experience.
+## Architecture
-> [!NOTE]
-> If Cloud Shell has been used in the past, the existing clouddrive must be unmounted. To do this
-> run `clouddrive unmount` from an active Cloud Shell session, refresh your page.
+The following diagram shows the resource architecture that you must build to enable this scenario.
-Connect to Cloud Shell. You'll be prompted with the first run experience. Select your preferred
-shell experience, select **Show advanced settings** and select the **Show VNET isolation settings**
-box. Fill in the fields in the form. Most fields will be autofilled to the available resources that
-can be associated with Cloud Shell in a virtual network. You must provide a name for the fileshare.
+![Illustration of Cloud Shell isolated VNET architecture.][03]
-![Illustrates the Cloud Shell isolated VNET first experience settings.][07]
+- **Customer Client Network** - Client users can be located anywhere on the Internet to securely
+ access and authenticate to the Azure portal and use Cloud Shell to manage resources contained in
+ the customer's subscription. For stricter security, you can allow users to launch Cloud Shell only
+ from the virtual network contained in your subscription.
+- **Microsoft Network** - Customers connect to the Azure portal on Microsoft's network to
+ authenticate and launch Cloud Shell.
+- **Customer Virtual Network** - This is the network that contains the subnets to support VNET
+ isolation. Resources such as virtual machines and services are directly accessible from Cloud
+ Shell without the need to assign a public IP address.
+- **Azure Relay** - An [Azure Relay][01] allows two endpoints that aren't directly reachable to
+ communicate. In this case, it's used to allow the administrator's browser to communicate with the
+ container in the private network.
+- **File share** - Cloud Shell requires a storage account that is accessible from the virtual
+ network. The storage account provides the file share used by Cloud Shell users.
-## Next steps
+## Related links
-[Learn about Azure Virtual Networks][02]
+For more information, see the [pricing][02] guide.
<!-- link references --> [01]: ../azure-relay/relay-what-is-it.md
-[02]: ../virtual-network/virtual-networks-overview.md
-[03]: https://aka.ms/cloudshell/docs/vnet/template
-[04]: https://azure.microsoft.com/pricing/details/service-bus/
-[05]: https://azure.microsoft.com/resources/templates/cloud-shell-vnet-storage/
-[06]: media/private-vnet/data-diagram.png
-[07]: media/private-vnet/vnet-settings.png
+[02]: https://azure.microsoft.com/pricing/details/service-bus/
+[03]: media/private-vnet/data-diagram.png
cloud-shell Quickstart Deploy Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart-deploy-vnet.md
+
+description: This article provides step-by-step instructions to deploy Azure Cloud Shell in a private virtual network.
+ms.contributor: jahelmic
Last updated : 06/07/2023+
+ Title: Deploy Azure Cloud Shell in a VNET with quickstart templates
++
+# Deploy Azure Cloud Shell in a VNET with quickstart templates
+
+Deploying Azure Cloud Shell in a virtual network (VNET) configuration using the quickstart
+templates requires several prerequisites to be completed before running the templates.
+
+This document guides you through the process to complete the configuration.
+
+## Steps to deploy Azure Cloud Shell in a VNET
+
+This article walks you through the following steps to deploy Azure Cloud Shell in a VNET:
+
+1. Collect the required information
+1. Provision the virtual networks using the **Azure Cloud Shell - VNet** ARM template
+1. Provision the VNET storage account using the **Azure Cloud Shell - VNet storage** ARM template
+1. Configure and use Azure Cloud Shell in a VNET
+
+## 1. Collect the required information
+
+There are several pieces of information that you need to collect before you can deploy Azure Cloud Shell.
+You can use the default Azure Cloud Shell instance to gather the required information and create the
+necessary resources. You should create dedicated resources for the Azure Cloud Shell VNET
+deployment. All resources must be in the same Azure region and contained in the same resource group.
+
+- **Subscription** - The name of your subscription containing the resource group used for the Azure
+ Cloud Shell VNET deployment
+- **Resource Group** - The name of the resource group used for the Azure Cloud Shell VNET deployment
+- **Region** - The location of the resource group
+- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell VNET
+- **Azure Container Instance ID** - The ID of the Azure Container Instance for your resource group
+- **Azure Relay Namespace** - The name that you want to assign to the Relay resource created by the
+ template
+
+### Create a resource group
+
+You can create the resource group using the Azure portal, Azure CLI, or Azure PowerShell. For more
+information, see the following articles:
+
+- [Manage Azure resource groups by using the Azure portal][02]
+- [Manage Azure resource groups by using Azure CLI][01]
+- [Manage Azure resource groups by using Azure PowerShell][03]
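+
+For example, the following minimal sketch, assuming the Az module and example values, creates the
+dedicated resource group with Azure PowerShell:
+
+```powershell
+# Create the resource group that will hold all Cloud Shell VNET resources.
+New-AzResourceGroup -Name 'rg-cloudshell-eastus' -Location 'eastus'
+```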
+
+### Create a virtual network
+
+You can create the virtual network using the Azure portal, Azure CLI, or Azure PowerShell. For more
+information, see the following articles:
+
+- [Use the Azure portal to create a virtual network][05]
+- [Use Azure PowerShell to create a virtual network][06]
+- [Use Azure CLI to create a virtual network][04]
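+
+For example, the following minimal sketch, assuming the Az.Network module and example values,
+creates the virtual network with Azure PowerShell. The templates in the following sections create
+the required subnets for you.
+
+```powershell
+# Create the virtual network that will contain the Cloud Shell subnets.
+New-AzVirtualNetwork -Name 'vnet-cloudshell-eastus' `
+    -ResourceGroupName 'rg-cloudshell-eastus' `
+    -Location 'eastus' `
+    -AddressPrefix '10.0.0.0/16'
+```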
+
+### Register the resource provider
+
+Azure Cloud Shell runs in a container. The **Microsoft.ContainerInstance** resource provider needs
+to be registered in the subscription that holds the virtual network for your deployment. Depending
+on when your tenant was created, the provider may already be registered.
+
+Use the following commands to check the registration status.
+
+```powershell
+Set-AzContext -Subscription MySubscriptionName
+Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance |
+ Select-Object ResourceTypes, RegistrationState
+```
+
+```Output
+ResourceTypes                                      RegistrationState
+-------------                                      -----------------
+{containerGroups}                                  Registered
+{serviceAssociationLinks}                          Registered
+{locations}                                        Registered
+{locations/capabilities}                           Registered
+{locations/usages}                                 Registered
+{locations/operations}                             Registered
+{locations/operationresults}                       Registered
+{operations}                                       Registered
+{locations/cachedImages}                           Registered
+{locations/validateDeleteVirtualNetworkOrSubnets}  Registered
+{locations/deleteVirtualNetworkOrSubnets}          Registered
+```
+
+If **RegistrationState** for `{containerGroups}` is `NotRegistered`, run the following command to
+register the provider:
+
+```powershell
+Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance
+```
+
+### Azure Container Instance Id
+
+To configure the VNET for Cloud Shell using the quickstarts, retrieve the `Azure Container Instance`
+ID for your organization.
+
+```powershell
+Get-AzADServicePrincipal -DisplayNameBeginsWith 'Azure Container Instance'
+```
+
+```Output
+DisplayName                      Id                                   AppId
+-----------                      --                                   -----
+Azure Container Instance Service 8fe7fd25-33fe-4f89-ade3-0e705fcf4370 34fbe509-d6cb-4813-99df-52d944bfd95a
+```
+
+Take note of the **Id** value for the `Azure Container Instance` service principal. It's needed for
+the **Azure Cloud Shell - VNet** template.
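+
+Optionally, capture the value in a variable so you can reuse it later. This sketch assumes the
+same command shown above; note that the filter can return more than one principal in some tenants.
+
+```powershell
+# Store the Azure Container Instance service principal ID for later use.
+$aciOid = (Get-AzADServicePrincipal -DisplayNameBeginsWith 'Azure Container Instance').Id
+$aciOid
+```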
+
+## 2. Provision the virtual network using the ARM template
+
+Use the [Azure Cloud Shell - VNet][07] template to create Cloud Shell resources in a virtual
+network. The template creates three subnets under the virtual network created earlier. You may
+choose to change the supplied names of the subnets or use the defaults. The virtual network, along
+with the subnets, requires valid IP address assignments.
+
+The ARM template requires specific information about the resources you created earlier, along with
+naming information for new resources. Enter this information in the form alongside the prefilled
+values.
+
+Information needed for the template:
+
+- **Subscription** - The name of your subscription containing the resource group for Azure Cloud
+ Shell VNET
+- **Resource Group** - The resource group name of either an existing or newly created resource group
+- **Region** - Location of the resource group
+- **Virtual Network** - The name of the virtual network created for Azure Cloud Shell VNET
+- **Azure Container Instance OID** - The ID of the Azure Container Instance for your resource group
+
+Fill out the form with the following information:
+
+| Project details | Value |
+| --- | --- |
+| Subscription | Defaults to the current subscription context.<br>For this example, we're using `MyCompany Subscription` |
+| Resource group | Enter the name of the resource group from the prerequisite information.<br>For this example, we're using `rg-cloudshell-eastus`. |
+
+| Instance details | Value |
+| - | - |
+| Region | Prefilled with your default region.<br>For this example, we're using `East US`. |
+| Existing VNET Name | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `vnet-cloudshell-eastus`. |
+| Relay Namespace Name | Create a name that you want to assign to the Relay resource created by the template.<br>For this example, we're using `arn-cloudshell-eastus`. |
+| Azure Container Instance OID | Fill in the value from the prerequisite information you gathered.<br>For this example, we're using `8fe7fd25-33fe-4f89-ade3-0e705fcf4370`. |
+| Container Subnet Name | Defaults to `cloudshellsubnet`. Enter the name of the subnet containing your container. |
+| Container Subnet Address Prefix | For this example, we use `10.0.1.0/24`. |
+| Relay Subnet Name | Defaults to `relaysubnet`. Enter the name of the subnet containing your relay. |
+| Relay Subnet Address Prefix | For this example, we use `10.0.2.0/24`. |
+| Storage Subnet Name | Defaults to `storagesubnet`. Enter the name of the subnet containing your storage. |
+| Storage Subnet Address Prefix | For this example, we use `10.0.3.0/24`. |
+| Private Endpoint Name | Defaults to `cloudshellRelayEndpoint`. Enter the name for the private endpoint to the relay. |
+| Tag Name | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. |
+| Location | Defaults to `[resourceGroup().location]`. Leave unchanged. |
+
+Once the form is complete, select **Review + Create** and deploy the network ARM template to your
+subscription.
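+
+As an alternative to the portal form, you can deploy a downloaded copy of the template from the
+command line. This is a hedged sketch: the parameter names below are assumptions based on the form
+fields, so check the template itself for the exact names. The storage template in the next section
+can be deployed the same way.
+
+```powershell
+# Deploy the network ARM template with the values gathered earlier.
+New-AzResourceGroupDeployment -ResourceGroupName 'rg-cloudshell-eastus' `
+    -TemplateFile './azure-cloud-shell-vnet.json' `
+    -existingVNETName 'vnet-cloudshell-eastus' `
+    -relayNamespaceName 'arn-cloudshell-eastus' `
+    -azureContainerInstanceOID $aciOid
+```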
+
+## 3. Provision the VNET storage using the ARM template
+
+Use the [Azure Cloud Shell - VNet storage][08] template to create Cloud Shell resources in a virtual
+network. The template creates the storage account and assigns it to the private VNET.
+
+The ARM template requires specific information about the resources you created earlier, along
+with naming information for new resources.
+
+Information needed for the template:
+
+- **Subscription** - The name of the subscription containing the resource group for Azure Cloud
+ Shell VNET.
+- **Resource Group** - The resource group name of either an existing or newly created resource group
+- **Region** - Location of the resource group
+- **Existing VNET name** - The name of the virtual network created earlier
+- **Existing Storage Subnet Name** - The name of the storage subnet created with the Network
+ quickstart template
+- **Existing Container Subnet Name** - The name of the container subnet created with the Network
+ quickstart template
+
+Fill out the form with the following information:
+
+| Project details | Value |
+| --- | --- |
+| Subscription | Defaults to the current subscription context.<br>For this example, we're using `MyCompany Subscription` |
+| Resource group | Enter the name of the resource group from the prerequisite information.<br>For this example, we're using `rg-cloudshell-eastus`. |
+
+| Instance details | Value |
+| --- | --- |
+| Region | Prefilled with your default region.<br>For this example, we're using `East US`. |
+| Existing VNET Name | For this example, we're using `vnet-cloudshell-eastus`. |
+| Existing Storage Subnet Name | Fill in the name of the resource created by the network template. |
+| Existing Container Subnet Name | Fill in the name of the resource created by the network template. |
+| Storage Account Name | Create a name for the new storage account. Storage account names must be 3 to 24 characters, using lowercase letters and numbers only.<br>For this example, we're using `myvnetstorage`. |
+| File Share Name | Defaults to `acsshare`. Enter the name of the file share you want to create. |
+| Resource Tags | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. |
+| Location | Defaults to `[resourceGroup().location]`. Leave unchanged. |
+
+Once the form is complete, select **Review + Create** and deploy the storage ARM template to your
+subscription.
+
+## 4. Configure Cloud Shell to use a virtual network
+
+After deploying your private Cloud Shell instance, each Cloud Shell user must change their
+configuration to use the new private instance.
+
+If you have used the default Cloud Shell before deploying the private instance, you must reset your
+user settings.
+
+1. Open Cloud Shell.
+1. Select **Cloud Shell settings** from the menu bar (gear icon).
+1. Select **Reset user settings**, and then select **Reset**.
+
+Resetting the user settings triggers the first-time user experience the next time you start Cloud
+Shell.
+
+[ ![Screenshot of Cloud Shell storage dialog box.](media/quickstart-deploy-vnet/setup-cloud-shell-storage.png) ](media/quickstart-deploy-vnet/setup-cloud-shell-storage.png#lightbox)
+
+1. Choose your preferred shell experience (Bash or PowerShell).
+1. Select **Show advanced settings**.
+1. Select the **Show VNET isolation settings** checkbox.
+1. Choose the **Subscription** containing your private Cloud Shell instance.
+1. Choose the **Region** containing your private Cloud Shell instance.
+1. Select the **Resource group** name containing your private Cloud Shell instance. If you have
+ selected the correct resource group, the **Virtual network**, **Network profile**, and **Relay
+ namespace** should be automatically populated with the correct values.
+1. Enter the name for the **File share** you created with the storage template.
+1. Select **Create storage**.
+
+## Next steps
+
+You must complete the Cloud Shell configuration steps for each user who needs to use the new
+private Cloud Shell instance.
+
+<!-- link references -->
+[01]: /azure/azure-resource-manager/management/manage-resource-groups-cli
+[02]: /azure/azure-resource-manager/management/manage-resource-groups-portal
+[03]: /azure/azure-resource-manager/management/manage-resource-groups-powershell
+[04]: /azure/virtual-network/quick-create-cli
+[05]: /azure/virtual-network/quick-create-portal
+[06]: /azure/virtual-network/quick-create-powershell
+[07]: https://aka.ms/cloudshell/docs/vnet/template
+[08]: https://azure.microsoft.com/resources/templates/cloud-shell-vnet-storage/
communication-services Inbound Calling Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/inbound-calling-capabilities.md
Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel. You
Supported in General Availability, to set up inbound calling in Omnichannel for Customer Service with direct routing or Voice Calling (PSTN) follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
-**Inbound calling with ACS Call Automation SDK**
+**Inbound calling with Azure Communication Services Call Automation SDK**
Call Automation enables you to build custom calling workflows within your applications to optimize business processes and boost customer satisfaction. The library supports managing incoming calls to the phone numbers acquired from Communication Services and incoming calls via Direct Routing. You can also use Call Automation to place outbound calls from the phone numbers owned by your resource, among other capabilities.
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated`
Many countries/regions and states have laws and regulations that apply to call recording. PSTN, voice, and video calls often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.
-Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call.
+Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the Azure Communication Services User Identity in the `participants` array with your internal user identities to identify participants in a call.
## Next steps For more information, see the following articles:
communication-services Add Custom Verified Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-custom-verified-domains.md
To provision a custom domain, you need to:
- Click **Add domain** on the upper navigation bar. - Select **Custom domain** from the dropdown.
-3. Navigate to "Add a custom Domain".
+3. Navigate to "Add a custom Domain."
4. Enter your "Domain Name" and re enter domain name. 5. Click **Confirm** :::image type="content" source="./media/email-domains-custom-add.png" alt-text="Screenshot that shows where to enter the custom domain value."::: 6. Ensure that domain name isn't misspelled or click edit to correct the domain name and confirm.+ 7. Click **Add** :::image type="content" source="./media/email-domains-custom-add-confirm.png" alt-text="Screenshot that shows how to add a custom domain of your choice.":::
To provision a custom domain, you need to:
:::image type="content" source="./media/email-domains-custom-verify.png" alt-text="Screenshot that shows the Configure link that you need to click to verify domain ownership." lightbox="media/email-domains-custom-verify-expanded.png":::
-12. You need add the above TXT record to your domain's registrar or DNS hosting provider. Click **Next** once you've completed this step.
+12. Add the above TXT record to your domain's registrar or DNS hosting provider. Refer to the [adding DNS records in popular domain registrars table](#txt-records) for information on how to add a TXT record for your DNS provider.
+
+Click **Next** once you've completed this step.
13. Verify that the TXT record is created successfully in your DNS, and click **Done**. 14. DNS changes can take 15 to 30 minutes. Click **Close**
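
To confirm that the TXT record has propagated before you select **Done**, you can query DNS yourself. A minimal sketch, assuming the `Resolve-DnsName` cmdlet that ships with Windows PowerShell and a hypothetical domain:

```powershell
# List the TXT records for the domain and look for the verification value
# shown in the portal.
Resolve-DnsName -Name 'contoso.com' -Type TXT |
    Select-Object -ExpandProperty Strings
```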
To provision a custom domain, you need to:
### Configure sender authentication for custom domain 1. Navigate to **Provision Domains** and confirm that **Domain Status** is in "Verified" state.
-2. You can add SPF and DKIM by clicking **Configure**. You need add the following TXT record and CNAME records to your domain's registrar or DNS hosting provider. Click **Next** once you've completed this step.
+2. You can add SPF and DKIM by clicking **Configure**. Add the following TXT record and CNAME records to your domain's registrar or DNS hosting provider. Refer to the [adding DNS records in popular domain registrars table](#cname-records) for information on how to add a TXT & CNAME record for your DNS provider.
+Click **Next** once you've completed this step.
:::image type="content" source="./media/email-domains-custom-spf.png" alt-text="Screenshot that shows the D N S records that you need to add for S P F validation for your verified domains.":::- :::image type="content" source="./media/email-domains-custom-dkim-1.png" alt-text="Screenshot that shows the D N S records that you need to add for D K I M.":::- :::image type="content" source="./media/email-domains-custom-dkim-2.png" alt-text="Screenshot that shows the D N S records that you need to add for additional D K I M records."::: 3. Verify that TXT and CNAME records are created successfully in your DNS and Click **Done**
You can optionally configure your MailFrom address to be something other than th
**Your email domain is now ready to send emails.**
+## Adding DNS records in popular domain registrars
+
+### TXT records
+
+The following links provide additional information on how to add a TXT record using many of the popular domain registrars.
+
+| Registrar Name | Documentation Link |
+| --- | --- |
+| IONOS by 1 & 1 | [Steps 1-7](/microsoft-365/admin/dns/create-dns-records-at-1-1-internet?view=o365-worldwide#:~:text=Add%20a%20TXT%20record%20for%20verification,created%20can%20update%20across%20the%20Internet.&preserve-view=true)
+| 123-reg.co.uk | [Steps 1-6](/microsoft-365/admin/dns/create-dns-records-at-123-reg-co-uk?view=o365-worldwide#:~:text=DNS%20records.-,Add%20a%20TXT%20record%20for%20verification,that%20the%20record%20you%20just%20created%20can%20update%20across%20the%20Internet.,-Now%20that%20you%27ve&preserve-view=true)
+| Amazon Web Services (AWS) | [Steps 1-8](/microsoft-365/admin/dns/create-dns-records-at-aws?view=o365-worldwide#:~:text=DNS%20records.-,Add%20a%20TXT%20record%20for%20verification,that%20the%20record%20you%20just%20created%20can%20update%20across%20the%20Internet.,-Now%20that%20you%27ve&preserve-view=true)
+| Cloudflare | [Steps 1-6](/microsoft-365/admin/dns/create-dns-records-at-cloudflare?view=o365-worldwide#:~:text=Add%20a%20TXT,across%20the%20Internet.&preserve-view=true)
+| GoDaddy | [Steps 1-6](/microsoft-365/admin/dns/create-dns-records-at-godaddy?view=o365-worldwide#:~:text=Add%20a%20TXT,across%20the%20Internet.&preserve-view=true)
+| Namecheap | [Steps 1-9](/microsoft-365/admin/dns/create-dns-records-at-namecheap?view=o365-worldwide#:~:text=DNS%20records.-,Add%20a%20TXT%20record%20for%20verification,that%20the%20record%20you%20just%20created%20can%20update%20across%20the%20Internet.,-Now%20that%20you%27ve&preserve-view=true)
+| Network Solutions | [Steps 1-9](/microsoft-365/admin/dns/create-dns-records-at-network-solutions?view=o365-worldwide#:~:text=DNS%20records.-,Add%20a%20TXT%20record%20for%20verification,that%20the%20record%20you%20just%20created%20can%20update%20across%20the%20Internet,-.&preserve-view=true)
+| OVH | [Steps 1-9](/microsoft-365/admin/dns/create-dns-records-at-ovh?view=o365-worldwide#:~:text=DNS%20records.-,Add%20a%20TXT%20record%20for%20verification,that%20the%20record%20you%20just%20created%20can%20update%20across%20the%20Internet.,-Now%20that%20you%27ve&preserve-view=true)
+| web.com | [Steps 1-8](/microsoft-365/admin/dns/create-dns-records-at-web-com?view=o365-worldwide#:~:text=with%20your%20domain.-,Add%20a%20TXT%20record%20for%20verification,that%20the%20record%20you%20just%20created%20can%20update%20across%20the%20Internet,-.&preserve-view=true)
+| Wix | [Steps 1-5](/microsoft-365/admin/dns/create-dns-records-at-wix?view=o365-worldwide#:~:text=DNS%20records.-,Add%20a%20TXT%20record%20for%20verification,that%20the%20record%20you%20just%20created%20can%20update%20across%20the%20Internet,-.&preserve-view=true)
+| Other (General) | [Steps 1-4](/microsoft-365/admin/get-help-with-domains/create-dns-records-at-any-dns-hosting-provider?view=o365-worldwide#:~:text=Recommended%3A%20Verify%20with,domain%20is%20verified.&preserve-view=true)
+
+### CNAME records
+
+The following links provide additional information on how to add a CNAME record using many of the popular domain registrars. Make sure to use the values from the configuration window rather than the values in the documentation link.
+
+| Registrar Name | Documentation Link |
+| --- | --- |
+| IONOS by 1 & 1 | [Steps 1-10](/microsoft-365/admin/dns/create-dns-records-at-1-1-internet?view=o365-worldwide#:~:text=Add%20the%20CNAME,Select%20Save.&preserve-view=true)
+| 123-reg.co.uk | [Steps 1-6](/microsoft-365/admin/dns/create-dns-records-at-123-reg-co-uk?view=o365-worldwide#:~:text=for%20that%20record.-,Add%20the%20CNAME%20record%20required%20for%20Microsoft,Select%20Add.,-Add%20a%20TXT&preserve-view=true)
+| Amazon Web Services (AWS) | [Steps 1-8](/microsoft-365/admin/dns/create-dns-records-at-aws?view=o365-worldwide#:~:text=selecting%20Delete.-,Add%20the%20CNAME%20record%20required%20for%20Microsoft%20365,Select%20Create%20records.,-Add%20a%20TXT&preserve-view=true)
+| Cloudflare | [Steps 1-6](/microsoft-365/admin/dns/create-dns-records-at-cloudflare?view=o365-worldwide#:~:text=Add%20the%20CNAME,Select%20Save.&preserve-view=true)
+| GoDaddy | [Steps 1-6](/microsoft-365/admin/dns/create-dns-records-at-godaddy?view=o365-worldwide#:~:text=Add%20the%20CNAME,Select%20Save.&preserve-view=true)
+| Namecheap | [Steps 1-8](/microsoft-365/admin/dns/create-dns-records-at-namecheap?view=o365-worldwide#:~:text=in%20this%20procedure.-,Add%20the%20CNAME%20record%20required%20for%20Microsoft,Select%20the%20Save%20Changes%20(check%20mark)%20control.,-Add%20a%20TXT&preserve-view=true)
+| Network Solutions | [Steps 1-9](/microsoft-365/admin/dns/create-dns-records-at-network-solutions?view=o365-worldwide#:~:text=for%20each%20record.-,Add%20the%20CNAME%20record%20required%20for%20Microsoft,View%20in%20the%20upper%20right%20to%20view%20the%20record%20you%20created.,-Add%20a%20TXT&preserve-view=true)
+| OVH | [Steps 1-8](/microsoft-365/admin/dns/create-dns-records-at-ovh?view=o365-worldwide#add-the-cname-record-required-for-microsoft:~:text=Select%20Confirm.-,Add%20the%20CNAME%20record%20required%20for%20Microsoft,Select%20Confirm.,-Add%20a%20TXT&preserve-view=true)
+| web.com | [Steps 1-8](/microsoft-365/admin/dns/create-dns-records-at-web-com?view=o365-worldwide#add-the-cname-record-required-for-microsoft:~:text=for%20each%20record.-,Add%20the%20CNAME%20record%20required%20for%20Microsoft,Select%20ADD.,-Add%20a%20TXT&preserve-view=true)
+| Wix | [Steps 1-5](/microsoft-365/admin/dns/create-dns-records-at-wix?view=o365-worldwide#add-the-cname-record-required-for-microsoft:~:text=Select%20Save.-,Add%20the%20CNAME%20record%20required%20for%20Microsoft,that%20the%20record%20you%20just%20created%20can%20update%20across%20the%20Internet.,-Add%20a%20TXT&preserve-view=true)
+| Other (General) | [Guide](/microsoft-365/admin/dns/create-dns-records-using-windows-based-dns?view=o365-worldwide#add-cname-records:~:text=%3E%20OK.-,Add%20CNAME%20records,Select%20OK.,-Add%20the%20SIP&preserve-view=true)
## Next steps * [Get started by connecting Email Communication Service with an Azure Communication Services resource](../../quickstarts/email/connect-email-communication-resource.md)
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
ms.suite: integration Previously updated : 02/21/2023 Last updated : 06/23/2023 tags: connectors
tags: connectors
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-This article shows how to access your Azure Blob Storage account and container from a workflow in Azure Logic Apps using the Azure Blob Storage connector. This connector provides triggers and actions that your workflow can use for blob operations. You can then create automated workflows that run when triggered by events in your storage container or in other systems, and run actions to work with data in your storage container.
-
-For example, you can access and manage files stored as blobs in your Azure storage account.
+This how-to guide shows how to access your Azure Blob Storage account and container from a workflow in Azure Logic Apps using the Azure Blob Storage connector. This connector provides triggers and actions that your workflow can use for blob operations. You can then create automated workflows that run when triggered by events in your storage container or in other systems, and run actions to work with data in your storage container. For example, you can access and manage files stored as blobs in your Azure storage account.
You can connect to Azure Blob Storage from a workflow in **Logic App (Consumption)** and **Logic App (Standard)** resource types. You can use the connector with logic app workflows in multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE). With **Logic App (Standard)**, you can use either the **Azure Blob** *built-in* connector operations or the **Azure Blob Storage** managed connector operations.
The following steps use the Azure portal, but with the appropriate Azure Logic A
### [Consumption](#tab/consumption)
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-
-1. On the designer, under the search box, select **Standard**. In the search box, enter **Azure blob**.
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and blank workflow in the designer.
-1. From the triggers list, select the trigger that you want.
+1. In the designer, under the search box, select **Standard**, and then [follow these general steps to add the Azure Blob Storage managed trigger you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
This example continues with the trigger named **When a blob is added or modified (properties only)**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-add-trigger.png" alt-text="Screenshot showing Azure portal, Consumption workflow designer, and Azure Blob Storage trigger selected.":::
- 1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**. | Property | Required | Description |
The following steps use the Azure portal, but with the appropriate Azure Logic A
| Property | Required | Value | Description | |-|-|-|-|
- | **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
+ | **Azure Storage Account name** | Yes, but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
| **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-create-connection.png" alt-text="Screenshot showing Consumption workflow, Azure Blob Storage trigger, and example connection information.":::
+ [![Screenshot showing Consumption workflow, Azure Blob Storage trigger, and example connection information.](./media/connectors-create-api-azureblobstorage/consumption-trigger-create-connection.png)](./media/connectors-create-api-azureblobstorage/consumption-trigger-create-connection.png#lightbox)
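As an alternative to copying the key from the portal, you can retrieve it with Azure PowerShell. A minimal sketch, assuming the Az.Storage module and hypothetical resource names:

```powershell
# Get the primary access key for the storage account.
$key = (Get-AzStorageAccountKey -ResourceGroupName 'my-resource-group' `
    -Name 'mystorageaccount')[0].Value

# The built-in connector's connection-string authentication uses the standard
# storage connection string format.
"DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=$key;EndpointSuffix=core.windows.net"
```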
1. After the trigger information box appears, provide the necessary information. For the **Container** property value, select the folder icon to browse for your blob container. Or, enter the path manually using the syntax **/<*container-name*>**, for example:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-information.png" alt-text="Screenshot showing Consumption workflow with Azure Blob Storage trigger, and example trigger information.":::
+ [![Screenshot showing Consumption workflow with Azure Blob Storage trigger, and example trigger information.](./media/connectors-create-api-azureblobstorage/consumption-trigger-information.png)](./media/connectors-create-api-azureblobstorage/consumption-trigger-information.png#lightbox)
1. To add other properties available for this trigger, open the **Add new parameter list**, and select the properties that you want.
The steps to add and use a Blob trigger differ based on whether you want to use
#### Built-in connector trigger
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-
-1. On the designer, select **Choose an operation**. Under the **Choose an operation** search box, select **Built-in**.
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and blank workflow in the designer.
-1. In the search box, enter **Azure blob**. From the triggers list, select the trigger that you want.
+1. In the designer, [follow these general steps to find and add the Azure Blob Storage built-in trigger you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
This example continues with the trigger named **When a blob is added or updated**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-built-in-trigger.png" alt-text="Screenshot showing Azure portal, Standard workflow designer, and Azure Blob built-in trigger selected.":::
- 1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**. | Property | Required | Description |
The steps to add and use a Blob trigger differ based on whether you want to use
|-|-|-|-| | **Storage account connection string** | Yes, <br>but only for connection string authentication | <*storage-account-connection-string*> | The connection string for your Azure storage account. <br><br>**Note**: To find the connection string, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Connection string** > **Show**. Copy and save the connection string for the primary key. |
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob built-in trigger, and example connection information.":::
+ [![Screenshot showing Standard workflow, Azure Blob built-in trigger, and example connection information.](./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-create-connection.png)](./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-create-connection.png#lightbox)
1. After the trigger information box appears, provide the necessary information.
The steps to add and use a Blob trigger differ based on whether you want to use
The following example shows a trigger setup that checks the root folder for a newly added blob:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-root-folder.png" alt-text="Screenshot showing Standard workflow with Azure Blob built-in trigger set up for root folder.":::
+ [![Screenshot showing Standard workflow with Azure Blob built-in trigger set up for root folder.](./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-root-folder.png)](./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-root-folder.png#lightbox)
The following example shows a trigger setup that checks a subfolder for changes to an existing blob:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-subfolder-existing-blob.png" alt-text="Screenshot showing Standard workflow with Azure Blob built-in trigger set up for a subfolder and specific blob.":::
+ [![Screenshot showing Standard workflow with Azure Blob built-in trigger set up for a subfolder and specific blob.](./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-subfolder-existing-blob.png)](./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-subfolder-existing-blob.png#lightbox)
1. Add any other actions that your workflow requires.
The steps to add and use a Blob trigger differ based on whether you want to use
#### Managed connector trigger
-1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and blank workflow in the designer.
-1. On the designer, select **Choose an operation**. Under the search box, select **Azure**.
-
-1. In the search box, enter **Azure blob**.
-
-1. From the triggers list, select the trigger that you want.
+1. In the designer, [follow these general steps to find and add the Azure Blob Storage managed trigger you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
This example continues with the trigger named **When a blob is added or modified (properties only)**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-managed-trigger.png" alt-text="Screenshot showing Azure portal, Standard logic app workflow designer, and Azure Blob Storage managed trigger selected.":::
- 1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**. | Property | Required | Description |
The steps to add and use a Blob trigger differ based on whether you want to use
| **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. | | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-trigger-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob managed trigger, and example connection information.":::
+ [![Screenshot showing Standard workflow, Azure Blob managed trigger, and example connection information.](./media/connectors-create-api-azureblobstorage/standard-managed-trigger-create-connection.png)](./media/connectors-create-api-azureblobstorage/standard-managed-trigger-create-connection.png#lightbox)
1. After the trigger information box appears, provide the necessary information. For the **Container** property value, select the folder icon to browse for your blob storage container. Or, enter the path manually using the syntax **/<*container-name*>**, for example:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-trigger.png" alt-text="Screenshot showing Azure Blob Storage managed trigger with parameters configuration.":::
+ [![Screenshot showing Azure Blob Storage managed trigger with parameters configuration.](./media/connectors-create-api-azureblobstorage/standard-managed-trigger.png)](./media/connectors-create-api-azureblobstorage/standard-managed-trigger.png#lightbox)
1. To add other properties available for this trigger, open the **Add new parameter list** and select those properties. For more information, review [Azure Blob Storage managed connector trigger properties](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)-(v2)).
The following steps use the Azure portal, but with the appropriate Azure Logic A
### [Consumption](#tab/consumption)
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and workflow in the designer.
-1. If your workflow is blank, add the trigger that your workflow requires.
+1. If your workflow is blank, add the trigger that your scenario requires.
This example uses the [**Recurrence** trigger](connectors-native-recurrence.md).
-1. In your workflow where you want to add the Blob action, follow one of these steps:
-
- - To add an action under the last step, select **New step**.
-
- - To add an action between steps, move your pointer use over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **Azure blob**.
-
-1. From the actions list, select the action that you want.
+1. In the designer, [follow these general steps to find and add the Azure Blob Storage managed action you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
This example continues with the action named **Get blob content**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-add-action.png" alt-text="Screenshot showing Azure portal, Consumption workflow designer, and Azure Blob Storage action selected.":::
- 1. If prompted, provide the following information for your connection. When you're done, select **Create**. | Property | Required | Description |
The following steps use the Azure portal, but with the appropriate Azure Logic A
| **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. | | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-create-connection.png" alt-text="Screenshot showing Consumption workflow, Azure Blob action, and example connection information.":::
+ [![Screenshot showing Consumption workflow, Azure Blob action, and example connection information.](./media/connectors-create-api-azureblobstorage/consumption-action-create-connection.png)](./media/connectors-create-api-azureblobstorage/consumption-action-create-connection.png#lightbox)
1. In the action information box, provide the necessary information.
The following steps use the Azure portal, but with the appropriate Azure Logic A
The following example shows the action setup that gets the content from a blob in the root folder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-root-folder.png" alt-text="Screenshot showing Consumption workflow with Blob action setup for root folder.":::
+ [![Screenshot showing Consumption workflow with Blob action setup for root folder.](./media/connectors-create-api-azureblobstorage/consumption-action-root-folder.png)](./media/connectors-create-api-azureblobstorage/consumption-action-root-folder.png#lightbox)
The following example shows the action setup that gets the content from a blob in the subfolder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-sub-folder.png" alt-text="Screenshot showing Consumption workflow with Blob action setup for subfolder.":::
+ [![Screenshot showing Consumption workflow with Blob action setup for subfolder.](./media/connectors-create-api-azureblobstorage/consumption-action-sub-folder.png)](./media/connectors-create-api-azureblobstorage/consumption-action-sub-folder.png#lightbox)
1. Add any other actions that your workflow requires.
The steps to add and use an Azure Blob action differ based on whether you want t
#### Built-in connector action
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and workflow in the designer.
-1. If your workflow is blank, add the trigger that your workflow requires.
+1. If your workflow is blank, add the trigger that your scenario requires.
This example uses the [**Recurrence** trigger](connectors-native-recurrence.md).
-1. In your workflow where you want to add the Blob action, follow one of these steps:
-
- - To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
-
- - To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
-
-1. On the **Add an action** pane, under the search box, select **Built-in**. In the search box, enter **Azure blob**.
-
-1. From the actions list, select the action that you want.
+1. In the designer, [follow these general steps to find and add the Azure Blob Storage built-in action you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
This example continues with the action named **Read blob content**, which only reads the blob content. To later view the content, add a different action that creates a file with the blob content using another connector. For example, you can add a OneDrive action that creates a file based on the blob content.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-built-in-action.png" alt-text="Screenshot showing Azure portal, Standard workflow designer, and Azure Blob built-in action selected.":::
- 1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**. | Property | Required | Description |
The steps to add and use an Azure Blob action differ based on whether you want t
|-|-|-|-| | **Storage account connection string** | Yes, <br>but only for connection string authentication | <*storage-account-connection-string*> | The connection string for your Azure storage account. <br><br>**Note**: To find the connection string, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Connection string** > **Show**. Copy and save the connection string for the primary key. |
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob built-in trigger, and example connection information.":::
+ [![Screenshot showing Standard workflow, Azure Blob built-in trigger, and example connection information.](./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-create-connection.png)](./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-create-connection.png#lightbox)
1. In the action information box, provide the necessary information.
The steps to add and use an Azure Blob action differ based on whether you want t
#### Managed connector action
-1. In the [Azure portal](https://portal.azure.com), open your workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and workflow in the designer.
-1. If your workflow is blank, add any trigger that you want.
+1. If your workflow is blank, add the trigger that your scenario requires.
This example starts with the [**Recurrence** trigger](connectors-native-recurrence.md).
-1. In your workflow where you want to add the Blob action, follow one of these steps:
-
- - To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
-
- - To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **Azure blob**.
-
-1. From the actions list, select the Blob action that you want.
+1. In the designer, [follow these general steps to find and add the Azure Blob Storage managed action you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
This example continues with the action named **Get blob content**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-managed-action.png" alt-text="Screenshot showing Azure portal, Standard workflow designer, and Azure Blob Storage managed action selected.":::
- 1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**. | Property | Required | Description |
The steps to add and use an Azure Blob action differ based on whether you want t
| **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. | | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-action-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob Storage managed action, and example connection information.":::
+ [![Screenshot showing Standard workflow, Azure Blob Storage managed action, and example connection information.](./media/connectors-create-api-azureblobstorage/standard-managed-action-create-connection.png)](./media/connectors-create-api-azureblobstorage/standard-managed-action-create-connection.png#lightbox)
1. In the action information box, provide the necessary information.
The steps to add and use an Azure Blob action differ based on whether you want t
||| | Get the content from a specific blob in the root folder. | **/<*container-name*>/<*blob-name*>** | | Get the content from a specific blob in a subfolder. | **/<*container-name*>/<*subfolder*>/<*blob-name*>** |
- |||
The following example shows the action setup that gets the content from a blob in the root folder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-action-root-folder.png" alt-text="Screenshot showing Consumption logic app workflow designer with Blob action setup for root folder.":::
+ [![Screenshot showing Consumption logic app workflow designer with Blob action setup for root folder.](./media/connectors-create-api-azureblobstorage/standard-managed-action-root-folder.png)](./media/connectors-create-api-azureblobstorage/standard-managed-action-root-folder.png#lightbox)
The following example shows the action setup that gets the content from a blob in the subfolder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-action-sub-folder.png" alt-text="Screenshot showing Consumption logic app workflow designer with Blob action setup for subfolder.":::
+ [![Screenshot showing Consumption logic app workflow designer with Blob action setup for subfolder.](./media/connectors-create-api-azureblobstorage/standard-managed-action-sub-folder.png)](./media/connectors-create-api-azureblobstorage/standard-managed-action-sub-folder.png#lightbox)
1. Add any other actions that your workflow requires.
To add your outbound IP addresses to the storage account firewall, follow these
1. Under **Firewall**, add the IP addresses or ranges that need access. If you need to access the storage account from your computer, select **Add your client IP address**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/storage-ip-configure.png" alt-text="Screenshot of blob storage account networking page in Azure portal, showing firewall settings to add IP addresses and ranges to the allowlist.":::
+ [![Screenshot of blob storage account networking page in Azure portal, showing firewall settings to add IP addresses and ranges to the allowlist.](./media/connectors-create-api-azureblobstorage/storage-ip-configure.png)](./media/connectors-create-api-azureblobstorage/storage-ip-configure.png#lightbox)
1. When you're done, select **Save**.
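You can also manage the allowlist from the command line. A minimal sketch, assuming the Az.Storage module and hypothetical values:

```powershell
# Add one outbound IP address (or a CIDR range) to the storage account firewall.
Add-AzStorageAccountNetworkRule -ResourceGroupName 'my-resource-group' `
    -Name 'mystorageaccount' `
    -IPAddressOrRange '203.0.113.25'
```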
To set up the exception and managed identity support, first configure appropriat
1. Under **Exceptions**, select **Allow trusted Microsoft services to access this storage account**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/storage-networking-configure.png" alt-text="Screenshot showing Azure portal and Blob Storage account networking pane with allow settings.":::
+ [![Screenshot showing Azure portal and Blob Storage account networking pane with allow settings.](./media/connectors-create-api-azureblobstorage/storage-networking-configure.png)](./media/connectors-create-api-azureblobstorage/storage-networking-configure.png#lightbox)
1. When you're done, select **Save**.
The following steps are the same for Consumption logic apps in multi-tenant envi
1. On the **System assigned** pane, set **Status** to **On**, if not already enabled, select **Save**, and confirm your changes. Under **Permissions**, select **Azure role assignments**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/role-assignment-add-1.png" alt-text="Screenshot showing the Azure portal and logic app resource menu with the 'Identity' settings pane and 'Azure role assignment permissions' button.":::
+ [![Screenshot showing the Azure portal and logic app resource menu with the 'Identity' settings pane and 'Azure role assignment permissions' button.](./media/connectors-create-api-azureblobstorage/role-assignment-add-1.png)](./media/connectors-create-api-azureblobstorage/role-assignment-add-1.png#lightbox)
1. On the **Azure role assignments** pane, select **Add role assignment**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/role-assignment-add-2.png" alt-text="Screenshot showing the logic app role assignments pane with the selected subscription and button to add a new role assignment.":::
+ [![Screenshot showing the logic app role assignments pane with the selected subscription and button to add a new role assignment.](./media/connectors-create-api-azureblobstorage/role-assignment-add-2.png)](./media/connectors-create-api-azureblobstorage/role-assignment-add-2.png#lightbox)
1. On the **Add role assignments** pane, set up the new role assignment with the following values:
The following steps are the same for Consumption logic apps in multi-tenant envi
| **Resource** | <*storage-account-name*> | The name for the storage account that you want to access from your logic app workflow. | | **Role** | <*role-to-assign*> | The role that your scenario requires for your workflow to work with the resource. This example requires **Storage Blob Data Contributor**, which allows read, write, and delete access to blob containers and data. For permissions details, move your mouse over the information icon next to a role in the drop-down menu. |
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/role-assignment-configure.png" alt-text="Screenshot of role assignment configuration pane, showing settings for scope, subscription, resource, and role.":::
+ [![Screenshot of role assignment configuration pane, showing settings for scope, subscription, resource, and role.](./media/connectors-create-api-azureblobstorage/role-assignment-configure.png)](./media/connectors-create-api-azureblobstorage/role-assignment-configure.png#lightbox)
1. When you're done, select **Save** to finish creating the role assignment.
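You can create the same role assignment from the command line. A minimal sketch, assuming the Az.Resources module, with placeholder values for the identity's principal ID and the storage account's resource ID:

```powershell
# Grant the logic app's managed identity blob data access on the storage account.
New-AzRoleAssignment -ObjectId '<managed-identity-principal-id>' `
    -RoleDefinitionName 'Storage Blob Data Contributor' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>'
```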
container-apps Jobs Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-cli.md
To use manual jobs, you first create a job with trigger type `Manual` and then s
az containerapp job create \ --name "$JOB_NAME" --resource-group "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type "Manual" \
- --replica-timeout 60 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
--image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" ```
Create a job in the Container Apps environment that starts every minute using th
az containerapp job create \ --name "$JOB_NAME" --resource-group "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type "Schedule" \
- --replica-timeout 60 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
--image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" \ --cron-expression "*/1 * * * *"
container-apps Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs.md
To create a manual job using the Azure CLI, use the `az containerapp job create`
az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Manual" \
- --replica-timeout 60 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
--image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" ```
The following example Azure Resource Manager template creates a manual job named
"replicaCompletionCount": 1 }, "replicaRetryLimit": 1,
- "replicaTimeout": 60,
+ "replicaTimeout": 1800,
"triggerType": "Manual" }, "environmentId": "/subscriptions/<subscription_id>/resourceGroups/my-resource-group/providers/Microsoft.App/managedEnvironments/my-environment",
To create a scheduled job using the Azure CLI, use the `az containerapp job crea
az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Schedule" \
- --replica-timeout 60 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
--image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" \ --cron-expression "0 0 * * *"
The following example Azure Resource Manager template creates a manual job named
"replicaCompletionCount": 1 }, "replicaRetryLimit": 1,
- "replicaTimeout": 60,
+ "replicaTimeout": 1800,
"triggerType": "Schedule" }, "environmentId": "/subscriptions/<subscription_id>/resourceGroups/my-resource-group/providers/Microsoft.App/managedEnvironments/my-environment",
To create an event-driven job using the Azure CLI, use the `az containerapp job
az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Event" \
- --replica-timeout 60 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
+ --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \
--image "docker.io/myuser/my-event-driven-job:latest" \ --cpu "0.25" --memory "0.5Gi" \ --min-executions "0" \
The following example Azure Resource Manager template creates an event-driven jo
} }, "replicaRetryLimit": 1,
- "replicaTimeout": 60,
+ "replicaTimeout": 1800,
"triggerType": "Event", "secrets": [ {
container-apps Tutorial Ci Cd Runners Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-ci-cd-runners-jobs.md
You can now create a job that uses the container image. In this section,
```bash az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Event \
- --replica-timeout 300 \
+ --replica-timeout 1800 \
--replica-retry-limit 1 \ --replica-completion-count 1 \ --parallelism 1 \
You can now create a job that uses the container image. In this section,
```powershell az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" ` --trigger-type Event `
- --replica-timeout 300 `
+ --replica-timeout 1800 `
--replica-retry-limit 1 ` --replica-completion-count 1 ` --parallelism 1 `
Now that you have a placeholder agent, you can create a self-hosted agent. In th
```bash az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Event \
- --replica-timeout 300 \
+ --replica-timeout 1800 \
--replica-retry-limit 1 \ --replica-completion-count 1 \ --parallelism 1 \
az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$E
```powershell az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \ --trigger-type Event \
- --replica-timeout 300 \
+ --replica-timeout 1800 \
--replica-retry-limit 1 \ --replica-completion-count 1 \ --parallelism 1 \
container-apps Tutorial Event Driven Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-event-driven-jobs.md
To deploy the job, you must first build a container image for the job and push i
--resource-group "$RESOURCE_GROUP" \ --environment "$ENVIRONMENT" \ --trigger-type "Event" \
- --replica-timeout "60" \
+ --replica-timeout "1800" \
--replica-retry-limit "1" \ --replica-completion-count "1" \ --parallelism "1" \
container-registry Container Registry Tutorial Build Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-build-task.md
az acr task create \
--registry $ACR_NAME \ --name taskhelloworld \ --image helloworld:{{.Run.ID}} \
- --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#main \
+ --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#master \
--file Dockerfile \ --git-access-token $GIT_PAT ```
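After creating the task, you can verify it builds without waiting for a commit to the repository. This is a minimal sketch using the same `$ACR_NAME` and task name as above:

```azurecli
# Manually trigger the ACR task once to verify the build
az acr task run --registry $ACR_NAME --name taskhelloworld
```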
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Azure Cosmos DB for MongoDB vCore supports the following indexes and index prope
| `Single Field Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes | | `Compound Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes | | `Multikey Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
-| `Text Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No |
+| `Text Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes |
| `Geospatial Index` | :::image type="icon" source="media/compatibility/no-icon.svg"::: No | | `Hashed Index` | :::image type="icon" source="media/compatibility/yes-icon.svg"::: Yes | | `Vector Index (only available in Cosmos DB)` | :::image type="icon" source="medi) |
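Since `Text Index` is now listed as supported, a quick hedged way to try it from a shell is via `mongosh`; the connection string variable, the `products` collection, and the `description` field are placeholders:

```bash
# Sketch: create a text index on a "description" field via mongosh
mongosh "$COSMOS_MONGO_VCORE_CONNECTION_STRING" \
  --eval 'db.products.createIndex({ description: "text" })'
```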
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Updates that change cluster internals, such as installing a [new minor PostgreSQ
* Learn about [cluster node availability zones](./concepts-cluster.md#node-availability-zone) and [how to set preferred availability zone](./howto-scale-grow.md#choose-preferred-availability-zone). * General availability: The new domain name and FQDN format for cluster nodes. The change applies to newly provisioned clusters only. * See [details](./concepts-node-domain-name.md).
+* Preview: Audit logging of database activities in Azure Cosmos DB for PostgreSQL is available through the PostgreSQL Audit extension.
+ * See [details](./how-to-enable-audit.md).
### May 2023
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md
The below table lists the properties supported by a delta source. You can edit t
| Compression type | The compression type of the delta table | no | `bzip2`<br>`gzip`<br>`deflate`<br>`ZipDeflate`<br>`snappy`<br>`lz4` | compressionType | | Compression level | Choose whether the compression completes as quickly as possible or if the resulting file should be optimally compressed. | required if `compressionType` is specified. | `Optimal` or `Fastest` | compressionLevel | | Time travel | Choose whether to query an older snapshot of a delta table | no | Query by timestamp: Timestamp <br> Query by version: Integer | timestampAsOf <br> versionAsOf |
-| Allow no files found | If true, an error is not thrown if no files are found | no | `true` or `false` | ignoreNoFilesFound |
+| Allow no files found | If true, an error isn't thrown if no files are found | no | `true` or `false` | ignoreNoFilesFound |
#### Import schema
-Delta is only available as an inline dataset and, by default, doesn't have an associated schema. To get column metadata, click the **Import schema** button in the **Projection** tab. This will allow you to reference the column names and data types specified by the corpus. To import the schema, a [data flow debug session](concepts-data-flow-debug-mode.md) must be active and you must have an existing CDM entity definition file to point to.
+Delta is only available as an inline dataset and, by default, doesn't have an associated schema. To get column metadata, click the **Import schema** button in the **Projection** tab. This allows you to reference the column names and data types specified by the corpus. To import the schema, a [data flow debug session](concepts-data-flow-debug-mode.md) must be active, and you must have an existing CDM entity definition file to point to.
### Delta source script example
The below table lists the properties supported by a delta sink. You can edit the
| Folder path | The directory of the delta lake | yes | String | folderPath | | Compression type | The compression type of the delta table | no | `bzip2`<br>`gzip`<br>`deflate`<br>`ZipDeflate`<br>`snappy`<br>`lz4` | compressionType | | Compression level | Choose whether the compression completes as quickly as possible or if the resulting file should be optimally compressed. | required if `compressionType` is specified. | `Optimal` or `Fastest` | compressionLevel |
-| Vacuum | Specify retention threshold in hours for older versions of table. A value of 0 or less defaults to 30 days | yes | Integer | vacuum |
+| Vacuum | Deletes files older than the specified duration that are no longer relevant to the current table version. When a value of 0 or less is specified, the vacuum operation isn't performed. | yes | Integer | vacuum |
| Table action | Tells ADF what to do with the target Delta table in your sink. You can leave it as-is and append new rows, overwrite the existing table definition and data with new metadata and data, or keep the existing table structure but first truncate all rows, then insert the new rows. | no | None, Truncate, Overwrite | truncate, overwrite |
-| Update method | Specify which update operations are allowed on the delta lake. For methods that aren't insert, a preceding alter row transformation is required to mark rows. | yes | `true` or `false` | deletable <br> insertable <br> updateable <br> merge |
+| Update method | When you select "Allow insert" alone, or when you write to a new delta table, the target receives all incoming rows regardless of the Row policies set. If your data contains rows of other Row policies, they need to be excluded using a preceding Filter transform. <br><br> When all Update methods are selected, a Merge is performed, where rows are inserted/deleted/upserted/updated according to the Row policies set using a preceding Alter Row transform. | yes | `true` or `false` | insertable <br> deletable <br> upsertable <br> updateable |
| Optimized Write | Achieve higher throughput for write operation via optimizing internal shuffle in Spark executors. As a result, you may notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true | | Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
moviesAltered sink(
) ~> movieDB ``` ### Delta sink with partition pruning
-With this option under Update method above (i.e. update/upsert/delete), you can limit the number of partitions that are inspected. Only partitions satisfying this condition will be fetched from the target store. You can specify fixed set of values that a partition column may take.
+With this option under Update method above (i.e. update/upsert/delete), you can limit the number of partitions that are inspected. Only partitions satisfying this condition are fetched from the target store. You can specify a fixed set of values that a partition column may take.
:::image type="content" source="media/format-delta/delta-pruning.png" alt-text="Screenshot of partition pruning options are available to limit the inspection.":::
Delta will only read 2 partitions where **part_col == 5 and 8** from the target
### Delta sink optimization options
-In Settings tab, you will find three more options to optimize delta sink transformation.
+In the Settings tab, you find three more options to optimize the delta sink transformation.
-* When **Merge schema** option is enabled, it allows schema evolution, i.e. any columns that are present in the current incoming stream but not in the target Delta table are automatically added to its schema. This option is supported across all update methods.
+* When the **Merge schema** option is enabled, it allows schema evolution, i.e. any columns that are present in the current incoming stream but not in the target Delta table are automatically added to its schema. This option is supported across all update methods.
* When **Auto compact** is enabled, after an individual write, the transformation checks whether files can be compacted further, and runs a quick OPTIMIZE job (with 128 MB file sizes instead of 1 GB) to further compact files for partitions that have the highest number of small files. Auto compaction helps in coalescing a large number of small files into a smaller number of large files. Auto compaction only kicks in when there are at least 50 files. Once a compaction operation is performed, it creates a new version of the table, and writes a new file containing the data of several previous files in a compact compressed form.
-* When **Optimize write** is enabled, sink transformation dynamically optimizes partition sizes based on the actual data by attempting to write out 128 MB files for each table partition. This is an approximate size and can vary depending on dataset characteristics. Optimized writes improve the overall efficiency of the *writes and subsequent reads*. It organizes partitions such that the performance of subsequent reads will improve
+* When **Optimize write** is enabled, sink transformation dynamically optimizes partition sizes based on the actual data by attempting to write out 128 MB files for each table partition. This is an approximate size and can vary depending on dataset characteristics. Optimized writes improve the overall efficiency of the *writes and subsequent reads*. It organizes partitions such that the performance of subsequent reads improves.
> [!TIP] > The optimized write process will slow down your overall ETL job because the Sink will issue the Spark Delta Lake Optimize command after your data is processed. It is recommended to use Optimized Write sparingly. For example, if you have an hourly data pipeline, execute a data flow with Optimized Write daily. ### Known limitations
-When writing to a delta sink, there is a known limitation where the numbers of rows written won't be return in the monitoring output.
+When you write to a delta sink, there's a known limitation where the number of rows written doesn't show up in the monitoring output.
## Next steps
ddos-protection Manage Ddos Ip Protection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-ip-protection-cli.md
-# Quickstart: Create and configure Azure DDoS IP Protection using Azure CLI
+# QuickStart: Create and configure Azure DDoS IP Protection using Azure CLI
Get started with Azure DDoS IP Protection by using Azure CLI.
-In this quickstart, you'll enable DDoS IP protection and link it to a public IP address.
+In this QuickStart, you'll enable DDoS IP protection and link it to a public IP address.
+ ## Prerequisites
ddos-protection Manage Ddos Protection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-cli.md
Last updated 05/23/2023
-# Quickstart: Create and configure Azure DDoS Network Protection using Azure CLI
+# QuickStart: Create and configure Azure DDoS Network Protection using Azure CLI
Get started with Azure DDoS Network Protection by using Azure CLI. A DDoS protection plan defines a set of virtual networks that have DDoS Network Protection enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
-In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
+In this QuickStart, you'll create a DDoS protection plan and link it to a virtual network.
+ ## Prerequisites
ddos-protection Manage Ddos Protection Powershell Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell-ip.md
-# Quickstart: Create and configure Azure DDoS IP Protection using Azure PowerShell
+# QuickStart: Create and configure Azure DDoS IP Protection using Azure PowerShell
Get started with Azure DDoS IP Protection by using Azure PowerShell.
-In this quickstart, you'll enable DDoS IP protection and link it to a public IP address utilizing PowerShell.
+In this QuickStart, you'll enable DDoS IP protection and link it to a public IP address utilizing PowerShell.
+ ## Prerequisites
ddos-protection Manage Ddos Protection Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell.md
Last updated 05/23/2023
-# Quickstart: Create and configure Azure DDoS Network Protection using Azure PowerShell
+# QuickStart: Create and configure Azure DDoS Network Protection using Azure PowerShell
Get started with Azure DDoS Network Protection by using Azure PowerShell. A DDoS protection plan defines a set of virtual networks that have DDoS Network Protection enabled, across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
-In this quickstart, you'll create a DDoS protection plan and link it to a virtual network.
+In this QuickStart, you'll create a DDoS protection plan and link it to a virtual network.
+ ## Prerequisites
deployment-environments Concept Common Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-common-components.md
Previously updated : 04/25/2023 Last updated : 06/23/2023 # Components common to Azure Deployment Environments and Microsoft Dev Box
dev-box Concept Common Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-common-components.md
Previously updated : 04/25/2023 Last updated : 06/23/2023 # Components common to Microsoft Dev Box and Azure Deployment Environments
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/automate-add-lab-user.md
You can get the ObjectId by using the [Get-AzureRMADUser](/powershell/module/azu
$userObjectId = (Get-AzureRmADUser -UserPrincipalName 'email@company.com').Id ```
-You can also use the Azure Active Directory PowerShell cmdlets that include [Get-MsolUser](/powershell/module/msonline/get-msoluser?preserve-view=true&view=azureadps-1.0), [Get-MsolGroup](/powershell/module/msonline/get-msolgroup?preserve-view=true&view=azureadps-1.0), and [Get-MsolServicePrincipal](/powershell/module/msonline/get-msolserviceprincipal?preserve-view=true&view=azureadps-1.0).
+You can also use the Microsoft Graph PowerShell cmdlets that include [Get-MgUser](/powershell/module/microsoft.graph.users/get-mguser?view=graph-powershell-1.0&preserve-view=true), [Get-MgGroup](/powershell/module/microsoft.graph.groups/get-mggroup?view=graph-powershell-1.0&preserve-view=true), and [Get-MgServicePrincipal](/powershell/module/microsoft.graph.applications/get-mgserviceprincipal?view=graph-powershell-1.0&preserve-view=true).
### Scope Scope specifies the resource or resource group for which the role assignment should apply. For resources, the scope is in the form: `/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/{provider-namespace}/{resource-type}/{resource-name}`. The template uses the `subscription().subscriptionId` function to fill in the `subscription-id` part and the `resourceGroup().name` template function to fill in the `resource-group-name` part. Using these functions means that the lab to which you're assigning a role must exist in the current subscription and the same resource group to which the template deployment is made. The last part, `resource-name`, is the name of the lab. This value is received via the template parameter in this example.
event-hubs Event Hubs Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-capture-overview.md
Event Hubs Capture enables you to process real-time and batch-based pipelines on
> [!IMPORTANT] > - The destination storage (Azure Storage or Azure Data Lake Storage) account must be in the same subscription as the event hub.
-> - Event Hubs doesn't support capturing events in a **premium** storage account.
+> - Event Hubs Capture supports any storage account that supports block blobs.
## How Event Hubs Capture works
firewall Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ip-groups.md
An IP Group can have a single IP address, multiple IP addresses, one or more IP
IP Groups can be reused in Azure Firewall DNAT, network, and application rules for multiple firewalls across regions and subscriptions in Azure. Group names must be unique. You can configure an IP Group in the Azure portal, Azure CLI, or REST API. A sample template is provided to help you get started.
-> [!NOTE]
-> IP Groups are not currently available in Azure national cloud environments.
- ## Sample format The following IPv4 address format examples are valid to use in IP Groups:
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
This article shows you how you can protect Azure Kubernetes Service (AKS) cluste
Azure Kubernetes Service (AKS) offers a managed Kubernetes cluster on Azure. For more information, see [Azure Kubernetes Service](../aks/intro-kubernetes.md).
-Despite AKS being a fully managed solution, it does not offer a built-in solution to secure ingress and egress traffic between the cluster and external networks. Azure Firewall offers a solution to this.
+Despite AKS being a fully managed solution, it doesn't offer a built-in solution to secure ingress and egress traffic between the cluster and external networks. Azure Firewall offers a solution to this.
AKS clusters are deployed on a virtual network. This network can be managed (created by AKS) or custom (pre-configured by the user beforehand). In either case, the cluster has outbound dependencies on services outside of that virtual network (the service has no inbound dependencies). For management and operational purposes, nodes in an AKS cluster need to access [certain ports and fully qualified domain names (FQDNs)](../aks/outbound-rules-control-egress.md) describing these outbound dependencies. This is required for various functions including, but not limited to, the nodes that communicate with the Kubernetes API server. They download and install core Kubernetes cluster components and node security updates, or pull base system container images from Microsoft Container Registry (MCR), and so on. These outbound dependencies are almost entirely defined with FQDNs, which don't have static addresses behind them. The lack of static addresses means that Network Security Groups can't be used to lock down outbound traffic from an AKS cluster. For this reason, by default, AKS clusters have unrestricted outbound (egress) Internet access. This level of network access allows nodes and services you run to access external resources as needed.
-However, in a production environment, communications with a Kubernetes cluster should be protected to prevent against data exfiltration along with other vulnerabilities. All incoming and outgoing network traffic must be monitored and controlled based on a set of security rules. If you want to do this, you will have to restrict egress traffic, but a limited number of ports and addresses must remain accessible to maintain healthy cluster maintenance tasks and satisfy those outbound dependencies previously mentioned.
+However, in a production environment, communications with a Kubernetes cluster should be protected to prevent data exfiltration and other vulnerabilities. All incoming and outgoing network traffic must be monitored and controlled based on a set of security rules. If you want to do this, you'll have to restrict egress traffic, but a limited number of ports and addresses must remain accessible to maintain healthy cluster maintenance tasks and satisfy those outbound dependencies previously mentioned.
-The simplest solution uses a firewall device that can control outbound traffic based on domain names. A firewall typically establishes a barrier between a trusted network and an untrusted network, such as the Internet. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination, giving you fine-grained egress traffic control, but at the same time allows you to provide access to the FQDNs encompassing an AKS cluster's outbound dependencies (something that NSGs cannot do). Likewise, you can control ingress traffic and improve security by enabling threat intelligence-based filtering on an Azure Firewall deployed to a shared perimeter network. This filtering can provide alerts, and deny traffic to and from known malicious IP addresses and domains.
+The simplest solution uses a firewall device that can control outbound traffic based on domain names. A firewall typically establishes a barrier between a trusted network and an untrusted network, such as the Internet. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination, giving you fine-grained egress traffic control, but at the same time allows you to provide access to the FQDNs encompassing an AKS cluster's outbound dependencies (something that NSGs can't do). Likewise, you can control ingress traffic and improve security by enabling threat intelligence-based filtering on an Azure Firewall deployed to a shared perimeter network. This filtering can provide alerts, and deny traffic to and from known malicious IP addresses and domains.
See the following video by Abhinav Sriram for a quick overview on how this works in practice on a sample environment:
The following diagram shows the sample environment from the video that the scrip
:::image type="content" source="media/protect-azure-kubernetes-service/aks-firewall.png" alt-text="Diagram showing A K S cluster with Azure Firewall for ingress egress filtering.":::
-There is one difference between the script and the following guide. The script uses managed identities, but the guide uses a service principal. This shows you two different ways to create an identity to manage and create cluster resources.
+There's one difference between the script and the following guide. The script uses managed identities, but the guide uses a service principal. This shows you two different ways to create an identity to manage and create cluster resources.
## Restrict egress traffic using Azure Firewall
Create a resource group to hold all of the resources.
az group create --name $RG --location $LOC ```
-Create a virtual network with two subnets to host the AKS cluster and the Azure Firewall. Each will have their own subnet. Let's start with the AKS network.
+Create a virtual network with two subnets to host the AKS cluster and the Azure Firewall. Each has its own subnet. Let's start with the AKS network.
``` # Dedicated virtual network with AKS subnet
Azure Firewall inbound and outbound rules must be configured. The main purpose o
> If your cluster or application creates a large number of outbound connections directed to the same or small subset of destinations, you might require more firewall frontend IPs to avoid maxing out the ports per frontend IP. > For more information on how to create an Azure firewall with multiple IPs, see [**here**](../firewall/quick-create-multiple-ip-template.md)
-Create a standard SKU public IP resource that will be used as the Azure Firewall frontend address.
+Create a standard SKU public IP resource that is used as the Azure Firewall frontend address.
```azurecli az network public-ip create -g $RG -n $FWPUBLICIP_NAME -l $LOC --sku "Standard"
See [virtual network route table documentation](../virtual-network/virtual-netwo
Below are three network rules you can configure on your firewall; you may need to adapt these rules based on your deployment. The first rule allows access to port 9000 via TCP. The second rule allows access to ports 1194 and 123 via UDP. Both of these rules only allow traffic destined to the Azure Region CIDR that we're using, in this case East US.
-Finally, we'll add a third network rule opening port 123 to an Internet time server FQDN (for example:`ntp.ubuntu.com`) via UDP. Adding an FQDN as a network rule is one of the specific features of Azure Firewall, and you'll need to adapt it when using your own options.
+Finally, we add a third network rule opening port 123 to an Internet time server FQDN (for example: `ntp.ubuntu.com`) via UDP. Adding an FQDN as a network rule is one of the specific features of Azure Firewall, and you need to adapt it when using your own options.
-After setting the network rules, we'll also add an application rule using the `AzureKubernetesService` that covers all needed FQDNs accessible through TCP port 443 and port 80.
+After setting the network rules, we'll also add an application rule using the `AzureKubernetesService` that covers the needed FQDNs accessible through TCP port 443 and port 80. You may also need to configure more network and application rules based on your deployment. For more information, see [Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters](../aks/outbound-rules-control-egress.md#required-outbound-network-rules-and-fqdns-for-aks-clusters).
+
+```azurecli
```
-# Add FW Network Rules
+#### Add FW Network Rules
az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'apiudp' --protocols 'UDP' --source-addresses '*' --destination-addresses "AzureCloud.$LOC" --destination-ports 1194 --action allow --priority 100 az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'apitcp' --protocols 'TCP' --source-addresses '*' --destination-addresses "AzureCloud.$LOC" --destination-ports 9000 az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'time' --protocols 'UDP' --source-addresses '*' --destination-fqdns 'ntp.ubuntu.com' --destination-ports 123
-# Add FW Application Rules
+#### Add FW Application Rules
az network firewall application-rule create -g $RG -f $FWNAME --collection-name 'aksfwar' -n 'fqdn' --source-addresses '*' --protocols 'http=80' 'https=443' --fqdn-tags "AzureKubernetesService" --action allow --priority 100 ```
az network vnet subnet update -g $RG --vnet-name $VNET_NAME --name $AKSSUBNET_NA
### Deploy AKS with outbound type of UDR to the existing network
-Now an AKS cluster can be deployed into the existing virtual network. We'll also use [outbound type `userDefinedRouting`](../aks/egress-outboundtype.md), this feature ensures any outbound traffic will be forced through the firewall and no other egress paths will exist (by default the Load Balancer outbound type could be used).
+Now an AKS cluster can be deployed into the existing virtual network. We'll also use [outbound type `userDefinedRouting`](../aks/egress-outboundtype.md); this feature ensures any outbound traffic is forced through the firewall and no other egress paths exist (by default the Load Balancer outbound type could be used).
![aks-deploy](../aks/media/limit-egress-traffic/aks-udr-fw.png)
The target subnet to be deployed into is defined with the environment variable,
SUBNETID=$(az network vnet subnet show -g $RG --vnet-name $VNET_NAME --name $AKSSUBNET_NAME --query id -o tsv) ```
-You'll define the outbound type to use the UDR that already exists on the subnet. This configuration will enable AKS to skip the setup and IP provisioning for the load balancer.
+You define the outbound type to use the UDR that already exists on the subnet. This configuration enables AKS to skip the setup and IP provisioning for the load balancer.
> [!IMPORTANT] > For more information on outbound type UDR including limitations, see [**egress outbound type UDR**](../aks/egress-outboundtype.md#limitations).
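The cluster creation command itself is truncated out of this excerpt. As a hedged sketch of what it might look like with outbound type UDR (the node count and the service principal variables `$APPID` and `$PASSWORD` are illustrative, following the guide's service principal approach):

```azurecli
# Sketch: deploy AKS into the existing subnet with outbound type UDR
az aks create -g $RG -n $AKSNAME -l $LOC \
  --node-count 3 \
  --network-plugin azure \
  --outbound-type userDefinedRouting \
  --vnet-subnet-id $SUBNETID \
  --service-principal $APPID \
  --client-secret $PASSWORD
```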
az aks get-credentials -g $RG -n $AKSNAME
## Restrict ingress traffic using Azure Firewall
-You can now start exposing services and deploying applications to this cluster. In this example, we'll expose a public service, but you may also choose to expose an internal service via [internal load balancer](../aks/internal-lb.md).
+You can now start exposing services and deploying applications to this cluster. In this example, we expose a public service, but you may also choose to expose an internal service via [internal load balancer](../aks/internal-lb.md).
![Public Service DNAT](../aks/media/limit-egress-traffic/aks-create-svc.png)
To configure inbound connectivity, a DNAT rule must be written to the Azure Fire
The destination address can be customized as it's the port on the firewall to be accessed. The translated address must be the IP address of the internal load balancer. The translated port must be the exposed port for your Kubernetes service.
-You'll need to specify the internal IP address assigned to the load balancer created by the Kubernetes service. Retrieve the address by running:
+You need to specify the internal IP address assigned to the load balancer created by the Kubernetes service. Retrieve the address by running:
```bash kubectl get services
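Once you have the internal load balancer's IP from `kubectl get services`, the DNAT rule itself can be created. A hedged sketch follows; the collection and rule names are placeholders, `$FWPUBLIC_IP` is assumed to hold the firewall's public IP, and `<SERVICE_INTERNAL_LB_IP>` must be replaced with the address retrieved above:

```azurecli
# Sketch: DNAT rule forwarding port 80 on the firewall to the internal load balancer
az network firewall nat-rule create -g $RG -f $FWNAME \
  --collection-name 'aksfwdnat' \
  --name 'inboundrule' \
  --protocols 'TCP' \
  --source-addresses '*' \
  --destination-addresses $FWPUBLIC_IP \
  --destination-ports 80 \
  --translated-address <SERVICE_INTERNAL_LB_IP> \
  --translated-port 80 \
  --action Dnat \
  --priority 100
```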
hdinsight Hbase Troubleshoot Phoenix No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-phoenix-no-data.md
Title: HDP upgrade & no data in Apache Phoenix views in Azure HDInsight
description: HDP upgrade causes no data in Apache Phoenix views in Azure HDInsight Previously updated : 05/26/2022 Last updated : 06/23/2023 # Scenario: HDP upgrade causes no data in Apache Phoenix views in Azure HDInsight
hdinsight Hdinsight Cluster Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-cluster-availability.md
description: Learn how to use Apache Ambari to monitor cluster health and availa
Previously updated : 05/26/2022 Last updated : 06/23/2023 # How to monitor cluster availability with Apache Ambari in Azure HDInsight
hdinsight Apache Hive Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-replication.md
Title: How to use Apache Hive replication in Azure HDInsight clusters
description: Learn how to use Hive replication in HDInsight clusters to replicate the Hive metastore and the Azure Data Lake Storage Gen 2 data lake. Previously updated : 05/26/2022 Last updated : 06/23/2023 # How to use Apache Hive replication in Azure HDInsight clusters
hdinsight Apache Spark Jupyter Notebook Install Locally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-notebook-install-locally.md
description: Learn how to install Jupyter Notebook locally on your computer and
Previously updated : 05/06/2022 Last updated : 06/23/2023 # Install Jupyter Notebook on your computer and connect to Apache Spark on HDInsight
hdinsight Apache Spark Machine Learning Mllib Ipython https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-machine-learning-mllib-ipython.md
description: Learn how to use Spark MLlib to create a machine learning app that
Previously updated : 05/19/2022 Last updated : 06/23/2023 # Use Apache Spark MLlib to build a machine learning application and analyze a dataset
hdinsight Apache Spark Streaming High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-streaming-high-availability.md
description: How to set up Apache Spark Streaming for a high-availability scenar
Previously updated : 05/26/2022 Last updated : 06/23/2023 # Create high-availability Apache Spark Streaming jobs with YARN
hdinsight Apache Spark Troubleshoot Job Fails Invalidclassexception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-job-fails-invalidclassexception.md
Title: InvalidClassException error from Apache Spark - Azure HDInsight
description: Apache Spark job fails with InvalidClassException, class version mismatch, in Azure HDInsight Previously updated : 05/10/2022 Last updated : 06/23/2023 # Apache Spark job fails with InvalidClassException, class version mismatch, in Azure HDInsight
iot-hub C2d Messaging Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-ios.md
This article shows you how to:
At the end of this article, you run the following Swift iOS project:
-* **sample-device**: the same app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md), which connects to your IoT hub and receives cloud-to-device messages.
+* **sample-device**: the sample app from the [Azure IoT Samples for iOS Platform repository](https://github.com/Azure-Samples/azure-iot-samples-ios), which connects to your IoT hub and receives cloud-to-device messages.
> [!NOTE] > IoT Hub has SDK support for many device platforms and languages (including C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
To learn more about cloud-to-device messages, see [Send cloud-to-device messages
* The code sample from the [Azure IoT Samples for iOS Platform repository](https://github.com/Azure-Samples/azure-iot-samples-ios).
-* The latest version of [XCode](https://developer.apple.com/xcode/), running the latest version of the iOS SDK. This quickstart was tested with XCode 9.3 and iOS 11.3.
+* The latest version of [Xcode](https://developer.apple.com/xcode/), running the latest version of the iOS SDK. This article was tested with Xcode 9.3 and iOS 11.3.
* The latest version of [CocoaPods](https://guides.cocoapods.org/using/getting-started.html).
iot-hub C2d Messaging Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-python.md
This article shows you how to:
At the end of this article, you run two Python console apps:
-* **SimulatedDevice.py**: a modified version of the app created in [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python), which connects to your IoT hub and receives cloud-to-device messages.
+* **SimulatedDevice.py**: simulates a device that connects to your IoT hub and receives cloud-to-device messages.
* **SendCloudToDeviceMessage.py**: sends cloud-to-device messages to the simulated device app through IoT Hub.
To learn more about cloud-to-device messages, see [Send cloud-to-device messages
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
In this section, you create a Python console app to simulate a device and receiv
RECEIVED_MESSAGES = 0 ```
-1. Add the following code to **SimulatedDevice.py** file. Replace the `{deviceConnectionString}` placeholder value with the device connection string for the device you created in the [Send telemetry from a device to an IoT hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) quickstart:
+1. Add the following code to **SimulatedDevice.py** file. Replace the `{deviceConnectionString}` placeholder value with the connection string for the registered device in [Prerequisites](#prerequisites):
```python CONNECTION_STRING = "{deviceConnectionString}"
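One way to look up that device connection string is with the Azure CLI (a sketch; requires the `azure-iot` CLI extension, and the hub and device names in braces are placeholders):

```azurecli
# Retrieve the connection string for the registered device
az iot hub device-identity connection-string show \
  --hub-name {YourIoTHubName} \
  --device-id {YourDeviceId} \
  --output table
```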
iot-hub Device Management Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-cli.md
If you want to use the Azure Cloud Shell, you must first launch and configure it
1. Select the **Cloud Shell** icon from the page header in the Azure portal.
- :::image type="content" source="./media/quickstart-send-telemetry-cli/cloud-shell-button.png" alt-text="Screenshot of the global controls from the page header of the Azure portal, highlighting the Cloud Shell icon.":::
+ :::image type="content" source="./media/device-management-cli/cloud-shell-button.png" alt-text="Screenshot of the global controls from the page header of the Azure portal, highlighting the Cloud Shell icon.":::
> [!NOTE] > If this is the first time you've used the Cloud Shell, it prompts you to create storage, which is required to use the Cloud Shell. Select a subscription to create a storage account and Microsoft Azure Files share.
If you want to use the Azure Cloud Shell, you must first launch and configure it
> [!NOTE] > Some commands require different syntax or formatting in the **Bash** and **PowerShell** environments. For more information, see [Tips for using the Azure CLI successfully](/cli/azure/use-cli-effectively?tabs=bash%2Cbash2).
- :::image type="content" source="./media/quickstart-send-telemetry-cli/cloud-shell-environment.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the environment selector in the toolbar.":::
+ :::image type="content" source="./media/device-management-cli/cloud-shell-environment.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the environment selector in the toolbar.":::
## Prepare two CLI sessions
Next, you must prepare two Azure CLI sessions. If you're using the Cloud Shell,
1. Open the second CLI session. If you're using the Cloud Shell in a browser, select the **Open new session** icon on the toolbar of your first CLI session. If using the CLI locally, open a second CLI instance.
- :::image type="content" source="media/quickstart-send-telemetry-cli/cloud-shell-new-session.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the Open New Session icon in the toolbar.":::
+ :::image type="content" source="media/device-management-cli/cloud-shell-new-session.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the Open New Session icon in the toolbar.":::
## Create and simulate a device
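As a minimal sketch of this step (requires the `azure-iot` CLI extension; names in braces are placeholders), the first session can run a simulated device that stays connected to the hub:

```azurecli
# In the first CLI session: simulate a connected device
az iot device simulate \
  --hub-name {YourIoTHubName} \
  --device-id {YourDeviceId}
```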
iot-hub Device Management Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-dotnet.md
This article shows you how to create:
* Visual Studio.
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub Device Management Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-java.md
This article shows you how to create:
## Prerequisites
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub Device Management Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-node.md
This article shows you how to create:
## Prerequisites
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub Device Management Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-python.md
This article shows you how to create:
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub Device Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-cli.md
If you want to use the Azure Cloud Shell, you must first launch and configure it
1. Select the **Cloud Shell** icon from the page header in the Azure portal.
- :::image type="content" source="./media/quickstart-send-telemetry-cli/cloud-shell-button.png" alt-text="Screenshot of the global controls from the page header of the Azure portal, highlighting the Cloud Shell icon.":::
+ :::image type="content" source="./media/device-twins-cli/cloud-shell-button.png" alt-text="Screenshot of the global controls from the page header of the Azure portal, highlighting the Cloud Shell icon.":::
> [!NOTE] > If this is the first time you've used the Cloud Shell, it prompts you to create storage, which is required to use the Cloud Shell. Select a subscription to create a storage account and Microsoft Azure Files share.
If you want to use the Azure Cloud Shell, you must first launch and configure it
> [!NOTE] > Some commands require different syntax or formatting in the **Bash** and **PowerShell** environments. For more information, see [Tips for using the Azure CLI successfully](/cli/azure/use-cli-effectively?tabs=bash%2Cbash2).
- :::image type="content" source="./media/quickstart-send-telemetry-cli/cloud-shell-environment.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the environment selector in the toolbar.":::
+ :::image type="content" source="./media/device-twins-cli/cloud-shell-environment.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the environment selector in the toolbar.":::
## Prepare two CLI sessions
Next, you must prepare two Azure CLI sessions. If you're using the Cloud Shell,
1. Open the second CLI session. If you're using the Cloud Shell in a browser, select the **Open new session** icon on the toolbar of your first CLI session. If using the CLI locally, open a second CLI instance.
- :::image type="content" source="media/quickstart-send-telemetry-cli/cloud-shell-new-session.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the Open New Session icon in the toolbar.":::
+ :::image type="content" source="media/device-twins-cli/cloud-shell-new-session.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the Open New Session icon in the toolbar.":::
## Create and simulate a device
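For the device twin workflow, a hedged sketch of the service-side step (requires the `azure-iot` CLI extension; the desired-property JSON payload is illustrative) is to set a desired property from the second session:

```azurecli
# In the second CLI session: set a desired property on the device twin
az iot hub device-twin update \
  --hub-name {YourIoTHubName} \
  --device-id {YourDeviceId} \
  --desired '{"interval": 30}'
```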
iot-hub Device Twins Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-dotnet.md
In this article, you create two .NET console apps:
* Visual Studio.
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub Device Twins Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-java.md
In this article, you create two Java console apps:
## Prerequisites
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub Device Twins Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-node.md
In this article, you create two Node.js console apps:
To complete this article, you need:
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub Device Twins Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-python.md
In this article, you create two Python console apps:
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
In this section, you create a Python console app that adds location metadata to
from azure.iot.hub.models import Twin, TwinProperties, QuerySpecification, QueryResult ```
-4. Add the following code. Replace `[IoTHub Connection String]` with the IoT hub connection string you copied in [Get the IoT hub connection string](#get-the-iot-hub-connection-string). Replace `[Device Id]` with the device ID (the name) from your registered device in the IoT Hub.
+4. Add the following code. Replace `[IoTHub Connection String]` with the IoT hub connection string you copied in [Get the IoT hub connection string](#get-the-iot-hub-connection-string). Replace `[Device Id]` with the device ID (the name) from your registered device in the IoT hub.
```python IOTHUB_CONNECTION_STRING = "[IoTHub Connection String]"
iot-hub File Upload Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-dotnet.md
At the end of this article, you run two .NET console apps:
## Prerequisites
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
Now you're ready to run the applications.
-1. Next, run the device app to upload the file to Azure storage. Open a new command prompt and change folders to the **azure-iot-sdk-csharp\iothub\device\samples\getting started\FileUploadSample** under the folder where you expanded the Azure IoT C# SDK. Run the following commands. Replace the `{Your device connection string}` placeholder value in the second command with the device connection string you saw when you registered a device in the IoT Hub.
+1. Next, run the device app to upload the file to Azure storage. Open a new command prompt and change folders to the **azure-iot-sdk-csharp\iothub\device\samples\getting started\FileUploadSample** under the folder where you expanded the Azure IoT C# SDK. Run the following commands. Replace the `{Your device connection string}` placeholder value in the second command with the device connection string you saw when you registered a device in the IoT hub.
```cmd/sh dotnet restore
iot-hub File Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-java.md
These files are typically batch processed in the cloud, using tools such as [Azu
## Prerequisites
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub File Upload Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-node.md
At the end of this article, you run two Node.js console apps:
## Prerequisites
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
In this section, you create a device app to upload a file to IoT hub. The code i
1. Copy an image file to the `fileupload` folder and give it a name such as `myimage.png`.
-1. Add environment variables for your device connection string and the path to the file that you want to upload. You got the device connection string when you registered a device in the IoT Hub.
+1. Add environment variables for your device connection string and the path to the file that you want to upload. You got the device connection string when you registered a device in the IoT hub.
- For Windows:
iot-hub File Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-python.md
At the end of this article, you run the Python console app **FileUpload.py**, wh
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub Horizontal Arm Route Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/horizontal-arm-route-messages.md
Last updated 08/24/2020
-# Quickstart: Deploy an Azure IoT Hub and a storage account using an ARM template
+# Quickstart: Deploy an Azure IoT hub and a storage account using an ARM template
-In this quickstart, you use an Azure Resource Manager template (ARM template) to create an IoT Hub that will route messages to Azure Storage, and a storage account to hold the messages. After manually adding a virtual IoT device to the hub to submit the messages, you configure that connection information in an application called *arm-read-write* to submit messages from the device to the hub. The hub is configured so the messages sent to the hub are automatically routed to the storage account. At the end of this quickstart, you can open the storage account and see the messages sent.
+In this quickstart, you use an Azure Resource Manager template (ARM template) to create an IoT hub that will route messages to Azure Storage, and a storage account to hold the messages. After manually adding a virtual IoT device to the hub to submit the messages, you configure that connection information in an application called *arm-read-write* to submit messages from the device to the hub. The hub is configured so the messages sent to the hub are automatically routed to the storage account. At the end of this quickstart, you can open the storage account and see the messages sent.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
This section provides the steps to deploy the template, create a virtual device,
> [!NOTE] > These messages are encoded in UTF-32 and base64. If you read the message back, you have to decode it from base64 and utf-32 in order to read it as ASCII. If you're interested, you can use the method ReadOneRowFromFile in the Routing Tutorial to read one row from one of these message files and decode it into ASCII. ReadOneRowFromFile is in the IoT C# SDK repository that you unzipped for this quickstart. Here is the path from the top of that folder: *./iothub/device/samples/getting started/RoutingTutorial/SimulatedDevice/Program.cs.* Set the boolean `readTheFile` to true, and hardcode the path to the file on disk, and it will open and translate the first row in the file.
-You have deployed an ARM template to create an IoT Hub and a storage account, and run a program to send messages to the hub. The messages are then automatically stored in the storage account where they can be viewed.
+You have deployed an ARM template to create an IoT hub and a storage account, and run a program to send messages to the hub. The messages are then automatically stored in the storage account where they can be viewed.
## Clean up resources
iot-hub Iot Concepts And Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-concepts-and-iot-hub.md
To try out an end-to-end IoT solution, check out the IoT Hub quickstarts:
- [Send telemetry from a device to IoT Hub](quickstart-send-telemetry-cli.md) - [Send telemetry from an IoT Plug and Play device to IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?toc=/azure/iot-hub/toc.json&bc=/azure/iot-hub/breadcrumb/toc.json)-- [Control a device connected to an IoT hub](quickstart-control-device.md)
+- [Quickstart: Control a device connected to an IoT hub](quickstart-control-device.md)
To learn more about the ways you can build and deploy IoT solutions with Azure IoT, visit:
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
Title: Import/Export of Azure IoT Hub device identities
+ Title: Import and export Azure IoT Hub device identities
description: How to use the Azure IoT service SDK to run bulk operations against the identity registry to import and export device identities. Import operations enable you to create, update, and delete device identities in bulk. Previously updated : 10/02/2019- Last updated : 06/16/2023+ # Import and export IoT Hub device identities in bulk
-Each IoT hub has an identity registry you can use to create per-device resources in the service. The identity registry also enables you to control access to the device-facing endpoints. This article describes how to import and export device identities in bulk to and from an identity registry. To see a working sample in C# and learn how you can use this capability when migrating an IoT hub to a different region, see [How to migrate an IoT Hub using Azure Resource Manager templates](migrate-hub-arm.md).
+Each IoT hub has an identity registry you can use to create per-device resources in the service. The identity registry also enables you to control access to the device-facing endpoints. This article describes how to import and export device identities in bulk to and from an identity registry, using the ImportExportDevicesSample sample included with the [Microsoft Azure IoT SDK for .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main). For more information about how you can use this capability when migrating an IoT hub to a different region, see [How to manually migrate an Azure IoT hub using an Azure Resource Manager template](migrate-hub-arm.md).
> [!NOTE] > IoT Hub has recently added virtual network support in a limited number of regions. This feature secures import and export operations and eliminates the need to pass keys for authentication. Initially, virtual network support is available only in these regions: *WestUS2*, *EastUS*, and *SouthCentralUS*. To learn more about virtual network support and the API calls to implement it, see [IoT Hub Support for virtual networks](virtual-network-support.md). Import and export operations take place in the context of *Jobs* that enable you to execute bulk service operations against an IoT hub.
-The **RegistryManager** class includes the **ExportDevicesAsync** and **ImportDevicesAsync** methods that use the **Job** framework. These methods enable you to export, import, and synchronize the entirety of an IoT hub identity registry.
+The **RegistryManager** class in the SDK includes the **ExportDevicesAsync** and **ImportDevicesAsync** methods that use the **Job** framework. These methods enable you to export, import, and synchronize the entirety of an IoT hub identity registry.
This topic discusses using the **RegistryManager** class and **Job** system to perform bulk imports and exports of devices to and from an IoT hub's identity registry. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](../iot-dps/index.yml).
+> [!NOTE]
+> Some of the code snippets in this article are included from the ImportExportDevicesSample service sample provided with the [Microsoft Azure IoT SDK for .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main). The sample is located in the `/iothub/service/samples/how to guides/ImportExportDevicesSample` folder of the SDK and, where specified, code snippets are included from the `ImportExportDevicesSample.cs` file for that SDK sample. For more information about the ImportExportDevicesSample sample and other service samples included in the Azure IoT SDK for .NET, see [Azure IoT hub service samples for C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/how%20to%20guides).
+ ## What are jobs? Identity registry operations use the **Job** system when the operation:
To find the connection string for your IoT hub, in the Azure portal:
- Select a policy, taking into account the permissions you need. -- Copy the connectionstring from the panel on the right-hand side of the screen.
+- Copy the connection string from the panel on the right-hand side of the screen.
-The following C# code snippet shows how to poll every five seconds to see if the job has finished executing:
+The following C# code snippet, from the **WaitForJobAsync** method in the SDK sample, shows how to poll every five seconds to see if the job has finished executing:
```csharp // Wait until job is finished
-while(true)
+while (true)
{
- exportJob = await registryManager.GetJobAsync(exportJob.JobId);
- if (exportJob.Status == JobStatus.Completed ||
- exportJob.Status == JobStatus.Failed ||
- exportJob.Status == JobStatus.Cancelled)
- {
- // Job has finished executing
- break;
- }
+ job = await registryManager.GetJobAsync(job.JobId);
+ if (job.Status == JobStatus.Completed
+ || job.Status == JobStatus.Failed
+ || job.Status == JobStatus.Cancelled)
+ {
+ // Job has finished executing
+ break;
+ }
+ Console.WriteLine($"\tJob status is {job.Status}...");
- await Task.Delay(TimeSpan.FromSeconds(5));
+ await Task.Delay(TimeSpan.FromSeconds(5));
} ``` > [!NOTE] > If your storage account has firewall configurations that restrict IoT Hub's connectivity, consider using [Microsoft trusted first party exception](./virtual-network-support.md#egress-connectivity-from-iot-hub-to-other-azure-resources) (available in select regions for IoT hubs with managed service identity). - ## Device import/export job limits
-Only 1 active device import or export job is allowed at a time for all IoT Hub tiers. IoT Hub also has limits for rate of jobs operations. To learn more, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
+Only one active device import or export job is allowed at a time for all IoT Hub tiers. IoT Hub also has limits for rate of jobs operations. To learn more, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
## Export devices
while(true)
} ```
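Initiating the export job itself is a single call on the registry manager. The following is a minimal sketch, not verbatim from the sample, assuming `containerSasUri` is a SAS URI granting read, write, and delete permissions on your blob container:

```csharp
// A sketch, not verbatim from the sample. Assumes containerSasUri is a SAS URI
// granting read, write, and delete permissions on the blob container.
JobProperties exportJob = JobProperties.CreateForExportJob(
    containerSasUri,   // Container where the devices.txt output blob is written.
    false);            // excludeKeysInExport: false exports authentication keys too.
exportJob = await registryManager.ExportDevicesAsync(exportJob);
// Poll with the WaitForJobAsync pattern shown earlier until the job completes.
```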
-The job stores its output in the provided blob container as a block blob with the name **devices.txt**. The output data consists of JSON serialized device data, with one device per line.
+You can find similar code in the **ExportDevicesAsync** method from the SDK sample. The job stores its output in the provided blob container as a block blob with the name **devices.txt**. The output data consists of JSON serialized device data, with one device per line.
The following example shows the output data:
If a device has twin data, then the twin data is also exported together with the
} ```
-If you need access to this data in code, you can easily deserialize this data using the **ExportImportDevice** class. The following C# code snippet shows how to read device information that was previously exported to a block blob:
+If you need access to this data in code, you can easily deserialize it using the **ExportImportDevice** class. The following C# code snippet, from the **ReadFromBlobAsync** method in the SDK sample, shows how to read device information that was previously exported to a block blob, using a **BlobClient** instance:
```csharp
-var exportedDevices = new List<ExportImportDevice>();
-
-using (var streamReader = new StreamReader(await blob.OpenReadAsync(AccessCondition.GenerateIfExistsCondition(), null, null), Encoding.UTF8))
+private static async Task<List<string>> ReadFromBlobAsync(BlobClient blobClient)
{
- while (streamReader.Peek() != -1)
- {
- string line = await streamReader.ReadLineAsync();
- var device = JsonConvert.DeserializeObject<ExportImportDevice>(line);
- exportedDevices.Add(device);
- }
+ // Read the blob file of devices, import each row into a list.
+ var contents = new List<string>();
+
+ using Stream blobStream = await blobClient.OpenReadAsync();
+ using var streamReader = new StreamReader(blobStream, Encoding.UTF8);
+ while (streamReader.Peek() != -1)
+ {
+ string line = await streamReader.ReadLineAsync();
+ contents.Add(line);
+ }
+
+ return contents;
} ```
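Because **ReadFromBlobAsync** returns the raw serialized rows, you can deserialize each row with Json.NET. A minimal sketch, not from the sample, requiring `System.Linq` and `Newtonsoft.Json`:

```csharp
// A sketch: deserialize the raw rows returned by ReadFromBlobAsync into
// ExportImportDevice objects.
List<string> serializedDevices = await ReadFromBlobAsync(blobClient);
List<ExportImportDevice> exportedDevices = serializedDevices
    .Select(row => JsonConvert.DeserializeObject<ExportImportDevice>(row))
    .ToList();
```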
You can use the **ImportDevicesAsync** method to perform the following bulk oper
* Bulk deletions of existing devices * Bulk status changes (enable or disable devices) * Bulk assignment of new device authentication keys
-* Bulk auto-regeneration of device authentication keys
+* Bulk automatic regeneration of device authentication keys
* Bulk update of twin data You can perform any combination of the preceding operations within a single **ImportDevicesAsync** call. For example, you can register new devices and delete or update existing devices at the same time. When used along with the **ExportDevicesAsync** method, you can completely migrate all your devices from one IoT hub to another.
-If the import file includes twin metadata, then this metadata overwrites the existing twin metadata. If the import file does not include twin metadata, then only the `lastUpdateTime` metadata is updated using the current time.
+If the import file includes twin metadata, then this metadata overwrites the existing twin metadata. If the import file doesn't include twin metadata, then only the `lastUpdateTime` metadata is updated using the current time.
Use the optional **importMode** property in the import serialization data for each device to control the import process per device. The **importMode** property has the following options:

| importMode | Description |
| --- | --- |
-| **Create** |If a device does not exist with the specified **ID**, it is newly registered. If the device already exists, an error is written to the log file. |
-| **CreateOrUpdate** |If a device does not exist with the specified **ID**, it is newly registered. If the device already exists, existing information is overwritten with the provided input data without regard to the **ETag** value. |
-| **CreateOrUpdateIfMatchETag** |If a device does not exist with the specified **ID**, it is newly registered. If the device already exists, existing information is overwritten with the provided input data only if there is an **ETag** match. If there is an **ETag** mismatch, an error is written to the log file. |
-| **Delete** |If a device already exists with the specified **ID**, it is deleted without regard to the **ETag** value. If the device does not exist, an error is written to the log file. |
-| **DeleteIfMatchETag** |If a device already exists with the specified **ID**, it is deleted only if there is an **ETag** match. If the device does not exist, an error is written to the log file. If there is an ETag mismatch, an error is written to the log file. |
-| **Update** |If a device already exists with the specified **ID**, existing information is overwritten with the provided input data without regard to the **ETag** value. If the device does not exist, an error is written to the log file. |
-| **UpdateIfMatchETag** |If a device already exists with the specified **ID**, existing information is overwritten with the provided input data only if there is an **ETag** match. If the device does not exist or there is an **ETag** mismatch, an error is written to the log file. |
+| **Create** |If a device doesn't exist with the specified **ID**, it's newly registered. If the device already exists, an error is written to the log file. |
+| **CreateOrUpdate** |If a device doesn't exist with the specified **ID**, it's newly registered. If the device already exists, existing information is overwritten with the provided input data without regard to the **ETag** value. |
+| **CreateOrUpdateIfMatchETag** |If a device doesn't exist with the specified **ID**, it's newly registered. If the device already exists, existing information is overwritten with the provided input data only if there's an **ETag** match. If there's an **ETag** mismatch, an error is written to the log file. |
+| **Delete** |If a device already exists with the specified **ID**, it's deleted without regard to the **ETag** value. If the device doesn't exist, an error is written to the log file. |
+| **DeleteIfMatchETag** |If a device already exists with the specified **ID**, it's deleted only if there's an **ETag** match. If the device doesn't exist, an error is written to the log file. If there's an ETag mismatch, an error is written to the log file. |
+| **Update** |If a device already exists with the specified **ID**, existing information is overwritten with the provided input data without regard to the **ETag** value. If the device doesn't exist, an error is written to the log file. |
+| **UpdateIfMatchETag** |If a device already exists with the specified **ID**, existing information is overwritten with the provided input data only if there's an **ETag** match. If the device doesn't exist or there's an **ETag** mismatch, an error is written to the log file. |
| **UpdateTwin** |If a twin already exists with the specified **ID**, existing information is overwritten with the provided input data without regard to the twin's **ETag** value. |
-| **UpdateTwinIfMatchETag** |If a twin already exists with the specified **ID**, existing information is overwritten with the provided input data only if there is a match on the twin's **ETag** value. The twin's **ETag**, is processed independently from the device's **ETag**. If there is a mismatch with the existing twin's **ETag**, an error is written to the log file. |
+| **UpdateTwinIfMatchETag** |If a twin already exists with the specified **ID**, existing information is overwritten with the provided input data only if there's a match on the twin's **ETag** value. The twin's **ETag** is processed independently from the device's **ETag**. If there's a mismatch with the existing twin's **ETag**, an error is written to the log file. |
> [!NOTE]
-> If the serialization data does not explicitly define an **importMode** flag for a device, it defaults to **createOrUpdate** during the import operation.
+> If the serialization data doesn't explicitly define an **importMode** flag for a device, it defaults to **createOrUpdate** during the import operation.
## Import troubleshooting
-Using an import job to create devices may fail with a quota issue when it is close to the device count limit of the IoT hub. This can happen even if the total device count is still lower than the quota limit. The **IotHubQuotaExceeded (403002)** error is returned with the following error message: "Total number of devices on IotHub exceeded the allocated quota.ΓÇ¥
+Using an import job to create devices may fail with a quota issue when it's close to the device count limit of the IoT hub. This failure can happen even if the total device count is still lower than the quota limit. The **IotHubQuotaExceeded (403002)** error is returned with the following error message: "Total number of devices on IotHub exceeded the allocated quota."
If you get this error, you can use the following query to return the total number of devices registered on your IoT hub:
SELECT COUNT() as totalNumberOfDevices FROM devices
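To run this query from code rather than the portal, a minimal sketch using the registry manager's query API (not from the sample) might look like the following:

```csharp
// A sketch: run the device-count query with the registry manager.
IQuery query = registryManager.CreateQuery(
    "SELECT COUNT() as totalNumberOfDevices FROM devices");
while (query.HasMoreResults)
{
    // Each result is a JSON document; here it contains totalNumberOfDevices.
    foreach (string result in await query.GetNextAsJsonAsync())
    {
        Console.WriteLine(result);
    }
}
```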
For information about the total number of devices that can be registered to an IoT hub, see [IoT Hub limits](iot-hub-devguide-quotas-throttling.md#other-limits).
-If there's still quota available, you can examine the job output blob for devices that failed with the **IotHubQuotaExceeded (403002)** error. You can then try adding these devices individually to the IoT hub. For example, you can use the **AddDeviceAsync** or **AddDeviceWithTwinAsync** methods. Don't try to add the devices using another job as you'll likely encounter the same error.
+If there's still quota available, you can examine the job output blob for devices that failed with the **IotHubQuotaExceeded (403002)** error. You can then try adding these devices individually to the IoT hub. For example, you can use the **AddDeviceAsync** or **AddDeviceWithTwinAsync** methods. Don't try to add the devices using another job, because you're likely to encounter the same error.
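The following minimal sketch shows adding one such device individually; `"myDeviceId"` is a placeholder for an ID taken from the job output blob:

```csharp
// A sketch: add a single device that failed the bulk import.
// "myDeviceId" is a placeholder for an ID taken from the job output blob.
var device = new Device("myDeviceId");
device = await registryManager.AddDeviceAsync(device);
Console.WriteLine($"Added device {device.Id}");
```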
## Import devices example – bulk device provisioning
-The following C# code sample illustrates how to generate multiple device identities that:
+The following C# code snippet, from the **GenerateDevicesAsync** method in the SDK sample, illustrates how to:
* Generate multiple device identities that include authentication keys.
* Write that device information to a block blob.
* Import the devices into the identity registry.

```csharp
-// Provision 1,000 more devices
-var serializedDevices = new List<string>();
-
-for (var i = 0; i < 1000; i++)
+private async Task GenerateDevicesAsync(RegistryManager registryManager, int numToAdd)
{
- // Create a new ExportImportDevice
- // CryptoKeyGenerator is in the Microsoft.Azure.Devices.Common namespace
- var deviceToAdd = new ExportImportDevice()
- {
- Id = Guid.NewGuid().ToString(),
- Status = DeviceStatus.Enabled,
- Authentication = new AuthenticationMechanism()
+ var stopwatch = Stopwatch.StartNew();
+
+ Console.WriteLine($"Creating {numToAdd} devices for the source IoT hub.");
+ int interimProgressCount = 0;
+ int displayProgressCount = 1000;
+ int totalProgressCount = 0;
+
+ // Generate a reference for the list of new devices to add; the list will be written to this blob.
+ BlobClient generateDevicesBlob = _blobContainerClient.GetBlobClient(_generateDevicesBlobName);
+
+ // Define serializedDevices as a generic List<string>.
+ var serializedDevices = new List<string>(numToAdd);
+
+ for (int i = 1; i <= numToAdd; i++)
{
- SymmetricKey = new SymmetricKey()
- {
- PrimaryKey = CryptoKeyGenerator.GenerateKey(32),
- SecondaryKey = CryptoKeyGenerator.GenerateKey(32)
- }
- },
- ImportMode = ImportMode.Create
- };
+ // Create device name with this format: Hub_00000000 + a new guid.
+ // This should be large enough to display the largest number (1 million).
+ string deviceName = $"Hub_{i:D8}_{Guid.NewGuid()}";
+ Debug.Print($"Adding device '{deviceName}'");
+
+ // Create a new ExportImportDevice.
+ var deviceToAdd = new ExportImportDevice
+ {
+ Id = deviceName,
+ Status = DeviceStatus.Enabled,
+ Authentication = new AuthenticationMechanism
+ {
+ SymmetricKey = new SymmetricKey
+ {
+ PrimaryKey = GenerateKey(32),
+ SecondaryKey = GenerateKey(32),
+ }
+ },
+ // This indicates that the entry should be added as a new device.
+ ImportMode = ImportMode.Create,
+ };
+
+ // Add device to the list as a serialized object.
+ serializedDevices.Add(JsonConvert.SerializeObject(deviceToAdd));
+
+ // Not real progress as you write the new devices, but will at least show *some* progress.
+ interimProgressCount++;
+ totalProgressCount++;
+ if (interimProgressCount >= displayProgressCount)
+ {
+ Console.WriteLine($"Added {totalProgressCount}/{numToAdd} devices.");
+ interimProgressCount = 0;
+ }
+ }
- // Add device to the list
- serializedDevices.Add(JsonConvert.SerializeObject(deviceToAdd));
-}
+ // Now have a list of devices to be added, each one has been serialized.
+ // Write the list to the blob.
+ var sb = new StringBuilder();
+ serializedDevices.ForEach(serializedDevice => sb.AppendLine(serializedDevice));
-// Write the list to the blob
-var sb = new StringBuilder();
-serializedDevices.ForEach(serializedDevice => sb.AppendLine(serializedDevice));
-await blob.DeleteIfExistsAsync();
+ // Write list of serialized objects to the blob.
+ using Stream stream = await generateDevicesBlob.OpenWriteAsync(overwrite: true);
+ byte[] bytes = Encoding.UTF8.GetBytes(sb.ToString());
+ for (int i = 0; i < bytes.Length; i += BlobWriteBytes)
+ {
+ int length = Math.Min(bytes.Length - i, BlobWriteBytes);
+ await stream.WriteAsync(bytes.AsMemory(i, length));
+ }
+ await stream.FlushAsync();
-using (CloudBlobStream stream = await blob.OpenWriteAsync())
-{
- byte[] bytes = Encoding.UTF8.GetBytes(sb.ToString());
- for (var i = 0; i < bytes.Length; i += 500)
- {
- int length = Math.Min(bytes.Length - i, 500);
- await stream.WriteAsync(bytes, i, length);
- }
-}
+ Console.WriteLine("Running a registry manager job to add the devices.");
-// Call import using the blob to add new devices
-// Log information related to the job is written to the same container
-// This normally takes 1 minute per 100 devices
-JobProperties importJob =
- await registryManager.ImportDevicesAsync(containerSasUri, containerSasUri);
+ // Should now have a file with all the new devices in it as serialized objects in blob storage.
+ // generatedListBlob has the list of devices to be added as serialized objects.
+ // Call import using the blob to add the new devices.
+ // Log information related to the job is written to the same container.
+ // This normally takes 1 minute per 100 devices (according to the docs).
-// Wait until job is finished
-while(true)
-{
- importJob = await registryManager.GetJobAsync(importJob.JobId);
- if (importJob.Status == JobStatus.Completed ||
- importJob.Status == JobStatus.Failed ||
- importJob.Status == JobStatus.Cancelled)
- {
- // Job has finished executing
- break;
- }
+ // First, initiate an import job.
+ // This reads in the rows from the text file and writes them to IoT Devices.
+ // If you want to add devices from a file, you can create a file and use this to import it.
+ // They have to be in the exact right format.
+ try
+ {
+ // The first URI is the container to import from; the file defaults to devices.txt, but may be specified.
+ // The second URI points to the container to write errors to as a blob.
+ // This lets you import the devices from any file name. Since we wrote the new
+ // devices to [devicesToAdd], we need to read the list from there as well.
+ var importGeneratedDevicesJob = JobProperties.CreateForImportJob(
+ _containerUri,
+ _containerUri,
+ _generateDevicesBlobName);
+ importGeneratedDevicesJob = await registryManager.ImportDevicesAsync(importGeneratedDevicesJob);
+ await WaitForJobAsync(registryManager, importGeneratedDevicesJob);
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine($"Adding devices failed due to {ex.Message}");
+ }
- await Task.Delay(TimeSpan.FromSeconds(5));
+ stopwatch.Stop();
+ Console.WriteLine($"GenerateDevices, time elapsed = {stopwatch.Elapsed}.");
} ```-
+
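The **GenerateKey** method called in the snippet above is a helper defined elsewhere in the sample. A minimal sketch of such a helper, assuming a base64-encoded random key (the sample's actual implementation may differ):

```csharp
// A sketch of a GenerateKey helper: create a random symmetric key of the
// given byte length and return it base64-encoded.
private static string GenerateKey(int keySize)
{
    byte[] keyBytes = new byte[keySize];
    using var randomGenerator = System.Security.Cryptography.RandomNumberGenerator.Create();
    randomGenerator.GetNonZeroBytes(keyBytes);
    return Convert.ToBase64String(keyBytes);
}
```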
## Import devices example – bulk deletion
-The following code sample shows you how to delete the devices you added using the previous code sample:
+The following C# code snippet, from the **DeleteFromHubAsync** method in the SDK sample, shows you how to delete all of the devices from an IoT hub:
```csharp
-// Step 1: Update each device's ImportMode to be Delete
-sb = new StringBuilder();
-serializedDevices.ForEach(serializedDevice =>
+private async Task DeleteFromHubAsync(RegistryManager registryManager, bool includeConfigurations)
{
- // Deserialize back to an ExportImportDevice
- var device = JsonConvert.DeserializeObject<ExportImportDevice>(serializedDevice);
+ var stopwatch = Stopwatch.StartNew();
- // Update property
- device.ImportMode = ImportMode.Delete;
+ Console.WriteLine("Deleting all devices from an IoT hub.");
- // Re-serialize
- sb.AppendLine(JsonConvert.SerializeObject(device));
-});
+ Console.WriteLine("Exporting a list of devices from IoT hub to blob storage.");
-// Step 2: Write the new import data back to the block blob
-await blob.DeleteIfExistsAsync();
-using (CloudBlobStream stream = await blob.OpenWriteAsync())
-{
- byte[] bytes = Encoding.UTF8.GetBytes(sb.ToString());
- for (var i = 0; i < bytes.Length; i += 500)
- {
- int length = Math.Min(bytes.Length - i, 500);
- await stream.WriteAsync(bytes, i, length);
- }
-}
+ // Read from storage, which contains serialized objects.
+ // Write each line to the serializedDevices list.
+ BlobClient devicesBlobClient = _blobContainerClient.GetBlobClient(_destHubDevicesImportBlobName);
-// Step 3: Call import using the same blob to delete all devices
-importJob = await registryManager.ImportDevicesAsync(containerSasUri, containerSasUri);
+ Console.WriteLine("Reading the list of devices in from blob storage.");
+ List<string> serializedDevices = await ReadFromBlobAsync(devicesBlobClient);
-// Wait until job is finished
-while(true)
-{
- importJob = await registryManager.GetJobAsync(importJob.JobId);
- if (importJob.Status == JobStatus.Completed ||
- importJob.Status == JobStatus.Failed ||
- importJob.Status == JobStatus.Cancelled)
- {
- // Job has finished executing
- break;
- }
+ // Step 1: Update each device's ImportMode to be Delete
+ Console.WriteLine("Updating ImportMode to be 'Delete' for each device and writing back to the blob.");
+ var sb = new StringBuilder();
+ serializedDevices.ForEach(serializedEntity =>
+ {
+ // Deserialize back to an ExportImportDevice and change import mode.
+ ExportImportDevice device = JsonConvert.DeserializeObject<ExportImportDevice>(serializedEntity);
+ device.ImportMode = ImportMode.Delete;
+
+ // Reserialize the object now that we've updated the property.
+ sb.AppendLine(JsonConvert.SerializeObject(device));
+ });
+
+ // Step 2: Write the list in memory to the blob.
+ BlobClient deleteDevicesBlobClient = _blobContainerClient.GetBlobClient(_hubDevicesCleanupBlobName);
+ await WriteToBlobAsync(deleteDevicesBlobClient, sb.ToString());
+
+ // Step 3: Call import using the same blob to delete all devices.
+ Console.WriteLine("Running a registry manager job to delete the devices from the IoT hub.");
+ var importJob = JobProperties.CreateForImportJob(
+ _containerUri,
+ _containerUri,
+ _hubDevicesCleanupBlobName);
+ importJob = await registryManager.ImportDevicesAsync(importJob);
+ await WaitForJobAsync(registryManager, importJob);
+
+ // Step 4: Delete configurations.
+ if (includeConfigurations)
+ {
+ BlobClient configsBlobClient = _blobContainerClient.GetBlobClient(_srcHubConfigsExportBlobName);
+ List<string> serializedConfigs = await ReadFromBlobAsync(configsBlobClient);
+ foreach (string serializedConfig in serializedConfigs)
+ {
+ try
+ {
+ Configuration config = JsonConvert.DeserializeObject<Configuration>(serializedConfig);
+ await registryManager.RemoveConfigurationAsync(config.Id);
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine($"Failed to deserialize or remove a config.\n\t{serializedConfig}\n\n{ex.Message}");
+ }
+ }
+ }
- await Task.Delay(TimeSpan.FromSeconds(5));
+ stopwatch.Stop();
+ Console.WriteLine($"Deleted IoT hub devices and configs: time elapsed = {stopwatch.Elapsed}");
} ```
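The **WriteToBlobAsync** helper called in step 2 isn't shown in this snippet. A minimal sketch using the same Azure.Storage.Blobs types (the sample's actual helper may differ):

```csharp
// A sketch: overwrite the blob with the updated, serialized device list.
private static async Task WriteToBlobAsync(BlobClient blobClient, string contents)
{
    byte[] bytes = Encoding.UTF8.GetBytes(contents);
    using var stream = new MemoryStream(bytes);
    await blobClient.UploadAsync(stream, overwrite: true);
}
```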
static string GetContainerSasUri(CloudBlobContainer container)
## Next steps
-In this article, you learned how to perform bulk operations against the identity registry in an IoT hub. Many of these operations, including how to move devices from one hub to another, are used in the **Manage devices registered to the IoT hub** section of [How to migrate an IoT hub using Azure Resource Manager templates](migrate-hub-arm.md#manage-the-devices-registered-to-the-iot-hub).
-
-The migration article has a working sample associated with it, which is located in the IoT C# samples on this page: [Azure IoT hub service samples for C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/how%20to%20guides), with the project being ImportExportDevicesSample.
+In this article, you learned how to perform bulk operations against the identity registry in an IoT hub. Many of these operations, including how to move devices from one hub to another, are used in the **Manage devices registered to the IoT hub** section of [How to manually migrate an Azure IoT hub using an Azure Resource Manager template](migrate-hub-arm.md#manage-the-devices-registered-to-the-iot-hub).
iot-hub Iot Hub Create Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md
Title: Use the Azure portal to create an IoT Hub
+ Title: Create an IoT hub using the Azure portal
description: How to create, manage, and delete Azure IoT hubs through the Azure portal. Includes information about pricing tiers, scaling, security, and messaging configuration.
iot-hub Iot Hub Create Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-cli.md
Title: Create an IoT Hub using Azure CLI
+ Title: Create an IoT hub using the Azure CLI
description: Learn how to use the Azure CLI commands to create a resource group and then create an IoT hub in the resource group. Also learn how to remove the hub.
When you create an IoT hub, you must create it in a resource group. Either use a
> az account list-locations -o table > ```
-## Create an IoT Hub
+## Create an IoT hub
Use the Azure CLI to create a resource group and then add an IoT hub.
The result is a JSON printout which includes your keys and other information.
Alternatively, there are several options to register a device using different kinds of authorization. To explore the options, see [Examples](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create-examples) on the **az iot hub device-identity** reference page.
-## Remove an IoT Hub
+## Remove an IoT hub
There are various commands to [delete an individual resource](/cli/azure/resource), such as an IoT hub.
iot-hub Iot Hub Devguide C2d Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-c2d-guidance.md
Here is a detailed comparison of the various cloud-to-device communication optio
Learn how to use direct methods, desired properties, and cloud-to-device messages in the following tutorials:
-* [Use direct methods](quickstart-control-device.md)
+* [Quickstart: Control a device connected to an IoT hub](quickstart-control-device.md)
* [Use desired properties to configure devices](tutorial-device-twins.md) * [Send cloud-to-device messages](c2d-messaging-node.md)
iot-hub Iot Hub Devguide Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-direct-methods.md
Now you have learned how to use direct methods, you may be interested in the fol
If you would like to try out some of the concepts described in this article, you may be interested in the following IoT Hub tutorial:
-* [Use direct methods](quickstart-control-device.md)
+* [Quickstart: Control a device connected to an IoT hub](quickstart-control-device.md)
* [Device management with the Azure IoT Hub extension for VS Code](iot-hub-device-management-iot-toolkit.md)
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
Use asynchronous operations on the [IoT Hub resource provider endpoint](iot-hub-
For more information about the import and export APIs, see [IoT Hub resource provider REST APIs](/rest/api/iothub/iothubresource). To learn more about running import and export jobs, see [Bulk management of IoT Hub device identities](iot-hub-bulk-identity-mgmt.md).
-Device identities can also be exported and imported from an IoT Hub via the Service API via either the [REST API](/rest/api/iothub/service/jobs/createimportexportjob) or one of the IoT Hub [Service SDKs](./iot-hub-devguide-sdks.md#azure-iot-hub-service-sdks).
+Device identities can also be exported and imported from an IoT hub by using the Service API through either the [REST API](/rest/api/iothub/service/jobs/createimportexportjob) or one of the IoT Hub [Service SDKs](./iot-hub-devguide-sdks.md#azure-iot-hub-service-sdks).
## Device provisioning
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
# IoT Hub quotas and throttling
-This article explains the quotas for an IoT Hub, and provides information to help you understand how throttling works.
+This article explains the quotas for an IoT hub, and provides information to help you understand how throttling works.
Each Azure subscription can have at most 50 IoT hubs, and at most 1 Free hub.
iot-hub Iot Hub Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-distributed-tracing.md
In this article, you use the [Azure IoT device SDK for C](https://github.com/Azu
- Southeast Asia - West US 2 -- This article assumes that you're familiar with sending telemetry messages to your IoT hub. Make sure you've completed the [quickstart for sending telemetry in C](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c).
+- This article assumes that you're familiar with sending telemetry messages to your IoT hub.
- Register a device with your IoT hub and save the connection string. Registration steps are available in the quickstart.
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ha-dr.md
Here's a summary of the HA/DR options presented in this article that can be used
## Next steps * [What is Azure IoT Hub?](about-iot-hub.md)
-* [Get started with IoT Hubs (Quickstart)](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp)
+* [Quickstart: Send telemetry from an IoT Plug and Play device to Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp)
* [Tutorial: Perform manual failover for an IoT hub](tutorial-manual-failover.md)
iot-hub Iot Hub How To Order Connection State Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-order-connection-state-events.md
The sequence number is a string representation of a hexadecimal number. You can
* A collection in your database. See [Add a collection](../cosmos-db/create-sql-api-java.md#add-a-container) for a walkthrough. When you create your collection, use `/id` for the partition key.
-* An IoT Hub in Azure. If you haven't created one yet, see [Get started with IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) for a walkthrough.
+* An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+* An IoT hub under your Azure subscription. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
## Create a logic app
iot-hub Iot Hub Live Data Visualization In Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-live-data-visualization-in-web-apps.md
In this article, you learn how to visualize real-time sensor data that your IoT
## Prerequisites
-This tutorial assumes that you already have an IoT hub instance in your Azure subscription and a registered IoT device sending temperature data.
- The web application sample for this tutorial is written in Node.js. The steps in this article assume a Windows development machine; however, you can also perform these steps on a Linux system in your preferred shell.
-* Use the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) or complete one of the [Send telemetry](../iot-develop/quickstart-send-telemetry-iot-hub.md) quickstarts to get a device sending temperature data to IoT Hub. These articles cover the following requirements:
+* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps to create an IoT hub using the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+
+* A device registered in your IoT hub. If you haven't registered a device yet, register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
- * An active Azure subscription
- * An IoT hub under your subscription
- * A registered device running a client application that sends messages to your IoT hub
+* A simulated device that sends telemetry messages to your IoT hub. You can use the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) to simulate a device that sends temperature data to IoT Hub.
* [Node.js](https://nodejs.org) version 14 or later. To check your node version run `node --version`.
iot-hub Iot Hub Preview Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-preview-mode.md
These features are improvements at the IoT Hub protocol and authentication layer
1. Select **Next: Review + create**, then **Create**.
-Once created, an IoT Hub in preview mode always shows this banner, letting you know to use this IoT hub for preview purposes only:
+Once created, an IoT hub in preview mode always shows this banner, letting you know to use this IoT hub for preview purposes only:
:::image type="content" source="media/iot-hub-preview-mode/banner.png" alt-text="Image showing banner for preview mode IoT hub":::
iot-hub Iot Hub Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-public-network-access.md
If you have trouble accessing your IoT hub, your network configuration could be
When trying to access your IoT hub with other tools, such as the Azure CLI, the error message may include `{"errorCode": 401002, "message": "Unauthorized"}` when the request isn't routed correctly to your IoT hub.
-To get access to the IoT hub, request permission from your IT administrator to add your IP address in the IP address range or to enable public network access to all networks. If that fails to resolve the issue, check your local network settings or contact your local network administrator to fix connectivity to the IoT Hub. For example, sometimes a proxy in the local network can interfere with access to IoT Hub.
+To get access to the IoT hub, request permission from your IT administrator to add your IP address in the IP address range or to enable public network access to all networks. If that fails to resolve the issue, check your local network settings or contact your local network administrator to fix connectivity to the IoT hub. For example, sometimes a proxy in the local network can interfere with access to IoT Hub.
If the preceding commands do not work or you cannot turn on all network ranges, contact Microsoft support.
iot-hub Iot Hub Raspberry Pi Kit C Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-kit-c-get-started.md
You can use the resources created in this topic with other tutorials and quickst
1. From the left-hand menu in the Azure portal, select **All resources** and then select the IoT Hub you created. 1. At the top of the IoT Hub overview pane, click **Delete**.
-1. Enter your hub name and click **Delete** again to confirm permanently deleting the IoT Hub.
+1. Enter your hub name and click **Delete** again to confirm permanently deleting the IoT hub.
## Next steps
iot-hub Iot Hub Raspberry Pi Kit Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md
You can use the resources created in this topic with other tutorials and quickst
1. From the left-hand menu in the Azure portal, select **All resources** and then select the IoT Hub you created. 1. At the top of the IoT Hub overview pane, click **Delete**.
-1. Enter your hub name and click **Delete** again to confirm permanently deleting the IoT Hub.
+1. Enter your hub name and click **Delete** again to confirm permanently deleting the IoT hub.
## Next steps
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
Title: Create an Azure IoT hub using the resource provider REST API
-description: Learn how to use the resource provider C# REST API to create and manage an IoT Hub programmatically.
+description: Learn how to use the resource provider C# REST API to create and manage an IoT hub programmatically.
iot-hub Iot Hub Rm Template Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-template-powershell.md
Title: Create an Azure IoT Hub using a template (PowerShell)
-description: How to use an Azure Resource Manager template to create an IoT Hub with Azure PowerShell.
+ Title: Create an Azure IoT hub using a template (PowerShell)
+description: How to use an Azure Resource Manager template to create an IoT hub with Azure PowerShell.
iot-hub Iot Hubs Manage Device Twin Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hubs-manage-device-twin-tags.md
Device twin tags can be used as a powerful tool to help you organize your device
## Prerequisites
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* At least two registered devices. Register devices in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub Migrate Hub Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-hub-arm.md
This section provides specific instructions for migrating the hub.
1. Select **Export template** from the list of properties and settings for the hub.
- :::image type="content" source="./media/migrate-hub-arm/iot-hub-export-template.png" alt-text="Screenshot showing the command for exporting the template for the IoT Hub." border="true":::
+ :::image type="content" source="./media/migrate-hub-arm/iot-hub-export-template.png" alt-text="Screenshot showing the command for exporting the template for the IoT hub." border="true":::
1. Select **Download** to download the template. Save the file somewhere you can find it again.
- :::image type="content" source="./media/migrate-hub-arm/iot-hub-download-template.png" alt-text="Screenshot showing the command for downloading the template for the IoT Hub." border="true":::
+ :::image type="content" source="./media/migrate-hub-arm/iot-hub-download-template.png" alt-text="Screenshot showing the command for downloading the template for the IoT hub." border="true":::
### View the template
Don't clean up until you're certain the new hub is up and running and the device
## Next steps
-You have migrated an IoT hub into a new hub in a new region, complete with the devices. For more information about performing bulk operations against the identity registry in an IoT Hub, see [Import and export IoT Hub device identities in bulk](iot-hub-bulk-identity-mgmt.md).
+You have migrated an IoT hub into a new hub in a new region, complete with the devices. For more information about performing bulk operations against the identity registry in an IoT hub, see [Import and export IoT Hub device identities in bulk](iot-hub-bulk-identity-mgmt.md).
iot-hub Module Twins C https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-c.md
At the end of this article, you have two C apps:
## Prerequisites
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* The latest [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
iot-hub Module Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-cli.md
This article shows you how to create an Azure CLI session in which you:
* Azure CLI. You can also run the commands in this article using the [Azure Cloud Shell](../cloud-shell/overview.md), an interactive CLI shell that runs in your browser or in an app such as Windows Terminal. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this article requires Azure CLI version 2.36 or later. Run `az --version` to find the version. To locally install or upgrade Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
If you want to use the Azure Cloud Shell, you must first launch and configure it
1. Select the **Cloud Shell** icon from the page header in the Azure portal.
- :::image type="content" source="./media/quickstart-send-telemetry-cli/cloud-shell-button.png" alt-text="Screenshot of the global controls from the page header of the Azure portal, highlighting the Cloud Shell icon.":::
+ :::image type="content" source="./media/module-twins-cli/cloud-shell-button.png" alt-text="Screenshot of the global controls from the page header of the Azure portal, highlighting the Cloud Shell icon.":::
> [!NOTE] > If this is the first time you've used the Cloud Shell, it prompts you to create storage, which is required to use the Cloud Shell. Select a subscription to create a storage account and Microsoft Azure Files share.
If you want to use the Azure Cloud Shell, you must first launch and configure it
> [!NOTE] > Some commands require different syntax or formatting in the **Bash** and **PowerShell** environments. For more information, see [Tips for using the Azure CLI successfully](/cli/azure/use-cli-effectively?tabs=bash%2Cbash2).
- :::image type="content" source="./media/quickstart-send-telemetry-cli/cloud-shell-environment.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the environment selector in the toolbar.":::
+ :::image type="content" source="./media/module-twins-cli/cloud-shell-environment.png" alt-text="Screenshot of an Azure Cloud Shell window, highlighting the environment selector in the toolbar.":::
## Prepare a CLI session
iot-hub Module Twins Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-dotnet.md
At the end of this article, you have two .NET console apps:
* Visual Studio.
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
## Get the IoT hub connection string
iot-hub Module Twins Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-node.md
At the end of this article, you have two Node.js apps:
## Prerequisites
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
iot-hub Module Twins Portal Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-portal-dotnet.md
In this article, you will learn how to:
* Visual Studio.
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
iot-hub Module Twins Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-python.md
At the end of this article, you have three Python apps:
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-* An IoT Hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
* [Python version 3.7 or later](https://www.python.org/downloads/) is recommended. Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variable.
iot-hub Quickstart Bicep Route Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-bicep-route-messages.md
Last updated 05/11/2022
-# Quickstart: Deploy an Azure IoT Hub and a storage account using Bicep
+# Quickstart: Deploy an Azure IoT hub and a storage account using Bicep
-In this quickstart, you use Bicep to create an IoT Hub that will route messages to Azure Storage and a storage account to hold the messages. After manually adding a virtual IoT device to the hub to submit the messages, you configure that connection information in an application called *arm-read-write* to submit messages from the device to the hub. The hub is configured so the messages sent to the hub are automatically routed to the storage account. At the end of this quickstart, you can open the storage account and see the messages sent.
+In this quickstart, you use Bicep to create an IoT hub that will route messages to Azure Storage and a storage account to hold the messages. After manually adding a virtual IoT device to the hub to submit the messages, you configure that connection information in an application called *arm-read-write* to submit messages from the device to the