Updates from: 06/02/2022 01:20:01
Service | Microsoft Docs article | Related commit history on GitHub | Change details
active-directory-b2c Partner Asignio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-asignio.md
Follow the steps mentioned in [this tutorial](tutorial-register-applications.md?
| Property | Value |
|:--|:--|
| Name | Login with Asignio *(or a name of your choice)* |
- |Metadata URL | https://authorization.asignio.com/.well-known/openid-configuration|
+ |Metadata URL | `https://authorization.asignio.com/.well-known/openid-configuration`|
| Client ID | enter the client ID that you previously generated in [step 1](#step-1-configure-an-application-with-asignio) |
| Client Secret | enter the Client secret that you previously generated in [step 1](#step-1-configure-an-application-with-asignio) |
| Scope | openid email profile |
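As a quick sanity check before saving the identity provider, you can fetch the metadata URL from the table above and confirm the endpoints Asignio advertises. A minimal Python sketch, assuming the `requests` package; the printed fields are standard OpenID Connect discovery metadata, not part of the tutorial itself:

```python
import requests

# OpenID Connect discovery document from the table above.
METADATA_URL = "https://authorization.asignio.com/.well-known/openid-configuration"

resp = requests.get(METADATA_URL, timeout=10)
resp.raise_for_status()
config = resp.json()

# Standard OIDC discovery fields; Azure AD B2C reads this same
# document when you save the identity provider configuration.
for field in ("issuer", "authorization_endpoint", "token_endpoint", "scopes_supported"):
    print(f"{field}: {config.get(field)}")
```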
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md
Previously updated : 07/22/2021
Last updated : 06/01/2022
All users start out *Disabled*. When you enroll users in per-user Azure AD Multi-Factor Authentication, their state changes to *Enabled*.
To view and manage user states, complete the following steps to access the Azure portal page:
1. Sign in to the [Azure portal](https://portal.azure.com) as a Global administrator.
-1. Search for and select *Azure Active Directory*, then select **Users** > **All users**.
-1. Select **Per-user MFA**. You may need to scroll to the right to see this menu option. Select the example screenshot below to see the full Azure portal window and menu location:
- [![Select Multi-Factor Authentication from the Users window in Azure AD.](media/howto-mfa-userstates/selectmfa-cropped.png)](media/howto-mfa-userstates/selectmfa.png#lightbox)
+1. Search for and select **Azure Active Directory**, then select **Users** > **All users**.
+1. Select **Per-user MFA**.
+ :::image type="content" border="true" source="media/howto-mfa-userstates/selectmfa-cropped.png" alt-text="Screenshot of select Multi-Factor Authentication from the Users window in Azure AD.":::
1. A new page opens that displays the user state, as shown in the following example.
   ![Screenshot that shows example user state information for Azure AD Multi-Factor Authentication.](./media/howto-mfa-userstates/userstate1.png)
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
-# How to: Block legacy authentication to Azure AD with Conditional Access
+# How to: Block legacy authentication access to Azure AD with Conditional Access
To give your users easy access to your cloud apps, Azure Active Directory (Azure AD) supports a broad variety of authentication protocols, including legacy authentication. However, legacy authentication doesn't support multifactor authentication (MFA). In many environments, MFA is a common requirement to address identity theft.
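Before enabling a blocking policy, it helps to measure how much legacy authentication is still in use. A hedged Python sketch that tallies Azure AD sign-ins by client app via the Microsoft Graph sign-in log API; `ACCESS_TOKEN` is a placeholder for a token carrying the `AuditLog.Read.All` permission:

```python
import collections
import requests

ACCESS_TOKEN = "<token with AuditLog.Read.All>"  # placeholder; acquire via MSAL or similar
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

counts = collections.Counter()
url = "https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=100"
while url:
    page = requests.get(url, headers=headers, timeout=30)
    page.raise_for_status()
    body = page.json()
    for entry in body.get("value", []):
        # clientAppUsed separates modern clients ("Browser", "Mobile Apps and
        # Desktop clients") from legacy protocols such as "Exchange ActiveSync".
        counts[entry.get("clientAppUsed") or "Unknown"] += 1
    url = body.get("@odata.nextLink")

for app, n in counts.most_common():
    print(f"{n:6}  {app}")
```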
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
The filter for devices condition in Conditional Access evaluates policy based on
## Next steps
+- [Back to school – Using Boolean algebra correctly in complex filters](https://techcommunity.microsoft.com/t5/intune-customer-success/back-to-school-using-boolean-algebra-correctly-in-complex/ba-p/3422765)
- [Update device Graph API](/graph/api/device-update?tabs=http)
- [Conditional Access: Conditions](concept-conditional-access-conditions.md)
- [Common Conditional Access policies](concept-conditional-access-policy-common.md)
-- [Securing devices as part of the privileged access story](/security/compass/privileged-access-devices)
+- [Securing devices as part of the privileged access story](/security/compass/privileged-access-devices)
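The Boolean algebra post linked above is worth the read because filter-for-devices rules chain `-and`/`-or` clauses, and operator precedence can silently change which devices match. The same pitfall is easy to reproduce in Python, where `and` also binds more tightly than `or`; the device attributes below are made-up stand-ins for filter properties:

```python
# Made-up device attributes standing in for filter-for-devices properties.
device = {"isCompliant": True, "trustType": "AzureAD", "model": "Tablet"}

# Intended rule: (compliant OR domain-joined) AND not a tablet.
# Without parentheses, `and` binds tighter, so this really means
# compliant OR (domain-joined AND not a tablet).
unparenthesized = (
    device["isCompliant"]
    or device["trustType"] == "ServerAD"
    and device["model"] != "Tablet"
)

# Explicit parentheses express the intended rule.
intended = (
    device["isCompliant"] or device["trustType"] == "ServerAD"
) and device["model"] != "Tablet"

# A compliant tablet matches the first rule but not the intended one.
print(unparenthesized, intended)  # True False
```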
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
Previously updated : 03/17/2021
Last updated : 06/01/2022
The following options are available to include when creating a Conditional Access policy:
- All guest and external users
  - This selection includes any B2B guests and external users including any user with the `user type` attribute set to `guest`. This selection also applies to any external user signed in from a different organization like a Cloud Solution Provider (CSP).
- Directory roles
- - Allows administrators to select specific built-in Azure AD directory roles used to determine policy assignment. For example, organizations may create a more restrictive policy on users assigned the global administrator role. Other role types are not supported, including administrative unit-scoped roles and custom roles.
+ - Allows administrators to select specific built-in Azure AD directory roles used to determine policy assignment. For example, organizations may create a more restrictive policy on users assigned the global administrator role. Other role types aren't supported, including administrative unit-scoped roles and custom roles.
- Users and groups
  - Allows targeting of specific sets of users. For example, organizations can select a group that contains all members of the HR department when an HR app is selected as the cloud app. A group can be any type of user group in Azure AD, including dynamic or assigned security and distribution groups. Policy will be applied to nested users and groups.
The following options are available to include when creating a Conditional Access policy:
## Exclude users
-When organizations both include and exclude a user or group, the user or group is excluded from the policy, as an exclude action overrides an include in policy. Exclusions are commonly used for emergency access or break-glass accounts. More information about emergency access accounts and why they are important can be found in the following articles:
+When organizations both include and exclude a user or group, the user or group is excluded from the policy, as an exclude action overrides an include in policy. Exclusions are commonly used for emergency access or break-glass accounts. More information about emergency access accounts and why they're important can be found in the following articles:
* [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md)
* [Create a resilient access control management strategy with Azure Active Directory](../authentication/concept-resilient-controls.md)
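Because an exclusion always wins over an inclusion, a common pattern is a policy that includes all users but excludes an emergency-access group. A sketch of the `users` condition such a policy might carry when created through the Microsoft Graph conditional access API; the group ID is a placeholder, and a real `conditionalAccessPolicy` needs more fields such as grant controls:

```python
import json

# Placeholder object ID of the emergency access (break-glass) group.
BREAK_GLASS_GROUP_ID = "00000000-0000-0000-0000-000000000000"

# Fragment of a body for
# POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
# The exclusion overrides the "All" inclusion, so break-glass accounts
# keep access even if the policy misbehaves.
policy_fragment = {
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            "excludeGroups": [BREAK_GLASS_GROUP_ID],
        },
        "applications": {"includeApplications": ["All"]},
    }
}

print(json.dumps(policy_fragment, indent=2))
```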
The following options are available to exclude when creating a Conditional Access policy:
### Preventing administrator lockout
-To prevent an administrator from locking themselves out of their directory when creating a policy applied to **All users** and **All apps**, they will see the following warning.
+To prevent an administrator from locking themselves out of their directory when creating a policy applied to **All users** and **All apps**, they'll see the following warning.
> Don't lock yourself out! We recommend applying a policy to a small set of users first to verify it behaves as expected. We also recommend excluding at least one administrator from this policy. This ensures that you still have access and can update a policy if a change is required. Please review the affected users and apps.
By default, the policy will provide an option to exclude the current user from the policy.
![Warning, don't lock yourself out!](./media/concept-conditional-access-users-groups/conditional-access-users-and-groups-lockout-warning.png)
-If you do find yourself locked out, see [What to do if you are locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out-of-the-azure-portal)
+If you do find yourself locked out, see [What to do if you're locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out-of-the-azure-portal)
+
+### External partner access
+
+Conditional Access policies that target external users may interfere with service provider access, for example granular delegated admin privileges. For more information, see [Introduction to granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction).
## Next steps
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
- Title: Microsoft identity platform access tokens | Azure
+ Title: Microsoft identity platform access tokens
description: Learn about access tokens emitted by the Azure AD v1.0 and Microsoft identity platform (v2.0) endpoints.
active-directory Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/accounts-overview.md
- Title: Microsoft identity platform accounts & tenant profiles on Android | Azure
+ Title: Microsoft identity platform accounts & tenant profiles on Android
description: An overview of the Microsoft identity platform accounts for Android
active-directory Active Directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-certificate-credentials.md
Title: Microsoft identity platform certificate credentials
description: This article discusses the registration and use of certificate credentials for application authentication.
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
Title: Customize Azure AD tenant app claims (PowerShell)
description: Learn how to customize claims emitted in tokens for an application in a specific Azure Active Directory tenant.
active-directory Active Directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
Title: Configurable token lifetimes
description: Learn how to set lifetimes for access, SAML, and ID tokens issued by the Microsoft identity platform.
active-directory Active Directory Enterprise App Role Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-enterprise-app-role-management.md
- Title: Configure role claim for enterprise Azure AD apps | Azure
+ Title: Configure role claim for enterprise Azure AD apps
description: Learn how to configure the role claim issued in the SAML token for enterprise applications in Azure Active Directory
active-directory Active Directory How Applications Are Added https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-how-applications-are-added.md
Title: How and why apps are added to Azure AD
description: What does it mean for an application to be added to Azure AD and how do they get there?
active-directory Active Directory How To Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-how-to-integrate.md
- Title: How to integrate with the Microsoft identity platform | Azure
+ Title: How to integrate with the Microsoft identity platform
description: Learn the benefits of integrating your application with the Microsoft identity platform, and get resources for features like simplified sign-in, identity management, multi-factor authentication, and access control.
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
Title: Provide optional claims to Azure AD apps
description: How to add custom or additional claims to the SAML 2.0 and JSON Web Tokens (JWT) tokens issued by Microsoft identity platform.
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-saml-claims-customization.md
Title: Customize app SAML token claims
description: Learn how to customize the claims issued by Microsoft identity platform in the SAML token for enterprise applications.
active-directory Active Directory Schema Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-schema-extensions.md
Title: Use Azure AD schema extension attributes in claims
description: Describes how to use directory schema extension attributes for sending user data to applications in token claims.
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
- Title: OAuth 2.0 and OpenID Connect protocols on the Microsoft identity platform | Azure
+ Title: OAuth 2.0 and OpenID Connect protocols on the Microsoft identity platform
description: A guide to OAuth 2.0 and OpenID Connect protocols as supported by the Microsoft identity platform.
active-directory Api Find An Api How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/api-find-an-api-how-to.md
- Title: Find an API for a custom-developed app | Azure
+ Title: Find an API for a custom-developed app
description: How to configure the permissions you need to access a particular API in your custom developed Azure AD application
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md
Title: Apps & service principals in Azure AD
description: Learn about the relationship between application and service principal objects in Azure Active Directory.
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
Title: "How to use Continuous Access Evaluation enabled APIs in your applications | Azure"-
+ Title: "How to use Continuous Access Evaluation enabled APIs in your applications"
description: How to increase app security and resilience by adding support for Continuous Access Evaluation, enabling long-lived access tokens that can be revoked based on critical events and policy evaluation.
active-directory App Sign In Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-sign-in-flow.md
- Title: App sign-in flow with the Microsoft identity platform | Azure
+ Title: App sign-in flow with the Microsoft identity platform
description: Learn about the sign-in flow of web, desktop, and mobile apps in Microsoft identity platform.
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
Title: Microsoft Enterprise SSO plug-in for Apple devices
description: Learn about the Azure Active Directory SSO plug-in for iOS, iPadOS, and macOS devices.
active-directory Application Consent Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-consent-experience.md
Title: Azure AD app consent experiences
description: Learn more about the Azure AD consent experiences to see how you can use it when managing and developing applications on Azure AD
active-directory Application Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-model.md
- Title: Application model | Azure
+ Title: Application model
description: Learn about the process of registering your application so it can integrate with the Microsoft identity platform.
active-directory Authentication Flows App Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-flows-app-scenarios.md
- Title: Microsoft identity platform authentication flows & app scenarios | Azure
+ Title: Microsoft identity platform authentication flows & app scenarios
description: Learn about application scenarios for the Microsoft identity platform, including authenticating identities, acquiring tokens, and calling protected APIs.
active-directory Authentication National Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-national-cloud.md
- Title: Azure AD authentication & national clouds | Azure
+ Title: Azure AD authentication & national clouds
description: Learn about app registration and authentication endpoints for national clouds.
active-directory Authentication Vs Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-vs-authorization.md
- Title: Authentication vs. authorization | Azure
+ Title: Authentication vs. authorization
description: Learn about the basics of authentication and authorization in the Microsoft identity platform.
active-directory Authorization Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authorization-basics.md
- Title: Authorization basics | Azure
+ Title: Authorization basics
description: Learn about the basics of authorization in the Microsoft identity platform.
active-directory Claims Challenge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/claims-challenge.md
Title: Claims challenges, claims requests, and client capabilities
description: Explanation of claims challenges, claims requests, and client capabilities in the Microsoft identity platform.
active-directory Config Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/config-authority.md
- Title: Configure identity providers (MSAL iOS/macOS) | Azure
+ Title: Configure identity providers (MSAL iOS/macOS)
description: Learn how to use different authorities such as B2C, sovereign clouds, and guest users, with MSAL for iOS and macOS.
active-directory Configure Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-token-lifetimes.md
Title: Set lifetimes for tokens
description: Learn how to set lifetimes for tokens issued by Microsoft identity platform. Learn how to manage an organization's default policy, create a policy for web sign-in, create a policy for a native app that calls a web API, and manage an advanced policy.
active-directory Consent Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-framework.md
Title: Microsoft identity platform consent framework
description: Learn about the consent framework in the Microsoft identity platform and how it applies to multi-tenant applications.
active-directory Console App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/console-app-quickstart.md
Title: "Quickstart: Call Microsoft Graph from a console application | Azure"-
+ Title: "Quickstart: Call Microsoft Graph from a console application"
description: In this quickstart, you learn how a console application can get an access token and call an API protected by Microsoft identity platform, using the app's own identity
active-directory Customize Webviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/customize-webviews.md
- Title: Customize browsers & WebViews (MSAL iOS/macOS) | Azure
+ Title: Customize browsers & WebViews (MSAL iOS/macOS)
description: Learn how to customize the MSAL iOS/macOS browser experience to sign in users.
active-directory Delegated And App Perms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-and-app-perms.md
- Title: Differences between delegated and app permissions | Azure
+ Title: Differences between delegated and app permissions
description: Learn about delegated and application permissions, how they are used by clients and exposed by resources for applications you are developing with Azure AD
active-directory Desktop App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-app-quickstart.md
Title: "Quickstart: Sign in users and call Microsoft Graph in a desktop app | Azure"-
+ Title: "Quickstart: Sign in users and call Microsoft Graph in a desktop app"
description: In this quickstart, learn how a desktop application can get an access token and call an API protected by the Microsoft identity platform.
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-support-help-options.md
- Title: Support and help options for Microsoft identity platform developers | Azure
+ Title: Support and help options for Microsoft identity platform developers
description: Learn where to get help and find answers to your questions as you build identity and access management (IAM) solutions that integrate with Azure Active Directory (Azure AD) and other components of the Microsoft identity platform.
active-directory Howto Add App Roles In Azure Ad Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md
- Title: Add app roles and get them from a token | Azure
+ Title: Add app roles and get them from a token
description: Learn how to add app roles to an application registered in Azure Active Directory, assign users and groups to these roles, and receive them in the 'roles' claim in the token.
active-directory Howto Add Branding In Azure Ad Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-branding-in-azure-ad-apps.md
Title: Sign in with Microsoft branding guidelines | Azure AD
description: Learn about application branding guidelines for Microsoft identity platform.
active-directory Howto Add Terms Of Service Privacy Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-terms-of-service-privacy-statement.md
- Title: Terms of Service and privacy statement for apps | Azure
+ Title: Terms of Service and privacy statement for apps
description: Learn how you can configure the terms of service and privacy statement for apps registered to use Azure AD.
active-directory Howto Authenticate Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-authenticate-service-principal-powershell.md
- Title: Create an Azure app identity (PowerShell) | Azure
+ Title: Create an Azure app identity (PowerShell)
description: Describes how to use Azure PowerShell to create an Azure Active Directory application and service principal, and grant it access to resources through role-based access control. It shows how to authenticate application with a certificate.
active-directory Howto Build Services Resilient To Metadata Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-build-services-resilient-to-metadata-refresh.md
Title: "How to: Build services that are resilient to Azure AD's OpenID Connect metadata refresh | Azure"-
+ Title: "How to: Build services that are resilient to Azure AD's OpenID Connect metadata refresh"
description: Learn how to ensure that your web app or web api is resilient to Azure AD's OpenID Connect metadata refresh.
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md
- Title: Configure an app's publisher domain | Azure
+ Title: Configure an app's publisher domain
description: Learn how to configure an application's publisher domain to let users know where their information is being sent.
active-directory Howto Convert App To Be Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-convert-app-to-be-multi-tenant.md
Title: Build apps that sign in Azure AD users
description: Shows how to build a multi-tenant application that can sign in a user from any Azure Active Directory tenant.
active-directory Howto Create Self Signed Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md
- Title: Create a self-signed public certificate to authenticate your application | Azure
+ Title: Create a self-signed public certificate to authenticate your application
description: Create a self-signed public certificate to authenticate your application.
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-service-principal-portal.md
Title: Create an Azure AD app and service principal in the portal
description: Create a new Azure Active Directory app and service principal to manage access to resources with role-based access control in Azure Resource Manager.
active-directory Howto Get List Of All Active Directory Auth Library Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-get-list-of-all-active-directory-auth-library-apps.md
Title: "How to: Get a complete list of all apps using Active Directory Authentication Library (ADAL) in your tenant | Azure"-
+ Title: "How to: Get a complete list of all apps using Active Directory Authentication Library (ADAL) in your tenant"
description: In this how-to guide, you get a complete list of all apps that are using ADAL in your tenant.
active-directory Howto Handle Samesite Cookie Changes Chrome Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-handle-samesite-cookie-changes-chrome-browser.md
- Title: How to handle SameSite cookie changes in Chrome browser | Azure
+ Title: How to handle SameSite cookie changes in Chrome browser
description: Learn how to handle SameSite cookie changes in Chrome browser.
active-directory Howto Implement Rbac For Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-implement-rbac-for-apps.md
Title: Implement role-based access control in apps
description: Learn how to implement role-based access control in your applications.
active-directory Howto Modify Supported Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-modify-supported-accounts.md
Title: "How to: Change the account types supported by an application | Azure"-
+ Title: "How to: Change the account types supported by an application"
description: In this how-to, you configure an application registered with the Microsoft identity platform to change who, or what accounts, can access the application.
active-directory Howto Remove App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-remove-app.md
Title: "How to: Remove a registered app from the Microsoft identity platform | Azure"-
+ Title: "How to: Remove a registered app from the Microsoft identity platform"
description: In this how-to, you learn how to remove an application registered with the Microsoft identity platform.
active-directory Howto Restore App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restore-app.md
Title: "How to: Restore or remove a recently deleted application with the Microsoft identity platform | Azure"-
+ Title: "How to: Restore or remove a recently deleted application with the Microsoft identity platform"
description: In this how-to, you learn how to restore or permanently delete a recently deleted application registered with the Microsoft identity platform.
active-directory Howto Restrict Your App To A Set Of Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md
- Title: Restrict Azure AD app to a set of users | Azure
+ Title: Restrict Azure AD app to a set of users
description: Learn how to restrict access to your apps registered in Azure AD to a selected set of users.
active-directory Howto V2 Keychain Objc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-v2-keychain-objc.md
Title: Configure keychain
description: Learn how to configure keychain so that your app can cache tokens in the keychain.
active-directory Id Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/id-tokens.md
- Title: Microsoft identity platform ID tokens | Azure
+ Title: Microsoft identity platform ID tokens
description: Learn how to use id_tokens emitted by the Azure AD v1.0 and Microsoft identity platform (v2.0) endpoints.
active-directory Identity Platform Integration Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-platform-integration-checklist.md
- Title: Best practices for the Microsoft identity platform | Azure
+ Title: Best practices for the Microsoft identity platform
description: Learn about best practices, recommendations, and common oversights when integrating with the Microsoft identity platform.
active-directory Identity Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-videos.md
- Title: Microsoft identity platform videos | Azure
+ Title: Microsoft identity platform videos
description: A list of videos about modern authentication and the Microsoft identity platform
active-directory Mark App As Publisher Verified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md
- Title: Mark an app as publisher verified - Microsoft identity platform | Azure
+ Title: Mark an app as publisher verified
description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher has verified their identity using a Microsoft Partner Network account that has completed the verification process and has associated this MPN account with their application registration.
active-directory Microsoft Identity Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/microsoft-identity-web.md
Title: Microsoft Identity Web authentication library overview
description: Learn about Microsoft Identity Web, an authentication and authorization library for ASP.NET Core applications that integrate with Azure Active Directory, Azure AD B2C, and Microsoft Graph and other web APIs.
active-directory Migrate Adal Msal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-adal-msal-java.md
- Title: ADAL to MSAL migration guide (MSAL4j) | Azure
+ Title: ADAL to MSAL migration guide (MSAL4j)
description: Learn how to migrate your Azure Active Directory Authentication Library (ADAL) Java app to the Microsoft Authentication Library (MSAL).
active-directory Migrate Android Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-android-adal-msal.md
- Title: ADAL to MSAL migration guide for Android | Azure
+ Title: ADAL to MSAL migration guide for Android
description: Learn how to migrate your Azure Active Directory Authentication Library (ADAL) Android app to the Microsoft Authentication Library (MSAL).
active-directory Migrate Objc Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-objc-adal-msal.md
- Title: ADAL to MSAL migration guide (MSAL iOS/macOS) | Azure
+ Title: ADAL to MSAL migration guide (MSAL iOS/macOS)
description: Learn the differences between MSAL for iOS/macOS and the Azure AD Authentication Library for ObjectiveC (ADAL.ObjC) and how to migrate to MSAL for iOS/macOS.
active-directory Migrate Python Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-python-adal-msal.md
- Title: Python ADAL to MSAL migration guide | Azure
+ Title: Python ADAL to MSAL migration guide
description: Learn how to migrate your Azure Active Directory Authentication Library (ADAL) Python app to the Microsoft Authentication Library (MSAL) for Python.
active-directory Migrate Spa Implicit To Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-spa-implicit-to-auth-code.md
- Title: Migrate JavaScript single-page app from implicit grant to authorization code flow | Azure
+ Title: Migrate JavaScript single-page app from implicit grant to authorization code flow
description: How to update a JavaScript SPA using MSAL.js 1.x and the implicit grant flow to MSAL.js 2.x and the authorization code flow with PKCE and CORS support.
active-directory Mobile App Quickstart Portal Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md
Title: "Quickstart: Add sign in with Microsoft to an Android app | Azure"-
+ Title: "Quickstart: Add sign in with Microsoft to an Android app"
description: In this quickstart, learn how Android applications can call an API that requires access tokens issued by the Microsoft identity platform.
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
Title: "Quickstart: Add sign in with Microsoft to an iOS or macOS app | Azure"-
+ Title: "Quickstart: Add sign in with Microsoft to an iOS or macOS app"
description: In this quickstart, learn how an iOS or macOS app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
active-directory Mobile App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart.md
Title: "Quickstart: Add sign in with Microsoft to a mobile app | Azure"-
+ Title: "Quickstart: Add sign in with Microsoft to a mobile app"
description: In this quickstart, learn how a mobile app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
active-directory Mobile Sso Support Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-sso-support-overview.md
- Title: Support single sign-on and app protection policies in mobile apps you develop | Azure
+ Title: Support single sign-on and app protection policies in mobile apps you develop
description: Explanation and overview of building mobile applications that support single sign-on and app protection policies using the Microsoft identity platform and integrating with Azure Active Directory.
active-directory Msal Acquire Cache Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-acquire-cache-tokens.md
- Title: Acquire and cache tokens with Microsoft Authentication Library (MSAL) | Azure
+ Title: Acquire and cache tokens with Microsoft Authentication Library (MSAL)
description: Learn about acquiring and caching tokens using MSAL.
active-directory Msal Android B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-b2c.md
- Title: Azure AD B2C (MSAL Android) | Azure
+ Title: Azure AD B2C (MSAL Android)
description: Learn about specific considerations when using Azure AD B2C with the Microsoft Authentication Library for Android (MSAL.Android)
active-directory Msal Android Handling Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-handling-exceptions.md
- Title: Errors and exceptions (MSAL Android) | Azure
+ Title: Errors and exceptions (MSAL Android)
description: Learn how to handle errors and exceptions, Conditional Access, and claims challenges in MSAL Android applications.
active-directory Msal Android Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-shared-devices.md
Title: Shared device mode for Android devices
description: Learn how to enable shared device mode to allow frontline workers to share an Android device
active-directory Msal Android Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-single-sign-on.md
- Title: How to enable cross-app SSO on Android using MSAL | Azure
+ Title: How to enable cross-app SSO on Android using MSAL
description: How to use the Microsoft Authentication Library (MSAL) for Android to enable single sign-on across your applications.
active-directory Msal Authentication Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-authentication-flows.md
- Title: Authentication flow support in the Microsoft Authentication Library (MSAL) | Azure
+ Title: Authentication flow support in the Microsoft Authentication Library (MSAL)
description: Learn about the authorization grants and authentication flows supported by MSAL.
active-directory Msal B2c Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-b2c-overview.md
Title: Use MSAL.js with Azure AD B2C
description: The Microsoft Authentication Library for JavaScript (MSAL.js) enables applications to work with Azure AD B2C and acquire tokens to call secured web APIs. These web APIs can be Microsoft Graph, other Microsoft APIs, web APIs from others, or your own web API.
active-directory Msal Client Application Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-application-configuration.md
- Title: Client application configuration (MSAL) | Azure
+ Title: Client application configuration (MSAL)
description: Learn about configuration options for public client and confidential client applications using the Microsoft Authentication Library (MSAL).
active-directory Msal Client Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-applications.md
- Title: Public and confidential client apps (MSAL) | Azure
+ Title: Public and confidential client apps (MSAL)
description: Learn about public client and confidential client applications in the Microsoft Authentication Library (MSAL).
active-directory Msal Compare Msal Js And Adal Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-compare-msal-js-and-adal-js.md
Title: "Migrate your JavaScript application from ADAL.js to MSAL.js | Azure"-
+ Title: "Migrate your JavaScript application from ADAL.js to MSAL.js"
description: How to update your existing JavaScript application to use the Microsoft Authentication Library (MSAL) for authentication and authorization instead of the Active Directory Authentication Library (ADAL).
active-directory Msal Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-configuration.md
- Title: Android MSAL configuration file | Azure
+ Title: Android MSAL configuration file
description: An overview of the Android Microsoft Authentication Library (MSAL) configuration file, which represents an application's configuration in Azure Active Directory.
active-directory Msal Differences Ios Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-differences-ios-macos.md
- Title: MSAL for iOS & macOS differences | Azure
+ Title: MSAL for iOS & macOS differences
description: Describes the Microsoft Authentication Library (MSAL) usage differences between iOS and macOS.
active-directory Msal Error Handling Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-dotnet.md
Title: Handle errors and exceptions in MSAL.NET
description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL.NET.
active-directory Msal Error Handling Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-ios.md
Title: Handle errors and exceptions in MSAL for iOS/macOS
description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL for iOS/macOS applications.
active-directory Msal Error Handling Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-java.md
Title: Handle errors and exceptions in MSAL4J
description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL4J applications.
active-directory Msal Error Handling Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-js.md
Title: Handle errors and exceptions in MSAL.js
description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL.js applications.
active-directory Msal Error Handling Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-python.md
Title: Handle errors and exceptions in MSAL for Python
description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL for Python applications.
active-directory Msal Ios Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-ios-shared-devices.md
Title: Shared device mode for iOS devices
description: Learn how to enable shared device mode to allow frontline workers to share an iOS device
active-directory Msal Java Adfs Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-java-adfs-support.md
Title: AD FS support (MSAL for Java)
description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for Java (MSAL4j).
active-directory Msal Java Get Remove Accounts Token Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-java-get-remove-accounts-token-cache.md
- Title: Get & remove accounts from the token cache (MSAL4j) | Azure
+ Title: Get & remove accounts from the token cache (MSAL4j)
description: Learn how to view and remove accounts from the token cache using the Microsoft Authentication Library for Java.
active-directory Msal Java Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-java-token-cache-serialization.md
Title: Custom token cache serialization (MSAL4j)
description: Learn how to serialize the token cache for MSAL for Java
active-directory Msal Js Avoid Page Reloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-avoid-page-reloads.md
- Title: Avoid page reloads (MSAL.js) | Azure
+ Title: Avoid page reloads (MSAL.js)
description: Learn how to avoid page reloads when acquiring and renewing tokens silently using the Microsoft Authentication Library for JavaScript (MSAL.js).
active-directory Msal Js Initializing Client Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-initializing-client-applications.md
- Title: Initialize MSAL.js client apps | Azure
+ Title: Initialize MSAL.js client apps
description: Learn about initializing client applications using the Microsoft Authentication Library for JavaScript (MSAL.js).
active-directory Msal Js Known Issues Ie Edge Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-known-issues-ie-edge-browsers.md
- Title: Issues on Internet Explorer & Microsoft Edge (MSAL.js) | Azure
+ Title: Issues on Internet Explorer & Microsoft Edge (MSAL.js)
description: Learn about known issues when using the Microsoft Authentication Library for JavaScript (MSAL.js) with Internet Explorer and Microsoft Edge browsers.
active-directory Msal Js Pass Custom State Authentication Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-pass-custom-state-authentication-request.md
- Title: Pass custom state in authentication requests (MSAL.js) | Azure
+ Title: Pass custom state in authentication requests (MSAL.js)
description: Learn how to pass a custom state parameter value in authentication request using the Microsoft Authentication Library for JavaScript (MSAL.js).
active-directory Msal Js Prompt Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-prompt-behavior.md
- Title: Interactive request prompt behavior (MSAL.js) | Azure
+ Title: Interactive request prompt behavior (MSAL.js)
description: Learn to customize prompt behavior in interactive calls using the Microsoft Authentication Library for JavaScript (MSAL.js).
active-directory Msal Js Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-sso.md
- Title: Single sign-on (MSAL.js) | Azure
+ Title: Single sign-on (MSAL.js)
description: Learn about building single sign-on experiences using the Microsoft Authentication Library for JavaScript (MSAL.js).
active-directory Msal Js Use Ie Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-use-ie-browser.md
- Title: Issues on Internet Explorer (MSAL.js) | Azure
+ Title: Issues on Internet Explorer (MSAL.js)
description: Use the Microsoft Authentication Library for JavaScript (MSAL.js) with Internet Explorer browser.
active-directory Msal Logging Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-android.md
Title: Logging errors and exceptions in MSAL for Android
description: Learn how to log errors and exceptions in MSAL for Android.
active-directory Msal Logging Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-dotnet.md
Title: Logging errors and exceptions in MSAL.NET
description: Learn how to log errors and exceptions in MSAL.NET
active-directory Msal Logging Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-ios.md
Title: Logging errors and exceptions in MSAL for iOS/macOS
description: Learn how to log errors and exceptions in MSAL for iOS/macOS
active-directory Msal Logging Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-java.md
Title: Logging errors and exceptions in MSAL for Java
description: Learn how to log errors and exceptions in MSAL for Java
active-directory Msal Logging Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-js.md
Title: Logging errors and exceptions in MSAL.js
description: Learn how to log errors and exceptions in MSAL.js
active-directory Msal Logging Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-python.md
Title: Logging errors and exceptions in MSAL for Python
description: Learn how to log errors and exceptions in MSAL for Python
active-directory Msal Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md
Title: Migrate to the Microsoft Authentication Library (MSAL)
description: Learn about the differences between the Microsoft Authentication Library (MSAL) and Azure AD Authentication Library (ADAL) and how to migrate to MSAL.
active-directory Msal National Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-national-cloud.md
- Title: Use MSAL in a national cloud app | Azure
+ Title: Use MSAL in a national cloud app
description: The Microsoft Authentication Library (MSAL) enables application developers to acquire tokens in order to call secured web APIs. These web APIs can be Microsoft Graph, other Microsoft APIs, partner web APIs, or your own web API. MSAL supports multiple application architectures and platforms.
active-directory Msal Net Aad B2c Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-aad-b2c-considerations.md
Title: Azure AD B2C and MSAL.NET
description: Considerations when using Azure AD B2C with the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Acquire Token Silently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-acquire-token-silently.md
Title: Acquire a token from the cache (MSAL.NET)
description: Learn how to acquire an access token silently (from the token cache) using the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Adfs Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-adfs-support.md
- Title: AD FS support in MSAL.NET | Azure
+ Title: AD FS support in MSAL.NET
description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Clear Token Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-clear-token-cache.md
- Title: Clear the token cache (MSAL.NET) | Azure
+ Title: Clear the token cache (MSAL.NET)
description: Learn how to clear the token cache using the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Client Assertions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-client-assertions.md
- Title: Client assertions (MSAL.NET) | Azure
+ Title: Client assertions (MSAL.NET)
description: Learn about signed client assertions support for confidential client applications in the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Differences Adal Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-differences-adal-net.md
- Title: Differences between ADAL.NET and MSAL.NET apps | Azure
+ Title: Differences between ADAL.NET and MSAL.NET apps
description: Learn about the differences between the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET).
active-directory Msal Net Initializing Client Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-initializing-client-applications.md
- Title: Initialize MSAL.NET client applications | Azure
+ Title: Initialize MSAL.NET client applications
description: Learn about initializing public client and confidential client applications using the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Instantiate Confidential Client Config Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-instantiate-confidential-client-config-options.md
- Title: Instantiate a confidential client app (MSAL.NET) | Azure
+ Title: Instantiate a confidential client app (MSAL.NET)
description: Learn how to instantiate a confidential client application with configuration options using the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Instantiate Public Client Config Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-instantiate-public-client-config-options.md
- Title: Instantiate a public client app (MSAL.NET) | Azure
+ Title: Instantiate a public client app (MSAL.NET)
description: Learn how to instantiate a public client application with configuration options using the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Migration Android Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-android-broker.md
Title: Migrate Xamarin Android apps using brokers to MSAL.NET
description: Learn how to migrate Xamarin Android apps that use the Microsoft Authenticator or Intune Company Portal from ADAL.NET to MSAL.NET.
active-directory Msal Net Migration Confidential Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-confidential-client.md
Title: Migrate confidential client applications to MSAL.NET
description: Learn how to migrate a confidential client application from Azure Active Directory Authentication Library for .NET to Microsoft Authentication Library for .NET.
active-directory Msal Net Migration Ios Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-ios-broker.md
Title: Migrate Xamarin apps using brokers to MSAL.NET
description: Learn how to migrate Xamarin iOS apps that use Microsoft Authenticator from ADAL.NET to MSAL.NET.
active-directory Msal Net Migration Public Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-public-client.md
Title: Migrate public client applications to MSAL.NET
description: Learn how to migrate a public client application from Azure Active Directory Authentication Library for .NET to Microsoft Authentication Library for .NET.
active-directory Msal Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration.md
Title: Migrating to MSAL.NET and Microsoft.Identity.Web
description: Learn why and how to migrate from Azure AD Authentication Library for .NET (ADAL.NET) to Microsoft Authentication Library for .NET (MSAL.NET) or Microsoft.Identity.Web
active-directory Msal Net Provide Httpclient https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-provide-httpclient.md
- Title: Provide an HttpClient & proxy (MSAL.NET) | Azure
+ Title: Provide an HttpClient & proxy (MSAL.NET)
description: Learn about providing your own HttpClient and proxy to connect to Azure AD using the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net System Browser Android Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-system-browser-android-considerations.md
- Title: Xamarin Android system browser considerations (MSAL.NET) | Azure
+ Title: Xamarin Android system browser considerations (MSAL.NET)
description: Learn about considerations for using system browsers on Xamarin Android with the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
- Title: Token cache serialization (MSAL.NET) | Azure
+ Title: Token cache serialization (MSAL.NET)
description: Learn about serialization and custom serialization of the token cache using the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Use Brokers With Xamarin Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-use-brokers-with-xamarin-apps.md
- Title: Use brokers with Xamarin iOS & Android | Azure
+ Title: Use brokers with Xamarin iOS & Android
description: Learn how to set up Xamarin iOS applications that can use the Microsoft Authenticator and the Microsoft Authentication Library for .NET (MSAL.NET). Also learn how to migrate from Azure AD Authentication Library for .NET (ADAL.NET) to the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net User Gets Consent For Multiple Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-user-gets-consent-for-multiple-resources.md
- Title: Get consent for several resources (MSAL.NET) | Azure
+ Title: Get consent for several resources (MSAL.NET)
description: Learn how a user can get pre-consent for several resources using the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Uwp Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-uwp-considerations.md
- Title: UWP considerations (MSAL.NET) | Azure
+ Title: UWP considerations (MSAL.NET)
description: Learn about considerations for using Universal Windows Platform (UWP) with the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Web Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-web-browsers.md
Title: Using web browsers (MSAL.NET) | Azure-
+ Title: Using web browsers (MSAL.NET)
description: Learn about specific considerations when using web browsers with the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Xamarin Android Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-xamarin-android-considerations.md
Title: Xamarin Android code configuration and troubleshooting (MSAL.NET) | Azure-
+ Title: Xamarin Android code configuration and troubleshooting (MSAL.NET)
description: Learn about considerations for using Xamarin Android with the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Xamarin Ios Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-xamarin-ios-considerations.md
Title: Xamarin iOS considerations (MSAL.NET) | Azure-
+ Title: Xamarin iOS considerations (MSAL.NET)
description: Learn about considerations for using Xamarin iOS with the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Node Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-node-extensions.md
Title: "Learn about Microsoft Authentication Extensions for Node | Azure"-
+ Title: "Learn about Microsoft Authentication Extensions for Node"
description: The Microsoft Authentication Extensions for Node enable application developers to perform cross-platform token cache serialization and persistence. They complement the Microsoft Authentication Library for Node (MSAL Node).
active-directory Msal Node Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-node-migration.md
Title: "Migrate your Node.js application from ADAL to MSAL | Azure"-
+ Title: "Migrate your Node.js application from ADAL to MSAL"
description: How to update your existing Node.js application to use the Microsoft Authentication Library (MSAL) for authentication and authorization instead of the Active Directory Authentication Library (ADAL).
active-directory Msal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-overview.md
Title: Learn about MSAL | Azure-
+ Title: Learn about MSAL
description: The Microsoft Authentication Library (MSAL) enables application developers to acquire tokens in order to call secured web APIs. These web APIs can be the Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports multiple application architectures and platforms.
active-directory Msal Python Adfs Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-python-adfs-support.md
Title: Azure AD FS support (MSAL Python)- description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for Python
active-directory Msal Python Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-python-token-cache-serialization.md
Title: Custom token cache serialization (MSAL Python) | Azure-
+ Title: Custom token cache serialization (MSAL Python)
description: Learn how to serialize the token cache in MSAL for Python
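The pattern the article describes boils down to wiring MSAL's `SerializableTokenCache` to your own storage. A minimal file-based sketch (the cache path and client ID are hypothetical, and a real app should store the file somewhere protected):

```python
import atexit
import os

import msal

CACHE_FILE = "msal_token_cache.bin"  # hypothetical location

cache = msal.SerializableTokenCache()
if os.path.exists(CACHE_FILE):
    with open(CACHE_FILE, "r") as f:
        cache.deserialize(f.read())

def _persist():
    # Only write the file back if MSAL actually changed the cache.
    if cache.has_state_changed:
        with open(CACHE_FILE, "w") as f:
            f.write(cache.serialize())

atexit.register(_persist)

# Tokens this app acquires now survive process restarts.
app = msal.PublicClientApplication(
    "your-client-id",  # hypothetical app registration
    token_cache=cache,
)
```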
active-directory Msal Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-shared-devices.md
Title: Shared device mode overview- description: Learn about shared device mode to enable device sharing for your frontline workers.
active-directory Msal V1 App Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-v1-app-scopes.md
Title: Scopes for v1.0 apps (MSAL) | Azure
+ Title: Scopes for v1.0 apps (MSAL)
description: Learn about the scopes for a v1.0 application using the Microsoft Authentication Library (MSAL).
active-directory Multi Service Web App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-user.md
Title: Tutorial - Web app accesses Microsoft Graph as the user | Azure
+ Title: Tutorial - Web app accesses Microsoft Graph as the user
description: In this tutorial, you learn how to access data in Microsoft Graph from a web app for a signed-in user.
active-directory Multi Service Web App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-storage.md
Title: Tutorial - Web app accesses storage by using managed identities | Azure
+ Title: Tutorial - Web app accesses storage by using managed identities
description: In this tutorial, you learn how to access Azure Storage for an app by using managed identities.
active-directory Multi Service Web App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-authentication-app-service.md
Title: Tutorial - Add authentication to a web app on Azure App Service | Azure
+ Title: Tutorial - Add authentication to a web app on Azure App Service
description: In this tutorial, you learn how to enable authentication and authorization for a web app running on Azure App Service. Limit access to the web app to users in your organization.
active-directory Multi Service Web App Clean Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-clean-up-resources.md
Title: Tutorial - Clean up resources | Azure
+ Title: Tutorial - Clean up resources
description: In this tutorial, you learn how to clean up the Azure resources allocated while creating the web app.
active-directory Multi Service Web App Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-overview.md
Title: Tutorial - Build a secure web app on Azure App Service | Azure
+ Title: Tutorial - Build a secure web app on Azure App Service
description: In this tutorial, you learn how to build a web app by using Azure App Service, sign in users to the web app, call Azure Storage, and call Microsoft Graph.
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Title: Publisher verification overview - Microsoft identity platform | Azure
+ Title: Publisher verification overview
description: Provides an overview of the publisher verification program for the Microsoft identity platform. Lists the benefits, program requirements, and frequently asked questions. When an application is marked as publisher verified, it means that the publisher has verified their identity using a Microsoft Partner Network account that has completed the verification process and has associated this MPN account with their application registration.
active-directory Quickstart Configure App Access Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md
Title: "Quickstart: Configure an app to access a web API | Azure"-
+ Title: "Quickstart: Configure an app to access a web API"
description: In this quickstart, you configure an app registration representing a web API in the Microsoft identity platform to enable scoped resource access (permissions) to client applications.
active-directory Quickstart Configure App Expose Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-expose-web-apis.md
Title: "Quickstart: Register and expose a web API | Azure"-
+ Title: "Quickstart: Register and expose a web API"
description: In this quickstart, you register a web API with the Microsoft identity platform and configure its scopes, exposing it to clients for permissions-based access to the API's resources.
active-directory Quickstart Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-create-new-tenant.md
Title: "Quickstart: Create an Azure Active Directory tenant"- description: In this quickstart, you learn how to create an Azure Active Directory tenant for use in developing applications that use the Microsoft identity platform for authentication and authorization.
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-register-app.md
Title: "Quickstart: Register an app in the Microsoft identity platform | Azure"
+ Title: "Quickstart: Register an app in the Microsoft identity platform"
description: In this quickstart, you learn how to register an application with the Microsoft identity platform.
active-directory Quickstart V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-android.md
Title: "Quickstart: Add sign in with Microsoft to an Android app | Azure"-
+ Title: "Quickstart: Add sign in with Microsoft to an Android app"
description: In this quickstart, learn how Android applications can call an API that requires access tokens issued by the Microsoft identity platform.
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
Title: "Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform | Azure"-
+ Title: "Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform"
description: In this quickstart, you download and modify a code sample that demonstrates how to protect an ASP.NET Core web API by using the Microsoft identity platform for authorization.
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
Title: "Quickstart: ASP.NET Core web app that signs in users and calls Microsoft Graph | Azure"-
+ Title: "Quickstart: ASP.NET Core web app that signs in users and calls Microsoft Graph"
description: In this quickstart, you learn how an app uses Microsoft.Identity.Web to implement Microsoft sign-in in an ASP.NET Core web app using OpenID Connect and calls Microsoft Graph.
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
Title: "Quickstart: Add sign-in with Microsoft Identity to an ASP.NET Core web app | Azure"-
+ Title: "Quickstart: Add sign-in with Microsoft Identity to an ASP.NET Core web app"
description: In this quickstart, you learn how an app implements Microsoft sign-in on an ASP.NET Core web app by using OpenID Connect
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
Title: "Quickstart: ASP.NET web app that signs in users"- description: Download and run a code sample that shows how an ASP.NET web app can sign in Azure AD users.
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
Title: "Quickstart: Call an ASP.NET web API that is protected by the Microsoft identity platform | Azure"-
+ Title: "Quickstart: Call an ASP.NET web API that is protected by the Microsoft identity platform"
description: In this quickstart, learn how to call an ASP.NET web API that's protected by the Microsoft identity platform from a Windows Desktop (WPF) application.
active-directory Quickstart V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-ios.md
Title: "Quickstart: Add sign in with Microsoft to an iOS or macOS app | Azure"-
+ Title: "Quickstart: Add sign in with Microsoft to an iOS or macOS app"
description: In this quickstart, learn how an iOS or macOS app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
active-directory Quickstart V2 Java Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-daemon.md
Title: "Quickstart: Call Microsoft Graph from a Java daemon | Azure"-
+ Title: "Quickstart: Call Microsoft Graph from a Java daemon"
description: In this quickstart, you learn how a Java app can get an access token and call an API protected by the Microsoft identity platform, using the app's own identity
active-directory Quickstart V2 Java Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-webapp.md
Title: "Quickstart: Add sign-in with Microsoft to a Java web app | Azure"-
+ Title: "Quickstart: Add sign-in with Microsoft to a Java web app"
description: In this quickstart, you'll learn how to add sign-in with Microsoft to a Java web application by using OpenID Connect.
active-directory Quickstart V2 Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code-angular.md
Title: "Quickstart: Sign in users in JavaScript Angular single-page apps (SPA) with auth code and call Microsoft Graph | Azure"-
+ Title: "Quickstart: Sign in users in JavaScript Angular single-page apps (SPA) with auth code and call Microsoft Graph"
description: In this quickstart, learn how a JavaScript Angular single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow and call Microsoft Graph.
active-directory Quickstart V2 Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md
Title: "Quickstart: Sign in users in JavaScript React single-page apps (SPA) with auth code and call Microsoft Graph | Azure"-
+ Title: "Quickstart: Sign in users in JavaScript React single-page apps (SPA) with auth code and call Microsoft Graph"
description: In this quickstart, learn how a JavaScript React single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow and call Microsoft Graph.
active-directory Quickstart V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
Title: "Quickstart: Sign in users in JavaScript single-page apps (SPA) with auth code | Azure"-
+ Title: "Quickstart: Sign in users in JavaScript single-page apps (SPA) with auth code"
description: In this quickstart, learn how a JavaScript single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow.
active-directory Quickstart V2 Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript.md
Title: "Quickstart: Sign in users in JavaScript single-page apps | Azure"-
+ Title: "Quickstart: Sign in users in JavaScript single-page apps"
description: In this quickstart, you learn how a JavaScript app can call an API that requires access tokens issued by the Microsoft identity platform.
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
Title: "Quickstart: Get token & call Microsoft Graph in a console app | Azure"-
+ Title: "Quickstart: Get token & call Microsoft Graph in a console app"
description: In this quickstart, you learn how a .NET Core sample app can use the client credentials flow to get a token and call Microsoft Graph.
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-console.md
Title: "Quickstart: Call Microsoft Graph from a Node.js console app | Azure"-
+ Title: "Quickstart: Call Microsoft Graph from a Node.js console app"
description: In this quickstart, you download and run a code sample that shows how a Node.js console application can get an access token and call an API protected by a Microsoft identity platform endpoint, using the app's own identity
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
Title: "Quickstart: Call Microsoft Graph from a Node.js desktop app | Azure"-
+ Title: "Quickstart: Call Microsoft Graph from a Node.js desktop app"
description: In this quickstart, you learn how a Node.js Electron desktop application can sign in users and get an access token to call an API protected by a Microsoft identity platform endpoint
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
Title: "Quickstart: Add authentication to a Node.js web app with MSAL Node | Azure"-
+ Title: "Quickstart: Add authentication to a Node.js web app with MSAL Node"
description: In this quickstart, you learn how to implement authentication with a Node.js web app and the Microsoft Authentication Library (MSAL) for Node.js.
active-directory Quickstart V2 Nodejs Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-webapp.md
Title: "Quickstart: Add user sign-in to a Node.js web app | Azure"-
+ Title: "Quickstart: Add user sign-in to a Node.js web app"
description: In this quickstart, you learn how to implement authentication in a Node.js web application using OpenID Connect.
active-directory Quickstart V2 Python Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-daemon.md
Title: "Quickstart: Call Microsoft Graph from a Python daemon | Azure"-
+ Title: "Quickstart: Call Microsoft Graph from a Python daemon"
description: In this quickstart, you learn how a Python process can get an access token and call an API protected by the Microsoft identity platform, using the app's own identity
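The core of such a daemon is the client credentials grant. A minimal sketch with MSAL for Python (the client ID, tenant, and secret are hypothetical placeholders; a real daemon loads them from configuration or a secret store):

```python
import msal

app = msal.ConfidentialClientApplication(
    "your-client-id",
    authority="https://login.microsoftonline.com/your-tenant-id",
    client_credential="your-client-secret",
)

scopes = ["https://graph.microsoft.com/.default"]

# Check the token cache first; fall back to the network if needed.
result = app.acquire_token_silent(scopes, account=None)
if not result:
    result = app.acquire_token_for_client(scopes=scopes)

if "access_token" in result:
    print("Got a token for Microsoft Graph")
else:
    print(result.get("error"), result.get("error_description"))
```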
active-directory Quickstart V2 Python Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-webapp.md
Title: "Quickstart: Add sign-in with Microsoft to a Python web app | Azure"-
+ Title: "Quickstart: Add sign-in with Microsoft to a Python web app"
description: In this quickstart, learn how a Python web app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-uwp.md
Title: "Quickstart: Sign in users and call Microsoft Graph in a Universal Windows Platform app | Azure"-
+ Title: "Quickstart: Sign in users and call Microsoft Graph in a Universal Windows Platform app"
description: In this quickstart, learn how a Universal Windows Platform (UWP) application can get an access token and call an API protected by the Microsoft identity platform.
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-windows-desktop.md
Title: "Quickstart: Sign in users and call Microsoft Graph in a Windows desktop app | Azure"
+ Title: "Quickstart: Sign in users and call Microsoft Graph in a Windows desktop app"
description: In this quickstart, learn how a Windows Presentation Foundation (WPF) application can get an access token and call an API protected by the Microsoft identity platform.
active-directory Redirect Uris Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/redirect-uris-ios.md
Title: Use redirect URIs with MSAL (iOS/macOS) | Azure-
+ Title: Use redirect URIs with MSAL (iOS/macOS)
description: Learn about the differences between the Microsoft Authentication Library for ObjectiveC (MSAL for iOS and macOS) and Azure AD Authentication Library for ObjectiveC (ADAL.ObjC) and how to migrate between them.
active-directory Reference App Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-manifest.md
Title: Understanding the Azure Active Directory app manifest- description: Detailed coverage of the Azure Active Directory app manifest, which represents an application's identity configuration in an Azure AD tenant, and is used to facilitate OAuth authorization, consent experience, and more.
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Title: Claims mapping policy- description: Learn about the claims mapping policy type, which is used to modify the claims emitted in tokens issued for specific applications.
active-directory Reference Saml Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-saml-tokens.md
Title: SAML 2.0 token claims reference | Azure-
+ Title: SAML 2.0 token claims reference
description: Claims reference with details on the claims included in SAML 2.0 tokens issued by the Microsoft identity platform, including their JWT equivalents.
active-directory Reference Third Party Cookies Spas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-third-party-cookies-spas.md
Title: How to handle Intelligent Tracking Protection (ITP) in Safari | Azure-
+ Title: How to handle Intelligent Tracking Protection (ITP) in Safari
description: Single-page app (SPA) authentication when third-party cookies are no longer allowed.
active-directory Reference V2 Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-v2-libraries.md
Title: Microsoft identity platform authentication libraries | Azure
+ Title: Microsoft identity platform authentication libraries
description: List of client libraries and middleware compatible with the Microsoft identity platform. Use these libraries to add support for user sign-in (authentication) and protected web API access (authorization) to your applications.
active-directory Refresh Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/refresh-tokens.md
Title: Microsoft identity platform refresh tokens | Azure-
+ Title: Microsoft identity platform refresh tokens
description: Learn about refresh tokens emitted by Azure AD.
active-directory Registration Config How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-how-to.md
Title: Get the endpoints for an Azure AD app registration- description: How to find the authentication endpoints for a custom application you're developing or registering with Azure AD.
active-directory Reply Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reply-url.md
Title: Redirect URI (reply URL) restrictions | Azure AD- description: A description of the restrictions and limitations on redirect URI (reply URL) format enforced by the Microsoft identity platform.
active-directory Request Custom Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/request-custom-claims.md
Title: Request custom claims (MSAL iOS/macOS) | Azure- description: Learn how to request custom claims.
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-configuration.md
Title: Configure daemon apps that call web APIs - Microsoft identity platform | Azure
+ Title: Configure daemon apps that call web APIs
description: Learn how to configure the code for your daemon application that calls web APIs (app configuration)
active-directory Scenario Daemon App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-registration.md
Title: Register daemon apps that call web APIs - Microsoft identity platform | Azure
+ Title: Register daemon apps that call web APIs
description: Learn how to build a daemon app that calls web APIs - app registration
active-directory Scenario Daemon Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-call-api.md
Title: Call a web API from a daemon app | Azure
+ Title: Call a web API from a daemon app
description: Learn how to build a daemon app that calls a web API.
active-directory Scenario Daemon Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-overview.md
Title: Build a daemon app that calls web APIs | Azure-
+ Title: Build a daemon app that calls web APIs
description: Learn how to build a daemon app that calls web APIs
active-directory Scenario Daemon Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-production.md
Title: Move a daemon app that calls web APIs to production | Azure
+ Title: Move a daemon app that calls web APIs to production
description: Learn how to move a daemon app that calls web APIs to production
active-directory Scenario Desktop Acquire Token Device Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md
Title: Acquire a token to call a web API using device code flow (desktop app) | Azure-
+ Title: Acquire a token to call a web API using device code flow (desktop app)
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using device code flow
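The article covers MSAL.NET; the device code flow itself works the same across MSAL libraries. A minimal sketch with MSAL for Python (the client ID and scope are hypothetical):

```python
import msal

app = msal.PublicClientApplication(
    "your-client-id",  # hypothetical public client registration
    authority="https://login.microsoftonline.com/your-tenant-id",
)

flow = app.initiate_device_flow(scopes=["User.Read"])
if "user_code" not in flow:
    raise ValueError(f"Failed to start device flow: {flow}")

# Tells the user where to browse and which code to enter.
print(flow["message"])

# Blocks, polling the token endpoint until the user finishes signing in
# on another device or the code expires.
result = app.acquire_token_by_device_flow(flow)
print("access_token" in result)
```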
active-directory Scenario Desktop Acquire Token Integrated Windows Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-integrated-windows-authentication.md
Title: Acquire a token to call a web API using integrated Windows authentication (desktop app) | Azure-
+ Title: Acquire a token to call a web API using integrated Windows authentication (desktop app)
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using integrated Windows authentication
active-directory Scenario Desktop Acquire Token Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-interactive.md
Title: Acquire a token to call a web API interactively (desktop app) | Azure-
+ Title: Acquire a token to call a web API interactively (desktop app)
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app interactively
active-directory Scenario Desktop Acquire Token Username Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-username-password.md
Title: Acquire a token to call a web API using username and password (desktop app) | Azure-
+ Title: Acquire a token to call a web API using username and password (desktop app)
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using username and password.
active-directory Scenario Desktop Acquire Token Wam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md
Title: Acquire a token to call a web API using web account manager (desktop app) | Azure-
+ Title: Acquire a token to call a web API using web account manager (desktop app)
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using web account manager
active-directory Scenario Desktop Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token.md
Title: Acquire a token to call a web API (desktop app) | Azure-
+ Title: Acquire a token to call a web API (desktop app)
description: Learn how to build a desktop app that calls web APIs to acquire a token for the app
active-directory Scenario Desktop App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-app-configuration.md
Title: Configure desktop apps that call web APIs | Azure
+ Title: Configure desktop apps that call web APIs
description: Learn how to configure the code of a desktop app that calls web APIs
active-directory Scenario Desktop App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-app-registration.md
Title: Register desktop apps that call web APIs | Azure
+ Title: Register desktop apps that call web APIs
description: Learn how to build a desktop app that calls web APIs (app registration)
active-directory Scenario Desktop Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-call-api.md
Title: Call web APIs from a desktop app | Azure
+ Title: Call web APIs from a desktop app
description: Learn how to build a desktop app that calls web APIs
active-directory Scenario Desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-overview.md
Title: Build a desktop app that calls web APIs | Azure-
+ Title: Build a desktop app that calls web APIs
description: Learn how to build a desktop app that calls web APIs (overview)
active-directory Scenario Desktop Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-production.md
Title: Move desktop app calling web APIs to production | Azure
+ Title: Move desktop app calling web APIs to production
description: Learn how to move a desktop app that calls web APIs to production
active-directory Scenario Mobile Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-acquire-token.md
Title: Acquire a token to call a web API (mobile apps) | Azure-
+ Title: Acquire a token to call a web API (mobile apps)
description: Learn how to build a mobile app that calls web APIs. (Get a token for the app.)
active-directory Scenario Mobile App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-app-configuration.md
Title: Configure mobile apps that call web APIs | Azure-
+ Title: Configure mobile apps that call web APIs
description: Learn how to configure your mobile app's code to call a web API
active-directory Scenario Mobile App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-app-registration.md
Title: Register mobile apps that call web APIs | Azure-
+ Title: Register mobile apps that call web APIs
description: Learn how to build a mobile app that calls web APIs (app's registration)
active-directory Scenario Mobile Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-call-api.md
Title: Call a web API from a mobile app | Azure-
+ Title: Call a web API from a mobile app
description: Learn how to build a mobile app that calls web APIs. (Call a web API.)
active-directory Scenario Mobile Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-overview.md
Title: Build a mobile app that calls web APIs | Azure-
+ Title: Build a mobile app that calls web APIs
description: Learn how to build a mobile app that calls web APIs (overview)
active-directory Scenario Mobile Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-mobile-production.md
Title: Prepare mobile app-calling web APIs for production | Azure-
+ Title: Prepare mobile app-calling web APIs for production
description: Learn how to build a mobile app that calls web APIs. (Prepare apps for production.)
active-directory Scenario Protected Web Api App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-app-registration.md
Title: Protected web API app registration | Azure-
+ Title: Protected web API app registration
description: Learn how to build a protected web API and the information you need to register the app.
active-directory Scenario Protected Web Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-overview.md
Title: Protected web API - overview- description: Learn how to build a protected web API (overview).
active-directory Scenario Protected Web Api Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-production.md
Title: Move a protected web API to production | Azure-
+ Title: Move a protected web API to production
description: Learn how to build a protected web API (move to production).
active-directory Scenario Spa Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-acquire-token.md
Title: Acquire a token to call a web API (single-page apps) | Azure-
+ Title: Acquire a token to call a web API (single-page apps)
description: Learn how to build a single-page application (acquire a token to call an API)
active-directory Scenario Spa App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-app-configuration.md
Title: Configure single-page app | Azure-
+ Title: Configure single-page app
description: Learn how to build a single-page application (app's code configuration)
active-directory Scenario Spa App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-app-registration.md
Title: Register single-page applications (SPA) | Azure-
+ Title: Register single-page applications (SPA)
description: Learn how to build a single-page application (app registration)
active-directory Scenario Spa Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-call-api.md
Title: Build single-page app calling a web API- description: Learn how to build a single-page application that calls a web API
active-directory Scenario Spa Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-overview.md
Title: JavaScript single-page app scenario- description: Learn how to build a single-page application (scenario overview) by using the Microsoft identity platform.
active-directory Scenario Spa Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-production.md
Title: Move single-page app to production- description: Learn how to build a single-page application (move to production)
active-directory Scenario Spa Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-sign-in.md
Title: Single-page app sign-in & sign-out- description: Learn how to build a single-page application (sign-in)
active-directory Scenario Web Api Call Api Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-acquire-token.md
Title: Get a token for a web API that calls web APIs | Azure-
+ Title: Get a token for a web API that calls web APIs
description: Learn how to build a web API that calls web APIs that require acquiring a token for the app.
active-directory Scenario Web Api Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md
Title: Configure a web API that calls web APIs | Azure-
+ Title: Configure a web API that calls web APIs
description: Learn how to build a web API that calls web APIs (app's code configuration)
active-directory Scenario Web Api Call Api App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-registration.md
Title: Register a web API that calls web APIs | Azure-
+ Title: Register a web API that calls web APIs
description: Learn how to build a web API that calls downstream web APIs (app registration).
active-directory Scenario Web Api Call Api Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-call-api.md
Title: Web API that calls web APIs | Azure-
+ Title: Web API that calls web APIs
description: Learn how to build a web API that calls web APIs.
active-directory Scenario Web Api Call Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-overview.md
Title: Build a web API that calls web APIs | Azure-
+ Title: Build a web API that calls web APIs
description: Learn how to build a web API that calls downstream web APIs (overview).
active-directory Scenario Web Api Call Api Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-production.md
Title: Move web API calling web APIs to production | Azure-
+ Title: Move web API calling web APIs to production
description: Learn how to move a web API that calls web APIs to production.
active-directory Scenario Web App Call Api Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md
Title: Get a token in a web app that calls web APIs | Azure-
+ Title: Get a token in a web app that calls web APIs
description: Learn how to acquire a token for a web app that calls web APIs
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
Title: Configure a web app that calls web APIs | Azure-
+ Title: Configure a web app that calls web APIs
description: Learn how to configure the code of a web app that calls web APIs
active-directory Scenario Web App Call Api App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-registration.md
Title: Register a web app that calls web APIs | Azure-
+ Title: Register a web app that calls web APIs
description: Learn how to register a web app that calls web APIs
active-directory Scenario Web App Call Api Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-call-api.md
Title: Call a web API from a web app | Azure-
+ Title: Call a web API from a web app
description: Learn how to build a web app that calls web APIs (calling a protected web API)
active-directory Scenario Web App Call Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-overview.md
Title: Build a web app that authenticates users and calls web APIs | Azure-
+ Title: Build a web app that authenticates users and calls web APIs
description: Learn how to build a web app that authenticates users and calls web APIs (overview)
active-directory Scenario Web App Call Api Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-production.md
Title: Move to production a web app that calls web APIs | Azure-
+ Title: Move to production a web app that calls web APIs
description: Learn how to move to production a web app that calls web APIs.
active-directory Scenario Web App Call Api Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-sign-in.md
Title: Remove accounts from the token cache on sign-out | Azure-
+ Title: Remove accounts from the token cache on sign-out
description: Learn how to remove an account from the token cache on sign-out
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
Title: Configure a web app that signs in users | Azure-
+ Title: Configure a web app that signs in users
description: Learn how to build a web app that signs in users (code configuration)
active-directory Scenario Web App Sign User App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-registration.md
Title: Register a web app that signs in users | Azure-
+ Title: Register a web app that signs in users
description: Learn how to register a web app that signs in users
active-directory Scenario Web App Sign User Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-overview.md
Title: Sign in users from a Web app | Azure-
+ Title: Sign in users from a Web app
description: Learn how to build a web app that signs in users (overview)
active-directory Scenario Web App Sign User Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-production.md
Title: Move web app that signs in users to production | Azure-
+ Title: Move web app that signs in users to production
description: Learn how to build a web app that signs in users (move to production)
active-directory Scenario Web App Sign User Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-sign-in.md
Title: Write a web app that signs in/out users | Azure-
+ Title: Write a web app that signs in/out users
description: Learn how to build a web app that signs in/out users
active-directory Secure Group Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/secure-group-access-control.md
Title: Secure access control using groups in Azure AD - Microsoft identity platform
+ Title: Secure access control using groups in Azure AD
description: Learn how groups are used to securely control access to resources in Azure AD.
active-directory Secure Least Privileged Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/secure-least-privileged-access.md
Title: "Increase app security with the principle of least privilege"- description: Learn how the principle of least privilege can help increase the security of your application, its data, and which features of the Microsoft identity platform you can use to implement least privileged access.
active-directory Security Best Practices For App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-best-practices-for-app-registration.md
Title: Best practices for Azure AD application registration configuration - Microsoft identity platform
+ Title: Best practices for Azure AD application registration configuration
description: Learn about a set of best practices and general guidance on Azure AD application registration configuration.
active-directory Security Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-tokens.md
Title: Security tokens | Azure-
+ Title: Security tokens
description: Learn about the basics of security tokens in the Microsoft identity platform.
active-directory Single And Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-and-multi-tenant-apps.md
Title: Single and multi-tenant apps in Azure AD- description: Learn about the features and differences between single-tenant and multi-tenant apps in Azure AD.
active-directory Single Multi Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-multi-account.md
Title: Single and multiple account public client apps | Azure
+ Title: Single and multiple account public client apps
description: An overview of single and multiple account public client apps.
active-directory Single Page App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-page-app-quickstart.md
Title: "Quickstart: Sign in users in single-page apps (SPA) with auth code | Azure"-
+ Title: "Quickstart: Sign in users in single-page apps (SPA) with auth code"
description: In this quickstart, learn how a JavaScript single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow.
active-directory Single Sign On Macos Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-macos-ios.md
Title: Configure SSO on macOS and iOS - description: Learn how to configure single sign on (SSO) on macOS and iOS.
active-directory Single Sign On Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-saml-protocol.md
Title: Azure Single Sign On SAML Protocol- description: This article describes the Single Sign-On (SSO) SAML protocol in Azure Active Directory.
active-directory Ssl Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/ssl-issues.md
Title: Troubleshoot TLS/SSL issues (MSAL iOS/macOS) | Azure-
+ Title: Troubleshoot TLS/SSL issues (MSAL iOS/macOS)
description: Learn what to do about various problems using TLS/SSL certificates with the MSAL.Objective-C library.
active-directory Sso Between Adal Msal Apps Macos Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sso-between-adal-msal-apps-macos-ios.md
Title: SSO between ADAL & MSAL apps (iOS/macOS) | Azure-
+ Title: SSO between ADAL & MSAL apps (iOS/macOS)
description: Learn how to share SSO between ADAL and MSAL apps
active-directory Support Fido2 Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/support-fido2-authentication.md
Title: Support passwordless authentication with FIDO2 keys in apps you develop | Azure-
+ Title: Support passwordless authentication with FIDO2 keys in apps you develop
description: This deployment guide explains how to support passwordless authentication with FIDO2 security keys in the applications you develop
active-directory Supported Accounts Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md
Title: Validation differences by supported account types | Azure-
+ Title: Validation differences by supported account types
description: Learn about the validation differences of various properties for different supported account types when registering your app with the Microsoft identity platform.
active-directory Test Automate Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-automate-integration-testing.md
Title: Run automated integration tests- description: Learn how to run automated integration tests as a user against APIs protected by the Microsoft identity platform. Use the Resource Owner Password Credential Grant (ROPC) auth flow to sign in as a user instead of automating the interactive sign-in prompt UI.
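In practice the test signs in with ROPC and then calls the API under test with the resulting token. A minimal sketch with MSAL for Python (all identifiers and credentials below are hypothetical and should live in a secret store, never in source):

```python
import msal

app = msal.PublicClientApplication(
    "your-client-id",
    authority="https://login.microsoftonline.com/your-tenant-id",
)

# ROPC: exchange a test account's username and password directly for a token.
result = app.acquire_token_by_username_password(
    username="test-user@contoso.example",
    password="test-password",
    scopes=["api://your-api-client-id/.default"],
)

assert "access_token" in result, result.get("error_description")
# Use result["access_token"] as a Bearer token when calling the protected API.
```

Note that ROPC only works for test accounts without MFA or Conditional Access requirements, which is why it is confined to automated testing.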
active-directory Test Setup Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-setup-environment.md
Title: Set up a test environment for your app- description: Learn how to set up an Azure Active Directory test environment so you can test your application integrated with Microsoft identity platform. Evaluate whether you need a separate tenant for testing or if you can use your production tenant.
active-directory Test Throttle Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-throttle-service-limits.md
Title: Test environments, throttling, and service limits- description: Learn about the throttling and service limits to consider while deploying an Azure Active Directory test environment and testing an app integrated with the Microsoft identity platform.
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
Title: Troubleshoot publisher verification | Azure-
+ Title: Troubleshoot publisher verification
description: Describes how to troubleshoot publisher verification for the Microsoft identity platform by calling Microsoft Graph APIs.
active-directory Tutorial Blazor Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-server.md
Title: Tutorial - Create a Blazor Server app that uses the Microsoft identity platform for authentication | Azure-
+ Title: Tutorial - Create a Blazor Server app that uses the Microsoft identity platform for authentication
description: In this tutorial, you set up authentication using the Microsoft identity platform in a Blazor Server app.
active-directory Tutorial Blazor Webassembly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-webassembly.md
Title: Tutorial - Sign in users and call a protected API from a Blazor WebAssembly app - description: In this tutorial, sign in users and call a protected API using the Microsoft identity platform in a Blazor WebAssembly (WASM) app.
active-directory Tutorial V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-android.md
Title: "Tutorial: Create an Android app that uses the Microsoft identity platform for authentication | Azure"-
+ Title: "Tutorial: Create an Android app that uses the Microsoft identity platform for authentication"
description: In this tutorial, you build an Android app that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
active-directory Tutorial V2 Angular Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-angular-auth-code.md
Title: "Tutorial: Create an Angular app that uses the Microsoft identity platform for authentication using auth code flow | Azure"-
+ Title: "Tutorial: Create an Angular app that uses the Microsoft identity platform for authentication using auth code flow"
description: In this tutorial, you build an Angular single-page app (SPA) using auth code flow that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
active-directory Tutorial V2 Asp Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-asp-webapp.md
Title: "Tutorial: Create an ASP.NET web app that uses the Microsoft identity platform for authentication | Azure"-
+ Title: "Tutorial: Create an ASP.NET web app that uses the Microsoft identity platform for authentication"
description: In this tutorial, you build an ASP.NET web application that uses the Microsoft identity platform and OWIN middleware to enable user login.
active-directory Tutorial V2 Aspnet Daemon Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-aspnet-daemon-web-app.md
Title: "Tutorial: Build a multi-tenant daemon that accesses Microsoft Graph business data | Azure"-
+ Title: "Tutorial: Build a multi-tenant daemon that accesses Microsoft Graph business data"
description: In this tutorial, you learn how to build and run a multi-tenant daemon web app that uses the Microsoft identity platform to access Microsoft Graph business data.
active-directory Tutorial V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-auth-code.md
Title: "Tutorial: Create a JavaScript single-page app that uses auth code flow | Azure"-
+ Title: "Tutorial: Create a JavaScript single-page app that uses auth code flow"
description: In this tutorial, you create a JavaScript SPA that can sign in users and use the auth code flow to obtain an access token from the Microsoft identity platform and call the Microsoft Graph API.
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
Title: "Tutorial: Create a JavaScript single-page app that uses the Microsoft identity platform for authentication | Azure"-
+ Title: "Tutorial: Create a JavaScript single-page app that uses the Microsoft identity platform for authentication"
description: In this tutorial, you build a JavaScript single-page app (SPA) that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
active-directory Tutorial V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-console.md
Title: "Tutorial: Call Microsoft Graph in a Node.js console app | Azure"-
+ Title: "Tutorial: Call Microsoft Graph in a Node.js console app"
description: In this tutorial, you build a Node.js console app that calls Microsoft Graph.
active-directory Tutorial V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-desktop.md
Title: "Tutorial: Sign in users and call the Microsoft Graph API in an Electron desktop app | Azure"-
+ Title: "Tutorial: Sign in users and call the Microsoft Graph API in an Electron desktop app"
description: In this tutorial, you build an Electron desktop app that can sign in users and use the auth code flow to obtain an access token from the Microsoft identity platform and call the Microsoft Graph API.
active-directory Tutorial V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-webapp-msal.md
Title: "Tutorial: Sign in users in a Node.js & Express web app | Azure"-
+ Title: "Tutorial: Sign in users in a Node.js & Express web app"
description: In this tutorial, you add support for signing in users in a web app.
active-directory Tutorial V2 React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-react.md
Title: "Tutorial: Create a React single-page app that uses auth code flow | Azure"-
+ Title: "Tutorial: Create a React single-page app that uses auth code flow"
description: In this tutorial, you create a React SPA that can sign in users and use the auth code flow to obtain an access token from the Microsoft identity platform and call the Microsoft Graph API.
active-directory Tutorial V2 Shared Device Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-shared-device-mode.md
Title: "Tutorial: Use shared-device mode with the Microsoft Authentication Library (MSAL) for Android | Azure"-
+ Title: "Tutorial: Use shared-device mode with the Microsoft Authentication Library (MSAL) for Android"
description: In this tutorial, you learn how to prepare an Android device to run in shared mode and run a first-line worker app.
active-directory Tutorial V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-desktop.md
Title: "Tutorial: Create a Windows Presentation Foundation (WPF) app that uses the Microsoft identity platform for authentication | Azure"-
+ Title: "Tutorial: Create a Windows Presentation Foundation (WPF) app that uses the Microsoft identity platform for authentication"
description: In this tutorial, you build a WPF application that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
active-directory Tutorial V2 Windows Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-uwp.md
Title: "Tutorial: Create a Universal Windows Platform (UWP) app that uses the Microsoft identity platform for authentication | Azure"-
+ Title: "Tutorial: Create a Universal Windows Platform (UWP) app that uses the Microsoft identity platform for authentication"
description: In this tutorial, you build a UWP application that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
active-directory Userinfo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/userinfo.md
Title: Microsoft identity platform UserInfo endpoint | Azure-
+ Title: Microsoft identity platform UserInfo endpoint
description: Learn about the UserInfo endpoint on the Microsoft identity platform.
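For quick orientation on this entry, here's a minimal sketch of calling the UserInfo endpoint, which the Microsoft identity platform hosts on Microsoft Graph. The access token below is a placeholder (an assumption, not a value from the linked article); you'd obtain one through any of the sign-in flows covered elsewhere in this list.

```python
import requests

# Minimal UserInfo call; ACCESS_TOKEN is a placeholder you must supply
# from a completed sign-in flow that included the openid scope.
ACCESS_TOKEN = "<access-token-with-openid-scope>"

resp = requests.get(
    "https://graph.microsoft.com/oidc/userinfo",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
print(resp.json())  # claims such as sub and name, depending on granted scopes
```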
active-directory V2 App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md
Title: Application types for the Microsoft identity platform | Azure
+ Title: Application types for the Microsoft identity platform
description: The types of apps and scenarios supported by the Microsoft identity platform.
active-directory V2 Conditional Access Dev Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-conditional-access-dev-guide.md
Title: Developer guidance for Azure Active Directory Conditional Access- description: Developer guidance and scenarios for Azure AD Conditional Access and Microsoft identity platform. keywords:
active-directory V2 Oauth Ropc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth-ropc.md
Title: Sign in with resource owner password credentials grant | Azure-
+ Title: Sign in with resource owner password credentials grant
description: Support browser-less authentication flows using the resource owner password credential (ROPC) grant.
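As a sketch of what this entry describes, the ROPC grant is a single POST to the v2.0 token endpoint. All IDs and credentials below are placeholders; note that ROPC is generally discouraged and fails for accounts that require MFA.

```python
import requests

# ROPC token request sketch; tenant ID, client ID, and credentials are placeholders.
TENANT = "<tenant-id>"
token = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
    data={
        "client_id": "<client-id>",
        "scope": "openid profile User.Read",
        "username": "user@contoso.com",
        "password": "<password>",
        "grant_type": "password",
    },
).json()
print(token.get("access_token") or token)  # error payload if the grant is blocked
```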
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Title: Microsoft identity platform and OAuth 2.0 authorization code flow | Azure-
+ Title: Microsoft identity platform and OAuth 2.0 authorization code flow
description: Build web applications using the Microsoft identity platform implementation of the OAuth 2.0 authentication protocol.
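To illustrate the flow this entry covers, a minimal two-leg sketch: build the authorization URL the browser visits, then redeem the returned code server-side. Tenant ID, client ID, secret, and redirect URI are placeholders, and error handling is elided.

```python
import urllib.parse
import requests

TENANT, CLIENT_ID = "<tenant-id>", "<client-id>"   # placeholders
REDIRECT_URI = "http://localhost:5000/callback"    # assumed registered redirect URI

# Leg 1: send the browser to the authorization endpoint.
authorize_url = (
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/authorize?"
    + urllib.parse.urlencode({
        "client_id": CLIENT_ID,
        "response_type": "code",
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile User.Read",
        "state": "12345",
    })
)

# Leg 2: redeem the returned code at the token endpoint (confidential client).
def redeem(code: str, client_secret: str) -> dict:
    return requests.post(
        f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": client_secret,
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
        },
    ).json()
```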
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
Title: OAuth 2.0 client credentials flow on the Microsoft identity platform | Azure
+ Title: OAuth 2.0 client credentials flow on the Microsoft identity platform
description: Build web applications by using the Microsoft identity platform implementation of the OAuth 2.0 authentication protocol.
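For this entry, a minimal sketch of an app-only token request with the client credentials grant; the IDs and secret are placeholders, and `/.default` requests whatever application permissions the app has been granted.

```python
import requests

# App-only (no user) token request sketch; IDs and secret are placeholders.
TENANT = "<tenant-id>"
token = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
    data={
        "client_id": "<client-id>",
        "client_secret": "<client-secret>",
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    },
).json()
print(token.get("access_token", token))
```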
active-directory V2 Oauth2 Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-device-code.md
Title: OAuth 2.0 device code flow | Azure-
+ Title: OAuth 2.0 device code flow
description: Sign in users without a browser. Build embedded and browser-less authentication flows using the device authorization grant.
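Since this entry covers the device authorization grant, here's a minimal sketch of the two-step flow against the public v2.0 endpoints; the tenant and client IDs are placeholders, and error handling is mostly elided.

```python
import time
import requests

TENANT, CLIENT_ID = "<tenant-id>", "<client-id>"  # placeholders
base = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0"

# Step 1: request a device code and show the user where to enter it.
dc = requests.post(f"{base}/devicecode",
                   data={"client_id": CLIENT_ID, "scope": "User.Read"}).json()
print(dc["message"])  # tells the user to visit the verification URI and enter the code

# Step 2: poll the token endpoint until the user completes sign-in.
while True:
    time.sleep(dc.get("interval", 5))
    tok = requests.post(f"{base}/token", data={
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "client_id": CLIENT_ID,
        "device_code": dc["device_code"],
    }).json()
    if "access_token" in tok or tok.get("error") != "authorization_pending":
        break
```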
active-directory V2 Oauth2 Implicit Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md
Title: OAuth 2.0 implicit grant flow - The Microsoft identity platform | Azure
+ Title: OAuth 2.0 implicit grant flow - The Microsoft identity platform
description: Secure single-page apps using Microsoft identity platform implicit flow.
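As a sketch of the flow this entry covers: implicit grant returns tokens in the URL fragment of a redirect, so the only code involved is building the authorize URL (placeholders below). New SPAs should generally prefer the authorization code flow with PKCE instead.

```python
import urllib.parse
import uuid

TENANT, CLIENT_ID = "<tenant-id>", "<client-id>"  # placeholders
implicit_url = (
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/authorize?"
    + urllib.parse.urlencode({
        "client_id": CLIENT_ID,
        "response_type": "id_token token",   # tokens come back in the URL fragment
        "redirect_uri": "http://localhost:3000",
        "scope": "openid profile User.Read",
        "response_mode": "fragment",
        "nonce": uuid.uuid4().hex,           # required when requesting an id_token
        "state": uuid.uuid4().hex,
    })
)
print(implicit_url)
```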
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
Title: Microsoft identity platform and OAuth2.0 On-Behalf-Of flow | Azure-
+ Title: Microsoft identity platform and OAuth2.0 On-Behalf-Of flow
description: This article describes how to use HTTP messages to implement service to service authentication using the OAuth2.0 On-Behalf-Of flow.
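To make the HTTP shape of this entry concrete, a minimal sketch of the On-Behalf-Of exchange: a web API trades the user token it received for a downstream token. The API's client ID and secret are placeholders.

```python
import requests

# OBO exchange sketch: the API trades the caller's token for a downstream token.
TENANT = "<tenant-id>"

def exchange_on_behalf_of(incoming_user_token: str) -> dict:
    return requests.post(
        f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
            "client_id": "<api-client-id>",          # placeholder
            "client_secret": "<api-client-secret>",  # placeholder
            "assertion": incoming_user_token,        # the token your API received
            "scope": "https://graph.microsoft.com/user.read",
            "requested_token_use": "on_behalf_of",
        },
    ).json()
```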
active-directory V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-overview.md
Title: Microsoft identity platform overview - Azure- description: Learn about the components of the Microsoft identity platform and how they can help you build identity and access management (IAM) support into your applications.
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
Title: Microsoft identity platform and OpenID Connect protocol | Azure-
+ Title: Microsoft identity platform and OpenID Connect protocol
description: Build web applications by using the Microsoft identity platform implementation of the OpenID Connect authentication protocol.
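A quick companion sketch for this entry: OpenID Connect clients typically start from the discovery document, which publishes the endpoints and signing keys. Using `common` here assumes a multi-tenant scenario.

```python
import requests

# Fetch the OIDC discovery document; "common" works for multi-tenant apps.
TENANT = "common"
meta = requests.get(
    f"https://login.microsoftonline.com/{TENANT}/v2.0/.well-known/openid-configuration"
).json()
print(meta["authorization_endpoint"])
print(meta["token_endpoint"])
print(meta["jwks_uri"])  # keys used to validate id_token signatures
```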
active-directory V2 Saml Bearer Assertion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-saml-bearer-assertion.md
Title: Exchange a SAML token issued by Active Directory Federation Services (AD FS) for a Microsoft Graph access token- description: Learn how to fetch data from Microsoft Graph without prompting an AD FS-federated user for credentials by using the SAML bearer assertion flow.
active-directory V2 Supported Account Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-supported-account-types.md
Title: Supported account types | Azure
+ Title: Supported account types
description: Conceptual documentation about audiences and supported account types in applications
active-directory Web Api Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart.md
Title: "Quickstart: Protect a web API with the Microsoft identity platform | Azure"-
+ Title: "Quickstart: Protect a web API with the Microsoft identity platform"
description: In this quickstart, you download and modify a code sample that demonstrates how to protect a web API by using the Microsoft identity platform for authorization.
active-directory Web App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart.md
Title: "Quickstart: Sign in users in web apps using the auth code flow"- description: In this quickstart, learn how a web app can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Title: "What's new in the Microsoft identity platform docs"- description: "New and updated documentation for the Microsoft identity platform."
active-directory Workload Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identities-overview.md
Title: Workload identities - description: Understand the concepts and supported scenarios for using workload identity in Azure Active Directory.
active-directory Workload Identity Federation Create Trust Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-gcp.md
Title: Access Azure resources from Google Cloud without credentials- description: Access Azure AD protected resources from a service running in Google Cloud without using secrets or certificates. Use workload identity federation to set up a trust relationship between an app in Azure AD and an identity in Google Cloud. The workload running in Google Cloud can get an access token from Microsoft identity platform and access Azure AD protected resources.
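To sketch the exchange this entry describes: once the federated credential trust is configured on the Azure AD app, the workload presents the Google-issued ID token as a client assertion instead of a secret. The tenant and client IDs are placeholders, and obtaining `gcp_id_token` (for example, from the GCP metadata server) is assumed.

```python
import requests

# Federated-credential exchange sketch; assumes the Azure AD app already
# trusts the Google Cloud identity and gcp_id_token was obtained separately.
TENANT = "<tenant-id>"

def exchange_federated_token(gcp_id_token: str) -> dict:
    return requests.post(
        f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
        data={
            "client_id": "<client-id>",
            "grant_type": "client_credentials",
            "scope": "https://graph.microsoft.com/.default",
            "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
            "client_assertion": gcp_id_token,  # external token instead of a secret
        },
    ).json()
```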
active-directory Workload Identity Federation Create Trust Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-github.md
Title: Create a trust relationship between an app and GitHub- description: Set up a trust relationship between an app in Azure AD and a GitHub repo. This allows a GitHub Actions workflow to access Azure AD protected resources without using secrets or certificates.
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
Title: Create a trust relationship between an app and an external identity provider- description: Set up a trust relationship between an app in Azure AD and an external identity provider. This allows a software workload outside of Azure to access Azure AD protected resources without using secrets or certificates.
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md
Title: Workload identity federation - description: Use workload identity federation to grant workloads running outside of Azure access to Azure AD protected resources without using secrets or certificates. This eliminates the need for developers to store and maintain long-lived secrets or certificates outside of Azure.
active-directory Zero Trust For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/zero-trust-for-developers.md
Title: "Increase app security by following Zero Trust principles"- description: Learn how following the Zero Trust principles can help increase the security of your application, its data, and which features of the Microsoft identity platform you can use to build Zero Trust-ready apps.
active-directory Azureadjoin Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/azureadjoin-plan.md
# How to: Plan your Azure AD join implementation
-Azure AD join allows you to join devices directly to Azure AD without the need to join to on-premises Active Directory while keeping your users productive and secure. Azure AD join is enterprise-ready for both at-scale and scoped deployments. SSO access to on-premises resources is also available to devices that are Azure AD joined. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](azuread-join-sso.md).
+You can join devices directly to Azure Active Directory (Azure AD) without the need to join to on-premises Active Directory while keeping your users productive and secure. Azure AD join is enterprise-ready for both at-scale and scoped deployments. Single sign-on (SSO) access to on-premises resources is also available to devices that are Azure AD joined. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](azuread-join-sso.md).
This article provides you with the information you need to plan your Azure AD join implementation.
To plan your Azure AD join implementation, you should familiarize yourself with:
## Review your scenarios
-While hybrid Azure AD join may be preferred for certain scenarios, Azure AD join enables you to transition towards a cloud-first model with Windows. If you're planning to modernize your devices management and reduce device-related IT costs, Azure AD join provides a great foundation towards achieving those goals.
+Azure AD join enables you to transition towards a cloud-first model with Windows. If you're planning to modernize your devices management and reduce device-related IT costs, Azure AD join provides a great foundation towards achieving those goals.
Consider Azure AD join if your goals align with the following criteria:
Consider Azure AD join if your goals align with the following criteria:
## Review your identity infrastructure
-Azure AD join works in managed and federated environments. We think most organizations will deploy hybrid Azure AD join with managed domains. Managed domain scenarios don't require configuring a federation server.
+Azure AD join works in managed and federated environments. We think most organizations will deploy with managed domains. Managed domain scenarios don't require configuring and managing a federation server like Active Directory Federation Services (AD FS).
### Managed environment
If your identity provider doesn't support these protocols, Azure AD join doesn't
> [!NOTE] > Currently, Azure AD join does not work with [AD FS 2019 configured with external authentication providers as the primary authentication method](/windows-server/identity/ad-fs/operations/additional-authentication-methods-ad-fs#enable-external-authentication-methods-as-primary). Azure AD join defaults to password authentication as the primary method, which results in authentication failures in this scenario.
-### Smartcards and certificate-based authentication
-
-You can't use smartcards or certificate-based authentication to join devices to Azure AD. However, smartcards can be used to sign in to Azure AD joined devices if you have AD FS configured.
-
-**Recommendation:** Implement Windows Hello for Business for strong, password-less authentication to Windows 10 or newer.
- ### User configuration If you create users in your:
If you create users in your:
- **On-premises Active Directory**, you need to synchronize them to Azure AD using [Azure AD Connect](../hybrid/how-to-connect-sync-whatis.md). - **Azure AD**, no extra setup is required.
-On-premises UPNs that are different from Azure AD UPNs aren't supported on Azure AD joined devices. If your users use an on-premises UPN, you should plan to switch to using their primary UPN in Azure AD.
+On-premises user principal names (UPNs) that are different from Azure AD UPNs aren't supported on Azure AD joined devices. If your users use an on-premises UPN, you should plan to switch to using their primary UPN in Azure AD.
UPN changes are only supported starting Windows 10 2004 update. Users on devices with this update won't have any issues after changing their UPNs. For devices before the Windows 10 2004 update, users would have SSO and Conditional Access issues on their devices. They need to sign in to Windows through the "Other user" tile using their new UPN to resolve this issue.
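When planning the UPN switch described above, it can help to find users whose on-premises UPN differs from their Azure AD UPN. A hypothetical Microsoft Graph sketch follows; the token placeholder assumes an app token with the User.Read.All permission.

```python
import requests

# Hypothetical helper: list users whose on-premises UPN differs from their
# Azure AD UPN. GRAPH_TOKEN is a placeholder for an app token with User.Read.All.
GRAPH_TOKEN = "<graph-access-token>"
url = ("https://graph.microsoft.com/v1.0/users"
       "?$select=displayName,userPrincipalName,onPremisesUserPrincipalName")
while url:
    page = requests.get(url, headers={"Authorization": f"Bearer {GRAPH_TOKEN}"}).json()
    for u in page.get("value", []):
        onprem = u.get("onPremisesUserPrincipalName")
        if onprem and onprem.lower() != u["userPrincipalName"].lower():
            print(u["displayName"], onprem, "->", u["userPrincipalName"])
    url = page.get("@odata.nextLink")  # follow paging until exhausted
```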
Azure AD join:
### Management platform
-Device management for Azure AD joined devices is based on an MDM platform such as Intune, and MDM CSPs. Starting in Windows 10 there is a built-in MDM agent that works with all compatible MDM solutions.
+Device management for Azure AD joined devices is based on a mobile device management (MDM) platform, such as Intune, and MDM CSPs. Starting in Windows 10, there's a built-in MDM agent that works with all compatible MDM solutions.
> [!NOTE] > Group policies are not supported in Azure AD joined devices as they are not connected to on-premises Active Directory. Management of Azure AD joined devices is only possible through MDM
Device management for Azure AD joined devices is based on an MDM platform such a
There are two approaches for managing Azure AD joined devices: - **MDM-only** - A device is exclusively managed by an MDM provider like Intune. All policies are delivered as part of the MDM enrollment process. For Azure AD Premium or EMS customers, MDM enrollment is an automated step that is part of an Azure AD join.-- **Co-management** - A device is managed by an MDM provider and SCCM. In this approach, the SCCM agent is installed on an MDM-managed device to administer certain aspects.
+- **Co-management** - A device is managed by an MDM provider and Microsoft Endpoint Configuration Manager. In this approach, the Microsoft Endpoint Configuration Manager agent is installed on an MDM-managed device to administer certain aspects.
If you're using Group Policies, evaluate your GPO and MDM policy parity by using [Group Policy analytics](/mem/intune/configuration/group-policy-analytics) in Microsoft Endpoint Manager.
Review supported and unsupported policies to determine whether you can use an MD
If your MDM solution isn't available through the Azure AD app gallery, you can add it following the process outlined in [Azure Active Directory integration with MDM](/windows/client-management/mdm/azure-active-directory-integration-with-mdm).
-Through co-management, you can use SCCM to manage certain aspects of your devices while policies are delivered through your MDM platform. Microsoft Intune enables co-management with SCCM. For more information on co-management for Windows 10 or newer devices, see [What is co-management?](/configmgr/core/clients/manage/co-management-overview). If you use an MDM product other than Intune, check with your MDM provider on applicable co-management scenarios.
+Through co-management, you can use Microsoft Endpoint Configuration Manager to manage certain aspects of your devices while policies are delivered through your MDM platform. Microsoft Intune enables co-management with Microsoft Endpoint Configuration Manager. For more information on co-management for Windows 10 or newer devices, see [What is co-management?](/configmgr/core/clients/manage/co-management-overview). If you use an MDM product other than Intune, check with your MDM provider on applicable co-management scenarios.
**Recommendation:** Consider MDM-only management for Azure AD joined devices.
Before you can configure your mobility settings, you may have to add an MDM prov
**To add an MDM provider**:
-1. On the **Azure Active Directory page**, in the **Manage** section, click `Mobility (MDM and MAM)`.
-1. Click **Add application**.
+1. On the **Azure Active Directory page**, in the **Manage** section, select `Mobility (MDM and MAM)`.
+1. Select **Add application**.
1. Select your MDM provider from the list. :::image type="content" source="./media/azureadjoin-plan/04.png" alt-text="Screenshot of the Azure Active Directory Add an application page. Several M D M providers are listed." border="false":::
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
You must be assigned one of the following roles to view or manage device setting
- **Additional local administrators on Azure AD joined devices**: This setting allows you to select the users who are granted local administrator rights on a device. These users are added to the Device Administrators role in Azure AD. Global Administrators in Azure AD and device owners are granted local administrator rights by default. This option is a premium edition capability available through products like Azure AD Premium and Enterprise Mobility + Security. - **Users may register their devices with Azure AD**: You need to configure this setting to allow users to register Windows 10 or newer personal, iOS, Android, and macOS devices with Azure AD. If you select **None**, devices aren't allowed to register with Azure AD. Enrollment with Microsoft Intune or mobile device management for Microsoft 365 requires registration. If you've configured either of these services, **ALL** is selected and **NONE** is unavailable.-- **Require Multi-Factor Authentication to register or join devices with Azure AD**: This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Azure AD. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md).
+- **Require Multi-Factor Authentication to register or join devices with Azure AD**: This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Azure AD. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). This setting may not work with third-party identity providers.
> [!NOTE] > The **Require Multi-Factor Authentication to register or join devices with Azure AD** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enabling-azure-ad-login-for-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
If you've configured a Conditional Access policy that requires multi-factor auth
- Your credentials did not work.
-![Your credentials did not work](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
- > [!WARNING]
-> Per-user Enabled/Enforced Azure AD Multi-Factor Authentication is not supported for VM Sign-In. This setting causes Sign-in to fail with "Your credentials do not work." error message.
+> Legacy per-user Enabled/Enforced Azure AD Multi-Factor Authentication is not supported for VM Sign-In. This setting causes sign-in to fail with a "Your credentials do not work." error message.
+
+![Your credentials did not work](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
-You can resolve the above issue by removing the per user MFA setting, by following these steps:
+You can resolve the above issue by removing the per-user MFA setting, following these steps:
```
# A sketch of the documented MSOnline PowerShell approach for clearing the
# per-user MFA setting; verify the cmdlets and UPN against your environment.
# Get the StrongAuthenticationRequirements configured on the user
(Get-MsolUser -UserPrincipalName username@contoso.com).StrongAuthenticationRequirements
# Clear the StrongAuthenticationRequirements from the user
$mfa = @()
Set-MsolUser -UserPrincipalName username@contoso.com -StrongAuthenticationRequirements $mfa
# Verify the StrongAuthenticationRequirements are removed
(Get-MsolUser -UserPrincipalName username@contoso.com).StrongAuthenticationRequirements
```
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md
Previously updated : 02/15/2022 Last updated : 06/01/2022
It isn't advisable to immediately delete a device that appears to be stale becau
### MDM-controlled devices
-If your device is under control of Intune or any other MDM solution, retire the device in the management system before disabling or deleting it.
+If your device is under the control of Intune or any other MDM solution, retire the device in the management system before disabling or deleting it. For more information, see [Remove devices by using wipe, retire, or manually unenrolling the device](/mem/intune/remote-actions/devices-wipe).
### System-managed devices
Any authentication where a device is being used to authenticate to Azure AD are
## Next steps
+Devices managed with Intune can be retired or wiped. For more information, see [Remove devices by using wipe, retire, or manually unenrolling the device](/mem/intune/remote-actions/devices-wipe).
+ To get an overview of how to manage devices in the Azure portal, see [managing devices using the Azure portal](device-management-azure-portal.md)
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
There may be times you want to block external users except a specific group. For
After confirming your settings using [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+### External partner access
+
+Conditional Access policies that target external users may interfere with service provider access, for example [granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction).
+ ## Implement Conditional Access Many common Conditional Access policies are documented. See the article [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md) for other common policies you may want to adapt for external users.
active-directory Overview Identity Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/overview-identity-protection.md
Previously updated : 06/15/2021 Last updated : 05/31/2022
The signals generated by and fed to Identity Protection, can be further fed into
## Why is automation important?
-In his [blog post in October of 2018](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Eight-essentials-for-hybrid-identity-3-Securing-your-identity/ba-p/275843) Alex Weinert, who leads Microsoft's Identity Security and Protection team, explains why automation is so important when dealing with the volume of events:
+In the blog post *[Cyber Signals: Defending against cyber threats with the latest research, insights, and trends](https://www.microsoft.com/security/blog/2022/02/03/cyber-signals-defending-against-cyber-threats-with-the-latest-research-insights-and-trends/)* dated February 3, 2022, we shared a threat intelligence brief that includes the following statistics:
-> Each day, our machine learning and heuristic systems provide risk scores for 18 billion login attempts for over 800 million distinct accounts, 300 million of which are discernibly done by adversaries (entities like: criminal actors, hackers).
->
-> At Ignite last year, I spoke about the top 3 attacks on our identity systems. Here is the recent volume of these attacks
->
-> - **Breach replay**: 4.6BN attacks detected in May 2018
-> - **Password spray**: 350k in April 2018
-> - **Phishing**: This is hard to quantify exactly, but we saw 23M risk events in March 2018, many of which are phish related
+> * Analyzed ...24 trillion security signals combined with intelligence we track by monitoring more than 40 nation-state groups and over 140 threat groups...
+> * ...From January 2021 through December 2021, we've blocked more than 25.6 billion Azure AD brute force authentication attacks...
+This scale of signals and attacks requires some level of automation just to keep up.
## Risk detection and remediation Identity Protection identifies risks of many types, including:
Identity Protection identifies risks of many types, including:
- Password spray - and more...
-More detail on these and other risks including how or when they are calculated can be found in the article, [What is risk](concept-identity-protection-risks.md).
+More detail on these and other risks including how or when they're calculated can be found in the article, [What is risk](concept-identity-protection-risks.md).
The risk signals can trigger remediation efforts such as requiring users to: perform Azure AD Multi-Factor Authentication, reset their password using self-service password reset, or blocking until an administrator takes action.
More information can be found in the article, [How To: Investigate risk](howto-i
### Risk levels
-Identity Protection categorizes risk into three tiers: low, medium, and high.
+Identity Protection categorizes risk into tiers: low, medium, and high.
-While Microsoft does not provide specific details about how risk is calculated, we will say that each level brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
+While Microsoft doesn't provide specific details about how risk is calculated, we'll say that each level brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
## Exporting risk data
Data from Identity Protection can be exported to other tools for archive and fur
Information about integrating Identity Protection information with Microsoft Sentinel can be found in the article, [Connect data from Azure AD Identity Protection](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection).
-Additionally, organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send RiskyUsers and UserRiskEvents data to a Log Analytics workspace, archive data to a storage account, stream data to an Event Hub, or send data to a partner solution. Detailed information about how to do so can be found in the article, [How To: Export risk data](howto-export-risk-data.md).
+Additionally, organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send RiskyUsers and UserRiskEvents data to a Log Analytics workspace, archive data to a storage account, stream data to Event Hubs, or send data to a partner solution. Detailed information about how to do so can be found in the article, [How To: Export risk data](howto-export-risk-data.md).
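Alongside diagnostic settings, risk data can also be pulled programmatically. A minimal sketch follows, using the Microsoft Graph `identityProtection/riskyUsers` endpoint; the token placeholder assumes an app token with the IdentityRiskyUser.Read.All permission.

```python
import requests

# Sketch: pull risky users from Microsoft Graph for archiving or analysis.
# GRAPH_TOKEN is a placeholder for an app token with IdentityRiskyUser.Read.All.
GRAPH_TOKEN = "<graph-access-token>"
resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
)
resp.raise_for_status()
for user in resp.json().get("value", []):
    print(user["userPrincipalName"], user["riskLevel"], user["riskState"])
```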
## Permissions
Identity Protection requires users be a Security Reader, Security Operator, Secu
| Security operator | View all Identity Protection reports and Overview blade <br><br> Dismiss user risk, confirm safe sign-in, confirm compromise | Configure or change policies <br><br> Reset password for a user <br><br> Configure alerts | | Security reader | View all Identity Protection reports and Overview blade | Configure or change policies <br><br> Reset password for a user <br><br> Configure alerts <br><br> Give feedback on detections |
-Currently, the security operator role cannot access the Risky sign-ins report.
+Currently, the security operator role can't access the Risky sign-ins report.
Conditional Access administrators can also create policies that factor in sign-in risk as a condition. Find more information in the article [Conditional Access: Conditions](../conditional-access/concept-conditional-access-conditions.md#sign-in-risk).
Conditional Access administrators can also create policies that factor in sign-i
[!INCLUDE [Active Directory P2 license](../../../includes/active-directory-p2-license.md)]
-| Capability | Details | Azure AD Free / Microsoft 365 Apps | Azure AD Premium P1|Azure AD Premium P2 |
+| Capability | Details | Azure AD Free / Microsoft 365 Apps | Azure AD Premium P1 | Azure AD Premium P2 |
| | | | | |
-| Risk policies | User risk policy (via Identity Protection) | No | No |Yes |
-| Risk policies | Sign-in risk policy (via Identity Protection or Conditional Access) | No | No |Yes |
-| Security reports | Overview | No | No |Yes |
-| Security reports | Risky users | Limited Information. Only users with medium and high risk are shown. No details drawer or risk history. | Limited Information. Only users with medium and high risk are shown. No details drawer or risk history. | Full access|
-| Security reports | Risky sign-ins | Limited Information. No risk detail or risk level is shown. | Limited Information. No risk detail or risk level is shown. | Full access|
-| Security reports | Risk detections | No | Limited Information. No details drawer.| Full access|
-| Notifications | Users at risk detected alerts | No | No |Yes |
-| Notifications | Weekly digest| No | No | Yes |
-| | MFA registration policy | No | No | Yes |
+| Risk policies | User risk policy (via Identity Protection) | No | No | Yes |
+| Risk policies | Sign-in risk policy (via Identity Protection or Conditional Access) | No | No | Yes |
+| Security reports | Overview | No | No | Yes |
+| Security reports | Risky users | Limited Information. Only users with medium and high risk are shown. No details drawer or risk history. | Limited Information. Only users with medium and high risk are shown. No details drawer or risk history. | Full access|
+| Security reports | Risky sign-ins | Limited Information. No risk detail or risk level is shown. | Limited Information. No risk detail or risk level is shown. | Full access |
+| Security reports | Risk detections | No | Limited Information. No details drawer.| Full access |
+| Notifications | Users at risk detected alerts | No | No | Yes |
+| Notifications | Weekly digest | No | No | Yes |
+| MFA registration policy | | No | No | Yes |
More information on these rich reports can be found in the article, [How To: Investigate risk](howto-identity-protection-investigate-risk.md#navigating-the-reports).
active-directory Access Panel Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/access-panel-collections.md
Title: Create collections for My Apps portals-+ description: Use My Apps collections to customize My Apps pages for a simpler My Apps experience for your users. Organize applications into groups with separate tabs.
active-directory Add Application Portal Assign Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-assign-users.md
Title: 'Quickstart: Create and assign a user account'-+ description: Create a user account in your Azure Active Directory tenant and assign it to an application.
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
Title: 'Configure enterprise application properties'-+ description: Configure the properties of an enterprise application in Azure Active Directory.
active-directory Add Application Portal Setup Oidc Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
Title: 'Add an OpenID Connect-based single sign-on application' description: Learn how to add an OpenID Connect-based single sign-on application in Azure Active Directory.-+
active-directory Add Application Portal Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
Title: 'Quickstart: Enable single sign-on for an enterprise application'-+ description: Enable single sign-on for an enterprise application in Azure Active Directory. -+ Last updated 09/21/2021-+ #Customer intent: As an administrator of an Azure AD tenant, I want to enable single sign-on for an enterprise application.
active-directory Add Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal.md
Title: 'Quickstart: Add an enterprise application' description: Add an enterprise application in Azure Active Directory.-+
active-directory Admin Consent Workflow Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-faq.md
Title: Frequently asked questions about the admin consent workflow-+ description: Find answers to frequently asked questions (FAQs) about the admin consent workflow.
active-directory Admin Consent Workflow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
Title: Overview of admin consent workflow-+ description: Learn about the admin consent workflow in Azure Active Directory
active-directory App Management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-powershell-samples.md
Title: PowerShell samples in Application Management-+ description: These PowerShell samples are used for apps you manage in your Azure Active Directory tenant. You can use these sample scripts to find expiration information about secrets and certificates.
active-directory Application List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-list.md
Title: Viewing apps using your tenant for identity management-+ description: Understand how to view all applications using your Azure Active Directory tenant for identity management.
active-directory Application Management Certs Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-management-certs-faq.md
Title: Application Management certificates frequently asked questions-+ description: Learn answers to frequently asked questions (FAQ) about managing certificates for apps using Azure Active Directory as an Identity Provider (IdP).
active-directory Application Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-properties.md
Title: 'Properties of an enterprise application'-+ description: Learn about the properties of an enterprise application in Azure Active Directory.
active-directory Application Sign In Other Problem Access Panel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
Title: Troubleshoot problems signing in to an application from My Apps portal-+ description: Troubleshoot problems signing in to an application from Azure AD My Apps
active-directory Application Sign In Problem Application Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
Title: Error message appears on app page after you sign in-+ description: How to resolve issues with Azure AD sign in when the app returns an error message.
active-directory Application Sign In Problem First Party Microsoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-first-party-microsoft.md
Title: Problems signing in to a Microsoft application-+ description: Troubleshoot common problems faced when signing in to first-party Microsoft Applications using Azure AD (like Microsoft 365).
active-directory Application Sign In Unexpected User Consent Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md
Title: Unexpected error when performing consent to an application-+ description: Discusses errors that can occur during the process of consenting to an application and what you can do about them
active-directory Application Sign In Unexpected User Consent Prompt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
Title: Unexpected consent prompt when signing in to an application-+ description: How to troubleshoot when a user sees an unexpected consent prompt for an application you have integrated with Azure AD
active-directory Assign App Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-app-owners.md
Title: Assign enterprise application owners-+ description: Learn how to assign owners to applications in Azure Active Directory documentationcenter: ''
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Title: Assign users and groups-+ description: Learn how to assign and unassign users, and groups, for an app using Azure Active Directory for identity management.
active-directory Certificate Signing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/certificate-signing-options.md
Title: Advanced certificate signing options in a SAML token-+ description: Learn how to use advanced certificate signing options in the SAML token for pre-integrated apps in Azure Active Directory
active-directory Cloud App Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloud-app-security.md
Title: App visibility and control with Microsoft Defender for Cloud Apps-+ description: Learn ways to identify app risk levels, stop breaches and leaks in real time, and use app connectors to take advantage of provider APIs for visibility and governance.
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Title: Configure the admin consent workflow-+ description: Learn how to configure a way for end users to request access to applications that require admin consent.
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
Title: Configure sign-in auto-acceleration using Home Realm Discovery-+ description: Learn how to force federated IdP acceleration for an application using Home Realm Discovery policy.
active-directory Configure Linked Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-linked-sign-on.md
Title: Add linked single sign-on to an application description: Add linked single sign-on to an application in Azure Active Directory.-+
active-directory Configure Password Single Sign On Non Gallery Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
Title: Add password-based single sign-on to an application description: Add password-based single sign-on to an application in Azure Active Directory.-+
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md
Title: Configure permission classifications-+ description: Learn how to manage delegated permission classifications.
active-directory Configure Risk Based Step Up Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md
Title: Configure risk-based step-up consent-+ description: Learn how to disable and enable risk-based step-up consent to reduce user exposure to malicious apps that make illicit consent requests.
active-directory Configure User Consent Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent-groups.md
Title: Configure group owner consent to apps accessing group data-+ description: Learn how to manage whether group and team owners can consent to applications that will have access to the group or team's data.
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
Title: Configure how users consent to applications-+ description: Learn how to manage how and when users can consent to applications that will have access to your organization's data.
active-directory Consent And Permissions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/consent-and-permissions-overview.md
Title: Overview of consent and permissions-+ description: Learn about the fundamental concepts of consents and permissions in Azure AD
active-directory Datawiza With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
Title: Secure hybrid access with Datawiza-+ description: Learn how to integrate Datawiza with Azure AD. See how to use Datawiza and Azure AD to authenticate users and give them access to on-premises and cloud apps.
active-directory Debug Saml Sso Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/debug-saml-sso-issues.md
Title: Debug SAML-based single sign-on-+ description: Debug SAML-based single sign-on to applications in Azure Active Directory.
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
Title: 'Quickstart: Delete an enterprise application' description: Delete an enterprise application in Azure Active Directory.-+
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Title: Disable how a user signs in-+ description: How to disable an enterprise application so that no users may sign in to it in Azure Active Directory
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/end-user-experiences.md
Title: End-user experiences for applications-+ description: Azure Active Directory (Azure AD) provides several customizable ways to deploy applications to end users in your organization.
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-integration.md
Title: Secure hybrid access with F5-+ description: F5 BIG-IP Access Policy Manager and Azure Active Directory integration for Secure Hybrid Access
active-directory F5 Aad Password Less Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
Title: Configure F5 BIG-IP SSL-VPN solution in Azure AD-+ description: Tutorial to configure F5's BIG-IP based Secure Sockets Layer virtual private network (SSL-VPN) solution with Azure Active Directory (AD) for Secure Hybrid Access (SHA)
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
Title: Secure hybrid access with F5 deployment guide-+ description: Tutorial to deploy F5 BIG-IP Virtual Edition (VE) VM in Azure IaaS for Secure hybrid access
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Title: Grant tenant-wide admin consent to an application -+ description: Learn how to grant tenant-wide consent to an application so that end-users are not prompted for consent when signing in to an application.
active-directory Grant Consent Single User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-consent-single-user.md
Title: Grant consent on behalf of a single user description: Learn how to grant consent on behalf of a single user when user consent is disabled or restricted.-+
active-directory Hide Application From User Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md
Title: Hide an Enterprise application-+ description: How to hide an Enterprise application from user's experience in Azure Active Directory access portals or Microsoft 365 launchers.
active-directory Home Realm Discovery Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md
Title: Home Realm Discovery policy-+ description: Learn how to manage Home Realm Discovery policy for Azure Active Directory authentication for federated users, including auto-acceleration and domain hints.
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md
Title: SAML token encryption description: Learn how to configure Azure Active Directory SAML token encryption.-+
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md
Title: Manage app consent policies description: Learn how to manage built-in and custom app consent policies to control when consent can be granted.-+
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
Title: Review permissions granted to applications-+ description: Learn how to review and manage permissions for an application in Azure Active Directory.
active-directory Manage Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-consent-requests.md
Title: Manage consent to applications and evaluate consent requests description: Learn how to manage consent requests when user consent is disabled or restricted, and how to evaluate a request for tenant-wide admin consent to an application in Azure Active Directory.-+
active-directory Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-self-service-access.md
Title: How to enable self-service application assignment-+ description: Enable self-service application access to allow users to find their own applications from their My Apps portal
active-directory Migrate Adfs Application Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
Title: Use the activity report to move AD FS apps to Azure Active Directory description: The Active Directory Federation Services (AD FS) application activity report lets you quickly migrate applications from AD FS to Azure Active Directory (Azure AD). This migration tool for AD FS identifies compatibility with Azure AD and gives migration guidance.-+
active-directory Migrate Adfs Apps To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
Title: Moving application authentication from AD FS to Azure Active Directory description: Learn how to use Azure Active Directory to replace Active Directory Federation Services (AD FS), giving users single sign-on to all their applications.-+
active-directory Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migration-resources.md
Title: Resources for migrating apps to Azure Active Directory description: Resources to help you migrate application access and authentication to Azure Active Directory (Azure AD).-+
active-directory Myapps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/myapps-overview.md
Title: My Apps portal overview description: Learn about how to manage applications in the My Apps portal.-+
active-directory One Click Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/one-click-sso-tutorial.md
Title: One-click, single sign-on (SSO) configuration of your Azure Marketplace application description: Steps for one-click configuration of SSO for your application from the Azure Marketplace.-+
active-directory Overview Application Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/overview-application-gallery.md
Title: Overview of the Azure Active Directory application gallery description: An overview of using the Azure Active Directory application gallery.-+
active-directory Overview Assign App Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/overview-assign-app-owners.md
Title: Overview of enterprise application ownership-+ description: Learn about enterprise application ownership in Azure Active Directory
active-directory Plan An Application Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-an-application-integration.md
Title: Get started integrating Azure Active Directory with apps description: This article is a getting started guide for integrating Azure Active Directory (AD) with on-premises applications, and cloud applications.-+
active-directory Plan Sso Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-sso-deployment.md
Title: Plan a single sign-on deployment description: Plan the deployment of single sign-on in Azure Active Directory.-+
active-directory Prevent Domain Hints With Home Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
Title: Prevent sign-in auto-acceleration using Home Realm Discovery policy-+ description: Learn how to prevent domain_hint auto-acceleration to federated IDPs.
active-directory Protect Against Consent Phishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/protect-against-consent-phishing.md
Title: Protecting against consent phishing-+ description: Learn ways of mitigating against app-based consent phishing attacks using Azure AD.
active-directory Review Admin Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/review-admin-consent-requests.md
Title: Review and take action on admin consent requests-+ description: Learn how to review and take action on admin consent requests that were created after you were designated as a reviewer.
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
Title: Secure hybrid access with Azure AD partner integration description: Help customers discover and migrate SaaS applications into Azure AD and connect apps that use legacy authentication methods with Azure AD.-+
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access.md
Title: Secure hybrid access description: This article describes partner solutions for integrating your legacy on-premises, public cloud, or private cloud applications with Azure AD. -+
active-directory Silverfort Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
Title: Secure hybrid access with Azure AD and Silverfort description: In this tutorial, learn how to integrate Silverfort with Azure AD for secure hybrid access -+
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tenant-restrictions.md
Title: Use tenant restrictions to manage access to SaaS apps description: How to use tenant restrictions to manage which users can access apps based on their Azure AD tenant.-+
active-directory Troubleshoot App Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-app-publishing.md
Title: Your sign-in was blocked description: Troubleshoot a blocked sign-in to the Microsoft Application Network portal. -+
active-directory Troubleshoot Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
Title: Troubleshoot password-based single sign-on description: Troubleshoot issues with an Azure AD app that's configured for password-based single sign-on.-+
active-directory Troubleshoot Saml Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md
Title: Troubleshoot SAML-based single sign-on description: Troubleshoot issues with an Azure AD app that's configured for SAML-based single sign-on.-+
active-directory Tutorial Govern Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-govern-monitor.md
Title: "Tutorial: Govern and monitor applications"-+ description: In this tutorial, you learn how to govern and monitor an application in Azure Active Directory.
active-directory Tutorial Manage Access Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-access-security.md
Title: "Tutorial: Manage application access and security"-+ description: In this tutorial, you learn how to manage access to an application in Azure Active Directory and make sure it's secure.
active-directory Tutorial Manage Certificates For Federated Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md
Title: "Tutorial: Manage federation certificates" description: In this tutorial, you'll learn how to customize the expiration date for your federation certificates, and how to renew certificates that will soon expire.-+
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
Title: Publish your application description: Learn how to publish your application in the Azure Active Directory application gallery.
active-directory View Applications Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/view-applications-portal.md
Title: 'Quickstart: View enterprise applications' description: View the enterprise applications that are registered to use your Azure Active Directory tenant.
active-directory Ways Users Get Assigned To Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
Title: Understand how users are assigned to apps description: Understand how users get assigned to an app that is using Azure Active Directory for identity management.
active-directory What Is Access Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-access-management.md
Title: Manage access to apps description: Describes how Azure Active Directory enables organizations to specify the apps to which each user has access.
active-directory What Is Application Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-application-management.md
Title: What is application management? description: An overview of managing the lifecycle of an application in Azure Active Directory.
active-directory What Is Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-single-sign-on.md
Title: What is single sign-on? description: Learn about single sign-on for enterprise applications in Azure Active Directory.
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Filesystem Size Used Avail Use% Mounted on
/dev/sdc 15G 46M 15G 1% /mnt/azuredisk ```
+## On-demand bursting
+
+The on-demand bursting model allows a disk to burst whenever its needs exceed its current capacity. This model incurs additional charges each time the disk bursts, and it's available only for premium SSDs larger than 512 GiB. For more information on provisioned IOPS and throughput per disk for premium SSDs, see [Premium SSD size][az-premium-ssd]. Alternatively, with credit-based bursting, the disk bursts only if it has burst credits accumulated in its credit bucket. Credit-based bursting doesn't incur additional charges when the disk bursts, and it's available only for premium SSDs 512 GiB and smaller, and standard SSDs 1024 GiB and smaller. For more details, see [On-demand bursting][az-on-demand-bursting].
+
+> [!IMPORTANT]
+> The default `managed-csi-premium` storage class has on-demand bursting disabled and uses credit-based bursting. Any premium SSD dynamically created by a persistent volume claim based on the default `managed-csi-premium` storage class also has on-demand bursting disabled.
+
+To create a premium SSD persistent volume with [on-demand bursting][az-on-demand-bursting] enabled, create a new storage class with the [enableBursting][csi-driver-parameters] parameter set to `true`, as shown in the following YAML template. For more details on building your own storage class with on-demand bursting enabled, see [Create a Burstable Managed CSI Premium Storage Class][create-burstable-storage-class].
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: burstable-managed-csi-premium
+provisioner: disk.csi.azure.com
+parameters:
+ skuname: Premium_LRS
+ enableBursting: "true"
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+```
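Once the storage class exists, a pod can request a burst-capable disk through a persistent volume claim. The following is a minimal sketch; the claim name and the 1-TiB size are illustrative (on-demand bursting requires a premium SSD larger than 512 GiB):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-burstable-premium
spec:
  accessModes:
    - ReadWriteOnce
  # References the storage class defined above
  storageClassName: burstable-managed-csi-premium
  resources:
    requests:
      storage: 1Ti
```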
+ ## Windows containers The Azure disk CSI driver supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers quickstart][aks-quickstart-cli] to add a Windows node pool.
$ kubectl exec -it busybox-azuredisk-0 -- cat c:\mnt\azuredisk\data.txt # on Win
[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/ [kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/ [managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
+[csi-driver-parameters]: https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/docs/driver-parameters.md
+[create-burstable-storage-class]: https://github.com/Azure-Samples/burstable-managed-csi-premium
<!-- LINKS - internal --> [azure-disk-volume]: azure-disk-volume.md
$ kubectl exec -it busybox-azuredisk-0 -- cat c:\mnt\azuredisk\data.txt # on Win
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
+[az-on-demand-bursting]: ../virtual-machines/disk-bursting.md#on-demand-bursting
+[enable-on-demand-bursting]: ../virtual-machines/disks-enable-bursting.md?tabs=azure-cli
+[az-premium-ssd]: ../virtual-machines/disks-types.md#premium-ssds
aks Control Kubeconfig Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/control-kubeconfig-access.md
For enhanced security on access to AKS clusters, [integrate Azure Active Directo
<!-- LINKS - internal --> [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: /learn/quick-kubernetes-deploy-powershell.md
+[aks-quickstart-powershell]: /azure/aks/learn/quick-kubernetes-deploy-powershell
[azure-cli-install]: /cli/azure/install-azure-cli [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [azure-rbac]: ../role-based-access-control/overview.md
aks Open Service Mesh Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-troubleshoot.md
aks-osm-webhook-osm 1 102m
### Check for the service and the CA bundle of the Validating webhook ```azurecli-interactive
-kubectl get ValidatingWebhookConfiguration aks-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service'
+kubectl get ValidatingWebhookConfiguration aks-osm-validator-mesh-osm -o json | jq '.webhooks[0].clientConfig.service'
``` A well configured Validating Webhook Configuration would look exactly like this:
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
If the `ProvisioningState` shows `Starting` that means your cluster hasn't fully
<!-- LINKS - internal --> [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: /learn/quick-kubernetes-deploy-powershell.md
+[aks-quickstart-powershell]: /azure/aks/learn/quick-kubernetes-deploy-powershell
[install-azure-cli]: /cli/azure/install-azure-cli [az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update
api-management Api Management Howto Cache External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache-external.md
Title: Use an external cache in Azure API Management | Microsoft Docs
-description: Learn how to configure and use an external cache in Azure API Management. Using an external cache lets you overcome some limitations of the built-in cache.
+description: Learn how to configure and use an external Redis-compatible cache in Azure API Management. Using an external cache gives you more control and flexibility than the built-in cache.
documentationcenter: '' Previously updated : 04/26/2020 Last updated : 05/19/2022 # Use an external Redis-compatible cache in Azure API Management
-In addition to utilizing the built-in cache, Azure API Management allows for caching responses in an external Redis-compatible cache, e.g. Azure Cache for Redis.
+In addition to utilizing the built-in cache, Azure API Management allows for caching responses in an external Redis-compatible cache, such as Azure Cache for Redis.
Using an external cache allows you to overcome a few limitations of the built-in cache: * Avoid having your cache periodically cleared during API Management updates * Have more control over your cache configuration
-* Cache more data than your API Management tier allows to
+* Cache more data than your API Management tier allows
* Use caching with the Consumption tier of API Management
-* Enable caching in the [API Management self-hosted gateways](self-hosted-gateway-overview.md)
+* Enable caching in the [API Management self-hosted gateway](self-hosted-gateway-overview.md)
For more detailed information about caching, see [API Management caching policies](api-management-caching-policies.md) and [Custom caching in Azure API Management](api-management-sample-cache-by-key.md).
To complete this tutorial, you need to:
## <a name="create-cache"> </a> Create Azure Cache for Redis
-This section explains how to create an Azure Cache for Redis in Azure. If you already have an Azure Cache for Redis, within or outside of Azure, you can <a href="#add-external-cache">skip</a> to the next section.
+This section explains how to create an Azure Cache for Redis in Azure. If you already have an Azure Cache for Redis, or another Redis-compatible cache within or outside of Azure, you can <a href="#add-external-cache">skip</a> to the next section.
[!INCLUDE [redis-cache-create](../azure-cache-for-redis/includes/redis-cache-create.md)] ## <a name="create-cache"> </a> Deploy Redis cache to Kubernetes
-For caching, self-hosted gateways rely exclusively on external caches. For caching to be effective self-hosted gateways and the cache they rely on must be located close to each other to minimize lookup and store latencies. Deploying a Redis cache into the same Kubernetes cluster or in a separate cluster nearby are the best options. Follow this [link](https://github.com/kubernetes/examples/tree/master/guestbook) to learn how to deploy Redis cache to a Kubernetes cluster.
+For a self-hosted gateway, caching requires an external cache. For caching to be effective, a self-hosted gateway and the cache it relies on must be located close to each other to minimize lookup and store latencies. Deploying a Redis cache into the same Kubernetes cluster or in a separate cluster nearby are the best options. Learn how to [deploy Redis cache to a Kubernetes cluster](https://github.com/kubernetes/examples/tree/master/guestbook).
## <a name="add-external-cache"> </a>Add an external cache
-Follow the steps below to add an external Azure Cache for Redis in Azure API Management.
+Follow the steps below to add an external Redis-compatible cache in Azure API Management. You can limit the cache to a specific gateway in your API Management instance.
![Screenshot that shows how to add an external Azure Cache for Redis in Azure API Management.](media/api-management-howto-cache-external/add-external-cache.png)
+### Use from setting
+
+The **Use from** setting in the configuration specifies the location of your API Management instance that will use the cache. Select one of the following:
+
+* The Azure region where the API Management instance is hosted (or one of the configured locations, if you have a [multi-region](api-management-howto-deploy-multi-region.md) deployment)
+
+* A self-hosted gateway location
+
+* **Default**, to configure the cache as the default for all gateway locations in the API Management instance
+
+ A cache used for **Default** will be overridden by a cache used for a specific matching region or location.
+
+ For example, consider an API Management instance that's hosted in the East US, Southeast Asia, and West Europe regions. There are two caches configured, one for **Default** and one for **Southeast Asia**. In this example, API Management in **Southeast Asia** will use its own cache, while the other two regions will use the **Default** cache entry.
+ > [!NOTE]
-> The **Use from** setting specifies an Azure region or a self-hosted gateway location that will use the configured cache. The caches configured as **Default** will be overridden by caches with a specific matching region or location value.
->
-> For example, if API Management is hosted in the East US, Southeast Asia and West Europe regions and there are two caches configured, one for **Default** and one for **Southeast Asia**, API Management in **Southeast Asia** will use its own cache, while the other two regions will use the **Default** cache entry.
+> You can configure the same external cache for more than one API Management instance. The API Management instances can be in the same or different regions. When sharing the cache for more than one instance, you must select **Default** in the **Use from** setting.
### Add an Azure Cache for Redis from the same subscription 1. Browse to your API Management instance in the Azure portal. 2. Select the **External cache** tab from the menu on the left.
-3. Click the **+ Add** button.
+3. Select the **+ Add** button.
4. Select your cache in the **Cache instance** dropdown field.
-5. Select **Default** or specify the desired region in the **Use from** dropdown field.
-6. Click **Save**.
+5. Select **Default** or specify the desired region in the [**Use from**](#use-from-setting) dropdown field.
+6. Select **Save**.
-### Add an Azure Cache for Redis hosted outside of the current Azure subscription or Azure in general
+### Add a Redis-compatible cache hosted outside of the current Azure subscription or Azure in general
1. Browse to your API Management instance in the Azure portal. 2. Select the **External cache** tab from the menu on the left.
-3. Click the **+ Add** button.
+3. Select the **+ Add** button.
4. Select **Custom** in the **Cache instance** dropdown field.
-5. Select **Default** or specify the desired region in the **Use from** dropdown field.
-6. Provide your Azure Cache for Redis connection string in the **Connection string** field.
-7. Click **Save**.
+5. Select **Default** or specify the desired region in the [**Use from**](#use-from-setting) dropdown field.
+6. Provide your Azure Cache for Redis (or Redis-compatible cache) connection string in the **Connection string** field.
+7. Select **Save**.
### Add a Redis cache to a self-hosted gateway 1. Browse to your API Management instance in the Azure portal. 2. Select the **External cache** tab from the menu on the left.
-3. Click the **+ Add** button.
+3. Select the **+ Add** button.
4. Select **Custom** in the **Cache instance** dropdown field.
-5. Specify the desired self-hosted gateway location or **Default** in the **Use from** dropdown field.
+5. Specify the desired self-hosted gateway location or **Default** in the [**Use from**](#use-from-setting) dropdown field.
6. Provide your Redis cache connection string in the **Connection string** field.
-7. Click **Save**.
+7. Select **Save**.
## Use the external cache
-Once the external cache is configured in Azure API Management, it can be used through caching policies. See [Add caching to improve performance in Azure API Management](api-management-howto-cache.md) for detailed steps.
+After adding a Redis-compatible cache, configure [caching policies](api-management-caching-policies.md) to enable response caching, or caching of values by key, in the external cache.
+
+For a detailed example, see [Add caching to improve performance in Azure API Management](api-management-howto-cache.md).
## <a name="next-steps"> </a>Next steps * For more information about caching policies, see [Caching policies][Caching policies] in the [API Management policy reference][API Management policy reference].
-* For information on caching items by key using policy expressions, see [Custom caching in Azure API Management](api-management-sample-cache-by-key.md).
+* To cache items by key using policy expressions, see [Custom caching in Azure API Management](api-management-sample-cache-by-key.md).
[API Management policy reference]: ./api-management-policies.md [Caching policies]: ./api-management-caching-policies.md
api-management Api Management Sample Cache By Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-cache-by-key.md
description: Learn how to cache items by key in Azure API Management. You can mo
documentationcenter: '' editor: '' Previously updated : 12/15/2016 Last updated : 05/19/2022 # Custom caching in Azure API Management
-Azure API Management service has built-in support for [HTTP response caching](api-management-howto-cache.md) using the resource URL as the key. The key can be modified by request headers using the `vary-by` properties. This is useful for caching entire HTTP responses (also known as representations), but sometimes it is useful to just cache a portion of a representation. The new [cache-lookup-value](./api-management-caching-policies.md#GetFromCacheByKey) and [cache-store-value](./api-management-caching-policies.md#StoreToCacheByKey) policies provide the ability to store and retrieve arbitrary pieces of data from within policy definitions. This ability also adds value to the previously introduced [send-request](./api-management-advanced-policies.md#SendRequest) policy because you can now cache responses from external services.
+Azure API Management service has built-in support for [HTTP response caching](api-management-howto-cache.md) using the resource URL as the key. The key can be modified by request headers using the `vary-by` properties. This is useful for caching entire HTTP responses (also known as representations), but sometimes it's useful to just cache a portion of a representation. The [cache-lookup-value](./api-management-caching-policies.md#GetFromCacheByKey) and [cache-store-value](./api-management-caching-policies.md#StoreToCacheByKey) policies provide the ability to store and retrieve arbitrary pieces of data from within policy definitions. This ability also adds value to the [send-request](./api-management-advanced-policies.md#SendRequest) policy because you can cache responses from external services.
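For example, a minimal sketch of the store-and-retrieve pair (the key, value, and duration here are illustrative):

```xml
<!-- Store an arbitrary string in the cache for 60 seconds -->
<cache-store-value key="greeting" value="Hello, world!" duration="60" />

<!-- Later, read it back into a context variable named "greeting" -->
<cache-lookup-value key="greeting" variable-name="greeting" />
```

After the lookup runs, the value is available to policy expressions as `(string)context.Variables["greeting"]`.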
## Architecture
-API Management service uses a shared per-tenant data cache so that, as you scale up to multiple units you still get access to the same cached data. However, when working with a multi-region deployment there are independent caches within each of the regions. It is important to not treat the cache as a data store, where it is the only source of some piece of information. If you did, and later decided to take advantage of the multi-region deployment, then customers with users that travel may lose access to that cached data.
+API Management service uses a shared per-tenant internal data cache so that, as you scale up to multiple units, you still get access to the same cached data. However, when working with a multi-region deployment, there are independent caches within each of the regions. It's important not to treat the cache as a data store that is the only source of some piece of information. If you did, and later decided to take advantage of a multi-region deployment, customers whose users travel between regions may lose access to that cached data.
+
+> [!NOTE]
+> The internal cache is not available in the **Consumption** tier of Azure API Management. You can [use an external Azure Cache for Redis](api-management-howto-cache-external.md) instead. An external cache allows for greater cache control and flexibility for API Management instances in all tiers.
## Fragment caching
-There are certain cases where responses being returned contain some portion of data that is expensive to determine and yet remains fresh for a reasonable amount of time. As an example, consider a service built by an airline that provides information relating flight reservations, flight status, etc. If the user is a member of the airlines points program, they would also have information relating to their current status and accumulated mileage. This user-related information might be stored in a different system, but it may be desirable to include it in responses returned about flight status and reservations. This can be done using a process called fragment caching. The primary representation can be returned from the origin server using some kind of token to indicate where the user-related information is to be inserted.
+There are certain cases where responses being returned contain some portion of data that is expensive to determine and yet remains fresh for a reasonable amount of time. As an example, consider a service built by an airline that provides information relating to flight reservations, flight status, and so on. If the user is a member of the airline's points program, they would also have information relating to their current status and accumulated mileage. This user-related information might be stored in a different system, but it may be desirable to include it in responses returned about flight status and reservations. This can be done using a process called fragment caching. The primary representation can be returned from the origin server using some kind of token to indicate where the user-related information is to be inserted.
Consider the following JSON response from a backend API.
And secondary resource at `/userprofile/{userid}` that looks like,
{ "username" : "Bob Smith", "Status" : "Gold" } ```
-To determine the appropriate user information to include, API Management needs to identify who the end user is. This mechanism is implementation-dependent. As an example, I am using the `Subject` claim of a `JWT` token.
+To determine the appropriate user information to include, API Management needs to identify who the end user is. This mechanism is implementation-dependent. The following example uses the `Subject` claim of a `JWT` token.
```xml <set-variable
To prevent API Management from making this HTTP request again, when the same user
value="@((string)context.Variables["userprofile"])" duration="100000" /> ```
-API Management stores the value in the cache using the exact same key that API Management originally attempted to retrieve it with. The duration that API Management chooses to store the value should be based on how often the information changes and how tolerant users are to out-of-date information.
+API Management stores the value in the cache using the same key that API Management originally attempted to retrieve it with. The duration that API Management chooses to store the value should be based on how often the information changes and how tolerant users are to out-of-date information.
-It is important to realize that retrieving from the cache is still an out-of-process, network request and potentially can still add tens of milliseconds to the request. The benefits come when determining the user profile information takes longer than that due to needing to do database queries or aggregate information from multiple back-ends.
+It is important to realize that retrieving from the cache is still an out-of-process network request and potentially can add tens of milliseconds to the request. The benefits come when determining the user profile information takes longer than that due to needing to do database queries or aggregate information from multiple back-ends.
The final step in the process is to update the returned response with the user profile information.
The final step in the process is to update the returned response with the user p
to="@((string)context.Variables["userprofile"])" /> ```
-You can chose to include the quotation marks as part of the token so that even when the replace doesn't occur, the response is still a valid JSON.
+You can choose to include the quotation marks as part of the token so that even when the replacement doesn't occur, the response is still valid JSON.
-Once you combine all these steps together, the end result is a policy that looks like the following one.
+Once you combine these steps, the end result is a policy that looks like the following one.
```xml <policies>
Once you combine all these steps together, the end result is a policy that looks
</policies> ```
-This caching approach is primarily used in web sites where HTML is composed on the server side so that it can be rendered as a single page. It can also be useful in APIs where clients cannot do client-side HTTP caching or it is desirable not to put that responsibility on the client.
+This caching approach is primarily used in websites where HTML is composed on the server side so that it can be rendered as a single page. It can also be useful in APIs where clients can't do client-side HTTP caching or it's desirable not to put that responsibility on the client.
This same kind of fragment caching can also be done on the backend web servers using a Redis caching server; however, using the API Management service to perform this work is useful when the cached fragments are coming from different back-ends than the primary responses. ## Transparent versioning
-It is common practice for multiple different implementation versions of an API to be supported at any one time. For example, to support different environments (dev, test, production, etc.) or to support older versions of the API to give time for API consumers to migrate to newer versions.
+It's common practice for multiple implementation versions of an API to be supported at any one time, for example, to support different environments (dev, test, production, and so on) or to give API consumers time to migrate from older versions to newer ones.
-One approach to handling this, instead of requiring client developers to change the URLs from `/v1/customers` to `/v2/customers` is to store in the consumer's profile data which version of the API they currently wish to use and call the appropriate backend URL. To determine the correct backend URL to call for a particular client, it is necessary to query some configuration data. By caching this configuration data, API Management can minimize the performance penalty of doing this lookup.
+One approach to handling this, instead of requiring client developers to change the URLs from `/v1/customers` to `/v2/customers`, is to store in the consumer's profile data which version of the API they currently wish to use, and to call the appropriate backend URL. To determine the correct backend URL to call for a particular client, it's necessary to query some configuration data. By caching this configuration data, API Management can minimize the performance penalty of doing this lookup.
The first step is to determine the identifier used to configure the desired version. In this example, the version is associated with the product subscription key.
key="@("clientversion-" + context.Variables["clientid"])"
variable-name="clientversion" /> ```
-Then, API Management checks to see if it did not find it in the cache.
+Then, API Management checks whether the value was missing from the cache.
```xml <choose>
api-management Graphql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md
If you want to import a GraphQL schema and set up field resolvers using REST or
|-|-| | **Display name** | The name by which your GraphQL API will be displayed. | | **Name** | Raw name of the GraphQL API. Automatically populates as you type the display name. |
- | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common ["Star Wars" GraphQL endpoint](https://swapi-graphql.azure-api.net/graphql) as a demo. |
+ | **GraphQL API endpoint** | The base URL with your GraphQL API endpoint name. <br /> For example: *`https://example.com/your-GraphQL-name`*. You can also use a common "Star Wars" GraphQL endpoint such as `https://swapi-graphql.azure-api.net/graphql` as a demo. |
| **Upload schema** | Optionally select to browse and upload your schema file to replace the schema retrieved from the GraphQL endpoint (if available). | | **Description** | Add a description of your API. | | **URL scheme** | Select **HTTP**, **HTTPS**, or **Both**. Default selection: *Both*. |
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
The steps you follow for your project depend on whether you're using [Entity Fr
1. In Visual Studio, open the Package Manager Console and add the NuGet package [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) and update Entity Framework: ```powershell
- Install-Package Azure.Identity -Version 1.5.0
+ Install-Package Azure.Identity
Update-Package EntityFramework ```- 1. In your DbContext object (in *Models/MyDbContext.cs*), add the following code to the default constructor. ```csharp
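    // The snippet itself is elided in this digest. The following is a hedged
    // sketch only, assuming EF6 and the Azure.Identity package installed above:
    // it attaches an Azure AD access token to the SQL connection in the constructor.
    var conn = (System.Data.SqlClient.SqlConnection)Database.Connection;
    var credential = new Azure.Identity.DefaultAzureCredential();
    var token = credential.GetToken(
        new Azure.Core.TokenRequestContext(new[] { "https://database.windows.net/.default" }));
    conn.AccessToken = token.Token;
    ```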
app-service Webjobs Dotnet Deploy Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-dotnet-deploy-vs.md
Deploy a project as a WebJob by itself, or link it to a web project so that it a
### Prerequisites
-Install Visual Studio 2017 or Visual Studio 2019 with the [Azure development workload](/visualstudio/install/install-visual-studio#step-4choose-workloads).
+Install Visual Studio 2022 with the [Azure development workload](/visualstudio/install/install-visual-studio#step-4choose-workloads).
### <a id="convert"></a> Enable WebJobs deployment for an existing console app project
If you enable **Always on** in Azure, you can use Visual Studio to change the We
1. In **Solution Explorer**, right-click the project and select **Publish**.
-1. In the **Publish** tab, choose **Edit**.
+1. In the **Settings** section, choose **Show all settings**.
1. In the **Profile settings** dialog box, choose **Continuous** for **WebJob Type**, and then choose **Save**.
app-service Webjobs Sdk Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-sdk-get-started.md
Get started with the Azure WebJobs SDK for Azure App Service to enable your web apps to run background tasks, scheduled tasks, and respond to events.
-Use Visual Studio 2019 to create a .NET core console app that uses the WebJobs SDK to respond to Azure Storage Queue messages, run the project locally, and finally deploy it to Azure.
+Use Visual Studio 2022 to create a .NET Core console app that uses the WebJobs SDK to respond to Azure Storage Queue messages, run the project locally, and finally deploy it to Azure.
In this tutorial, you will learn how to:
In this tutorial, you will learn how to:
## Prerequisites
-* Visual Studio 2019 with the **Azure development** workload. [Install Visual Studio 2019](/visualstudio/install/).
+* Visual Studio 2022 with the **Azure development** workload. [Install Visual Studio 2022](/visualstudio/install/).
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet). ## Create a console app
-In this section, you start by creating a project in Visual Studio 2019. Next, you'll add tools for Azure development, code publishing, and functions that listen for triggers and call functions. Last, you'll set up console logging that disables a legacy monitoring tool and enables a console provider with default filtering.
+In this section, you start by creating a project in Visual Studio 2022. Next, you'll add tools for Azure development, code publishing, and functions that listen for triggers and call functions. Last, you'll set up console logging that disables a legacy monitoring tool and enables a console provider with default filtering.
>[!NOTE]
->The procedures in this article are verified for creating a .NET Core console app that runs on .NET Core 3.1.
+>The procedures in this article are verified for creating a .NET Core console app that runs on .NET 6.0.
### Create a project
In this section, you start by creating a project in Visual Studio 2019. Next, yo
1. Under **Configure your new project**, name the project *WebJobsSDKSample*, and then select **Next**.
-1. Choose your **Target framework** and select **Create**. This tutorial has been verified using .NET Core 3.1.
+1. Choose your **Target framework** and select **Create**. This tutorial has been verified using .NET 6.0.
### Install WebJobs NuGet packages Install the latest WebJobs NuGet package. This package includes Microsoft.Azure.WebJobs (WebJobs SDK), which lets you publish your function code to WebJobs in Azure App Service.
-1. Get the latest stable 3.x version of the [Microsoft.Azure.WebJobs.Extensions NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions/).
+1. Get the latest stable 4.x version of the [Microsoft.Azure.WebJobs.Extensions NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions/).
2. In Visual Studio, go to **Tools** > **NuGet Package Manager**. 3. Select **Package Manager Console**. You'll see a list of NuGet cmdlets, a link to documentation, and a `PM>` entry point.
-4. In the following command, replace `<3_X_VERSION>` with the current version number you found in step 1.
+4. In the following command, replace `<4_X_VERSION>` with the current version number you found in step 1.
```powershell
- Install-Package Microsoft.Azure.WebJobs.Extensions -version <3_X_VERSION>
+ Install-Package Microsoft.Azure.WebJobs.Extensions -version <4_X_VERSION>
``` 5. In the **Package Manager Console**, execute the command. The extension list appears and automatically installs.
Install the latest WebJobs NuGet package. This package includes Microsoft.Azure.
The host is the runtime container for functions that listens for triggers and calls functions. The following steps create a host that implements [`IHost`](/dotnet/api/microsoft.extensions.hosting.ihost), which is the Generic Host in ASP.NET Core.
-1. Select the **Program.cs** tab and add these `using` statements:
+1. Select the **Program.cs** tab, remove the existing contents, and add these `using` statements:
```cs using System.Threading.Tasks; using Microsoft.Extensions.Hosting; ```
-1. Also under **Program.cs**, replace the `Main` method with the following code:
+1. Also under **Program.cs**, add the following code:
```cs
- static async Task Main()
+ namespace WebJobsSDKSample
{
- var builder = new HostBuilder();
- builder.ConfigureWebJobs(b =>
+ class Program
+ {
+ static async Task Main()
+ {
+ var builder = new HostBuilder();
+ builder.ConfigureWebJobs(b =>
{ b.AddAzureStorageCoreServices(); });
- var host = builder.Build();
- using (host)
- {
- await host.RunAsync();
+ var host = builder.Build();
+ using (host)
+ {
+ await host.RunAsync();
+ }
+ }
} } ```
Set up console logging that uses the [ASP.NET Core logging framework](/aspnet/co
1. Get the latest stable version of the [`Microsoft.Extensions.Logging.Console` NuGet package](https://www.nuget.org/packages/Microsoft.Extensions.Logging.Console/), which includes `Microsoft.Extensions.Logging`.
-2. In the following command, replace `<3_X_VERSION>` with the current version number you found in step 1. Each type of NuGet Package has a unique version number.
+2. In the following command, replace `<6_X_VERSION>` with the current version number you found in step 1. Each type of NuGet Package has a unique version number.
```powershell
- Install-Package Microsoft.Extensions.Logging.Console -version <3_X_VERSION>
+ Install-Package Microsoft.Extensions.Logging.Console -version <6_X_VERSION>
``` 3. In the **Package Manager Console**, fill in the current version number and execute the command. The extension list appears and automatically installs.
Starting with version 3 of the WebJobs SDK, to connect to Azure Storage services
>[!NOTE] > Beginning with 5.x, Microsoft.Azure.WebJobs.Extensions.Storage has been [split by storage service](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage/CHANGELOG.md#major-changes-and-features) and has migrated the `AddAzureStorage()` extension method by service type.
-1. Get the latest stable version of the [Microsoft.Azure.WebJobs.Extensions.Storage](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage) NuGet package, version 3.x.
+1. Get the latest stable version of the [Microsoft.Azure.WebJobs.Extensions.Storage](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage) NuGet package, version 5.x.
-1. In the following command, replace `<3_X_VERSION>` with the current version number you found in step 1. Each type of NuGet Package has a unique version number.
+1. In the following command, replace `<5_X_VERSION>` with the current version number you found in step 1. Each type of NuGet Package has a unique version number.
```powershell
- Install-Package Microsoft.Azure.WebJobs.Extensions.Storage -Version <3_X_VERSION>
+ Install-Package Microsoft.Azure.WebJobs.Extensions.Storage -Version <5_X_VERSION>
``` 1. In the **Package Manager Console**, execute the command with the current version number at the `PM>` entry point.
-1. Continuing in **Program.cs**, in the `ConfigureWebJobs` extension method, add the `AddAzureStorage` method on the [`HostBuilder`](/dotnet/api/microsoft.extensions.hosting.hostbuilder) instance (before the `Build` command) to initialize the Storage extension. At this point, the `ConfigureWebJobs` method looks like this:
+1. Continuing in **Program.cs**, in the `ConfigureWebJobs` extension method, add the `AddAzureStorageQueues` method on the [`HostBuilder`](/dotnet/api/microsoft.extensions.hosting.hostbuilder) instance (before the `Build` command) to initialize the Storage extension. At this point, the `ConfigureWebJobs` method looks like this:
```cs builder.ConfigureWebJobs(b => { b.AddAzureStorageCoreServices();
- b.AddAzureStorage();
+ b.AddAzureStorageQueues();
}); ``` 1. Add the following code in the `Main` method after the `builder` is instantiated:
Starting with version 3 of the WebJobs SDK, to connect to Azure Storage services
builder.ConfigureWebJobs(b => { b.AddAzureStorageCoreServices();
- b.AddAzureStorage();
+ b.AddAzureStorageQueues();
}); var host = builder.Build(); using (host)
Because this file contains a connection string secret, you shouldn't store the f
Build and run the project locally and create a message queue to trigger the function.
-1. In **Cloud Explorer** in Visual Studio, expand the node for your new storage account, and then right-click **Queues**.
-
-1. Select **Create Queue**.
-
-1. Enter *queue* as the name for the queue, and then select **OK**.
-
- ![Screenshot that shows where you create the queue and name it "queue". ](./media/webjobs-sdk-get-started/create-queue.png)
-
-1. Right-click the node for the new queue, and then select **Open**.
+1. In the Azure portal, navigate to your storage account and select the **Queues** tab (1). Select **+ Queue** (2) and enter **queue** as the Queue name (3). Then, select **OK** (4).
-1. Select the **Add Message** icon.
+ ![This image shows how to create a new Azure Storage Queue.](./media/webjobs-sdk-get-started/create-queue-azure-storage.png "New Azure Storage Queue")
- ![Screenshot that highlights the Add Message icon.](./media/webjobs-sdk-get-started/create-queue-message.png)
+2. Select the new queue, and then select **Add message**.
-1. In the **Add Message** dialog, enter *Hello World!* as the **Message text**, and then select **OK**. There is now a message in the queue.
+3. In the **Add Message** dialog, enter *Hello World!* as the **Message text**, and then select **OK**. There is now a message in the queue.
![Create queue](./media/webjobs-sdk-get-started/hello-world-text.png)
-1. Press **Ctrl+F5** to run the project.
+4. Press **Ctrl+F5** to run the project.
The console shows that the runtime found your function. Because you used the `QueueTrigger` attribute in the `ProcessQueueMessage` function, the WebJobs runtime listens for messages in the queue named `queue`. When it finds a new message in this queue, the runtime calls the function, passing in the message string value. A sketch of such a function appears after these steps.
-1. Go back to the **Queue** window and refresh it. The message is gone, since it has been processed by your function running locally.
+5. Go back to the **Queue** window and refresh it. The message is gone, since it has been processed by your function running locally.
-1. Close the console window.
+6. Close the console window.
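For reference, a queue-triggered function of the kind this tutorial relies on might look like the following sketch (the exact body in your project may differ):

```cs
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace WebJobsSDKSample
{
    public class Functions
    {
        // Called by the WebJobs runtime whenever a new message lands in the
        // Storage queue named "queue"; the message body arrives as a string.
        public static void ProcessQueueMessage([QueueTrigger("queue")] string message, ILogger logger)
        {
            logger.LogInformation(message);
        }
    }
}
```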
It's now time to publish your WebJobs SDK project to Azure. ## <a name="deploy-as-a-webjob"></a>Deploy to Azure
-During deployment, you create an app service instance where you'll run your functions. When you publish a .NET Core console app to App Service in Azure, it automatically runs as a WebJob. To learn more about publishing, see [Develop and deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md).
+During deployment, you create an app service instance where you'll run your functions. When you publish a .NET console app to App Service in Azure, it automatically runs as a WebJob. To learn more about publishing, see [Develop and deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md).
### Create Azure resources
For a continuous WebJob, you should enable the Always on setting in the site so
With the web app created in Azure, it's time to publish the WebJobs project.
-1. In the **Publish** page under **Hosting**, select the edit button and change the **WebJob Type** to `Continuous` and select **Save**. This makes sure that the WebJob is running when messages are added to the queue. Triggered WebJobs are typically used only for manual webhooks.
+1. In the **Publish** page under **Hosting**, select the edit button and change the **WebJob Type** to `Continuous` and select **Save**. This makes sure that the WebJob is running when messages are added to the queue. Triggered WebJobs are typically used only for manual webhooks.
-1. Select the **Publish** button at the top right corner of the **Publish** page. When the operation completes, your WebJob is running on Azure.
+ ![Change WebJob type from the VS 2022 Publish window.](./media/webjobs-sdk-get-started/change-webjob-type.png)
++
+2. Select the **Publish** button at the top right corner of the **Publish** page. When the operation completes, your WebJob is running on Azure.
### Create a storage connection app setting
This initializes the Application Insights logging provider with default [filteri
1. In **Solution Explorer**, right-click the project and select **Publish**.
-1. As before, use **Cloud Explorer** in Visual Studio to create a queue message like you did [earlier](#test-locally), except enter *Hello App Insights!* as the message text.
+1. As before, use the Azure portal to create a queue message like you did [earlier](#test-locally), except enter *Hello App Insights!* as the message text.
1. In your **Publish** profile page, select the three dots above **Hosting** to show **Hosting profile section actions** and choose **Open in Azure Portal**.
This initializes the Application Insights logging provider with default [filteri
Bindings simplify code that reads and writes data. Input bindings simplify code that reads data. Output bindings simplify code that writes data.
-### Add input binding
+### Add bindings
+
+Input bindings simplify code that reads data. For this example, the queue message is the name of a blob, which you'll use to find and read a blob in Azure Storage. You will then use output bindings to write a copy of the file to the same container.
+
+1. In **Functions.cs**, add a `using`:
-Input bindings simplify code that reads data. For this example, the queue message is the name of a blob, which you'll use to find and read a blob in Azure Storage.
+ ```cs
+ using System.IO;
+ ```
-1. In *Functions.cs*, replace the `ProcessQueueMessage` method with the following code:
+2. Replace the `ProcessQueueMessage` method with the following code:
```cs public static void ProcessQueueMessage( [QueueTrigger("queue")] string message, [Blob("container/{queueTrigger}", FileAccess.Read)] Stream myBlob,
+ [Blob("container/copy-{queueTrigger}", FileAccess.Write)] Stream outputBlob,
ILogger logger) { logger.LogInformation($"Blob name:{message} \n Size: {myBlob.Length} bytes");
+ myBlob.CopyTo(outputBlob);
} ```-
+
In this code, `queueTrigger` is a [binding expression](../azure-functions/functions-bindings-expressions-patterns.md), which means it resolves to a different value at runtime. At runtime, it has the contents of the queue message.
-1. Add a `using`:
+ This code uses output bindings to create a copy of the file identified by the queue message. The file copy is prefixed with *copy-*.
- ```cs
- using System.IO;
- ```
+3. In **Program.cs**, in the `ConfigureWebJobs` extension method, add the `AddAzureStorageBlobs` method on the [`HostBuilder`](/dotnet/api/microsoft.extensions.hosting.hostbuilder) instance (before the `Build` command) to initialize the Storage extension. At this point, the `ConfigureWebJobs` method looks like this:
+
+ ```cs
+ builder.ConfigureWebJobs(b =>
+ {
+ b.AddAzureStorageCoreServices();
+ b.AddAzureStorageQueues();
+ b.AddAzureStorageBlobs();
+ });
+ ```
-1. Create a blob container in your storage account.
+4. Create a blob container in your storage account.
- a. In **Cloud Explorer** in Visual Studio, expand the node for your storage account, right-click **Blobs**, and then select **Create Blob Container**.
+ a. In the Azure portal, navigate to the **Containers** tab under **Data storage**, and then select **+ Container**.
- b. In the **Create Blob Container** dialog, enter *container* as the container name, and then select **OK**.
+ b. In the **New container** dialog, enter *container* as the container name, and then select **Create**.
-1. Upload the *Program.cs* file to the blob container. (This file is used here as an example; you could upload any text file and create a queue message with the file's name.)
+5. Upload the *Program.cs* file to the blob container. (This file is used here as an example; you could upload any text file and create a queue message with the file's name.)
- a. In **Cloud Explorer**, double-click the node for the container you created.
+ a. Select the new container you created.
- b. In the **Container** window, select the **Upload** button.
+ b. Select the **Upload** button.
![Blob upload button](./media/webjobs-sdk-get-started/blob-upload-button.png) c. Find and select *Program.cs*, and then select **OK**.
-1. Create a queue message in the queue you created earlier, with *Program.cs* as the text of the message.
-
- ![Queue message Program.cs](./media/webjobs-sdk-get-started/queue-msg-program-cs.png)
-
-1. Run the project locally.
-
- The queue message triggers the function, which then reads the blob and logs its length. The console output looks like this:
-
- ```console
- Found the following functions:
- ConsoleApp1.Functions.ProcessQueueMessage
- Job host started
- Executing 'Functions.ProcessQueueMessage' (Reason='New queue message detected on 'queue'.', Id=5a2ac479-de13-4f41-aae9-1361f291ff88)
- Blob name:Program.cs
- Size: 532 bytes
- Executed 'Functions.ProcessQueueMessage' (Succeeded, Id=5a2ac479-de13-4f41-aae9-1361f291ff88)
- ```
-### Add an output binding
-
-Output bindings simplify code that writes data. This example modifies the previous one by writing a copy of the blob instead of logging its size. Blob storage bindings are included in the Azure Storage extension package that we installed previously.
-
-1. Replace the `ProcessQueueMessage` method with the following code:
-
- ```cs
- public static void ProcessQueueMessage(
- [QueueTrigger("queue")] string message,
- [Blob("container/{queueTrigger}", FileAccess.Read)] Stream myBlob,
- [Blob("container/copy-{queueTrigger}", FileAccess.Write)] Stream outputBlob,
- ILogger logger)
- {
- logger.LogInformation($"Blob name:{message} \n Size: {myBlob.Length} bytes");
- myBlob.CopyTo(outputBlob);
- }
- ```
-
-1. Create another queue message with *Program.cs* as the text of the message.
-
-1. Run the project locally.
-
- The queue message triggers the function, which then reads the blob, logs its length, and creates a new blob. The console output is the same, but when you go to the blob container window and select **Refresh**, you see a new blob named *copy-Program.cs.*
- ### Republish the project 1. In **Solution Explorer**, right-click the project and select **Publish**. 1. In the **Publish** dialog, make sure that the current profile is selected and then select **Publish**. Results of the publish are detailed in the **Output** window.
-1. Verify the function in Azure by again uploading a file to the blob container and adding a message to the queue that is the name of the uploaded file. You see the message get removed from the queue and a copy of the file created in the blob container.
+1. Create a queue message in the queue you created earlier, with *Program.cs* as the text of the message.
+
+ ![Queue message Program.cs](./media/webjobs-sdk-get-started/queue-msg-program-cs.png)
+
+1. A copy of the file, *copy-Program.cs*, will appear in the blob container.
## Next steps
azure-functions Durable Functions Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-create-first-csharp.md
You have used Visual Studio Code to create and publish a C# durable function app
::: zone pivot="code-editor-visualstudio"
-In this article, you learn how to use Visual Studio 2019 to locally create and test a "hello world" durable function. This function orchestrates and chains-together calls to other functions. You then publish the function code to Azure. These tools are available as part of the Azure development workload in Visual Studio 2019.
+In this article, you learn how to use Visual Studio 2022 to locally create and test a "hello world" durable function. This function orchestrates and chains-together calls to other functions. You then publish the function code to Azure. These tools are available as part of the Azure development workload in Visual Studio 2022.
![Screenshot shows a Visual Studio 2019 window with a durable function.](./media/durable-functions-create-first-csharp/functions-vs-complete.png)
In this article, you learn how to use Visual Studio 2019 to locally create and t
To complete this tutorial:
-* Install [Visual Studio 2019](https://visualstudio.microsoft.com/vs/). Make sure that the **Azure development** workload is also installed. Visual Studio 2017 also supports Durable Functions development, but the UI and steps differ.
+* Install [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure that the **Azure development** workload is also installed. Visual Studio 2019 also supports Durable Functions development, but the UI and steps differ.
* Verify you have the [Azure Storage Emulator](../../storage/common/storage-use-emulator.md) installed and running.
The Azure Functions template creates a project that can be published to a functi
1. Type a **Project name** for your project, and select **OK**. The project name must be valid as a C# namespace, so don't use underscores, hyphens, or any other nonalphanumeric characters.
-1. In **Create a new Azure Functions Application**, use the settings specified in the table that follows the image.
+1. Under **Additional information**, use the settings specified in the table that follows the image.
![Create a new Azure Functions Application dialog in Visual Studio](./media/durable-functions-create-first-csharp/functions-vs-new-function.png) | Setting | Suggested value | Description | | | - |-- |
- | **Version** | Azure Functions 3.0 <br />(.NET Core) | Creates a function project that uses the version 3.0 runtime of Azure Functions, which supports .NET Core 3.1. For more information, see [How to target Azure Functions runtime version](../functions-versions.md). |
- | **Template** | Empty | Creates an empty function app. |
+ | **Functions worker** | .NET 6 | Creates a function project that supports .NET 6 and the Azure Functions Runtime 4.0. For more information, see [How to target Azure Functions runtime version](../functions-versions.md). |
+ | **Function** | Empty | Creates an empty function app. |
| **Storage account** | Storage Emulator | A storage account is required for durable function state management. | 4. Select **Create** to create an empty function project. This project has the basic configuration files needed to run your functions.
The following steps use a template to create the durable function code in your p
1. Verify **Azure Function** is selected from the add menu, type a name for your C# file, and then select **Add**.
-1. Select the **Durable Functions Orchestration** template and then select **Ok**
+1. Select the **Durable Functions Orchestration** template and then select **Add**.
![Select durable template](./media/durable-functions-create-first-csharp/functions-vs-select-template.png)
Azure Functions Core Tools lets you run an Azure Functions project on your local
```json {
+ "name": "Durable",
"instanceId": "d495cb0ac10d4e13b22729c37e335190", "runtimeStatus": "Completed", "input": null,
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
The following table explains the binding configuration properties that you set i
|**direction** | Required. Must be set to `in`. | |**name** | Required. The name of the variable that represents the query results in function code. | | **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
-| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. Optional keywords in the connection string value are [available to refine SQL bindings connectivity](./functions-bindings-azure-sql.md#sql-connection-string). |
| **commandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. | | **parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). | ::: zone-end
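Putting these properties together, a function.json input binding might look like the following sketch (the binding name, query, parameter, and setting name are illustrative):

```json
{
  "name": "todoItems",
  "type": "sql",
  "direction": "in",
  "commandText": "SELECT * FROM dbo.ToDo WHERE Id = @Id",
  "commandType": "Text",
  "parameters": "@Id={Query.id}",
  "connectionStringSetting": "SqlConnectionString"
}
```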
The following table explains the binding configuration properties that you set i
::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-python"
-The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
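+
+As a rough sketch under those assumptions (the `ToDoItem` type, `dbo.ToDo` table, route, and `SqlConnectionString` setting name are illustrative, not from this article), an input binding on an HTTP-triggered C# function might look like this:
+
+```csharp
+using System.Collections.Generic;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+
+public static class GetToDoItem
+{
+    [FunctionName("GetToDoItem")]
+    public static IActionResult Run(
+        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todo/{id}")] HttpRequest req,
+        [Sql("SELECT * FROM dbo.ToDo WHERE Id = @Id",          // command text (T-SQL query)
+            CommandType = System.Data.CommandType.Text,
+            Parameters = "@Id={id}",
+            ConnectionStringSetting = "SqlConnectionString")]
+        IEnumerable<ToDoItem> toDoItems)
+    {
+        // The binding runs the query and binds the resulting rows before the function body executes.
+        return new OkObjectResult(toDoItems);
+    }
+}
+```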
::: zone-end
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
The following table explains the binding configuration properties that you set i
|**direction** | Required. Must be set to `out`. | |**name** | Required. The name of the variable that represents the entity in function code. | | **commandText** | Required. The name of the table being written to by the binding. |
-| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable. Optional keywords in the connection string value are [available to refine SQL bindings connectivity](./functions-bindings-azure-sql.md#sql-connection-string). |
::: zone-end
The following table explains the binding configuration properties that you set i
## Usage ::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-python"
-The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
::: zone-end
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
Support for Python durable functions with SQL bindings isn't yet available.
::: zone-end
+## SQL connection string
+
+Azure SQL bindings for Azure Functions have a required property for connection string on both [input](./functions-bindings-azure-sql-input.md) and [output](./functions-bindings-azure-sql-output.md) bindings. SQL bindings passes the connection string to the Microsoft.Data.SqlClient library and supports the connection string as defined in the [SqlClient ConnectionString documentation](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.1&preserve-view=true). Notable keywords include:
+
+- `Authentication` allows a function to connect to Azure SQL with Azure Active Directory, including [Active Directory Managed Identity](./functions-identity-access-azure-sql-with-managed-identity.md)
+- `Command Timeout` allows a function to wait for a specified amount of time in seconds before terminating a query (default 30 seconds)
+- `ConnectRetryCount` allows a function to automatically make additional reconnection attempts, which is especially applicable to the Azure SQL Database serverless tier (default 1)
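+
+As a sketch, a `local.settings.json` entry might combine these keywords in a single connection string value. The server, database, and `SqlConnectionString` setting name below are placeholders:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "SqlConnectionString": "Server=tcp:<server>.database.windows.net,1433;Database=<database>;Authentication=Active Directory Managed Identity;Command Timeout=60;ConnectRetryCount=3;"
+  }
+}
+```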
++ ## Considerations - Because the Azure SQL bindings don't have a trigger, you need to use another supported trigger to start a function that reads from or writes to an Azure SQL database.
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
The Azure Functions project template in Visual Studio creates a C# class library
1. In **Configure your new project**, enter a **Project name** for your project, and then select **Create**. The function app name must be valid as a C# namespace, so don't use underscores, hyphens, or any other nonalphanumeric characters.
-1. For the **Create a new Azure Functions application** settings, use the values in the following table:
+1. For the **Additional information** settings, use the values in the following table:
| Setting | Value | Description | | | - |-- |
- | **.NET version** | **.NET 6** | This value creates a function project that runs in-process with version 4.x of the Azure Functions runtime. You can also choose **.NET 6 (isolated)** to create a project that runs in a separate worker process. Azure Functions 1.x supports the .NET Framework. For more information, see [Azure Functions runtime versions overview](./functions-versions.md). |
- | **Function template** | **HTTP trigger** | This value creates a function triggered by an HTTP request. |
- | **Storage account (AzureWebJobsStorage)** | **Storage emulator** | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the Azurite emulator is used. |
+ | **Functions worker** | **.NET 6** or **.NET 6 Isolated** | When you choose **.NET 6**, you create a project that runs in-process with version 4.x of the Azure Functions runtime. When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Azure Functions 1.x supports the .NET Framework. For more information, see [Azure Functions runtime versions overview](./functions-versions.md). |
+ | **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. |
+ | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the Azurite emulator is used. |
| **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). | :::image type="content" source="../../includes/media/functions-vs-tools-create/functions-project-settings-v4.png" alt-text="Azure Functions project settings"::: Make sure you set the **Authorization level** to **Anonymous**. If you choose the default level of **Function**, you're required to present the [function key](./functions-bindings-http-webhook-trigger.md#authorization-keys) in requests to access your function endpoint.
-1. Select **Create** to create the function project and HTTP trigger function.
+2. Select **Create** to create the function project and HTTP trigger function.
Visual Studio creates a project and class that contains boilerplate code for the HTTP trigger function type. The boilerplate code sends an HTTP response that includes a value from the request body or query string. The `HttpTrigger` attribute specifies that the function is triggered by an HTTP request.
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
Title: Develop Azure Functions using Visual Studio
-description: Learn how to develop and test Azure Functions by using Azure Functions Tools for Visual Studio 2019.
+description: Learn how to develop and test Azure Functions by using Azure Functions Tools for Visual Studio 2022.
ms.devlang: csharp
Visual Studio provides the following benefits when you develop your functions:
This article provides details about how to use Visual Studio to develop C# class library functions and publish them to Azure. Before you read this article, consider completing the [Functions quickstart for Visual Studio](functions-create-your-first-function-visual-studio.md).
-Unless otherwise noted, procedures and examples shown are for Visual Studio 2019.
+Unless otherwise noted, procedures and examples shown are for Visual Studio 2022.
## Prerequisites -- Azure Functions Tools. To add Azure Function Tools, include the **Azure development** workload in your Visual Studio installation. Azure Functions Tools is available in the Azure development workload starting with Visual Studio 2017.
+- Azure Functions Tools. To add Azure Function Tools, include the **Azure development** workload in your Visual Studio installation. If you are using Visual Studio 2017, you may need to [follow some additional installation steps](#azure-functions-tools-with-visual-studio-2017).
- Other resources that you need, such as an Azure Storage account, are created in your subscription during the publishing process. - [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-> [!NOTE]
-> In Visual Studio 2017, the Azure development workload installs Azure Functions Tools as a separate extension. When you update your Visual Studio 2017 installation, make sure that you're using the [most recent version](#check-your-tools-version) of the Azure Functions Tools. The following sections show you how to check and (if needed) update your Azure Functions Tools extension in Visual Studio 2017.
->
-> Skip these sections if you're using Visual Studio 2019.
-
-### <a name="check-your-tools-version"></a>Check your tools version in Visual Studio 2017
-
-1. From the **Tools** menu, choose **Extensions and Updates**. Expand **Installed** > **Tools**, and then choose **Azure Functions and Web Jobs Tools**.
-
- ![Verify the Functions tools version](./media/functions-develop-vs/functions-vstools-check-functions-tools.png)
-
-1. Note the installed **Version** and compare this version with the latest version listed in the [release notes](https://github.com/Azure/Azure-Functions/blob/master/VS-AzureTools-ReleaseNotes.md).
-
-1. If your version is older, update your tools in Visual Studio as shown in the following section.
-
-### Update your tools in Visual Studio 2017
-
-1. In the **Extensions and Updates** dialog, expand **Updates** > **Visual Studio Marketplace**, choose **Azure Functions and Web Jobs Tools** and select **Update**.
-
- ![Update the Functions tools version](./media/functions-develop-vs/functions-vstools-update-functions-tools.png)
-
-1. After the tools update is downloaded, select **Close**, and then close Visual Studio to trigger the tools update with VSIX Installer.
-
-1. In VSIX Installer, choose **Modify** to update the tools.
-
-1. After the update is complete, choose **Close**, and then restart Visual Studio.
-
-> [!NOTE]
-> In Visual Studio 2019 and later, the Azure Functions tools extension is updated as part of Visual Studio.
- ## Create an Azure Functions project [!INCLUDE [Create a project using the Azure Functions](../../includes/functions-vstools-create.md)]
After you create an Azure Functions project, the project template creates a C# p
* **local.settings.json**: Maintains settings used when running functions locally. These settings aren't used when running in Azure. For more information, see [Local settings file](#local-settings). >[!IMPORTANT]
- >Because the local.settings.json file can contain secrets, you must exclude it from your project source control. Ensure the **Copy to Output Directory** setting for this file is set to **Copy if newer**.
+ >Because the local.settings.json file can contain secrets, you must exclude it from your project source control. Make sure the **Copy to Output Directory** setting for this file is set to **Copy if newer**.
For more information, see [Functions class library project](functions-dotnet-class-library.md#functions-class-library-project).
The Functions runtime uses an Azure Storage account internally. For all trigger
To set the storage account connection string:
-1. In Visual Studio, select **View** > **Cloud Explorer**.
+1. In the Azure portal, navigate to your storage account.
-2. In **Cloud Explorer**, expand **Storage Accounts**, and then select your storage account. In the **Properties** tab, copy the **Primary Connection String** value.
+2. In the **Access keys** tab, under **Security + networking**, copy the **Connection string** value for **key1**.
2. In your project, open the local.settings.json file and set the value of the `AzureWebJobsStorage` key to the connection string you copied.
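   The result is a sketch like the following; the account name and key are placeholders, and the worker runtime value depends on your project:

   ```json
   {
     "IsEncrypted": false,
     "Values": {
       "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
       "FUNCTIONS_WORKER_RUNTIME": "dotnet"
     }
   }
   ```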
In C# class library functions, the bindings used by the function are defined by
2. Select **Azure Function**, enter a **Name** for the class, and then select **Add**.
-3. Choose your trigger, set the binding properties, and then select **OK**. The following example shows the settings for creating a Queue storage trigger function.
+3. Choose your trigger, set the binding properties, and then select **Add**. The following example shows the settings for creating a Queue storage trigger function.
![Create a Queue storage trigger function](./media/functions-develop-vs/functions-vstools-create-queuetrigger.png)
- This trigger example uses a connection string with a key named `QueueStorage`. Define this connection string setting in the [local.settings.json file](functions-develop-local.md#local-settings-file).
+ You're then prompted to choose between two Azure storage emulators or reference a provisioned Azure storage account.
+
+ This trigger example uses a connection string with a key named `QueueStorage`. This key, stored in the [local.settings.json file](functions-develop-local.md#local-settings-file), references either an Azure storage emulator or an Azure storage account.
4. Examine the newly added class. You see a static `Run()` method that's attributed with the `FunctionName` attribute. This attribute indicates that the method is the entry point for the function.
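   For reference, the generated class for a Queue storage trigger looks roughly like the following sketch; the queue name `myqueue-items` and the `QueueStorage` connection key match the example settings above:

   ```csharp
   using Microsoft.Azure.WebJobs;
   using Microsoft.Extensions.Logging;

   public static class QueueTriggerCSharp
   {
       [FunctionName("QueueTriggerCSharp")]
       public static void Run(
           [QueueTrigger("myqueue-items", Connection = "QueueStorage")] string myQueueItem,
           ILogger log)
       {
           // Runs once for each message added to the queue.
           log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
       }
   }
   ```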
Use the following steps to publish your project to a function app in Azure.
Visual Studio doesn't upload these settings automatically when you publish the project. Any settings you add in the local.settings.json you must also add to the function app in Azure.
-The easiest way to upload the required settings to your function app in Azure is to select the **Manage Azure App Service settings** link that appears after you successfully publish your project.
+The easiest way to upload the required settings to your function app in Azure is to select the ellipses (**...**) next to the **Hosting** section and then select the **Manage Azure App Service settings** link that appears after you successfully publish your project.
:::image type="content" source="./media/functions-develop-vs/functions-vstools-app-settings.png" alt-text="Settings in Publish window":::
To set up your environment, create a function and test the app. The following st
1. [Create a new Functions app](functions-get-started.md) and name it **Functions** 2. [Create an HTTP function from the template](functions-get-started.md) and name it **MyHttpTrigger**. 3. [Create a timer function from the template](functions-create-scheduled-function.md) and name it **MyTimerTrigger**.
-4. [Create an xUnit Test app](https://xunit.net/docs/getting-started/netcore/cmdline) in the solution and name it **Functions.Tests**.
+4. [Create an xUnit Test app](https://xunit.net/docs/getting-started/netcore/cmdline) in the solution and name it **Functions.Tests**. Remove the default test files.
5. Use NuGet to add a reference from the test app to [Microsoft.AspNetCore.Mvc](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc/) 6. [Reference the *Functions* app](/visualstudio/ide/managing-references-in-a-project) from *Functions.Tests* app.
namespace Functions.Tests
public void Timer_should_log_message() { var logger = (ListLogger)TestFactory.CreateLogger(LoggerTypes.List);
- MyTimerTrigger.Run(null, logger);
+ new MyTimerTrigger().Run(null, logger);
var msg = logger.Logs[0]; Assert.Contains("C# Timer trigger function executed at", msg); }
The members implemented in this class are:
- **Http_trigger_should_return_string_from_member_data**: This test uses xUnit attributes to provide sample data to the HTTP function. -- **Timer_should_log_message**: This test creates an instance of `ListLogger` and passes it to a timer function. Once the function is run, then the log is checked to ensure the expected message is present.
+- **Timer_should_log_message**: This test creates an instance of `ListLogger` and passes it to a timer function. Once the function is run, then the log is checked to make sure the expected message is present.
If you want to access application settings in your tests, you can [inject](functions-dotnet-dependency-injection.md) an `IConfiguration` instance with mocked environment variable values into your function. ### Run tests
-To run the tests, navigate to the **Test Explorer** and select **Run all**.
+To run the tests, navigate to the **Test Explorer** and select **Run All Tests in View**.
![Testing Azure Functions with C# in Visual Studio](./media/functions-test-a-function/azure-functions-test-visual-studio-xunit.png)
To run the tests, navigate to the **Test Explorer** and select **Run all**.
To debug the tests, set a breakpoint on a test, navigate to the **Test Explorer** and select **Run > Debug Last Run**.
+## Azure Functions tools with Visual Studio 2017
+
+Azure Functions Tools is available in the Azure development workload starting with Visual Studio 2017. In Visual Studio 2017, the Azure development workload installs Azure Functions Tools as a separate extension. In Visual Studio 2019 and later, the Azure Functions tools extension is updated as part of Visual Studio.
+
+When you update your Visual Studio 2017 installation, make sure that you're using the [most recent version](#check-your-tools-version) of the Azure Functions Tools. The following sections show you how to check and (if needed) update your Azure Functions Tools extension in Visual Studio 2017.
+
+### <a name="check-your-tools-version"></a>Check your tools version in Visual Studio 2017
+
+1. From the **Tools** menu, choose **Extensions and Updates**. Expand **Installed** > **Tools**, and then choose **Azure Functions and Web Jobs Tools**.
+
+ ![Verify the Functions tools version](./media/functions-develop-vs/functions-vstools-check-functions-tools.png)
+
+1. Note the installed **Version** and compare this version with the latest version listed in the [release notes](https://github.com/Azure/Azure-Functions/blob/master/VS-AzureTools-ReleaseNotes.md).
+
+1. If your version is older, update your tools in Visual Studio as shown in the following section.
+
+### Update your tools in Visual Studio 2017
+
+1. In the **Extensions and Updates** dialog, expand **Updates** > **Visual Studio Marketplace**, choose **Azure Functions and Web Jobs Tools** and select **Update**.
+
+ ![Update the Functions tools version](./media/functions-develop-vs/functions-vstools-update-functions-tools.png)
+
+1. After the tools update is downloaded, select **Close**, and then close Visual Studio to trigger the tools update with VSIX Installer.
+
+1. In VSIX Installer, choose **Modify** to update the tools.
+
+1. After the update is complete, choose **Close**, and then restart Visual Studio.
## Next steps
azure-functions Functions Scenario Database Table Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scenario-database-table-cleanup.md
You must have previously published your app to Azure. If you haven't already don
You need to add the NuGet package that contains the SqlClient library. This data access library is needed to connect to SQL Database.
-1. Open your local function app project in Visual Studio 2019.
+1. Open your local function app project in Visual Studio 2022.
1. In Solution Explorer, right-click the function app project and choose **Manage NuGet Packages**. 1. On the **Browse** tab, search for `System.Data.SqlClient` and, when found, select it.
-1. In the **System.Data.SqlClient** page, select version `4.5.1` and then click **Install**.
+1. In the **System.Data.SqlClient** page, select version `4.8.3` and then click **Install**.
1. When the install completes, review the changes and then click **OK** to close the **Preview** window.
Now, you can add the C# function code that connects to your SQL Database.
1. With the **Azure Functions** template selected, name the new item something like `DatabaseCleanup.cs` and select **Add**.
-1. In the **New Azure function** dialog box, choose **Timer trigger** and then **OK**. This dialog creates a code file for the timer triggered function.
+1. In the **New Azure function** dialog box, choose **Timer trigger** and then **Add**. This dialog creates a code file for the timer triggered function.
1. Open the new code file and add the following using statements at the top of the file:
Now, you can add the C# function code that connects to your SQL Database.
On the first execution, you should update 32 rows of data. Following runs update no data rows, unless you make changes to the SalesOrderHeader table data so that more rows are selected by the `UPDATE` statement.
-If you plan to [publish this function](functions-develop-vs.md#publish-to-azure), remember to change the `TimerTrigger` attribute to a more reasonable [cron schedule](functions-bindings-timer.md#ncrontab-expressions) than every 15 seconds.
+If you plan to [publish this function](functions-develop-vs.md#publish-to-azure), remember to change the `TimerTrigger` attribute to a more reasonable [cron schedule](functions-bindings-timer.md#ncrontab-expressions) than every 15 seconds, as in the sketch below. Also, make sure that the function app has network access to the Azure SQL Database instance by granting access to Azure IP addresses.
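+
+For example, a sketch of a schedule that runs the cleanup once a day at 2:30 AM (the function name follows this article's example):
+
+```csharp
+// NCRONTAB format: {second} {minute} {hour} {day} {month} {day-of-week}
+[FunctionName("DatabaseCleanup")]
+public static void Run([TimerTrigger("0 30 2 * * *")] TimerInfo myTimer, ILogger log)
+{
+    // ...existing cleanup logic...
+}
+```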
## Next steps
azure-functions Openapi Apim Integrate Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/openapi-apim-integrate-visual-studio.md
In this tutorial, you learn how to:
The serverless function you create provides an API that lets you determine whether an emergency repair on a wind turbine is cost-effective. Because both the function app and API Management instance you create use consumption plans, your cost for completing this tutorial is minimal. > [!NOTE]
-> The OpenAPI and API Management integration featured in this article is currently in preview. This method for exposing a serverless API is only supported for C# class library (.NET Core 3.1) functions. All other language runtimes should instead [use Azure API Management integration from the portal](functions-openapi-definition.md).
+> The OpenAPI and API Management integration featured in this article is currently in preview. This method for exposing a serverless API is only supported for [in-process](functions-dotnet-class-library.md) C# class library functions. [Isolated process](dotnet-isolated-process-guide.md) C# class library functions and all other language runtimes should instead [use Azure API Management integration from the portal](functions-openapi-definition.md).
## Prerequisites
-+ [Visual Studio 2019](https://azure.microsoft.com/downloads/), version 16.10, or a later version. Make sure you select the **Azure development** workload during installation.
++ [Visual Studio 2022](https://azure.microsoft.com/downloads/). Make sure you select the **Azure development** workload during installation. + An active [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/dotnet/) before you begin.
The Azure Functions project template in Visual Studio creates a project that you
| Setting | Value | Description | | | - |-- |
- | **.NET version** | **.NET Core 3 (LTS)** | This value creates a function project that uses the version 3.x runtime of Azure Functions. OpenAPI file generation is only supported for version 3.x of the Functions runtime. |
+ | **Functions worker** | **.NET 6** | This value creates a function project that runs in-process on version 4.x of the Azure Functions runtime. OpenAPI file generation is only supported for versions 3.x and 4.x of the Functions runtime, and isolated process isn't supported. |
| **Function template** | **HTTP trigger with OpenAPI** | This value creates a function triggered by an HTTP request, with the ability to generate an OpenAPI definition file. |
- | **Storage account (AzureWebJobsStorage)** | **Storage emulator** | You can use the emulator for local development of HTTP trigger functions. Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. |
+ | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | **Selected** | You can use the emulator for local development of HTTP trigger functions. Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. |
| **Authorization level** | **Function** | When running in Azure, clients must provide a key when accessing the endpoint. For more information about keys and authorization, see [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). | ![Azure Functions project settings](./media/openapi-apim-integrate-vs/functions-project-settings.png)
Before you can publish your project, you must have a function app in your Azure
1. In the **Publish** tab, select the ellipses (**...**) next to **Hosting** and select **Open API in Azure portal**. The API Management instance you created is opened in the Azure portal in your default browser. This API Management instance is already linked to your function app.
-1. Under **APIs**, select **Azure Functions OpenAPI Extension** > **Test** > **POST Run**, then under **Inbound policy** select **Add policy**.
+1. Under **APIs**, select **OpenAPI Document on Azure Functions** > **POST Run**, then under **Inbound processing** select **Add policy**.
:::image type="content" source="media/openapi-apim-integrate-vs/apim-add-policy.png" alt-text="Add an inbound policy to the API":::
-1. In **Add inbound policy**, choose **Set query parameters**, type `code` for **Name**, select **+Value**, paste in the copied function key, and select **Save**. API Management includes the function key when it passes call through to the function endpoint.
+1. Below **Inbound processing**, in **Set query parameters**, type `code` for **Name**, select **+Value**, paste in the copied function key, and select **Save**. API Management includes the function key when it passes calls through to the function endpoint.
+
+ :::image type="content" source="media/openapi-apim-integrate-vs/inbound-processing-rule.png" alt-text="Provide Function credentials to the API inbound processing rule":::
Now that the function key is set, you can call the API to verify that it works when hosted in Azure.
Now that the function key is set, you can call the API to verify that it works w
If your API works as expected, you can download the OpenAPI definition.
-1. 1. Under **APIs**, select **Azure Functions OpenAPI Extension**, select the ellipses (**...**), and select **Export**.
+1. Under **APIs**, select **OpenAPI Document on Azure Functions**, select the ellipses (**...**), and then select **Export**.
![Download OpenAPI definition](media/openapi-apim-integrate-vs/download-definition.png)
Select **Delete resource group**, type the name of your group in the text box to
## Next steps
-You've used Visual Studio 2019 to create a function that is self-documenting because of the [OpenAPI Extension](https://github.com/Azure/azure-functions-openapi-extension) and integrated with API Management. You can now refine the definition in API Management in the portal. You can also [learn more about API Management](../api-management/api-management-key-concepts.md).
+You've used Visual Studio 2022 to create a function that is self-documenting because of the [OpenAPI Extension](https://github.com/Azure/azure-functions-openapi-extension) and integrated with API Management. You can now refine the definition in API Management in the portal. You can also [learn more about API Management](../api-management/api-management-key-concepts.md).
> [!div class="nextstepaction"] > [Edit the OpenAPI definition in API Management](../api-management/edit-api.md)
azure-government Documentation Government Stig Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-linux-vm.md
Title: Deploy STIG-compliant Linux Virtual Machines (Preview)
-description: This quickstart shows you how to deploy a STIG-compliant Linux VM (Preview) from Azure Marketplace
+description: This quickstart shows you how to deploy a STIG-compliant Linux VM (Preview) from the Azure portal or Azure Government portal.
Last updated 06/14/2021-+ # Deploy STIG-compliant Linux Virtual Machines (Preview)
azure-government Documentation Government Stig Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-windows-vm.md
Title: Deploy STIG-compliant Windows Virtual Machines (Preview)
-description: This quickstart shows you how to deploy a STIG-compliant Windows VM (Preview) from Azure Marketplace
+description: This quickstart shows you how to deploy a STIG-compliant Windows VM (Preview) from the Azure portal or Azure Government portal.
Last updated 06/14/2021-+ # Deploy STIG-compliant Windows Virtual Machines (Preview)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The following tables list the operating systems that are supported by the Azure
<sup>2</sup> Using the Azure Monitor agent [client installer (preview)](./azure-monitor-agent-windows-client.md) ### Linux
+> [!NOTE]
+> For the Dependency agent, also check the supported kernel versions. For details, see the "Dependency agent Linux kernel support" table below.
++ | Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Dependency agent | Diagnostics extension <sup>2</sup>| |:|::|::|::|:: | AlmaLinux | X | | | |
The following tables list the operating systems that are supported by the Azure
| SUSE Linux Enterprise Server 12 SP5 | X | X | X | X | | SUSE Linux Enterprise Server 12 | X | X | X | X | | Ubuntu 22.04 LTS | X | | | |
-| Ubuntu 20.04 LTS | X | X | X | X |
+| Ubuntu 20.04 LTS | X | X | X | X <sup>4</sup> |
| Ubuntu 18.04 LTS | X | X | X | X | | Ubuntu 16.04 LTS | X | X | X | X | | Ubuntu 14.04 LTS | | X | | X |
The following tables list the operating systems that are supported by the Azure
<sup>3</sup> Known issue collecting Syslog events in versions prior to 1.9.0.
+<sup>4</sup> Not all kernel versions are supported; check the supported kernel versions in the table below.
+ #### Dependency agent Linux kernel support Since the Dependency agent works at the kernel level, support is also dependent on the kernel version. As of Dependency agent version 9.10.* the agent supports * kernels. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent.
azure-monitor Alerts Managing Alert States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-managing-alert-states.md
- Title: Manage alert and smart group states
-description: Managing the states of the alert and smart group instances
-- Previously updated : 2/23/2022---
-# Manage alert and smart group states
-
-Alerts in Azure Monitor now have an [alert state and a monitor condition](./alerts-overview.md) and, similarly, Smart Groups have a [smart group state](./alerts-smartgroups-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json). Changes to the state are now captured in history associated with the respective alert or smart group. This article walks you through the process of changing the state, for both an alert and a smart group.
-
-## Change the state of an alert
-
-1. You can change the state of an alert in the following different ways:
- * In the All Alerts page, click the checkbox next to the alerts you wish to change the state of, and click change state.
- ![Screenshot shows the All Alerts page with Change state selected.](./media/alerts-managing-alert-states/state-all-alerts.jpg)
- * In the Alert Details page for a particular alert instance, you can click change state
- ![Screenshot shows the Alert Details page with Change alert state selected.](./media/alerts-managing-alert-states/state-alert-details.jpg)
- * In the Alert Details page for a specific alert instance, in the Smart Group pane you can click the checkbox next to the alerts you wish
- ![Screenshot shows the Alert Details page for the heartbeat alert with some instances having check marks.](./media/alerts-managing-alert-states/state-alert-details-sg.jpg)
-
- * In the Smart Group Details page, in the list of member alerts, you can click the checkbox next to the alerts you wish to change the state of and click Change State.
- ![Screenshot shows the Smart Group Details page where you can select alerts for which to change state.](./media/alerts-managing-alert-states/state-sg-details-alerts.jpg)
-1. On clicking Change State, a popup opens up allowing you to select the state (New/Acknowledged/Closed) and enter a comment if necessary.
-![Screenshot shows the Details Change alert dialog box.](./media/alerts-managing-alert-states/state-alert-change.jpg)
-1. Once this is done, the state change is recorded in the history of the respective alert. This can be viewed by opening the respective Details page, and checking the history section.
-![Screenshot shows the history of state changes.](./media/alerts-managing-alert-states/state-alert-history.jpg)
-
-## Change the state of a smart group
-1. You can change the state of a smart group in the following different ways:
- 1. In the Smart Group list page, you can click the checkbox next to the smart groups you wish to change the state of and click Change State
- ![Screenshot shows the Change State page for Smart Groups.](./media/alerts-managing-alert-states/state-sg-list.jpg)
- 1. In the Smart Group Details page, you can click change state
- ![Screenshot shows the Smart Group Details page with Change smart group state selected.](./media/alerts-managing-alert-states/state-sg-details.jpg)
-1. On clicking Change State, a popup opens up allowing you to select the state (New/Acknowledged/Closed) and enter a comment if necessary.
-![Screenshot shows the Change state dialog box for the smart group.](./media/alerts-managing-alert-states/state-sg-change.jpg)
- > [!NOTE]
- > Changing the state of a smart group does not change the state of the individual member alerts.
-
-1. Once this is done, the state change is recorded in the history of the respective smart group. This can be viewed by opening the respective Details page, and checking the history section.
-![Screenshot shows the history of changes for the smart group.](./media/alerts-managing-alert-states/state-sg-history.jpg)
azure-monitor Alerts Managing Smart Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-managing-smart-groups.md
- Title: Manage smart groups (preview)
-description: Managing Smart Groups created over your alert instances
- Previously updated : 2/23/2022---
-# Manage smart groups (preview)
-
-[Smart groups (preview)](./alerts-smartgroups-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json) use machine learning algorithms to group together alerts on the basis of co-occurrence or similarity, so that the user can now manage smart groups instead of having to manage each alert individually. This article will walk you through how to access and use smart groups in Azure Monitor.
-
-1. To see the Smart Groups created for your alert instances you can either:
-
- 1. Click on **Smart Groups** from the **Alerts Summary** page.
- ![Screenshot shows the Alert Summary page with Smart groups highlighted.](./media/alerts-managing-smart-groups/sg-alerts-summary.jpg)
-
- 1. Click on Alerts by Smart Groups from the All Alerts page.
- ![Screenshot shows the All Alerts page with Alert by Smart Group highlighted.](./media/alerts-managing-smart-groups/sg-all-alerts.jpg)
-
-2. This takes you to the list view for all Smart Groups created over your alert instances. Instead of sifting through multiple alerts, you can now deal with the smart groups instead.
-![Screenshot shows the All Alerts page.](./media/alerts-managing-smart-groups/sg-list.jpg)
-
-3. Clicking on any Smart Group opens up the details page, where you can see the grouping reason, along with the member alerts. This aggregation allows you to deal with a singular smart group, instead of sifting through multiple alerts.
-![Screenshot shows the Details page.](./media/alerts-managing-smart-groups/sg-details.jpg)
azure-monitor Alerts Prepare Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-prepare-migration.md
The following table is a reference to the programmatic interfaces for both class
| Deployment script type | Classic alerts | New metric alerts | | - | -- | -- | |REST API | [microsoft.insights/alertrules](/rest/api/monitor/alertrules) | [microsoft.insights/metricalerts](/rest/api/monitor/metricalerts) |
-|Azure CLI | [az monitor alert](/cli/azure/monitor/alert) | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
+|Azure CLI | [az monitor alert](/cli/azure/monitor/alert) | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
|PowerShell | [Reference](/powershell/module/az.monitor/add-azmetricalertrule) | [Reference](/powershell/module/az.monitor/add-azmetricalertrulev2) | | Azure Resource Manager template | [For classic alerts](./alerts-enable-template.md)|[For new metric alerts](./alerts-metric-create-templates.md)|
If you're using a partner integration that's not listed here, confirm with the p
## Next steps - [How to use the migration tool](alerts-using-migration-tool.md)-- [Understand how the migration tool works](alerts-understand-migration.md)
+- [Understand how the migration tool works](alerts-understand-migration.md)
azure-monitor Alerts Smartgroups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-smartgroups-overview.md
- Title: Smart groups (preview)
-description: Smart Groups are aggregations of alerts that help you reduce alert noise
- Previously updated : 2/23/2022--
-# Smart groups (preview)
-
-A common challenge faced when dealing with alerts is sifting through the noise to find out what actually matters - smart groups are intended to be the solution to that problem.
-
-Smart groups are automatically created by using machine learning algorithms to combine related alerts that represent a single issue. When an alert is created, the algorithm adds it to a new smart group or an existing smart group based on information such as historical patterns, similar properties, and similar structure. For example, if % CPU on several virtual machines in a subscription simultaneously spikes leading to many individual alerts, and if such alerts have occurred together anytime in the past, these alerts will likely be grouped into a single Smart Group, suggesting a potential common root cause. This means that for someone troubleshooting alerts, smart groups not only allows them to reduce noise by managing related alerts as a single aggregated unit, it also guides them towards possible common root causes for their alerts.
-
-Currently, the algorithm only considers alerts from the same monitor service within a subscription. Smart groups can reduce up to 99% of alert noise through this consolidation. You can view the reason that alerts were included in a group in the smart group details page.
-
-You can view the details of smart groups and set the state similarly to how you can with alerts. Each alert is a member of one and only one smart group.
-
-## Smart group state
-
-Smart group state is a similar concept to the alert state, which allows you to manage the resolution process at the level of a smart group. Similar to the alert state, when a smart group is created, it has the **New** state, which can be changed to either **Acknowledged** or **Closed**.
-
-The following smart group states are supported.
-
-| State | Description |
-|:|:|
-| New | The issue has just been detected and has not yet been reviewed. |
-| Acknowledged | An administrator has reviewed the smart group and started working on it. |
-| Closed | The issue has been resolved. After a smart group has been closed, you can reopen it by changing it to another state. |
-
-[Learn how to change the state of your smart group.](./alerts-managing-alert-states.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
-
-> [!NOTE]
-> Changing the state of a smart group does not change the state of the individual member alerts.
-
-## Smart group details page
-
-The Smart group detail page is displayed when you select a smart group. It provides details about the smart group, including the reasoning that was used to create the group, and enables you to change its state.
-
-![Smart group detail](media/alerts-smartgroups-overview/smart-group-detail.png)
--
-The smart group detail page includes the following sections.
-
-| Section | Description |
-|:|:|
-| Alerts | Lists the individual alerts that are included in the smart group. Select an alert to open its alert detail page. |
-| History | Lists each action taken by the smart group and any changes that are made to it. This is currently limited to state changes and alert membership changes. |
-
-## Smart group taxonomy
-
-The name of a smart group is the name of its first alert. You can't create or rename a smart group.
-
-## Next steps
--- [Manage smart groups](./alerts-managing-smart-groups.md?toc=%2fazure%2fazure-monitor%2ftoc.json)-- [Change your alert and smart group state](./alerts-managing-alert-states.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
The `operation_ParentId` field is in the format `<trace-id>.<parent-id>`, where
OpenCensus Python enables you to correlate logs by adding a trace ID, a span ID, and a sampling flag to log records. You add these attributes by installing the OpenCensus [logging integration](https://pypi.org/project/opencensus-ext-logging/). The following attributes will be added to Python `LogRecord` objects: `traceId`, `spanId`, and `traceSampled` (applicable only to loggers that are created after the integration is installed).
+Install the OpenCensus logging integration:
+
+```console
+python -m pip install opencensus-ext-logging
+```
+ **Sample application** ```python
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
OpenCensus.stats supports 4 aggregation methods but provides partial support for
main() ```
-1. The exporter sends metric data to Azure Monitor at a fixed interval. The default is every 15 seconds. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The value is cumulative, can only increase and resets to 0 on restart. You can find the data under `customMetrics`, but `customMetrics` properties valueCount, valueSum, valueMin, valueMax, and valueStdDev are not effectively used.
+1. The exporter sends metric data to Azure Monitor at a fixed interval. The default is every 15 seconds. To modify the export interval, pass `export_interval` (in seconds) as a parameter to `new_metrics_exporter()`, as sketched below. We're tracking a single metric, so this metric data, with whatever value and time stamp it contains, is sent every interval. The value is cumulative; it can only increase, and it resets to 0 on restart. You can find the data under `customMetrics`, but the `customMetrics` properties valueCount, valueSum, valueMin, valueMax, and valueStdDev are not effectively used.
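+
+    As a sketch of that parameter (the connection string value is a placeholder), the exporter accepts the interval at construction time:
+
+    ```python
+    from opencensus.ext.azure import metrics_exporter
+
+    # Export every 60 seconds instead of the default 15.
+    exporter = metrics_exporter.new_metrics_exporter(
+        connection_string='InstrumentationKey=<your-instrumentation-key>',
+        export_interval=60.0,
+    )
+    ```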
### Setting custom dimensions in metrics
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Just a few examples of what you can do with Azure Monitor include:
- Detect and diagnose issues across applications and dependencies with [Application Insights](app/app-insights-overview.md). - Correlate infrastructure issues with [VM insights](vm/vminsights-overview.md) and [Container insights](containers/container-insights-overview.md). - Drill into your monitoring data with [Log Analytics](logs/log-query-overview.md) for troubleshooting and deep diagnostics.-- Support operations at scale with [smart alerts](alerts/alerts-smartgroups-overview.md) and [automated actions](alerts/alerts-action-rules.md).
+- Support operations at scale with [automated actions](alerts/alerts-action-rules.md).
- Create visualizations with Azure [dashboards](visualize/tutorial-logs-dashboards.md) and [workbooks](visualize/workbooks-overview.md). - Collect data from [monitored resources](./monitor-reference.md) using [Azure Monitor Metrics](./essentials/data-platform-metrics.md). - Investigate change data for routine monitoring or for triaging incidents using [Change Analysis](./change/change-analysis.md).
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-aspnetcore-linux.md
+
+ Title: Profile ASP.NET Core Azure Linux web apps with Application Insights Profiler | Microsoft Docs
+description: A conceptual overview and step-by-step tutorial on how to use Application Insights Profiler.
+
+ms.devlang: csharp
+ Last updated : 02/23/2018++
+# Profile ASP.NET Core Azure Linux web apps with Application Insights Profiler
+
+This feature is currently in preview.
+
+Find out how much time is spent in each method of your live web application when using [Application Insights](../app/app-insights-overview.md). Application Insights Profiler is now available for ASP.NET Core web apps that are hosted in Linux on Azure App Service. This guide provides step-by-step instructions on how the Profiler traces can be collected for ASP.NET Core Linux web apps.
+
+After you complete this walkthrough, your app can collect Profiler traces like the traces that are shown in the image. In this example, the Profiler trace indicates that a particular web request is slow because of time spent waiting. The *hot path* in the code that's slowing the app is marked by a flame icon. The **About** method in the **HomeController** section is slowing the web app because the method is calling the **Thread.Sleep** function.
+
+![Profiler traces](./media/profiler-aspnetcore-linux/profiler-traces.png)
+
+## Prerequisites
+The following instructions apply to all Windows, Linux, and Mac development environments:
+
+* Install the [.NET Core SDK 3.1 or later](https://dotnet.microsoft.com/download/dotnet).
+* Install Git by following the instructions at [Getting Started - Installing Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
+
+## Set up the project locally
+
+1. Open a Command Prompt window on your machine. The following instructions work for all Windows, Linux, and Mac development environments.
+
+1. Create an ASP.NET Core MVC web application:
+
+ ```console
+ dotnet new mvc -n LinuxProfilerTest
+ ```
+
+1. Change the working directory to the root folder for the project.
+
+1. Add the NuGet package to collect the Profiler traces:
+
+ ```console
+ dotnet add package Microsoft.ApplicationInsights.Profiler.AspNetCore
+ ```
+
+1. Enable Application Insights and Profiler in Startup.cs:
+
+ ```csharp
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddApplicationInsightsTelemetry(); // Add this line of code to enable Application Insights.
+ services.AddServiceProfiler(); // Add this line of code to Enable Profiler
+ services.AddControllersWithViews();
+ }
+ ```
+
+1. Add a line of code in the **HomeController.cs** section to randomly delay a few seconds:
+
+ ```csharp
+ using System.Threading;
+ ...
+
+ public IActionResult About()
+ {
+ Random r = new Random();
+ int delay = r.Next(5000, 10000);
+ Thread.Sleep(delay);
+ return View();
+ }
+ ```
+
+1. Save and commit your changes to the local repository:
+
+ ```console
+ git init
+ git add .
+ git commit -m "first commit"
+ ```
+
+## Create the Linux web app to host your project
+
+1. Create the web app environment by using App Service on Linux:
+
+ :::image type="content" source="./media/profiler-aspnetcore-linux/create-linux-app-service.png" alt-text="Create the Linux web app":::
+
+2. Create the deployment credentials:
+
+ > [!NOTE]
+ > Record your password to use later when deploying your web app.
+
+ ![Create the deployment credentials](./media/profiler-aspnetcore-linux/create-deployment-credentials.png)
+
+3. Choose the deployment options. Set up a local Git repository in the web app by following the instructions on the Azure portal. A Git repository is automatically created.
+
+ ![Set up the Git repository](./media/profiler-aspnetcore-linux/setup-git-repo.png)
+
+For more deployment options, see [App Service documentation](../../app-service/index.yml).
+
+## Deploy your project
+
+1. In your Command Prompt window, browse to the root folder for your project. Add a Git remote repository to point to the repository on App Service:
+
+ ```console
+ git remote add azure https://<username>@<app_name>.scm.azurewebsites.net:443/<app_name>.git
+ ```
+
+ * Use the **username** that you used to create the deployment credentials.
+ * Use the **app name** that you used to create the web app by using App Service on Linux.
+
+2. Deploy the project by pushing the changes to Azure:
+
+ ```console
+ git push azure main
+ ```
+
+ You should see output similar to the following example:
+
+ ```output
+ Counting objects: 9, done.
+ Delta compression using up to 8 threads.
+ Compressing objects: 100% (8/8), done.
+ Writing objects: 100% (9/9), 1.78 KiB | 911.00 KiB/s, done.
+ Total 9 (delta 3), reused 0 (delta 0)
+ remote: Updating branch 'main'.
+ remote: Updating submodules.
+ remote: Preparing deployment for commit id 'd7369a99d7'.
+ remote: Generating deployment script.
+ remote: Running deployment command...
+ remote: Handling ASP.NET Core Web Application deployment.
+ remote: ......
+ remote: Restoring packages for /home/site/repository/EventPipeExampleLinux.csproj...
+ remote: .
+ remote: Installing Newtonsoft.Json 10.0.3.
+ remote: Installing Microsoft.ApplicationInsights.Profiler.Core 1.1.0-LKG
+ ...
+ ```
+
+## Add Application Insights to monitor your web apps
+
+1. [Create an Application Insights resource](../app/create-new-resource.md).
+
+2. Copy the **iKey** value of the Application Insights resource and set the following settings in your web apps:
+
+ `APPINSIGHTS_INSTRUMENTATIONKEY: [YOUR_APPINSIGHTS_KEY]`
+
+ When the app settings are changed, the site automatically restarts. After the new settings are applied, the Profiler immediately runs for two minutes. The Profiler then runs for two minutes every hour.
+
+3. Generate some traffic to your website. You can generate traffic by refreshing the site **About** page a few times.
+
+4. Wait two to five minutes for the events to aggregate to Application Insights.
+
+5. Browse to the Application Insights **Performance** pane in the Azure portal. You can view the Profiler traces at the bottom right of the pane.
+
+ ![View Profiler traces](./media/profiler-aspnetcore-linux/view-traces.png)
+++
+## Next steps
+If you use custom containers that are hosted by Azure App Service, follow the instructions in [Enable Service Profiler for a containerized ASP.NET Core application](https://github.com/Microsoft/ApplicationInsights-Profiler-AspNetCore/tree/master/examples/EnableServiceProfilerForContainerApp) to enable Application Insights Profiler.
+
+Report any issues or suggestions to the Application Insights GitHub repository:
+[ApplicationInsights-Profiler-AspNetCore: Issues](https://github.com/Microsoft/ApplicationInsights-Profiler-AspNetCore/issues).
azure-monitor Profiler Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-azure-functions.md
+
+ Title: Profile Azure Functions app with Application Insights Profiler
+description: Enable Application Insights Profiler for Azure Functions app.
+
+ms.contributor: charles.weininger
+ Last updated : 05/03/2022++
+# Profile live Azure Functions app with Application Insights
+
+In this article, you'll use the Azure portal to:
+- View the current app settings for your Functions app.
+- Add two new app settings to enable Profiler on the Functions app.
+- Navigate to the Profiler for your Functions app to view data.
+
+> [!NOTE]
+> You can enable the Application Insights Profiler for Azure Functions apps on the **App Service** plan.
+
+## Prerequisites
+
+- [An Azure Functions app](../../azure-functions/functions-create-function-app-portal.md). Verify your Functions app is on the **App Service** plan.
+
+ :::image type="content" source="./media/profiler-azure-functions/choose-plan.png" alt-text="Screenshot of where to select App Service plan from drop-down in Functions app creation.":::
++
+- Linked to [an Application Insights resource](../app/create-new-resource.md). Make note of the instrumentation key.
+
+## App settings for enabling Profiler
+
+|App Setting | Value |
+||-|
+|APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 |
+|DiagnosticServices_EXTENSION_VERSION | ~3 |
+
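+If you'd rather script these settings than use the portal steps below, here's a sketch with the Azure CLI; substitute your function app and resource group names:
+
+```console
+az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group-name> \
+    --settings "APPINSIGHTS_PROFILERFEATURE_VERSION=1.0.0" "DiagnosticServices_EXTENSION_VERSION=~3"
+```
+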
+## Add app settings to your Azure Functions app
+
+From your Functions app overview page in the Azure portal:
+
+1. Under **Settings**, select **Configuration**.
+
+ :::image type="content" source="./media/profiler-azure-functions/configuration-menu.png" alt-text="Screenshot of selecting Configuration from under the Settings section of the left side menu.":::
+
+1. In the **Application settings** tab, verify the `APPINSIGHTS_INSTRUMENTATIONKEY` setting is included in the settings list.
+
+ :::image type="content" source="./media/profiler-azure-functions/app-insights-key.png" alt-text="Screenshot showing the App Insights Instrumentation Key setting in the list.":::
+
+1. Select **New application setting**.
+
+ :::image type="content" source="./media/profiler-azure-functions/new-setting-button.png" alt-text="Screenshot outlining the new application setting button.":::
+
+1. Copy the **App Setting** and its **Value** from the [table above](#app-settings-for-enabling-profiler) and paste into the corresponding fields.
+
+ :::image type="content" source="./media/profiler-azure-functions/app-setting-1.png" alt-text="Screenshot adding the app insights profiler feature version setting.":::
+
+ :::image type="content" source="./media/profiler-azure-functions/app-setting-2.png" alt-text="Screenshot adding the diagnostic services extension version setting.":::
+
+ Leave the **Deployment slot setting** blank for now.
+
+1. Click **OK**.
+
+1. Click **Save** in the top menu, then **Continue**.
+
+ :::image type="content" source="./media/profiler-azure-functions/save-button.png" alt-text="Screenshot outlining the save button in the top menu of the configuration blade.":::
+
+ :::image type="content" source="./media/profiler-azure-functions/continue-button.png" alt-text="Screenshot outlining the continue button in the dialog after saving.":::
+
+The app settings now show up in the table:
+
+ :::image type="content" source="./media/profiler-azure-functions/app-settings-table.png" alt-text="Screenshot showing the two new app settings in the table on the configuration blade.":::
++
+## View the Profiler data for your Azure Functions app
+
+1. Under **Settings**, select **Application Insights (preview)** from the left menu.
+
+ :::image type="content" source="./media/profiler-azure-functions/app-insights-menu.png" alt-text="Screenshot showing application insights from the left menu of the Functions app.":::
+
+1. Select **View Application Insights data**.
+
+ :::image type="content" source="./media/profiler-azure-functions/view-app-insights-data.png" alt-text="Screenshot showing the button for viewing application insights data for the Functions app.":::
+
+1. On the App Insights page for your Functions app, select **Performance** from the left menu.
+
+ :::image type="content" source="./media/profiler-azure-functions/performance-menu.png" alt-text="Screenshot showing the performance link in the left menu of the app insights blade of the functions app.":::
+
+1. Select **Profiler** from the top menu of the Performance blade.
+
+ :::image type="content" source="./media/profiler-azure-functions/profiler-function-app.png" alt-text="Screenshot showing link to profiler for functions app.":::
++
+## Next steps
+
+- Set these values using [Azure Resource Manager Templates](../app/azure-web-apps-net-core.md#app-service-application-settings-with-azure-resource-manager), [Azure PowerShell](/powershell/module/az.websites/set-azwebapp), or the [Azure CLI](/cli/azure/webapp/config/appsettings).
+- Learn more about [Profiler settings](profiler-settings.md).
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md
+
+ Title: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger
+description: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger
+ Last updated : 01/14/2021
+# Configure Bring Your Own Storage (BYOS) for Application Insights Profiler and Snapshot Debugger
+
+## What is Bring Your Own Storage (BYOS) and why might I need it?
+When you use Application Insights Profiler or Snapshot Debugger, artifacts generated by your application are uploaded into Azure storage accounts over the public Internet. Those accounts are paid for and controlled by Microsoft for processing and analysis. Microsoft controls the encryption-at-rest and lifetime management policies for those artifacts.
+
+With Bring Your Own Storage, these artifacts are uploaded into a storage account that you control. That means you control the encryption-at-rest policy, the lifetime management policy, and network access. You will, however, be responsible for the costs associated with that storage account.
+
+> [!NOTE]
+> If you are enabling Private Link, Bring Your Own Storage is a requirement. For more information about Private Link for Application Insights, see [the documentation](../logs/private-link-security.md).
+>
+> If you are enabling Customer-Managed Keys, Bring Your Own Storage is a requirement. For more information about Customer-Managed Keys for Application Insights, see [the documentation](../logs/customer-managed-keys.md).
+
+## How will my storage account be accessed?
+1. Agents running in your Virtual Machines or App Service will upload artifacts (profiles, snapshots, and symbols) to blob containers in your account. This process involves contacting the Application Insights Profiler or Snapshot Debugger service to obtain a SAS (Shared Access Signature) token to a new blob in your storage account.
+1. The Application Insights Profiler or Snapshot Debugger service will analyze the incoming blob and write back the analysis results and log files into blob storage. Depending on available compute capacity, this process may occur anytime after upload.
+1. When you view the profiler traces, or snapshot debugger analysis, the service will fetch the analysis results from blob storage.
+
+## Prerequisites
+* Make sure to create your Storage Account in the same location as your Application Insights resource. For example, if your Application Insights resource is in West US 2, your Storage Account must also be in West US 2.
+* Grant the "Storage Blob Data Contributor" role to the AAD application "Diagnostic Services Trusted Storage Access" in your storage account via the Access Control (IAM) UI.
+* If Private Link is enabled, configure an additional setting to allow connections to the trusted Microsoft service from your Virtual Network.
+
+## How to enable BYOS
+
+### Create Storage Account
+Create a new Storage Account (if you don't already have one) in the same location as your Application Insights resource.
+For example, if your Application Insights resource is in `West US 2`, your Storage Account must also be in `West US 2`.
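+
+The following PowerShell sketch creates the account in the matching region by reusing the Application Insights resource's location. It assumes the Az.Storage and Az.ApplicationInsights modules are installed; the names and SKU are placeholders.
+
+```powershell
+# A sketch: create the Storage Account in the same region as the
+# Application Insights resource. Names and the SKU are placeholders.
+$ai = Get-AzApplicationInsights -ResourceGroupName "byos-test" -Name "byos-test-westus2-ai"
+New-AzStorageAccount -ResourceGroupName "byos-test" -Name "byosteststoragewestus2" -Location $ai.Location -SkuName "Standard_LRS"
+```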
+
+### Grant Access to Diagnostic Services to your Storage Account
+A BYOS storage account will be linked to an Application Insights resource. There may be only one storage account per Application Insights resource and both must be in the same location. You may use the same storage account with more than one Application Insights resource.
+
+First, the Application Insights Profiler and Snapshot Debugger service needs to be granted access to the storage account. To grant access, add the role `Storage Blob Data Contributor` to the AAD application named `Diagnostic Services Trusted Storage Access` via the Access Control (IAM) page in your storage account, as shown in Figure 1.0.
+
+Steps:
+
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the Add role assignment page.
+
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Storage Blob Data Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | Diagnostic Services Trusted Storage Access |
+
+ ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+
+After you add the role, it will appear under the "Role assignments" section, as shown in Figure 1.1 below.
+_![Figure 1.1](media/profiler-bring-your-own-storage/figure-11.png)_
+_Figure 1.1_
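+
+If you'd rather script the role assignment than use the portal, here's a minimal PowerShell sketch. It assumes the Az.Resources module and that the service principal display name resolves in your tenant; the storage account names are placeholders.
+
+```powershell
+# A sketch: grant "Storage Blob Data Contributor" on the storage account to the
+# Diagnostic Services service principal. Resource names are placeholders.
+$sp = Get-AzADServicePrincipal -DisplayName "Diagnostic Services Trusted Storage Access"
+$storage = Get-AzStorageAccount -ResourceGroupName "byos-test" -Name "byosteststoragewestus2"
+New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Storage Blob Data Contributor" -Scope $storage.Id
+```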
+
+If you're also using Private Link, one additional configuration is required to allow connection to the trusted Microsoft service from your Virtual Network. Refer to the [Storage Network Security documentation](../../storage/common/storage-network-security.md#trusted-microsoft-services).
+
+### Link your Storage Account with your Application Insights resource
+To configure BYOS for code-level diagnostics (Profiler/Debugger), there are three options:
+
+* Using Azure PowerShell cmdlets
+* Using the Azure CLI
+* Using Azure Resource Manager templates
+
+#### Configure using Azure PowerShell Cmdlets
+
+1. Make sure you have installed Az PowerShell 4.2.0 or greater.
+
+ To install Azure PowerShell, refer to the [Official Azure PowerShell documentation](/powershell/azure/install-az-ps).
+
+1. Install the Az.ApplicationInsights PowerShell module.
+ ```powershell
+ Install-Module -Name Az.ApplicationInsights -Force
+ ```
+
+1. Sign in with your Azure account.
+ ```powershell
+ Connect-AzAccount -Subscription "{subscription_id}"
+ ```
+
+    For more information about signing in, see the [Connect-AzAccount documentation](/powershell/module/az.accounts/connect-azaccount).
+
+1. Remove any Storage Account previously linked to your Application Insights resource.
+
+ Pattern:
+ ```powershell
+ $appInsights = Get-AzApplicationInsights -ResourceGroupName "{resource_group_name}" -Name "{application_insights_name}"
+ Remove-AzApplicationInsightsLinkedStorageAccount -ResourceId $appInsights.Id
+ ```
+
+ Example:
+ ```powershell
+ $appInsights = Get-AzApplicationInsights -ResourceGroupName "byos-test" -Name "byos-test-westus2-ai"
+ Remove-AzApplicationInsightsLinkedStorageAccount -ResourceId $appInsights.Id
+ ```
+
+1. Connect your Storage Account with your Application Insights resource.
+
+ Pattern:
+ ```powershell
+ $storageAccount = Get-AzStorageAccount -ResourceGroupName "{resource_group_name}" -Name "{storage_account_name}"
+ $appInsights = Get-AzApplicationInsights -ResourceGroupName "{resource_group_name}" -Name "{application_insights_name}"
+ New-AzApplicationInsightsLinkedStorageAccount -ResourceId $appInsights.Id -LinkedStorageAccountResourceId $storageAccount.Id
+ ```
+
+ Example:
+ ```powershell
+ $storageAccount = Get-AzStorageAccount -ResourceGroupName "byos-test" -Name "byosteststoragewestus2"
+ $appInsights = Get-AzApplicationInsights -ResourceGroupName "byos-test" -Name "byos-test-westus2-ai"
+ New-AzApplicationInsightsLinkedStorageAccount -ResourceId $appInsights.Id -LinkedStorageAccountResourceId $storageAccount.Id
+ ```
+
+#### Configure using Azure CLI
+
+1. Make sure you have installed Azure CLI.
+
+ To install Azure CLI, refer to the [Official Azure CLI documentation](/cli/azure/install-azure-cli).
+
+1. Install the Application Insights CLI extension.
+ ```azurecli
+ az extension add -n application-insights
+ ```
+
+1. Connect your Storage Account with your Application Insights resource.
+
+ Pattern:
+ ```azurecli
+ az monitor app-insights component linked-storage link --resource-group "{resource_group_name}" --app "{application_insights_name}" --storage-account "{storage_account_name}"
+ ```
+
+ Example:
+ ```azurecli
+ az monitor app-insights component linked-storage link --resource-group "byos-test" --app "byos-test-westus2-ai" --storage-account "byosteststoragewestus2"
+ ```
+
+ Expected output:
+    ```json
+ {
+ "id": "/subscriptions/{subscription}/resourcegroups/byos-test/providers/microsoft.insights/components/byos-test-westus2-ai/linkedstorageaccounts/serviceprofiler",
+ "linkedStorageAccount": "/subscriptions/{subscription}/resourceGroups/byos-test/providers/Microsoft.Storage/storageAccounts/byosteststoragewestus2",
+ "name": "serviceprofiler",
+ "resourceGroup": "byos-test",
+ "type": "microsoft.insights/components/linkedstorageaccounts"
+ }
+ ```
+
+ > [!NOTE]
+    > To update the Storage Accounts linked to your Application Insights resource, refer to the [Application Insights CLI documentation](/cli/azure/monitor/app-insights/component/linked-storage).
+
+#### Configure using Azure Resource Manager template
+
+1. Create an Azure Resource Manager template file with the following content (byos.template.json).
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "applicationinsights_name": {
+ "type": "String"
+ },
+ "storageaccount_name": {
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "name": "[concat(parameters('applicationinsights_name'), '/serviceprofiler')]",
+ "type": "Microsoft.Insights/components/linkedStorageAccounts",
+ "apiVersion": "2020-03-01-preview",
+ "properties": {
+ "linkedStorageAccount": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageaccount_name'))]"
+ }
+ }
+ ],
+ "outputs": {}
+ }
+ ```
+
+1. Run the following PowerShell command to deploy the previous template and create the linked Storage Account.
+
+ Pattern:
+ ```powershell
+ New-AzResourceGroupDeployment -ResourceGroupName "{your_resource_name}" -TemplateFile "{local_path_to_arm_template}"
+ ```
+
+ Example:
+ ```powershell
+ New-AzResourceGroupDeployment -ResourceGroupName "byos-test" -TemplateFile "D:\Docs\byos.template.json"
+ ```
+
+1. Provide the following parameters when prompted in the PowerShell console:
+
+ | Parameter | Description |
+ |-|--|
+    | applicationinsights_name | The name of the Application Insights resource on which to enable BYOS. |
+    | storageaccount_name | The name of the Storage Account resource that you'll use as your BYOS. |
+
+ Expected output:
+ ```powershell
+ Supply values for the following parameters:
+ (Type !? for Help.)
+    applicationinsights_name: byos-test-westus2-ai
+    storageaccount_name: byosteststoragewestus2
+
+ DeploymentName : byos.template
+ ResourceGroupName : byos-test
+ ProvisioningState : Succeeded
+ Timestamp : 4/16/2020 1:24:57 AM
+ Mode : Incremental
+ TemplateLink :
+ Parameters :
+ Name Type Value
+ ============================== ========================= ==========
+    applicationinsights_name       String                    byos-test-westus2-ai
+    storageaccount_name            String                    byosteststoragewestus2
+
+ Outputs :
+ DeploymentDebugLogLevel :
+ ```
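+
+    To skip the interactive prompts, you can instead pass the template parameters inline. A sketch, assuming the same template file and the placeholder resource names used above:
+
+    ```powershell
+    # A sketch: deploy the template non-interactively by passing parameters inline.
+    New-AzResourceGroupDeployment -ResourceGroupName "byos-test" -TemplateFile "D:\Docs\byos.template.json" -TemplateParameterObject @{
+        applicationinsights_name = "byos-test-westus2-ai"
+        storageaccount_name      = "byosteststoragewestus2"
+    }
+    ```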
+
+1. Enable code-level diagnostics (Profiler/Debugger) on the workload of interest through the Azure portal (**App Service** > **Application Insights**).
+_![Figure 2.0](media/profiler-bring-your-own-storage/figure-20.png)_
+_Figure 2.0_
+
+## Troubleshooting
+### Template schema '{schema_uri}' isn't supported.
+* Make sure that the `$schema` property of the template is valid. It must match the following pattern:
+`https://schema.management.azure.com/schemas/{schema_version}/deploymentTemplate.json#`
+* Make sure that the `schema_version` of the template is one of the valid values: `2014-04-01-preview, 2015-01-01, 2018-05-01, 2019-04-01, 2019-08-01`.
+ Error message:
+ ```powershell
+ New-AzResourceGroupDeployment : 11:53:49 AM - Error: Code=InvalidTemplate; Message=Deployment template validation failed: 'Template schema
+ 'https://schema.management.azure.com/schemas/2020-01-01/deploymentTemplate.json#' is not supported. Supported versions are
+ '2014-04-01-preview,2015-01-01,2018-05-01,2019-04-01,2019-08-01'. Please see https://aka.ms/arm-template for usage details.'.
+ ```
+
+### No registered resource provider found for location '{location}'.
+* Make sure that the `apiVersion` of the resource `microsoft.insights/components` is `2015-05-01`.
+* Make sure that the `apiVersion` of the resource `linkedStorageAccount` is `2020-03-01-preview`.
+ Error message:
+ ```powershell
+ New-AzResourceGroupDeployment : 6:18:03 PM - Resource microsoft.insights/components 'byos-test-westus2-ai' failed with message '{
+ "error": {
+ "code": "NoRegisteredProviderFound",
+ "message": "No registered resource provider found for location 'westus2' and API version '2020-03-01-preview' for type 'components'. The supported api-versions are '2014-04-01,
+ 2014-08-01, 2014-12-01-preview, 2015-05-01, 2018-05-01-preview'. The supported locations are ', eastus, southcentralus, northeurope, westeurope, southeastasia, westus2, uksouth,
+ canadacentral, centralindia, japaneast, australiaeast, koreacentral, francecentral, centralus, eastus2, eastasia, westus, southafricanorth, northcentralus, brazilsouth, switzerlandnorth,
+ australiasoutheast'."
+ }
+ }'
+ ```
+### Storage account location should match AI component location.
+* Make sure that the location of the Application Insights resource is the same as the Storage Account.
+ Error message:
+ ```powershell
+ New-AzResourceGroupDeployment : 1:01:12 PM - Resource microsoft.insights/components/linkedStorageAccounts 'byos-test-centralus-ai/serviceprofiler' failed with message '{
+ "error": {
+ "code": "BadRequest",
+ "message": "Storage account location should match AI component location",
+ "innererror": {
+ "trace": [
+ "System.ArgumentException"
+ ]
+ }
+ }
+ }'
+ ```
+
+For general Profiler troubleshooting, refer to the [Profiler Troubleshoot documentation](profiler-troubleshooting.md).
+
+For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](../app/snapshot-debugger-troubleshoot.md).
+
+## FAQs
+* If I have Profiler or Snapshot Debugger enabled and then enable BYOS, will my data be migrated into my Storage Account?
+  _No, it won't._
+
+* Will BYOS work with Encryption at Rest and Customer-Managed Keys?
+  _Yes. To be precise, BYOS is a prerequisite for enabling Profiler/Debugger with Customer-Managed Keys._
+
+* Will BYOS work in an environment isolated from the Internet?
+  _Yes. In fact, BYOS is a requirement for isolated network scenarios._
+
+* Will BYOS work when both Customer-Managed Keys and Private Link are enabled?
+  _Yes._
+
+* If I've enabled BYOS, can I go back to using Diagnostic Services storage accounts to store my collected data?
+  _Yes, you can, but data migration from your BYOS account isn't currently supported._
+
+* After enabling BYOS, am I responsible for all the related storage and networking costs?
+  _Yes._
azure-monitor Profiler Cloudservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-cloudservice.md
+
+ Title: Profile live Azure Cloud Services with Application Insights | Microsoft Docs
+description: Enable Application Insights Profiler for Azure Cloud Services.
+ Last updated : 08/06/2018
+# Profile live Azure Cloud Services with Application Insights
+
+You can also deploy Application Insights Profiler on these services:
+* [Azure App Service](profiler.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Service Fabric applications](profiler-servicefabric.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines](profiler-vm.md?toc=/azure/azure-monitor/toc.json)
+
+Application Insights Profiler is installed with the Azure Diagnostics extension. You just need to configure Azure Diagnostics to install Profiler and send profiles to your Application Insights resource.
+
+## Enable Profiler for Azure Cloud Services
+1. Check to make sure that you're using [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or newer. If you are using OS family 4, you'll need to install .NET Framework 4.6.1 or newer with a [startup task](../../cloud-services/cloud-services-dotnet-install-dotnet.md). OS Family 5 includes a compatible version of .NET Framework by default. (A PowerShell sketch for verifying the installed version follows these steps.)
+
+1. Add [Application Insights SDK to Azure Cloud Services](../app/cloudservices.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+
+ **The bug in the profiler that ships in the WAD for Cloud Services has been fixed.** The latest version of WAD (1.12.2.0) for Cloud Services works with all recent versions of the App Insights SDK. Cloud Service hosts will upgrade WAD automatically, but it isn't immediate. To force an upgrade, you can redeploy your service or reboot the node.
+
+1. Track requests with Application Insights:
+
+ * For ASP.NET web roles, Application Insights can track the requests automatically.
+
+ * For worker roles, [add code to track requests](profiler-trackrequests.md?toc=/azure/azure-monitor/toc.json).
+
+1. Configure the Azure Diagnostics extension to enable Profiler:
+
+ a. Locate the [Azure Diagnostics](../agents/diagnostics-extension-overview.md) *diagnostics.wadcfgx* file for your application role, as shown here:
+
+ ![Location of the diagnostics config file](./media/profiler-cloudservice/cloud-service-solution-explorer.png)
+
+ If you can't find the file, see [Set up diagnostics for Azure Cloud Services and Virtual Machines](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines).
+
+ b. Add the following `SinksConfig` section as a child element of `WadCfg`:
+
+ ```xml
+ <WadCfg>
+ <DiagnosticMonitorConfiguration>...</DiagnosticMonitorConfiguration>
+ <SinksConfig>
+ <Sink name="MyApplicationInsightsProfiler">
+ <!-- Replace with your own Application Insights instrumentation key. -->
+ <ApplicationInsightsProfiler>00000000-0000-0000-0000-000000000000</ApplicationInsightsProfiler>
+ </Sink>
+ </SinksConfig>
+ </WadCfg>
+ ```
+
+ > [!NOTE]
+ > If the *diagnostics.wadcfgx* file also contains another sink of type ApplicationInsights, all three of the following instrumentation keys must match:
+ > * The key that's used by your application.
+ > * The key that's used by the ApplicationInsights sink.
+ > * The key that's used by the ApplicationInsightsProfiler sink.
+ >
+ > You can find the actual instrumentation key value that's used by the `ApplicationInsights` sink in the *ServiceConfiguration.\*.cscfg* files.
+ > After the Visual Studio 15.5 Azure SDK release, only the instrumentation keys that are used by the application and the ApplicationInsightsProfiler sink need to match each other.
+
+1. Deploy your service with the new Diagnostics configuration, and Application Insights Profiler is configured to run on your service.
+
+
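+To verify the .NET Framework version from step 1, you can query the registry with PowerShell. A sketch; the 394254 release value corresponds to .NET Framework 4.6.1:
+
+```powershell
+# A sketch: check the installed .NET Framework version from the registry.
+# A Release value of 394254 or higher indicates .NET Framework 4.6.1 or later.
+$release = (Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" -Name Release).Release
+if ($release -ge 394254) { "OK: .NET Framework 4.6.1 or later is installed." }
+else { "Found release $release; install .NET Framework 4.6.1 or later." }
+```
+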
+## Next steps
+
+* Generate traffic to your application (for example, launch an [availability test](../app/monitor-web-app-availability.md)). Then, wait 10 to 15 minutes for traces to start being sent to the Application Insights instance.
+* See [Profiler traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json) in the Azure portal.
+* To troubleshoot Profiler issues, see [Profiler troubleshooting](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).
+
azure-monitor Profiler Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-containers.md
+
+ Title: Profile Azure Containers with Application Insights Profiler
+description: Enable Application Insights Profiler for Azure Containers.
+ms.contributor: charles.weininger
+ Last updated : 05/26/2022
+# Profile live Azure containers with Application Insights
+
+You can enable the Application Insights Profiler for an ASP.NET Core application running in your container with almost no code changes. To enable the Application Insights Profiler on your container instance, you'll need to:
+
+* Add the reference to the NuGet package.
+* Set the environment variables to enable it.
+
+In this article, you'll learn the various ways you can:
+- Install the NuGet package in the project.
+- Set the environment variable via the orchestrator (like Kubernetes).
+- Learn about security considerations for production deployments, like protecting your Application Insights instrumentation key.
+
+## Prerequisites
+
+- [An Application Insights resource](../app/create-new-resource.md). Make note of the instrumentation key.
+- [Docker Desktop](https://www.docker.com/products/docker-desktop/) to build Docker images.
+- [.NET 6 SDK](https://dotnet.microsoft.com/download/dotnet/6.0) installed.
+
+## Set up the environment
+
+1. Clone and use the following [sample project](https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore/tree/main/examples/EnableServiceProfilerForContainerAppNet6):
+
+ ```bash
+ git clone https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore.git
+ ```
+
+1. Navigate to the Container App example:
+
+ ```bash
+ cd examples/EnableServiceProfilerForContainerAppNet6
+ ```
+
+1. This example is a bare-bones project created by calling the following CLI command:
+
+    ```bash
+ dotnet new mvc -n EnableServiceProfilerForContainerApp
+ ```
+
+    Note that we've added a delay in the `Controllers/WeatherForecastController.cs` file to simulate a bottleneck.
+
+    ```csharp
+ [HttpGet(Name = "GetWeatherForecast")]
+ public IEnumerable<WeatherForecast> Get()
+ {
+ SimulateDelay();
+ ...
+ // Other existing code.
+ }
+ private void SimulateDelay()
+ {
+ // Delay for 500ms to 2s to simulate a bottleneck.
+ Thread.Sleep((new Random()).Next(500, 2000));
+ }
+ ```
+
+1. Enable Application Insights and Profiler in `Startup.cs`:
+
+ ```csharp
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddApplicationInsightsTelemetry(); // Add this line of code to enable Application Insights.
+ services.AddServiceProfiler(); // Add this line of code to Enable Profiler
+ services.AddControllersWithViews();
+ }
+ ```
+
+## Pull the latest ASP.NET Core build/runtime images
+
+1. Navigate to the .NET Core 6.0 example directory.
+
+ ```bash
+ cd examples/EnableServiceProfilerForContainerAppNet6
+ ```
+
+1. Pull the latest ASP.NET Core images:
+
+    ```bash
+ docker pull mcr.microsoft.com/dotnet/sdk:6.0
+ docker pull mcr.microsoft.com/dotnet/aspnet:6.0
+ ```
+
+> [!TIP]
+> Find the official images for Docker [SDK](https://hub.docker.com/_/microsoft-dotnet-sdk) and [runtime](https://hub.docker.com/_/microsoft-dotnet-aspnet).
+
+## Add your Application Insights key
+
+1. Via your Application Insights resource in the Azure portal, take note of your Application Insights instrumentation key. (You can also retrieve it with the PowerShell sketch after these steps.)
+
+ :::image type="content" source="./media/profiler-containerinstances/application-insights-key.png" alt-text="Find instrumentation key in Azure portal":::
+
+1. Open `appsettings.json` and add your Application Insights instrumentation key to this code section:
+
+ ```json
+ {
+ "ApplicationInsights":
+ {
+ "InstrumentationKey": "Your instrumentation key"
+ }
+ }
+ ```
+
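+If you'd rather fetch the instrumentation key from the command line than copy it from the portal, here's a minimal PowerShell sketch using the Az.ApplicationInsights module; the resource names are placeholders.
+
+```powershell
+# A sketch: read the instrumentation key from an Application Insights resource.
+# The resource group and resource names are placeholders.
+(Get-AzApplicationInsights -ResourceGroupName "my-resource-group" -Name "my-appinsights").InstrumentationKey
+```
+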
+## Build and run the Docker image
+
+1. Review the `Dockerfile`.
+
+1. Build the example image:
+
+ ```bash
+ docker build -t profilerapp .
+ ```
+
+1. Run the container:
+
+ ```bash
+ docker run -d -p 8080:80 --name testapp profilerapp
+ ```
+
+## View the container via your browser
+
+To hit the endpoint, either:
+
+- Visit `http://localhost:8080/weatherforecast` in your browser, or
+- Use curl:
+
+    ```bash
+ curl http://localhost:8080/weatherforecast
+ ```
++
+## Inspect the logs
+
+Optionally, inspect the local log to see whether a profiling session has finished:
+
+```bash
+docker logs testapp
+```
+
+In the local logs, note the following events:
+
+```output
+Starting application insights profiler with instrumentation key: your-instrumentation-key # Double-check the instrumentation key
+Service Profiler session started. # Profiler started.
+Finished calling trace uploader. Exit code: 0 # Uploader is called with exit code 0.
+Service Profiler session finished. # A profiling session is completed.
+```
+
+## View the Service Profiler traces
+
+1. Wait 2-5 minutes so the events can be aggregated in Application Insights.
+1. Open the **Performance** blade in your Application Insights resource.
+1. Once the trace process is complete, you'll see the **Profiler Traces** button, as shown below:
+
+ :::image type="content" source="./media/profiler-containerinstances/profiler-traces.png" alt-text="Profile traces in the performance blade":::
+++
+## Clean up resources
+
+Run the following command to stop the example project:
+
+```bash
+docker rm -f testapp
+```
+
+## Next steps
+
+- Learn more about [Application Insights Profiler](./profiler-overview.md).
+- Learn how to enable Profiler in your [ASP.NET Core applications running on Linux](./profiler-aspnetcore-linux.md).
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-overview.md
+
+ Title: Profile production apps in Azure with Application Insights Profiler
+description: Identify the hot path in your web server code with a low-footprint profiler
+ms.contributor: charles.weininger
+ Last updated : 05/26/2022
+# Profile production applications in Azure with Application Insights
+
+Azure Application Insights Profiler provides performance traces for applications running in production in Azure. Profiler:
+- Captures the data automatically at scale without negatively affecting your users.
+- Helps you identify the "hot" code path spending the most time handling a particular web request.
+
+## Enable Application Insights Profiler for your application
+
+### Supported in Profiler
+
+Profiler works with .NET applications deployed on the following Azure services. View specific instructions for enabling Profiler for each service type in the links below.
+
+| Compute platform | .NET (>= 4.6) | .NET Core | Java |
+| - | - | | - |
+| [Azure App Service](profiler.md) | Yes | Yes | No |
+| [Azure Virtual Machines and virtual machine scale sets for Windows](profiler-vm.md) | Yes | Yes | No |
+| [Azure Virtual Machines and virtual machine scale sets for Linux](profiler-aspnetcore-linux.md) | No | Yes | No |
+| [Azure Cloud Services](profiler-cloudservice.md) | Yes | Yes | N/A |
+| [Azure Container Instances for Windows](profiler-containers.md) | No | Yes | No |
+| [Azure Container Instances for Linux](profiler-containers.md) | No | Yes | No |
+| Kubernetes | No | Yes | No |
+| Azure Functions | Yes | Yes | No |
+| Azure Spring Cloud | N/A | No | No |
+| [Azure Service Fabric](profiler-servicefabric.md) | Yes | Yes | No |
+
+If you've enabled Profiler but aren't seeing traces, check our [Troubleshooting guide](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).
+
+## How to generate load to view Profiler data
+
+For Profiler to upload traces, your application must be actively handling requests. You can trigger Profiler manually with a single click.
+
+Suppose you're running a web performance test. You'll need traces to help you understand how your web app is running under load. By controlling when traces are captured, you know they'll cover the load test, which the random sampling interval might otherwise miss.
+
+### Generate traffic to your web app by starting a web performance test
+
+If you've newly enabled Profiler, you can run a short [load test](/vsts/load-test/app-service-web-app-performance-test). If your web app already has incoming traffic or if you just want to manually generate traffic, skip the load test and start a Profiler on-demand session.
+
+### Start a Profiler on-demand session
+1. From the Application Insights overview page, select **Performance** from the left menu.
+1. On the **Performance** pane, select **Profiler** from the top menu for Profiler settings.
+
+ :::image type="content" source="./media/profiler-overview/profiler-button-inline.png" alt-text="Screenshot of the Profiler button from the Performance blade" lightbox="media/profiler-settings/profiler-button.png":::
+
+1. Once the Profiler settings page loads, select **Profile Now**.
+
+ :::image type="content" source="./media/profiler-settings/configure-blade-inline.png" alt-text="Profiler page features and settings" lightbox="media/profiler-settings/configure-blade.png":::
+
+### View traces
+1. After the Profiler sessions finish running, return to the **Performance** pane.
+1. Under **Drill into...**, select **Profiler traces** to view the traces.
+
+ :::image type="content" source="./media/profiler-overview/trace-explorer-inline.png" alt-text="Screenshot of trace explorer page" lightbox="media/profiler-overview/trace-explorer.png":::
+
+The trace explorer displays the following information:
+
+| Filter | Description |
+| | -- |
+| Profile tree v. Flame graph | View the traces as either a tree or in graph form. |
+| Hot path | Select to open the biggest leaf node. In most cases, this node is near a performance bottleneck. |
+| Framework dependencies | Select to view each of the traced framework dependencies associated with the traces. |
+| Hide events | Type in strings to hide from the trace view. Select *Suggested events* for suggestions. |
+| Event | Event or function name. The tree displays a mix of code and events that occurred, such as SQL and HTTP events. The top event represents the overall request duration. |
+| Module | The module where the traced event or function occurred. |
+| Thread time | The time interval between the start of the operation and the end of the operation. |
+| Timeline | The time when the function or event was running in relation to other functions. |
+
+## How to read performance data
+
+The Microsoft service profiler uses a combination of sampling methods and instrumentation to analyze the performance of your application. When detailed collection is in progress, the service profiler samples the instruction pointer of each machine CPU every millisecond. Each sample captures the complete call stack of the thread that's currently executing. It gives detailed information about what that thread was doing, at both a high level and a low level of abstraction. The service profiler also collects other events to track activity correlation and causality, including context switching events, Task Parallel Library (TPL) events, and thread pool events.
+
+The call stack displayed in the timeline view is the result of the sampling and instrumentation. Because each sample captures the complete call stack of the thread, it includes code from Microsoft .NET Framework and other frameworks that you reference.
+
+### <a id="jitnewobj"></a>Object allocation (clr!JIT\_New or clr!JIT\_Newarr1)
+
+**clr!JIT\_New** and **clr!JIT\_Newarr1** are helper functions in .NET Framework that allocate memory from a managed heap.
+- **clr!JIT\_New** is invoked when an object is allocated.
+- **clr!JIT\_Newarr1** is invoked when an object array is allocated.
+
+These two functions usually work quickly. If **clr!JIT\_New** or **clr!JIT\_Newarr1** take up time in your timeline, the code might be allocating many objects and consuming significant amounts of memory.
+
+### <a id="theprestub"></a>Loading code (clr!ThePreStub)
+
+**clr!ThePreStub** is a helper function in .NET Framework that prepares the code for initial execution, which usually includes just-in-time (JIT) compilation. For each C# method, **clr!ThePreStub** should be invoked, at most, once during a process.
+
+If **clr!ThePreStub** takes extra time for a request, that request is the first one to execute the method. The .NET Framework runtime takes a significant amount of time to load the first method. Consider:
+- Using a warmup process that executes that portion of the code before your users access it.
+- Running Native Image Generator (ngen.exe) on your assemblies.
+
+### <a id="lockcontention"></a>Lock contention (clr!JITutil\_MonContention or clr!JITutil\_MonEnterWorker)
+
+**clr!JITutil\_MonContention** or **clr!JITutil\_MonEnterWorker** indicate that the current thread is waiting for a lock to be released. This text is often displayed when you:
+- Execute a C# **LOCK** statement,
+- Invoke the **Monitor.Enter** method, or
+- Invoke a method with the **MethodImplOptions.Synchronized** attribute.
+
+Lock contention usually occurs when thread _A_ acquires a lock and thread _B_ tries to acquire the same lock before thread _A_ releases it.
+
+### <a id="ngencold"></a>Loading code ([COLD])
+
+If the .NET Framework runtime is executing [unoptimized code](/cpp/build/profile-guided-optimizations) for the first time, the method name will contain **[COLD]**:
+
+`mscorlib.ni![COLD]System.Reflection.CustomAttribute.IsDefined`
+
+For each method, it should be displayed once during the process, at most.
+
+If loading code takes a substantial amount of time for a request, that request is the first to execute the unoptimized portion of the method. Consider using a warmup process that executes that portion of the code before your users access it.
+
+### <a id="httpclientsend"></a>Send HTTP request
+
+Methods such as **HttpClient.Send** indicate that the code is waiting for an HTTP request to be completed.
+
+### <a id="sqlcommand"></a>Database operation
+
+Methods such as **SqlCommand.Execute** indicate that the code is waiting for a database operation to finish.
+
+### <a id="await"></a>Waiting (AWAIT\_TIME)
+
+**AWAIT\_TIME** indicates that the code is waiting for another task to finish. This delay occurs with the C# **AWAIT** statement. When the code does a C# **AWAIT**:
+- The thread unwinds and returns control to the thread pool.
+- There's no blocked thread waiting for the **AWAIT** to finish.
+
+However, logically, the thread that did the **AWAIT** is "blocked", waiting for the operation to finish. The **AWAIT\_TIME** statement indicates the blocked time, waiting for the task to finish.
+
+### <a id="block"></a>Blocked time
+
+**BLOCKED_TIME** indicates that the code is waiting for another resource to be available. For example, it might be waiting for:
+- A synchronization object
+- A thread to be available
+- A request to finish
+
+### Unmanaged Async
+
+In order for async calls to be tracked across threads, .NET Framework emits ETW events and passes activity IDs between threads. Since unmanaged (native) code and some older styles of asynchronous code lack these events and activity IDs, the Profiler can't track the thread and the functions running on it. These stacks are labeled **Unmanaged Async** in the call stack. Download the ETW file to use [PerfView](https://github.com/Microsoft/perfview/blob/master/documentation/Downloading.md) for more insight.
+
+### <a id="cpu"></a>CPU time
+
+The CPU is busy executing the instructions.
+
+### <a id="disk"></a>Disk time
+
+The application is performing disk operations.
+
+### <a id="network"></a>Network time
+
+The application is performing network operations.
+
+### <a id="when"></a>When column
+
+The **When** column is a visualization of the variety of _inclusive_ samples collected for a node over time. The total range of the request is divided into 32 time buckets, where the node's inclusive samples accumulate. Each bucket is represented as a bar. The height of the bar represents a scaled value. For the following nodes, the bar represents the consumption of one of the resources during the bucket:
+- Nodes marked **CPU_TIME** or **BLOCKED_TIME**.
+- Nodes with an obvious relationship to consuming a resource (for example, a CPU, disk, or thread).
+
+For these metrics, you can get a value of greater than 100% by consuming multiple resources. For example, if you use two CPUs during an interval on average, you get 200%.
+
+## Limitations
+
+The default data retention period is five days.
+
+There are no charges for using the Profiler service. To use it, your web app must be hosted in the basic tier of the Web Apps feature of Azure App Service, at minimum.
+
+## Overhead and sampling algorithm
+
+Profiler randomly runs for two minutes each hour on each virtual machine that hosts an application with Profiler enabled, capturing traces. When Profiler is running, it adds 5-15% CPU overhead to the server.
+
+## Next steps
+Enable Application Insights Profiler for your Azure application. Also see:
+* [App Services](profiler.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](profiler-cloudservice.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Service Fabric](profiler-servicefabric.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines and virtual machine scale sets](profiler-vm.md?toc=/azure/azure-monitor/toc.json)
++
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-servicefabric.md
+
+ Title: Profile live Azure Service Fabric apps with Application Insights
+description: Enable Profiler for a Service Fabric application
+ Last updated : 08/06/2018
+# Profile live Azure Service Fabric applications with Application Insights
+
+You can also deploy Application Insights Profiler on these services:
+* [Azure App Service](profiler.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Cloud Services](profiler-cloudservice.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Virtual Machines](profiler-vm.md?toc=/azure/azure-monitor/toc.json)
+
+## Set up the environment deployment definition
+
+Application Insights Profiler is included with Azure Diagnostics. You can install the Azure Diagnostics extension by using an Azure Resource Manager template for your Service Fabric cluster. Get a [template that installs Azure Diagnostics on a Service Fabric Cluster](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/ServiceFabricCluster.json).
+
+To set up your environment, take the following actions:
+
+1. Profiler supports .NET Framework and .NET Core. If you're using .NET Framework, make sure you're using [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or later. It's sufficient to confirm that the deployed OS is `Windows Server 2012 R2` or later. Profiler supports .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) and newer applications.
+
+1. Search for the [Azure Diagnostics](../agents/diagnostics-extension-overview.md) extension in the deployment template file.
+
+1. Add the following `SinksConfig` section as a child element of `WadCfg`. Replace the `ApplicationInsightsProfiler` property value with your own Application Insights instrumentation key:
+
+ ```json
+ "SinksConfig": {
+ "Sink": [
+ {
+ "name": "MyApplicationInsightsProfilerSink",
+ "ApplicationInsightsProfiler": "00000000-0000-0000-0000-000000000000"
+ }
+ ]
+ }
+ ```
+
+ For information about adding the Diagnostics extension to your deployment template, see [Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates](../../virtual-machines/extensions/diagnostics-template.md?toc=/azure/virtual-machines/windows/toc.json).
+
+1. Deploy your Service Fabric cluster by using your Azure Resource Manager template.
+ If your settings are correct, Application Insights Profiler will be installed and enabled when the Azure Diagnostics extension is installed.
+
+1. Add Application Insights to your Service Fabric application.
+ For Profiler to collect profiles for your requests, your application must be tracking operations with Application Insights. For stateless APIs, you can refer to instructions for [tracking Requests for profiling](profiler-trackrequests.md?toc=/azure/azure-monitor/toc.json). For more information about tracking custom operations in other kinds of apps, see [track custom operations with Application Insights .NET SDK](../app/custom-operations-tracking.md).
+
+1. Redeploy your application.
++
+## Next steps
+
+* Generate traffic to your application (for example, launch an [availability test](../app/monitor-web-app-availability.md)). Then, wait 10 to 15 minutes for traces to start being sent to the Application Insights instance.
+* See [Profiler traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json) in the Azure portal.
+* For help with troubleshooting Profiler issues, see [Profiler troubleshooting](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).
+
azure-monitor Profiler Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-settings.md
+
+ Title: Configure Application Insights Profiler | Microsoft Docs
+description: Use the Azure Application Insights Profiler settings pane to see Profiler status and start profiling sessions
+ms.contributor: Charles.Weininger
+ Last updated : 04/26/2022
+# Configure Application Insights Profiler
+
+To open the Azure Application Insights Profiler settings pane, select **Performance** from the left menu within your Application Insights page.
++
+View profiler traces across your Azure resources via two methods:
+
+**Profiler button**
+
+Select the **Profiler** button from the top menu.
++
+**By operation**
+
+1. Select an operation from the **Operation name** list ("Overall" is highlighted by default).
+1. Select the **Profiler traces** button.
+
+ :::image type="content" source="./media/profiler-settings/operation-entry-inline.png" alt-text="Select operation and Profiler traces to view all profiler traces" lightbox="media/profiler-settings/operation-entry.png":::
+
+1. Select one of the requests from the list to the left.
+1. Select **Configure Profiler**.
+
+ :::image type="content" source="./media/profiler-settings/configure-profiler-inline.png" alt-text="Overall selection and clicking Profiler traces to view all profiler traces" lightbox="media/profiler-settings/configure-profiler.png":::
+
+Once within the Profiler, you can configure and view the Profiler. The **Application Insights Profiler** page has these features:
++
+| Feature | Description |
+|-|-|
+Profile Now | Starts profiling sessions for all apps that are linked to this instance of Application Insights.
+Triggers | Allows you to configure triggers that cause the profiler to run.
+Recent profiling sessions | Displays information about past profiling sessions, which you can sort using the filters at the top of the page.
+
+## Profile Now
+Select **Profile Now** to start a profiling session on demand. When you select this button, all profiler agents that are sending data to this Application Insights instance start to capture a profile. After 5 to 10 minutes, the profile session appears in the list below.
+
+To manually trigger a profiler session, you'll need, at minimum, *write* access on your role for the Application Insights component. In most cases, you get write access automatically. If you're having issues, you'll need the "Application Insights Component Contributor" subscription scope role added. [See more about role access control with Azure Monitoring](../app/resources-roles-access-control.md).
+
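+If you need to grant that role explicitly, here's a minimal PowerShell sketch, assuming the Az.Resources module; the sign-in name and resource names are placeholders.
+
+```powershell
+# A sketch: grant a user the role needed to trigger Profiler sessions on demand.
+# The sign-in name and resource names are placeholders.
+$appInsights = Get-AzApplicationInsights -ResourceGroupName "my-resource-group" -Name "my-appinsights"
+New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Application Insights Component Contributor" -Scope $appInsights.Id
+```
+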
+## Trigger Settings
+
+Select the **Triggers** button on the menu bar to open the CPU, Memory, and Sampling trigger settings pane.
+
+**CPU or Memory triggers**
+
+You can set up a trigger to start profiling when the percentage of CPU or Memory use hits the level you set.
++
+| Setting | Description |
+|-|-|
+On / Off Button | On: profiler can be started by this trigger; Off: profiler won't be started by this trigger.
+Memory threshold | When this percentage of memory is in use, the profiler will be started.
+Duration | Sets the length of time the profiler will run when triggered.
+Cooldown | Sets the length of time the profiler will wait before checking for the memory or CPU usage again after it's triggered.
+
+**Sampling trigger**
+
+Unlike CPU or memory triggers, the Sampling trigger isn't triggered by an event. Instead, it's triggered randomly to get a truly random sample of your application's performance. You can:
+- Turn this trigger off to disable random sampling.
+- Set how often profiling will occur and the duration of the profiling session.
++
+| Setting | Description |
+|-|-|
+On / Off Button | On: profiler can be started by this trigger; Off: profiler won't be started by this trigger.
+Sample rate | The rate at which the profiler can occur. </br> <ul><li>The **Normal** setting collects data 5% of the time, which is about 2 minutes per hour.</li><li>The **High** setting profiles 50% of the time.</li><li>The **Maximum** setting profiles 75% of the time.</li></ul> </br> Normal is recommended for production environments.
+Duration | Sets the length of time the profiler will run when triggered.
+
+## Recent Profiling Sessions
+This section of the Profiler page displays recent profiling session information. A profiling session represents the time taken by the profiler agent while profiling one of the machines hosting your application. Open the profiles from a session by clicking on one of the rows. For each session, we show:
+
+| Setting | Description |
+|-|-|
+Triggered by | How the session was started, either by a trigger, Profile Now, or default sampling.
+App Name | Name of the application that was profiled.
+Machine Instance | Name of the machine the profiler agent ran on.
+Timestamp | Time when the profile was captured.
+Traces | Number of traces that were attached to individual requests.
+CPU % | Percentage of CPU that was being used while the profiler was running.
+Memory % | Percentage of memory that was being used while the profiler was running.
+
+## Next steps
+[Enable Profiler and view traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json)
+
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-trackrequests.md
+
+ Title: Write code to track requests with Azure Application Insights | Microsoft Docs
+description: Write code to track requests with Application Insights so you can get profiles for your requests.
+ Last updated : 08/06/2018
+# Write code to track requests with Application Insights
+
+To view profiles for your application on the Performance page, Azure Application Insights needs to track requests for your application. Application Insights can automatically track requests for applications that are built on already-instrumented frameworks. Two examples are ASP.NET and ASP.NET Core.
+
+For other applications, such as Azure Cloud Services worker roles and Service Fabric stateless APIs, you need to write code to tell Application Insights where your requests begin and end. After you've written this code, request telemetry is sent to Application Insights. You can view the telemetry on the Performance page, and profiles are collected for those requests.
++
+To manually track requests, do the following:
+
+ 1. Early in the application lifetime, add the following code:
+
+ ```csharp
+ using Microsoft.ApplicationInsights.Extensibility;
+ ...
+ // Replace with your own Application Insights instrumentation key.
+ TelemetryConfiguration.Active.InstrumentationKey = "00000000-0000-0000-0000-000000000000";
+ ```
+ For more information about this global instrumentation key configuration, see [Use Service Fabric with Application Insights](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/blob/dev/appinsights/ApplicationInsights.md).
+
+ 1. For any piece of code that you want to instrument, add a `StartOperation<RequestTelemetry>` **using** statement around it, as shown in the following example:
+
+ ```csharp
+ using Microsoft.ApplicationInsights;
+ using Microsoft.ApplicationInsights.DataContracts;
+ ...
+ var client = new TelemetryClient();
+ ...
+ using (var operation = client.StartOperation<RequestTelemetry>("Insert_Your_Custom_Event_Unique_Name"))
+ {
+ // ... Code I want to profile.
+ }
+ ```
+
+ Calling `StartOperation<RequestTelemetry>` within another `StartOperation<RequestTelemetry>` scope isn't supported. You can use `StartOperation<DependencyTelemetry>` in the nested scope instead. For example:
+
+ ```csharp
+ using (var getDetailsOperation = client.StartOperation<RequestTelemetry>("GetProductDetails"))
+ {
+ try
+ {
+ ProductDetail details = new ProductDetail() { Id = productId };
+ getDetailsOperation.Telemetry.Properties["ProductId"] = productId.ToString();
+
+ // By using DependencyTelemetry, 'GetProductPrice' is correctly linked as part of the 'GetProductDetails' request.
+ using (var getPriceOperation = client.StartOperation<DependencyTelemetry>("GetProductPrice"))
+ {
+ double price = await _priceDataBase.GetAsync(productId);
+ if (IsTooCheap(price))
+ {
+ throw new PriceTooLowException(productId);
+ }
+ details.Price = price;
+ }
+
+ // Similarly, note how 'GetProductReviews' doesn't establish another RequestTelemetry.
+ using (var getReviewsOperation = client.StartOperation<DependencyTelemetry>("GetProductReviews"))
+ {
+ details.Reviews = await _reviewDataBase.GetAsync(productId);
+ }
+
+ getDetailsOperation.Telemetry.Success = true;
+ return details;
+ }
+ catch(Exception ex)
+ {
+ getDetailsOperation.Telemetry.Success = false;
+
+ // This exception gets linked to the 'GetProductDetails' request telemetry.
+ client.TrackException(ex);
+ throw;
+ }
+ }
+ ```
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md
+
+ Title: Troubleshoot problems with Azure Application Insights Profiler
+description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Profiler.
+ Last updated : 08/06/2018
+# Troubleshoot problems enabling or viewing Application Insights Profiler
+
+## <a id="troubleshooting"></a>General troubleshooting
+
+### Make sure you're using the appropriate Profiler Endpoint
+
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
+
+|App Setting | US Government Cloud | China Cloud |
+|||-|
+|ApplicationInsightsProfilerEndpoint | `https://profiler.monitor.azure.us` | `https://profiler.monitor.azure.cn` |
+|ApplicationInsightsEndpoint | `https://dc.applicationinsights.us` | `https://dc.applicationinsights.azure.cn` |
+
+### Profiles are uploaded only if there are requests to your application while Profiler is running
+
+Azure Application Insights Profiler collects data for two minutes each hour. It can also collect data when you select the **Profile Now** button in the **Configure Application Insights Profiler** pane.
+
+> [!NOTE]
+> The profiling data is uploaded only when it can be attached to a request that happened while Profiler was running.
+
+Profiler writes trace messages and custom events to your Application Insights resource. You can use these events to see how Profiler is running:
+
+1. Search for trace messages and custom events sent by Profiler to your Application Insights resource. You can use this search string to find the relevant data:
+
+ ```
+ stopprofiler OR startprofiler OR upload OR ServiceProfilerSample
+ ```
+    The following image displays two examples of searches from two Application Insights resources:
+
+ * At the left, the application isn't receiving requests while Profiler is running. The message explains that the upload was canceled because of no activity.
+
+    * At the right, Profiler started and sent custom events when it detected requests that happened while Profiler was running. If the `ServiceProfilerSample` custom event is displayed, it means that a profile was captured and it's available in the **Application Insights Performance** pane.
+
+ If no records are displayed, Profiler isn't running. To troubleshoot, see the troubleshooting sections for your specific app type later in this article.
+
+ ![Search Profiler telemetry][profiler-search-telemetry]
+
+### Other things to check
+* Make sure that your app is running on .NET Framework 4.6.
+* If your web app is an ASP.NET Core application, it must be running at least ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+* If the data you're trying to view is older than a couple of weeks, try limiting your time filter and try again. Traces are deleted after seven days.
+* Make sure that proxies or a firewall haven't blocked access to https://gateway.azureserviceprofiler.net. (You can check connectivity with the PowerShell sketch after this list.)
+* Profiler isn't supported on free or shared app service plans. If you're using one of those plans, try scaling up to one of the basic plans and Profiler should start working.
+
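+To check connectivity to the Profiler gateway from a Windows machine, a sketch using the built-in Test-NetConnection cmdlet:
+
+```powershell
+# A sketch: verify that HTTPS traffic to the Profiler gateway isn't blocked.
+Test-NetConnection -ComputerName "gateway.azureserviceprofiler.net" -Port 443
+```
+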
+### <a id="double-counting"></a>Double counting in parallel threads
+
+In some cases, the total time metric in the stack viewer is more than the duration of the request.
+
+This situation might occur when two or more parallel threads are associated with a request. In that case, the total thread time is more than the elapsed time.
+
+One thread might be waiting on the other to be completed. The viewer tries to detect this situation and omits the uninteresting wait. In doing so, it errs on the side of displaying too much information rather than omitting what might be critical information.
+
+When you see parallel threads in your traces, determine which threads are waiting so that you can identify the hot path for the request.
+
+Usually, the thread that quickly goes into a wait state is simply waiting on the other threads. Concentrate on the other threads, and ignore the time in the waiting threads.
+
+### Error report in the profile viewer
+Submit a support ticket in the portal. Be sure to include the correlation ID from the error message.
+
+## Troubleshoot Profiler on Azure App Service
+
+For Profiler to work properly:
+* Your web app service plan must be Basic tier or higher.
+* Your web app must have Application Insights enabled.
+* Your web app must have the following app settings:
+
+ |App Setting | Value |
+ ||-|
+ |APPINSIGHTS_INSTRUMENTATIONKEY | iKey for your Application Insights resource |
+ |APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 |
+ |DiagnosticServices_EXTENSION_VERSION | ~3 |
++
+* The **ApplicationInsightsProfiler3** webjob must be running. To check the webjob:
+ 1. Go to [Kudu](/archive/blogs/cdndevs/the-kudu-debug-console-azure-websites-best-kept-secret).
+ 1. In the **Tools** menu, select **WebJobs Dashboard**.
+ The **WebJobs** pane opens.
+
+ ![Screenshot shows the WebJobs pane, which displays the name, status, and last run time of jobs.][profiler-webjob]
+
+ 1. To view the details of the webjob, including the log, select the **ApplicationInsightsProfiler3** link.
+ The **Continuous WebJob Details** pane opens.
+
+ ![Screenshot shows the Continuous WebJob Details pane.][profiler-webjob-log]
+
+If Profiler isn't working for you, you can download the log and send it to our team for assistance at serviceprofilerhelp@microsoft.com.
+
+### Check the Diagnostic Services site extension's Status Page
+If Profiler was enabled through the [Application Insights pane](profiler.md) in the portal, it was enabled by the Diagnostic Services site extension.
+
+> [!NOTE]
+> Codeless installation of Application Insights Profiler follows the .NET Core support policy.
+> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+
+You can check the Status Page of this extension by going to the following URL:
+`https://{site-name}.scm.azurewebsites.net/DiagnosticServices`
+
+> [!NOTE]
+> The domain of the Status Page link will vary depending on the cloud.
> This domain will be the same as the Kudu management site for App Service.
+
+This Status Page shows the installation state of the Profiler and Snapshot Collector agents. If an unexpected error occurred, the error is displayed along with instructions on how to fix it.
+
+You can use the Kudu management site for App Service to get the base URL of this Status Page:
+1. Open your App Service application in the Azure portal.
+2. Select **Advanced Tools**, or search for **Kudu**.
+3. Select **Go**.
+4. Once you're on the Kudu management site, **append `/DiagnosticServices` to the URL and press Enter**.
+   The URL ends like this: `https://<kudu-url>/DiagnosticServices`
+
+A Status Page similar to the following is displayed:
+![Diagnostic Services Status Page](../app/media/diagnostic-services-site-extension/status-page.png)
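+
+If you'd rather fetch the Status Page from a terminal, a minimal sketch using `curl`; it assumes basic authentication with your app's deployment (Kudu) credentials, and the placeholders are yours to fill in:
+
+```bash
+# Returns the Status Page if the Diagnostic Services site extension is installed.
+curl -u '<deployment-username>:<deployment-password>' \
+    "https://<site-name>.scm.azurewebsites.net/DiagnosticServices"
+```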
+
+### Manual installation
+
+When you configure Profiler, updates are made to the web app's settings. If your environment requires it, you can apply the updates manually. An example might be that your application is running in a Web Apps environment for Power Apps. To apply updates manually:
+
+1. In the **Web App Control** pane, open **Settings**.
+
+1. Set **.NET Framework version** to **v4.6**.
+
+1. Set **Always On** to **On**.
+1. Create these app settings (a scripted alternative follows the table):
+
+ |App Setting | Value |
+  |--|--|
+ |APPINSIGHTS_INSTRUMENTATIONKEY | iKey for your Application Insights resource |
+ |APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 |
+ |DiagnosticServices_EXTENSION_VERSION | ~3 |
+
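+If you'd rather script these settings than set them in the portal, a minimal sketch using the Azure CLI (resource names are placeholders):
+
+```azurecli
+# Applies the three Profiler app settings in one call.
+az webapp config appsettings set \
+    --name <your-app-name> \
+    --resource-group <your-resource-group> \
+    --settings APPINSIGHTS_INSTRUMENTATIONKEY=<your-ikey> \
+               APPINSIGHTS_PROFILERFEATURE_VERSION=1.0.0 \
+               DiagnosticServices_EXTENSION_VERSION="~3"
+```
+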
+### Too many active profiling sessions
+
+You can enable Profiler on a maximum of four web apps that are running in the same service plan. If you have more than four, Profiler might throw a *Microsoft.ServiceProfiler.Exceptions.TooManyETWSessionException*. To solve it, move some web apps to a different service plan.
+
+### Deployment error: Directory Not Empty 'D:\\home\\site\\wwwroot\\App_Data\\jobs'
+
+If you're redeploying your web app to a Web Apps resource with Profiler enabled, you might see the following message:
+
+*Directory Not Empty 'D:\\home\\site\\wwwroot\\App_Data\\jobs'*
+
+This error occurs if you run Web Deploy from scripts or from Azure Pipelines. The solution is to add the following deployment parameters to the Web Deploy task:
+
+```
+-skip:Directory='.*\\App_Data\\jobs\\continuous\\ApplicationInsightsProfiler.*' -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs\\continuous$' -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs$' -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data$'
+```
+
+These parameters exclude the folder that's used by Application Insights Profiler from deletion and unblock the redeploy process. They don't affect the Profiler instance that's currently running.
+
+### How do I determine whether Application Insights Profiler is running?
+
+Profiler runs as a continuous webjob in the web app. You can open the web app resource in the [Azure portal](https://portal.azure.com). In the **WebJobs** pane, check the status of **ApplicationInsightsProfiler**. If it isn't running, open **Logs** to get more information.
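+
+You can also check the webjob without opening the portal; a sketch with the Azure CLI (names are placeholders):
+
+```azurecli
+# Lists continuous webjobs for the app; look for ApplicationInsightsProfiler3 and its status.
+az webapp webjob continuous list \
+    --name <your-app-name> \
+    --resource-group <your-resource-group> \
+    --output table
+```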
+
+## Troubleshoot VMs and Cloud Services
+
+>**The bug in the profiler that ships in the WAD for Cloud Services has been fixed.** The latest version of WAD (1.12.2.0) for Cloud Services works with all recent versions of the Application Insights SDK. Cloud Service hosts upgrade WAD automatically, but the upgrade isn't immediate. To force an upgrade, you can redeploy your service or reboot the node.
+
+To see whether Profiler is configured correctly by Azure Diagnostics, follow these steps:
+1. Verify that the content of the deployed Azure Diagnostics configuration is what you expect.
+
+1. Make sure that Azure Diagnostics passes the proper iKey on the Profiler command line.
+
+1. Check the Profiler log file to see whether Profiler ran but returned an error.
+
+To check the settings that were used to configure Azure Diagnostics:
+
+1. Sign in to the virtual machine (VM), and then open the log file at this location. The plugin version may be newer on your machine.
+
+ For VMs:
+ ```
+ c:\WindowsAzure\logs\Plugins\Microsoft.Azure.Diagnostics.PaaSDiagnostics\1.11.3.12\DiagnosticsPlugin.log
+ ```
+
+   For Cloud Services:
+ ```
+ c:\logs\Plugins\Microsoft.Azure.Diagnostics.PaaSDiagnostics\1.11.3.12\DiagnosticsPlugin.log
+ ```
+
+1. In the file, you can search for the string **WadCfg** to find the settings that were passed to the VM to configure Azure Diagnostics. You can check to see whether the iKey used by the Profiler sink is correct.
+
+1. Check the command line that's used to start Profiler. The arguments that are used to launch Profiler are in the following file. (The drive could be c: or d: and the directory may be hidden.)
+
+ For VMs:
+ ```
+ C:\ProgramData\ApplicationInsightsProfiler\config.json
+ ```
+
+   For Cloud Services:
+ ```
+ D:\ProgramData\ApplicationInsightsProfiler\config.json
+ ```
+
+1. Make sure that the iKey on the Profiler command line is correct.
+
+1. Using the path found in the preceding *config.json* file, check the Profiler log file, called **BootstrapN.log**. It displays the debug information that indicates the settings that Profiler is using. It also displays status and error messages from Profiler.
+
+ For VMs, the file is here:
+ ```
+ C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\1.17.0.6\ApplicationInsightsProfiler
+ ```
+
+   For Cloud Services:
+ ```
+ C:\Logs\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\1.17.0.6\ApplicationInsightsProfiler
+ ```
+
+ If Profiler is running while your application is receiving requests, the following message is displayed: *Activity detected from iKey*.
+
+ When the trace is being uploaded, the following message is displayed: *Start to upload trace*.
++
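+To scan the Bootstrap log for these messages without an interactive sign-in, one option is the Azure CLI's run-command; the log path wildcard and the resource names here are assumptions:
+
+```azurecli
+az vm run-command invoke \
+    --resource-group <your-resource-group> \
+    --name <your-vm-name> \
+    --command-id RunPowerShellScript \
+    --scripts "Select-String -Path 'C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\*\ApplicationInsightsProfiler\Bootstrap*.log' -Pattern 'Activity detected from iKey|Start to upload trace'"
+```
+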
+## Edit network proxy or firewall rules
+
+If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Profiler service.
+
+The IPs used by Application Insights Profiler are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
++
+[profiler-search-telemetry]:./media/profiler-troubleshooting/Profiler-Search-Telemetry.png
+[profiler-webjob]:./media/profiler-troubleshooting/profiler-web-job.png
+[profiler-webjob-log]:./media/profiler-troubleshooting/profiler-web-job-log.png
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-vm.md
+
+ Title: Profile web apps on an Azure VM - Application Insights Profiler
+description: Profile web apps on an Azure VM by using Application Insights Profiler.
+ Last updated : 11/08/2019++
+# Profile web apps running on an Azure virtual machine or a virtual machine scale set by using Application Insights Profiler
++
+You can also deploy Azure Application Insights Profiler on these services:
+* [Azure App Service](./profiler.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
+* [Azure Cloud Services](profiler-cloudservice.md?toc=/azure/azure-monitor/toc.json)
+* [Azure Service Fabric](profiler-servicefabric.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
+
+## Deploy Profiler on a virtual machine or a virtual machine scale set
+This article shows you how to get Application Insights Profiler running on your Azure virtual machine (VM) or Azure virtual machine scale set. Profiler is installed with the Azure Diagnostics extension for VMs. Configure the extension to run Profiler, and build the Application Insights SDK into your application.
+
+1. Add the Application Insights SDK to your [ASP.NET application](../app/asp-net.md).
+
+ To view profiles for your requests, you must send request telemetry to Application Insights.
+
+1. Install the Azure Diagnostics extension on your VM. For full Resource Manager template examples, see:
+ * [Virtual machine](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachine.json)
+ * [Virtual machine scale set](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/WindowsVirtualMachineScaleSet.json)
+
+ The key part is the ApplicationInsightsProfilerSink in the WadCfg. To have Azure Diagnostics enable Profiler to send data to your iKey, add another sink to this section.
+
+ ```json
+ "SinksConfig": {
+ "Sink": [
+ {
+ "name": "ApplicationInsightsSink",
+ "ApplicationInsights": "85f73556-b1ba-46de-9534-606e08c6120f"
+ },
+ {
+ "name": "MyApplicationInsightsProfilerSink",
+ "ApplicationInsightsProfiler": "85f73556-b1ba-46de-9534-606e08c6120f"
+ }
+ ]
+ },
+ ```
+
+1. Deploy the modified environment deployment definition.
+
+ Applying the modifications usually involves a full template deployment or a cloud service-based publish through PowerShell cmdlets or Visual Studio.
+
+   For existing virtual machines, the following PowerShell commands are an alternative approach that touches only the Azure Diagnostics extension. Add the previously mentioned ProfilerSink to the configuration that's returned by the `Get-AzVMDiagnosticsExtension` command, and then pass the updated configuration to the `Set-AzVMDiagnosticsExtension` command.
+
+ ```powershell
+ $ConfigFilePath = [IO.Path]::GetTempFileName()
+ # After you export the currently deployed Diagnostics config to a file, edit it to include the ApplicationInsightsProfiler sink.
+ (Get-AzVMDiagnosticsExtension -ResourceGroupName "MyRG" -VMName "MyVM").PublicSettings | Out-File -Verbose $ConfigFilePath
+ # Set-AzVMDiagnosticsExtension might require the -StorageAccountName argument
+ # If your original diagnostics configuration had the storageAccountName property in the protectedSettings section (which is not downloadable), be sure to pass the same original value you had in this cmdlet call.
+ Set-AzVMDiagnosticsExtension -ResourceGroupName "MyRG" -VMName "MyVM" -DiagnosticsConfigurationPath $ConfigFilePath
+ ```
+
+1. If the intended application is running through [IIS](https://www.microsoft.com/web/downloads/platform.aspx), enable the `IIS Http Tracing` Windows feature.
+
+ 1. Establish remote access to the environment, and then use the [Add Windows features](/iis/configuration/system.webserver/tracing/) window. Or run the following command in PowerShell (as administrator):
+
+ ```powershell
+ Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All
+ ```
+
+ 1. If establishing remote access is a problem, you can use the [Azure CLI](/cli/azure/get-started-with-azure-cli) to run the following command:
+
+ ```azurecli
+ az vm run-command invoke -g MyResourceGroupName -n MyVirtualMachineName --command-id RunPowerShellScript --scripts "Enable-WindowsOptionalFeature -FeatureName IIS-HttpTracing -Online -All"
+ ```
+
+1. Deploy your application.
+
+## Set Profiler Sink using Azure Resource Explorer
+
+We don't yet have a way to set the Application Insights Profiler sink from the portal. Instead of using PowerShell as described above, you can use Azure Resource Explorer to set the sink. Note that if you deploy the VM again, the sink is lost; you'll need to update the configuration you use when deploying the VM to preserve this setting.
+
+1. Check that the Windows Azure Diagnostics extension is installed by viewing the extensions installed for your virtual machine.
+
+ ![Check if WAD extension is installed][wadextension]
+
+2. Find the VM Diagnostics extension for your VM. Go to [https://resources.azure.com](https://resources.azure.com). Expand your resource group, then **Microsoft.Compute** > **virtualMachines** > your virtual machine's name > **extensions**.
+
+ ![Navigate to WAD config in Azure Resource Explorer][azureresourceexplorer]
+
+3. Add the Application Insights Profiler sink to the `SinksConfig` node under `WadCfg`. If you don't already have a `SinksConfig` section, you may need to add one. Be sure to specify the proper Application Insights iKey in your settings. Switch the explorer's mode to **Read/Write** in the upper-right corner, and then select the blue **Edit** button.
+
+ ![Add Application Insights Profiler Sink][resourceexplorersinksconfig]
+
+4. When you're done editing the config, select **PUT**. If the request succeeds, a green check mark appears in the middle of the screen.
+
+ ![Send put request to apply changes][resourceexplorerput]
+
+## Can Profiler run on on-premises servers?
+We have no plan to support Application Insights Profiler for on-premises servers.
+
+## Next steps
+
+- Generate traffic to your application (for example, launch an [availability test](../app/monitor-web-app-availability.md)). Then, wait 10 to 15 minutes for traces to start to be sent to the Application Insights instance.
+- See [Profiler traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json) in the Azure portal.
+- For help with troubleshooting Profiler issues, see [Profiler troubleshooting](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).
+
+[azureresourceexplorer]: ./media/profiler-vm/azure-resource-explorer.png
+[resourceexplorerput]: ./media/profiler-vm/resource-explorer-put.png
+[resourceexplorersinksconfig]: ./media/profiler-vm/resource-explorer-sinks-config.png
+[wadextension]: ./media/profiler-vm/wad-extension.png
+
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler.md
+
+ Title: Enable Profiler for Azure App Service apps | Microsoft Docs
+description: Profile live apps on Azure App Service with Application Insights Profiler.
+ Last updated : 05/11/2022++
+# Enable Profiler for Azure App Service apps
+
+Application Insights Profiler is pre-installed as part of the App Service runtime. You can run Profiler on ASP.NET and ASP.NET Core apps running on Azure App Service using the Basic service tier or higher. Follow these steps even if you've included the Application Insights SDK in your application at build time.
+
+To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps instructions](profiler-aspnetcore-linux.md).
+
+> [!NOTE]
+> Codeless installation of Application Insights Profiler follows the .NET Core support policy.
+> For more information about supported runtime, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
++
+## Prerequisites
+
+- An [Azure App Service ASP.NET/ASP.NET Core app](../../app-service/quickstart-dotnetcore.md).
+- [Application Insights resource](../app/create-new-resource.md) connected to your App Service app.
+
+## Verify "Always On" setting is enabled
+
+1. In the Azure portal, navigate to your App Service.
+1. Under **Settings** in the left side menu, select **Configuration**.
+
+ :::image type="content" source="./media/profiler/configuration-menu.png" alt-text="Screenshot of selecting Configuration from the left side menu.":::
+
+1. Select the **General settings** tab.
+1. Verify **Always On** > **On** is selected.
+
+ :::image type="content" source="./media/profiler/always-on.png" alt-text="Screenshot of the General tab on the Configuration pane and showing the Always On being enabled.":::
+
+1. Select **Save** if you've made changes.
+
+## Enable Application Insights and Profiler
+
+1. Under **Settings** in the left side menu, select **Application Insights**.
+
+ :::image type="content" source="./media/profiler/app-insights-menu.png" alt-text="Screenshot of selecting Application Insights from the left side menu.":::
+
+1. Under **Application Insights**, select **Enable**.
+1. Verify you've connected an Application Insights resource to your app.
+
+ :::image type="content" source="./media/profiler/enable-app-insights.png" alt-text="Screenshot of enabling App Insights on your app.":::
+
+1. Scroll down and select the **.NET** or **.NET Core** tab, depending on your app.
+1. Verify **Collection Level** > **Recommended** is selected.
+1. Under **Profiler**, select **On**.
+ - If you chose the **Basic** collection level earlier, the Profiler setting is disabled.
+1. Select **Apply**, then **Yes** to confirm.
+
+ :::image type="content" source="./media/profiler/enable-profiler.png" alt-text="Screenshot of enabling Profiler on your app.":::
+
+## Enable Profiler using app settings
+
+If your Application Insights resource is in a different subscription from your App Service, you'll need to enable Profiler manually by creating app settings for your Azure App Service. You can automate the creation of these settings by using a template or other means. The following settings are needed to enable Profiler:
+
+|App Setting | Value |
+|--|--|
+|APPINSIGHTS_INSTRUMENTATIONKEY | iKey for your Application Insights resource |
+|APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 |
+|DiagnosticServices_EXTENSION_VERSION | ~3 |
+
+Set these values using:
+- [Azure Resource Manager Templates](../app/azure-web-apps-net-core.md#app-service-application-settings-with-azure-resource-manager)
+- [Azure PowerShell](/powershell/module/az.websites/set-azwebapp)
+- [Azure CLI](/cli/azure/webapp/config/appsettings)
+
+## Enable Profiler for other clouds
+
+Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
+
+|App Setting | US Government Cloud | China Cloud |
+|--|--|--|
+|ApplicationInsightsProfilerEndpoint | `https://profiler.monitor.azure.us` | `https://profiler.monitor.azure.cn` |
+|ApplicationInsightsEndpoint | `https://dc.applicationinsights.us` | `https://dc.applicationinsights.azure.cn` |
+
+## Enable Azure Active Directory authentication for profile ingestion
+
+Application Insights Profiler supports Azure AD authentication for profile ingestion. For all profiles of your application to be ingested, your application must be authenticated and provide the required application settings to the Profiler agent.
+
+Profiler only supports Azure AD authentication when you reference and configure Azure AD using the Application Insights SDK in your application.
+
+To enable Azure AD for profiles ingestion:
+
+1. Create a managed identity, which is used to authenticate against your Application Insights resource, and add it to your App Service.
+
+ a. [System-Assigned Managed identity documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-system-assigned-identity)
+
+ b. [User-Assigned Managed identity documentation](../../app-service/overview-managed-identity.md?tabs=portal%2chttp#add-a-user-assigned-identity)
+
+1. [Configure and enable Azure AD](../app/azure-ad-authentication.md?tabs=net#configuring-and-enabling-azure-ad-based-authentication) in your Application Insights resource.
+
+1. Add the following application setting to let the Profiler agent know which managed identity to use:
+
+ For System-Assigned Identity:
+
+ |App Setting | Value |
+   |--|--|
+ |APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD |
+
+ For User-Assigned Identity:
+
+ |App Setting | Value |
+   |--|--|
+ |APPLICATIONINSIGHTS_AUTHENTICATION_STRING | Authorization=AAD;ClientId={Client id of the User-Assigned Identity} |
+
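+   If you script this setting, quote the value because it contains `=` and `;` characters; a sketch with the Azure CLI (names are placeholders):
+
+   ```azurecli
+   az webapp config appsettings set \
+       --name <your-app-name> \
+       --resource-group <your-resource-group> \
+       --settings "APPLICATIONINSIGHTS_AUTHENTICATION_STRING=Authorization=AAD;ClientId=<client-id>"
+   ```
+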
+## Disable Profiler
+
+To stop or restart Profiler for an individual app's instance:
+
+1. Under **Settings** in the left side menu, select **WebJobs**.
+
+ :::image type="content" source="./media/profiler/web-jobs-menu.png" alt-text="Screenshot of selecting web jobs from the left side menu.":::
+
+1. Select the webjob named `ApplicationInsightsProfiler3`.
+
+1. Click **Stop** from the top menu.
+
+ :::image type="content" source="./media/profiler/stop-web-job.png" alt-text="Screenshot of selecting stop for stopping the webjob.":::
+
+1. Select **Yes** to confirm.
+
+We recommend that you have Profiler enabled on all your apps to discover any performance issues as early as possible.
+
+Profiler's files can be deleted when you use Web Deploy to deploy changes to your web application. You can prevent the deletion by excluding the App_Data folder from being deleted during deployment.
+
+## Next steps
+
+* [Working with Application Insights in Visual Studio](../app/visual-studio.md)
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-installation.md
na Previously updated : 02/05/2022 Last updated : 06/01/2022
This section explains how to enable communication with storage. Ensure the stora
# [SAP HANA](#tab/sap-hana)
+> [!IMPORTANT]
+> If deploying to a centralized virtual machine, it needs to have the SAP HANA client installed and set up so that the AzAcSnap user can run `hdbsql` and `hdbuserstore` commands. The SAP HANA Client can be downloaded from https://tools.hana.ondemand.com/#hanatools.
+ The snapshot tools communicate with SAP HANA and need a user with appropriate permissions to initiate and release the database save-point. The following example shows the setup of the SAP HANA v2 user and the `hdbuserstore` for communication to the SAP HANA database.
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-preview.md
na Previously updated : 03/07/2022 Last updated : 06/01/2022
The following example commands set up a user (AZACSNAP) in the Oracle database,
1. Copy the ZIP file to the target system (for example, the centralized virtual machine running AzAcSnap).
- > [!NOTE]
+ > [!IMPORTANT]
> If deploying to a centralized virtual machine, it needs to have the Oracle Instant Client installed and set up so that the AzAcSnap user can run `sqlplus` commands.
> The Oracle Instant Client can be downloaded from https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html.
> In order for SQL\*Plus to run correctly, download both the required package (for example, Basic Light Package) and the optional SQL\*Plus tools package.
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references to SAP on Azure solutions.
### SAP AnyDB
-* [SAP System on Oracle Database on Azure - Azure Architecture Center](/azure/architecture/example-scenario/apps/sap-on-oracle)
+* [SAP System on Oracle Database on Azure - Azure Architecture Center](/azure/architecture/example-scenario/apps/sap-production)
* [Oracle Azure Virtual Machines DBMS deployment for SAP workload - Azure Virtual Machines](../virtual-machines/workloads/sap/dbms_guide_oracle.md#oracle-configuration-guidelines-for-sap-installations-in-azure-vms-on-linux) * [Deploy SAP AnyDB (Oracle 19c) with Azure NetApp Files](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-anydb-oracle-19c-with-azure-netapp-files/ba-p/2064043) * [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408)
azure-resource-manager Tutorial Custom Providers Function Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-authoring.md
Title: Author a RESTful endpoint
description: This tutorial shows how to author a RESTful endpoint for custom providers. It details how to handle requests and responses for the supported RESTful HTTP methods. Previously updated : 01/13/2021 Last updated : 05/06/2022
In this tutorial, you update the function app to work as a RESTful endpoint for
- **POST**: Trigger an action - **GET (collection)**: List all existing resources
- For this tutorial, you use Azure Table storage. But any database or storage service can work.
+ For this tutorial, you use Azure Table storage, but any database or storage service works.
## Partition custom resources in storage
The following example shows an `x-ms-customproviders-requestpath` header for a c
X-MS-CustomProviders-RequestPath: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/{myResourceType}/{myResourceName} ```
-Based on the example's `x-ms-customproviders-requestpath` header, you can create the *partitionKey* and *rowKey* parameters for your storage as shown in the following table:
+Based on the `x-ms-customproviders-requestpath` header, you can create the *partitionKey* and *rowKey* parameters for your storage as shown in the following table:
Parameter | Template | Description ||
public class CustomResource : ITableEntity
public ETag ETag { get; set; } } ```+ **CustomResource** is a simple, generic class that accepts any input data. It's based on **ITableEntity**, which is used to store data. The **CustomResource** class implements all properties from interface **ITableEntity**: **timestamp**, **eTag**, **partitionKey**, and **rowKey**. ## Support custom provider RESTful methods
public static async Task<HttpResponseMessage> TriggerCustomAction(HttpRequestMes
} ```
-The **TriggerCustomAction** method accepts an incoming request and simply echoes back the response with a status code.
+The **TriggerCustomAction** method accepts an incoming request and echoes back the response with a status code.
### Create a custom resource
public static async Task<HttpResponseMessage> CreateCustomResource(HttpRequestMe
} ```
-The **CreateCustomResource** method updates the incoming request to include the Azure-specific fields **id**, **name**, and **type**. These fields are top-level properties used by services across Azure. They let the custom provider interoperate with other services like Azure Policy, Azure Resource Manager Templates, and Azure Activity Log.
+The **CreateCustomResource** method updates the incoming request to include the Azure-specific fields **id**, **name**, and **type**. These fields are top-level properties used by services across Azure. They let the custom provider interoperate with other services like Azure Policy, Azure Resource Manager templates, and Azure Activity Log.
Property | Example | Description ||
azure-resource-manager Tutorial Resource Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md
Title: Tutorial - resource onboarding
+ Title: Extend resources with custom providers
description: Resource onboarding through custom providers allows you to manipulate and extend existing Azure resources. Previously updated : 09/17/2019 Last updated : 05/06/2022
-# Tutorial: Resource onboarding with Azure Custom Providers
+# Extend resources with custom providers
-In this tutorial, you'll deploy to Azure a custom resource provider that extends the Azure Resource Manager API with the Microsoft.CustomProviders/associations resource type. The tutorial shows how to extend existing resources that are outside the resource group where the custom provider instance is located. In this tutorial, the custom resource provider is powered by an Azure logic app, but you can use any public API endpoint.
+In this tutorial, you deploy a custom resource provider to Azure that extends the Azure Resource Manager API with the Microsoft.CustomProviders/associations resource type. The tutorial shows how to extend existing resources that are outside the resource group where the custom provider instance is located. In this tutorial, the custom resource provider is powered by an Azure logic app, but you can use any public API endpoint.
## Prerequisites
-To complete this tutorial, you need to know:
+To complete this tutorial, make sure you review the following:
* The capabilities of [Azure Custom Providers](overview.md). * Basic information about [resource onboarding with custom providers](concepts-resource-onboarding.md). ## Get started with resource onboarding
-In this tutorial, there are two pieces that need to be deployed: the custom provider and the association. To make the process easier, you can optionally use a single template that deploys both.
+In this tutorial, there are two pieces that need to be deployed: **the custom provider** and **the association**. To make the process easier, you can optionally use a single template that deploys both.
The template will use these resources:
-* Microsoft.CustomProviders/resourceProviders
-* Microsoft.Logic/workflows
-* Microsoft.CustomProviders/associations
+* [Microsoft.CustomProviders/resourceProviders](/azure/templates/microsoft.customproviders/resourceproviders)
+* [Microsoft.Logic/workflows](/azure/templates/microsoft.logic/workflows)
+* [Microsoft.CustomProviders/associations](/azure/templates/microsoft.customproviders/associations)
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "location": {
The template will use these resources:
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2017-05-10",
+ "apiVersion": "2021-04-01",
"condition": "[empty(parameters('customResourceProviderId'))]", "name": "customProviderInfrastructureTemplate", "properties": {
The template will use these resources:
"resources": [ { "type": "Microsoft.Logic/workflows",
- "apiVersion": "2017-07-01",
+ "apiVersion": "2019-05-01",
"name": "[parameters('logicAppName')]", "location": "[parameters('location')]", "properties": {
The template will use these resources:
"name": "associations", "mode": "Secure", "routingType": "Webhook,Cache,Extension",
- "endpoint": "[[listCallbackURL(concat(resourceId('Microsoft.Logic/workflows', parameters('logicAppName')), '/triggers/CustomProviderWebhook'), '2017-07-01').value]"
+ "endpoint": "[[listCallbackURL(concat(resourceId('Microsoft.Logic/workflows', parameters('logicAppName')), '/triggers/CustomProviderWebhook'), '2019-05-01').value]"
} ] }
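If you'd rather deploy the saved template from the command line than follow the portal steps below, a minimal sketch with the Azure CLI (the file name is an assumption):

```azurecli
# Deploys the onboarding template; logicAppName comes from the template's parameters.
az deployment group create \
    --resource-group <your-resource-group> \
    --template-file ./customProviderOnboarding.json \
    --parameters logicAppName=<logic-app-name>
```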
The template will use these resources:
The first part of the template deploys the custom provider infrastructure. This infrastructure defines the effect of the associations resource. If you're not familiar with custom providers, see [Custom provider basics](overview.md).
-Let's deploy the custom provider infrastructure. Either copy, save, and deploy the preceding template, or follow along and deploy the infrastructure by using the Azure portal.
+Let's deploy the custom provider infrastructure. Either copy, save, and deploy the preceding template, or follow along and deploy the infrastructure using the Azure portal.
1. Go to the [Azure portal](https://portal.azure.com).
Let's deploy the custom provider infrastructure. Either copy, save, and deploy t
![Select Add](media/tutorial-resource-onboarding/templatesadd.png)
-4. Under **General**, enter a **Name** and **Description** for the new template:
+4. Under **General**, enter a *Name* and *Description* for the new template:
![Template name and description](media/tutorial-resource-onboarding/templatesdescription.png)
Let's deploy the custom provider infrastructure. Either copy, save, and deploy t
After you have the custom provider infrastructure set up, you can easily deploy more associations. The resource group for additional associations doesn't have to be the same as the resource group where you deployed the custom provider infrastructure. To create an association, you need to have Microsoft.CustomProviders/resourceproviders/write permissions on the specified Custom Resource Provider ID.
-1. Go to the custom provider **Microsoft.CustomProviders/resourceProviders** resource in the resource group of the previous deployment. You'll need to select the **Show hidden types** check box:
+1. Go to the custom provider **Microsoft.CustomProviders/resourceProviders** resource in the resource group of the previous deployment. You need to select the **Show hidden types** check box:
![Go to the resource](media/tutorial-resource-onboarding/showhidden.png) 2. Copy the Resource ID property of the custom provider.
-3. Search for **templates** in **All Services** or by using the main search box:
+3. Search for *templates* in **All Services** or by using the main search box:
![Search for templates](media/tutorial-resource-onboarding/templates.png)
After you have the custom provider infrastructure set up, you can easily deploy
![New associations resource](media/tutorial-resource-onboarding/createdassociationresource.png)
-If you want, you can go back to the logic app **Run history** and see that another call was made to the logic app. You can update the logic app to augment additional functionality for each created association.
+You can go back to the logic app **Run history** and see that another call was made to the logic app. You can update the logic app to augment additional functionality for each created association.
-## Getting help
+## Next steps
-If you have questions about Azure Custom Providers, try asking them on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-custom-providers). A similar question might have already been answered, so check first before posting. Add the tag `azure-custom-providers` to get a fast response!
+In this article, you deployed a custom resource provider to Azure that extends the Azure Resource Manager API with the Microsoft.CustomProviders/associations resource type. To continue learning about custom providers, see:
+* [Deploy associations for a custom provider using Azure Policy](./concepts-built-in-policy.md)
+* [Azure Custom Providers resource onboarding overview](./concepts-resource-onboarding.md)
azure-signalr Signalr Concept Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-performance.md
One of the key benefits of using Azure SignalR Service is the ease of scaling Si
In this guide, we'll introduce the factors that affect SignalR application performance. We'll describe typical performance in different use-case scenarios. In the end, we'll introduce the environment and tools that you can use to generate a performance report.
+## Quick evaluation using metrics
+ Before going through the factors that impact performance, let's first introduce an easy way to monitor the pressure of your service. There's a metric called **Server Load** in the portal.
+
+ <kbd>![Screenshot of the Server Load metric of Azure SignalR on Portal. The metrics shows Server Load is at about 8 percent usage. ](./media/signalr-concept-performance/server-load.png "Server Load")</kbd>
++
+ It shows the computing pressure of your SignalR service. You can test your own scenario and check this metric to decide whether to scale up. The latency inside the SignalR service remains low if the Server Load is below 70%.
+
+> [!NOTE]
+> If you're using unit 50 or unit 100 **and** your scenario mainly involves sending to small groups (group size <100) or single connections, check [sending to small group](#small-group) or [sending to connection](#send-to-connection) for reference. In those scenarios, there's a large routing cost that isn't included in the Server Load.
+
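+ If you prefer the command line, you can pull the same metric with the Azure CLI; a sketch that assumes the metric name `ServerLoad` and uses placeholder resource values:
+
+ ```azurecli
+ # Shows Server Load over the last hour in 5-minute buckets.
+ az monitor metrics list \
+     --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.SignalRService/SignalR/<name>" \
+     --metric "ServerLoad" \
+     --interval PT5M \
+     --output table
+ ```
+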
+ Below are detailed concepts for evaluating performance.
+ ## Term definitions *Inbound*: The incoming message to Azure SignalR Service.
azure-video-indexer Customize Person Model With Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/customize-person-model-with-website.md
You can add more faces to the person by selecting **Add images**.
Select the image you wish to delete and click **Delete**.
-#### Rename and delete the person
+#### Rename and delete a person
You can use the manage pane to rename the person and to delete the person from the Person model.
To delete a detected face in your video, go to the Insights pane and select the
The person, if they had been named, will also continue to exist in the Person model that was used to index the video from which you deleted the face unless you specifically delete the person from the Person model.
+## Optimize the ability of your model to recognize a person
+
+To optimize your model's ability to recognize a person, upload as many different images as possible, from different angles. To get optimal results, use high-resolution images.
+ ## Next steps [Customize Person model using APIs](customize-person-model-with-api.md)
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-configure-networking.md
Title: Tutorial - Configure networking for your VMware private cloud in Azure
description: Learn to create and configure the networking needed to deploy your private cloud in Azure Previously updated : 07/30/2021 Last updated : 05/31/2022
In this tutorial, you learn how to:
## Connect with the Azure vNet connect feature
-You can use the **Azure vNet connect** feature to use an existing vNet or create a new vNet to connect to Azure VMware Solution.
+You can use the **Azure vNet connect** feature to use an existing vNet or create a new vNet to connect to Azure VMware Solution. **Azure vNet connect** is a function that configures vNet connectivity; it doesn't record configuration state. Browse the Azure portal to check which settings have been configured.
>[!NOTE] >Address space in the vNet cannot overlap with the Azure VMware Solution private cloud CIDR.
Before selecting an existing vNet, there are specific requirements that must be
1. In the same region as Azure VMware Solution private cloud. 1. In the same resource group as Azure VMware Solution private cloud. 1. vNet must contain an address space that doesn't overlap with Azure VMware Solution.
+1. Validate that the solution design is within [Azure VMware Solution limits](/azure/azure-resource-manager/management/azure-subscription-service-limits).
### Select an existing vNet
azure-web-pubsub Tutorial Serverless Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-iot.md
+
+ Title: Tutorial - Visualize IoT device data from IoT Hub using Azure Web PubSub service and Azure Functions
+description: A tutorial to walk through how to use Azure Web PubSub service and Azure Functions to monitor device data from IoT Hub.
++++ Last updated : 06/01/2022++
+# Tutorial: Visualize IoT device data from IoT Hub using Azure Web PubSub service and Azure Functions
+
+In this tutorial, you learn how to use Azure Web PubSub service and Azure Functions to build a serverless application with real-time data visualization from IoT Hub.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Build a serverless data visualization app
+> * Use Web PubSub function input and output bindings together with Azure IoT Hub
+> * Run the sample functions locally
+
+## Prerequisites
+
+# [JavaScript](#tab/javascript)
+
+* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+
+* [Node.js](https://nodejs.org/en/download/), version 10.x.
+ > [!NOTE]
+ > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
+
+* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v3 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+
+* The [Azure CLI](/cli/azure) to manage Azure resources.
+
+## Create a Web PubSub instance
+If you already have a Web PubSub instance in your Azure subscription, you can skip this section.
+++
+## Create and run the functions locally
+
+1. Make sure you have [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) installed. Then create an empty directory for the project and run the following commands in that working directory.
+
+ # [JavaScript](#tab/javascript)
+ ```bash
+ func init --worker-runtime javascript
+ ```
+
+
+2. Update `host.json`'s `extensionBundle` to version _3.3.0_ or later, which contains Web PubSub support.
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.*, 4.0.0)"
+ }
+}
+```
+
+3. Create an `index` function to read and host a static web page for clients.
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
+ # [JavaScript](#tab/javascript)
+   - Update `index/index.js` with the following code, which serves the HTML content as a static site.
+ ```js
+ var fs = require("fs");
+ var path = require("path");
+
+ module.exports = function (context, req) {
+ let index = path.join(
+ context.executionContext.functionDirectory,
+ "https://docsupdatetracker.net/index.html"
+ );
+ fs.readFile(index, "utf8", function (err, data) {
+ if (err) {
+ console.log(err);
+ context.done(err);
+ return;
+ }
+ context.res = {
+ status: 200,
+ headers: {
+ "Content-Type": "text/html",
+ },
+ body: data,
+ };
+ context.done();
+ });
+ };
+
+ ```
+
+4. Create this _index.html_ file under the same folder as the _index.js_ file:
+
+ ```html
+ <!doctype html>
+
+ <html lang="en">
+
+ <head>
+ <!-- Required meta tags -->
+ <meta charset="utf-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
+ <script src="https://cdn.jsdelivr.net/npm/chart.js@2.8.0/dist/Chart.min.js" type="text/javascript"
+ charset="utf-8"></script>
+ <script>
+ document.addEventListener("DOMContentLoaded", async function (event) {
+ const res = await fetch(`/api/negotiate?id=${1}`);
+ const data = await res.json();
+ const webSocket = new WebSocket(data.url);
+
+ class TrackedDevices {
+ constructor() {
+ // key as the deviceId, value as the temperature array
+ this.devices = new Map();
+ this.maxLen = 50;
+ this.timeData = new Array(this.maxLen);
+ }
+
+ // Find a device temperature based on its Id
+ findDevice(deviceId) {
+ return this.devices.get(deviceId);
+ }
+
+ addData(time, temperature, deviceId, dataSet, options) {
+ let containsDeviceId = false;
+ this.timeData.push(time);
+ for (const [key, value] of this.devices) {
+ if (key === deviceId) {
+ containsDeviceId = true;
+ value.push(temperature);
+ } else {
+ value.push(null);
+ }
+ }
+
+ if (!containsDeviceId) {
+ const data = getRandomDataSet(deviceId, 0);
+ let temperatures = new Array(this.maxLen);
+ temperatures.push(temperature);
+ this.devices.set(deviceId, temperatures);
+ data.data = temperatures;
+ dataSet.push(data);
+ }
+
+ if (this.timeData.length > this.maxLen) {
+ this.timeData.shift();
+ this.devices.forEach((value, key) => {
+ value.shift();
+ })
+ }
+ }
+
+ getDevicesCount() {
+ return this.devices.size;
+ }
+ }
+
+ const trackedDevices = new TrackedDevices();
+ function getRandom(max) {
+ return Math.floor((Math.random() * max) + 1)
+ }
+ function getRandomDataSet(id, axisId) {
+ return getDataSet(id, axisId, getRandom(255), getRandom(255), getRandom(255));
+ }
+ function getDataSet(id, axisId, r, g, b) {
+ return {
+ fill: false,
+ label: id,
+ yAxisID: axisId,
+ borderColor: `rgba(${r}, ${g}, ${b}, 1)`,
+ pointBoarderColor: `rgba(${r}, ${g}, ${b}, 1)`,
+ backgroundColor: `rgba(${r}, ${g}, ${b}, 0.4)`,
+ pointHoverBackgroundColor: `rgba(${r}, ${g}, ${b}, 1)`,
+ pointHoverBorderColor: `rgba(${r}, ${g}, ${b}, 1)`,
+ spanGaps: true,
+ };
+ }
+
+ function getYAxy(id, display) {
+ return {
+ id: id,
+ type: "linear",
+ scaleLabel: {
+ labelString: display || id,
+ display: true,
+ },
+ position: "left",
+ };
+ }
+
+ // Define the chart axes
+ const chartData = { datasets: [], };
+
+      // Temperature (°C), id as 0
+ const chartOptions = {
+ responsive: true,
+ animation: {
+ duration: 250 * 1.5,
+ easing: 'linear'
+ },
+ scales: {
+ yAxes: [
+          getYAxy(0, "Temperature (°C)"),
+ ],
+ },
+ };
+ // Get the context of the canvas element we want to select
+ const ctx = document.getElementById("chart").getContext("2d");
+
+ chartData.labels = trackedDevices.timeData;
+ const chart = new Chart(ctx, {
+ type: "line",
+ data: chartData,
+ options: chartOptions,
+ });
+
+ webSocket.onmessage = function onMessage(message) {
+ try {
+ const messageData = JSON.parse(message.data);
+ console.log(messageData);
+
+              // the message date and a temperature reading are required
+              if (!messageData.MessageDate ||
+ !messageData.IotData.temperature) {
+ return;
+ }
+ trackedDevices.addData(messageData.MessageDate, messageData.IotData.temperature, messageData.DeviceId, chartData.datasets, chartOptions.scales);
+ const numDevices = trackedDevices.getDevicesCount();
+ document.getElementById("deviceCount").innerText =
+ numDevices === 1 ? `${numDevices} device` : `${numDevices} devices`;
+ chart.update();
+ } catch (err) {
+ console.error(err);
+ }
+ };
+ });
+ </script>
+ <style>
+ body {
+ font: 14px "Lucida Grande", Helvetica, Arial, sans-serif;
+ padding: 50px;
+ margin: 0;
+ text-align: center;
+ }
+
+ .flexHeader {
+ display: flex;
+ flex-direction: row;
+ flex-wrap: nowrap;
+ justify-content: space-between;
+ }
+
+ #charts {
+ display: flex;
+ flex-direction: row;
+ flex-wrap: wrap;
+ justify-content: space-around;
+ align-content: stretch;
+ }
+
+ .chartContainer {
+ flex: 1;
+ flex-basis: 40%;
+ min-width: 30%;
+ max-width: 100%;
+ }
+
+ a {
+ color: #00B7FF;
+ }
+ </style>
+
+ <title>Temperature Real-time Data</title>
+ </head>
+
+ <body>
+ <h1 class="flexHeader">
+ <span>Temperature Real-time Data</span>
+ <span id="deviceCount">0 devices</span>
+ </h1>
+ <div id="charts">
+ <canvas id="chart"></canvas>
+ </div>
+ </body>
+
+ </html>
+ ```
+
+5. Create a `negotiate` function to help clients get the service connection URL with an access token.
+ ```bash
+ func new -n negotiate -t HttpTrigger
+ ```
+ # [JavaScript](#tab/javascript)
+   - Update `negotiate/function.json` to include the input binding [`WebPubSubConnection`](reference-functions-bindings.md#input-binding), with the following JSON code.
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "webPubSubConnection",
+ "name": "connection",
+ "hub": "%hubName%",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+   - Update `negotiate/index.js` to return the `connection` binding, which contains the generated token.
+ ```js
+ module.exports = function (context, req, connection) {
+ // Add your own auth logic here
+ context.res = { body: connection };
+ context.done();
+ };
+ ```
+
+6. Create a `messagehandler` function to generate notifications with template `"IoT Hub (Event Hub)"`.
+ ```bash
+ func new --template "IoT Hub (Event Hub)" --name messagehandler
+ ```
+ # [JavaScript](#tab/javascript)
+   - Update _messagehandler/function.json_ to add the [Web PubSub output binding](reference-functions-bindings.md#output-binding) with the following JSON code. Note that the variable `%hubName%` is used as the hub name for both the IoT eventHubName and the Web PubSub hub.
+ ```json
+ {
+ "bindings": [
+ {
+ "type": "eventHubTrigger",
+ "name": "IoTHubMessages",
+ "direction": "in",
+ "eventHubName": "%hubName%",
+ "connection": "IOTHUBConnectionString",
+ "cardinality": "many",
+ "consumerGroup": "$Default",
+ "dataType": "string"
+ },
+ {
+ "type": "webPubSub",
+ "name": "actions",
+ "hub": "%hubName%",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
+   - Update `messagehandler/index.js` with the following code. It sends every message from the IoT hub to every client connected to the Web PubSub service by using the Web PubSub output binding.
+ ```js
+ module.exports = function (context, IoTHubMessages) {
+ IoTHubMessages.forEach((message) => {
+ const deviceMessage = JSON.parse(message);
+ context.log(`Processed message: ${message}`);
+ context.bindings.actions = {
+ actionName: "sendToAll",
+ data: JSON.stringify({
+ IotData: deviceMessage,
+ MessageDate: deviceMessage.date || new Date().toISOString(),
+ DeviceId: deviceMessage.deviceId,
+ }),
+ };
+ });
+
+ context.done();
+ };
+ ```
+
+7. Update the Function settings
+
+ 1. Add `hubName` setting and replace `{YourIoTHubName}` with the hub name you used when creating your IoT Hub:
+
+ ```bash
+ func settings add hubName "{YourIoTHubName}"
+ ```
+
+    2. Get the **Service Connection String** for IoT Hub using the following CLI command:
+
+ ```azcli
+ az iot hub connection-string show --policy-name service --hub-name {YourIoTHubName} --output table --default-eventhub
+ ```
+
+    Then set `IOTHubConnectionString` using the following command, replacing `<iot-connection-string>` with the value:
+
+ ```bash
+ func settings add IOTHubConnectionString "<iot-connection-string>"
+ ```
+
+    3. Get the **Connection String** for Web PubSub using the following CLI command:
+
+ ```azcli
+ az webpubsub key show --name "<your-unique-resource-name>" --resource-group "<your-resource-group>" --query primaryConnectionString
+ ```
+
+    Then set `WebPubSubConnectionString` using the following command, replacing `<webpubsub-connection-string>` with the value:
+
+ ```bash
+ func settings add WebPubSubConnectionString "<webpubsub-connection-string>"
+ ```
+
+ > [!NOTE]
+    > The `IoT Hub (Event Hub)` Function trigger used in the sample has a dependency on Azure Storage, but you can use the local storage emulator when the Function is running locally. If you get an error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you'll need to download and enable the [Storage Emulator](../storage/common/storage-use-emulator.md).
+
+8. Run the function locally
+
+ Now you're able to run your local function by command below.
+
+ ```bash
+ func start
+ ```
+
+   Check the running logs, and then visit the locally hosted static page at `http://localhost:7071/api/index`.
+
+## Run the device to send data
+
+### Register a device
+
+A device must be registered with your IoT hub before it can connect.
+
+If you already have a device registered in your IoT hub, you can skip this section.
+
+1. Run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create) command in Azure Cloud Shell to create the device identity.
+
+ **YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub.
+
+ ```azurecli-interactive
+ az iot hub device-identity create --hub-name {YourIoTHubName} --device-id simDevice
+ ```
+
+2. Run the [az iot hub device-identity connection-string show](/cli/azure/iot/hub/device-identity/connection-string#az-iot-hub-device-identity-connection-string-show) command in Azure Cloud Shell to get the _device connection string_ for the device you just registered:
+
+ **YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub.
+
+ ```azurecli-interactive
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id simDevice --output table
+ ```
+
+ Make a note of the device connection string, which looks like:
+
+ `HostName={YourIoTHubName}.azure-devices.net;DeviceId=simDevice;SharedAccessKey={YourSharedAccessKey}`
+
+- For quickest results, simulate temperature data using the [Raspberry Pi Azure IoT Online Simulator](https://azure-samples.github.io/raspberry-pi-web-simulator/#Getstarted). Paste in the **device connection string**, and select the **Run** button.
+
+- If you have a physical Raspberry Pi and BME280 sensor, you may measure and report real temperature and humidity values by following the [Connect Raspberry Pi to Azure IoT Hub (Node.js)](/azure/iot-hub/iot-hub-raspberry-pi-kit-node-get-started) tutorial.
+
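+- Alternatively, with the Azure IoT extension for the Azure CLI installed, you can push simulated telemetry straight from the CLI; a minimal sketch (the payload shape is an assumption chosen to match what the dashboard reads):
+
+    ```azurecli
+    az iot device simulate --hub-name {YourIoTHubName} --device-id simDevice \
+        --data '{"temperature": 25.5, "deviceId": "simDevice"}' \
+        --msg-count 100 --msg-interval 2
+    ```
+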
+## Run the visualization website
+Open the function host's index page (`http://localhost:7071/api/index`) to view the real-time dashboard. Register multiple devices and you can see the dashboard update all of them in real time. Open multiple browsers and you can see every page update in real time.
++
+## Clean up resources
++
+## Next steps
+
+In this tutorial, you learned how to run a serverless application that visualizes IoT device data in real time. Now, you can start to build your own application.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
+
+> [!div class="nextstepaction"]
+> [Azure Web PubSub bindings for Azure Functions](https://azure.github.io/azure-webpubsub/references/functions-bindings)
+
+> [!div class="nextstepaction"]
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
backup Backup Azure Manage Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-manage-vms.md
In the Azure portal, the Recovery Services vault dashboard provides access to va
You can manage backups by using the dashboard and by drilling down to individual VMs. To begin machine backups, open the vault on the dashboard:
-![Full dashboard view with slider](./media/backup-azure-manage-vms/bottom-slider.png)
[!INCLUDE [backup-center.md](../../includes/backup-center.md)]
To view VMs on the vault dashboard:
1. Sign in to the [Azure portal](https://portal.azure.com/). 1. On the left menu, select **All services**.
- ![Select All services](./media/backup-azure-manage-vms/select-all-services.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/select-all-services.png" alt-text="Screenshot showing to select All services.":::
1. In the **All services** dialog box, enter *Recovery Services*. The list of resources filters according to your input. In the list of resources, select **Recovery Services vaults**.
- ![Enter and choose Recovery Services vaults](./media/backup-azure-manage-vms/all-services.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/all-services.png" alt-text="Screenshot showing to enter and choose Recovery Services vaults.":::
The list of Recovery Services vaults in the subscription appears. 1. For ease of use, select the pin icon next to your vault name and select **Pin to dashboard**. 1. Open the vault dashboard.
- ![Open the vault dashboard and Settings pane](./media/backup-azure-manage-vms/full-view-rs-vault.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/full-view-rs-vault.png" alt-text="Screenshot showing to open the vault dashboard and Settings pane.":::
1. On the **Backup Items** tile, select **Azure Virtual Machine**.
- ![Open the Backup Items tile](./media/backup-azure-manage-vms/azure-virtual-machine.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/azure-virtual-machine.png" alt-text="Screenshot showing to open the Backup Items tile.":::
1. On the **Backup Items** pane, you can view the list of protected VMs. In this example, the vault protects one virtual machine: *myVMR1*.
- ![View the Backup Items pane](./media/backup-azure-manage-vms/backup-items-blade-select-item.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-items-blade-select-item.png" alt-text="Screenshot showing to view the Backup Items pane.":::
1. From the vault item's dashboard, you can modify backup policies, run an on-demand backup, stop or resume protection of VMs, delete backup data, view restore points, and run a restore.
- ![The Backup Items dashboard and the Settings pane](./media/backup-azure-manage-vms/item-dashboard-settings.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/item-dashboard-settings.png" alt-text="Screenshot showing the Backup Items dashboard and the Settings pane.":::
## Manage backup policy for a VM
To manage a backup policy:
1. Sign in to the [Azure portal](https://portal.azure.com/). Open the vault dashboard. 2. On the **Backup Items** tile, select **Azure Virtual Machine**.
- ![Open the Backup Items tile](./media/backup-azure-manage-vms/azure-virtual-machine.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/azure-virtual-machine.png" alt-text="Screenshot showing to open the Backup Items tile.":::
3. On the **Backup Items** pane, you can view the list of protected VMs, the last backup status, and the time of the latest restore point.
- ![View the Backup Items pane](./media/backup-azure-manage-vms/backup-items-blade-select-item.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-items-blade-select-item.png" alt-text="Screenshot showing to view the Backup Items pane.":::
4. From the vault item's dashboard, you can select a backup policy. * To switch policies, select a different policy and then select **Save**. The new policy is immediately applied to the vault.
- ![Choose a backup policy](./media/backup-azure-manage-vms/backup-policy-create-new.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-policy-create-new.png" alt-text="Screenshot showing to choose a backup policy.":::
## Run an on-demand backup
To trigger an on-demand backup:
1. On the [vault item dashboard](#view-vms-on-the-dashboard), under **Protected Item**, select **Backup Item**.
- ![The Backup now option](./media/backup-azure-manage-vms/backup-now-button.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-now-button.png" alt-text="Screenshot showing the Backup now option.":::
2. From **Backup Management Type**, select **Azure Virtual Machine**. The **Backup Item (Azure Virtual Machine)** pane appears. 3. Select a VM and select **Backup Now** to create an on-demand backup. The **Backup Now** pane appears. 4. In the **Retain Backup Till** field, specify a date for the backup to be retained.
- ![The Backup Now calendar](./media/backup-azure-manage-vms/backup-now-check.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-now-check.png" alt-text="Screenshot showing the Backup Now calendar.":::
5. Select **OK** to run the backup job.
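   If you prefer scripting, the same on-demand backup can be triggered with the Azure CLI. The following is a minimal sketch only; the resource group, vault, container, and item names are placeholders you'd replace with your own values:

   ```azurecli
   # Trigger an on-demand backup for a protected VM and retain it until the given date (dd-mm-yyyy)
   az backup protection backup-now \
       --resource-group myResourceGroup \
       --vault-name myRecoveryServicesVault \
       --container-name myVMR1 \
       --item-name myVMR1 \
       --backup-management-type AzureIaasVM \
       --retain-until 20-06-2022
   ```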
To stop protection and retain data of a VM:
1. On the [vault item's dashboard](#view-vms-on-the-dashboard), select **Stop backup**. 2. Choose **Retain Backup Data**, and confirm your selection as needed. Add a comment if you want. If you aren't sure of the item's name, hover over the exclamation mark to view the name.
- ![Retain Backup data](./media/backup-azure-manage-vms/retain-backup-data.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/retain-backup-data.png" alt-text="Screenshot showing to retain Backup data.":::
A notification lets you know that the backup jobs have been stopped.
To stop protection and delete data of a VM:
1. On the [vault item's dashboard](#view-vms-on-the-dashboard), select **Stop backup**. 2. Choose **Delete Backup Data**, and confirm your selection as needed. Enter the name of the backup item and add a comment if you want.
- ![Delete backup data](./media/backup-azure-manage-vms/delete-backup-data.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/delete-backup-data.png" alt-text="Screenshot showing to delete backup data.":::
 > [!NOTE] > After completing the delete operation, the backed-up data is retained for 14 days in the [soft deleted state](./soft-delete-virtual-machines.md). <br>You can also [enable or disable soft delete](./backup-azure-security-feature-cloud.md#enabling-and-disabling-soft-delete).
To resume protection for a VM:
2. Follow the steps in [Manage backup policies](#manage-backup-policy-for-a-vm) to assign the policy for the VM. You don't need to choose the VM's initial protection policy. 3. After you apply the backup policy to the VM, you see the following message:
- ![Message indicating a successfully protected VM](./media/backup-azure-manage-vms/success-message.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/success-message.png" alt-text="Screenshot showing message indicating a successfully protected VM.":::
## Delete backup data
There are two ways to delete a VM's backup data:
* From the vault item dashboard, select **Stop backup** and follow the instructions for the [Stop protection and delete backup data](#stop-protection-and-delete-backup-data) option.
- ![Select Stop backup](./media/backup-azure-manage-vms/stop-backup-button.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/stop-backup-button.png" alt-text="Screenshot showing to select Stop backup.":::
* From the vault item dashboard, select **Delete backup data**. This option is enabled only if you chose the [Stop protection and retain backup data](#stop-protection-and-retain-backup-data) option when you stopped VM protection.
- ![Select Delete backup](./media/backup-azure-manage-vms/delete-backup-button.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/delete-backup-button.png" alt-text="Screenshot showing to select Delete backup.":::
* On the [vault item dashboard](#view-vms-on-the-dashboard), select **Delete backup data**. * Type the name of the backup item to confirm that you want to delete the recovery points.
- ![Delete backup data](./media/backup-azure-manage-vms/delete-backup-data.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/delete-backup-data.png" alt-text="Screenshot showing to delete backup data.":::
* To delete the backup data for the item, select **Delete**. A notification message lets you know that the backup data has been deleted.
cloud-services-extended-support Swap Cloud Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/swap-cloud-service.md
To save compute costs, you can delete one of the cloud services (designated as a
## REST API
-To use the [REST API](/rest/api/compute/load-balancers/swap-public-ip-addresses) to swap to a new cloud services deployment in Azure Cloud Services (extended support), use the following command and JSON configuration:
+To use the [REST API](/rest/api/load-balancer/load-balancers/swap-public-ip-addresses) to swap to a new cloud services deployment in Azure Cloud Services (extended support), use the following command and JSON configuration:
```http
POST https://management.azure.com/subscriptions/subid/providers/Microsoft.Network/locations/westus/setLoadBalancerFrontendPublicIpAddresses?api-version=2021-02-01
```
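The JSON configuration isn't shown here in full. As an illustrative sketch, the request body lists the frontend IP configurations whose public IP addresses should swap; the resource IDs below are placeholders for your two cloud services' load balancer frontends and public IPs:

```json
{
  "frontendIPConfigurations": [
    {
      "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/loadBalancers/lb1/frontendIPConfigurations/frontend1",
      "properties": {
        "publicIPAddress": {
          "id": "/subscriptions/subid/resourceGroups/rg2/providers/Microsoft.Network/publicIPAddresses/pip2"
        }
      }
    },
    {
      "id": "/subscriptions/subid/resourceGroups/rg2/providers/Microsoft.Network/loadBalancers/lb2/frontendIPConfigurations/frontend2",
      "properties": {
        "publicIPAddress": {
          "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/publicIPAddresses/pip1"
        }
      }
    }
  ]
}
```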
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following neural voices are in public preview.
| Language | Locale | Gender | Voice name | Style support | |-||--|-||
-| English (United Kingdom) | `en-GB` | Female | `en-GB-AbbiNeural` <sup>New</sup> | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-BellaNeural` <sup>New</sup> | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-HollieNeural` <sup>New</sup> | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-OliviaNeural` <sup>New</sup> | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-MaisieNeural` <sup>New</sup> | General, child voice |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-AlfieNeural` <sup>New</sup> | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-ElliotNeural` <sup>New</sup> | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-EthanNeural` <sup>New</sup> | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-NoahNeural` <sup>New</sup> | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-OliverNeural` <sup>New</sup> | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-ThomasNeural` <sup>New</sup> | General |
-| English (United States) | `en-US` | Male | `en-US-DavisNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Female | `en-US-JaneNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Male | `en-US-JasonNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Female | `en-US-NancyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Male | `en-US-TonyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| French (France) | `fr-FR` | Female | `fr-FR-BrigitteNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Female | `fr-FR-CelesteNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Female | `fr-FR-CoralieNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Female | `fr-FR-JacquelineNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Female | `fr-FR-JosephineNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Female | `fr-FR-YvetteNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Female | `fr-FR-EloiseNeural` <sup>New</sup> | General, child voice |
-| French (France) | `fr-FR` | Male | `fr-FR-AlainNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Male | `fr-FR-ClaudeNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Male | `fr-FR-JeromeNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Male | `fr-FR-MauriceNeural` <sup>New</sup> | General |
-| French (France) | `fr-FR` | Male | `fr-FR-YvesNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Female | `de-DE-AmalaNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Female | `de-DE-ElkeNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Female | `de-DE-KlarissaNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Female | `de-DE-LouisaNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Female | `de-DE-MajaNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Female | `de-DE-TanjaNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Female | `de-DE-GiselaNeural` <sup>New</sup> | General, child voice |
-| German (Germany) | `de-DE` | Male | `de-DE-BerndNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Male | `de-DE-ChristophNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KasperNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KillianNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KlausNeural` <sup>New</sup> | General |
-| German (Germany) | `de-DE` | Male | `de-DE-RalfNeural` <sup>New</sup> | General |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunjianNeural` <sup>New</sup> | Optimized for broadcasting sports events, with two new styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunhaoNeural` <sup>New</sup> | Optimized for promoting a product or service, with one new style available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunfengNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-AbbiNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-BellaNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-HollieNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-OliviaNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-MaisieNeural` | General, child voice |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-AlfieNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-ElliotNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-EthanNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-NoahNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-OliverNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-ThomasNeural` | General |
+| English (United States) | `en-US` | Male | `en-US-DavisNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Female | `en-US-JaneNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-JasonNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Female | `en-US-NancyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-TonyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| French (France) | `fr-FR` | Female | `fr-FR-BrigitteNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-CelesteNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-CoralieNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-JacquelineNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-JosephineNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-YvetteNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-EloiseNeural` | General, child voice |
+| French (France) | `fr-FR` | Male | `fr-FR-AlainNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-ClaudeNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-JeromeNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-MauriceNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-YvesNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-AmalaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-ElkeNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-KlarissaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-LouisaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-MajaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-TanjaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-GiselaNeural` | General, child voice |
+| German (Germany) | `de-DE` | Male | `de-DE-BerndNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-ChristophNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KasperNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KillianNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KlausNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-RalfNeural` | General |
### Voice styles and roles
Use the following table to determine supported styles and roles for each neural
|Voice|Styles|Style degree|Roles| |--|--|--|--| |en-US-AriaNeural|`angry`, `chat`, `cheerful`, `customerservice`, `empathetic`, `excited`, `friendly`, `hopeful`, `narration-professional`, `newscast-casual`, `newscast-formal`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-DavisNeural|`angry`, `chat`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-DavisNeural <sup>Public preview</sup>|`angry`, `chat`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
|en-US-GuyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-JaneNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-JasonNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-JaneNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-JasonNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
|en-US-JennyNeural|`angry`, `assistant`, `chat`, `cheerful`, `customerservice`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-NancyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-NancyNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
|en-US-SaraNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-TonyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|fr-FR-DeniseNeural |`cheerful` <sup>Public preview</sup>, `sad`<sup>Public preview</sup>|||
+|en-US-TonyNeural <sup>Public preview</sup>|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|fr-FR-DeniseNeural |`cheerful`, `sad`|||
|ja-JP-NanamiNeural|`chat`, `cheerful`, `customerservice`||| |pt-BR-FranciscaNeural|`calm`||| |zh-CN-XiaohanNeural|`affectionate`, `angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `gentle`, `sad`, `serious`|Supported||
Use the following table to determine supported styles and roles for each neural
|zh-CN-YunxiNeural|`angry`, `assistant`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `fearful`, `narration-relaxed`, `sad`, `serious`|Supported|Supported| |zh-CN-YunyangNeural|`customerservice`, `narration-professional`, `newscast-casual`|Supported|| |zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `sad`, `serious`|Supported|Supported|
+|zh-CN-YunjianNeural <sup>Public preview</sup>|`narration-relaxed`, `sports-commentary` <sup>Public preview</sup>, `sports-commentary-excited` <sup>Public preview</sup>|Supported||
+|zh-CN-YunhaoNeural <sup>Public preview</sup>|`general`, `advertisement-upbeat` <sup>Public preview</sup>|Supported||
+|zh-CN-YunfengNeural <sup>Public preview</sup>|`calm`, `angry`, `disgruntled`, `cheerful`, `fearful`, `sad`, `serious`, `depressed`|Supported||
### Custom Neural Voice
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
In the following tables, the parameters without the **Adjustable** row aren't ad
<sup>3</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/> <sup>4</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices) and [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).<br/>
-<sup>5</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit-for-custom-neural-voices).<br/>
+<sup>5</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit).<br/>
## Detailed description, quota adjustment, and best practices
Suppose that a Speech service resource has the concurrent request limit set to 3
Generally, it's a good idea to test the workload and workload patterns before going to production.
-### Text-to-speech: increase concurrent request limit for custom neural voices
+### Text-to-speech: increase concurrent request limit
-By default, the number of concurrent requests for Custom Neural Voice endpoints is limited to 10. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
+For the standard pricing tier, you can increase the concurrent request limit. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
Increasing the limit of concurrent requests doesn't directly affect your costs. Speech service uses a payment model that requires you to pay only for what you use. The limit defines how high the service can scale before it starts throttling your requests.
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following table has descriptions of each supported style.
|Style|Description| |--|-|
+|`style="advertisement-upbeat"`|Expresses an excited and high-energy tone for promoting a product or service.|
|`style="affectionate"`|Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The personality of the speaker is often endearing in nature.| |`style="angry"`|Expresses an angry and annoyed tone.| |`style="assistant"`|Expresses a warm and relaxed tone for digital assistants.|
The following table has descriptions of each supported style.
|`style="sad"`|Expresses a sorrowful tone.| |`style="serious"`|Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence.| |`style="shouting"`|Speaks like from a far distant or outside and to make self be clearly heard|
+|`style="sports-commentary"`|Expresses a relaxed and interesting tone for broadcasting a sports event.|
+|`style="sports-commentary-excited"`|Expresses an intensive and energetic tone for broadcasting exciting moments in a sports event.|
|`style="whispering"`|Speaks very softly and make a quiet and gentle sound| |`style="terrified"`|Expresses a very scared tone, with faster pace and a shakier voice. It sounds like the speaker is in an unsteady and frantic status.| |`style="unfriendly"`|Expresses a cold and indifferent tone.|
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
Previously updated : 05/23/2022 Last updated : 05/31/2022
Yes, you can use [orchestration workflow](../orchestration-workflow/overview.md)
Add any out-of-scope utterances to the [none intent](./concepts/none-intent.md).
+## How do I control the none intent?
+
+You can control the none intent threshold from the UI through the project settings, by changing the none intent threshold value. The value can be between 0.0 and 1.0. You can also change this threshold through the APIs by changing the *confidenceThreshold* value in the settings object. Learn more about the [none intent](./concepts/none-intent.md#none-score-threshold).
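As an illustrative sketch only (the exact project JSON shape can vary by API version), the threshold appears as a `confidenceThreshold` value in the project's `settings` object:

```json
{
  "settings": {
    "confidenceThreshold": 0.7
  }
}
```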
+ ## Is there any SDK support? Yes, only for predictions, and samples are available for [Python](https://aka.ms/sdk-samples-conversation-python) and [C#](https://aka.ms/sdk-sample-conversation-dot-net). There is currently no authoring support for the SDK.
cognitive-services Migrate Qnamaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-qnamaker.md
You can follow the steps below to migrate knowledge bases:
> [!div class="mx-imgBorder"] > ![Migrate QnAMaker with red selection box around the knowledge base selection option with a drop-down displaying three knowledge base names](../media/migrate-qnamaker/select-knowledge-bases.png)
-8. You can review the knowledge bases you plan to migrate. There could be some validation errors in project names as we follow stricter validation rules for custom question answering projects.
+8. You can review the knowledge bases you plan to migrate. There could be some validation errors in project names as we follow stricter validation rules for custom question answering projects. To resolve these errors occurring due to invalid characters, select the checkbox (in red) and select **Next**. This is a one-click method to replace the problematic characters in the name with the accepted characters. If there's a duplicate, a new unique project name is generated by the system.
> [!CAUTION] > If you migrate a knowledge base with the same name as a project that already exists in the target language resource, **the content of the project will be overridden** by the content of the selected knowledge base. > [!div class="mx-imgBorder"]
- > ![Screenshot of an error message starting project names can't contain special characters](../media/migrate-qnamaker/special-characters.png)
+ > ![Screenshot of an error message starting project names can't contain special characters](../media/migrate-qnamaker/migration-kb-name-validation.png)
-9. After resolving any validation errors, select **Next**
+9. After resolving the validation errors, select **Start migration**
> [!div class="mx-imgBorder"]
- > ![Screenshot with special characters removed](../media/migrate-qnamaker/validation-errors.png)
+ > ![Screenshot with special characters removed](../media/migrate-qnamaker/migration-kb-name-validation-success.png)
10. The migration takes a few minutes. Don't cancel the migration while it's in progress. After the migration, you can navigate to the migrated projects within [Language Studio](https://language.azure.com/).
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers.md
You can enable confidential containers in Azure Partners and Open Source Softwar
### Fortanix
-[Fortanix](https://www.fortanix.com/) has portal and Command Line Interface (CLI) experiences to convert their containerized applications to SGX-capable confidential containers. You don't need to modify or recompile the application. Fortanix provides the flexibility to run and manage a broad set of applications. You can use existing applications, new enclave-native applications, and pre-packaged applications. Start with Fortanix's [Enclave Manager](https://em.fortanix.com/) UI or [REST APIs](https://www.fortanix.com/api/em/). Create confidential containers using the Fortanix's [quickstart guide for AKS](https://hubs.li/Q017JnNt0).
+[Fortanix](https://www.fortanix.com/) has portal and Command Line Interface (CLI) experiences to convert their containerized applications to SGX-capable confidential containers. You don't need to modify or recompile the application. Fortanix provides the flexibility to run and manage a broad set of applications. You can use existing applications, new enclave-native applications, and pre-packaged applications. Start with Fortanix's [Enclave Manager](https://em.fortanix.com/) UI or [REST APIs](https://www.fortanix.com/api/). Create confidential containers using the Fortanix's [quickstart guide for AKS](https://hubs.li/Q017JnNt0).
![Diagram of Fortanix deployment process, showing steps to move applications to confidential containers and deploy.](./media/confidential-containers/fortanix-confidential-containers-flow.png)
connectors Connectors Native Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http.md
ms.suite: integration Previously updated : 09/13/2021 Last updated : 05/31/2022 tags: connectors
This built-in action makes an HTTP call to the specified URL for an endpoint and
## Trigger and action outputs
-Here is more information about the outputs from an HTTP trigger or action, which returns this information:
+Here's more information about the outputs from an HTTP trigger or action, which returns this information:
| Property | Type | Description | |-||-|
Here is more information about the outputs from an HTTP trigger or action, which
If you have a **Logic App (Standard)** resource in single-tenant Azure Logic Apps, and you want to use an HTTP operation with any of the following authentication types, make sure to complete the extra setup steps for the corresponding authentication type. Otherwise, the call fails.
-* [TLS/SSL certificate](#tls-ssl-certificate-authentication): Add the app setting, `WEBSITE_LOAD_ROOT_CERTIFICATES`, and provide the thumbprint for your thumbprint for your TLS/SSL certificate.
+* [TLS/SSL certificate](#tls-ssl-certificate-authentication): Add the app setting, `WEBSITE_LOAD_ROOT_CERTIFICATES`, and set the value to the thumbprint for your TLS/SSL certificate.
* [Client certificate or Azure Active Directory Open Authentication (Azure AD OAuth) with the "Certificate" credential type](#client-certificate-authentication): Add the app setting, `WEBSITE_LOAD_USER_PROFILE`, and set the value to `1`.
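For example, in a Standard logic app resource, these app settings might look like the following sketch; the thumbprint value is a placeholder:

```json
{
  "WEBSITE_LOAD_ROOT_CERTIFICATES": "<tls-ssl-certificate-thumbprint>",
  "WEBSITE_LOAD_USER_PROFILE": "1"
}
```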
For example, suppose you have a logic app that sends an HTTP POST request for an
![Multipart form data](./media/connectors-native-http/http-action-multipart.png)
-Here is the same example that shows the HTTP action's JSON definition in the underlying workflow definition:
+Here's the same example that shows the HTTP action's JSON definition in the underlying workflow definition:
```json "HTTP_action": {
HTTP requests have a [timeout limit](../logic-apps/logic-apps-limits-and-config.
To specify the number of seconds between retry attempts, you can add the `Retry-After` header to the HTTP action response. For example, if the target endpoint returns the `429 - Too many requests` status code, you can specify a longer interval between retries. The `Retry-After` header also works with the `202 - Accepted` status code.
-Here is the same example that shows the HTTP action response that contains `Retry-After`:
+Here's the same example that shows the HTTP action response that contains `Retry-After`:
```json {
Here is the same example that shows the HTTP action response that contains `Retr
} ```
+## Pagination support
+
+Sometimes, the target service responds by returning the results one page at a time. If the response specifies the next page with the **nextLink** or **@odata.nextLink** property, you can turn on the **Pagination** setting on the HTTP action. This setting causes the HTTP action to automatically follow these links and get the next page. However, if the response specifies the next page with any other tag, you might have to add a loop to your workflow. Make this loop follow that tag and manually get each page until the tag is null.
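In the underlying workflow definition, turning on this setting corresponds to a pagination policy on the HTTP action. The following is a sketch under that assumption; the URI and `minimumItemCount` value are illustrative:

```json
"HTTP_action": {
    "type": "Http",
    "inputs": {
        "method": "GET",
        "uri": "https://contoso.com/api/items"
    },
    "runtimeConfiguration": {
        "paginationPolicy": {
            "minimumItemCount": 1000
        }
    }
}
```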
## Disable checking location headers
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
Title: Schedule recurring tasks and workflows
-description: Schedule and run recurring automated tasks and workflows with the Recurrence trigger in Azure Logic Apps.
+ Title: Schedule and run recurring workflows
+description: Schedule and run recurring workflows with the generic Recurrence trigger in Azure Logic Apps.
ms.suite: integration Previously updated : 05/27/2022 Last updated : 06/01/2022
-# Create, schedule, and run recurring tasks and workflows with the Recurrence trigger in Azure Logic Apps
+# Schedule and run recurring workflows with the Recurrence trigger in Azure Logic Apps
-To regularly run tasks, processes, or jobs on specific schedule, you can start your logic app workflow with the built-in **Recurrence** trigger, which runs natively in Azure Logic Apps. You can set a date and time as well as a time zone for starting the workflow and a recurrence for repeating that workflow. If the trigger misses recurrences for any reason, for example, due to disruptions or disabled workflows, this trigger doesn't process the missed recurrences but restarts recurrences at the next scheduled interval. For more information about the built-in Schedule triggers and actions, see [Schedule and run recurring automated, tasks, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
+To start and run your workflow on a schedule, you can use the generic Recurrence trigger as the first step. You can set a date, time, and time zone for starting the workflow and a recurrence for repeating that workflow. The following list includes some patterns that this trigger supports along with more advanced recurrences and complex schedules:
-Here are some patterns that this trigger supports along with more advanced recurrences and complex schedules:
+* Run at a specific date and time, then repeat every *n* number of seconds, minutes, hours, days, weeks, or months.
* Run immediately and repeat every *n* number of seconds, minutes, hours, days, weeks, or months.
-* Start at a specific date and time, then run and repeat every *n* number of seconds, minutes, hours, days, weeks, or months.
+* Run immediately and repeat daily at one or more specific times, such as 8:00 AM and 5:00 PM.
-* Run and repeat at one or more times each day, for example, at 8:00 AM and 5:00 PM.
+* Run immediately and repeat weekly on specific days, such as Saturday and Sunday.
-* Run and repeat each week, but only for specific days, such as Saturday and Sunday.
+* Run immediately and repeat weekly on specific days and times, such as Monday through Friday at 8:00 AM and 5:00 PM.
-* Run and repeat each week, but only for specific days and times, such as Monday through Friday at 8:00 AM and 5:00 PM.
+> [!NOTE]
+>
+> To start and run your workflow only once in the future, use the workflow template named
+> **Scheduler: Run Once Jobs**. This template uses the Request trigger and HTTP action,
+> rather than the Recurrence trigger, which doesn't support this recurrence pattern.
+> For more information, see [Run jobs one time only](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#run-once).
-For differences between this trigger and the Sliding Window trigger or for more information about scheduling recurring workflows, see [Schedule and run recurring automated tasks, processes, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
+The Recurrence trigger isn't associated with any specific service, so you can use the trigger with almost any workflow, such as [Consumption logic app workflows and Standard logic app *stateful* workflows](../logic-apps/logic-apps-overview.md#resource-environment-differences). This trigger is currently unavailable for [Standard logic app *stateless* workflows](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-> [!TIP]
-> If you want to trigger your logic app and run only one time in the future, see
-> [Run jobs one time only](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#run-once).
+The Recurrence trigger is part of the built-in Schedule connector and runs natively on the Azure Logic Apps runtime. For more information about the built-in Schedule triggers and actions, see [Schedule and run recurring automated, tasks, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
## Prerequisites * An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Basic knowledge about [logic apps](../logic-apps/logic-apps-overview.md). If you're new to logic apps, learn [how to create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+* Basic knowledge about [logic app workflows](../logic-apps/logic-apps-overview.md). If you're new to logic apps, learn [how to create your first logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+
+<a name="add-recurrence-trigger"></a>
## Add the Recurrence trigger
-1. Sign in to the [Azure portal](https://portal.azure.com). Create a blank logic app.
+1. In the [Azure portal](https://portal.azure.com), create a blank logic app and workflow.
+
+ > [!NOTE]
+ >
+ > If you created a Standard logic app workflow, make sure to create a *stateful* workflow.
+ > The Recurrence trigger is currently unavailable for stateless workflows.
+
+1. In the designer, follow the corresponding steps, based on whether your logic app workflow is [Consumption or Standard](../logic-apps/logic-apps-overview.md#resource-environment-differences).
+
+ **Consumption**
+
+ 1. On the designer, under the search box, select **Built-in**.
+ 1. In the search box, enter **recurrence**.
+ 1. From the triggers list, select the trigger named **Recurrence**.
-1. After Logic App Designer appears, in the search box, enter `recurrence` as your filter. From the triggers list, select this trigger as the first step in your logic app workflow: **Recurrence**
+ ![Screenshot for Consumption logic app workflow designer with "Recurrence" trigger selected.](./media/connectors-native-recurrence/add-recurrence-trigger-consumption.png)
- ![Select "Recurrence" trigger](./media/connectors-native-recurrence/add-recurrence-trigger.png)
+ **Standard**
-1. Set the interval and frequency for the recurrence. In this example, set these properties to run your workflow every week.
+ 1. On the designer, select **Choose operation**.
+ 1. On the **Add a trigger** pane, under the search box, select **Built-in**.
+ 1. In the search box, enter **recurrence**.
+ 1. From the triggers list, select the trigger named **Recurrence**.
- ![Set interval and frequency](./media/connectors-native-recurrence/recurrence-trigger-details.png)
+ ![Screenshot for Standard logic app workflow designer with "Recurrence" trigger selected.](./media/connectors-native-recurrence/add-recurrence-trigger-standard.png)
+
+1. Set the interval and frequency for the recurrence. In this example, set these properties to run your workflow every week, for example:
+
+ **Consumption**
+
+ ![Screenshot for Consumption workflow designer with "Recurrence" trigger interval and frequency.](./media/connectors-native-recurrence/recurrence-trigger-details-consumption.png)
+
+ **Standard**
+
+ ![Screenshot for Standard workflow designer with "Recurrence" trigger interval and frequency.](./media/connectors-native-recurrence/recurrence-trigger-details-standard.png)
| Property | JSON name | Required | Type | Description | |-|--|-||-|
- | **Interval** | `interval` | Yes | Integer | A positive integer that describes how often the workflow runs based on the frequency. Here are the minimum and maximum intervals: <p>- Month: 1-16 months <br>- Week: 1-71 weeks <br>- Day: 1-500 days <br>- Hour: 1-12,000 hours <br>- Minute: 1-72,000 minutes <br>- Second: 1-9,999,999 seconds<p>For example, if the interval is 6, and the frequency is "Month", then the recurrence is every 6 months. |
+ | **Interval** | `interval` | Yes | Integer | A positive integer that describes how often the workflow runs based on the frequency. Here are the minimum and maximum intervals: <br><br>- Month: 1-16 months <br>- Week: 1-71 weeks <br>- Day: 1-500 days <br>- Hour: 1-12,000 hours <br>- Minute: 1-72,000 minutes <br>- Second: 1-9,999,999 seconds<br><br>For example, if the interval is 6, and the frequency is "Month", then the recurrence is every 6 months. |
| **Frequency** | `frequency` | Yes | String | The unit of time for the recurrence: **Second**, **Minute**, **Hour**, **Day**, **Week**, or **Month** | |||||| > [!IMPORTANT]
- > If you use the **Day**, **Week**, or **Month** frequency, and you specify a future date and time, make sure that you set up the recurrence in advance:
+ > If you use the **Day**, **Week**, or **Month** frequency, and you specify a future date and time,
+ > make sure that you set up the recurrence in advance. Otherwise, the workflow might skip the first recurrence.
> > * **Day**: Set up the daily recurrence at least 24 hours in advance. >
For differences between this trigger and the Sliding Window trigger or for more
> > * **Month**: Set up the monthly recurrence at least one month in advance. >
- > Otherwise, the workflow might skip the first recurrence.
- >
- > If a recurrence doesn't specify a specific [start date and time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time), the first recurrence runs immediately
- > when you save or deploy the logic app, despite your trigger's recurrence setup. To avoid this behavior, provide a start
- > date and time for when you want the first recurrence to run.
+ > If a recurrence doesn't specify a specific [start date and time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time),
+ > the first recurrence runs immediately when you save or deploy the logic app, despite your trigger's recurrence setup. To avoid this behavior,
+ > provide a start date and time for when you want the first recurrence to run.
> > If a recurrence doesn't specify any other advanced scheduling options such as specific times to run future recurrences, > those recurrences are based on the last run time. As a result, the start times for those recurrences might drift due to > factors such as latency during storage calls. To make sure that your logic app doesn't miss a recurrence, especially when
- > the frequency is in days or longer, try these options:
+ > the frequency is in days or longer, try the following options:
>
- > * Provide a start date and time for the recurrence plus the specific times when to run subsequent recurrences by using the properties
- > named **At these hours** and **At these minutes**, which are available only for the **Day** and **Week** frequencies.
+ > * Provide a start date and time for the recurrence and the specific times to run subsequent recurrences. You can use the
+ > properties named **At these hours** and **At these minutes**, which are available only for the **Day** and **Week** frequencies.
>
- > * Use the [Sliding Window trigger](../connectors/connectors-native-sliding-window.md), rather than the Recurrence trigger.
+ > * For Consumption logic app workflows, use the [Sliding Window trigger](../connectors/connectors-native-sliding-window.md),
+ > rather than the Recurrence trigger.
1. To set advanced scheduling options, open the **Add new parameter** list. Any options that you select appear on the trigger after selection.
- ![Advanced scheduling options](./media/connectors-native-recurrence/recurrence-trigger-more-options-details.png)
+ **Consumption**
+
+ ![Screenshot for Consumption workflow designer and "Recurrence" trigger with advanced scheduling options.](./media/connectors-native-recurrence/recurrence-trigger-advanced-consumption.png)
+
+ **Standard**
+
+ ![Screenshot for Standard workflow designer and "Recurrence" trigger with advanced scheduling options.](./media/connectors-native-recurrence/recurrence-trigger-advanced-standard.png)
| Property | JSON name | Required | Type | Description | |-|--|-||-| | **Time zone** | `timeZone` | No | String | Applies only when you specify a start time because this trigger doesn't accept [UTC offset](https://en.wikipedia.org/wiki/UTC_offset). Select the time zone that you want to apply. |
- | **Start time** | `startTime` | No | String | Provide a start date and time, which has a maximum of 49 years in the future and must follow the [ISO 8601 date time specification](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations) in [UTC date time format](https://en.wikipedia.org/wiki/Coordinated_Universal_Time), but without a [UTC offset](https://en.wikipedia.org/wiki/UTC_offset): <p><p>YYYY-MM-DDThh:mm:ss if you select a time zone <p>-or- <p>YYYY-MM-DDThh:mm:ssZ if you don't select a time zone <p>So for example, if you want September 18, 2020 at 2:00 PM, then specify "2020-09-18T14:00:00" and select a time zone such as Pacific Standard Time. Or, specify "2020-09-18T14:00:00Z" without a time zone. <p><p>**Important:** If you don't select a time zone, you must add the letter "Z" at the end without any spaces. This "Z" refers to the equivalent [nautical time](https://en.wikipedia.org/wiki/Nautical_time). If you select a time zone value, you don't need to add a "Z" to the end of your **Start time** value. If you do, Logic Apps ignores the time zone value because the "Z" signifies a UTC time format. <p><p>For simple schedules, the start time is the first occurrence, while for complex schedules, the trigger doesn't fire any sooner than the start time. [*What are the ways that I can use the start date and time?*](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time) |
+ | **Start time** | `startTime` | No | String | Provide a start date and time, which has a maximum of 49 years in the future and must follow the [ISO 8601 date time specification](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations) in [UTC date time format](https://en.wikipedia.org/wiki/Coordinated_Universal_Time), but without a [UTC offset](https://en.wikipedia.org/wiki/UTC_offset): <br><br>YYYY-MM-DDThh:mm:ss if you select a time zone <br><br>-or- <br><br>YYYY-MM-DDThh:mm:ssZ if you don't select a time zone <br><br>So for example, if you want September 18, 2020 at 2:00 PM, then specify "2020-09-18T14:00:00" and select a time zone such as Pacific Standard Time. Or, specify "2020-09-18T14:00:00Z" without a time zone. <br><br>**Important:** If you don't select a time zone, you must add the letter "Z" at the end without any spaces. This "Z" refers to the equivalent [nautical time](https://en.wikipedia.org/wiki/Nautical_time). If you select a time zone value, you don't need to add a "Z" to the end of your **Start time** value. If you do, Logic Apps ignores the time zone value because the "Z" signifies a UTC time format. <br><br>For simple schedules, the start time is the first occurrence, while for complex schedules, the trigger doesn't fire any sooner than the start time. [*What are the ways that I can use the start date and time?*](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time) |
| **On these days** | `weekDays` | No | String or string array | If you select "Week", you can select one or more days when you want to run the workflow: **Monday**, **Tuesday**, **Wednesday**, **Thursday**, **Friday**, **Saturday**, and **Sunday** |
- | **At these hours** | `hours` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 23 as the hours of the day for when you want to run the workflow. <p><p>For example, if you specify "10", "12" and "14", you get 10 AM, 12 PM, and 2 PM for the hours of the day, but the minutes of the day are calculated based on when the recurrence starts. To set specific minutes of the day, for example, 10:00 AM, 12:00 PM, and 2:00 PM, specify those values by using the property named **At these minutes**. |
- | **At these minutes** | `minutes` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 59 as the minutes of the hour when you want to run the workflow. <p>For example, you can specify "30" as the minute mark and using the previous example for hours of the day, you get 10:30 AM, 12:30 PM, and 2:30 PM. <p>**Note**: Sometimes, the timestamp for the triggered run might vary up to 1 minute from the scheduled time. If you need to pass the timestamp exactly as scheduled to subsequent actions, you can use template expressions to change the timestamp accordingly. For more information, see [Date and time functions for expressions](../logic-apps/workflow-definition-language-functions-reference.md#date-time-functions). |
+ | **At these hours** | `hours` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 23 as the hours of the day for when you want to run the workflow. <br><br>For example, if you specify "10", "12" and "14", you get 10 AM, 12 PM, and 2 PM for the hours of the day, but the minutes of the day are calculated based on when the recurrence starts. To set specific minutes of the day, for example, 10:00 AM, 12:00 PM, and 2:00 PM, specify those values by using the property named **At these minutes**. |
+ | **At these minutes** | `minutes` | No | Integer or integer array | If you select "Day" or "Week", you can select one or more integers from 0 to 59 as the minutes of the hour when you want to run the workflow. <br><br>For example, you can specify "30" as the minute mark and using the previous example for hours of the day, you get 10:30 AM, 12:30 PM, and 2:30 PM. <br><br>**Note**: Sometimes, the timestamp for the triggered run might vary up to 1 minute from the scheduled time. If you need to pass the timestamp exactly as scheduled to subsequent actions, you can use template expressions to change the timestamp accordingly. For more information, see [Date and time functions for expressions](../logic-apps/workflow-definition-language-functions-reference.md#date-time-functions). |
|||||
- For example, suppose that today is Friday, September 4, 2020. The following Recurrence trigger doesn't fire *any sooner* than the start date and time, which is Friday, September 18, 2020 at 8:00 AM PST. However, the recurrence schedule is set for 10:30 AM, 12:30 PM, and 2:30 PM on Mondays only. So the first time that the trigger fires and creates a logic app workflow instance is on Monday at 10:30 AM. To learn more about how start times work, see these [start time examples](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time).
+ For example, suppose that today is Friday, September 4, 2020. The following Recurrence trigger doesn't fire *any sooner* than the specified start date and time, which is Friday, September 18, 2020 at 8:00 AM Pacific Time. However, the recurrence schedule is set for 10:30 AM, 12:30 PM, and 2:30 PM on Mondays only. The first time that the trigger fires and creates a workflow instance is on Monday at 10:30 AM. To learn more about how start times work, see these [start time examples](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time).
 Future runs happen at 12:30 PM and 2:30 PM on the same day. Each recurrence creates its own workflow instance. After that, the entire schedule repeats all over again next Monday. [*What are some other example occurrences?*](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#example-recurrences)
- ![Advanced scheduling example](./media/connectors-native-recurrence/recurrence-trigger-advanced-schedule-options.png)
- > [!NOTE]
+ >
> The trigger shows a preview for your specified recurrence only when you select "Day" or "Week" as the frequency.
-1. Now build your remaining workflow with other actions. For more actions that you can add, see [Connectors for Azure Logic Apps](../connectors/apis-list.md).
+ **Consumption**
+
+ ![Screenshot showing Consumption workflow and "Recurrence" trigger with advanced scheduling example.](./media/connectors-native-recurrence/recurrence-trigger-advanced-example-consumption.png)
+
+ **Standard**
+
+ ![Screenshot showing Standard workflow and "Recurrence" trigger with advanced scheduling example.](./media/connectors-native-recurrence/recurrence-trigger-advanced-example-standard.png)
+
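   As a sketch, the recurrence portion of this example scenario might be expressed in the trigger's underlying JSON definition as follows; the dates and times come from the example above:

   ```json
   "recurrence": {
       "frequency": "Week",
       "interval": 1,
       "startTime": "2020-09-18T08:00:00",
       "timeZone": "Pacific Standard Time",
       "schedule": {
           "weekDays": ["Monday"],
           "hours": [10, 12, 14],
           "minutes": [30]
       }
   }
   ```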
+1. Now continue building your workflow with other actions. For more actions that you can add, see [Connectors for Azure Logic Apps](../connectors/apis-list.md).
## Workflow definition - Recurrence
-In your logic app's underlying workflow definition, which uses JSON, you can view the [Recurrence trigger definition](../logic-apps/logic-apps-workflow-actions-triggers.md#recurrence-trigger) with the options that you chose. To view this definition, on the designer toolbar, choose **Code view**. To return to the designer, choose on the designer toolbar, **Designer**.
+You can view how the [Recurrence trigger definition](../logic-apps/logic-apps-workflow-actions-triggers.md#recurrence-trigger) appears with your chosen options by reviewing the underlying JSON definition for your workflow in Consumption logic apps and Standard logic apps (stateful only).
+
+Based on whether your logic app is Consumption or Standard, choose one of the following options:
-This example shows how a Recurrence trigger definition might look in an underlying workflow definition:
+* **Consumption**: On the designer toolbar, select **Code view**. To return to the designer, on the code view editor toolbar, select **Designer**.
+
+* **Standard**: On the workflow menu, select **Code view**. To return to the designer, on the workflow menu, select **Designer**.
+
+The following example shows how a Recurrence trigger definition might appear in the workflow's underlying JSON definition:
``` json "triggers": {
To schedule jobs, Azure Logic Apps puts the message for processing into the queu
Otherwise, if you don't select a time zone, daylight saving time (DST) events might affect when triggers run. For example, the start time shifts one hour forward when DST starts and one hour backward when DST ends. However, some time windows might cause problems when the time shifts. For more information and examples, see [Recurrence for daylight saving time and standard time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#daylight-saving-standard-time). - ## Next steps * [Pause workflows with delay actions](../connectors/connectors-native-delay.md)
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
You learn how to:
> [!div class="checklist"] > * Create an Azure Blob Storage for use as a Dapr state store
-> * Deploy a container apps environment to host container apps
+> * Deploy a Container Apps environment to host container apps
> * Deploy two dapr-enabled container apps: one that produces orders and one that consumes orders and stores them > * Verify the interaction between the two microservices.
New-AzStorageAccount -ResourceGroupName $RESOURCE_GROUP `
-Once your Azure Blob Storage account is created, the following values are needed for subsequent steps in this tutorial.
+Once your Azure Blob Storage account is created, you'll create a template where these storage parameters will use environment variable values. The values are passed in via the `parameters` argument when you deploy your apps with the `az deployment group create` command.
-- `storage_account_name` is the value of the `STORAGE_ACCOUNT` variable.
+- `storage_account_name` uses the value of the `STORAGE_ACCOUNT` variable.
-- `storage_container_name` is the value of the `STORAGE_ACCOUNT_CONTAINER` variable.-
-Dapr creates a container with this name when it doesn't already exist in your Azure Storage account.
+- `storage_container_name` uses the value of the `STORAGE_ACCOUNT_CONTAINER` variable. Dapr creates a container with this name when it doesn't already exist in your Azure Storage account.
::: zone pivot="container-apps-arm" ### Create Azure Resource Manager (ARM) template
-Create an ARM template to deploy a Container Apps environment including the associated Log Analytics workspace and Application Insights resource for distributed tracing, a dapr component for the state store and the two dapr-enabled container apps.
+Create an ARM template to deploy a Container Apps environment including:
+
+* the associated Log Analytics workspace
+* Application Insights resource for distributed tracing
+* a dapr component for the state store
+* two dapr-enabled container apps
Save the following file as _hello-world.json_:
Save the following file as _hello-world.json_:
### Create Azure Bicep templates
-Create a bicep template to deploy a Container Apps environment including the associated Log Analytics workspace and Application Insights resource for distributed tracing, a dapr component for the state store and the two dapr-enabled container apps.
+Create a bicep template to deploy a Container Apps environment including:
+
+* the associated Log Analytics workspace
+* Application Insights resource for distributed tracing
+* a dapr component for the state store
+* the two dapr-enabled container apps
Save the following file as _hello-world.bicep_:
resource nodeapp 'Microsoft.App/containerApps@2022-03-01' = {
image: 'dapriosamples/hello-k8s-node:latest'
name: 'hello-k8s-node'
resources: {
- cpu: '0.5'
+ cpu: json('0.5')
memory: '1.0Gi'
}
}
resource pythonapp 'Microsoft.App/containerApps@2022-03-01' = {
image: 'dapriosamples/hello-k8s-python:latest'
name: 'hello-k8s-python'
resources: {
- cpu: '0.5'
+ cpu: json('0.5')
memory: '1.0Gi'
}
}
New-AzResourceGroupDeployment `
This command deploys:

-- the container apps environment and associated Log Analytics workspace for hosting the hello world dapr solution
+- the Container Apps environment and associated Log Analytics workspace for hosting the hello world dapr solution
- an Application Insights instance for Dapr distributed tracing
- the `nodeapp` app server running on `targetPort: 3000` with dapr enabled and configured using: `"appId": "nodeapp"` and `"appPort": 3000`
- the `daprComponents` object of `"type": "state.azure.blobstorage"` scoped for use by the `nodeapp` for storing state
nodeapp Got a new order! Order ID: 63 PrimaryResult 2021-10-22
## Clean up resources
-Once you are done, run the following command to delete your resource group along with all the resources you created in this tutorial.
+Once you're done, run the following command to delete your resource group along with all the resources you created in this tutorial.
# [Bash](#tab/bash)
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
$VNET_NAME="my-custom-vnet"
-Now create an instance of the virtual network to associate with the Container Apps environment. The virtual network must have two subnets available for the container apps instance.
+Now create an instance of the virtual network to associate with the Container Apps environment, as sketched after the following note. The virtual network must have two subnets available for the container app instance.
> [!NOTE]
> You can use an existing virtual network, but two empty subnets are required to use with Container Apps.
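For reference, a minimal sketch of the virtual network and subnet creation follows. It assumes the `$RESOURCE_GROUP` and `$LOCATION` variables from earlier in this article, and the address prefixes and subnet name are illustrative only:

```azurecli
# Create the virtual network (address space is an example).
az network vnet create \
  --resource-group $RESOURCE_GROUP \
  --name $VNET_NAME \
  --location $LOCATION \
  --address-prefix 10.0.0.0/16

# Create one of the two required subnets (name and prefix are examples).
az network vnet subnet create \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name infrastructure-subnet \
  --address-prefixes 10.0.0.0/21
```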
The following table describes the parameters used in `containerapp env create`. A sketch of the full command follows the table.
| Parameter | Description |
|---|---|
-| `name` | Name of the container apps environment. |
+| `name` | Name of the Container Apps environment. |
| `resource-group` | Name of the resource group. |
-| `logs-workspace-id` | The ID of the Log Analytics workspace. |
-| `logs-workspace-key` | The Log Analytics client secret. |
+| `logs-workspace-id` | (Optional) The ID of an existing Log Analytics workspace. If omitted, a workspace will be created for you. |
+| `logs-workspace-key` | The Log Analytics client secret. Required if using an existing workspace. |
| `location` | The Azure location where the environment is to deploy. |
| `infrastructure-subnet-resource-id` | Resource ID of a subnet for infrastructure components and user application containers. |
-| `internal-only` | Optional parameter that scopes the environment to IP addresses only available the custom VNET. |
+| `internal-only` | (Optional) The environment doesn't use a public static IP, only internal IP addresses available in the custom VNET. (Requires an infrastructure subnet resource ID.) |
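Putting these parameters together, a sketch of the command might look like the following. The environment name is a placeholder, and `$INFRASTRUCTURE_SUBNET` is assumed to hold the subnet resource ID gathered earlier:

```azurecli
az containerapp env create \
  --name my-containerapps-env \
  --resource-group $RESOURCE_GROUP \
  --location "$LOCATION" \
  --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET \
  --internal-only
```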
-With your environment created in your custom virtual network, you can deploy container apps into the environment using the `az containerapp create` command.
+With your environment created using your custom virtual network, you can deploy container apps into the environment using the `az containerapp create` command.
### Optional configuration
You must either provide values for all three of these properties, or none of them.
| Parameter | Description |
|---|---|
| `platform-reserved-cidr` | The address range used internally for environment infrastructure services. Must have a size between `/21` and `/12`. |
-| `platform-reserved-dns-ip` | An IP address from the `platform-reserved-cidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `platform-reserved-cidr` is set to `10.2.0.0/16`, then `platform-reserved-dns-ip` can't be `10.2.0.0` (this is the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
+| `platform-reserved-dns-ip` | An IP address from the `platform-reserved-cidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `platform-reserved-cidr` is set to `10.2.0.0/16`, then `platform-reserved-dns-ip` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
| `docker-bridge-cidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |

The `platform-reserved-cidr` and `docker-bridge-cidr` address ranges can't conflict with each other, or with the ranges of either provided subnet. Further, make sure these ranges don't conflict with any other address range in the VNET.
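For example, a sketch of supplying all three properties together; the CIDR values follow the example above and are illustrative only:

```azurecli
az containerapp env create \
  --name my-containerapps-env \
  --resource-group $RESOURCE_GROUP \
  --location "$LOCATION" \
  --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET \
  --platform-reserved-cidr 10.2.0.0/16 \
  --platform-reserved-dns-ip 10.2.0.2 \
  --docker-bridge-cidr 172.17.0.1/16
```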
You must either provide values for all three of these properties, or none of them.
## Clean up resources
-If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group.
+If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group. Deleting this resource group also deletes the resource group that the Container Apps service automatically created to contain the custom network components.
::: zone pivot="azure-cli"
az group delete `
## Additional resources

-- Refer to [What is Azure Private Endpoint](../private-link/private-endpoint-overview.md) for more details on configuring your private endpoint.
+- For more information about configuring your private endpoints, see [What is Azure Private Endpoint](../private-link/private-endpoint-overview.md).
- To set up DNS name resolution for internal services, you must [set up your own DNS server](../dns/index.yml).
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following table describes the parameters used in `containerapp env create`.
| Parameter | Description |
|---|---|
-| `name` | Name of the container apps environment. |
+| `name` | Name of the Container Apps environment. |
| `resource-group` | Name of the resource group. |
| `location` | The Azure location where the environment is to deploy. |
| `infrastructure-subnet-resource-id` | Resource ID of a subnet for infrastructure components and user application containers. |
You must either provide values for all three of these properties, or none of them.
| Parameter | Description |
|---|---|
| `platform-reserved-cidr` | The address range used internally for environment infrastructure services. Must have a size between `/21` and `/12`. |
-| `platform-reserved-dns-ip` | An IP address from the `platform-reserved-cidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `platform-reserved-cidr` is set to `10.2.0.0/16`, then `platform-reserved-dns-ip` can't be `10.2.0.0` (this is the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
+| `platform-reserved-dns-ip` | An IP address from the `platform-reserved-cidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `platform-reserved-cidr` is set to `10.2.0.0/16`, then `platform-reserved-dns-ip` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
| `docker-bridge-cidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |

The `platform-reserved-cidr` and `docker-bridge-cidr` address ranges can't conflict with each other, or with the ranges of either provided subnet. Further, make sure these ranges don't conflict with any other address range in the VNET.
You must either provide values for all three of these properties, or none of them.
## Clean up resources
-If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group.
+If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group. Deleting this resource group also deletes the resource group that the Container Apps service automatically created to contain the custom network components.
::: zone pivot="azure-cli"
az group delete `
## Additional resources

-- Refer to [What is Azure Private Endpoint](../private-link/private-endpoint-overview.md) for more details on configuring your private endpoint.
+- For more information about configuring your private endpoints, see [What is Azure Private Endpoint](../private-link/private-endpoint-overview.md).
+ - To set up DNS name resolution for internal services, you must [set up your own DNS server](../dns/index.yml).
container-registry Container Registry Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-intro.md
Title: Managed container registries
-description: Introduction to the Azure Container Registry service, providing cloud-based, managed, private Docker registries.
+description: Introduction to the Azure Container Registry service, providing cloud-based, managed registries.
Last updated 02/10/2020
-# Introduction to private Docker container registries in Azure
+# Introduction to container registries in Azure
-Azure Container Registry is a managed, private Docker registry service based on the open-source Docker Registry 2.0. Create and maintain Azure container registries to store and manage your private Docker container images and related artifacts.
+Azure Container Registry is a managed registry service based on the open-source Docker Registry 2.0. Create and maintain Azure container registries to store and manage your container images and related artifacts.
Use Azure container registries with your existing container development and deployment pipelines, or use Azure Container Registry Tasks to build container images in Azure. Build on demand, or fully automate builds with triggers such as source code commits and base image updates.
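As a quick illustration of an on-demand build with ACR Tasks, the following sketch builds an image in Azure from the Dockerfile in the current directory; the registry name and image tag are placeholders:

```azurecli
# Build and push an image in Azure, without a local Docker installation.
az acr build \
  --registry myregistry \
  --image myapp:v1 .
```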
cosmos-db Access Key Vault Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-key-vault-managed-identity.md
+
+ Title: Use a managed identity to access Azure Key Vault from Azure Cosmos DB
+description: Use managed identity in Azure Cosmos DB to access Azure Key Vault.
+++
+ms.devlang: csharp
+ Last updated : 06/01/2022+++
+# Access Azure Key Vault from Azure Cosmos DB using a managed identity
+
+Azure Cosmos DB may need to read secret/key data from Azure Key Vault. For example, your Azure Cosmos DB account may require a customer-managed key stored in Azure Key Vault. To read this data, Azure Cosmos DB should be configured with a managed identity, and then an Azure Key Vault access policy should grant the managed identity access.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing Azure Cosmos DB SQL API account. [Create an Azure Cosmos DB SQL API account](sql/create-cosmosdb-resources-portal.md)
+- An existing Azure Key Vault resource. [Create a key vault using the Azure CLI](../key-vault/general/quick-create-cli.md)
+- To perform the steps in this article, install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in to Azure](/cli/azure/authenticate-azure-cli).
+
+## Prerequisite check
+
+1. In a terminal or command window, store the names of your Azure Key Vault resource, Azure Cosmos DB account and resource group as shell variables named ``keyVaultName``, ``cosmosName``, and ``resourceGroupName``.
+
+ ```azurecli-interactive
+ # Variable for key vault name
+ keyVaultName="msdocs-keyvault"
+
+ # Variable for Cosmos DB account name
+ cosmosName="msdocs-cosmos-app"
+
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos-keyvault-identity"
+ ```
+
+ > [!NOTE]
+ > These variables will be re-used in later steps. This example assumes your Azure Cosmos DB account name is ``msdocs-cosmos-app``, your key vault name is ``msdocs-keyvault`` and your resource group name is ``msdocs-cosmos-keyvault-identity``.
++
+## Create a system-assigned managed identity in Azure Cosmos DB
+
+First, create a system-assigned managed identity for the existing Azure Cosmos DB account.
+
+> [!IMPORTANT]
+> This how-to guide assumes that you are using a system-assigned managed identity. Many of the steps are similar when using a user-assigned managed identity.
+
+1. Run [``az cosmosdb identity assign``](/cli/azure/cosmosdb/identity#az-cosmosdb-identity-assign) to create a new system-assigned managed identity.
+
+ ```azurecli-interactive
+ az cosmosdb identity assign \
+ --resource-group $resourceGroupName \
+ --name $cosmosName
+ ```
+
+1. Retrieve the metadata of the system-assigned managed identity using [``az cosmosdb identity show``](/cli/azure/cosmosdb/identity#az-cosmosdb-identity-show), filter to just return the ``principalId`` property using the **query** parameter, and store the result in a shell variable named ``principal``.
+
+ ```azurecli-interactive
+ principal=$(
+ az cosmosdb identity show \
+ --resource-group $resourceGroupName \
+ --name $cosmosName \
+ --query principalId \
+ --output tsv
+ )
+
+ echo $principal
+ ```
+
+ > [!NOTE]
+ > This variable will be re-used in a later step.
+
+## Create an Azure Key Vault access policy
+
+In this step, create an access policy in Azure Key Vault using the previously created managed identity.
+
+1. Use the [``az keyvault set-policy``](/cli/azure/keyvault#az-keyvault-set-policy) command to create an access policy in Azure Key Vault that gives the Azure Cosmos DB managed identity permission to access Key Vault. Specifically, the policy will use the **key-permissions** parameters to grant permissions to ``get``, ``list``, and ``import`` keys.
+
```azurecli-interactive
+ az keyvault set-policy \
+ --name $keyVaultName \
+ --object-id $principal \
+ --key-permissions get list import
+ ```
+
+## Next steps
+
+* To use customer-managed keys in Azure Key Vault with your Azure Cosmos account, see [configure customer-managed keys](how-to-setup-cmk.md#using-managed-identity).
+* To use Azure Key Vault to manage secrets, see [secure credentials](access-secrets-from-keyvault.md).
cosmos-db Access Secrets From Keyvault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-secrets-from-keyvault.md
Title: Use Key Vault to store and access Azure Cosmos DB keys description: Use Azure Key Vault to store and access Azure Cosmos DB connection string, keys, endpoints. --++ ms.devlang: csharp Previously updated : 05/23/2019- Last updated : 06/01/2022+
-# Secure Azure Cosmos keys using Azure Key Vault
+# Secure Azure Cosmos credentials using Azure Key Vault
[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]

>[!IMPORTANT]
-> The recommended solution to access Azure Cosmos DB keys is to use a [system-assigned managed identity](managed-identity-based-authentication.md). If your service cannot take advantage of managed identities then use the [cert based solution](certificate-based-authentication.md). If both the managed identity solution and cert based solution do not meet your needs, please use the key vault solution below.
+> The recommended solution to access Azure Cosmos DB is to use a [system-assigned managed identity](managed-identity-based-authentication.md). If your service cannot take advantage of managed identities then use the [cert based solution](certificate-based-authentication.md). If both the managed identity solution and cert based solution do not meet your needs, please use the key vault solution below.
-When using Azure Cosmos DB for your applications, you can access the database, collections, documents by using the endpoint and the key within the app's configuration file. However, it's not safe to put keys and URL directly in the application code because they are available in clear text format to all the users. You want to make sure that the endpoint and keys are available but through a secured mechanism. This is where Azure Key Vault can help you to securely store and manage application secrets.
+When using Azure Cosmos DB, you can access the database, collections, documents by using the endpoint and the key within the app's configuration file. However, it's not safe to put keys and URL directly in the application code because they're available in clear text format to all the users. You want to make sure that the endpoint and keys are available but through a secured mechanism. This scenario is where Azure Key Vault can help you to securely store and manage application secrets.
The following steps are required to store and read Azure Cosmos DB access keys from Key Vault:
The following steps are required to store and read Azure Cosmos DB access keys from Key Vault:
2. Select **Create a resource > Security > Key Vault**.
3. On the **Create key vault** section, provide the following information:
 * **Name:** Provide a unique name for your Key Vault.
- * **Subscription:** Choose the subscription that you will use.
- * Under **Resource Group** choose **Create new** and enter a resource group name.
+ * **Subscription:** Choose the subscription that you'll use.
+ * Within **Resource Group**, choose **Create new** and enter a resource group name.
* In the Location pull-down menu, choose a location.
* Leave other options to their defaults.
4. After providing the information above, select **Create**.
The following steps are required to store and read Azure Cosmos DB access keys from Key Vault:
* Provide a **Name** for your secret
* Provide the connection string of your Cosmos DB account into the **Value** field. And then select **Create**.
- :::image type="content" source="./media/access-secrets-from-keyvault/create-a-secret.png" alt-text="Create a secret":::
+ :::image type="content" source="./media/access-secrets-from-keyvault/create-a-secret.png" alt-text="Screenshot of the Create a secret dialog in the Azure portal.":::
-4. After the secret is created, open it and copy the **Secret Identifier that is in the following format. You will use this identifier in the next section.
+4. After the secret is created, open it and copy the **Secret Identifier** that is in the following format. You'll use this identifier in the next section.
`https://<Key_Vault_Name>.vault.azure.net/secrets/<Secret_Name>/<ID>`

## Create an Azure web application
-1. Create an Azure web application or you can download the app from the [GitHub repository](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/Demo/keyvaultdemo). It is a simple MVC application.
+1. Create an Azure web application or you can download the app from the [GitHub repository](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/Demo/keyvaultdemo). It's a simple MVC application.
2. Unzip the downloaded application and open the **HomeController.cs** file. Update the secret ID in the following line:

   `var secret = await keyVaultClient.GetSecretAsync("<Your Key Vault's secret identifier>")`

3. **Save** the file, **Build** the solution.
-4. Next deploy the application to Azure. Right click on project and choose **publish**. Create a new app service profile (you can name the app WebAppKeyVault1) and select **Publish**.
+4. Next deploy the application to Azure. Open the context menu for the project and choose **publish**. Create a new app service profile (you can name the app WebAppKeyVault1) and select **Publish**.
-5. Once the application is deployed. From the Azure portal, navigate to web app that you deployed, and turn on the **Managed service identity** of this application.
+5. Once the application is deployed, go to the Azure portal, navigate to the web app that you deployed, and turn on the **Managed service identity** of this application.
- :::image type="content" source="./media/access-secrets-from-keyvault/turn-on-managed-service-identity.png" alt-text="Managed service identity":::
+ :::image type="content" source="./media/access-secrets-from-keyvault/turn-on-managed-service-identity.png" alt-text="Screenshot of the Managed service identity page in the Azure portal.":::
-If you will run the application now, you will see the following error, as you have not given any permission to this application in Key Vault.
+If you run the application now, you'll see the following error, as you have not given any permission to this application in Key Vault.
## Register the application & grant permissions to read the Key Vault
Similarly, you can add a user to access the Key Vault. You need to add yourself
## Next steps
-* To configure a firewall for Azure Cosmos DB see [firewall support](how-to-configure-firewall.md) article.
+* To configure a firewall for Azure Cosmos DB, see the [firewall support](how-to-configure-firewall.md) article.
* To configure a virtual network service endpoint, see the [secure access by using VNet service endpoint](how-to-configure-vnet-service-endpoint.md) article.
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Previously updated : 04/27/2022 Last updated : 05/30/2022

# Azure Cosmos DB service quotas
Depending on the current RU/s provisioned and resource settings, each resource c
| Maximum RU/s per container | 5,000 |
| Maximum storage across all items per (logical) partition | 20 GB |
| Maximum number of distinct (logical) partition keys | Unlimited |
-| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 50 GB<sup>1</sup> |
-| Maximum storage per container (Cassandra API)| 30 GB |
+| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 1 TB |
+| Maximum storage per container (Cassandra API)| 1 TB |
<sup>1</sup> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
You can [provision and manage your Azure Cosmos account](how-to-manage-database-
| Resource | Limit |
| --- | --- |
| Maximum number of accounts per subscription | 50 by default. <sup>1</sup> |
-| Maximum number of regional failovers | 1/hour by default. <sup>1</sup> <sup>2</sup> |
+| Maximum number of regional failovers | 10/hour by default. <sup>1</sup> <sup>2</sup> |
<sup>1</sup> You can increase these limits by creating an [Azure Support request](create-support-request-quota-increase.md).
cosmos-db Custom Partitioning Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/custom-partitioning-analytical-store.md
It is important to note that custom partitioning ensures complete transactional
If you configured [managed private endpoints](analytical-store-private-endpoints.md) for your analytical store, we recommend that you also add managed private endpoints for the partitioned store to ensure network isolation. The partitioned store is the primary storage account associated with your Synapse workspace.
-Similarly, if you configured [customer-managed keys on analytical store](how-to-setup-cmk.md#is-it-possible-to-use-customer-managed-keys-in-conjunction-with-the-azure-cosmos-db-analytical-store), you must directly enable it on the Synapse workspace primary storage account, which is the partitioned store, as well.
+Similarly, if you configured [customer-managed keys on analytical store](how-to-setup-cmk.md#is-it-possible-to-use-customer-managed-keys-with-the-azure-cosmos-db-analytical-store), you must directly enable it on the Synapse workspace primary storage account, which is the partitioned store, as well.
## Partitioning strategies You could use one or more partition keys for your analytical data. If you are using multiple partition keys, below are some recommendations on how to partition the data:
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/dedicated-gateway.md
There are many different ways to provision a dedicated gateway:
- [Provision a dedicated gateway using the Azure portal](how-to-configure-integrated-cache.md#provision-a-dedicated-gateway-cluster)
- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)
-- [Azure CLI](/cli/azure/cosmosdb/service#az-cosmosdb-service-create)
+- [Azure CLI](/cli/azure/cosmosdb/service?view=azure-cli-latest&preserve-view=true#az-cosmosdb-service-create)
- [ARM template](/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep)
    - Note: You cannot deprovision a dedicated gateway using ARM templates
cosmos-db Graph Visualization Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-visualization-partners.md
With the Graphistry's GPU client/cloud technology, you can do interactive visual
Graphlytic is a highly customizable web application for graph visualization and analysis. Users can interactively explore the graph, look for patterns with the Gremlin language, or use filters to find answers to any graph question. Graph rendering is done with the 'Cytoscape.js' library, which allows Graphlytic to render tens of thousands of nodes and hundreds of thousands of relationships at once.
-Graphlytic is compatible with Azure Cosmos DB and can be deployed to Azure in minutes. GraphlyticΓÇÖs UI can be customized and extended in many ways, for instance the default [visualization configuration](https://graphlytic.biz/doc/latest/Visualization_settings.html), [data schema](https://graphlytic.biz/doc/latest/Data_schema.html), [style mappings](https://graphlytic.biz/doc/latest/Style_mappers.html), [virtual properties](https://graphlytic.biz/doc/latest/Virtual_properties.html) in the visualization, or custom implemented [widgets](https://graphlytic.biz/doc/latest/Widgets.html) that can enhance the visualization features with bespoke reports or integrations.
+Graphlytic is compatible with Azure Cosmos DB and can be deployed to Azure in minutes. GraphlyticΓÇÖs UI can be customized and extended in many ways, for instance the default [visualization configuration](https://graphlytic.biz/doc/latest/Visualization_Settings.html), [data schema](https://graphlytic.biz/doc/latest/Data_Schema.html), [style mappings](https://graphlytic.biz/doc/latest/Style_Mappers.html), [virtual properties](https://graphlytic.biz/doc/latest/Virtual_properties.html) in the visualization, or custom implemented [widgets](https://graphlytic.biz/doc/latest/Widgets.html) that can enhance the visualization features with bespoke reports or integrations.
The following are two example scenarios:
cosmos-db How To Setup Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cmk.md
az cosmosdb show \
## <a id="using-managed-identity"></a> Using a managed identity in the Azure Key Vault access policy
-This access policy ensures that your encryption keys can be accessed by your Azure Cosmos DB account. This is done by granting access to a specific Azure Active Directory (AD) identity. Two types of identities are supported:
+This access policy ensures that your encryption keys can be accessed by your Azure Cosmos DB account. The access policy is implemented by granting access to a specific Azure Active Directory (Azure AD) identity. Two types of identities are supported:
- Azure Cosmos DB's first-party identity can be used to grant access to the Azure Cosmos DB service.
- Your Azure Cosmos DB account's [managed identity](how-to-setup-managed-identity.md) can be used to grant access to your account specifically.
This access policy ensures that your encryption keys can be accessed by your Azu
Because a system-assigned managed identity can only be retrieved after the creation of your account, you still need to initially create your account using the first-party identity, as described [above](#add-access-policy). Then:
-1. If this wasn't done during account creation, [enable a system-assigned managed identity](./how-to-setup-managed-identity.md#add-a-system-assigned-identity) on your account and copy the `principalId` that got assigned.
+1. If the system-assigned managed identity wasn't configured during account creation, [enable a system-assigned managed identity](./how-to-setup-managed-identity.md#add-a-system-assigned-identity) on your account and copy the `principalId` that got assigned.
-1. Add a new access policy to your Azure Key Vault account just as described [above](#add-access-policy), but using the `principalId` you copied at the previous step instead of Azure Cosmos DB's first-party identity.
+1. Add a new access policy to your Azure Key Vault account as described [above](#add-access-policy), but using the `principalId` you copied at the previous step instead of Azure Cosmos DB's first-party identity.
-1. Update your Azure Cosmos DB account to specify that you want to use the system-assigned managed identity when accessing your encryption keys in Azure Key Vault. You can do this:
+1. Update your Azure Cosmos DB account to specify that you want to use the system-assigned managed identity when accessing your encryption keys in Azure Key Vault. You have two options:
- - by specifying this property in your account's Azure Resource Manager template:
+ - Specify the property in your account's Azure Resource Manager template:
- ```json
- {
- "type": " Microsoft.DocumentDB/databaseAccounts",
- "properties": {
- "defaultIdentity": "SystemAssignedIdentity",
+ ```json
+ {
+ "type": " Microsoft.DocumentDB/databaseAccounts",
+ "properties": {
+ "defaultIdentity": "SystemAssignedIdentity",
+ // ...
+ },
// ...
- },
- // ...
- }
- ```
-
- - by updating your account with the Azure CLI:
+ }
+ ```
- ```azurecli
- resourceGroupName='myResourceGroup'
- accountName='mycosmosaccount'
+ - Update your account with the Azure CLI:
- az cosmosdb update --resource-group $resourceGroupName --name $accountName --default-identity "SystemAssignedIdentity"
- ```
+ ```azurecli
+ resourceGroupName='myResourceGroup'
+ accountName='mycosmosaccount'
+
+ az cosmosdb update --resource-group $resourceGroupName --name $accountName --default-identity "SystemAssignedIdentity"
+ ```
1. Optionally, you can then remove the Azure Cosmos DB first-party identity from your Azure Key Vault access policy.
Because a system-assigned managed identity can only be retrieved after the creat
1. When creating the new access policy in your Azure Key Vault account as described [above](#add-access-policy), use the `Object ID` of the managed identity you wish to use instead of Azure Cosmos DB's first-party identity.
-1. When creating your Azure Cosmos DB account, you must enable the user-assigned managed identity and specify that you want to use this identity when accessing your encryption keys in Azure Key Vault. You can do this:
-
- - in an Azure Resource Manager template:
+1. When creating your Azure Cosmos DB account, you must enable the user-assigned managed identity and specify that you want to use this identity when accessing your encryption keys in Azure Key Vault. Options include:
- ```json
- {
- "type": "Microsoft.DocumentDB/databaseAccounts",
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "<identity-resource-id>": {}
- }
- },
- // ...
- "properties": {
- "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>"
- "keyVaultKeyUri": "<key-vault-key-uri>"
+ - Using an Azure Resource Manager template:
+
+ ```json
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<identity-resource-id>": {}
+ }
+ },
// ...
+ "properties": {
+ "defaultIdentity": "UserAssignedIdentity=<identity-resource-id>"
+ "keyVaultKeyUri": "<key-vault-key-uri>"
+ // ...
+ }
}
- }
- ```
+ ```
- - with the Azure CLI:
+ - Using the Azure CLI:
- ```azurecli
- resourceGroupName='myResourceGroup'
- accountName='mycosmosaccount'
- keyVaultKeyUri = 'https://<my-vault>.vault.azure.net/keys/<my-key>'
-
- az cosmosdb create \
- -n $accountName \
- -g $resourceGroupName \
- --key-uri $keyVaultKeyUri
- --assign-identity <identity-resource-id>
- --default-identity "UserAssignedIdentity=<identity-resource-id>"
- ```
+ ```azurecli
+ resourceGroupName='myResourceGroup'
+ accountName='mycosmosaccount'
+ keyVaultKeyUri='https://<my-vault>.vault.azure.net/keys/<my-key>'
+
+ az cosmosdb create \
+ -n $accountName \
+ -g $resourceGroupName \
+ --key-uri $keyVaultKeyUri \
+ --assign-identity <identity-resource-id> \
+ --default-identity "UserAssignedIdentity=<identity-resource-id>"
+ ```
## Use CMK with continuous backup
When you create a new Azure Cosmos account through an Azure Resource Manager tem
## Customer-managed keys and double encryption
-When using customer-managed keys, the data you store in your Azure Cosmos DB account ends up being encrypted twice:
+The data you store in your Azure Cosmos DB account when using customer-managed keys ends up being encrypted twice:
- Once through the default encryption performed with Microsoft-managed keys.
-- Once through the additional encryption performed with customer-managed keys.
+- Once through the extra encryption performed with customer-managed keys.
-Note that **this only applies to the main Azure Cosmos DB transactional storage**. Some features involve internal replication of your data to a second tier of storage where double encryption isn't provided, even when using customer-managed keys. These features include:
+Double encryption only applies to the main Azure Cosmos DB transactional storage. Some features involve internal replication of your data to a second tier of storage where double encryption isn't provided, even with customer-managed keys. These features include:
-- [Synapse Link](./synapse-link.md)
+- [Azure Synapse Link](./synapse-link.md)
- [Continuous backups with point-in-time restore](./continuous-backup-restore-introduction.md)

## Key rotation
Rotating the customer-managed key used by your Azure Cosmos account can be done
- Create a new version of the key currently used from Azure Key Vault:
- :::image type="content" source="./media/how-to-setup-cmk/portal-akv-rot.png" alt-text="Create a new key version":::
+ :::image type="content" source="./media/how-to-setup-cmk/portal-akv-rot.png" alt-text="Screenshot of the New Version option in the Versions page of the Azure portal.":::
-- Swap the key currently used with a totally different one by updating the key URI on your account. From the Azure portal, go to your Azure Cosmos account and select **Data Encryption** from the left menu:
+- Swap the key currently used with a different one by updating the key URI on your account. From the Azure portal, go to your Azure Cosmos account and select **Data Encryption** from the left menu:
- :::image type="content" source="./media/how-to-setup-cmk/portal-data-encryption.png" alt-text="The Data Encryption menu entry":::
+ :::image type="content" source="./media/how-to-setup-cmk/portal-data-encryption.png" alt-text="Screenshot of the Data Encryption menu option in the Azure portal.":::
Then, replace the **Key URI** with the new key you want to use and select **Save**:
- :::image type="content" source="./media/how-to-setup-cmk/portal-key-swap.png" alt-text="Update the key URI":::
+ :::image type="content" source="./media/how-to-setup-cmk/portal-key-swap.png" alt-text="Screenshot of the Save option in the Key page of the Azure portal.":::
Here's how to achieve the same result in PowerShell:
The previous key or key version can be disabled after the [Azure Key Vault audit
## Error handling
-When using customer-managed keys in Azure Cosmos DB, if there are any errors, Azure Cosmos DB returns the error details along with a HTTP sub-status code in the response. You can use this sub-status code to debug the root cause of the issue. See the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article to get the list of supported HTTP sub-status codes.
+If there are any errors with customer-managed keys in Azure Cosmos DB, Azure Cosmos DB returns the error details along with an HTTP substatus code in the response. You can use the HTTP substatus code to debug the root cause of the issue. See the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article to get the list of supported HTTP substatus codes.
## Frequently asked questions
-### Is there an additional charge to enable customer-managed keys?
+### Are there more charges to enable customer-managed keys?
No, there's no charge to enable this feature.
-### How do customer-managed keys impact capacity planning?
+### How do customer-managed keys influence capacity planning?
-When using customer-managed keys, [Request Units](./request-units.md) consumed by your database operations see an increase to reflect the additional processing required to perform encryption and decryption of your data. This may lead to slightly higher utilization of your provisioned capacity. Use the table below for guidance:
+[Request Units](./request-units.md) consumed by your database operations increase to reflect the extra processing required to perform encryption and decryption of your data when using customer-managed keys. The extra RU consumption may lead to slightly higher utilization of your provisioned capacity. Use the table below for guidance:
| Operation type | Request Unit increase |
|---|---|
All the data stored in your Azure Cosmos account is encrypted with the customer-
This feature is currently available only for new accounts.
-### Is it possible to use customer-managed keys in conjunction with the Azure Cosmos DB [analytical store](analytical-store-introduction.md)?
+### Is it possible to use customer-managed keys with the Azure Cosmos DB [analytical store](analytical-store-introduction.md)?
-Yes, Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must [use your Azure Cosmos DB account's managed identity](#using-managed-identity) in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account.
+Yes, Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must [use your Azure Cosmos DB account's managed identity](#using-managed-identity) in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. For a how-to guide on how to enable managed identity and use it in an access policy, see [access Azure Key Vault from Azure Cosmos DB using a managed identity](access-key-vault-managed-identity.md).
### Is there a plan to support finer granularity than account-level keys?
You can also programmatically fetch the details of your Azure Cosmos account and
Azure Cosmos DB takes [regular and automatic backups](./online-backup-and-restore.md) of the data stored in your account. This operation backs up the encrypted data. The following conditions are necessary to successfully restore a periodic backup:
-- The encryption key that you used at the time of the backup is required and must be available in Azure Key Vault. This means that no revocation was made and the version of the key that was used at the time of the backup is still enabled.
-- If you [used a system-assigned managed identity in the Azure Key Vault access policy](#to-use-a-system-assigned-managed-identity) of the source account, you must temporarily grant access to the Azure Cosmos DB first-party identity in that access policy as described [here](#add-access-policy) before restoring your data. This is because a system-assigned managed identity is specific to an account and cannot be re-used in the target account. Once the data is fully restored to the target account, you can set your desired identity configuration and remove the first-party identity from the Key Vault access policy.
+- The encryption key that you used at the time of the backup is required and must be available in Azure Key Vault. This condition requires that no revocation was made and the version of the key that was used at the time of the backup is still enabled.
+- If you [used a system-assigned managed identity in the access policy](#to-use-a-system-assigned-managed-identity), temporarily [grant access to the Azure Cosmos DB first-party identity](#add-access-policy) before restoring your data. This requirement exists because a system-assigned managed identity is specific to an account and can't be reused in the target account. Once the data is fully restored to the target account, you can set your desired identity configuration and remove the first-party identity from the Key Vault access policy.
### How do customer-managed keys affect continuous backups?
-Azure Cosmos DB gives you the option to configure [continuous backups](./continuous-backup-restore-introduction.md) on your account. With continuous backups, you can restore your data to any point in time within the past 30 days. To use continuous backups on an account where customer-managed keys are enabled, you must [use a user-assigned managed identity](#to-use-a-user-assigned-managed-identity) in the Key Vault access policy; the Azure Cosmos DB first-party identity or a system-assigned managed identity aren't currently supported on accounts using continuous backups.
+Azure Cosmos DB gives you the option to configure [continuous backups](./continuous-backup-restore-introduction.md) on your account. With continuous backups, you can restore your data to any point in time within the past 30 days. To use continuous backups on an account where customer-managed keys are enabled, you must [use a user-assigned managed identity](#to-use-a-user-assigned-managed-identity) in the Key Vault access policy. Azure Cosmos DB first-party identities or system-assigned managed identities aren't currently supported on accounts using continuous backups.
The following conditions are necessary to successfully perform a point-in-time restore:
-- The encryption key that you used at the time of the backup is required and must be available in Azure Key Vault. This means that no revocation was made and the version of the key that was used at the time of the backup is still enabled.
+- The encryption key that you used at the time of the backup is required and must be available in Azure Key Vault. This requirement means that no revocation was made and the version of the key that was used at the time of the backup is still enabled.
- You must ensure that the user-assigned managed identity originally used on the source account is still declared in the Key Vault access policy.

> [!IMPORTANT]
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
The way you create a `TokenCredential` instance is beyond the scope of this article.
- [In .NET](/dotnet/api/overview/azure/identity-readme#credential-classes)
- [In Java](/java/api/overview/azure/identity-readme#credential-classes)
- [In JavaScript](/javascript/api/overview/azure/identity-readme#credential-classes)
-- [In Python](/python/api/overview/azure/identity-readme#credential-classes)
+- [In Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true#credential-classes)
The examples below use a service principal with a `ClientSecretCredential` instance.
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator-release-notes.md
This article shows the Azure Cosmos DB Emulator released versions and it details
## Release notes
+### 2.14.7 (May 9, 2022)
+
+ - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. In addition to this update, a couple of issues were addressed in this release:
+ * Update Data Explorer to the latest content and fix a broken link for the quick start sample documentation.
+ * Add option to enable the Mongo API version for the Linux Cosmos DB emulator by setting the environment variable "AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT" in the Docker container settings. Valid settings are: "3.2", "3.6", "4.0" and "4.2"
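For example, starting the Linux emulator container with this variable set might look like the following sketch; the image name follows the emulator's published Docker image, and the published ports are examples:

```bash
docker run \
    --publish 8081:8081 \
    --publish 10251-10255:10251-10255 \
    --env AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT=4.0 \
    mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
```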
### 2.14.6 (March 7, 2022)

- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. In addition to this update, a couple of issues were addressed in this release:
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md
Title: How to use a system-assigned managed identity to access Azure Cosmos DB data
+ Title: Use system-assigned managed identities to access Azure Cosmos DB data
description: Learn how to configure an Azure Active Directory (Azure AD) system-assigned managed identity (managed service identity) to access keys from Azure Cosmos DB. -+ Previously updated : 07/02/2021-- Last updated : 06/01/2022++ -

# Use system-assigned managed identities to access Azure Cosmos DB data-
-> [!TIP]
-> [Data plane role-based access control (RBAC)](how-to-setup-rbac.md) is now available on Azure Cosmos DB, providing a seamless way to authorize your requests with Azure Active Directory.
-
-In this article, you'll set up a *robust, key rotation agnostic* solution to access Azure Cosmos DB keys by using [managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md). The example in this article uses Azure Functions, but you can use any service that supports managed identities.
-
-You'll learn how to create a function app that can access Azure Cosmos DB data without needing to copy any Azure Cosmos DB keys. The function app will wake up every minute and record the current temperature of an aquarium fish tank. To learn how to set up a timer-triggered function app, see the [Create a function in Azure that is triggered by a timer](../azure-functions/functions-create-scheduled-function.md) article.
-
-To simplify the scenario, a [Time To Live](./time-to-live.md) setting is already configured to clean up older temperature documents.
-
-> [!IMPORTANT]
-> Because this approach fetches your account's primary key through the Azure Cosmos DB control plane, it will not work if [a read-only lock has been applied](../azure-resource-manager/management/lock-resources.md) to your account. In this situation, consider using the Azure Cosmos DB [data plane RBAC](how-to-setup-rbac.md) instead.
-
-## Assign a system-assigned managed identity to a function app
-
-In this step, you'll assign a system-assigned managed identity to your function app.
-1. In the [Azure portal](https://portal.azure.com/), open the **Azure Function** pane and go to your function app.
+In this article, you'll set up a *robust, key rotation agnostic* solution to access Azure Cosmos DB keys by using [managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) and [data plane role-based access control](how-to-setup-rbac.md). The example in this article uses Azure Functions, but you can use any service that supports managed identities.
-1. Open the **Platform features** > **Identity** tab:
+You'll learn how to create a function app that can access Azure Cosmos DB data without needing to copy any Azure Cosmos DB keys. The function app will trigger when an HTTP request is made and then list all of the existing databases.
- :::image type="content" source="./media/managed-identity-based-authentication/identity-tab-selection.png" alt-text="Screenshot showing Platform features and Identity options for the function app.":::
+## Prerequisites
-1. On the **Identity** tab, turn **On** the system identity **Status** and select **Save**. The **Identity** pane should look as follows:
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing Azure Cosmos DB SQL API account. [Create an Azure Cosmos DB SQL API account](sql/create-cosmosdb-resources-portal.md)
+- An existing Azure Functions function app. [Create your first function in the Azure portal](../azure-functions/functions-create-function-app-portal.md)
+ - A system-assigned managed identity for the function app. [Add a system-assigned identity](../app-service/overview-managed-identity.md?tabs=cli#add-a-system-assigned-identity)
+- [Azure Functions Core Tools](../azure-functions/functions-run-local.md)
+- To perform the steps in this article, install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in to Azure](/cli/azure/authenticate-azure-cli).
- :::image type="content" source="./media/managed-identity-based-authentication/identity-tab-system-managed-on.png" alt-text="Screenshot showing system identity Status set to On.":::
+## Prerequisite check
-## Grant access to your Azure Cosmos account
-
-In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you'll use the following two roles:
-
-|Built-in role |Description |
-|||
-|[DocumentDB Account Contributor](../role-based-access-control/built-in-roles.md#documentdb-account-contributor)|Can manage Azure Cosmos DB accounts. Allows retrieval of read/write keys. |
-|[Cosmos DB Account Reader Role](../role-based-access-control/built-in-roles.md#cosmos-db-account-reader-role)|Can read Azure Cosmos DB account data. Allows retrieval of read keys. |
-
-> [!TIP]
-> When you assign roles, assign only the needed access. If your service requires only reading data, then assign the **Cosmos DB Account Reader** role to the managed identity. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
-
-In this scenario, the function app will read the temperature of the aquarium, then write back that data to a container in Azure Cosmos DB. Because the function app must write the data, you'll need to assign the **DocumentDB Account Contributor** role.
+1. In a terminal or command window, store the names of your Azure Functions function app, Azure Cosmos DB account and resource group as shell variables named ``functionName``, ``cosmosName``, and ``resourceGroupName``.
-### Assign the role using Azure portal
+ ```azurecli-interactive
+ # Variable for function app name
+ functionName="msdocs-function-app"
+
+ # Variable for Cosmos DB account name
+ cosmosName="msdocs-cosmos-app"
-1. Sign in to the Azure portal and go to your Azure Cosmos DB account.
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos-functions-dotnet-identity"
+ ```
-1. Select **Access control (IAM)**.
+ > [!NOTE]
+ > These variables will be re-used in later steps. This example assumes your Azure Cosmos DB account name is ``msdocs-cosmos-app``, your function app name is ``msdocs-function-app`` and your resource group name is ``msdocs-cosmos-functions-dotnet-identity``.
-1. Select **Add** > **Add role assignment**.
+1. View the function app's properties using the [``az functionapp show``](/cli/azure/functionapp#az-functionapp-show) command.
- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
+ ```azurecli-interactive
+ az functionapp show \
+ --resource-group $resourceGroupName \
+ --name $functionName
+ ```
-1. On the **Role** tab, select **DocumentDB Account Contributor**.
+1. View the properties of the system-assigned managed identity for your function app using [``az webapp identity show``](/cli/azure/webapp/identity#az-webapp-identity-show).
-1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+ ```azurecli-interactive
+ az webapp identity show \
+ --resource-group $resourceGroupName \
+ --name $functionName
+ ```
-1. Select your Azure subscription.
+1. View the Cosmos DB account's properties using [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show).
-1. Under **System-assigned managed identity**, select **Function App**, and then select **FishTankTemperatureService**.
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $cosmosName
+ ```
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+## Create Cosmos DB SQL API databases
-### Assign the role using Azure CLI
+In this step, you'll create two databases.
-To assign the role by using Azure CLI, open the Azure Cloud Shell and run the following commands:
+1. In a terminal or command window, create a new ``products`` database using [``az cosmosdb sql database create``](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create).
-```azurecli-interactive
+ ```azurecli-interactive
+ az cosmosdb sql database create \
+ --resource-group $resourceGroupName \
+ --name products \
+ --account-name $cosmosName
+ ```
-scope=$(az cosmosdb show --name '<Your_Azure_Cosmos_account_name>' --resource-group '<CosmosDB_Resource_Group>' --query id)
+1. Create a new ``customers`` database.
-principalId=$(az webapp identity show -n '<Your_Azure_Function_name>' -g '<Azure_Function_Resource_Group>' --query principalId)
+ ```azurecli-interactive
+ az cosmosdb sql database create \
+ --resource-group $resourceGroupName \
+ --name customers \
+ --account-name $cosmosName
+ ```
-az role assignment create --assignee $principalId --role "DocumentDB Account Contributor" --scope $scope
-```
+## Get Cosmos DB SQL API endpoint
-## Programmatically access the Azure Cosmos DB keys
+In this step, you'll query the document endpoint for the SQL API account.
-Now we have a function app that has a system-assigned managed identity with the **DocumentDB Account Contributor** role in the Azure Cosmos DB permissions. The following function app code will get the Azure Cosmos DB keys, create a CosmosClient object, get the temperature of the aquarium, and then save this to Azure Cosmos DB.
+1. Use ``az cosmosdb show`` with the **query** parameter set to ``documentEndpoint``. Record the result. You'll use this value in a later step.
-This sample uses the [List Keys API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/database-accounts/list-keys) to access your Azure Cosmos DB account keys.
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $cosmosName \
+ --query documentEndpoint
-> [!IMPORTANT]
-> If you want to [assign the Cosmos DB Account Reader](#grant-access-to-your-azure-cosmos-account) role, you'll need to use the [List Read Only Keys API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/database-accounts/list-read-only-keys). This will populate just the read-only keys.
+ cosmosEndpoint=$(
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $cosmosName \
+ --query documentEndpoint \
+ --output tsv
+ )
+
+ echo $cosmosEndpoint
+ ```
-The List Keys API returns the `DatabaseAccountListKeysResult` object. This type isn't defined in the C# libraries. The following code shows the implementation of this class:
+ > [!NOTE]
+ > This variable will be re-used in a later step.
-```csharp
-namespace Monitor
-{
- public class DatabaseAccountListKeysResult
- {
- public string primaryMasterKey { get; set; }
- public string primaryReadonlyMasterKey { get; set; }
- public string secondaryMasterKey { get; set; }
- public string secondaryReadonlyMasterKey { get; set; }
- }
-}
-```
-
-The example also uses a simple document called "TemperatureRecord," which is defined as follows:
+## Grant access to your Azure Cosmos account
-```csharp
-using System;
+In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple [built-in roles](how-to-setup-rbac.md#built-in-role-definitions) that you can assign to the managed identity, and you can also define custom roles. For this solution, you'll use a custom role named ``Read Cosmos Metadata`` that grants only the ability to read account metadata.
-namespace Monitor
-{
- public class TemperatureRecord
+> [!TIP]
+> When you assign roles, assign only the needed access. If your service requires only reading data, then assign the **Cosmos DB Built-in Data Reader** role to the managed identity. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
+
+1. Use ``az cosmosdb show`` with the **query** parameter set to ``id``. Store the result in a shell variable named ``scope``.
+
+ ```azurecli-interactive
+ scope=$(
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $cosmosName \
+ --query id \
+ --output tsv
+ )
+
+ echo $scope
+ ```
+
+ > [!NOTE]
+ > This variable will be re-used in a later step.
+
+1. Use ``az webapp identity show`` with the **query** parameter set to ``principalId``. Store the result in a shell variable named ``principal``.
+
+ ```azurecli-interactive
+ principal=$(
+ az webapp identity show \
+ --resource-group $resourceGroupName \
+ --name $functionName \
+ --query principalId \
+ --output tsv
+ )
+
+ echo $principal
+ ```
+
+1. Create a new JSON object with the configuration of the new custom role. Save it to a file so that you can register it as a role definition with [``az cosmosdb sql role definition create``](/cli/azure/cosmosdb/sql/role/definition#az-cosmosdb-sql-role-definition-create) before you assign it.
+
+ ```json
{
- public string id { get; set; } = Guid.NewGuid().ToString();
- public DateTime RecordTime { get; set; }
- public int Temperature { get; set; }
+ "RoleName": "Read Cosmos Metadata",
+ "Type": "CustomRole",
+ "AssignableScopes": ["/"],
+ "Permissions": [{
+ "DataActions": [
+ "Microsoft.DocumentDB/databaseAccounts/readMetadata"
+ ]
+ }]
}
-}
-```
+ ```
-You'll use the [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication) library to get the system-assigned managed identity token. To learn other ways to get the token and find out more information about the `Microsoft.Azure.Service.AppAuthentication` library, see the [Service-to-service authentication](/dotnet/api/overview/azure/service-to-service-authentication) article.
+1. Use [``az cosmosdb sql role assignment create``](/cli/azure/cosmosdb/sql/role/assignment#az-cosmosdb-sql-role-assignment-create) to assign the ``Read Cosmos Metadata`` custom role to the system-assigned managed identity.
+ ```azurecli-interactive
+ az cosmosdb sql role assignment create \
+ --resource-group $resourceGroupName \
+ --account-name $cosmosName \
+ --role-definition-name "Read Cosmos Metadata" \
+ --principal-id $principal \
+ --scope $scope
+ ```
-```csharp
-using System;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-using Microsoft.Azure.Services.AppAuthentication;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Extensions.Logging;
+## Programmatically access the Azure Cosmos DB keys
-namespace Monitor
-{
- public static class FishTankTemperatureService
+We now have a function app that has a system-assigned managed identity with the ``Read Cosmos Metadata`` role. The following function app will query the Azure Cosmos DB account for a list of databases.
+
+1. Create a local function project with the ``--dotnet`` parameter in a folder named ``csmsfunc``. Change your shell's directory to the new folder.
+
+ ```azurecli-interactive
+ func init csmsfunc --dotnet
+
+ cd csmsfunc
+ ```
+
+1. Create a new function with the **template** parameter set to ``httptrigger`` and the **name** set to ``readdatabases``.
+
+ ```azurecli-interactive
+ func new --template httptrigger --name readdatabases
+ ```
+
+1. Add the [``Azure.Identity``](https://www.nuget.org/packages/Azure.Identity/) and [``Microsoft.Azure.Cosmos``](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/) NuGet packages to the .NET project. Build the project using [``dotnet build``](/dotnet/core/tools/dotnet-build).
+
+ ```azurecli-interactive
+ dotnet add package Azure.Identity
+
+ dotnet add package Microsoft.Azure.Cosmos
+
+ dotnet build
+ ```
+
+1. Open the function code in an integrated development environment (IDE).
+
+ > [!TIP]
+ > If you are using the Azure CLI locally or in the Azure Cloud Shell, you can open Visual Studio Code.
+ >
+ > ```azurecli
+ > code .
+ > ```
+ >
+
+1. Replace the code in the **readdatabases.cs** file with this sample function implementation. Save the updated file.
+
+ ```csharp
+ using System;
+ using System.Collections.Generic;
+ using System.Threading.Tasks;
+ using Azure.Identity;
+ using Microsoft.AspNetCore.Mvc;
+ using Microsoft.Azure.Cosmos;
+ using Microsoft.Azure.WebJobs;
+ using Microsoft.Azure.WebJobs.Extensions.Http;
+ using Microsoft.AspNetCore.Http;
+ using Microsoft.Extensions.Logging;
+
+ namespace csmsfunc
{
- private static string subscriptionId =
- "<azure subscription id>";
- private static string resourceGroupName =
- "<name of your azure resource group>";
- private static string accountName =
- "<Azure Cosmos DB account name>";
- private static string cosmosDbEndpoint =
- "<Azure Cosmos DB endpoint>";
- private static string databaseName =
- "<Azure Cosmos DB name>";
- private static string containerName =
- "<container to store the temperature in>";
-
- // HttpClient is intended to be instantiated once, rather than per-use.
- static readonly HttpClient httpClient = new HttpClient();
-
- [FunctionName("FishTankTemperatureService")]
- public static async Task Run([TimerTrigger("0 * * * * *")]TimerInfo myTimer, ILogger log)
+ public static class readdatabases
{
- log.LogInformation($"Starting temperature monitoring: {DateTime.Now}");
-
- // AzureServiceTokenProvider will help us to get the Service Managed token.
- var azureServiceTokenProvider = new AzureServiceTokenProvider();
-
- // Authenticate to the Azure Resource Manager to get the Service Managed token.
- string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com/");
-
- // Setup the List Keys API to get the Azure Cosmos DB keys.
- string endpoint = $"https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/listKeys?api-version=2019-12-12";
+ [FunctionName("readdatabases")]
+ public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
+ ILogger log)
+ {
+ log.LogTrace("Start function");
+
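+            // DefaultAzureCredential uses your developer identity when running locally
+            // and the function app's system-assigned managed identity once deployed.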
+ CosmosClient client = new CosmosClient(
+ accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT", EnvironmentVariableTarget.Process),
+ new DefaultAzureCredential()
+ );
+
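+            // Page through every database that the identity is permitted to read.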
+ using FeedIterator<DatabaseProperties> iterator = client.GetDatabaseQueryIterator<DatabaseProperties>();
+
+ List<(string name, string uri)> databases = new();
+ while(iterator.HasMoreResults)
+ {
+ foreach(DatabaseProperties database in await iterator.ReadNextAsync())
+ {
+ log.LogTrace($"[Database Found]\t{database.Id}");
+ databases.Add((database.Id, database.SelfLink));
+ }
+ }
+
+ return new OkObjectResult(databases);
+ }
+ }
+ }
+ ```
- // Add the access token to request headers.
- httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
+## (Optional) Run the function locally
- // Post to the endpoint to get the keys result.
- var result = await httpClient.PostAsync(endpoint, new StringContent(""));
+In a local environment, the [``DefaultAzureCredential``](/dotnet/api/azure.identity.defaultazurecredential) class will use various local credentials to determine the current identity. While running locally isn't required for the how-to, you can develop locally using your own identity or a service principal.
- // Get the result back as a DatabaseAccountListKeysResult.
- DatabaseAccountListKeysResult keys = await result.Content.ReadFromJsonAsync<DatabaseAccountListKeysResult>();
+1. In the **local.settings.json** file, add a new setting named ``COSMOS_ENDPOINT`` in the **Values** object. The value of the setting should be the document endpoint you recorded earlier in this how-to guide.
- log.LogInformation("Starting to create the client");
+ ```json
+ ...
+ "Values": {
+ ...
+ "COSMOS_ENDPOINT": "https://msdocs-cosmos-app.documents.azure.com:443/",
+ ...
+ }
+ ...
+ ```
- CosmosClient client = new CosmosClient(cosmosDbEndpoint, keys.primaryMasterKey);
+ > [!NOTE]
+   > This JSON object has been shortened for brevity. It also includes a sample value that assumes your account name is ``msdocs-cosmos-app``.
- log.LogInformation("Client created");
+1. Run the function app:
- var database = client.GetDatabase(databaseName);
- var container = database.GetContainer(containerName);
+ ```azurecli
+ func start
+ ```
- log.LogInformation("Get the temperature.");
+## Deploy to Azure
- var tempRecord = new TemperatureRecord() { RecordTime = DateTime.UtcNow, Temperature = GetTemperature() };
+Once published, the ``DefaultAzureCredential`` class will use credentials from the environment or a managed identity. For this guide, the system-assigned managed identity will be used as a credential for the [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) constructor.
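If you'd rather not walk the full ``DefaultAzureCredential`` chain once the app runs in Azure, a ``ManagedIdentityCredential`` can be passed to the constructor directly. This is a minimal sketch, not a change this guide requires; it assumes the ``COSMOS_ENDPOINT`` app setting configured in the next step.

```csharp
using System;
using Azure.Identity;
using Microsoft.Azure.Cosmos;

// Authenticate explicitly with the system-assigned managed identity,
// skipping the other credential types in the DefaultAzureCredential chain.
CosmosClient client = new CosmosClient(
    accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT"),
    tokenCredential: new ManagedIdentityCredential());
```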
- log.LogInformation("Store temperature");
+1. Set the ``COSMOS_ENDPOINT`` setting on the function app already deployed in Azure.
- await container.CreateItemAsync<TemperatureRecord>(tempRecord);
+ ```azurecli-interactive
+ az functionapp config appsettings set \
+ --resource-group $resourceGroupName \
+ --name $functionName \
+ --settings "COSMOS_ENDPOINT=$cosmosEndpoint"
+ ```
- log.LogInformation($"Ending temperature monitor: {DateTime.Now}");
- }
+1. Deploy your function app to Azure by reusing the ``functionName`` shell variable:
- private static int GetTemperature()
- {
- // Fake the temperature sensor for this demo.
- Random r = new Random(DateTime.UtcNow.Second);
- return r.Next(0, 120);
- }
- }
-}
-```
+ ```azurecli-interactive
+ func azure functionapp publish $functionName
+ ```
-You are now ready to [deploy your function app](../azure-functions/create-first-function-vs-code-csharp.md).
+1. [Test your function in the Azure portal](../azure-functions/functions-create-function-app-portal.md#test-the-function).
## Next steps
cosmos-db Feature Support 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-40.md
$polygon | No |
## Sort operations
-When using the `findOneAndUpdate` operation, sort operations on a single field are supported but sort operations on multiple fields are not supported.
+When using the `findOneAndUpdate` operation with Mongo API version 4.0, sort operations on a single field and on multiple fields are supported. Sorting on multiple fields was a limitation of earlier wire protocol versions.
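For example, here's a minimal sketch using the MongoDB .NET driver. The collection, field names, and values are hypothetical; any driver or shell that speaks the 4.0 wire protocol works the same way.

```csharp
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

public static class FindOneAndUpdateSample
{
    // Atomically updates the first matching document, ordered by a
    // multi-field sort: category ascending, then price descending.
    public static Task<BsonDocument> MarkTopItemReviewedAsync(IMongoCollection<BsonDocument> items)
    {
        var filter = Builders<BsonDocument>.Filter.Eq("status", "active");
        var update = Builders<BsonDocument>.Update.Set("reviewed", true);
        var options = new FindOneAndUpdateOptions<BsonDocument>
        {
            // Sorting on multiple fields is supported with version 4.0.
            Sort = Builders<BsonDocument>.Sort.Ascending("category").Descending("price")
        };

        return items.FindOneAndUpdateAsync(filter, update, options);
    }
}
```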
## Indexing The API for MongoDB [supports a variety of indexes](mongodb-indexing.md) to enable sorting on multiple fields, improve query performance, and enforce uniqueness.
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
az cosmosdb create -n <account_name> -g <azure_resource_group> --kind MongoDB --
8. Create a database for users to connect to in the Azure portal. 9. Create an RBAC user with built-in read role. ```powershell
-az cosmosdb mongodb user definition create --account-name <YOUR_DB_ACCOUNT> --resource-group <YOUR_RG> --body {\"Id\":\"testdb.read\",\"UserName\":\"<YOUR_USERNAME>\",\"Password\":\"<YOUR_PASSWORD>\",\"DatabaseName\":\"<YOUR_DB_NAME>\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"read\",\"Db\":\"<YOUR_DB_NAME>\"}]}
+az cosmosdb mongodb user definition create --account-name <YOUR_DB_ACCOUNT> --resource-group <YOUR_RG> --body {\"Id\":\"<YOUR_DB_NAME>.<YOUR_USERNAME>\",\"UserName\":\"<YOUR_USERNAME>\",\"Password\":\"<YOUR_PASSWORD>\",\"DatabaseName\":\"<YOUR_DB_NAME>\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"read\",\"Db\":\"<YOUR_DB_NAME>\"}]}
```
cosmos-db How To Convert Session Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-convert-session-token.md
This article explains how to convert between different session token formats to
> [!NOTE] > By default, the SDK keeps track of the session token automatically and it will use the most recent session token. For more information, please visit [Utilize session tokens](how-to-manage-consistency.md#utilize-session-tokens). The instructions in this article only apply with the following conditions: > * Your Azure Cosmos DB account uses Session consistency.
-> * You are managing the session tokens are manually.
+> * You are managing the session tokens manually.
> * You are using multiple versions of the SDK at the same time. ## Session token formats
Read the following articles:
* [Use session tokens to manage consistency in Azure Cosmos DB](how-to-manage-consistency.md#utilize-session-tokens) * [Choose the right consistency level in Azure Cosmos DB](../consistency-levels.md) * [Consistency, availability, and performance tradeoffs in Azure Cosmos DB](../consistency-levels.md)
-* [Availability and performance tradeoffs for various consistency levels](../consistency-levels.md)
+* [Availability and performance tradeoffs for various consistency levels](../consistency-levels.md)
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-manage-consistency.md
item = client.ReadItem(doc_link, options)
## Monitor Probabilistically Bounded Staleness (PBS) metric
-How eventual is eventual consistency? For the average case, can we offer staleness bounds with respect to version history and time. The [**Probabilistically Bounded Staleness (PBS)**](https://pbs.cs.berkeley.edu/) metric tries to quantify the probability of staleness and shows it as a metric. To view the PBS metric, go to your Azure Cosmos account in the Azure portal. Open the **Metrics** pane, and select the **Consistency** tab. Look at the graph named **Probability of strongly consistent reads based on your workload (see PBS)**.
+How eventual is eventual consistency? For the average case, can we offer staleness bounds with respect to version history and time. The [**Probabilistically Bounded Staleness (PBS)**](http://pbs.cs.berkeley.edu/) metric tries to quantify the probability of staleness and shows it as a metric. To view the PBS metric, go to your Azure Cosmos account in the Azure portal. Open the **Metrics** pane, and select the **Consistency** tab. Look at the graph named **Probability of strongly consistent reads based on your workload (see PBS)**.
:::image type="content" source="./media/how-to-manage-consistency/pbs-metric.png" alt-text="PBS graph in the Azure portal":::
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v3.md
The following classes have been replaced on the 3.0 SDK:
* `Microsoft.Azure.Documents.Resource`
-The Microsoft.Azure.Documents.UriFactory class has been replaced by the fluent design. The fluent design builds URLs internally and allows a single `Container` object to be passed around instead of a `DocumentClient`, `DatabaseName`, and `DocumentCollection`.
+The Microsoft.Azure.Documents.UriFactory class has been replaced by the fluent design.
Because the .NET v3 SDK allows users to configure a custom serialization engine, there's no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
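As a minimal sketch (assuming an existing `Container` instance; the item ID, partition key, and property name are hypothetical), an item can be read as a `JObject` when Newtonsoft.Json is the serializer:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json.Linq;

public static class DocumentReplacementSample
{
    // Reads an item without a typed model, covering the scenarios the
    // removed Document type handled in the v2 SDK.
    public static async Task<JObject> ReadAsJObjectAsync(Container container)
    {
        ItemResponse<JObject> response = await container.ReadItemAsync<JObject>(
            id: "SalesOrder1",                          // hypothetical item ID
            partitionKey: new PartitionKey("0000001")); // hypothetical partition key

        JObject item = response.Resource;
        // Properties can be read dynamically, much like Document.GetPropertyValue<T>.
        string account = item.Value<string>("AccountNumber");
        return item;
    }
}
```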
+# [.NET SDK v3](#tab/dotnet-v3)
+
+```csharp
+private readonly CosmosClient _client;
+private readonly Container _container;
+
+public Program()
+{
+ // Client should be a singleton
+ _client = new CosmosClient(
+ accountEndpoint: "https://testcosmos.documents.azure.com:443/",
+ authKeyOrResourceToken: "SuperSecretKey",
+ clientOptions: new CosmosClientOptions()
+ {
+ ApplicationPreferredRegions = new List<string>()
+ {
+ Regions.EastUS,
+ Regions.WestUS,
+ }
+ });
+
+ _container = _client.GetContainer("DatabaseName","ContainerName");
+}
+
+private async Task CreateItemAsync(SalesOrder salesOrder)
+{
+ ItemResponse<SalesOrder> response = await this._container.CreateItemAsync(
+ salesOrder,
+ new PartitionKey(salesOrder.AccountNumber));
+}
+
+```
+
+# [.NET SDK v2](#tab/dotnet-v2)
+
+```csharp
+private readonly DocumentClient _client;
+private readonly string _databaseName;
+private readonly string _containerName;
+
+public Program()
+{
+ ConnectionPolicy connectionPolicy = new ConnectionPolicy()
+ {
+ ConnectionMode = ConnectionMode.Direct, // Default for v2 is Gateway. v3 is Direct
+ ConnectionProtocol = Protocol.Tcp,
+ };
+
+ connectionPolicy.PreferredLocations.Add(LocationNames.EastUS);
+ connectionPolicy.PreferredLocations.Add(LocationNames.WestUS);
+
+ // Client should always be a singleton
+ _client = new DocumentClient(
+ new Uri("https://testcosmos.documents.azure.com:443/"),
+ "SuperSecretKey",
+ connectionPolicy);
+
+ _databaseName = "DatabaseName";
+ _containerName = "ContainerName";
+}
+
+private async Task CreateItemAsync(SalesOrder salesOrder)
+{
+    Uri collectionUri = UriFactory.CreateDocumentCollectionUri(_databaseName, _containerName);
+ await this._client.CreateDocumentAsync(
+ collectionUri,
+ salesOrder,
+ new RequestOptions { PartitionKey = new PartitionKey(salesOrder.AccountNumber) });
+}
+```
++ ### Changes to item ID generation Item ID is no longer auto populated in the .NET v3 SDK. Therefore, the Item ID must specifically include a generated ID. View the following example:
The following properties have been removed:
The .NET SDK v3 provides a fluent `CosmosClientBuilder` class that replaces the need for the SDK v2 URI Factory.
+The fluent design builds URLs internally and allows a single `Container` object to be passed around instead of a `DocumentClient`, `DatabaseName`, and `DocumentCollection`.
+ The following example creates a new `CosmosClientBuilder` with a strong ConsistencyLevel and a list of preferred locations: ```csharp
cosmos-db Sql Api Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-python.md
|**API documentation**|[Python API reference documentation](/python/api/azure-cosmos/azure.cosmos?preserve-view=true&view=azure-python)| |**SDK installation instructions**|[Python SDK installation instructions](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cosmos/azure-cosmos)| |**Get started**|[Get started with the Python SDK](create-sql-api-python.md)|
+|**Samples**|[Python SDK samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos/samples)|
|**Current supported platform**|[Python 3.6+](https://www.python.org/downloads/)| > [!IMPORTANT]
-> * Versions 4.3.0b2 and higher only support Python 3.6+. Python 2 is not supported.
+> * Versions 4.3.0b2 and higher support Async IO operations and only support Python 3.6+. Python 2 is not supported.
## Release history Release history is maintained in the azure-sdk-for-python repo, for detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/cosmos/azure-cosmos/CHANGELOG.md).
Microsoft provides notification at least **12 months** in advance of retiring an
| Version | Release Date | Retirement Date | | | | |
+| 4.3.0 |May 23, 2022 | |
| 4.2.0 |Oct 09, 2020 | | | 4.1.0 |Aug 10, 2020 | | | 4.0.0 |May 20, 2020 | |
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
try
ItemResponse<Book> response = await this.Container.CreateItemAsync<Book>(item: testItem); if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan) {
- // Log the diagnostics and add any additional info necessary to correlate to other logs
- Console.Write(response.Diagnostics.ToString());
+ // Log the response.Diagnostics.ToString() and add any additional info necessary to correlate to other logs
} } catch (CosmosException cosmosException) {
- // Log the full exception including the stack trace
- Console.Write(cosmosException.ToString());
- // The Diagnostics can be logged separately if required.
- Console.Write(cosmosException.Diagnostics.ToString());
+ // Log the full exception including the stack trace with: cosmosException.ToString()
+
+ // The Diagnostics can be logged separately if required with: cosmosException.Diagnostics.ToString()
} // When using Stream APIs ResponseMessage response = await this.Container.CreateItemStreamAsync(partitionKey, stream); if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan || !response.IsSuccessStatusCode) {
- // Log the diagnostics and add any additional info necessary to correlate to other logs
- Console.Write(response.Diagnostics.ToString());
+ // Log the diagnostics and add any additional info necessary to correlate to other logs with: response.Diagnostics.ToString()
} ```
Show the time for the different stages of sending and receiving a request in the
* *Transit time is large*, which points to a networking problem. Compare this number to the `BELatencyInMs`. If `BELatencyInMs` is small, then the time was spent on the network, and not on the Azure Cosmos DB service. * *Received time is large*, which might be caused by a thread starvation problem. This is the time between having the response and returning the result.
+### <a name="ServiceEndpointStatistics"></a>ServiceEndpointStatistics
+Information about a particular backend server. The SDK can open multiple connections to a single backend server depending upon the number of pending requests and the MaxConcurrentRequestsPerConnection.
+
+* `inflightRequests`: The number of pending requests to a backend server (possibly from different partitions). A high number may lead to more traffic and higher latencies.
+* `openConnections`: The total number of connections open to a single backend server. This can be useful to show SNAT port exhaustion if this number is very high.
+
+### <a name="ConnectionStatistics"></a>ConnectionStatistics
+Information about the particular connection (new or old) the request gets assigned to.
+
+* `waitforConnectionInit`: Indicates whether the current request had to wait for a new connection to be initialized. Waiting for connection initialization leads to higher latencies.
+* `callsPendingReceive`: The number of calls that were pending a receive before this call was sent. A high number means many calls were queued ahead of this one, which can lead to higher latencies. It can also point to a head-of-line blocking issue, possibly caused by another request (such as a query or feed operation) that takes a long time to process. Try lowering `CosmosClientOptions.MaxRequestsPerTcpConnection` to increase the number of channels, as shown in the sketch after this list.
+* `lastSend`: Time of the last request sent to this server. Together with `lastReceive`, it can be used to spot connectivity or endpoint issues. For example, if there are many receive timeouts, the send time will be much later than the receive time.
+* `lastReceive`: Time of the last message received from this server.
+* `lastSendAttempt`: Time of the last send attempt.
+
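As a hedged illustration of that tuning (the endpoint and key are placeholders, the value shown is only an example, and the default may change between SDK versions):

```csharp
using Microsoft.Azure.Cosmos;

// Lowering MaxRequestsPerTcpConnection makes the SDK open more channels,
// which can reduce head-of-line blocking at the cost of extra connections.
CosmosClient client = new CosmosClient(
    accountEndpoint: "<account-endpoint>",
    authKeyOrResourceToken: "<account-key>",
    clientOptions: new CosmosClientOptions
    {
        ConnectionMode = ConnectionMode.Direct,
        MaxRequestsPerTcpConnection = 15 // example value; tune per workload
    });
```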
+### <a name="Request and response sizes"></a>Request and response sizes
+* `requestSizeInBytes`: The total size of the request sent to Cosmos DB
+* `responseMetadataSizeInBytes`: The size of headers returned from Cosmos DB
+* `responseBodySizeInBytes`: The size of content returned from Cosmos DB
+ ```json "StoreResult": {
- "ActivityId": "a3d325c1-f4e9-405b-820c-bab4d329ee4c",
- "StatusCode": "Created",
+ "ActivityId": "bab6ade1-b8de-407f-b89d-fa2138a91284",
+ "StatusCode": "Ok",
"SubStatusCode": "Unknown",
- "LSN": 1766,
- "PartitionKeyRangeId": "0",
- "GlobalCommittedLSN": -1,
- "ItemLSN": -1,
- "UsingLocalLSN": false,
- "QuorumAckedLSN": 1765,
- "SessionToken": "-1#1766",
- "CurrentWriteQuorum": 1,
- "CurrentReplicaSetSize": 1,
+ "LSN": 453362,
+ "PartitionKeyRangeId": "1",
+ "GlobalCommittedLSN": 0,
+ "ItemLSN": 453358,
+ "UsingLocalLSN": true,
+ "QuorumAckedLSN": -1,
+ "SessionToken": "-1#453362",
+ "CurrentWriteQuorum": -1,
+ "CurrentReplicaSetSize": -1,
"NumberOfReadRegions": 0,
- "IsClientCpuOverloaded": false,
"IsValid": true,
- "StorePhysicalAddress": "rntbd://127.0.0.1:10253/apps/DocDbApp/services/DocDbServer92/partitions/a4cb49a8-38c8-11e6-8106-8cdcd42c33be/replicas/1p/",
- "RequestCharge": 11.05,
- "BELatencyInMs": "7.954",
- "RntbdRequestStats": [
- {
- "EventName": "Created",
- "StartTime": "2021-06-15T13:53:10.1302477Z",
- "DurationInMicroSec": "6383"
- },
- {
- "EventName": "ChannelAcquisitionStarted",
- "StartTime": "2021-06-15T13:53:10.1366314Z",
- "DurationInMicroSec": "96511"
- },
- {
- "EventName": "Pipelined",
- "StartTime": "2021-06-15T13:53:10.2331431Z",
- "DurationInMicroSec": "50834"
- },
- {
- "EventName": "Transit Time",
- "StartTime": "2021-06-15T13:53:10.2839774Z",
- "DurationInMicroSec": "17677"
+ "StorePhysicalAddress": "rntbd://127.0.0.1:10253/apps/DocDbApp/services/DocDbServer92/partitions/a4cb49a8-38c8-11e6-8106-8cdcd42c33be/replicas/1s/",
+ "RequestCharge": 1,
+ "RetryAfterInMs": null,
+ "BELatencyInMs": "0.304",
+ "transportRequestTimeline": {
+ "requestTimeline": [
+ {
+ "event": "Created",
+ "startTimeUtc": "2022-05-25T12:03:36.3081190Z",
+ "durationInMs": 0.0024
+ },
+ {
+ "event": "ChannelAcquisitionStarted",
+ "startTimeUtc": "2022-05-25T12:03:36.3081214Z",
+ "durationInMs": 0.0132
+ },
+ {
+ "event": "Pipelined",
+ "startTimeUtc": "2022-05-25T12:03:36.3081346Z",
+ "durationInMs": 0.0865
+ },
+ {
+ "event": "Transit Time",
+ "startTimeUtc": "2022-05-25T12:03:36.3082211Z",
+ "durationInMs": 1.3324
+ },
+ {
+ "event": "Received",
+ "startTimeUtc": "2022-05-25T12:03:36.3095535Z",
+ "durationInMs": 12.6128
+ },
+ {
+ "event": "Completed",
+ "startTimeUtc": "2022-05-25T12:03:36.8621663Z",
+ "durationInMs": 0
+ }
+ ],
+ "serviceEndpointStats": {
+ "inflightRequests": 1,
+ "openConnections": 1
},
- {
- "EventName": "Received",
- "StartTime": "2021-06-15T13:53:10.3016546Z",
- "DurationInMicroSec": "7079"
+ "connectionStats": {
+ "waitforConnectionInit": "False",
+ "callsPendingReceive": 0,
+ "lastSendAttempt": "2022-05-25T12:03:34.0222760Z",
+ "lastSend": "2022-05-25T12:03:34.0223280Z",
+ "lastReceive": "2022-05-25T12:03:34.0257728Z"
},
- {
- "EventName": "Completed",
- "StartTime": "2021-06-15T13:53:10.3087338Z",
- "DurationInMicroSec": "0"
- }
- ],
+ "requestSizeInBytes": 447,
+ "responseMetadataSizeInBytes": 438,
+ "responseBodySizeInBytes": 604
+ },
"TransportException": null } ```
Contact [Azure support](https://aka.ms/azure-support).
## Next steps * [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) problems when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
cosmos-db Create Table Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-java.md
public class ExpandableWeatherObject {
} ```
-To insert or upsert such an object using the Table API, map the properties of the expandable object into a [TableEntity](/java/api/com.azure.data.tables.tableentity) object and use the [createEntity](/java/api/com.azure.data.tables.tableclient.createentity) or [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) methods on the [TableClient](/java/api/com.azure.data.tables.tableclient) object as appropriate.
+To insert or upsert such an object using the Table API, map the properties of the expandable object into a [TableEntity](/java/api/com.azure.data.tables.models.tableentity) object and use the [createEntity](/java/api/com.azure.data.tables.tableclient.createentity) or [upsertEntity](/java/api/com.azure.data.tables.tableclient.upsertentity) methods on the [TableClient](/java/api/com.azure.data.tables.tableclient) object as appropriate.
```java public void insertExpandableEntity(ExpandableWeatherObject model) {
Remove-AzResourceGroup -Name $resourceGroupName
In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Tables API. > [!div class="nextstepaction"]
-> [Import table data to the Tables API](table-import.md)
+> [Import table data to the Tables API](table-import.md)
cosmos-db How To Use Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-java.md
try
.buildClient(); // Create a filter condition where the partition key is "Sales".
- ListEntitiesOptions options = new ListEntitiesOptions().setFilter(PARTITION_KEY + " eq 'Sales' AND " + ROW_KEY + " lt '0004' AND ROW_KEY + " gt '0001'");
+ ListEntitiesOptions options = new ListEntitiesOptions().setFilter(PARTITION_KEY + " eq 'Sales' AND " + ROW_KEY + " lt '0004' AND " + ROW_KEY + " gt '0001'");
// Loop through the results, displaying information about the entities. tableClient.listEntities(options, null, null).forEach(tableEntity -> {
try
System.out.println(specificEntity.getPartitionKey() + " " + specificEntity.getRowKey() + "\t" + specificEntity.getProperty("FirstName") +
- "\t" + specificEntity.getProperty("LastName"));
+ "\t" + specificEntity.getProperty("LastName") +
"\t" + specificEntity.getProperty("Email") + "\t" + specificEntity.getProperty("PhoneNumber")); }
try
.tableName(tableName) .buildClient();
- Delete the entity for partition key 'Sales' and row key '0001' from the table.
+ // Delete the entity for partition key 'Sales' and row key '0001' from the table.
tableClient.deleteEntity("Sales", "0001"); } catch (Exception e)
For more information, visit [Azure for Java developers](/java/azure).
[Azure Tables client library for Java]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/tables/azure-data-tables [Azure Tables client library reference documentation]: https://azure.github.io/azure-sdk-for-java/tables.html [Azure Tables REST API]: ../../storage/tables/table-storage-overview.md
-[Azure Tables Team Blog]: https://blogs.msdn.microsoft.com/windowsazurestorage/
+[Azure Tables Team Blog]: https://blogs.msdn.microsoft.com/windowsazurestorage/
cost-management-billing Cost Mgt Alerts Monitor Usage Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
# Use cost alerts to monitor usage and spending
-This article helps you understand and use Cost Management alerts to monitor your Azure usage and spending. Cost alerts are automatically generated based when Azure resources are consumed. Alerts show all active cost management and billing alerts together in one place. When your consumption reaches a given threshold, alerts are generated by Cost Management. There are three types of cost alerts: budget alerts, credit alerts, and department spending quota alerts.
+This article helps you understand and use Cost Management alerts to monitor your Azure usage and spending. Cost alerts are automatically generated when Azure resources are consumed. Alerts show all active cost management and billing alerts together in one place. When your consumption reaches a given threshold, alerts are generated by Cost Management. There are three main types of cost alerts: budget alerts, credit alerts, and department spending quota alerts.
+
+You can also [create a cost anomaly alert](../understand/analyze-unexpected-charges.md#create-an-anomaly-alert) to automatically get notified when an anomaly is detected.
## Required permissions for alerts
cost-management-billing Ea Portal Agreements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-agreements.md
An enrollment has one of the following status values. Each value determines how
- Migrate to the Microsoft Online Subscription Program (MOSP) - Confirm disablement of all services associated with the enrollment
+EA credit expires when the EA enrollment ends.
+ **Expired** - The EA enrollment expires when it reaches the enterprise agreement end date and is opted out of the extended term. Sign a new enrollment contract as soon as possible. Although your service won't be disabled immediately, there's a risk of it getting disabled. As of August 1, 2019, new opt-out forms aren't accepted for Azure commercial customers. Instead, all enrollments go into indefinite extended term. If you want to stop using Azure services, close your subscription in the [Azure portal](https://portal.azure.com). Or, your partner can submit a termination request. There's no change for customers with government agreement types.
cost-management-billing Grant Access To Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/grant-access-to-create-subscription.md
Previously updated : 02/24/2022 Last updated : 06/01/2022
As an Azure customer with an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/), you can give another user or service principal permission to create subscriptions billed to your account. In this article, you learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md) to share the ability to create subscriptions, and how to audit subscription creations. You must have the Owner role on the account you wish to share. > [!NOTE]
-> This API only works with the [legacy APIs for subscription creation](programmatically-create-subscription-preview.md). Unless you have a specific need to use the legacy APIs, you should use the information for the [latest GA version](programmatically-create-subscription-enterprise-agreement.md) about the latest API version [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollment-account-role-assignments/put). If you're migrating to use the newer APIs, you must grant owner permissions again using [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollment-account-role-assignments/put). Your previous configuration that uses the following APIs doesn't automatically convert for use with newer APIs.
+> - This API only works with the [legacy APIs for subscription creation](programmatically-create-subscription-preview.md).
+> - Unless you have a specific need to use the legacy APIs, use the [latest GA version](programmatically-create-subscription-enterprise-agreement.md) instead. **See [Enrollment Account Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollment-account-role-assignments/put) to grant permission to create EA subscriptions with the latest API**.
+> - If you're migrating to use the newer APIs, you must grant owner permissions again using [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollment-account-role-assignments/put). Your previous configuration that uses the following APIs doesn't automatically convert for use with newer APIs.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
Previously updated : 04/02/2022 Last updated : 05/31/2022
The article helps you identify anomalies and unexpected changes in your cloud costs using Cost Management and Billing. You'll start with anomaly detection for subscriptions in cost analysis to identify any atypical usage patterns based on your cost and usage trends. You'll then learn how to drill into cost information to find and investigate cost spikes and dips.
+You can also create an anomaly alert to automatically get notified when an anomaly is detected.
In general, there are three types of changes that you might want to investigate: - New costs: For example, a resource that was started or added, such as a virtual machine. New costs often appear as a cost starting from zero.
If you have an existing policy of [tagging resources](../costs/cost-mgt-best-pra
If you've used the preceding strategies and you still don't understand why you received a charge or if you need other help with billing issues, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+## Create an anomaly alert
+
+You can create an anomaly alert to automatically get notified when an anomaly is detected. All email recipients get notified when a subscription cost anomaly is detected.
+
+An anomaly alert email includes a summary of changes in resource group count and cost. It also includes the top resource group changes for the day compared to the previous 60 days. And, it has a direct link to the Azure portal so that you can review the cost and investigate further.
+
+1. Start on a subscription scope.
+1. In the left menu, select **Cost alerts**.
+1. On the Cost alerts page, select **+ Add** > **Add anomaly alert**.
+1. On the Subscribe to emails page, enter required information and then select **Save**.
+ :::image type="content" source="./media/analyze-unexpected-charges/subscribe-emails.png" alt-text="Screenshot showing the Subscribe to emails page where you enter notification information for an alert." lightbox="./media/analyze-unexpected-charges/subscribe-emails.png" :::
+
+Here's an example email generated for an anomaly alert.
++ ## Next steps - Learn about how to [Optimize your cloud investment with Cost Management](../costs/cost-mgt-best-practices.md).
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
Azure Data Factory is improved on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly.
+## May 2022
+<br>
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+
+<tr><td><b>Data flow</b></td><td>User Defined Functions for mapping data flows</td><td>Azure Data Factory introduces user-defined functions and data flow libraries in public preview. A user-defined function is a customized expression you can define to reuse logic across multiple mapping data flows. User-defined functions live in a collection called a data flow library, which makes it easy to group common sets of customized functions.<br><a href="concepts-data-flow-udf.md">Learn more</a></td></tr>
+
+</table>
+ ## April 2022 <br> <table>
defender-for-cloud Episode Eight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eight.md
Title: Microsoft Defender for IoT description: Learn how Defender for IoT discovers devices to monitor and how it fits in the Microsoft Security portfolio. Previously updated : 05/25/2022 Last updated : 06/01/2022 # Microsoft Defender for IoT
Last updated 05/25/2022
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=05fdecf5-f6a1-4162-b95d-1e34478d1d60" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=05fdecf5-f6a1-4162-b95d-1e34478d1d60" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [1:20](/shows/mdc-in-the-field/defender-for-iot#time=01m20s) - Overview of the Defender for IoT solution
defender-for-cloud Episode Eleven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-eleven.md
Title: Threat landscape for Defender for Containers description: Learn about the new detections that are available for different attacks and how Defender for Containers can help to quickly identify malicious activities in containers. Previously updated : 05/25/2022 Last updated : 06/01/2022 # Threat landscape for Defender for Containers
Last updated 05/25/2022
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=646c2b9a-3f15-4705-af23-7802bd9549c5" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=646c2b9a-3f15-4705-af23-7802bd9549c5" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [01:15](/shows/mdc-in-the-field/threat-landscape-containers#time=01m15s) - The evolution of attacks against Kubernetes
defender-for-cloud Episode Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-five.md
Title: Microsoft Defender for Servers description: Learn all about Microsoft Defender for Servers from the product manager. Previously updated : 05/25/2022 Last updated : 06/01/2022 # Microsoft Defender for Servers
Last updated 05/25/2022
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=f62e1199-d0a8-4801-9793-5318fde27497" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=f62e1199-d0a8-4801-9793-5318fde27497" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [1:22](/shows/mdc-in-the-field/defender-for-containers#time=01m22s) - Overview of the announcements for Microsoft Defender for Servers
defender-for-cloud Episode Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-four.md
Title: Security posture management improvements in Microsoft Defender for Cloud description: Learn how to manage your security posture with Microsoft Defender for Cloud. Previously updated : 05/25/2022 Last updated : 06/01/2022 # Security posture management improvements in Microsoft Defender for Cloud
Last updated 05/25/2022
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=845108fd-e57d-40e0-808a-1239e78a7390" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=845108fd-e57d-40e0-808a-1239e78a7390" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [1:24](/shows/mdc-in-the-field/defender-for-containers#time=01m24s) - Security recommendation refresh time changes
defender-for-cloud Episode Nine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nine.md
Title: Microsoft Defender for Containers in a multi-cloud environment description: Learn about Microsoft Defender for Containers implementation in AWS and GCP. Previously updated : 05/25/2022 Last updated : 06/01/2022 # Microsoft Defender for Containers in a Multi-Cloud Environment
Maya explains about the new workload protection capabilities related to Containe
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=f9470496-abe3-4344-8160-d6a6b65c077f" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=f9470496-abe3-4344-8160-d6a6b65c077f" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [01:12](/shows/mdc-in-the-field/containers-multi-cloud#time=01m12s) - Container protection in a multi-cloud environment
defender-for-cloud Episode One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-one.md
Title: New AWS connector in Microsoft Defender for Cloud description: Learn all about the new AWS connector in Microsoft Defender for Cloud. Previously updated : 05/25/2022 Last updated : 05/29/2022 # New AWS connector in Microsoft Defender for Cloud
Last updated 05/25/2022
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=26cbaec8-0f3f-4bb1-9918-1bf7d912db57" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=26cbaec8-0f3f-4bb1-9918-1bf7d912db57" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [00:00](/shows/mdc-in-the-field/aws-connector) - Introduction
defender-for-cloud Episode Seven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-seven.md
Title: New GCP connector in Microsoft Defender for Cloud description: Learn all about the new GCP connector in Microsoft Defender for Cloud. Previously updated : 05/25/2022 Last updated : 05/29/2022 # New GCP connector in Microsoft Defender for Cloud
Last updated 05/25/2022
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=80ba04f0-1551-48f3-94a2-d2e82e7073c9" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=80ba04f0-1551-48f3-94a2-d2e82e7073c9" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [1:23](/shows/mdc-in-the-field/gcp-connector#time=01m23s) - Overview of the new GCP connector
defender-for-cloud Episode Six https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-six.md
Carlos also covers how Microsoft Defender for Cloud is used to fill the gap betw
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=3811455b-cc20-4ee0-b1bf-9d4df5ee4eaf" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=3811455b-cc20-4ee0-b1bf-9d4df5ee4eaf" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [1:30](/shows/mdc-in-the-field/lessons-from-the-field#time=01m30s) - Why Microsoft Defender for Cloud is a unique solution when compared with other competitors?
defender-for-cloud Episode Ten https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-ten.md
Title: Protecting containers in GCP with Defender for Containers description: Learn how to use Defender for Containers, to protect Containers that are located in Google Cloud Projects. Previously updated : 05/25/2022 Last updated : 05/29/2022 # Protecting containers in GCP with Defender for Containers
Nadav gives insights about workload protection for GKE and how to obtain visibil
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=078af1f2-1f12-4030-bd3f-3e7616150562" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=078af1f2-1f12-4030-bd3f-3e7616150562" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [00:55](/shows/mdc-in-the-field/gcp-containers#time=00m55s) - Architecture solution for Defender for Containers and support for GKE
defender-for-cloud Episode Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-three.md
Title: Microsoft Defender for Containers description: Learn how about Microsoft Defender for Containers. Previously updated : 05/25/2022 Last updated : 05/29/2022 # Microsoft Defender for Containers
Last updated 05/25/2022
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=b8624912-ef9e-4fc6-8c0c-ea65e86d9128" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=b8624912-ef9e-4fc6-8c0c-ea65e86d9128" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [1:09](/shows/mdc-in-the-field/defender-for-containers#time=01m09s) - What's new in the Defender for Containers plan?
defender-for-cloud Episode Twelve https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twelve.md
Title: Enhanced workload protection features in Defender for Servers description: Learn about the enhanced capabilities available in Defender for Servers, for VMs that are located in GCP, AWS and on-premises. Previously updated : 05/25/2022 Last updated : 05/29/2022 # Enhanced workload protection features in Defender for Servers
Netta explains how Defender for Servers applies Azure Arc as a bridge to onboard
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=18fdbe74-4399-44fe-81e7-3e3ce92df451" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=18fdbe74-4399-44fe-81e7-3e3ce92df451" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [00:55](/shows/mdc-in-the-field/enhanced-workload-protection#time=00m55s) - Arc Auto-provisioning in GCP
defender-for-cloud Episode Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-two.md
Title: Integrate Azure Purview with Microsoft Defender for Cloud description: Learn how to integrate Azure Purview with Microsoft Defender for Cloud. Previously updated : 05/25/2022 Last updated : 05/29/2022 # Integrate Azure Purview with Microsoft Defender for Cloud
David explains the use case scenarios for this integration and how the data clas
<br> <br>
-<iframe src="https://aka.ms/docs/player?id=9b911e9c-e933-4b7b-908a-5fd614f822c7" width="1080" height="530" max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=9b911e9c-e933-4b7b-908a-5fd614f822c7" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
- [1:36](/shows/mdc-in-the-field/integrate-with-purview) - Overview of Azure Purview
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
The Insights column of the page gives you more details for each recommendation.
| Icon | Name | Description | |--|--|--|
-| :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: | *Preview recommendation** | This recommendation won't affect your secure score until it's GA. |
+| :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: | **Preview recommendation** | This recommendation won't affect your secure score until it's GA. |
| :::image type="icon" source="media/secure-score-security-controls/fix-icon.png" border="false"::: | **Fix** | From within the recommendation details page, you can use 'Fix' to resolve this issue. | | :::image type="icon" source="media/secure-score-security-controls/enforce-icon.png" border="false"::: | **Enforce** | From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource. | | :::image type="icon" source="media/secure-score-security-controls/deny-icon.png" border="false"::: | **Deny** | From within the recommendation details page, you can prevent new resources from being created with this issue. |
defender-for-iot Concept Micro Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-micro-agent-configuration.md
These configurations include process, and network activity collectors.
|--|--|--|--| | **Interval** | `High` <br>`Medium`<br>`Low` | Determines the sending frequency. | `Medium` | | **Aggregation mode** | `True` <br>`False` | Determines whether to process event aggregation for an identical event. | `True` |
-| **Cache size** | cycle FIFO | Defines the number of events collected in between the the times that data is sent. | `256` |
+| **Cache size** | cycle FIFO | Defines the number of events collected in between the times that data is sent. | `256` |
| **Disable collector** | `True` <br> `False` | Determines whether or not the collector is operational. | `False` | | | | | |
These configurations include process, and network activity collectors.
| Setting Name | Setting options | Description | Default | |--|--|--|--|
-| **Devices** | A list of the network devices separated by a comma. <br><br>For example `eth0,eth1` | Defines the list of network devices (interfaces) that the agent will use to monitor the traffic. <br><br>If a network device is not listed, the Network Raw events will not be recorded for the missing device.| `eth0` |
+| **Devices** | A list of the network devices separated by a comma. <br><br>For example `eth0,eth1` | Defines the list of network devices (interfaces) that the agent will use to monitor the traffic. <br><br>If a network device isn't listed, the Network Raw events will not be recorded for the missing device.| `eth0` |
| | | | | ## Process collector specific-settings
defender-for-iot Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-recommendations.md
Title: Security recommendations for IoT Hub
-description: Learn about the concept of security recommendations and how they are used in the Defender for IoT Hub.
+description: Learn about the concept of security recommendations and how they're used in the Defender for IoT Hub.
Last updated 11/09/2021
defender-for-iot Concept Security Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-security-module.md
Title: Defender-IoT-micro-agent and device twins
-description: Learn about the concept of Defender-IoT-micro-agent twins and how they are used in Defender for IoT.
+description: Learn about the concept of Defender-IoT-micro-agent twins and how they're used in Defender for IoT.
Last updated 03/28/2022
defender-for-iot Configure Pam To Audit Sign In Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/configure-pam-to-audit-sign-in-events.md
Title: Configure Pluggable Authentication Modules (PAM) to audit sign-in events (Preview)
-description: Learn how to configure Pluggable Authentication Modules (PAM) to audit sign-in events when syslog is not configured for your device.
+description: Learn how to configure Pluggable Authentication Modules (PAM) to audit sign-in events when syslog isn't configured for your device.
Last updated 02/20/2022
defender-for-iot How To Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-agent-configuration.md
If the agent configuration object does not exist in the **azureiotsecurity** mod
## Configuration schema and validation
-Make sure to validate your agent configuration against this [schema](https://aka.ms/iot-security-github-module-schema). An agent will not launch if the configuration object does not match the schema.
+Make sure to validate your agent configuration against this [schema](https://aka.ms/iot-security-github-module-schema). An agent will not launch if the configuration object doesn't match the schema.
-If, while the agent is running, the configuration object is changed to a non-valid configuration (the configuration does not match the schema), the agent will ignore the invalid configuration and will continue using the current configuration.
+If, while the agent is running, the configuration object is changed to a non-valid configuration (the configuration doesn't match the schema), the agent will ignore the invalid configuration and will continue using the current configuration.
### Configuration validation
defender-for-iot How To Azure Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-azure-rtos-security-module.md
The default behavior of each configuration is provided in the following tables:
| ASC_SECURITY_MODULE_ID | String | defender-iot-micro-agent | The unique identifier of the device. | | SECURITY_MODULE_VERSION_(MAJOR)(MINOR)(PATCH) | Number | 3.2.1 | The version. | | ASC_SECURITY_MODULE_SEND_MESSAGE_RETRY_TIME | Number | 3 | The amount of time the Defender-IoT-micro-agent will take to send the security message after a fail. (in seconds) |
-| ASC_SECURITY_MODULE_PENDING_TIME | Number | 300 | The Defender-IoT-micro-agent pending time (in seconds). The state will change to suspend, if the time is exceeded.. |
+| ASC_SECURITY_MODULE_PENDING_TIME | Number | 300 | The Defender-IoT-micro-agent pending time (in seconds). The state will change to suspend if the time is exceeded. |
## Collection
defender-for-iot How To Deploy Linux C https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-deploy-linux-c.md
This script performs the following function:
1. Installs prerequisites.
-1. Adds a service user (with interactive sign in disabled).
+1. Adds a service user (with interactive sign-in disabled).
1. Installs the agent as a **Daemon** - assumes the device uses **systemd** for service management.
defender-for-iot How To Deploy Linux Cs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-deploy-linux-cs.md
This script performs the following actions:
- Installs prerequisites. -- Adds a service user (with interactive sign in disabled).
+- Adds a service user (with interactive sign-in disabled).
- Installs the agent as a **Daemon** - assumes the device uses **systemd** for legacy deployment model.
defender-for-iot How To Install Micro Agent For Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-install-micro-agent-for-edge.md
This article explains how to install, and authenticate the Defender micro agent
systemctl status defender-iot-micro-agent.service ```
- 1. Ensure that the service is stable by making sure it is `active` and that the uptime of the process is appropriate
+ 1. Ensure that the service is stable by making sure it's `active` and that the uptime of the process is appropriate
:::image type="content" source="media/quickstart-standalone-agent-binary-installation/active-running.png" alt-text="Check to make sure your service is stable and active.":::
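For scripted health checks, the same verification can be done non-interactively. A minimal sketch, assuming a systemd-based device and the service name shown above:

```bash
# Human-readable status, including whether the unit is active and its uptime.
systemctl status defender-iot-micro-agent.service

# Non-interactive variant: 'is-active --quiet' exits 0 only when active,
# which is convenient in provisioning or monitoring scripts.
if systemctl is-active --quiet defender-iot-micro-agent.service; then
    echo "micro agent is active"
else
    echo "micro agent is not running" >&2
fi
```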
defender-for-iot How To Manage Device Inventory On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-manage-device-inventory-on-the-cloud.md
The device inventory can be used to view device systems, and network information
Some of the benefits of the device inventory include: -- Identify all IOT, and OT devices from different inputs. For example, allowing you to understand which devices in your environment are not communicating, and will require troubleshooting.
+- Identify all IoT and OT devices from different inputs. For example, allowing you to understand which devices in your environment aren't communicating and will require troubleshooting.
- Group, and filter devices by site, type, or vendor.
For a list of filters that can be applied to the device inventory table, see the
1. Select the **Apply** button.
-Multiple filters can be applied at one time. The filters are not saved when you leave the Device inventory page.
+Multiple filters can be applied at one time. The filters aren't saved when you leave the Device inventory page.
## View device information
Select the :::image type="icon" source="media/how-to-manage-device-inventory-on-
## How to identify devices that have not recently communicated with the Azure cloud
-If you are under the impression that certain devices are not actively communicating, there is a way to check, and see which devices have not communicated in a specified time period.
+If you suspect that certain devices aren't actively communicating, there's a way to check which devices haven't communicated in a specified time period.
**To identify all devices that have not communicated recently**:
defender-for-iot How To Region Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-region-move.md
You can move a Microsoft Defender for IoT "iotsecuritysolutions" resource to a d
## Prepare
-In this section, you will prepare to move the resource for the move by finding the resource and confirming it is in a region you wish to move from.
+In this section, you'll prepare for the move by finding the resource and confirming it's in a region you wish to move from.
Before transitioning the resource to the new region, we recommend using [log analytics](../../azure-monitor/logs/quick-create-workspace.md) to store alerts and raw events.
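If you don't already have a workspace, one way to create it is with the Azure CLI. A hedged sketch; the resource group, workspace name, and location are placeholders:

```bash
# Create a Log Analytics workspace to retain alerts and raw events
# before starting the region move. All names and locations are illustrative.
az monitor log-analytics workspace create \
  --resource-group myResourceGroup \
  --workspace-name iot-alerts-archive \
  --location eastus
```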
Before transitioning the resource to the new region, we recommended using [log a
1. Select your hub from the list.
-1. Ensure that you have selected the correct hub, and that it is in the region you want to move it from.
+1. Ensure that you've selected the correct hub, and that it's in the region you want to move it from.
:::image type="content" source="media/region-move/location.png" alt-text="Screenshot showing you the region your hub is located in."::: ## Move
-You are now ready to move your resource to your new location. Follow [these instructions](../../iot-hub/iot-hub-how-to-clone.md) to move your IoT Hub.
+You're now ready to move your resource to your new location. Follow [these instructions](../../iot-hub/iot-hub-how-to-clone.md) to move your IoT Hub.
After transferring, and enabling the resource, you can link to the same log analytics workspace that was configured earlier. ## Verify
-In this section, you will verify that the resource has been moved, that the connection to the IoT Hub has been enabled, and that everything is working correctly.
+In this section, you'll verify that the resource has been moved, that the connection to the IoT Hub has been enabled, and that everything is working correctly.
**To verify the resource is in the correct region**:
defender-for-iot How To Security Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-security-data-access.md
Defender for IoT stores security alerts, recommendations, and raw security data
To configure which Log Analytics workspace is used: 1. Open your IoT hub.
-1. Click the **Settings** blade under the **Security** section.
-1. Click **Data Collection**, and change your Log Analytics workspace configuration.
+1. Select the **Settings** blade under the **Security** section.
+1. Select **Data Collection**, and change your Log Analytics workspace configuration.
To access your alerts and recommendations in your Log Analytics workspace after configuration: 1. Choose an alert or recommendation in Defender for IoT.
-1. Click **further investigation**, then click **To see which devices have this alert click here and view the DeviceId column**.
+1. Select **further investigation**, then select **To see which devices have this alert click here and view the DeviceId column**.
For details on querying data from Log Analytics, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
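The workspace can also be queried from the command line instead of the portal. A sketch, assuming the Azure CLI with the `log-analytics` extension; the workspace GUID is a placeholder, and the exact table and columns depend on your data collection settings:

```bash
# List recent alerts from the configured Log Analytics workspace.
# Workspace GUID, table, and column names are illustrative assumptions.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "SecurityAlert | where TimeGenerated > ago(7d) | project TimeGenerated, AlertName" \
  --output table
```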
defender-for-iot Quickstart Onboard Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/quickstart-onboard-iot-hub.md
You can onboard Defender for IoT to an existing IoT Hub, where you can then moni
:::image type="content" source="media/quickstart-onboard-iot-hub/secure-your-iot-solution.png" alt-text="Select the secure your IoT solution button to secure your solution." lightbox="media/quickstart-onboard-iot-hub/secure-your-iot-solution-expanded.png":::
-The **Secure your IoT solution** button will only appear if the IoT Hub has not already been onboarded, or if you set the Defender for IoT toggle to **Off** while onboarding.
+The **Secure your IoT solution** button will only appear if the IoT Hub hasn't already been onboarded, or if you set the Defender for IoT toggle to **Off** while onboarding.
:::image type="content" source="media/quickstart-onboard-iot-hub/toggle-is-off.png" alt-text="If your toggle was set to off during onboarding.":::
defender-for-iot References Defender For Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/references-defender-for-iot-glossary.md
This glossary provides a brief description of important terms and concepts for t
|--|--|--| | **Device twins** | Device twins are JSON documents that store device state information including metadata, configurations, and conditions. | [Module Twin](#m) <br /> <br />[Defender-IoT-micro-agent twin](#s) | | **Defender-IoT-micro-agent twin** `(DB)` | The Defender-IoT-micro-agent twin holds all of the information that is relevant to device security, for each specific device in your solution. | [Device twin](#d) <br /> <br />[Module Twin](#m) |
-| **Device inventory** | Defender for IoT identifies, and classifies devices as a single unique network device in the inventory for: <br><br> - Standalone IT, OT, and IoT devices with 1 or multiple NICs. <br><br> - Devices composed of multiple backplane components. This includes all racks, slots, and modules. <br><br> - Devices that act as network infrastructure. For example, switches, and routers with multiple NICs. <br><br> - Public internet IP addresses, multicast groups, and broadcast groups are not considered inventory devices. <br><br>Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.|
+| **Device inventory** | Defender for IoT identifies, and classifies devices as a single unique network device in the inventory for: <br><br> - Standalone IT, OT, and IoT devices with 1 or multiple NICs. <br><br> - Devices composed of multiple backplane components. This includes all racks, slots, and modules. <br><br> - Devices that act as network infrastructure. For example, switches, and routers with multiple NICs. <br><br> - Public internet IP addresses, multicast groups, and broadcast groups aren't considered inventory devices. <br><br>Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.|
## E
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/release-notes.md
Listed below are the support, breaking change policies for Defender for IoT, and
- **CIS benchmarks**: The micro agent now supports recommendations based on CIS Distribution Independent Linux Benchmarks, version 2.0.0, and the ability to disable specific CIS Benchmark checks or groups using twin configurations. For more information, see [Micro agent configurations (Preview)](concept-micro-agent-configuration.md). -- **Micro agent supported devices list expands**: The micro agent now supports Debian 11 AMD64 and ARM32v7 devices, as well as Ubuntu Server 18.04 ARM32 Linux devices & Ubuntu Server 20.04 ARM32 & ARM64 Linux devices.
+- **Micro agent supported devices list expands**: The micro agent now supports Debian 11 AMD64 and ARM32v7 devices, Ubuntu Server 18.04 ARM32 Linux devices, and Ubuntu Server 20.04 ARM32 and ARM64 Linux devices.
For more information, see [Agent portfolio overview and OS support (Preview)](concept-agent-portfolio-overview-os-support.md).
Listed below are the support, breaking change policies for Defender for IoT, and
- DNS network activity on managed devices is now supported. Microsoft threat intelligence security graph can now detect suspicious activity based on DNS traffic. -- [Leaf device proxying](../../iot-edge/how-to-connect-downstream-iot-edge-device.md#integrate-microsoft-defender-for-iot-with-iot-edge-gateway): There is now an enhanced integration with IoT Edge. This integration enhances the connectivity between the agent, and the cloud using leaf device proxying.
+- [Leaf device proxying](../../iot-edge/how-to-connect-downstream-iot-edge-device.md#integrate-microsoft-defender-for-iot-with-iot-edge-gateway): There's now an enhanced integration with IoT Edge. This integration enhances the connectivity between the agent and the cloud using leaf device proxying.
## October 2021
defender-for-iot Resources Agent Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/resources-agent-frequently-asked-questions.md
This article provides a list of frequently asked questions and answers about the
## Do I have to install an embedded security agent?
-Agent installation on your IoT devices isn't mandatory in order to enable Defender for IoT. You can choose between the following two options There are four different levels of security monitoring, and management capabilities which will provide different levels of protection:
+Agent installation on your IoT devices isn't mandatory in order to enable Defender for IoT. You can choose between several options that provide different levels of security monitoring and management capabilities, and different levels of protection:
- Install the Defender for IoT embedded security agent with or without modifications. This option provides the highest level of enhanced security insights into device behavior and access.
defender-for-iot Security Agent Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/security-agent-architecture.md
Defender for IoT offers different installer agents for 32 bit and 64-bit Windows
## Next steps
-In this article, you got a high-level overview about Defender for IoT Defender-IoT-micro-agent architecture, and the available installers.To continue getting started with Defender for IoT deployment, review the security agent authentication methods that are available.
+In this article, you got a high-level overview about Defender for IoT Defender-IoT-micro-agent architecture, and the available installers. To continue getting started with Defender for IoT deployment, review the security agent authentication methods that are available.
> [!div class="nextstepaction"] > [Security agent authentication methods](concept-security-agent-authentication-methods.md)
defender-for-iot Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/troubleshoot-agent.md
Last updated 03/28/2022
This article explains how to solve potential problems in the security agent start-up process.
-Microsoft Defender for IoT agent self-starts immediately after installation. The agent start up process includes reading local configuration, connecting to Azure IoT Hub, and retrieving the remote twin configuration. Failure in any one of these steps may cause the security agent to fail.
+Microsoft Defender for IoT agent self-starts immediately after installation. The agent start-up process includes reading local configuration, connecting to Azure IoT Hub, and retrieving the remote twin configuration. Failure in any one of these steps may cause the security agent to fail.
In this troubleshooting guide you'll learn how to:
In this troubleshooting guide you'll learn how to:
## Validate if the security agent is running
-1. To validate is the security agent is running, wait a few minutes after installing the agent and and run the following command.
+1. To validate that the security agent is running, wait a few minutes after installing the agent and run the following command.
<br> **C agent**
Defender for IoT agent encountered an error! Error in: {Error Code}, reason: {Er
| Error Code | Error sub code | Error details | Remediate C | Remediate C# | |--|--|--|--|--| | Local Configuration | Missing configuration | A configuration is missing in the local configuration file. The error message should state which key is missing. | Add the missing key to the /var/LocalConfiguration.json file, see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details. | Add the missing key to the General.config file, see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. |
-| Local Configuration | Cant Parse Configuration | A configuration value can't be parsed. The error message should state which key can't be parsed. A configuration value cannot be parsed either because the value is not in the expected type, or the value is out of range. | Fix the value of the key in /var/LocalConfiguration.json file so that it matches the LocalConfiguration schema, see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. | Fix the value of the key in General.config file so that it matches the schema, see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details. |
+| Local Configuration | Cant Parse Configuration | A configuration value can't be parsed. The error message should state which key can't be parsed. A configuration value can't be parsed either because the value isn't of the expected type, or the value is out of range. | Fix the value of the key in /var/LocalConfiguration.json file so that it matches the LocalConfiguration schema, see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. | Fix the value of the key in General.config file so that it matches the schema, see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details. |
| Local Configuration | File Format | Failed to parse configuration file. | The configuration file is corrupted; download the agent and reinstall. | - |
-| Remote Configuration | Timeout | The agent could not fetch the azureiotsecurity module twin within the timeout period. | Make sure authentication configuration is correct and try again. | The agent could not fetch the azureiotsecurity module twin within timeout period. Make sure authentication configuration is correct and try again. |
-| Authentication | File Not Exist | The file in the given path does not exist. | Make sure the file exists in the given path or go to the **LocalConfiguration.json** file and change the **FilePath** configuration. | Make sure the file exists in the given path or go to the **Authentication.config** file and change the **filePath** configuration. |
+| Remote Configuration | Timeout | The agent could not fetch the azureiotsecurity module twin within the timeout period. | Make sure authentication configuration is correct and try again. | The agent couldn't fetch the azureiotsecurity module twin within timeout period. Make sure authentication configuration is correct and try again. |
+| Authentication | File Not Exist | The file in the given path doesn't exist. | Make sure the file exists in the given path or go to the **LocalConfiguration.json** file and change the **FilePath** configuration. | Make sure the file exists in the given path or go to the **Authentication.config** file and change the **filePath** configuration. |
| Authentication | File Permission | The agent does not have sufficient permissions to open the file. | Give the **asciotagent** user read permissions on the file in the given path. | Make sure the file is accessible. | | Authentication | File Format | The given file is not in the correct format. | Make sure the file is in the correct format. The supported file types are .pfx and .pem. | Make sure the file is a valid certificate file. | | Authentication | Unauthorized | The agent was not able to authenticate against IoT Hub with the given credentials. | Validate authentication configuration in LocalConfiguration file, go through the authentication configuration and make sure all the details are correct, validate that the secret in the file matches the authenticated identity. | Validate authentication configuration in Authentication.config, go through the authentication configuration and make sure all the details are correct, then validate that the secret in the file matches the authenticated identity. |
defender-for-iot Troubleshoot Defender Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/troubleshoot-defender-micro-agent.md
To view the status of the service:
systemctl status defender-iot-micro-agent.service ```
-1. Check that the service is stable by making sure it is `active`, and that the uptime in the process is appropriate.
+1. Check that the service is stable by making sure it's `active`, and that the uptime in the process is appropriate.
- :::image type="content" source="media/troubleshooting/active-running.png" alt-text="Ensure your service is stable by checking to see that it is active and the uptime is appropriate.":::
+ :::image type="content" source="media/troubleshooting/active-running.png" alt-text="Ensure your service is stable by checking to see that it's active and the uptime is appropriate.":::
If the service is listed as `inactive`, use the following command to start the service:
defender-for-iot How To Connect Sensor By Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-connect-sensor-by-proxy.md
The following diagram shows data going from Microsoft Defender for IoT to the Io
## Set up your system
-For this scenario we will be installing, and configuring the latest version of [Squid](http://www.squid-cache.org/) on an Ubuntu 18 server.
+For this scenario, we'll install and configure the latest version of [Squid](http://www.squid-cache.org/) on an Ubuntu 18 server.
> [!Note] > Microsoft Defender for IoT does not offer support for Squid or any other proxy service.
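The installation step might look like the following sketch, assuming an Ubuntu 18.04 server; the ACLs that allow your sensors through the proxy are then added to `/etc/squid/squid.conf` for your specific network and aren't shown here:

```bash
# Install Squid and confirm the service is running. Proxy ACLs for the
# sensors are configured separately in /etc/squid/squid.conf.
sudo apt-get update
sudo apt-get install -y squid
sudo systemctl status squid
```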
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
This section describes how to define users. Cyberx, support, and administrator u
If users aren't active at the keyboard or mouse for a specific time, they're signed out of their session and must sign in again.
-When users haven't worked with their console mouse or keyboard for 30 minutes, a session sign out is forced.
+When users haven't worked with their console mouse or keyboard for 30 minutes, a session sign-out is forced.
This feature is enabled by default and on upgrade, but can be disabled. In addition, session counting times can be updated. Session times are defined in seconds. Definitions are applied per sensor and on-premises management console.
You can recover the password for the on-premises management console or the senso
**To recover the password for the on-premises management console, or the sensor**:
-1. On the sign in screen of either the on-premises management console or the sensor, select **Password recovery**. The **Password recovery** screen opens.
+1. On the sign-in screen of either the on-premises management console or the sensor, select **Password recovery**. The **Password recovery** screen opens.
- :::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Screenshot of the Select Password recovery from the sign in screen of either the on-premises management console, or the sensor.":::
+ :::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Screenshot of the Select Password recovery from the sign-in screen of either the on-premises management console, or the sensor.":::
1. Select either **CyberX** or **Support** from the drop-down menu, and copy the unique identifier code.
defender-for-iot References Defender For Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-defender-for-iot-glossary.md
This glossary provides a brief description of important terms and concepts for t
|--|--|--| | **Data mining** | Generate comprehensive and granular reports about your network devices:<br /><br />- **SOC incident response**: Reports in real time to help deal with immediate incident response. For example, a report can list devices that might need patching.<br /><br />- **Forensics**: Reports based on historical data for investigative reports.<br /><br />- **IT network integrity**: Reports that help improve overall network security. For example, a report can list devices with weak authentication credentials.<br /><br />- **visibility**: Reports that cover all query items to view all baseline parameters of your network.<br /><br />Save data-mining reports for read-only users to view. | **[Baseline](#b)<br /><br />[Reports](#r)** | | **Defender for IoT platform** | The Defender for IoT solution installed on Defender for IoT sensors and the on-premises management console. | **[Sensor](#s)<br /><br />[On-premises management console](#o)** |
-| **Inventory device** | Defender for IoT will identify and classify devices as a single unique network device in the inventory for:
-1. Standalone IT/OT/IoT devices (w/ 1 or multiple NICs)
-1. Devices composed of multiple backplane components (including all racks/slots/modules)
-1. Devices acting as network infrastructure such as Switch/Router (w/ multiple NICs).
-Public internet IP addresses, multicast groups, and broadcast groups are not considered inventory devices. Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.|
+| **Inventory device** | Defender for IoT will identify and classify devices as a single unique network device in the inventory for:<br><br>- Standalone IT/OT/IoT devices (w/ 1 or multiple NICs)<br>- Devices composed of multiple backplane components (including all racks/slots/modules)<br>- Devices acting as network infrastructure such as Switch/Router (w/ multiple NICs). <br><br>Public internet IP addresses, multicast groups, and broadcast groups are not considered inventory devices. Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.|
| **Device map** | A graphical representation of network devices that Defender for IoT detects. It shows the connections between devices and information about each device. Use the map to:<br /><br />- Retrieve and control critical device information.<br /><br />- Analyze network slices.<br /><br />- Export device details and summaries. | **[Purdue layer group](#p)** | | **Device inventory - sensor** | The device inventory displays an extensive range of device attributes detected by Defender for IoT. Options are available to:<br /><br />- Filter displayed information.<br /><br />- Export this information to a CSV file.<br /><br />- Import Windows registry details. | **[Group](#g)** <br /><br />**[Device inventory- on-premises management console](#d)** | | **Device inventory - on-premises management console** | Device information from connected sensors can be viewed from the on-premises management console in the device inventory. This gives users of the on-premises management console a comprehensive view of all network information. | **[Device inventory - sensor](#d)<br /><br />[Device inventory - data integrator](#d)** |
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-ingress-egress.md
description: Learn about the data ingress and egress requirements for integrating Azure Digital Twins with other services. Previously updated : 03/01/2022 Last updated : 06/01/2022
Azure Digital Twins is typically used together with other services to create flexible, connected solutions that use your data in different kinds of ways. This article covers data ingress and egress for Azure Digital Twins and Azure services that can be used to take advantage of it.
-Using [event routes](concepts-route-events.md), Azure Digital Twins can receive data from upstream services such as [IoT Hub](../iot-hub/about-iot-hub.md) or [Logic Apps](../logic-apps/logic-apps-overview.md), which are used to deliver telemetry and notifications.
+Azure Digital Twins can receive data from upstream services such as [IoT Hub](../iot-hub/about-iot-hub.md) or [Logic Apps](../logic-apps/logic-apps-overview.md), which are used to deliver telemetry and notifications.
-Azure Digital Twins can also route data to downstream services, such as [Azure Maps](../azure-maps/about-azure-maps.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md), for storage, workflow integration, analytics, and more.
+Azure Digital Twins can also use [event routes](concepts-route-events.md) to send data to downstream services, such as [Azure Maps](../azure-maps/about-azure-maps.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md), for storage, workflow integration, analytics, and more.
## Data ingress
To ingest data from any source into Azure Digital Twins, use an [Azure function]
You can also learn how to connect Azure Digital Twins to a Logic Apps trigger in [Integrate with Logic Apps](how-to-integrate-logic-apps.md).
-## Data egress services
+## Data egress
-You may want to send Azure Digital Twins data to other downstream services for storage or additional processing.
+You may want to send Azure Digital Twins data to other downstream services for storage or additional processing.
-To send twin data to [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), set up a [data history (preview) connection](concepts-data-history.md) that automatically historizes digital twin property updates from your Azure Digital Twins instance to an Azure Data Explorer cluster. You can then query this data in Azure Data Explorer using the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md).
+Digital twin data can be sent to most Azure services using *endpoints*. If your destination is [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), you can use *data history* instead to automatically historize twin property updates to an Azure Data Explorer cluster, where they can be queried as time series data. The rest of this section describes these capabilities in more detail.
-To send data to other services, such as [Azure Maps](../azure-maps/about-azure-maps.md), [Time Series Insights](../time-series-insights/overview-what-is-tsi.md), or [Azure Storage](../storage/common/storage-introduction.md), start by attaching the destination service to an *endpoint*.
+>[!NOTE]
+>Azure Digital Twins implements *at least once* delivery for data emitted to egress services.
+
+### Endpoints
+
+To send Azure Digital Twins data to most Azure services, such as [Azure Maps](../azure-maps/about-azure-maps.md), [Time Series Insights](../time-series-insights/overview-what-is-tsi.md), or [Azure Storage](../storage/common/storage-introduction.md), start by attaching the destination service to an *endpoint*.
Endpoints can be instances of any of these Azure services: * [Event Hubs](../event-hubs/event-hubs-about.md)
The endpoint is attached to an Azure Digital Twins instance using management API
For detailed instructions on how to send Azure Digital Twins data to Azure Maps, see [Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md). For detailed instructions on how to send Azure Digital Twins data to Time Series Insights, see [Integrate with Time Series Insights](how-to-integrate-time-series-insights.md).
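For reference, attaching an endpoint and routing events to it can also be scripted. A minimal sketch, assuming the Azure CLI with the `azure-iot` extension; the instance, hub, and resource group names are placeholders:

```bash
# Attach an Event Hubs endpoint to the Azure Digital Twins instance...
az dt endpoint create eventhub \
  --dt-name <your-instance-name> \
  --endpoint-name myhub-endpoint \
  --eventhub <event-hub-name> \
  --eventhub-namespace <event-hub-namespace> \
  --eventhub-resource-group <resource-group>

# ...then route all twin events to that endpoint ('true' matches every event).
az dt route create \
  --dt-name <your-instance-name> \
  --endpoint-name myhub-endpoint \
  --route-name all-events \
  --filter true
```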
-Azure Digital Twins implements *at least once* delivery for data emitted to egress services.
+### Data history
+
+To send twin data to [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), set up a [data history (preview) connection](concepts-data-history.md) that automatically historizes digital twin property updates from your Azure Digital Twins instance to an Azure Data Explorer cluster. The data history connection requires an [event hub](../event-hubs/event-hubs-about.md), but doesn't require an explicit endpoint.
+
+Once the data has been historized, you can query this data in Azure Data Explorer using the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md).
+
+You can also use data history in combination with [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) to aggregate data from disparate sources. One useful application of this is to combine information technology (IT) data from ERP or CRM systems (like Dynamics 365, SAP, or Salesforce) with operational technology (OT) data from IoT devices and production management systems. For an example that illustrates how a company might combine this data, see the following blog post: [Integrating IT and OT Data with Azure Digital Twins, Azure Data Explorer, and Azure Synapse](https://techcommunity.microsoft.com/t5/internet-of-things-blog/integrating-it-and-ot-data-with-azure-digital-twins-azure-data/ba-p/3401981).
## Next steps
Learn more about endpoints and routing events to external
* [Endpoints and event routes](concepts-route-events.md) See how to set up Azure Digital Twins to ingest data from IoT Hub:
-* [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md)
+* [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md)
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Azure national clouds are isolated from each other and from global commercial Az
| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | n/a | Supported | AT&T NetBond, British Telecom, Equinix, Level 3 Communications, Verizon | | **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | Supported | Equinix, Internet2, Megaport, Verizon | | **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | n/a | Supported | Equinix, CenturyLink Cloud Connect, Verizon |
-| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/data-center-locations/arizona/phoenix-data-center/) | US Gov Arizona | Supported | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
+| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/locations/arizona/phoenix-arizona-chandler/) | US Gov Arizona | Supported | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | Supported | CenturyLink Cloud Connect, Megaport | | **Silicon Valley** | [Equinix SV4](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv4/) | n/a | Supported | AT&T, Equinix, Level 3 Communications, Verizon | | **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | n/a | Supported | Equinix, Megaport |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
If you are remote and do not have fiber connectivity or you want to explore othe
| **[Stream Data Centers]( https://www.streamdatacenters.com/products-services/network-cloud/ )** | Megaport | | **[RagingWire Data Centers](https://www.ragingwire.com/wholesale/wholesale-data-centers-worldwide-nexcenters)** | IX Reach, Megaport, PacketFabric | | **[T5 Datacenters](https://t5datacenters.com/)** | IX Reach |
-| **[vXchnge](https://www.vxchnge.com)** | IX Reach, Megaport |
+| **vXchnge** | IX Reach, Megaport |
## Connectivity through National Research and Education Networks (NREN)
firewall-manager Manage Web Application Firewall Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/manage-web-application-firewall-policies.md
+
+ Title: Manage Azure Web Application Firewall policies (preview)
+description: Learn how to use Azure Firewall Manager to manage Azure Web Application Firewall policies
++++ Last updated : 06/01/2022++
+# Manage Web Application Firewall policies (preview)
+
+You can centrally create and associate Web Application Firewall (WAF) policies for your application delivery platforms, including Azure Front Door and Azure Application Gateway.
+
+> [!IMPORTANT]
+> Managing Web Application Firewall policies using Azure Firewall Manager is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+- A deployed [Azure Front Door](../frontdoor/quickstart-create-front-door.md) or [Azure Application Gateway](../application-gateway/quick-create-portal.md)
+
+## Associate a WAF policy
+
+1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+2. In the Azure portal search bar, type **Firewall Manager** and press **Enter**.
+3. On the Azure Firewall Manager page, select **Application Delivery Platforms**.
+ :::image type="content" source="media/manage-web-application-firewall-policies/application-delivery-platforms.png" alt-text="Screenshot of Firewall Manager application delivery platforms.":::
+1. Select your application delivery platform (Front Door or Application Gateway) to associate a WAF policy. In this example, we'll associate a WAF policy with a Front Door.
+1. Select **Manage Security** and then select **Associate WAF policy**.
+ :::image type="content" source="media/manage-web-application-firewall-policies/associate-waf-policy.png" alt-text="Screenshot of Firewall Manager associate WAF policy.":::
+1. Select either an existing policy or **Create New**.
+1. Select the domain(s) that you want the WAF policy to protect with your Azure Front Door profile.
+1. Select **Associate**.
+
+## View and manage WAF policies
+
+1. On the Azure Firewall Manager page, under **Security**, select **Web application firewall policies** to view all your policies.
+1. Select **Add** to create a new WAF policy or import settings from an existing WAF policy.
+ :::image type="content" source="media/manage-web-application-firewall-policies/web-application-firewall-policies.png" alt-text="Screenshot of Firewall Manager Web Application Firewall policies.":::
+
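Policy creation can also be scripted before associating it in Firewall Manager. A hedged sketch with the Azure CLI, assuming the `front-door` extension is installed; the policy and resource group names are placeholders:

```bash
# Create a Front Door WAF policy in prevention mode. The resulting policy
# can then be associated from Firewall Manager as described above.
az network front-door waf-policy create \
  --name MyWafPolicy \
  --resource-group myResourceGroup \
  --mode Prevention
```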
+## Upgrade Application Gateway WAF configuration to WAF policy
+
+For Application Gateway with WAF configuration, you can upgrade the WAF configuration to a WAF policy associated with Application Gateway.
+
+The WAF policy can be shared across multiple application gateways. Also, a WAF policy allows you to take advantage of advanced and new features like bot protection, newer rule sets, and reduced false positives. New features are only released on WAF policies.
+
+To upgrade a WAF configuration to a WAF policy, select **Upgrade from WAF configuration** from the desired application gateway.
++
+## Next steps
+
+- [Configure an Azure DDoS Protection Plan using Azure Firewall Manager (preview)](configure-ddos.md)
+
frontdoor Edge Locations By Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/edge-locations-by-region.md
Previously updated : 05/25/2021 Last updated : 06/01/2022 # Azure Front Door edge locations by metro
-This article lists current metros containing edge locations, sorted by region, for Azure Front Door. Each metro may contain more than one edge locations. Currently, Azure Front Door has 118 edge locations across 100 metro cities.
+This article lists current metros containing edge locations, sorted by region, for Azure Front Door. Each metro may contain more than one edge location. Currently, Azure Front Door has 118 edge locations across 100 metro cities. Azure Front Door also has 4 edge locations across 4 Azure US Government cloud regions.
## Microsoft edge locations
frontdoor Front Door How To Onboard Apex Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-how-to-onboard-apex-domain.md
Previously updated : 11/13/2020 Last updated : 05/31/2022 -
+zone_pivot_groups: front-door-tiers
+ # Onboard a root or apex domain on your Front Door++
+Azure Front Door supports adding a custom domain to a Front Door profile. This is done by adding a DNS TXT record for domain ownership validation and creating a CNAME record in your DNS configuration to route DNS queries for the custom domain to the Azure Front Door endpoint. For apex domains, a DNS TXT record will continue to be used for domain validation. However, the DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`, you can create CNAME records for `somelabel.contoso.com`, but you can't create a CNAME for `contoso.com` itself. Front Door doesn't expose the frontend IP address associated with your Front Door profile, so you can't map your apex domain to an IP address if your intent is to onboard it to Azure Front Door.
+++ Azure Front Door uses CNAME records to validate domain ownership for onboarding of custom domains. Front Door doesn't expose the frontend IP address associated with your Front Door profile, so you can't map your apex domain to an IP address if your intent is to onboard it to Azure Front Door. The DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`, you can create CNAME records for `somelabel.contoso.com`, but you can't create a CNAME for `contoso.com` itself. This restriction presents a problem for application owners who have load-balanced applications behind Azure Front Door. Since using a Front Door profile requires creation of a CNAME record, it isn't possible to point at the Front Door profile from the zone apex. + This problem can be resolved by using alias records in Azure DNS. Unlike CNAME records, alias records are created at the zone apex. Application owners can use them to point their zone apex record to a Front Door profile that has public endpoints. Application owners point to the same Front Door profile that's used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Front Door profile. Mapping your apex or root domain to your Front Door profile basically requires CNAME flattening or DNS chasing, a mechanism where the DNS provider recursively resolves the CNAME entry until it hits an IP address. This functionality is supported by Azure DNS for Front Door endpoints.
Mapping your apex or root domain to your Front Door profile basically requires C
You can use the Azure portal to onboard an apex domain on your Front Door and enable HTTPS on it by associating it with a certificate for TLS termination. Apex domains are also referred to as root or naked domains. +
+## Onboard the custom domain to your Front Door
+
+1. Select **Domains** from under *Settings* on the left side pane for your Front Door profile and then select **+ Add** to add a new custom domain.
+
+ :::image type="content" source="./media/front-door-apex-domain/add-domain.png" alt-text="Screenshot of adding a new domain to Front Door profile.":::
+
+1. On the **Add a domain** page, you'll enter information about the custom domain. You can choose Azure-managed DNS (recommended), or you can choose to use your own DNS provider.
+
+ - **Azure-managed DNS** - select an existing DNS zone and for *Custom domain*, select **Add new**. Select **APEX domain** from the pop-up and then select **OK** to save.
+
+ :::image type="content" source="./media/front-door-apex-domain/add-custom-domain.png" alt-text="Screenshot of adding a new custom domain to Front Door profile.":::
+
+ - **Another DNS provider** - make sure the DNS provider supports CNAME flattening and follow the steps for [adding a custom domain](standard-premium/how-to-add-custom-domain.md#add-a-new-custom-domain).
+
+1. Select the **Pending** validation state. A new page will appear with DNS TXT record information needed to validate the custom domain. The TXT record is in the form of `_dnsauth.<your_subdomain>`.
+
+ :::image type="content" source="./media/front-door-apex-domain/pending-validation.png" alt-text="Screenshot of custom domain pending validation.":::
+
+ - **Azure DNS-based zone** - select the **Add** button and a new TXT record with the displayed record value will be created in the Azure DNS zone.
+
+ :::image type="content" source="./media/front-door-apex-domain/validate-custom-domain.png" alt-text="Screenshot of validate a new custom domain.":::
+
+ - If you're using another DNS provider, manually create a new TXT record of name `_dnsauth.<your_subdomain>` with the record value as shown on the page.
+
+1. Close the *Validate the custom domain* page and return to the *Domains* page for the Front Door profile. You should see the *Validation state* change from **Pending** to **Approved**. If not, wait up to 10 minutes for changes to reflect. If your validation doesn't get approved, make sure your TXT record is correct (a quick way to check the record from a shell is shown after these steps), and that your name servers are configured correctly if you're using Azure DNS.
+
+ :::image type="content" source="./media/front-door-apex-domain/validation-approved.png" alt-text="Screenshot of new custom domain passing validation.":::
+
+1. Select **Unassociated** from the *Endpoint association* column, to add the new custom domain to an endpoint.
+
+ :::image type="content" source="./media/front-door-apex-domain/unassociated-endpoint.png" alt-text="Screenshot of unassociated custom domain to an endpoint.":::
+
+1. On the *Associate endpoint and route* page, select the **Endpoint** and **Route** you would like to associate the domain to. Then select **Associate** to complete this step.
+
+ :::image type="content" source="./media/front-door-apex-domain/associate-endpoint.png" alt-text="Screenshot of associated endpoint and route page for a domain.":::
+
+1. Under the *DNS state* column, select **CNAME record is currently not detected** to add the alias record to your DNS provider.
+
+ - **Azure DNS** - select the **Add** button on the page.
+
+ :::image type="content" source="./media/front-door-apex-domain/cname-record.png" alt-text="Screenshot of add or update CNAME record page.":::
+
+ - **A DNS provider that supports CNAME flattening** - you must manually enter the alias record name.
+
+1. Once the alias record gets created and the custom domain is associated to the Azure Front Door endpoint, traffic will start flowing.
+
+ :::image type="content" source="./media/front-door-apex-domain/cname-record-added.png" alt-text="Screenshot of completed APEX domain configuration.":::
+
+> [!NOTE]
+> The **DNS state** column is meant for the CNAME mapping check. Because an apex domain doesn't support CNAME records, the DNS state will show 'CNAME record is currently not detected' even after you add the alias record to the DNS provider.
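A quick way to check both records from a shell, as referenced in the steps above. A sketch assuming the `dig` utility and the placeholder apex domain `contoso.com`:

```bash
# Confirm the domain-validation TXT record is publicly resolvable.
dig +short TXT _dnsauth.contoso.com

# Confirm the apex resolves. With an alias record (CNAME flattening),
# the apex returns A records rather than a CNAME.
dig +short A contoso.com
```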
+
+## Enable HTTPS on your custom domain
+
+Follow the guidance for [configuring HTTPS for your custom domain](standard-premium/how-to-configure-https-custom-domain.md) to enable HTTPS for your apex domain.
+
+## Managed certificate renewal for apex domain
+
+Front Door managed certificates will automatically rotate certificates only if the domain CNAME is pointed to the Front Door endpoint. If the apex domain doesn't have a CNAME record pointing to the Front Door endpoint, auto-rotation for the managed certificate will fail until domain ownership is revalidated. The validation column will become `Pending-revalidation` 45 days before the managed certificate expires. Select the **Pending-revalidation** link and then select the **Regenerate** button to regenerate the TXT token. After that, add the TXT token to the DNS provider settings.
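To spot-check the certificate currently served for the apex domain, for example to confirm that a rotation went through, you can inspect it with `openssl`. A sketch; the domain is a placeholder:

```bash
# Print the subject and validity window of the certificate served at the apex.
echo | openssl s_client -servername contoso.com -connect contoso.com:443 2>/dev/null \
  | openssl x509 -noout -subject -dates
```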
+++ ## Create an alias record for zone apex 1. Open **Azure DNS** configuration for the domain to be onboarded.
You can use the Azure portal to onboard an apex domain on your Front Door and en
> [!WARNING] > Ensure that you have created appropriate routing rules for your apex domain or added the domain to existing routing rules. + ## Next steps - Learn how to [create a Front Door](quickstart-create-front-door.md).
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md
Modernize your internet first applications on Azure with Cloud Native experience
For a comparison of supported features in Azure Front Door, see [Tier comparison](standard-premium/tier-comparison.md).
+## Where is the service available?
+
+Azure Front Door is available in Microsoft Azure (Commercial) and Microsoft Azure Government (US).
## Pricing For pricing information, see [Front Door Pricing](https://azure.microsoft.com/pricing/details/frontdoor/). For information about service-level agreements, see [SLA for Azure Front Door](https://azure.microsoft.com/support/legal/sla/frontdoor/v1_0/).
governance Guest Configuration Baseline Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-docker.md
+
+ Title: Reference - Azure Policy guest configuration baseline for Docker
+description: Details of the Docker baseline on Azure implemented through Azure Policy guest configuration.
Last updated : 05/17/2022+++
+# Docker security baseline
+
+This article details the configuration settings for Docker hosts as applicable in the following
+implementations:
+
+- **\[Preview\]: Linux machines should meet requirements for the Azure security baseline for Docker hosts**
+- **Vulnerabilities in security configuration on your machines should be remediated** in Azure
+ Security Center
+
+For more information, see [Understand the guest configuration feature of Azure Policy](../concepts/guest-configuration.md) and
+[Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md).
+
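As a hedged sketch of how such a definition might be assigned at resource-group scope with the Azure CLI; the policy identifier below is illustrative, not the real built-in name (look it up with `az policy definition list`):

```bash
# Assign the Docker-host baseline guest configuration policy to a scope.
# Subscription, resource group, and policy identifiers are placeholders.
az policy assignment create \
  --name docker-host-baseline \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
  --policy "<built-in-policy-definition-name-or-id>"
```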
+## General security controls
+
+|Name<br /><sub>(CCEID)</sub> |Details |Remediation check |
+||||
+|Docker inventory Information<br /><sub>(0.0)</sub> |Description: None |None |
+|Ensure a separate partition for containers has been created<br /><sub>(1.01)</sub> |Description: Docker depends on /var/lib/docker as the default directory where all Docker related files, including the images, are stored. This directory might fill up fast and soon Docker and the host could become unusable. So, it's advisable to create a separate partition (logical volume) for storing Docker files. |For new installations, create a separate partition for /var/lib/docker mount point. For systems that were previously installed, use the Logical Volume Manager (LVM) to create partitions. |
+|Ensure docker version is up-to-date<br /><sub>(1.03)</sub> |Description: Using an up-to-date Docker version will keep your host secure. |Follow the Docker documentation to upgrade your version. |
+|Ensure auditing is configured for the docker daemon<br /><sub>(1.05)</sub> |Description: Apart from auditing your regular Linux file system and system calls, audit Docker daemon as well. Docker daemon runs with root privileges. It's thus necessary to audit its activities and usage. |Add the line `-w /usr/bin/docker -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /var/lib/docker<br /><sub>(1.06)</sub> |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /var/lib/docker is one such directory. It holds all the information about containers. It must be audited. |Add the line `-w /var/lib/docker -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /etc/docker<br /><sub>(1.07)</sub> |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /etc/docker is one such directory. It holds various certificates and keys used for TLS communication between Docker daemon and Docker client. It must be audited. |Add the line `-w /etc/docker -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - docker.service<br /><sub>(1.08)</sub> |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. Docker.service is one such file. The docker.service file might be present if the daemon parameters have been changed by an administrator. It holds various parameters for Docker daemon. It must be audited, if applicable. |Find out the 'docker.service' file location by running: `systemctl show -p FragmentPath docker.service` and add the line `-w {docker.service file location} -k docker` into the /etc/audit/audit.rules file where `{docker.service file location}` is the file path you have found earlier. Restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - docker.socket<br /><sub>(1.09)</sub> |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. Docker.socket is one such file. It holds various parameters for Docker daemon socket. It must be audited, if applicable. |Find out the 'docker.socket' file location by running: `systemctl show -p FragmentPath docker.socket` and add the line `-w {docker.socket file location} -k docker` into the /etc/audit/audit.rules file where `{docker.socket file location}` is the file path you have found earlier. Restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /etc/default/docker<br /><sub>(1.10)</sub> |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /etc/default/docker is one such file. It holds various parameters for Docker daemon. It must be audited, if applicable. |Add the line `-w /etc/default/docker -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /etc/docker/daemon.json<br /><sub>(1.11)</sub> |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /etc/docker/daemon.json is one such file. It holds various parameters for Docker daemon. It must be audited, if applicable. |Add the line `-w /etc/docker/daemon.json -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /usr/bin/docker-containerd<br /><sub>(1.12)</sub> |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /usr/bin/docker-containerd is one such file. Docker now relies on containerd and runC to spawn containers. It must be audited, if applicable. |Add the line `-w /usr/bin/docker-containerd -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /usr/bin/docker-runc<br /><sub>(1.13)</sub> |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /usr/bin/docker-runc is one such file. Docker now relies on containerd and runC to spawn containers. It must be audited, if applicable. |Add the line `-w /usr/bin/docker-runc -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure network traffic is restricted between containers on the default bridge<br /><sub>(2.01)</sub> |Description: The inter-container communication would be disabled on the default network bridge. If any communication between containers on the same host is desired, then it needs to be explicitly defined using container linking or alternatively custom networks have to be defined. |Run the docker in daemon mode and pass `--icc=false` as an argument or set the 'icc' setting to false in the daemon.json file. Alternatively, you can follow the Docker documentation and create a custom network and only join containers that need to communicate to that custom network. The `--icc` parameter only applies to the default docker bridge, if custom networks are used then the approach of segmenting networks should be adopted instead. |
+|Ensure the logging level is set to 'info'.<br /><sub>(2.02)</sub> |Description: Setting an appropriate log level configures the Docker daemon to log events that you would want to review later. A base log level of `info` and above captures all logs except debug logs. Unless required, you shouldn't run the Docker daemon at the `debug` log level. |Run the Docker daemon as below: ```dockerd --log-level info``` |
+|Ensure Docker is allowed to make changes to iptables<br /><sub>(2.03)</sub> |Description: Docker never makes changes to your system `iptables` rules if you run the daemon with `--iptables=false`. If it's allowed to do so, the Docker server automatically makes the needed changes to iptables based on how you choose your networking options for the containers. It's recommended to let the Docker server make changes to `iptables` automatically, to avoid networking misconfiguration that might hamper the communication between containers and to the outside world. Additionally, it saves you the hassle of updating `iptables` every time you choose to run the containers or modify networking options. |Don't run the Docker daemon with the `--iptables=false` parameter. For example, don't start the Docker daemon as below: ```dockerd --iptables=false``` |
+|Ensure insecure registries aren't used<br /><sub>(2.04)</sub> |Description: You shouldn't use any insecure registries in the production environment. Insecure registries can be tampered with, leading to possible compromise of your production system. |Remove the `--insecure-registry` flag from the dockerd start command. |
+|The 'aufs' storage driver shouldn't be used by the docker daemon<br /><sub>(2.05)</sub> |Description: The 'aufs' storage driver is the oldest storage driver. It's based on a Linux kernel patch-set that is unlikely to be merged into the main Linux kernel. The aufs driver is also known to cause some serious kernel crashes. aufs just has legacy support from Docker. Most importantly, aufs isn't a supported driver in many Linux distributions that use the latest Linux kernels. |The 'aufs' storage driver should be replaced by a different storage driver; we recommend using 'overlay2'. |
+|Ensure TLS authentication for Docker daemon is configured<br /><sub>(2.06)</sub> |Description: By default, Docker daemon binds to a non-networked Unix socket and runs with `root` privileges. If you change the default docker daemon binding to a TCP port or any other Unix socket, anyone with access to that port or socket can have full access to Docker daemon and in turn to the host system. Hence, you shouldn't bind the Docker daemon to another IP/port or a Unix socket. If you must expose the Docker daemon via a network socket, configure TLS authentication for the daemon and Docker Swarm APIs (if using). This would restrict the connections to your Docker daemon over the network to a limited number of clients who could successfully authenticate over TLS. |Follow the steps mentioned in the Docker documentation or other references. |
+|Ensure the default ulimit is configured appropriately<br /><sub>(2.07)</sub> |Description: If the ulimits aren't set properly, the desired resource control might not be achieved and might even make the system unusable. |Run the Docker daemon and pass `--default-ulimit` as an argument with respective ulimits as appropriate in your environment. Alternatively, you can also set a specific resource limitation to each container separately by using the `--ulimit` argument with respective ulimits as appropriate in your environment. |
+|Enable user namespace support<br /><sub>(2.08)</sub> |Description: The Linux kernel user namespace support in the Docker daemon provides additional security for the Docker host system. It allows a container to have a unique range of user and group IDs that are outside the traditional user and group range utilized by the host system. For example, the root user has the expected administrative privileges inside the container but can effectively be mapped to an unprivileged UID on the host system. |Consult the Docker documentation for the various ways in which this can be configured depending upon your requirements. Your steps might also vary based on platform. For example, on Red Hat, sub-UID and sub-GID mapping creation doesn't work automatically; you might have to create your own mapping. However, the high-level steps are as below: **Step 1:** Ensure that the files `/etc/subuid` and `/etc/subgid` exist. ```touch /etc/subuid /etc/subgid``` **Step 2:** Start the docker daemon with the `--userns-remap` flag: ```dockerd --userns-remap=default``` |
+|Ensure base device size isn't changed until needed<br /><sub>(2.10)</sub> |Description: Increasing the base device size allows all future images and containers to use the new base device size. This can cause a denial of service by the file system becoming over-allocated or full. |Remove the `--storage-opt dm.basesize` flag from the dockerd start command until you need it. |
+|Ensure that authorization for Docker client commands is enabled<br /><sub>(2.11)</sub> |Description: Docker's out-of-the-box authorization model is all or nothing. Any user with permission to access the Docker daemon can run any Docker client command. The same is true for callers using Docker's remote API to contact the daemon. If you require greater access control, you can create authorization plugins and add them to your Docker daemon configuration. Using an authorization plugin, a Docker administrator can configure granular access policies for managing access to the Docker daemon. Third-party integrations of Docker may implement their own authorization models to require authorization with the Docker daemon outside of Docker's native authorization plugin (for example, Kubernetes, Cloud Foundry, OpenShift). |**Step 1**: Install/Create an authorization plugin. **Step 2**: Configure the authorization policy as desired. **Step 3**: Start the docker daemon as below: ```dockerd --authorization-plugin=<PLUGIN_ID>``` |
+|Ensure centralized and remote logging is configured<br /><sub>(2.12)</sub> |Description: Centralized and remote logging ensures that all important log records are safe despite catastrophic events. Docker supports various such logging drivers. Use the one that suits your environment the best. |**Step 1**: Set up the desired log driver by following its documentation. **Step 2**: Start the docker daemon with that logging driver. For example, ```dockerd --log-driver=syslog --log-opt syslog-address=tcp://192.xxx.xxx.xxx``` |
+|Ensure live restore is enabled<br /><sub>(2.14)</sub> |Description: Availability is one of the important elements of the security triad. Setting the `--live-restore` flag in the docker daemon ensures that container execution isn't interrupted when the docker daemon isn't available. This also means that it's now easier to update and patch the docker daemon without execution downtime. |Run the Docker daemon and pass `--live-restore` as an argument. For example, ```dockerd --live-restore``` |
+|Ensure Userland Proxy is Disabled<br /><sub>(2.15)</sub> |Description: Docker engine provides two mechanisms for forwarding ports from the host to containers, hairpin NAT, and a userland proxy. In most circumstances, the hairpin NAT mode is preferred as it improves performance and makes use of native Linux iptables functionality instead of an additional component. Where hairpin NAT is available, the userland proxy should be disabled on startup to reduce the attack surface of the installation. |Run the Docker daemon as below: ```dockerd --userland-proxy=false``` |
+|Ensure experimental features are avoided in production<br /><sub>(2.17)</sub> |Description: Experimental is now a runtime docker daemon flag instead of a separate build. Passing `--experimental` as a runtime flag to the docker daemon activates experimental features. Experimental is now considered a stable release, but with a couple of features that might not have been tested and don't have guaranteed API stability. |Don't pass `--experimental` as a runtime parameter to the docker daemon. |
+|Ensure containers are restricted from acquiring new privileges.<br /><sub>(2.18)</sub> |Description: A process can set the `no_new_priv` bit in the kernel. It persists across fork, clone, and execve. The `no_new_priv` bit ensures that the process and its child processes don't gain any additional privileges via suid or sgid bits. This way numerous dangerous operations become a lot less dangerous because there's no possibility of subverting privileged binaries. Setting this at the daemon level ensures that by default all new containers are restricted from acquiring new privileges. |Run the Docker daemon as below: ```dockerd --no-new-privileges``` |
+|Ensure that docker.service file ownership is set to root:root.<br /><sub>(3.01)</sub> |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.service``` |
+|Ensure that docker.service file permissions are set to 644 or more restrictive<br /><sub>(3.02)</sub> |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any user other than `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.service``` |
+|Ensure that docker.socket file ownership is set to root:root.<br /><sub>(3.03)</sub> |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker remote API. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.socket``` |
+|Ensure that docker.socket file permissions are set to `644` or more restrictive<br /><sub>(3.04)</sub> |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any user other than `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.socket``` |
+|Ensure that /etc/docker directory ownership is set to `root:root`.<br /><sub>(3.05)</sub> |Description: /etc/docker directory contains certificates and keys in addition to various sensitive files. Hence, it should be owned and group-owned by `root` to maintain the integrity of the directory. | ```chown root:root /etc/docker``` This would set the ownership and group-ownership for the directory to `root`. |
+|Ensure that /etc/docker directory permissions are set to `755` or more restrictive<br /><sub>(3.06)</sub> |Description: /etc/docker directory contains certificates and keys in addition to various sensitive files. Hence, it should only be writable by `root` to maintain the integrity of the directory. | ```chmod 755 /etc/docker``` This would set the permissions for the directory to `755`. |
+|Ensure that registry certificate file ownership is set to root:root<br /><sub>(3.07)</sub> |Description: /etc/docker/certs.d/ directory contains Docker registry certificates. These certificate files must be owned and group-owned by `root` to maintain the integrity of the certificates. | ```chown root:root /etc/docker/certs.d/<registry-name>/*``` This would set the ownership and group-ownership for the registry certificate files to `root`. |
+|Ensure that registry certificate file permissions are set to `444` or more restrictive<br /><sub>(3.08)</sub> |Description: /etc/docker/certs.d/ directory contains Docker registry certificates. These certificate files must have permissions of `444` to maintain the integrity of the certificates. | ```chmod 444 /etc/docker/certs.d/<registry-name>/*``` This would set the permissions for registry certificate files to `444`. |
+|Ensure that TLS CA certificate file ownership is set to root:root<br /><sub>(3.09)</sub> |Description: The TLS CA certificate file should be protected from any tampering. It's used to authenticate the Docker server based on the given CA certificate. Hence, it must be owned and group-owned by `root` to maintain the integrity of the CA certificate. |```chown root:root <path to TLS CA certificate file>``` This would set the ownership and group-ownership for the TLS CA certificate file to `root`. |
+|Ensure that TLS CA certificate file permissions are set to `444` or more restrictive<br /><sub>(3.10)</sub> |Description: The TLS CA certificate file should be protected from any tampering. It's used to authenticate the Docker server based on the given CA certificate. Hence, it must have permissions of `444` to maintain the integrity of the CA certificate. | ```chmod 444 <path to TLS CA certificate file>``` This would set the file permissions of the TLS CA certificate file to `444`. |
+|Ensure that Docker server certificate file ownership is set to root:root<br /><sub>(3.11)</sub> |Description: The Docker server certificate file should be protected from any tampering. It's used to authenticate the Docker server based on the given server certificate. Hence, it must be owned and group-owned by `root` to maintain the integrity of the certificate. | ```chown root:root <path to Docker server certificate file>``` This would set the ownership and group-ownership for the Docker server certificate file to `root`. |
+|Ensure that Docker server certificate file permissions are set to `444` or more restrictive<br /><sub>(3.12)</sub> |Description: The Docker server certificate file should be protected from any tampering. It's used to authenticate the Docker server based on the given server certificate. Hence, it must have permissions of `444` to maintain the integrity of the certificate. | ```chmod 444 <path to Docker server certificate file>``` This would set the file permissions of the Docker server certificate file to `444`. |
+|Ensure that Docker server certificate key file ownership is set to root:root<br /><sub>(3.13)</sub> |Description: The Docker server certificate key file should be protected from any tampering or unneeded reads. It holds the private key for the Docker server certificate. Hence, it must be owned and group-owned by `root` to maintain the integrity of the Docker server certificate. | ```chown root:root <path to Docker server certificate key file>``` This would set the ownership and group-ownership for the Docker server certificate key file to `root`. |
+|Ensure that Docker server certificate key file permissions are set to 400<br /><sub>(3.14)</sub> |Description: The Docker server certificate key file should be protected from any tampering or unneeded reads. It holds the private key for the Docker server certificate. Hence, it must have permissions of `400` to maintain the integrity of the Docker server certificate. | ```chmod 400 <path to Docker server certificate key file>``` This would set the Docker server certificate key file permissions to `400`. |
+|Ensure that Docker socket file ownership is set to root:docker<br /><sub>(3.15)</sub> |Description: Docker daemon runs as `root`. The default Unix socket hence must be owned by `root`. If any other user or process owns this socket, then it might be possible for that non-privileged user or process to interact with Docker daemon. Also, such a non-privileged user or process might interact with containers. This is neither secure nor desired behavior. Additionally, the Docker installer creates a Unix group called `docker`. You can add users to this group, and then those users would be able to read and write to default Docker Unix socket. The membership to the `docker` group is tightly controlled by the system administrator. If any other group owns this socket, then it might be possible for members of that group to interact with Docker daemon. Also, such a group might not be as tightly controlled as the `docker` group. This is neither secure nor desired behavior. Hence, the default Docker Unix socket file must be owned by `root` and group-owned by `docker` to maintain the integrity of the socket file. | ```chown root:docker /var/run/docker.sock``` This would set the ownership to `root` and group-ownership to `docker` for default Docker socket file. |
+|Ensure that Docker socket file permissions are set to `660` or more restrictive<br /><sub>(3.16)</sub> |Description: Only `root` and members of the `docker` group should be allowed to read and write to the default Docker Unix socket. Hence, the Docker socket file must have permissions of `660` or more restrictive. | ```chmod 660 /var/run/docker.sock``` This would set the file permissions of the Docker socket file to `660`. |
+|Ensure that daemon.json file ownership is set to root:root<br /><sub>(3.17)</sub> |Description: `daemon.json` file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. | ```chown root:root /etc/docker/daemon.json``` This would set the ownership and group-ownership for the file to `root`. |
+|Ensure that daemon.json file permissions are set to 644 or more restrictive<br /><sub>(3.18)</sub> |Description: `daemon.json` file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be writable only by `root` to maintain the integrity of the file. | ```chmod 644 /etc/docker/daemon.json``` This would set the file permissions for this file to `644`. |
+|Ensure that /etc/default/docker file ownership is set to root:root<br /><sub>(3.19)</sub> |Description: `/etc/default/docker` file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. | ```chown root:root /etc/default/docker``` This would set the ownership and group-ownership for the file to `root`. |
+|Ensure that /etc/default/docker file permissions are set to 644 or more restrictive<br /><sub>(3.20)</sub> |Description: /etc/default/docker file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be writable only by `root` to maintain the integrity of the file. | ```chmod 644 /etc/default/docker``` This would set the file permissions for this file to `644`. |
+|Ensure a user for the container has been created<br /><sub>(4.01)</sub> |Description: It's a good practice to run the container as a non-root user, if possible. Though user namespace mapping is now available, if a user is already defined in the container image, the container is run as that user by default and specific user namespace remapping isn't required. |Ensure that the Dockerfile for the container image contains `USER {username or ID}`, where the username or ID refers to a user that can be found in the container base image. If there's no specific user created in the container base image, add a `useradd` command to add the specific user before the `USER` instruction. |
+|Ensure HEALTHCHECK instructions have been added to the container image<br /><sub>(4.06)</sub> |Description: Availability is one of the important elements of the security triad. Adding the `HEALTHCHECK` instruction to your container image ensures that the docker engine periodically checks the running container instances against that instruction to ensure that the instances are still working. Based on the reported health status, the docker engine could then exit non-working containers and instantiate new ones. |Follow the Docker documentation and rebuild your container image with the `HEALTHCHECK` instruction. |
+|Ensure either SELinux or AppArmor is enabled as appropriate<br /><sub>(5.01-2)</sub> |Description: AppArmor protects the Linux OS and applications from various threats by enforcing a security policy, also known as an AppArmor profile. You can create your own AppArmor profile for containers or use Docker's default AppArmor profile. This would enforce security policies on the containers as defined in the profile. SELinux provides a Mandatory Access Control (MAC) system that greatly augments the default Discretionary Access Control (DAC) model. You can thus add an extra layer of safety by enabling SELinux on your Linux host, if applicable. |After enabling the relevant Mandatory Access Control plugin for your distro, run containers as, for example, ```docker run --interactive --tty --security-opt="apparmor:PROFILENAME" centos /bin/bash``` for AppArmor, or ```docker run --interactive --tty --security-opt label=level:TopSecret centos /bin/bash``` for SELinux. |
+|Ensure Linux Kernel Capabilities are restricted within containers<br /><sub>(5.03)</sub> |Description: Docker supports the addition and removal of capabilities, allowing the use of a non-default profile. This may make Docker more secure through capability removal, or less secure through the addition of capabilities. It's thus recommended to remove all capabilities except those explicitly required for your container process. For example, capabilities such as the below are usually not needed for container processes: ```NET_ADMIN SYS_ADMIN SYS_MODULE``` |Execute the below command to add needed capabilities: ```$> docker run --cap-add={"Capability 1","Capability 2"}``` For example, ```docker run --interactive --tty --cap-add={"NET_ADMIN","SYS_ADMIN"} centos:latest /bin/bash``` Execute the below command to drop unneeded capabilities: ```$> docker run --cap-drop={"Capability 1","Capability 2"}``` For example, ```docker run --interactive --tty --cap-drop={"SETUID","SETGID"} centos:latest /bin/bash``` Alternatively, you may choose to drop all capabilities and add only the needed ones: ```$> docker run --cap-drop=all --cap-add={"Capability 1","Capability 2"}``` For example, ```docker run --interactive --tty --cap-drop=all --cap-add={"NET_ADMIN","SYS_ADMIN"} centos:latest /bin/bash``` |
+|Ensure privileged containers aren't used<br /><sub>(5.04)</sub> |Description: The `--privileged` flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker. |Don't run container with the `--privileged` flag. For example, don't start a container as below: ```docker run --interactive --tty --privileged centos /bin/bash``` |
+|Ensure sensitive host system directories aren't mounted on containers<br /><sub>(5.05)</sub> |Description: If sensitive directories are mounted in read-write mode, it would be possible to make changes to files within those sensitive directories. The changes might bring down security implications or unwarranted changes that could put the Docker host in compromised state. |Don't mount host sensitive directories on containers especially in read-write mode. |
+|Ensure the host's network namespace isn't shared<br /><sub>(5.09)</sub> |Description: This is potentially dangerous. It allows the container process to open low-numbered ports like any other `root` process. It also allows the container to access network services like D-bus on the Docker host. Thus, a container process can potentially do unexpected things such as shutting down the Docker host. You shouldn't use this option. |Don't pass `--net=host` option when starting the container. |
+|Ensure memory usage for container is limited<br /><sub>(5.10)</sub> |Description: By default, a container can use all of the memory on the host. You can use memory limit mechanisms to prevent a denial of service arising from one container consuming all of the host's resources such that other containers on the same host cannot perform their intended functions. Having no limit on memory can lead to issues where one container can easily make the whole system unstable and as a result unusable. |Run the container with only as much memory as required. Always run the container using the `--memory` argument. For example, you could run a container as below: ```docker run --interactive --tty --memory 256m centos /bin/bash``` In the above example, the container is started with a memory limit of 256 MB. Note: The output of the below command returns values in scientific notation if memory limits are in place. ```docker inspect --format='{{.Config.Memory}}' 7c5a2d4c7fe0``` For example, if the memory limit is set to `256 MB` for the above container instance, the output of the above command would be `2.68435456e+08` and NOT 256m. You should convert this value using a scientific calculator or programmatic methods. |
+|Ensure the container's root filesystem is mounted as read only<br /><sub>(5.12)</sub> |Description: Enabling this option forces containers at runtime to explicitly define their data writing strategy to persist or not persist their data. This also reduces security attack vectors since the container instance's filesystem cannot be tampered with or written to unless it has explicit read-write permissions on its filesystem folders and directories. |Add a `--read-only` flag at a container's runtime to enforce the container's root filesystem being mounted as read only. ```docker run --read-only``` Enabling the `--read-only` option at a container's runtime should be used by administrators to force a container's executable processes to only write container data to explicit storage locations during the container's runtime. Examples of explicit storage locations during a container's runtime include, but aren't limited to: 1. Use the `--tmpfs` option to mount a temporary file system for non-persistent data writes. ```docker run --interactive --tty --read-only --tmpfs "/run" --tmpfs "/tmp" centos /bin/bash``` 2. Enabling Docker `rw` mounts at a container's runtime to persist container data directly on the Docker host filesystem. ```docker run --interactive --tty --read-only -v /opt/app/data:/run/app/data:rw centos /bin/bash``` 3. Utilizing Docker shared-storage volume plugins for Docker data volumes to persist container data. ```docker volume create -d convoy --opt o=size=20GB my-named-volume``` ```docker run --interactive --tty --read-only -v my-named-volume:/run/app/data centos /bin/bash``` 4. Transmitting container data outside of the Docker host during the container's runtime so that the container data persists. Examples include hosted databases, network file shares, and APIs. |
+|Ensure incoming container traffic is bound to a specific host interface<br /><sub>(5.13)</sub> |Description: If you have multiple network interfaces on your host machine, the container can accept connections on the exposed ports on any network interface. This might not be desired and might not be secure. Many times a particular interface is exposed externally and services such as intrusion detection, intrusion prevention, firewall, load balancing, etc. are run on those interfaces to screen incoming public traffic. Hence, you shouldn't accept incoming connections on any interface. You should only allow incoming connections from a particular external interface. |Bind the container port to a specific host interface on the desired host port. For example, ```docker run --detach --publish 10.2.3.4:49153:80 nginx``` In the example above, the container port `80` is bound to host port `49153` and would accept incoming connections only from the `10.2.3.4` external interface. |
+|Ensure 'on-failure' container restart policy is set to '5' or lower<br /><sub>(5.14)</sub> |Description: If you indefinitely keep trying to start the container, it could possibly lead to a denial of service on the host. It could be an easy way to do a distributed denial of service attack, especially if you have many containers on the same host. Additionally, ignoring the exit status of the container and `always` attempting to restart the container leads to non-investigation of the root cause behind containers getting terminated. If a container gets terminated, you should investigate the reason behind it instead of just attempting to restart it indefinitely. Thus, it's recommended to use the `on-failure` restart policy and limit it to a maximum of `5` restart attempts. |If you want a container to be restarted on its own, then, for example, you could start the container as below: ```docker run --detach --restart=on-failure:5 nginx``` |
+|Ensure the host's process namespace isn't shared<br /><sub>(5.15)</sub> |Description: The PID namespace provides separation of processes. The PID namespace removes the view of the system processes, and allows process IDs to be reused, including PID `1`. If the host's PID namespace is shared with the container, it would basically allow processes within the container to see all of the processes on the host system. This breaks the benefit of process-level isolation between the host and the containers. Someone having access to the container can eventually know all the processes running on the host system and can even kill the host system processes from within the container. This can be catastrophic. Hence, don't share the host's process namespace with the containers. |Don't start a container with the `--pid=host` argument. For example, don't start a container as below: ```docker run --interactive --tty --pid=host centos /bin/bash``` |
+|Ensure the host's IPC namespace isn't shared<br /><sub>(5.16)</sub> |Description: The IPC namespace provides separation of IPC between the host and containers. If the host's IPC namespace is shared with the container, it would basically allow processes within the container to see all of the IPC on the host system. This breaks the benefit of IPC-level isolation between the host and the containers. Someone having access to the container can eventually manipulate the host IPC. This can be catastrophic. Hence, don't share the host's IPC namespace with the containers. |Don't start a container with the `--ipc=host` argument. For example, don't start a container as below: ```docker run --interactive --tty --ipc=host centos /bin/bash``` |
+|Ensure host devices aren't directly exposed to containers<br /><sub>(5.17)</sub> |Description: The `--device` option exposes host devices to the containers and consequently, the containers can directly access such host devices. You wouldn't require the container to run in `privileged` mode to access and manipulate the host devices. By default, the container can read, write, and mknod these devices. Additionally, it's possible for containers to remove block devices from the host. Hence, don't expose host devices to containers directly. If you must expose a host device to a container, use the sharing permissions appropriately: `r` - read only, `w` - writable, `m` - mknod allowed. |Don't directly expose the host devices to containers. If you must expose the host devices to containers, use the correct set of permissions. For example, don't start a container as below: ```docker run --interactive --tty --device=/dev/tty0:/dev/tty0:rwm --device=/dev/temp_sda:/dev/temp_sda:rwm centos bash``` Instead, share the host device with correct permissions: ```docker run --interactive --tty --device=/dev/tty0:/dev/tty0:rw --device=/dev/temp_sda:/dev/temp_sda:r centos bash``` |
+|Ensure mount propagation mode isn't set to shared<br /><sub>(5.19)</sub> |Description: A shared mount is replicated at all mounts and changes made at any mount point are propagated to all mounts. Mounting a volume in shared mode doesn't restrict any other container from mounting and making changes to that volume. This might be catastrophic if the mounted volume is sensitive to changes. Don't set mount propagation mode to shared until needed. |Don't mount volumes in shared mode propagation. For example, don't start a container as below: ```docker run --volume=/hostPath:/containerPath:shared``` |
+|Ensure the host's UTS namespace isn't shared<br /><sub>(5.20)</sub> |Description: Sharing the UTS namespace with the host provides full permission to the container to change the hostname of the host. This is insecure and shouldn't be allowed. |Don't start a container with `--uts=host` argument. For example, don't start a container as below: ```docker run --rm --interactive --tty --uts=host rhel7.2``` |
+|Ensure cgroup usage is confirmed<br /><sub>(5.24)</sub> |Description: System administrators typically define cgroups under which containers are supposed to run. Even if cgroups aren't explicitly defined by the system administrators, containers run under the `docker` cgroup by default. At runtime, it's possible to attach to a cgroup other than the one that was expected to be used. This usage should be monitored and confirmed. By attaching to a cgroup other than the expected one, excess permissions and resources might be granted to the container, which can prove to be unsafe. |Don't use the `--cgroup-parent` option in the `docker run` command unless needed. |
+|Ensure the container is restricted from acquiring additional privileges<br /><sub>(5.25)</sub> |Description: A process can set the `no_new_priv` bit in the kernel. It persists across fork, clone, and execve. The `no_new_priv` bit ensures that the process and its child processes don't gain any additional privileges via suid or sgid bits. This way numerous dangerous operations become a lot less dangerous because there's no possibility of subverting privileged binaries. |For example, you should start your container as below: ```docker run --rm -it --security-opt=no-new-privileges ubuntu bash``` |
+|Ensure container health is checked at runtime<br /><sub>(5.26)</sub> |Description: Availability is one of the important elements of the security triad. If the container image you're using doesn't have a pre-defined `HEALTHCHECK` instruction, use the `--health-cmd` parameter to check container health at runtime. Based on the reported health status, you could take necessary actions. |Run the container using `--health-cmd` and the other parameters. For example, ```docker run -d --health-cmd='stat /etc/passwd || exit 1' nginx``` |
+|Ensure PIDs cgroup limit is used<br /><sub>(5.28)</sub> |Description: Attackers could launch a fork bomb with a single command inside the container. This fork bomb can crash the entire system and would require a restart of the host to make the system functional again. The PIDs cgroup `--pids-limit` parameter prevents this kind of attack by restricting the number of forks that can happen inside a container at a given time. |Use the `--pids-limit` flag with an appropriate value while launching the container. For example, ```docker run -it --pids-limit 100 <image-name>``` In the above example, the number of processes allowed to run at any given time is set to 100. After a limit of 100 concurrently running processes is reached, docker would restrict any new process creation. |
+|Ensure Docker's default bridge docker0 isn't used<br /><sub>(5.29)</sub> |Description: Docker connects virtual interfaces created in bridge mode to a common bridge called `docker0`. This default networking model is vulnerable to ARP spoofing and MAC flooding attacks since there's no filtering applied. |Follow the Docker documentation and set up a user-defined network. Run all the containers in the defined network. |
+|Ensure the host's user namespace isn't shared<br /><sub>(5.30)</sub> |Description: User namespaces ensure that a root process inside the container is mapped to a non-root process outside the container. Sharing the user namespace of the host with the container thus doesn't isolate users on the host from users in the containers. |Don't share user namespaces between host and containers. For example, don't run a container as below: ```docker run --rm -it --userns=host ubuntu bash``` |
+|Ensure the Docker socket isn't mounted inside any containers<br /><sub>(5.31)</sub> |Description: If the docker socket is mounted inside a container, it would allow processes running within the container to execute docker commands, which effectively allows for full control of the host. |Ensure that no containers mount `docker.sock` as a volume. |
+|Ensure swarm services are bound to a specific host interface<br /><sub>(7.03)</sub> |Description: When a swarm is initialized, the default value for the `--listen-addr` flag is `0.0.0.0:2377`, which means that the swarm services will listen on all interfaces on the host. If a host has multiple network interfaces, this may be undesirable as it may expose the docker swarm services to networks that aren't involved in the operation of the swarm. By passing a specific IP address to the `--listen-addr` flag, a specific network interface can be specified, limiting this exposure. |Remediation of this requires re-initialization of the swarm, specifying a specific interface for the `--listen-addr` parameter. |
+|Ensure data exchanged between containers is encrypted on different nodes on the overlay network<br /><sub>(7.04)</sub> |Description: By default, data exchanged between containers on different nodes on the overlay network isn't encrypted. This could potentially expose traffic between the container nodes. |Create the overlay network with the `--opt encrypted` flag. |
+|Ensure swarm manager is run in auto-lock mode<br /><sub>(7.06)</sub> |Description: When Docker restarts, both the TLS key used to encrypt communication among swarm nodes, and the key used to encrypt and decrypt Raft logs on disk, are loaded into each manager node's memory. You should protect the mutual TLS encryption key and the key used to encrypt and decrypt Raft logs at rest. This protection could be enabled by initializing swarm with `--autolock` flag. With `--autolock` enabled, when Docker restarts, you must unlock the swarm first, using a key encryption key generated by Docker when the swarm was initialized. |If you're initializing swarm, use the below command. ```docker swarm init --autolock``` If you want to set `--autolock` on an existing swarm manager node, use the below command.```docker swarm update --autolock``` |
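+
+As a convenience, the audit items above (1.09 through 1.13) reduce to a handful of `-w` watch rules. The following is a minimal sketch rather than part of the baseline itself: the `docker.socket` unit path is an illustrative assumption, so resolve yours first with `systemctl show -p FragmentPath docker.socket`.
+
+```bash
+# Minimal sketch: consolidate the Docker file and directory watch rules (items 1.09-1.13).
+# The docker.socket path below is an assumed example; verify it on your host first.
+cat >> /etc/audit/audit.rules <<'EOF'
+-w /usr/lib/systemd/system/docker.socket -k docker
+-w /etc/default/docker -k docker
+-w /etc/docker/daemon.json -k docker
+-w /usr/bin/docker-containerd -k docker
+-w /usr/bin/docker-runc -k docker
+EOF
+service auditd restart
+```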
+
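+Several of the daemon-level items can also be expressed as keys in `/etc/docker/daemon.json` instead of `dockerd` flags. The following is a minimal sketch, assuming the daemon reads the default `/etc/docker/daemon.json` location; the keys mirror the `--icc`, `--log-level`, `--live-restore`, `--userland-proxy`, and `--no-new-privileges` flags referenced above.
+
+```bash
+# Minimal sketch: items 2.01, 2.02, 2.14, 2.15, and 2.18 as daemon.json settings.
+cat > /etc/docker/daemon.json <<'EOF'
+{
+  "icc": false,
+  "log-level": "info",
+  "live-restore": true,
+  "userland-proxy": false,
+  "no-new-privileges": true
+}
+EOF
+systemctl restart docker   # assumes a systemd-managed daemon
+```
+
+For the swarm items (7.03, 7.04, and 7.06), a sketch using the flags named above; `10.2.3.4` and `my-overlay` are illustrative values only.
+
+```bash
+# Bind swarm management traffic to one interface and enable auto-lock (items 7.03, 7.06).
+docker swarm init --listen-addr 10.2.3.4:2377 --autolock
+# Create an encrypted overlay network for cross-node container traffic (item 7.04).
+docker network create --driver overlay --opt encrypted my-overlay
+```
+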
+> [!NOTE]
+> Availability of specific Azure Policy guest configuration settings may vary in Azure Government
+> and other national clouds.
+
+## Next steps
+
+Additional articles about Azure Policy and guest configuration:
+
+- [Understand the guest configuration feature of Azure Policy](../concepts/guest-configuration.md).
+- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
+- Review other examples at [Azure Policy samples](./index.md).
+- Review [Understanding policy effects](../concepts/effects.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
Title: Quickstart - Provision an X.509 certificate simulated device to Microsoft
description: Learn how to provision a simulated device that authenticates with an X.509 certificate in the Azure IoT Hub Device Provisioning Service Previously updated : 09/07/2021 Last updated : 05/31/2022
This quickstart demonstrates a solution for a Windows-based workstation. However
* Complete the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md). + The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
+* Install [Visual Studio](https://visualstudio.microsoft.com/vs/) 2022 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015, Visual Studio 2017, and Visual Studio 2019 are also supported. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
-* Install [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
+* Install the latest [CMake build system](https://cmake.org/download/). Make sure you check the option that adds the CMake executable to your path.
+
+ >[!IMPORTANT]
+ >Confirm that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. Also, be aware that older versions of the CMake build system fail to generate the solution file used in this article. Make sure to use the latest version of CMake.
::: zone-end ::: zone pivot="programming-language-csharp"
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/doc/devbox_setup.md) in the SDK documentation.
+ * Install [.NET SDK 6.0](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
- ```bash
+ ```cmd
dotnet --info ```
The following prerequisites are for a Windows development environment. For Linux
::: zone pivot="programming-language-nodejs"
-* Install [Node.js v4.0 or above](https://nodejs.org) on your machine.
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/blob/main/doc/node-devbox-setup.md) in the SDK documentation.
-* Install [OpenSSL](https://www.openssl.org/) on your machine and is added to the environment variables accessible to the command window. This library can either be built and installed from source or downloaded and installed from a [third party](https://wiki.openssl.org/index.php/Binaries) such as [this](https://sourceforge.net/projects/openssl/).
+* Install [Node.js v4.0 or above](https://nodejs.org) on your machine.
::: zone-end ::: zone pivot="programming-language-python"
-* [Python 3.6 or later](https://www.python.org/downloads/) on your machine.
+The following prerequisites are for a Windows development environment.
-* Install [OpenSSL](https://www.openssl.org/) on your machine and is added to the environment variables accessible to the command window. This library can either be built and installed from source or downloaded and installed from a [third party](https://wiki.openssl.org/index.php/Binaries) such as [this](https://sourceforge.net/projects/openssl/).
+* [Python 3.6 or later](https://www.python.org/downloads/) on your machine.
::: zone-end ::: zone pivot="programming-language-java"
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-java) in the SDK documentation.
+ * Install the [Java SE Development Kit 8](/azure/developer/java/fundamentals/java-support-on-azure) or later on your machine. * Download and install [Maven](https://maven.apache.org/install.html).
The following prerequisites are for a Windows development environment. For Linux
* Install the latest version of [Git](https://git-scm.com/download/). Make sure that Git is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes *Git Bash*, the command-line app that you can use to interact with your local Git repository.
+* Make sure [OpenSSL](https://www.openssl.org/) is installed on your machine. On Windows, your installation of Git includes an installation of OpenSSL. You can access OpenSSL from the Git Bash prompt. To verify that OpenSSL is installed, open a Git Bash prompt and enter `openssl version`. A short verification sketch appears at the end of these prerequisites.
+
+ >[!NOTE]
+ > Unless you're familiar with OpenSSL and already have it installed on your Windows machine, we recommend using OpenSSL from the Git Bash prompt. Alternatively, you can choose to download the source code and build OpenSSL. To learn more, see the [OpenSSL Downloads](https://www.openssl.org/source/) page. Or, you can download OpenSSL pre-built from a third party. To learn more, see the [OpenSSL wiki](https://wiki.openssl.org/index.php/Binaries). Microsoft makes no guarantees about the validity of packages downloaded from third parties. If you do choose to build or download OpenSSL, make sure that the OpenSSL binary is accessible in your path and that the `OPENSSL_CNF` environment variable is set to the path of your *openssl.cnf* file.
+
+* Open both a Windows command prompt and a Git Bash prompt.
+
+ The steps in this quickstart assume that you're using a Windows machine and the OpenSSL installation that is installed as part of Git. You'll use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell.
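+
+   Before moving on, you can confirm the OpenSSL setup from the Git Bash prompt. A quick sketch; the version string varies by installation, and the `OPENSSL_CNF` check only applies if you built or downloaded OpenSSL yourself.
+
+   ```bash
+   # Confirm that OpenSSL is callable from Git Bash.
+   openssl version
+   # Only for self-built or third-party installs: confirm the config path is set.
+   echo "${OPENSSL_CNF:-OPENSSL_CNF not set (expected for the Git-bundled OpenSSL)}"
+   ```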
+ ## Prepare your development environment ::: zone pivot="programming-language-ansi-c" In this section, you'll prepare a development environment that's used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The sample code attempts to provision the device, during the device's boot sequence.
-1. Download the latest [CMake build system](https://cmake.org/download/).
-
- >[!IMPORTANT]
- >Confirm that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. Also, be aware that older versions of the CMake build system fail to generate the solution file used in this article. Make sure to use the latest version of CMake.
-
-2. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
+1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
-3. Select the **Tags** tab at the top of the page.
+2. Select the **Tags** tab at the top of the page.
-4. Copy the tag name for the latest release of the Azure IoT C SDK.
+3. Copy the tag name for the latest release of the Azure IoT C SDK.
-5. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. (replace `<release-tag>` with the tag you copied in the previous step).
+4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. (replace `<release-tag>` with the tag you copied in the previous step).
- ```cmd/sh
+ ```cmd
git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git cd azure-iot-sdk-c git submodule update --init
In this section, you'll prepare a development environment that's used to build t
This operation could take several minutes to complete.
-6. When the operation is complete, run the following commands from the `azure-iot-sdk-c` directory:
+5. When the operation is complete, run the following commands from the `azure-iot-sdk-c` directory:
- ```cmd/sh
+ ```cmd
mkdir cmake cd cmake ```
-7. The code sample uses an X.509 certificate to provide attestation via X.509 authentication. Run the following command to build a version of the SDK specific to your development platform that includes the device provisioning client. A Visual Studio solution for the simulated device is generated in the `cmake` directory.
+6. The code sample uses an X.509 certificate to provide attestation via X.509 authentication. Run the following command to build a version of the SDK specific to your development platform that includes the device provisioning client. A Visual Studio solution for the simulated device is generated in the `cmake` directory.
+
+ When specifying the path used with `-Dhsm_custom_lib` in the command below, make sure to use the absolute path to the library in the `cmake` directory you previously created. The path shown below assumes that you cloned the C SDK in the root directory of the C drive. If you used another directory, adjust the path accordingly.
```cmd
- cmake -Duse_prov_client:BOOL=ON ..
+ cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
``` >[!TIP] >If `cmake` does not find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
-8. When the build succeeds, the last few output lines look similar to the following output:
-
- ```cmd/sh
- $ cmake -Duse_prov_client:BOOL=ON ..
- -- Building for: Visual Studio 16 2019
- -- The C compiler identification is MSVC 19.23.28107.0
- -- The CXX compiler identification is MSVC 19.23.28107.0
+7. When the build succeeds, the last few output lines look similar to the following output:
+ ```cmd
+ cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
+ -- Building for: Visual Studio 17 2022
+ -- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.22000.
+ -- The C compiler identification is MSVC 19.32.31329.0
+ -- The CXX compiler identification is MSVC 19.32.31329.0
+
... -- Configuring done -- Generating done
- -- Build files have been written to: C:/code/azure-iot-sdk-c/cmake
+ -- Build files have been written to: C:/azure-iot-sdk-c/cmake
``` ::: zone-end ::: zone pivot="programming-language-csharp"
-1. Open a Git CMD or Git Bash command line environment.
-
-2. Clone the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
```cmd git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git
In this section, you'll prepare a development environment that's used to build t
::: zone pivot="programming-language-nodejs"
-1. Open a Git CMD or Git Bash command line environment.
-
-2. Clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-node.git
In this section, you'll prepare a development environment that's used to build t
::: zone pivot="programming-language-python"
-1. Open a Git CMD or Git Bash command line environment.
-
-2. Clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
In this section, you'll prepare a development environment that's used to build t
::: zone pivot="programming-language-java"
-1. Open a Git CMD or Git Bash command line environment.
-
-2. Clone the [Azure IoT Samples for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT Samples for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
```cmd git clone https://github.com/Azure/azure-iot-sdk-java.git --recursive ```
-3. Go to the root `azure-iot-sdk-`java` directory and build the project to download all needed packages.
+2. Go to the root `azure-iot-sdk-java` directory and build the project to download all needed packages.
- ```cmd/sh
+ ```cmd
cd azure-iot-sdk-java mvn install -DskipTests=true ```
-4. Go to the certificate generator project and build the project.
-
- ```cmd/sh
- cd azure-iot-sdk-java/provisioning/provisioning-tools/provisioning-x509-cert-generator
- mvn clean install
- ```
- ::: zone-end ## Create a self-signed X.509 device certificate
-In this section, you'll use sample code from the Azure IoT SDK to create a self-signed X.509 certificate. This certificate must be uploaded to your provisioning service, and verified by the service.
+In this section, you'll use OpenSSL to create a self-signed X.509 certificate and a private key. This certificate will be uploaded to your provisioning service instance and verified by the service.
> [!CAUTION]
-> Use certificates created with the SDK tooling for development testing only.
+> Use certificates created with OpenSSL in this quickstart for development testing only.
> Do not use these certificates in production.
-> The SDK generated certificates contain hard-coded passwords, such as *1234*, and expire after 30 days.
-> To learn about obtaining certificates suitable for production use, see [How to get an X.509 CA certificate](../iot-hub/iot-hub-x509ca-overview.md#how-to-get-an-x509-ca-certificate) in the Azure IoT Hub documentation.
+> These certificates expire after 30 days and may contain hard-coded passwords, such as *1234*.
+> To learn about obtaining certificates suitable for use in production, see [How to get an X.509 CA certificate](../iot-hub/iot-hub-x509ca-overview.md#how-to-get-an-x509-ca-certificate) in the Azure IoT Hub documentation.
>
-To create the X.509 certificate:
-
+Perform the steps in this section in your Git Bash prompt.
-### Clone the Azure IoT C SDK
+1. In your Git Bash prompt, navigate to a directory where you'd like to create your certificates.
-The [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) contains test tooling that can help you create an X.509 certificate chain, upload a root or intermediate certificate from that chain, and do proof-of-possession with the service to verify the certificate.
+2. Run the following command:
-If you've already cloned the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository, skip to the [next section](#create-a-test-certificate).
+ # [Windows](#tab/windows)
-1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
+ ```bash
+ winpty openssl req -outform PEM -x509 -sha256 -newkey rsa:4096 -keyout device-key.pem -out device-cert.pem -days 30 -extensions usr_cert -addext extendedKeyUsage=clientAuth -subj "//CN=my-x509-device"
+ ```
-2. Copy the tag name for the latest release of the Azure IoT C SDK.
+ > [!IMPORTANT]
+ > The extra forward slash given for the subject name (`//CN=my-x509-device`) is only required to escape the string with Git on Windows platforms.
-3. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. (replace `<release-tag>` with the tag you copied in the previous step).
+ # [Linux](#tab/linux)
- ```cmd/sh
- git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
- cd azure-iot-sdk-c
- git submodule update --init
+ ```bash
+ openssl req -outform PEM -x509 -sha256 -newkey rsa:4096 -keyout device-key.pem -out device-cert.pem -days 30 -extensions usr_cert -addext extendedKeyUsage=clientAuth -subj "/CN=my-x509-device"
```
- This operation may take several minutes to complete.
-
-4. The test tooling should now be located in the *azure-iot-sdk-c/tools/CACertificates* of the repository that you cloned.
+
-### Create a test certificate
+3. When asked to **Enter PEM pass phrase:**, use the pass phrase `1234`.
-Follow the steps in [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md).
+4. When asked **Verifying - Enter PEM pass phrase:**, use the pass phrase `1234` again.
-In addition to the tooling in the C SDK, the [Group certificate verification sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/provisioning/Samples/service/GroupCertificateVerificationSample) in the *Microsoft Azure IoT SDK for .NET* shows how to do proof-of-possession in C# with an existing X.509 intermediate or root CA certificate.
+ A public key certificate file (*device-cert.pem*) and private key file (*device-key.pem*) should now be generated in the directory where you ran the `openssl` command.
+   The certificate file has its subject common name (CN) set to `my-x509-device`. For X.509-based enrollments, the [Registration ID](./concepts-service.md#registration-id) is set to the common name. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format. A quick way to check this format is sketched after these steps.
+5. The certificate file is Base64 encoded. To view the subject common name (CN) and other properties of the certificate file, enter the following command:
-1. In a PowerShell prompt, change directories to the project directory for the X.509 device provisioning sample.
+ # [Windows](#tab/windows)
- ```powershell
- cd .\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample
+ ```bash
+ winpty openssl x509 -in device-cert.pem -text -noout
```
-2. The sample code is set up to use X.509 certificates that are stored within a password-protected PKCS12 formatted file (`certificate.pfx`). Additionally, you'll need a public key certificate file (`certificate.cer`) to create an individual enrollment later in this quickstart. To generate the self-signed certificate and its associated `.cer` and `.pfx` files, run the following command:
+ # [Linux](#tab/linux)
- ```powershell
- PS D:\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample> .\GenerateTestCertificate.ps1 iothubx509device1
+ ```bash
+ openssl x509 -in device-cert.pem -text -noout
```
- The certificate generated by this command has a subject common name (CN) of _iothubx509device1_. For X.509-based enrollments, the [Registration ID](./concepts-service.md#registration-id) is set to the common name. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format.
-
-3. The script prompts you for a PFX password. Remember this password, as you will use it later when you run the sample. Optionally, you can run `certutil` to dump the certificate and verify the subject name.
+
- ```powershell
- PS D:\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample> certutil .\certificate.pfx
- Enter PFX password:
- ================ Certificate 0 ================
- ================ Begin Nesting Level 1 ================
- Element 0:
- Serial Number: 7b4a0e2af6f40eae4d91b3b7ff05a4ce
- Issuer: CN=iothubx509device1, O=TEST, C=US
- NotBefore: 2/1/2021 6:18 PM
- NotAfter: 2/1/2022 6:28 PM
- Subject: CN=iothubx509device1, O=TEST, C=US
- Signature matches Public Key
- Root Certificate: Subject matches Issuer
- Cert Hash(sha1): e3eb7b7cc1e2b601486bf8a733887a54cdab8ed6
- - End Nesting Level 1 -
- Provider = Microsoft Strong Cryptographic Provider
- Signature test passed
- CertUtil: -dump command completed successfully.
+ ```output
+ Certificate:
+ Data:
+ Version: 3 (0x2)
+ Serial Number:
+ 77:3e:1d:e4:7e:c8:40:14:08:c6:09:75:50:9c:1a:35:6e:19:52:e2
+ Signature Algorithm: sha256WithRSAEncryption
+ Issuer: CN = my-x509-device
+ Validity
+ Not Before: May 5 21:41:42 2022 GMT
+ Not After : Jun 4 21:41:42 2022 GMT
+ Subject: CN = my-x509-device
+ Subject Public Key Info:
+ Public Key Algorithm: rsaEncryption
+ RSA Public-Key: (4096 bit)
+ Modulus:
+ 00:d2:94:37:d6:1b:f7:43:b4:21:c6:08:1a:d6:d7:
+ e6:40:44:4e:4d:24:41:6c:3e:8c:b2:2c:b0:23:29:
+ ...
+ 23:6e:58:76:45:18:03:dc:2e:9d:3f:ac:a3:5c:1f:
+ 9f:66:b0:05:d5:1c:fe:69:de:a9:09:13:28:c6:85:
+ 0e:cd:53
+ Exponent: 65537 (0x10001)
+ X509v3 extensions:
+ X509v3 Basic Constraints:
+ CA:FALSE
+ Netscape Comment:
+ OpenSSL Generated Certificate
+ X509v3 Subject Key Identifier:
+ 63:C0:B5:93:BF:29:F8:57:F8:F9:26:44:70:6F:9B:A4:C7:E3:75:18
+ X509v3 Authority Key Identifier:
+ keyid:63:C0:B5:93:BF:29:F8:57:F8:F9:26:44:70:6F:9B:A4:C7:E3:75:18
+
+ X509v3 Extended Key Usage:
+ TLS Web Client Authentication
+ Signature Algorithm: sha256WithRSAEncryption
+ 82:8a:98:f8:47:00:85:be:21:15:64:b9:22:b0:13:cc:9e:9a:
+ ed:f5:93:b9:4b:57:0f:79:85:9d:89:47:69:95:65:5e:b3:b1:
+ ...
+ cc:b2:20:9a:b7:f2:5e:6b:81:a1:04:93:e9:2b:92:62:e0:1c:
+ ac:d2:49:b9:36:d2:b0:21
```
+6. The sample code requires a private key that isn't encrypted. Run the following command to create an unencrypted private key:
-1. Open a command prompt, and go to the certificate generator script and build the project:
+ # [Windows](#tab/windows)
- ```cmd/sh
- cd azure-iot-sdk-node/provisioning/tools
- npm install
+ ```bash
+ winpty openssl rsa -in device-key.pem -out unencrypted-device-key.pem
```
-2. Create a _leaf_ X.509 certificate by running the script using your own _certificate-name_. For X.509-based enrollments, the leaf certificate's common name becomes the [Registration ID](./concepts-service.md#registration-id). The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The _certificate-name_ parameter must adhere to this format.
+ # [Linux](#tab/linux)
- ```cmd/sh
- node create_test_cert.js device {certificate-name}
+ ```bash
+ openssl rsa -in device-key.pem -out unencrypted-device-key.pem
```
+
+
+7. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`.
+
+Keep the Git Bash prompt open. You'll need it later in this quickstart.
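
The registration ID format rules above are easy to get wrong when choosing a common name. As a rough illustration (not part of the quickstart), the following Python sketch checks a candidate common name against those rules; the helper name `is_valid_registration_id` is hypothetical.

```python
import re

# Hypothetical helper mirroring the registration ID rules described above:
# 1 to 128 characters, alphanumerics plus '-', '.', '_', ':',
# and a last character that is alphanumeric or a dash ('-').
def is_valid_registration_id(common_name: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9._:-]{0,127}[A-Za-z0-9-]", common_name) is not None

print(is_valid_registration_id("my-x509-device"))  # True
print(is_valid_registration_id("my-device:"))      # False: ends with ':'
```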
+ ::: zone-end
-1. In the Git Bash prompt, run the following command:
+The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS12 formatted file (`certificate.pfx`). You'll also need the PEM formatted public key certificate file (`device-cert.pem`) that you just generated to create an individual enrollment entry later in this quickstart.
+
+1. To generate the PKCS12 formatted file expected by the sample, enter the following command:
   # [Windows](#tab/windows)

   ```bash
- winpty openssl req -outform PEM -x509 -sha256 -newkey rsa:4096 -keyout ./python-device.key.pem -out ./python-device.pem -days 365 -extensions usr_cert -subj "//CN=Python-device-01"
+ winpty openssl pkcs12 -inkey device-key.pem -in device-cert.pem -export -out certificate.pfx
```
- > [!IMPORTANT]
- > The extra forward slash given for the subject name (`//CN=Python-device-01`) is only required to escape the string with Git on Windows platforms.
-
   # [Linux](#tab/linux)

   ```bash
- openssl req -outform PEM -x509 -sha256 -newkey rsa:4096 -keyout ./python-device.key.pem -out ./python-device.pem -days 365 -extensions usr_cert -subj "/CN=Python-device-01"
+ openssl pkcs12 -inkey device-key.pem -in device-cert.pem -export -out certificate.pfx
```
-2. When asked to **Enter PEM pass phrase:**, use the pass phrase `1234`.
+1. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`.
-3. When asked **Verifying - Enter PEM pass phrase:**, use the pass phrase `1234` again.
+1. When asked to **Enter Export Password:**, use the password `1234`.
-A test certificate file (*python-device.pem*) and private key file (*python-device.key.pem*) should now be generated in the directory where you ran the `openssl` command.
+1. When asked **Verifying - Enter Export Password:**, use the password `1234` again.
-The certificate file has its subject common name (CN) set to `Python-device-01`. For an X.509-based enrollments, the [Registration ID](./concepts-service.md#registration-id) is set to the common name. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format.
+ A PKCS12 formatted certificate file (*certificate.pfx*) should now be generated in the directory where you ran the `openssl` command.
+
+1. Copy the PKCS12 formatted certificate file to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the sample repo.
+
+ ```bash
+ cp certificate.pfx ./azure-iot-samples-csharp/provisioning/Samples/device/X509Sample
+ ```
+
+You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
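
If the sample later fails to load *certificate.pfx*, it can help to confirm the file and its export password before continuing. The following optional Python sketch (not part of the quickstart) assumes the `cryptography` package is installed.

```python
from cryptography.hazmat.primitives.serialization.pkcs12 import load_key_and_certificates

# Optional sanity check: load certificate.pfx with the export password used
# above (1234) and print the subject, which should include CN=my-x509-device.
with open("certificate.pfx", "rb") as f:
    key, cert, extra_certs = load_key_and_certificates(f.read(), b"1234")

print(cert.subject.rfc4514_string())            # Expected: CN=my-x509-device
print("Private key present:", key is not None)  # Expected: True
```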
::: zone-end
-1. Using the command prompt from previous steps, go to the `target` folder.
+6. Copy the device certificate and private key to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the SDK.
-2. Run the .jar file created in the previous section.
+ ```bash
+ cp device-cert.pem ./azure-iot-sdk-node/provisioning/device/samples
+ cp device-key.pem ./azure-iot-sdk-node/provisioning/device/samples
+ ```
- ```cmd/sh
- cd target
- java -jar ./provisioning-x509-cert-generator-{version}-with-deps.jar
+You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
+++
+6. Copy the device certificate and private key to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the SDK.
+
+ ```bash
+ cp device-cert.pem ./azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios
+ cp device-key.pem ./azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios
```
-3. Enter **N** for _Do you want to input common name_. This creates a certificate with a subject common name (CN) of _microsoftriotcore_.
+You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
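
Before running the Python sample, you can optionally confirm that the certificate's subject common name matches the registration ID you plan to use. A minimal sketch, assuming the `cryptography` package is installed; it isn't part of the quickstart.

```python
from cryptography import x509
from cryptography.x509.oid import NameOID

# The subject CN of device-cert.pem must match the DPS registration ID
# (my-x509-device in this quickstart).
with open("device-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

cn = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
print("Subject CN:", cn)
assert cn == "my-x509-device", "CN must match the registration ID"
```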
- For an X.509-based enrollments, the [Registration ID](./concepts-service.md#registration-id) is set to the common name. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format.
-4. Copy the output of `Client Cert` to the clipboard, starting from *--BEGIN CERTIFICATE--* through *--END CERTIFICATE--*.
+6. The Java sample code requires a private key that isn't encrypted. Run the following command to create an unencrypted private key:
- ![Individual certificate generator](./media/quick-create-simulated-device-x509/cert-generator-java.png)
+ # [Windows](#tab/windows)
-5. Create a file named *_X509individual.pem_* on your Windows machine.
+ ```bash
+ winpty openssl pkey -in device-key.pem -out unencrypted-device-key.pem
+ ```
-6. Open *_X509individual.pem_* in an editor of your choice, and copy the clipboard contents to this file.
+ # [Linux](#tab/linux)
-7. Save the file and close your editor.
+ ```bash
+ openssl pkey -in device-key.pem -out unencrypted-device-key.pem
+ ```
+
+
-8. In the command prompt, enter **N** for _Do you want to input Verification Code_ and keep the program output open for reference later in the quickstart. Copy the `Client Cert` and `Client Cert Private Key` values, for use in the next section.
+7. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`.
+
+Keep the Git Bash prompt open. You'll need it later in this quickstart.
::: zone-end
This article demonstrates an individual enrollment for a single device to be pro
5. At the top of the page, select **+ Add individual enrollment**.
-
6. In the **Add Enrollment** page, enter the following information.

   * **Mechanism:** Select **X.509** as the identity attestation *Mechanism*.
- * **Primary certificate .pem or .cer file:** Choose **Select a file** to select the certificate file, *X509testcert.pem* that you created in the previous section.
- * **IoT Hub Device ID:** Enter *test-docs-cert-device* to give the device an ID.
+ * **Primary certificate .pem or .cer file:** Choose **Select a file** and navigate to and select the certificate file, *device-cert.pem*, that you created in the previous section.
+ * Leave **IoT Hub Device ID:** blank. Your device will be provisioned with its device ID set to the common name (CN) in the X.509 certificate, *my-x509-device*. This common name will also be the name used for the registration ID for the individual enrollment entry.
+ * Optionally, you can provide the following information:
+ * Select an IoT hub linked with your provisioning service.
+ * Update the **Initial device twin state** with the desired initial configuration for the device.
- :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/add-individual-enrollment-with-cert.png" alt-text="Screenshot that shows adding an individual enrollment with X.509 attestation to D P S in Azure portal.":::
+7. Select **Save**. You'll be returned to **Manage enrollments**.
+8. Select **Individual Enrollments**. Your X.509 enrollment entry, *my-x509-device*, should appear in the list.
-6. In the **Add Enrollment** page, enter the following information.
+## Prepare and run the device provisioning code
- * **Mechanism:** Select **X.509** as the identity attestation *Mechanism*.
- * **Primary certificate .pem or .cer file:** Choose **Select a file** to select the certificate file, *certificate.cer* that you created in the previous section.
- * Leave **IoT Hub Device ID:** blank. Your device will be provisioned with its device ID set to the common name (CN) in the X.509 certificate, *iothubx509device1*. This common name will also be the name used for the registration ID for the individual enrollment entry.
- * Optionally, you can provide the following information:
- * Select an IoT hub linked with your provisioning service.
- * Update the **Initial device twin state** with the desired initial configuration for the device.
+In this section, you'll update the sample code to send the device's boot sequence to your Device Provisioning Service instance. This boot sequence will cause the device to be recognized and assigned to an IoT hub linked to the DPS instance.
- :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation.":::
+In this section, you'll use your Git Bash prompt and the Visual Studio IDE.
+### Configure the provisioning device code
-6. In the **Add Enrollment** page, enter the following information.
+In this section, you update the sample code with your Device Provisioning Service instance information.
- * **Mechanism:** Select **X.509** as the identity attestation *Mechanism*.
- * **Primary certificate .pem or .cer file:** Choose **Select a file** to select the certificate file, *{certificate-name}_cert.pem* that you created in the previous section.
- * Optionally, you can provide the following information:
- * Select an IoT hub linked with your provisioning service.
- * Enter a unique device ID. Make sure to avoid sensitive data while naming your device.
- * Update the **Initial device twin state** with the desired initial configuration for the device.
- :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation.":::
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+1. Copy the **ID Scope** value.
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the I D scope on Azure portal.":::
-6. In the **Add Enrollment** page, enter the following information.
+1. Launch Visual Studio and open the new solution file that was created in the `cmake` directory you created in the root of the azure-iot-sdk-c git repository. The solution file is named `azure_iot_sdks.sln`.
- * **Mechanism:** Select **X.509** as the identity attestation *Mechanism*.
- * **Primary certificate .pem or .cer file:** Choose **Select a file** to select the certificate file, *python-device.pem* if you are using the test certificate created earlier.
- * Optionally, you can provide the following information:
- * Select an IoT hub linked with your provisioning service.
- * Update the **Initial device twin state** with the desired initial configuration for the device.
+1. In Solution Explorer for Visual Studio, navigate to **Provision_Samples > prov_dev_client_sample > Source Files** and open *prov_dev_client_sample.c*.
- :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation.":::
+1. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied in step 2.
+ ```c
+ static const char* id_scope = "0ne00000A0A";
+ ```
+1. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_X509` as shown below.
-6. In the **Add Enrollment** panel, enter the following information:
- * Select **X.509** as the identity attestation *Mechanism*.
- * Under the *Primary certificate .pem or .cer file*, choose *Select a file* to select the certificate file *X509individual.pem* created in the previous steps.
- * Optionally, you may provide the following information:
- * Select an IoT hub linked with your provisioning service.
- * Enter a unique device ID. Make sure to avoid sensitive data while naming your device.
- * Update the **Initial device twin state** with the desired initial configuration for the device.
+ ```c
+ SECURE_DEVICE_TYPE hsm_type;
+ //hsm_type = SECURE_DEVICE_TYPE_TPM;
+ hsm_type = SECURE_DEVICE_TYPE_X509;
+ //hsm_type = SECURE_DEVICE_TYPE_SYMMETRIC_KEY;
+ ```
- :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation.":::
+1. Save your changes.
+1. Right-click the **prov_dev_client_sample** project and select **Set as Startup Project**.
-7. Select **Save**. You'll be returned to **Manage enrollments**.
+### Configure the custom HSM stub code
-8. Select **Individual Enrollments**. Your X.509 enrollment entry should appear in the registration table.
+The specifics of interacting with actual secure hardware-based storage vary depending on the hardware. As a result, the certificate and private key used by the simulated device in this quickstart will be hardcoded in the custom Hardware Security Module (HSM) stub code.
-## Prepare and run the device provisioning code
+To update the custom HSM stub code to simulate the identity of the device with ID `my-x509-device`:
+1. In Solution Explorer for Visual Studio, navigate to **Provision_Samples > custom_hsm_example > Source Files** and open *custom_hsm_example.c*.
-In this section, we'll update the sample code to send the device's boot sequence to your Device Provisioning Service instance. This boot sequence will cause the device to be recognized and assigned to an IoT hub linked to the Device Provisioning Service instance.
+1. Update the string value of the `COMMON_NAME` string constant using the common name you used when generating the device certificate, `my-x509-device`.
-1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+ ```c
+ static const char* const COMMON_NAME = "my-x509-device";
+ ```
-2. Copy the **_ID Scope_** value.
+1. Update the string value of the `CERTIFICATE` constant string using the device certificate, *device-cert.pem*, that you generated previously.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Copy ID Scope from the portal.":::
+   The syntax of certificate text in the sample must follow the pattern below, with no extra spaces and no reformatting applied by Visual Studio.
-3. In Visual Studio's *Solution Explorer* window, navigate to the **Provision\_Samples** folder. Expand the sample project named **prov\_dev\_client\_sample**. Expand **Source Files**, and open **prov\_dev\_client\_sample.c**.
+ ```c
+    static const char* const CERTIFICATE = "-----BEGIN CERTIFICATE-----\n"
+ "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n"
+ ...
+ "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n"
+ "--END CERTIFICATE--";
+ ```
-4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier.
+   Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `CERTIFICATE` string constant value and write it to the output. (A Python alternative is sketched at the end of this section.)
- ```c
- static const char* id_scope = "0ne00002193";
+ ```Bash
+ sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' device-cert.pem
```
-5. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_X509` instead of `SECURE_DEVICE_TYPE_TPM` as shown below.
+ Copy and paste the output certificate text for the constant value.
+
+1. Update the string value of the `PRIVATE_KEY` constant with the unencrypted private key for your device certificate, *unencrypted-device-key.pem*.
+
+   The syntax of the private key text must follow the pattern below, with no extra spaces and no reformatting applied by Visual Studio.
```c
- SECURE_DEVICE_TYPE hsm_type;
- //hsm_type = SECURE_DEVICE_TYPE_TPM;
- hsm_type = SECURE_DEVICE_TYPE_X509;
+    static const char* const PRIVATE_KEY = "-----BEGIN RSA PRIVATE KEY-----\n"
+ "MIIJJwIBAAKCAgEAtjvKQjIhp0EE1PoADL1rfF/W6v4vlAzOSifKSQsaPeebqg8U\n"
+ ...
+ "X7fi9OZ26QpnkS5QjjPTYI/wwn0J9YAwNfKSlNeXTJDfJ+KpjXBcvaLxeBQbQhij\n"
+ "--END RSA PRIVATE KEY--";
```
-6. Right-click the **prov\_dev\_client\_sample** project and select **Set as Startup Project**.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `PRIVATE_KEY` string constant value and write it to the output.
-7. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. In the prompt to rebuild the project, select **Yes** to rebuild the project before running.
+ ```Bash
+ sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' unencrypted-device-key.pem
+ ```
- The following output is an example of the provisioning device client sample successfully booting up, and connecting to the provisioning Service instance to get IoT hub information and registering:
+ Copy and paste the output private key text for the constant value.
- ```cmd
- Provisioning API Version: 1.2.7
+1. Save your changes.
+
+1. Right-click the **custom_hsm_example** project and select **Build**.
+
+ > [!IMPORTANT]
+ > You must build the **custom_hsm_example** project before you build the rest of the solution in the next section.
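
If `sed` isn't available, the same transformation is easy to reproduce in a scripting language. The following Python sketch (an illustration, not part of the quickstart; the file name `pem_to_literal.py` is hypothetical) prints a PEM file as the quoted, `\n`-terminated string literals that the stub's `CERTIFICATE` and `PRIVATE_KEY` constants expect.

```python
import sys

# Wrap each line of a PEM file in double quotes with an embedded "\n",
# matching the pattern shown above; the final line gets no "\n",
# mirroring the output of the sed one-liner.
def pem_to_c_literal(path: str) -> str:
    with open(path) as f:
        lines = f.read().splitlines()
    quoted = [f'"{line}\\n"' for line in lines[:-1]]
    quoted.append(f'"{lines[-1]}"')
    return "\n".join(quoted)

if __name__ == "__main__":
    # For example: python pem_to_literal.py device-cert.pem
    print(pem_to_c_literal(sys.argv[1]))
```

The same sketch works for the Java sample later in this quickstart if you append `" +"` to every line of the output except the last.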
- Registering... Press enter key to interrupt.
+### Run the sample
+1. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. If you're prompted to rebuild the project, select **Yes** to rebuild the project before running.
+
+ The following output is an example of the simulated device `my-x509-device` successfully booting up, and connecting to the provisioning service. The device is assigned to an IoT hub and registered:
+
+ ```output
+ Provisioning API Version: 1.8.0
+
+ Registering Device
+
   Provisioning Status: PROV_DEVICE_REG_STATUS_CONNECTED
   Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
- Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
-
- Registration Information received from service:
- test-docs-hub.azure-devices.net, deviceId: test-docs-cert-device
+
+   Registration Information received from service:
+   contoso-iot-hub-2.azure-devices.net, deviceId: my-x509-device
+ Press enter key to exit:
   ```

::: zone-end

::: zone pivot="programming-language-csharp"
+In this section, you'll use your Windows command prompt.
+ 1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
-2. Copy the **_ID Scope_** value.
+2. Copy the **ID Scope** value.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Copy ID Scope from the portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the I D scope on Azure portal.":::
-3. Open a command prompt window.
+3. In your Windows command prompt, change to the X509Sample directory. It's located at *.\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample*, relative to the directory where you cloned the samples on your computer.
-4. Type the following command to build and run the X.509 device provisioning sample (replace the `<IDScope>` value with the ID Scope that you copied in the previous section.). The certificate file will default to *./certificate.pfx* and prompt for the .pfx password. Type in your password.
+4. Enter the following command to build and run the X.509 device provisioning sample (replace the `<IDScope>` value with the ID Scope that you copied in the previous section). The certificate file will default to *./certificate.pfx* and prompt for the .pfx password.
- ```powershell
+ ```cmd
   dotnet run -- -s <IDScope>
   ```
- If you want to pass everything as a parameter, you can use the following example format.
+    If you want to pass the certificate and password as parameters, you can use the following format.
- ```powershell
+ ```cmd
   dotnet run -- -s 0ne00000A0A -c certificate.pfx -p 1234
   ```
-5. The device will now connect to DPS and be assigned to an IoT Hub. Then, the device will send a telemetry message to the hub.
+5. The device will connect to DPS and be assigned to an IoT hub. Then, the device will send a telemetry message to the IoT hub.
   ```output
   Loading the certificate...
- Found certificate: 10952E59D13A3E388F88E534444484F52CD3D9E4 CN=iothubx509device1, O=TEST, C=US; PrivateKey: True
- Using certificate 10952E59D13A3E388F88E534444484F52CD3D9E4 CN=iothubx509device1, O=TEST, C=US
+ Enter the PFX password for certificate.pfx:
+ ****
+ Found certificate: A33DB11B8883DEE5B1690ACFEAAB69E8E928080B CN=my-x509-device; PrivateKey: True
+ Using certificate A33DB11B8883DEE5B1690ACFEAAB69E8E928080B CN=my-x509-device
Initializing the device provisioning client...
- Initialized for registration Id iothubx509device1.
+ Initialized for registration Id my-x509-device.
   Registering with the device provisioning service...
   Registration status: Assigned.
- Device iothubx509device2 registered to sample-iot-hub1.azure-devices.net.
+ Device my-x509-device registered to MyExampleHub.azure-devices.net.
   Creating X509 authentication for IoT Hub...
   Testing the provisioned device with IoT Hub...
   Sending a telemetry message...
In this section, we'll update the sample code to send the device's boot sequence
::: zone pivot="programming-language-nodejs"
-1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+In this section, you'll use your Windows command prompt.
-2. Copy the **_ID Scope_** and **Global device endpoint** values.
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Copy ID Scope from the portal.":::
+1. Copy the **ID Scope** and **Global device endpoint** values.
-3. Copy your _certificate_ and _key_ to the sample folder.
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the I D scope and global device endpoint on Azure portal.":::
- ```cmd/sh
- copy .\{certificate-name}_cert.pem ..\device\samples\{certificate-name}_cert.pem
- copy .\{certificate-name}_key.pem ..\device\samples\{certificate-name}_key.pem
- ```
+1. In your Windows command prompt, go to the sample directory, and install the packages needed by the sample. The path shown is relative to the location where you cloned the SDK.
-4. Navigate to the device test script and build the project.
-
- ```cmd/sh
- cd ..\device\samples
+ ```cmd
+ cd ./azure-iot-sdk-node/provisioning/device/samples
   npm install
   ```
-5. Edit the **register\_x509.js** file with the following changes:
+1. Edit the **register_x509.js** file and make the following changes:
- * Replace `provisioning host` with the **_Global Device Endpoint_** noted in **Step 1** above.
- * Replace `id scope` with the **_ID Scope_** noted in **Step 1** above.
- * Replace `registration id` with the **_Registration ID_** noted in the previous section.
- * Replace `cert filename` and `key filename` with the files you copied in **Step 2** above.
+ * Replace `provisioning host` with the **Global Device Endpoint** noted in **Step 1** above.
+ * Replace `id scope` with the **ID Scope** noted in **Step 1** above.
+ * Replace `registration id` with the **Registration ID** noted in the previous section.
+ * Replace `cert filename` and `key filename` with the files you generated previously, *device-cert.pem* and *device-key.pem*.
-6. Save the file.
+1. Save the file.
-7. Execute the script and verify that the device was provisioned successfully.
+1. Run the sample and verify that the device was provisioned successfully.
- ```cmd/sh
+ ```cmd
   node register_x509.js
   ```
In this section, we'll update the sample code to send the device's boot sequence
::: zone pivot="programming-language-python"
-The Python provisioning sample, [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/provision_x509.py) is located in the `azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios` directory. This sample uses six environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
-
-| Variable name | Description |
-| :- | :- |
-| `PROVISIONING_HOST` | This value is the global endpoint used for connecting to your DPS resource |
-| `PROVISIONING_IDSCOPE` | This value is the ID Scope for your DPS resource |
-| `DPS_X509_REGISTRATION_ID` | This value is the ID for your device. It must also match the subject name on the device certificate |
-| `X509_CERT_FILE` | Your device certificate filename |
-| `X509_KEY_FILE` | The private key filename for your device certificate |
-| `PASS_PHRASE` | The pass phrase you used to encrypt the certificate and private key file (`1234`). |
+In this section, you'll use your Windows command prompt.
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
-2. Copy the **_ID Scope_** and **Global device endpoint** values.
+1. Copy the **ID Scope** and **Global device endpoint** values.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Copy ID Scope from the portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the I D scope and global device endpoint on Azure portal.":::
-3. In your Git Bash prompt, use the following commands to add the environment variables for the global device endpoint and ID Scope.
+1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
- ```bash
- $export PROVISIONING_HOST=global.azure-devices-provisioning.net
- $export PROVISIONING_IDSCOPE=<ID scope for your DPS resource>
+ ```cmd
+ cd ./azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios
```
-4. The registration ID for the IoT device must match subject name on its device certificate. If you generated a self-signed test certificate, `Python-device-01` is both the subject name and the registration ID for the device.
+ This sample uses six environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
- If you already have a device certificate, you can use `certutil` to verify the subject common name used for your device, as shown below:
+ | Variable name | Description |
+ | :- | :- |
+ | `PROVISIONING_HOST` | This value is the global endpoint used for connecting to your DPS resource |
+ | `PROVISIONING_IDSCOPE` | This value is the ID Scope for your DPS resource |
+ | `DPS_X509_REGISTRATION_ID` | This value is the ID for your device. It must also match the subject name on the device certificate |
+ | `X509_CERT_FILE` | Your device certificate filename |
+ | `X509_KEY_FILE` | The private key filename for your device certificate |
+ | `PASS_PHRASE` | The pass phrase you used to encrypt the certificate and private key file (`1234`). |
- ```bash
- $ certutil python-device.pem
- X509 Certificate:
- Version: 3
- Serial Number: fa33152fe1140dc8
- Signature Algorithm:
- Algorithm ObjectId: 1.2.840.113549.1.1.11 sha256RSA
- Algorithm Parameters:
- 05 00
- Issuer:
- CN=Python-device-01
- Name Hash(sha1): 1dd88de40e9501fb64892b698afe12d027011000
- Name Hash(md5): a62c784820daa931b9d3977739b30d12
-
- NotBefore: 1/29/2021 7:05 PM
- NotAfter: 1/29/2022 7:05 PM
-
- Subject:
- ===> CN=Python-device-01 <===
- Name Hash(sha1): 1dd88de40e9501fb64892b698afe12d027011000
- Name Hash(md5): a62c784820daa931b9d3977739b30d12
- ```
-
-5. In the Git Bash prompt, set the environment variable for the registration ID as follows:
+1. Add the environment variables for the global device endpoint and ID Scope.
- ```bash
- $export DPS_X509_REGISTRATION_ID=Python-device-01
+ ```cmd
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+ set PROVISIONING_IDSCOPE=<ID scope for your DPS resource>
```
-6. In the Git Bash prompt, set the environment variables for the certificate file, private key file, and pass phrase.
+1. The registration ID for the IoT device must match the subject name on its device certificate. If you generated a self-signed test certificate, `my-x509-device` is both the subject name and the registration ID for the device.
- ```bash
- $export X509_CERT_FILE=./python-device.pem
- $export X509_KEY_FILE=./python-device.key.pem
- $export PASS_PHRASE=1234
+1. Set the environment variable for the registration ID as follows:
+
+ ```cmd
+ set DPS_X509_REGISTRATION_ID=my-x509-device
```
-7. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
+1. Set the environment variables for the certificate file, private key file, and pass phrase.
+
+ ```cmd
+ set X509_CERT_FILE=./device-cert.pem
+ set X509_KEY_FILE=./device-key.pem
+ set PASS_PHRASE=1234
+ ```
-8. Save your changes.
+1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
-9. Run the sample. The sample will connect, provision the device to a hub, and send some test messages to the hub.
+1. Save your changes.
- ```bash
- $ winpty python azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios/provision_x509.py
+1. Run the sample. The sample will connect to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
+
+ ```cmd
   python provision_x509.py
   RegistrationStage(RequestAndResponseOperation): Op will transition into polling after interval 2. Setting timer.
   The complete registration result is
- Python-device-01
+ my-x509-device
   TestHub12345.azure-devices.net
   initialAssignment
   null
The Python provisioning sample, [provision_x509.py](https://github.com/Azure/azu
::: zone pivot="programming-language-java"
+In this section, you'll use both your Windows command prompt and your Git Bash prompt.
+ 1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
-2. Copy the **_ID Scope_** and **Global device endpoint** values.
+1. Copy the **ID Scope** and **Global device endpoint** values.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Copy ID Scope from the portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the I D scope and global device endpoint on Azure portal.":::
-3. Open a command prompt. Navigate to the sample project folder of the Java SDK repository.
+1. In your Windows command prompt, navigate to the sample project folder. The path shown is relative to the location where you cloned the SDK.
- ```cmd/sh
- cd azure-iot-sdk-java/provisioning/provisioning-samples/provisioning-X509-sample
+ ```cmd
+ cd .\azure-iot-sdk-java\provisioning\provisioning-samples\provisioning-X509-sample
```
-4. Enter the provisioning service and X.509 identity information in your code. This is used during provisioning, for attestation of the simulated device, prior to device registration:
+1. Enter the provisioning service and X.509 identity information in the sample code. This information is used during provisioning to attest the simulated device before device registration.
- * Edit the file `/src/main/java/samples/com/microsoft/azure/sdk/iot/ProvisioningX509Sample.java`, to include your _ID Scope_ and _Provisioning Service Global Endpoint_ as noted previously. Also include _Client Cert_ and _Client Cert Private Key_ as noted in the previous section.
+    1. Open the file `.\src\main\java\samples\com\microsoft\azure\sdk\iot\ProvisioningX509Sample.java` in your favorite editor.
- ```java
- private static final String idScope = "[Your ID scope here]";
- private static final String globalEndpoint = "[Your Provisioning Service Global Endpoint here]";
- private static final ProvisioningDeviceClientTransportProtocol PROVISIONING_DEVICE_CLIENT_TRANSPORT_PROTOCOL = ProvisioningDeviceClientTransportProtocol.HTTPS;
- private static final String leafPublicPem = "<Your Public PEM Certificate here>";
- private static final String leafPrivateKey = "<Your Private PEM Key here>";
- ```
+ 1. Update the following values with the **ID Scope** and **Provisioning Service Global Endpoint** that you copied previously.
- * Use the following format when copying/pasting your certificate and private key:
+ ```java
+ private static final String idScope = "[Your ID scope here]";
+ private static final String globalEndpoint = "[Your Provisioning Service Global Endpoint here]";
+       private static final ProvisioningDeviceClientTransportProtocol PROVISIONING_DEVICE_CLIENT_TRANSPORT_PROTOCOL = ProvisioningDeviceClientTransportProtocol.HTTPS;
+       ```
- ```java
- private static final String leafPublicPem = "--BEGIN CERTIFICATE--\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "--END CERTIFICATE--\n";
- private static final String leafPrivateKey = "--BEGIN PRIVATE KEY--\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "XXXXXXXXXX\n" +
- "--END PRIVATE KEY--\n";
- ```
+ 1. Update the value of the `leafPublicPem` constant string with the value of your certificate, *device-cert.pem*.
-5. Build the sample, and then go to the `target` folder and execute the created .jar file.
+ The syntax of certificate text must follow the pattern below with no extra spaces or characters.
- ```cmd/sh
- mvn clean install
- cd target
- java -jar ./provisioning-x509-sample-{version}-with-deps.jar
- ```
+ ```java
+        private static final String leafPublicPem = "-----BEGIN CERTIFICATE-----\n" +
+ "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n" +
+ ...
+ "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n" +
+ "--END CERTIFICATE--";
+ ```
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPublicPem` string constant value and write it to the output.
-## Confirm your device provisioning registration
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' device-cert.pem
+ ```
-1. Go to the [Azure portal](https://portal.azure.com).
+ Copy and paste the output certificate text for the constant value.
-2. On the left-hand menu or on the portal page, select **All resources**.
+ 1. Update the string value of the `leafPrivateKey` constant with the unencrypted private key for your device certificate, *unencrypted-device-key.pem*.
+
+ The syntax of the private key text must follow the pattern below with no extra spaces or characters.
-3. Select the IoT hub to which your device was assigned.
+ ```java
+        private static final String leafPrivateKey = "-----BEGIN PRIVATE KEY-----\n" +
+ "MIIJJwIBAAKCAgEAtjvKQjIhp0EE1PoADL1rfF/W6v4vlAzOSifKSQsaPeebqg8U\n" +
+ ...
+ "X7fi9OZ26QpnkS5QjjPTYI/wwn0J9YAwNfKSlNeXTJDfJ+KpjXBcvaLxeBQbQhij\n" +
+ "--END PRIVATE KEY--";
+ ```
-4. In the **Explorers** menu, select **IoT Devices**.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPrivateKey` string constant value and write it to the output.
-5. If your device was provisioned successfully, the device ID should appear in the list, with **Status** set as *enabled*. If you don't see your device, select **Refresh** at the top of the page.
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' unencrypted-device-key.pem
+ ```
- :::zone pivot="programming-language-ansi-c"
+ Copy and paste the output private key text for the constant value.
- :::image type="content" source="./media/quick-create-simulated-device-x509/hub-registration.png" alt-text="Device is registered with the IoT hub":::
+ 1. Save your changes.
- ::: zone-end
- :::zone pivot="programming-language-csharp"
+1. Build the sample, and then go to the `target` folder.
- :::image type="content" source="./media/quick-create-simulated-device-x509/hub-registration-csharp.png" alt-text="CSharp device is registered with the IoT hub":::
+ ```cmd
+ mvn clean install
+ cd target
+ ```
+
+1. The build outputs a .jar file in the `target` folder with the following name format: `provisioning-x509-sample-{version}-with-deps.jar`; for example: `provisioning-x509-sample-1.8.1-with-deps.jar`. Execute the .jar file. You may need to replace the version in the command below.
+
+ ```cmd
+ java -jar ./provisioning-x509-sample-1.8.1-with-deps.jar
+ ```
+
+ The sample will connect to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
+
+ ```output
+ Starting...
+ Beginning setup.
+ WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
+ 2022-05-11 09:42:05,025 DEBUG (main) [com.microsoft.azure.sdk.iot.provisioning.device.ProvisioningDeviceClient] - Initialized a ProvisioningDeviceClient instance using SDK version 2.0.0
+ 2022-05-11 09:42:05,027 DEBUG (main) [com.microsoft.azure.sdk.iot.provisioning.device.ProvisioningDeviceClient] - Starting provisioning thread...
+ Waiting for Provisioning Service to register
+ 2022-05-11 09:42:05,030 INFO (global.azure-devices-provisioning.net-6255a8ba-CxnPendingConnectionId-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Opening the connection to device provisioning service...
+ 2022-05-11 09:42:05,252 INFO (global.azure-devices-provisioning.net-6255a8ba-Cxn6255a8ba-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Connection to device provisioning service opened successfully, sending initial device registration message
+ 2022-05-11 09:42:05,286 INFO (global.azure-devices-provisioning.net-6255a8ba-Cxn6255a8ba-azure-iot-sdk-RegisterTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.RegisterTask] - Authenticating with device provisioning service using x509 certificates
+ 2022-05-11 09:42:06,083 INFO (global.azure-devices-provisioning.net-6255a8ba-Cxn6255a8ba-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Waiting for device provisioning service to provision this device...
+ 2022-05-11 09:42:06,083 INFO (global.azure-devices-provisioning.net-6255a8ba-Cxn6255a8ba-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Current provisioning status: ASSIGNING
+ Waiting for Provisioning Service to register
+ 2022-05-11 09:42:15,685 INFO (global.azure-devices-provisioning.net-6255a8ba-Cxn6255a8ba-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Device provisioning service assigned the device successfully
+ IotHUb Uri : MyExampleHub.azure-devices.net
+ Device ID : java-device-01
+ 2022-05-11 09:42:25,057 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
+ 2022-05-11 09:42:25,080 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
+ 2022-05-11 09:42:25,087 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Initialized a DeviceClient instance using SDK version 2.0.3
+ 2022-05-11 09:42:25,129 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - Opening MQTT connection...
+ 2022-05-11 09:42:25,150 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT CONNECT packet...
+ 2022-05-11 09:42:25,982 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT CONNECT packet was acknowledged
+ 2022-05-11 09:42:25,983 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT SUBSCRIBE packet for topic devices/java-device-01/messages/devicebound/#
+ 2022-05-11 09:42:26,068 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT SUBSCRIBE packet for topic devices/java-device-01/messages/devicebound/# was acknowledged
+ 2022-05-11 09:42:26,068 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - MQTT connection opened successfully
+ 2022-05-11 09:42:26,070 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - The connection to the IoT Hub has been established
+ 2022-05-11 09:42:26,071 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Updating transport status to new status CONNECTED with reason CONNECTION_OK
+ 2022-05-11 09:42:26,071 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceIO] - Starting worker threads
+ 2022-05-11 09:42:26,073 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking connection status callbacks with new status details
+ 2022-05-11 09:42:26,074 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Client connection opened successfully
+ 2022-05-11 09:42:26,075 INFO (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Device client opened successfully
+ Sending message from device to IoT Hub...
+ 2022-05-11 09:42:26,077 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] )
+ Press any key to exit...
+ 2022-05-11 09:42:26,079 DEBUG (MyExampleHub.azure-devices.net-java-device-01-ee6c362d-Cxn7a1fb819-e46d-4658-9b03-ca50c88c0440-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] )
+ 2022-05-11 09:42:26,422 DEBUG (MQTT Call: java-device-01) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. Checking if there is record of sending this message ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] )
+ 2022-05-11 09:42:26,425 DEBUG (MyExampleHub.azure-devices.net-java-device-01-ee6c362d-Cxn7a1fb819-e46d-4658-9b03-ca50c88c0440-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] ) with status OK
+ Message sent!
+ ```
++
+## Confirm your device provisioning registration
- ::: zone-end
+To see which IoT hub your device was provisioned to, examine the registration details of the individual enrollment you created previously:
- :::zone pivot="programming-language-nodejs"
+1. In Azure portal, go to your Device Provisioning Service.
- :::image type="content" source="./media/quick-create-simulated-device-x509/hub-registration-nodejs.png" alt-text="Node.js device is registered with the IoT hub":::
+1. In the **Settings** menu, select **Manage enrollments**.
- ::: zone-end
+1. Select **Individual Enrollments**. The X.509 enrollment entry that you created previously, *my-x509-device*, should appear in the list.
- :::zone pivot="programming-language-python"
+1. Select the enrollment entry. The IoT hub that your device was assigned to and its device ID appear under **Registration Status**.
- :::image type="content" source="./media/quick-create-simulated-device-x509/hub-registration-python.png" alt-text="Python device is registered with the IoT hub":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/individual-enrollment-after-registration.png" alt-text="Screenshot that shows the individual enrollment registration status tab for the device on Azure portal.":::
- ::: zone-end
+To verify the device on your IoT hub:
- ::: zone pivot="programming-language-java"
+1. In Azure portal, go to the IoT hub that your device was assigned to.
- :::image type="content" source="./media/quick-create-simulated-device-x509/hub-registration-java.png" alt-text="Java device is registered with the IoT hub":::
+1. In the **Device management** menu, select **Devices**.
- ::: zone-end
+1. If your device was provisioned successfully, its device ID, *my-x509-device*, should appear in the list, with **Status** set as *enabled*. If you don't see your device, select **Refresh**.
+ :::image type="content" source="./media/quick-create-simulated-device-x509/iot-hub-registration.png" alt-text="Screenshot that shows the device is registered with the I o T hub in Azure portal.":::
::: zone pivot="programming-language-csharp,programming-language-nodejs,programming-language-python,programming-language-java"
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
Previously updated : 02/22/2022 Last updated : 06/01/2022
IoT Hub enforces other operational limits:
| Direct method<sup>1</sup> | Maximum direct method payload size is 128 KB. |
| Automatic device and module configurations<sup>1</sup> | 100 configurations per paid SKU hub. 10 configurations per free SKU hub. |
| IoT Edge automatic deployments<sup>1</sup> | 50 modules per deployment. 100 deployments (including layered deployments) per paid SKU hub. 10 deployments per free SKU hub. |
-| Twins<sup>1</sup> | Maximum size of desired properties and reported properties sections are 32 KB each. Maximum size of tags section is 8 KB. |
+| Twins<sup>1</sup> | Maximum size of desired properties and reported properties sections are 32 KB each. Maximum size of tags section is 8 KB. Maximum size of each individual property in every section is 4 KB. |
| Shared access policies | Maximum number of shared access policies is 16. |
| Restrict outbound network access | Maximum number of allowed FQDNs is 20. |
| x509 CA certificates | Maximum number of x509 CA certificates that can be registered on IoT Hub is 25. |
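
For workloads that push twin payloads close to these limits, a client-side pre-check can fail fast before the service rejects an update. A rough Python sketch (not from the article) that approximates the twin limits in the table above:

```python
import json

MAX_SECTION_BYTES = 32 * 1024   # desired properties or reported properties section
MAX_PROPERTY_BYTES = 4 * 1024   # each individual property
MAX_TAGS_BYTES = 8 * 1024       # tags section

# Approximate pre-check: measure the serialized JSON size of a twin section
# and of each top-level property before sending a twin update.
def check_twin_section(section: dict, max_section: int = MAX_SECTION_BYTES) -> None:
    if len(json.dumps(section).encode("utf-8")) > max_section:
        raise ValueError(f"twin section exceeds {max_section} bytes")
    for name, value in section.items():
        if len(json.dumps(value).encode("utf-8")) > MAX_PROPERTY_BYTES:
            raise ValueError(f"property '{name}' exceeds the 4 KB property limit")

# Example: check reported properties; for tags, pass MAX_TAGS_BYTES instead.
check_twin_section({"telemetryInterval": 30, "firmware": "1.0.3"})
```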
lab-services How To Create Schedules Within Canvas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-schedules-within-canvas.md
Here is how schedules affect lab VMs:
The scheduled running time of VMs does not count against the [quota](classroom-labs-concepts.md#quota) given to a user. The quota is for the time outside of schedule hours that a student spends on VMs.
-Educators can create, edit, and delete lab schedules within Canvas as in the Azure Lab Services portal. For more information on scheduling, see [Creating and managing schedules](how-to-create-schedules-within-canvas.md).
+Educators can create, edit, and delete lab schedules within Canvas as in the Azure Lab Services portal. For more information on scheduling, see [Creating and managing schedules](how-to-create-schedules.md).
> [!IMPORTANT]
> Schedules will apply at the course level. If you have many sections of a course, consider using [automatic shutdown policies](how-to-configure-auto-shutdown-lab-plans.md) and/or [quota hours](how-to-configure-student-usage.md#set-quotas-for-users).
lab-services How To Create Schedules Within Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-schedules-within-teams.md
Here's how schedules affect lab virtual machines:
> [!IMPORTANT] > The scheduled run time of VMs doesn't count against the quota allotted to a user. The allotted quota is for the time outside of schedule hours that a student spends on VMs.
-Users can create, edit, and delete lab schedules within Teams as in the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). For more information, see [creating and managing schedules](how-to-create-schedules-within-teams.md).
+Users can create, edit, and delete lab schedules within Teams as in the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). For more information, see [creating and managing schedules](how-to-create-schedules.md).
## Automatic shutdown and disconnect settings
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
ms.suite: integration Previously updated : 04/28/2022 Last updated : 06/01/2022
With the **Logic App (Standard)** resource type, you can create these workflow t
Create a stateless workflow when you don't need to keep, review, or reference data from previous events in external storage after each run finishes. These workflows save all the inputs and outputs for each action and their states *in memory only*, not in external storage. As a result, stateless workflows have shorter runs that are typically less than 5 minutes, faster performance with quicker response times, higher throughput, and reduced running costs because the run details and history aren't saved in external storage. However, if outages happen, interrupted runs aren't automatically restored, so the caller needs to manually resubmit interrupted runs.
- > [!IMPORTANT]
- > A stateless workflow provides the best performance when handling data or content, such as a file, that doesn't exceed 64 KB in *total* size.
- > Larger content sizes, such as multiple large attachments, might significantly slow your workflow's performance or even cause your workflow to
- > crash due to out-of-memory exceptions. If your workflow might have to handle larger content sizes, use a stateful workflow instead.
+ A stateless workflow provides the best performance when handling data or content, such as a file, that doesn't exceed 64 KB in *total* size. Larger content sizes, such as multiple large attachments, might significantly slow your workflow's performance or even cause your workflow to crash due to out-of-memory exceptions. If your workflow might have to handle larger content sizes, use a stateful workflow instead.
- Stateless workflows only run synchronously, so they don't use the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply) used by stateful workflows. Instead, all HTTP-based actions that return a ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response proceed to the next step in the workflow execution. If the response includes a `location` header, a stateless workflow won't poll the specified URI to check the status. To follow the standard asynchronous operation pattern, use a stateful workflow instead.
+ In stateless workflows, [*managed connector actions*](../connectors/managed.md) are available, but *managed connector triggers* are unavailable. So, to start your workflow, select a [built-in trigger](../connectors/built-in.md) instead, such as the Request, Event Hubs, or Service Bus trigger. These triggers run natively on the Azure Logic Apps runtime. The Recurrence trigger is unavailable for stateless workflows and is available only for stateful workflows. For more information about limited, unavailable, or unsupported triggers, actions, and connectors, see [Changed, limited, unavailable, or unsupported capabilities](#limited-unavailable-unsupported).
- For easier debugging, you can enable run history for a stateless workflow, which has some impact on performance, and then disable the run history when you're done. For more information, see [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless) or [Create single-tenant based workflows in the Azure portal](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless).
+ Stateless workflows run only synchronously, so they don't use the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply) used by stateful workflows. Instead, all HTTP-based actions that return a ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response continue to the next step in the workflow execution. If the response includes a `location` header, a stateless workflow won't poll the specified URI to check the status. To follow the standard asynchronous operation pattern, use a stateful workflow instead. (A sketch of this polling pattern appears after this list.)
- > [!NOTE]
- > Stateless workflows currently support only *actions* for [managed connectors](../connectors/managed.md),
- > which are deployed in Azure, and not triggers. To start your workflow, select either the
- > [built-in Request, Event Hubs, or Service Bus trigger](../connectors/built-in.md).
- > These triggers run natively in the Azure Logic Apps runtime. For more information about limited,
- > unavailable, or unsupported triggers, actions, and connectors, see
- > [Changed, limited, unavailable, or unsupported capabilities](#limited-unavailable-unsupported).
+ For easier debugging, you can enable run history for a stateless workflow, which has some impact on performance, and then disable the run history when you're done. For more information, see [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless) or [Create single-tenant based workflows in the Azure portal](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless).
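To make the difference concrete, here's a minimal sketch of the asynchronous operation pattern that stateful workflows automate and stateless workflows skip. The endpoint URL is hypothetical:

```python
import time
import requests

# Hypothetical long-running operation endpoint.
response = requests.post("https://example.com/api/start-job", json={"input": "data"})

if response.status_code == 202 and "Location" in response.headers:
    status_url = response.headers["Location"]
    # A stateful workflow polls this URI until the operation completes;
    # a stateless workflow proceeds to the next step without polling.
    while True:
        poll = requests.get(status_url)
        if poll.status_code != 202:
            break
        time.sleep(int(poll.headers.get("Retry-After", "5")))
    print(poll.status_code, poll.text)
```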
### Summary differences between stateful and stateless workflows <center>
-| Stateless | Stateful |
+| Stateful | Stateless |
|--|-|
-| Doesn't store run history, inputs, or outputs by default | Stores run history, inputs, and outputs |
-| Managed connector triggers are unavailable or not allowed | Managed connector triggers are available and allowed |
-| No support for chunking | Supports chunking |
-| No support for asynchronous operations | Supports asynchronous operations |
-| Best for workflows with max duration under 5 minutes | Edit default max run duration in host configuration |
-| Best for handling small message sizes (under 64K) | Handles large messages |
+| Stores run history, inputs, and outputs | Doesn't store run history, inputs, or outputs by default |
+| Managed connector triggers are available and allowed | Managed connector triggers are unavailable or not allowed |
+| Supports chunking | No support for chunking |
+| Supports asynchronous operations | No support for asynchronous operations |
+| Edit default max run duration in host configuration | Best for workflows with max duration under 5 minutes |
+| Handles large messages | Best for handling small message sizes (under 64K) |
||| </center>
With the **Logic App (Standard)** resource type, you can create these workflow t
You can [make a workflow callable](logic-apps-http-endpoint.md) from other workflows that exist in the same **Logic App (Standard)** resource by using the [Request trigger](../connectors/connectors-native-reqres.md), [HTTP Webhook trigger](../connectors/connectors-native-webhook.md), or managed connector triggers that have the [ApiConnectionWebhook type](logic-apps-workflow-actions-triggers.md#apiconnectionwebhook-trigger) and can receive HTTPS requests.
-Here are the behavior patterns that nested workflows can follow after a parent workflow calls a child workflow:
+The following list describes the behavior patterns that nested workflows can follow after a parent workflow calls a child workflow:
* Asynchronous polling pattern
- The parent doesn't wait for a response to their initial call, but continually checks the child's run history until the child finishes running. By default, stateful workflows follow this pattern, which is ideal for long-running child workflows that might exceed [request timeout limits](logic-apps-limits-and-config.md).
+ The parent workflow doesn't wait for the child workflow to respond to the initial call. However, the parent continually checks the child's run history until the child finishes running. By default, stateful workflows follow this pattern, which is ideal for long-running child workflows that might exceed [request timeout limits](logic-apps-limits-and-config.md).
* Synchronous pattern ("fire and forget")
- The child acknowledges the call by immediately returning a `202 ACCEPTED` response, and the parent continues to the next action without waiting for the results from the child. Instead, the parent receives the results when the child finishes running. Child stateful workflows that don't include a Response action always follow the synchronous pattern. For child stateful workflows, the run history is available for you to review.
+ The child workflow acknowledges the parent workflow's call by immediately returning a `202 ACCEPTED` response. However, the parent doesn't wait for the child to return results. Instead, the parent continues on to the next action in the workflow and receives the results when the child finishes running. Child stateful workflows that don't include a Response action always follow the synchronous pattern and provide a run history for you to review.
To enable this behavior, in the workflow's JSON definition, set the `operationOptions` property to `DisableAsyncPattern`. For more information, see [Trigger and action types - Operation options](logic-apps-workflow-actions-triggers.md#operation-options). * Trigger and wait
- For a child stateless workflow, the parent waits for a response that returns the results from the child. This pattern works similar to using the built-in [HTTP trigger or action](../connectors/connectors-native-http.md) to call a child workflow. Child stateless workflows that don't include a Response action immediately return a `202 ACCEPTED` response, but the parent waits for the child to finish before continuing to the next action. These behaviors apply only to child stateless workflows.
+ Stateless workflows run in memory. So when a parent workflow calls a child stateless workflow, the parent waits for a response that returns the results from the child. This pattern works similarly to using the built-in [HTTP trigger or action](../connectors/connectors-native-http.md) to call a child workflow. Child stateless workflows that don't include a Response action immediately return a `202 ACCEPTED` response, but the parent waits for the child to finish before continuing to the next action. These behaviors apply only to child stateless workflows.
-This table specifies the child workflow's behavior based on whether the parent and child are stateful, stateless, or are mixed workflow types:
+The following table identifies the child workflow's behavior based on whether the parent and child are stateful, stateless, or mixed workflow types. The list after the table describes each behavior in more detail.
| Parent workflow | Child workflow | Child behavior | |--|-|-|
The single-tenant model and **Logic App (Standard)** resource type include many
For the **Logic App (Standard)** resource, these capabilities have changed, or they are currently limited, unavailable, or unsupported:
-* **Triggers and actions**: Built-in triggers and actions run natively in Azure Logic Apps, while managed connectors are hosted and run in Azure. Some built-in triggers and actions are unavailable, such as Sliding Window, Batch, Azure App Services, and Azure API Management. To start a stateful or stateless workflow, use the [Request, HTTP, HTTP Webhook, Event Hubs, Service Bus trigger, and so on](../connectors/built-in.md). The Recurrence trigger is available only for stateful workflows, not stateless workflows. In the designer, built-in triggers and actions appear under the **Built-in** tab.
+* **Triggers and actions**: [Built-in triggers and actions](../connectors/built-in.md) run natively in Azure Logic Apps, while managed connectors are hosted and run in Azure. For Standard workflows, some built-in triggers and actions are currently unavailable, such as Sliding Window, Batch, Azure App Service, and Azure API Management. To start a stateful or stateless workflow, use a built-in trigger such as the Request, Event Hubs, or Service Bus trigger. The Recurrence trigger is available for stateful workflows, but not stateless workflows. In the designer, built-in triggers and actions appear on the **Built-in** tab, while [managed connector triggers and actions](../connectors/managed.md) appear on the **Azure** tab.
- For *stateful* workflows, [managed connector triggers and actions](../connectors/managed.md) appear under the **Azure** tab, except for the unavailable operations listed below. For *stateless* workflows, the **Azure** tab doesn't appear when you want to select a trigger. You can select only [managed connector *actions*, not triggers](../connectors/managed.md). Although you can enable Azure-hosted managed connectors for stateless workflows, the designer doesn't show any managed connector triggers for you to add.
+ For *stateless* workflows, *managed connector actions* are available, but *managed connector triggers* are unavailable. So the **Azure** tab appears only when you can select managed connector actions. Although you can enable managed connectors for stateless workflows, the designer doesn't show any managed connector triggers for you to add.
> [!NOTE] > To run locally in Visual Studio Code, webhook-based triggers and actions require additional setup. For more information, see
machine-learning Migrate Rebuild Integrate With Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-integrate-with-client-app.md
Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Consume pipeline endpoints'
-description: Integrate pipeline endpoints with client applications in Azure Machine Learning.
+ Title: 'Migrate to Azure Machine Learning - Consume pipeline endpoints'
+description: Learn how to integrate pipeline endpoints with client applications in Azure Machine Learning as part of migrating from Machine Learning Studio (Classic).
+ Previously updated : 03/08/2021 Last updated : 05/31/2022 # Consume pipeline endpoints from client applications
Last updated 03/08/2021
In this article, you learn how to integrate client applications with Azure Machine Learning endpoints. For more information on writing application code, see [Consume an Azure Machine Learning endpoint](how-to-consume-web-service.md).
-This article is part of the Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see [the migration overview article](migrate-overview.md).
+This article is part of the ML Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see [the migration overview article](migrate-overview.md).
## Prerequisites
This article is part of the Studio (classic) to Azure Machine Learning migration
- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md#create-a-workspace). - An [Azure Machine Learning real-time endpoint or pipeline endpoint](migrate-rebuild-web-service.md).
+## Consume a real-time endpoint
-## Consume a real-time endpoint
-
-If you deployed your model as a **real-time endpoint**, you can find its REST endpoint, and pre-generated consumption code in C#, Python, and R:
+If you deployed your model as a *real-time endpoint*, you can find its REST endpoint, and pre-generated consumption code in C#, Python, and R:
1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)). 1. Go to the **Endpoints** tab.
If you deployed your model as a **real-time endpoint**, you can find its REST en
> [!NOTE] > You can also find the Swagger specification for your endpoint in the **Details** tab. Use the Swagger definition to understand your endpoint schema. For more information on Swagger definition, see [Swagger official documentation](https://swagger.io/docs/specification/2-0/what-is-swagger/). - ## Consume a pipeline endpoint There are two ways to consume a pipeline endpoint:
Call the REST endpoint from your client application. You can use the Swagger spe
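For example, the following is a minimal sketch of submitting a run to a published pipeline endpoint, assuming the v1 Python SDK (`azureml-core`) and `requests`; the REST endpoint URL and experiment name are placeholders:

```python
import requests
from azureml.core.authentication import InteractiveLoginAuthentication

# Placeholder: copy the REST endpoint URL from your pipeline endpoint's Details tab.
rest_endpoint = "https://<region>.api.azureml.ms/pipelines/v1.0/subscriptions/<sub-id>/..."

# Get an Azure AD bearer token for the request header.
auth = InteractiveLoginAuthentication()
headers = auth.get_authentication_header()

# Submit a pipeline run; the experiment name is an arbitrary example.
response = requests.post(rest_endpoint, headers=headers, json={"ExperimentName": "my-experiment"})
response.raise_for_status()
print(response.json().get("Id"))  # ID of the submitted pipeline run
```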
You can call your Azure Machine Learning pipeline as a step in an Azure Data Factory pipeline. For more information, see [Execute Azure Machine Learning pipelines in Azure Data Factory](../data-factory/transform-data-machine-learning-service.md). - ## Next steps In this article, you learned how to find schema and sample code for your pipeline endpoints. For more information on consuming endpoints from the client application, see [Consume an Azure Machine Learning endpoint](how-to-consume-web-service.md).
-See the rest of the articles in the Azure Machine Learning migration series:
-1. [Migration overview](migrate-overview.md).
-1. [Migrate dataset](migrate-register-dataset.md).
-1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
-1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
-1. **Integrate an Azure Machine Learning web service with client apps**.
-1. [Migrate Execute R Script](migrate-execute-r-script.md).
+See the rest of the articles in the Azure Machine Learning migration series:
+
+- [Migration overview](migrate-overview.md).
+- [Migrate dataset](migrate-register-dataset.md).
+- [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+- [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+- [Migrate Execute R Script](migrate-execute-r-script.md).
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md
Previously updated : 11/02/2021 Last updated : 05/31/2022 # Quickstart: Create an Azure Managed Instance for Apache Cassandra cluster from the Azure portal
This quickstart demonstrates how to use the Azure portal to create an Azure Mana
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-## <a id="create-account"></a>Create a managed instance cluster
+## Create a managed instance cluster
1. Sign in to the [Azure portal](https://portal.azure.com/).
If you don't have an Azure subscription, create a [free account](https://azure.m
:::image type="content" source="./media/create-cluster-portal/datacenter-1.png" alt-text="View datacenter nodes." lightbox="./media/create-cluster-portal/datacenter-1.png" border="true":::
-<!-- ## <a id="create-account"></a>Add a datacenter
+## Add a datacenter
1. To add another datacenter, click the add button in the **Data Center** pane:
If you don't have an Azure subscription, create a [free account](https://azure.m
* **Location** - Location where your datacenter will be deployed to. * **SKU Size** - Choose from the available Virtual Machine SKU sizes. * **No. of disks** - Choose the number of p30 disks to be attached to each Cassandra node.
- * **SKU Size** - Choose the number of Cassandra nodes that will be deployed to this datacenter.
+ * **No. of nodes** - Choose the number of Cassandra nodes that will be deployed to this datacenter.
 * **Virtual Network** - Select an existing Virtual Network and Subnet. :::image type="content" source="./media/create-cluster-portal/add-datacenter-2.png" alt-text="Add Datacenter." lightbox="./media/create-cluster-portal/add-datacenter-2.png" border="true":::
If you don't have an Azure subscription, create a [free account](https://azure.m
If you encounter an error when applying permissions to your Virtual Network using Azure CLI, such as *Cannot find user or service principal in graph database for 'e5007d2c-4b13-4a74-9b6a-605d99f03501'*, you can apply the same permission manually from the Azure portal. Learn how to do this [here](add-service-principal.md). > [!NOTE]
-> The Azure Cosmos DB role assignment is used for deployment purposes only. Azure Managed Instanced for Apache Cassandra has no backend dependencies on Azure Cosmos DB. -->
+> The Azure Cosmos DB role assignment is used for deployment purposes only. Azure Managed Instance for Apache Cassandra has no backend dependencies on Azure Cosmos DB.
## Connecting to your cluster
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
Modifying the parameter `replicate_wild_ignore_table` used to create replication
- The source server version must be at least MySQL version 5.7. - Our recommendation is to have the same version for source and replica server versions. For example, both must be MySQL version 5.7 or both must be MySQL version 8.0.-- Our recommendation is to have a primary key in each table. If we have table without primary key, you might face slowness in replication. To create primary keys for tables you can use [invisible column](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) if your MySQL version is greater than 8.0.23 `(ALTER TABLE <table name> ADD COLUMN <column name> bigint AUTO_INCREMENT INVISIBLE PRIMARY KEY;)`.
+- Our recommendation is to have a primary key in each table. If a table doesn't have a primary key, you might face slowness in replication (see the sketch after this list).
- The source server should use the MySQL InnoDB engine. - The user must have permissions to configure binary logging and create new users on the source server. - Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL, see how to configure binlog_expire_logs_seconds for [Flexible server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single server](../concepts-server-parameters.md#binlog_expire_logs_seconds)
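As a hedged illustration of the primary-key recommendation, this Python sketch (assuming `mysql-connector-python` and placeholder credentials) lists base tables on the source server that have no primary key:

```python
import mysql.connector

# Placeholder connection details for the replication source server.
conn = mysql.connector.connect(host="<source-server>", user="<user>", password="<password>")

# Find user tables that lack a PRIMARY KEY constraint.
query = """
SELECT t.table_schema, t.table_name
FROM information_schema.tables AS t
LEFT JOIN information_schema.table_constraints AS c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');
"""

cursor = conn.cursor()
cursor.execute(query)
for schema, table in cursor:
    print(f"{schema}.{table} has no primary key")
conn.close()
```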
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
Automatic backups, both snapshots and log backups, are performed on locally redu
>[!Note] >For both zone-redundant and same-zone HA:
->* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds.To create primary keys for tables you can use [invisible column](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) if your MySQL version is greater than 8.0.23 `(ALTER TABLE <table name> ADD COLUMN <column name> bigint AUTO_INCREMENT INVISIBLE PRIMARY KEY;)`.
+>* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds.
>* The standby server isn't available for read or write operations. It's a passive standby to enable fast failover. >* Always use a fully qualified domain name (FQDN) to connect to your primary server. Avoid using an IP address to connect. If there's a failover, after the primary and standby server roles are switched, a DNS A record might change. That change would prevent the application from connecting to the new primary server if an IP address is used in the connection string.
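For example, a minimal connection sketch (assuming `mysql-connector-python`; all names are placeholders) that follows the FQDN guidance above:

```python
import mysql.connector

# Connect with the server's FQDN, not an IP address, so that a failover-driven
# DNS record change is picked up automatically on reconnect.
conn = mysql.connector.connect(
    host="<your-server-name>.mysql.database.azure.com",  # FQDN, never a raw IP
    user="<admin-user>",
    password="<password>",
    database="<database>",
)
print(conn.is_connected())
conn.close()
```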
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
- **Announcing the addition of new Burstable compute instances for Azure Database for MySQL - Flexible Server** We are announcing the addition of new Burstable compute instances to support customers' auto-scaling compute requirements from 1 vCore up to 20 vCores. Learn more about [Compute Option for Azure Database for MySQL - Flexible Server](https://docs.microsoft.com/azure/mysql/flexible-server/concepts-compute-storage).
+- **Known issues**
+ - The Reserved instances (RI) feature in Azure Database for MySQL - Flexible Server is not working properly for the Business Critical service tier, after its rebranding from the Memory Optimized service tier. Specifically, instance reservation has stopped working, and we are currently working to fix the issue.
+ - Private DNS integration details are not displayed on a few Azure Database for MySQL flexible servers that have the HA option enabled. This issue doesn't affect the availability of the server or name resolution. We're working on a permanent fix, and it will be available in the next deployment. Meanwhile, if you want to view the Private DNS zone details, you can either search under [Private DNS zones](../../dns/private-dns-getstarted-portal.md) in the Azure portal or perform a [manual failover](concepts-high-availability.md#planned-forced-failover) of the HA-enabled flexible server and refresh the Azure portal.
+ ## April 2022 - **Minor version upgrade for Azure Database for MySQL - Flexible server to 8.0.28**
mysql 03 Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/mysql-on-premises-azure-db/03-assessment.md
The most important of which include:
- Automatic significant database migration (5.6 to 5.7, 5.7 to 8.0)
- - When using [MySQL Server User-Defined Functions (UDFs),](https://dev.mysql.com/doc/refman/5.7/en/server-udfs.html) the only viable hosting option is Azure Hosted VMs, as there's no capability to upload the `so` or `dll` component to Azure Database for MySQL.
+ - When using MySQL Server User-Defined Functions (UDFs), the only viable hosting option is Azure Hosted VMs, as there's no capability to upload the `so` or `dll` component to Azure Database for MySQL.
Many of the other items are operational aspects that administrators should become familiar with as part of the operational data workload lifecycle management. This guide explores many of these operational aspects in the Post Migration Management section.
For the first phase, WWI focused solely on the ConferenceDB database. The team n
## Next steps > [!div class="nextstepaction"]
-> [Planning](./04-planning.md)
+> [Planning](./04-planning.md)
mysql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-reserved-pricing.md
Last updated 10/06/2021
Azure Database for MySQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MySQL reserved instances, you make an upfront commitment on MySQL server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for MySQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. </br>
+>[!NOTE]
+>The Reserved instances (RI) feature in Azure Database for MySQL - Flexible Server is not working properly for the Business Critical service tier, after its rebranding from the Memory Optimized service tier. Specifically, instance reservation has stopped working, and we are currently working to fix the issue.
++ ## How does the instance reservation work? You don't need to assign the reservation to specific Azure Database for MySQL servers. Already running Azure Database for MySQL servers, and ones that are newly deployed, automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for MySQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation doesn't cover software, networking, or storage charges associated with the MySQL database server. At the end of the reservation term, the billing benefit expires, and Azure Database for MySQL servers are billed at the pay-as-you-go price. Reservations don't auto-renew. For pricing information, see the [Azure Database for MySQL reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/). </br>
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
Enabled the ability to change the server parameter innodb_ft_server_stopword_table from Portal/CLI. Users can now change the value of the innodb_ft_server_stopword_table parameter using the Azure portal and CLI. This parameter helps to configure your own InnoDB FULLTEXT index stopword list for all InnoDB tables. For more information, see [innodb_ft_server_stopword_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_ft_server_stopword_table).
+**Known Issues**
+
+Customers using the PHP driver with [enableRedirect](./how-to-redirection.md) can no longer connect to Azure Database for MySQL Single Server, because the CA certificates of the host servers were changed from BaltimoreCyberTrustRoot to DigiCertGlobalRootG2 to address compliance requirements. To connect successfully to your database using the PHP driver with enableRedirect, see [this link](./concepts-certificate-rotation.md#do-i-need-to-make-any-changes-on-my-client-to-maintain-connectivity).
+ ## March 2022 This release of Azure Database for MySQL - Single Server includes the following updates.
network-watcher Connection Monitor Create Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-portal.md
> [!IMPORTANT] > Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
+> [!IMPORTANT]
+> Connection Monitor now supports end-to-end connectivity checks from and to *Azure Virtual Machine Scale Sets*, enabling faster performance monitoring and network troubleshooting across scale sets.
+ Learn how to use Connection Monitor to monitor communication between your resources. This article describes how to create a monitor by using the Azure portal. Connection Monitor supports hybrid and Azure cloud deployments. ## Before you begin
-In connection monitors that you create by using Connection Monitor, you can add both on-premises machines and Azure VMs as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or on any other URL or IP.
+In connection monitors that you create by using Connection Monitor, you can add on-premises machines, Azure VMs, and Azure virtual machine scale sets as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or on any other URL or IP.
Here are some definitions to get you started:
Here are some definitions to get you started:
:::image type="content" source="./media/connection-monitor-2-preview/cm-tg-2.png" alt-text="Diagram that shows a connection monitor and defines the relationship between test groups and tests.":::
+ > [!NOTE]
+ > Connection Monitor now supports automatic enablement of monitoring extensions for Azure and non-Azure endpoints, eliminating the need to manually install monitoring solutions while creating a connection monitor.
## Create a connection monitor
Connection Monitor creates the connection monitor resource in the background.
## Create test groups in a connection monitor
+ >[!NOTE]
 > Connection Monitor now supports automatic enablement of monitoring extensions for Azure and non-Azure endpoints, eliminating the need to manually install monitoring solutions while creating a connection monitor.
+ Each test group in a connection monitor includes sources and destinations that get tested on network parameters. They're tested for the percentage of checks that fail and the RTT over test configurations. In the Azure portal, to create a test group in a connection monitor, you specify values for the following fields:
In the Azure portal, to create a test group in a connection monitor, you specify
* **Disable test group**: You can select this check box to disable monitoring for all sources and destinations that the test group specifies. This selection is cleared by default. * **Name**: Name your test group. * **Sources**: You can specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
- * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs that are bound to the region that you specified when you created the connection monitor. By default, VMs are grouped into the subscription that they belong to. These groups are collapsed.
+ * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or virtual machine scale sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and virtual machine scale sets are grouped into the subscription that they belong to. These groups are collapsed.
You can drill down from the **Subscription** level to other levels in the hierarchy:
- **Subscription** > **Resource group** > **VNET** > **Subnet** > **VMs with agents**
+ **Subscription** > **Resource group** > **VNET** > **Subnet** > **VMs with agents**
You can also change the **Group by** selector to start the tree from any other level. For example, if you group by virtual network, you see the VMs that have agents in the hierarchy **VNET** > **Subnet** > **VMs with agents**.
- When you select a VNET, subnet, or single VM, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected VNET or subnet that have the Azure Network Watcher extension participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
+ When you select a VNET, subnet, single VM, or virtual machine scale set, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected VNET or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
- :::image type="content" source="./media/connection-monitor-2-preview/add-azure-sources.png" alt-text="Screenshot that shows the Add Sources pane and the Azure endpoints tab in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the Add Sources pane and the Azure endpoints including V M S S tab in Connection Monitor.":::
* To choose on-premises agents, select the **NonΓÇôAzure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have the Network Performance Monitor configured.
In the Azure portal, to create a test group in a connection monitor, you specify
* To choose recently used endpoints, you can use the **Recent endpoint** tab
+ * You aren't limited to endpoints that already have monitoring agents enabled. You can select Azure or non-Azure endpoints without an agent enabled and proceed with creating the connection monitor. During the creation process, the monitoring agents for those endpoints are enabled automatically.
+ :::image type="content" source="./media/connection-monitor-2-preview/unified-enablement.png" alt-text="Screenshot that shows the Add Sources pane and the Non-Azure endpoints tab in Connection Monitor with unified enablement.":::
+
* When you finish setting up sources, select **Done** at the bottom of the tab. You can still edit basic properties like the endpoint name by selecting the endpoint in the **Create Test Group** view. * **Destinations**: You can monitor connectivity to an Azure VM, an on-premises machine, or any endpoint (a public IP, URL, or FQDN) by specifying it as a destination. In a single test group, you can add Azure VMs, on-premises machines, Office 365 URLs, Dynamics 365 URLs, and custom endpoints.
In the Azure portal, to create a test group in a connection monitor, you specify
:::image type="content" source="./media/connection-monitor-2-preview/add-test-config.png" alt-text="Screenshot that shows where to set up a test configuration in Connection Monitor.":::
+* **Test groups**: You can add one or more test groups to a connection monitor. These test groups can consist of multiple Azure or non-Azure endpoints.
+ * For selected Azure VMs or Azure virtual machine scale sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the NPM solution for non-Azure endpoints are automatically enabled once the creation of the connection monitor begins.
+ * If the selected virtual machine scale set is set to manual upgrade, you'll have to upgrade the scale set after the Network Watcher extension is installed in order to continue setting up the connection monitor with scale set endpoints. If the scale set is set to automatic upgrade, you don't need to take any action after the extension is installed.
+ * In the manual upgrade scenario above, you can consent to automatic upgrade of the scale set, along with automatic enablement of the Network Watcher extension, during the creation of the connection monitor. This eliminates the need to manually upgrade the scale set after installing the Network Watcher extension.
+
+ :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test groups and consent for auto-upgradation of V M S S in Connection Monitor.":::
+ ## Create alerts in Connection Monitor You can set up alerts on tests that are failing based on the thresholds set in test configurations.
In the Azure portal, to create alerts for a connection monitor, you specify valu
- **Enable rule upon creation**: Select this check box to enable the alert rule based on the condition. Clear this check box if you want to create the rule without enabling it. +
+After you complete all the steps, the process enables monitoring extensions for all endpoints that don't already have monitoring agents, and then creates the connection monitor.
+After creation succeeds, it takes about 5 minutes for the connection monitor to appear on the dashboard.
## Scale limits
network-watcher Connection Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-overview.md
> > To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](migrate-to-connection-monitor-from-network-performance-monitor.md), or [migrate from Connection Monitor (Classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before February 29, 2024.
+> [!IMPORTANT]
+> Connection Monitor now supports end-to-end connectivity checks from and to *Azure Virtual Machine Scale Sets*, enabling faster performance monitoring and network troubleshooting across scale sets.
+ Connection Monitor provides unified, end-to-end connection monitoring in Azure Network Watcher. The Connection Monitor feature supports hybrid and Azure cloud deployments. Network Watcher provides tools to monitor, diagnose, and view connectivity-related metrics for your Azure deployments. Here are some use cases for Connection Monitor: -- Your front-end web server virtual machine (VM) communicates with a database server VM in a multi-tier application. You want to check network connectivity between the two VMs.-- You want VMs in, for example, the East US region to ping VMs in the Central US region, and you want to compare cross-region network latencies.
+- Your front-end web server virtual machine (VM) or virtual machine scale set (VMSS) communicates with a database server VM in a multi-tier application. You want to check network connectivity between the two VMs or scale sets.
+- You want VMs/scale sets in, for example, the East US region to ping VMs/scale sets in the Central US region, and you want to compare cross-region network latencies.
- You have multiple on-premises office sites, one in Seattle, Washington, for example, and another in Ashburn, Virginia. Your office sites connect to Microsoft 365 URLs. For your users of Microsoft 365 URLs, you want to compare the latencies between Seattle and Ashburn. - Your hybrid application needs connectivity to an Azure storage account endpoint. Your on-premises site and your Azure application connect to the same endpoint. You want to compare the latencies of the on-premises site with the latencies of the Azure application.-- You want to check the connectivity between your on-premises setups and the Azure VMs that host your cloud application.
+- You want to check the connectivity between your on-premises setups and the Azure VMs/virtual machine scale sets that host your cloud application.
+- You want to check the connectivity from one or more instances of an Azure virtual machine scale set to your Azure or non-Azure multi-tier application.
-Connection Monitor combines the best of two features: the Network Watcher [Connection Monitor (Classic)](./network-watcher-monitoring-overview.md#monitor-communication-between-a-virtual-machine-and-an-endpoint) feature and the NPM [Service Connectivity Monitor](../azure-monitor/insights/network-performance-monitor-service-connectivity.md), [ExpressRoute Monitoring](../expressroute/how-to-npm.md), and [Performance monitoring](../azure-monitor/insights/network-performance-monitor-performance-monitor.md) feature.
+Connection Monitor combines the best of two features: the Network Watcher [Connection Monitor (Classic)](./network-watcher-monitoring-overview.md#monitor-communication-between-a-virtual-machine-and-an-endpoint) feature and the Network Performance Monitor [Service Connectivity Monitor](../azure-monitor/insights/network-performance-monitor-service-connectivity.md), [ExpressRoute Monitoring](../expressroute/how-to-npm.md), and [Performance monitoring](../azure-monitor/insights/network-performance-monitor-performance-monitor.md) feature.
Here are some benefits of Connection Monitor:
Here are some benefits of Connection Monitor:
* Support for connectivity checks that are based on HTTP, Transmission Control Protocol (TCP), and Internet Control Message Protocol (ICMP) * Metrics and Log Analytics support for both Azure and non-Azure test setups
-![Diagram showing how Connection Monitor interacts with Azure VMs, non-Azure hosts, endpoints, and data storage locations.](./media/connection-monitor-2-preview/hero-graphic.png)
+![Diagram showing how Connection Monitor interacts with Azure VMs, non-Azure hosts, endpoints, and data storage locations.](./media/connection-monitor-2-preview/hero-graphic-new.png)
To start using Connection Monitor for monitoring, do the following:
The following sections provide details for these steps.
## Install monitoring agents
+ > [!NOTE]
+ > Connection Monitor now supports automatic enablement of monitoring extensions for Azure and non-Azure endpoints, eliminating the need to manually install monitoring solutions while creating a connection monitor.
+
Connection Monitor relies on lightweight executable files to run connectivity checks. It supports connectivity checks from both Azure environments and on-premises environments. The executable file that you use depends on whether your VM is hosted on Azure or on-premises.
-### Agents for Azure virtual machines
+### Agents for Azure virtual machines and virtual machine scale sets
-To make Connection Monitor recognize your Azure VMs as monitoring sources, install the Network Watcher Agent virtual machine extension on them. This extension is also known as the *Network Watcher extension*. Azure virtual machines require the extension to trigger end-to-end monitoring and other advanced functionality.
+To make Connection Monitor recognize your Azure VMs or virtual machine scale sets as monitoring sources, install the Network Watcher Agent virtual machine extension on them. This extension is also known as the *Network Watcher extension*. Azure virtual machines and scale sets require the extension to trigger end-to-end monitoring and other advanced functionality.
-You can install the Network Watcher extension when you [create a VM](./connection-monitor.md#create-the-first-vm). You can also separately install, configure, and troubleshoot the Network Watcher extension for [Linux](../virtual-machines/extensions/network-watcher-linux.md) and [Windows](../virtual-machines/extensions/network-watcher-windows.md).
+You can install the Network Watcher extension when you [create a VM](./connection-monitor.md#create-the-first-vm) or when you [create a VM scale set](./connection-monitor-virtual-machine-scale-set.md#create-a-vm-scale-set). You can also separately install, configure, and troubleshoot the Network Watcher extension for [Linux](../virtual-machines/extensions/network-watcher-linux.md) and [Windows](../virtual-machines/extensions/network-watcher-windows.md).
Rules for a network security group (NSG) or firewall can block communication between the source and destination. Connection Monitor detects this issue and shows it as a diagnostics message in the topology. To enable connection monitoring, ensure that the NSG and firewall rules allow packets over TCP or ICMP between the source and destination.
+If you want to skip the installation process for enabling the Network Watcher extension, you can proceed with creating the connection monitor and allow automatic enablement of Network Watcher extensions on your Azure VMs and VM scale sets.
+
+ > [!Note]
+ > If a virtual machine scale set is set to manual upgrade, you'll have to upgrade the scale set after the Network Watcher extension is installed in order to continue setting up the connection monitor with scale set endpoints. If the scale set is set to automatic upgrade, you don't need to take any action after the extension is installed.
+ > Because Connection Monitor now supports unified automatic enablement of monitoring extensions, you can consent to automatic upgrade of a manually upgraded scale set, along with automatic enablement of the Network Watcher extension, during the creation of a connection monitor.
+ ### Agents for on-premises machines
-To make Connection Monitor recognize your on-premises machines as sources for monitoring, install the Log Analytics agent on the machines. Then, enable the [Network Performance Monitor solution](../network-watcher/connection-monitor-overview.md#enable-the-npm-solution-for-on-premises-machines). These agents are linked to Log Analytics workspaces, so you need to set up the workspace ID and primary key before the agents can start monitoring.
+To make Connection Monitor recognize your on-premises machines as sources for monitoring, install the Log Analytics agent on the machines. Then, enable the [Network Performance Monitor solution](../network-watcher/connection-monitor-overview.md#enable-the-network-performance-monitor-solution-for-on-premises-machines). These agents are linked to Log Analytics workspaces, so you need to set up the workspace ID and primary key before the agents can start monitoring.
To install the Log Analytics agent for Windows machines, see [Install Log Analytics agent on Windows](../azure-monitor/agents/agent-windows.md).
The script configures only Windows Firewall locally. If you have a network firew
The Log Analytics Windows agent can be multihomed to send data to multiple workspaces and System Center Operations Manager management groups. The Linux agent can send data only to a single destination, either a workspace or management group.
-#### Enable the NPM solution for on-premises machines
+#### Enable the Network Performance Monitor solution for on-premises machines
-To enable the NPM solution for on-premises machines, do the following:
+To enable the Network Performance Monitor solution for on-premises machines, do the following:
1. In the Azure portal, go to **Network Watcher**. 1. On the left pane, under **Monitoring**, select **Network Performance Monitor**.
- A list of workspaces with NPM solution enabled is displayed, filtered by **Subscriptions**.
-1. To add the NPM solution in a new workspace, select **Add NPM** at the top left.
+ A list of workspaces with the Network Performance Monitor solution enabled is displayed, filtered by **Subscriptions**.
+1. To add the Network Performance Monitor solution in a new workspace, select **Add NPM** at the top left.
1. Select the subscription and workspace in which you want to enable the solution, and then select **Create**. After you've enabled the solution, the workspace takes a couple of minutes to be displayed.
- :::image type="content" source="./media/connection-monitor/network-performance-monitor-solution-enable.png" alt-text="Screenshot showing how to add the NPM solution in Connection Monitor." lightbox="./media/connection-monitor/network-performance-monitor-solution-enable.png":::
+ :::image type="content" source="./media/connection-monitor/network-performance-monitor-solution-enable.png" alt-text="Screenshot showing how to add the Network Performance Monitor solution in Connection Monitor." lightbox="./media/connection-monitor/network-performance-monitor-solution-enable.png":::
+
+Unlike Log Analytics agents, the Network Performance Monitor solution can be configured to send data only to a single Log Analytics workspace.
-Unlike Log Analytics agents, the NPM solution can be configured to send data only to a single Log Analytics workspace.
+If you want to skip the installation process for enabling the monitoring solution, you can proceed with creating the connection monitor and allow automatic enablement of the monitoring solution on your on-premises machines.
## Enable Network Watcher on your subscription
Make sure that Network Watcher is [available for your region](https://azure.micr
Connection Monitor monitors communication at regular intervals. It informs you of changes in reachability and latency. You can also check the current and historical network topology between source agents and destination endpoints.
-Sources can be Azure VMs or on-premises machines that have an installed monitoring agent. Destination endpoints can be Microsoft 365 URLs, Dynamics 365 URLs, custom URLs, Azure VM resource IDs, IPv4, IPv6, FQDN, or any domain name.
+Sources can be Azure VMs or scale sets, or on-premises machines that have an installed monitoring agent. Destination endpoints can be Microsoft 365 URLs, Dynamics 365 URLs, custom URLs, Azure VM resource IDs, IPv4, IPv6, FQDN, or any domain name.
### Access Connection Monitor
Sources can be Azure VMs or on-premises machines that have an installed monitori
### Create a connection monitor
-In connection monitors that you create in Connection Monitor, you can add both on-premises machines and Azure VMs as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or any other URL or IP address.
+In connection monitors that you create in Connection Monitor, you can add both on-premises machines and Azure VMs or scale sets as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or any other URL or IP address.
Connection Monitor includes the following entities: * **Connection monitor resource**: A region-specific Azure resource. All the following entities are properties of a connection monitor resource.
-* **Endpoint**: A source or destination that participates in connectivity checks. Examples of endpoints include Azure VMs, on-premises agents, URLs, and IP addresses.
+* **Endpoint**: A source or destination that participates in connectivity checks. Examples of endpoints include Azure VMs or scale sets, on-premises agents, URLs, and IP addresses.
* **Test configuration**: A protocol-specific configuration for a test. Based on the protocol you select, you can define the port, thresholds, test frequency, and other properties. * **Test group**: The group that contains source endpoints, destination endpoints, and test configurations. A connection monitor can contain more than one test group. * **Test**: The combination of a source endpoint, destination endpoint, and test configuration. A test is the most granular level at which monitoring data is available. The monitoring data includes the percentage of checks that failed and the round-trip time (RTT).
All sources, destinations, and test configurations that you add to a test group
| 12 | C | E | Config 2 | | | | ++ ### Scale limits Connection monitors have the following scale limits:
Connection monitors have the following scale limits:
* Maximum sources and destinations per connection monitor: 100 * Maximum test configurations per connection monitor: 20
+Monitoring coverage for Azure and non-Azure resources:
+
+Connection Monitor now provides five coverage levels for monitoring compound resources, that is, virtual networks, subnets, and virtual machine scale sets. The coverage level is the percentage of instances of a compound resource that are actually included in monitoring that resource as a source or destination.
+You can manually select a coverage level from Low, Below Average, Average, Above Average, and Full to define the approximate percentage of instances to include in monitoring a particular resource as an endpoint.
+ ## Analyze monitoring data and set alerts After you create a connection monitor, sources check connectivity to destinations based on your test configuration.
+While monitoring endpoints, Connection Monitor re-evaluates the status of endpoints once every 24 hours. So if a VM gets deallocated or is turned off during a 24-hour cycle, Connection Monitor reports an indeterminate state, due to the absence of data in the network path, until the end of the 24-hour cycle, when it re-evaluates the status of the VM and reports it as deallocated.
+
+ > [!NOTE]
+ > When you monitor an Azure virtual machine scale set, instances of the scale set selected for monitoring (either by you or picked by default as part of the selected coverage level) might get deallocated or scaled down in the middle of the 24-hour cycle. During this period, Connection Monitor can't recognize the action and ends up reporting an indeterminate state due to the absence of data.
+ > You're advised to allow random selection of scale set instances within coverage levels, instead of selecting particular instances for monitoring, to minimize the risk that deallocated or scaled-down instances go undiscovered during a 24-hour cycle and lead to an indeterminate state for the connection monitor.
+ ### Checks in a test Depending on the protocol that you select in the test configuration, Connection Monitor runs a series of checks for the source-destination pair. The checks run according to the test frequency that you select.
If you use HTTP, the service calculates the number of HTTP responses that return
If you use TCP or ICMP, the service calculates the packet-loss percentage to determine the percentage of failed checks. To calculate RTT, the service measures the time taken to receive the acknowledgment (ACK) for the packets that were sent. If you've enabled traceroute data for your network tests, you can view the hop-by-hop loss and latency for your on-premises network. + ### States of a test Depending on the data that the checks return, tests can have the following states:
network-watcher Connection Monitor Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
+
+ Title: Tutorial - Monitor network communication between virtual machine scale sets using the Azure portal
+description: In this tutorial, learn how to monitor network communication between two virtual machine scale sets with Azure Network Watcher's connection monitor capability.
+
+documentationcenter: na
+
+editor: ''
+tags: azure-resource-manager
+# Customer intent: I need to monitor communication between a VM scale set and another VM scale set. If the communication fails, I need to know why, so that I can resolve the problem.
Last updated : 05/24/2022
+# Tutorial: Monitor network communication between two virtual machine scale sets using the Azure portal
+
+> [!NOTE]
+> This tutorial covers Connection Monitor (classic). Try the new and improved [Connection Monitor](connection-monitor-overview.md) to experience enhanced connectivity monitoring.
+
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new connection monitors in Connection Monitor (classic), but you can continue to use existing connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate from Connection Monitor (classic) to the new Connection Monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md) in Azure Network Watcher before 29 February 2024.
+
+Successful communication between a virtual machine scale set and an endpoint, such as another VM, can be critical for your organization. Sometimes, configuration changes are introduced that break communication. In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a VM scale set and a VM
+> * Monitor communication between VMs with the connection monitor capability of Network Watcher
+> * Generate alerts on Connection Monitor metrics
+> * Diagnose a communication problem between two VM scale sets, and learn how you can resolve it
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+## Create a VM scale set
+
+In this section, you create a load balancer and a VM scale set that sits behind it.
+
+## Create a load balancer
+
+Azure [load balancer](../load-balancer/load-balancer-overview.md) distributes incoming traffic among healthy virtual machine instances.
+
+First, create a public Standard Load Balancer by using the portal. The name and public IP address you create are automatically configured as the load balancer's front end.
+
+1. In the search box, type **load balancer**. Under **Marketplace** in the search results, select **Load balancer**.
+1. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+
+ | Setting | Value |
+ | | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new** and type *myVMSSResourceGroup* in the text box.|
+ | Name | *myLoadBalancer* |
+ | Region | Select **East US**. |
+ | Type | Select **Public**. |
+ | SKU | Select **Standard**. |
+ | Public IP address | Select **Create new**. |
+ | Public IP address name | *myPip* |
+ | Assignment| Static |
+ | Availability zone | Select **Zone-redundant**. |
+
+1. When you are done, select **Review + create**.
+1. After it passes validation, select **Create**.
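If you prefer scripting, the same load balancer can be created with the Azure CLI. This is a sketch under the assumption that the names match the portal steps above and that East US is the target region:

```azurecli
# Create the resource group, a zone-redundant static public IP, and a
# Standard public load balancer. Names match the portal steps above.
az group create --name myVMSSResourceGroup --location eastus

az network public-ip create \
  --resource-group myVMSSResourceGroup \
  --name myPip \
  --sku Standard \
  --allocation-method Static \
  --zone 1 2 3

az network lb create \
  --resource-group myVMSSResourceGroup \
  --name myLoadBalancer \
  --sku Standard \
  --public-ip-address myPip \
  --backend-pool-name myBackendPool
```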
++
+## Create virtual machine scale set
+
+You can deploy a scale set with a Windows Server image or Linux image such as RHEL, CentOS, Ubuntu, or SLES.
+
+1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual machine scale sets**. Select **Create** on the **Virtual machine scale sets** page, which will open the **Create a virtual machine scale set** page.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and select *myVMSSResourceGroup* from resource group list.
+1. Type *myScaleSet* as the name for your scale set.
+1. In **Region**, select a region that is close to your area.
+1. Under **Orchestration**, ensure the *Uniform* option is selected for **Orchestration mode**.
+1. Select a marketplace image for **Image**. In this example, we have chosen *Ubuntu Server 18.04 LTS*.
+1. Enter your desired username, and select which authentication type you prefer.
+ - A **Password** must be at least 12 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. For more information, see [username and password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
+ - If you select a Linux OS disk image, you can instead choose **SSH public key**. Only provide your public key, such as *~/.ssh/id_rsa.pub*. You can use the Azure Cloud Shell from the portal to [create and use SSH keys](../virtual-machines/linux/mac-create-ssh-keys.md).
+
+
+1. Select **Next** to move to the other pages.
+1. Leave the defaults for the **Instance** and **Disks** pages.
+1. On the **Networking** page, under **Load balancing**, select **Yes** to put the scale set instances behind a load balancer.
+1. In **Load balancing options**, select **Azure load balancer**.
+1. In **Select a load balancer**, select *myLoadBalancer* that you created earlier.
+1. For **Select a backend pool**, select **Create new**, type *myBackendPool*, then select **Create**.
+1. When you are done, select **Review + create**.
+1. After it passes validation, select **Create** to deploy the scale set.
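The equivalent scale set deployment can also be scripted. A hedged sketch, assuming the `UbuntuLTS` image alias stands in for the Ubuntu Server 18.04 LTS image chosen in the portal:

```azurecli
# Sketch: a Uniform-orchestration scale set behind the load balancer created earlier.
az vmss create \
  --resource-group myVMSSResourceGroup \
  --name myScaleSet \
  --image UbuntuLTS \
  --orchestration-mode Uniform \
  --instance-count 2 \
  --admin-username azureuser \
  --generate-ssh-keys \
  --lb myLoadBalancer \
  --backend-pool-name myBackendPool
```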
++
+Once the scale set is created, follow the steps below to enable the Network Watcher extension in the scale set.
+
+1. Under **Settings**, select **Extensions**. Select **Add extension**, and then select **Network Watcher Agent for Linux** (this tutorial uses an Ubuntu image; for a Windows image, select **Network Watcher Agent for Windows**).
++
+
+1. Under **Network Watcher Agent for Linux**, select **Create**. Under **Install extension**, select **OK**, and then under **Extensions**, select **OK**.
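The extension can also be installed from the CLI. A sketch, assuming the Linux agent for the Ubuntu image used in this tutorial (use `NetworkWatcherAgentWindows` for Windows images):

```azurecli
# Install the Network Watcher agent on the scale set.
az vmss extension set \
  --resource-group myVMSSResourceGroup \
  --vmss-name myScaleSet \
  --name NetworkWatcherAgentLinux \
  --publisher Microsoft.Azure.NetworkWatcher

# For scale sets with a manual upgrade policy, push the change to the instances.
az vmss update-instances \
  --resource-group myVMSSResourceGroup \
  --name myScaleSet \
  --instance-ids "*"
```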
+
+
+### Create the VM
+
+Complete the steps in [create a VM](./connection-monitor.md#create-the-first-vm) again, with the following changes:
+
+|Step|Setting|Value|
+||||
+| 1 | Select a version of **Ubuntu Server** | |
+| 3 | Name | myVm2 |
+| 3 | Authentication type | Paste your SSH public key or select **Password**, and enter a password. |
+| 3 | Resource group | Select **Use existing** and select **myResourceGroup**. |
+| 6 | Extensions | **Network Watcher Agent for Linux** |
+
+The VM takes a few minutes to deploy. Wait for the VM to finish deploying before continuing with the remaining steps.
++
+## Create a connection monitor
+
+Create a connection monitor to monitor communication over TCP port 22 from *myVmss1* to *myVm2*.
+
+1. On the left side of the portal, select **All services**.
+2. Start typing *network watcher* in the **Filter** box. When **Network Watcher** appears in the search results, select it.
+3. Under **MONITORING**, select **Connection monitor**.
+4. Select **+ Add**.
+5. Enter or select the information for the connection you want to monitor, and then select **Add**. In the example shown in the following picture, the connection monitored is from the *myVmss1* VM scale set to the *myVm2* VM over port 22:
+
+ | Setting | Value |
+ | | |
+ | Name | myVmss1-myVm2(22) |
+ | Source | |
+ | Virtual machine | myVmss1 |
+ | Destination | |
+ | Select a virtual machine | |
+ | Virtual machine | myVm2 |
+ | Port | 22 |
+
+ :::image type="content" source="./media/connection-monitor/add-connection-monitor.png" alt-text="Screenshot that shows addition of Connection Monitor.":::
+
+## View a connection monitor
+
+1. Complete steps 1-3 in [Create a connection monitor](#create-a-connection-monitor) to view connection monitoring. You see a list of existing connection monitors, as shown in the following picture:
+
+ :::image type="content" source="./media/connection-monitor/connection-monitors.png" alt-text="Screenshot that shows Connection Monitor.":::
+
+2. Select the monitor with the name **myVmss1-myVm2(22)**, as shown in the previous picture, to see details for the monitor, as shown in the following picture:
+
+ :::image type="content" source="./media/connection-monitor/vm-monitor.png" alt-text="Screenshot that shows virtual machine monitor.":::
+
+ Note the following information:
+
+ | Item | Value | Details |
+ | --- | --- | --- |
+ | Status | Reachable | Lets you know whether the endpoint is reachable or not. |
+ | AVG. ROUND-TRIP | | Lets you know the round-trip time to make the connection, in milliseconds. Connection monitor probes the connection every 60 seconds, so you can monitor latency over time. |
+ | Hops | | Connection monitor lets you know the hops between the two endpoints. In this example, the connection is between two VMs in the same virtual network, so there is only one hop, to the 10.0.0.5 IP address. If any existing system or custom routes route traffic between the VMs through a VPN gateway or network virtual appliance, for example, additional hops are listed. |
+ | STATUS | | The green check marks for each endpoint let you know that each endpoint is healthy. |
+
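You can also list and inspect monitors from the CLI. A sketch, assuming the East US region and the monitor name used above:

```azurecli
# List connection monitors in a region, then show one by name.
az network watcher connection-monitor list --location eastus --output table

az network watcher connection-monitor show \
  --location eastus \
  --name "myVmss1-myVm2(22)"
```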
+## Generate alerts
+
+Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. A generated alert can automatically run one or more actions, such as to notify someone or start another process. When setting an alert rule, the resource that you target determines the list of available metrics that you can use to generate alerts.
+
+1. In Azure portal, select the **Monitor** service, and then select **Alerts** > **New alert rule**.
+2. Click **Select target**, and then select the resources that you want to target. Select the **Subscription**, and set **Resource type** to filter down to the Connection Monitor that you want to use.
+
+ :::image type="content" source="./media/connection-monitor/set-alert-rule.png" alt-text="Screenshot of alert rule.":::
+
+1. Once you have selected a resource to target, select **Add criteria**. Network Watcher has [metrics on which you can create alerts](../azure-monitor/alerts/alerts-metric-near-real-time.md#metrics-and-dimensions-supported). Set **Available signals** to the metrics ProbesFailedPercent and AverageRoundtripMs:
+
+ :::image type="content" source="./media/connection-monitor/set-alert-signals.png" alt-text="Screenshot of alert signals.":::
+
+1. Fill out the alert details, like the alert rule name, description, and severity. You can also add an action group to the alert to automate and customize the alert response.
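The same alert can be scripted with the Azure CLI. A hedged sketch, where the scope is a placeholder for the connection monitor's full resource ID and the threshold is illustrative:

```azurecli
# Sketch: alert when the ProbesFailedPercent metric averages above 20%.
az monitor metrics alert create \
  --name myConnectionMonitorAlert \
  --resource-group myResourceGroup \
  --scopes "<connection-monitor-resource-id>" \
  --condition "avg ProbesFailedPercent > 20" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "Alert when more than 20% of connectivity probes fail"
```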
+
+## View a problem
+
+By default, Azure allows communication over all ports between VMs in the same virtual network. Over time, you, or someone in your organization, might override Azure's default rules, inadvertently causing a communication failure. Complete the following steps to create a communication problem and then view the connection monitor again:
+
+1. In the search box at the top of the portal, enter *myResourceGroup*. When the **myResourceGroup** resource group appears in the search results, select it.
+2. Select the **myVm2-nsg** network security group.
+3. Select **Inbound security rules**, and then select **Add**, as shown in the following picture:
+
+ :::image type="content" source="./media/connection-monitor/inbound-security-rules.png" alt-text="Screenshot of network security rules.":::
+
+4. The default rule that allows communication between all VMs in a virtual network is the rule named **AllowVnetInBound**. Create a rule with a higher priority (lower number) than the **AllowVnetInBound** rule that denies inbound communication over port 22. Select, or enter, the following information, accept the remaining defaults, and then select **Add** (a CLI equivalent appears after these steps):
+
+ | Setting | Value |
+ | | |
+ | Destination port ranges | 22 |
+ | Action | Deny |
+ | Priority | 100 |
+ | Name | DenySshInbound |
+
+5. Since connection monitor probes at 60-second intervals, wait a few minutes and then on the left side of the portal, select **Network Watcher**, then **Connection monitor**, and then select the **myVmss1-myVm2(22)** monitor again. The results are different now, as shown in the following picture:
+
+ :::image type="content" source="./media/connection-monitor/vm-monitor-fault.png" alt-text="Screenshot of virtual machine at fault.":::
+
+ You can see that there's a red exclamation icon in the status column for the **myvm2529** network interface.
+
+6. To learn why the status has changed, select 10.0.0.5 in the previous picture. Connection monitor informs you that the reason for the communication failure is: *Traffic blocked due to the following network security group rule: UserRule_DenySshInbound*.
+
+ If you didn't know that someone had implemented the security rule you created in step 4, you'd learn from connection monitor that the rule is causing the communication problem. You could then change, override, or remove the rule, to restore communication between the VMs.
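As referenced in step 4, the same deny rule can be created with the Azure CLI. A sketch assuming the resource names used in this tutorial:

```azurecli
# Deny inbound SSH (port 22) with a priority that outranks AllowVnetInBound.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVm2-nsg \
  --name DenySshInbound \
  --priority 100 \
  --direction Inbound \
  --access Deny \
  --protocol "*" \
  --destination-port-ranges 22
```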
+
+## Clean up resources
+
+When no longer needed, delete the resource group and all of the resources it contains:
+
+1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
+2. Select **Delete resource group**.
+3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
+
+## Next steps
+
+In this tutorial, you learned how to monitor a connection between two VMs. You learned that a network security group rule prevented communication to a VM. To learn about all of the different responses connection monitor can return, see [response types](network-watcher-connectivity-overview.md#response). You can also monitor a connection between a VM and a fully qualified domain name, a uniform resource identifier, or an IP address.
+
+At some point, you may find that resources in a virtual network are unable to communicate with resources in other networks connected by an Azure virtual network gateway. Advance to the next tutorial to learn how to diagnose a problem with a virtual network gateway.
+
+> [!div class="nextstepaction"]
+> [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md)
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Flow logs are the source of truth for all network activity in your cloud environ
- All traffic flows in your network are evaluated using the rules in the applicable NSG.
- The result of these evaluations is NSG Flow Logs. Flow logs are collected through the Azure platform and don't require any change to the customer resources.
- Note: Rules are of two types - terminating & non-terminating, each with different logging behaviors.
+ - NSG Deny rules are terminating. The NSG denying the traffic logs it in the flow logs, and processing stops after any NSG denies the traffic.
+ - NSG Allow rules are non-terminating, which means that even if one NSG allows the traffic, processing continues to the next NSG. The last NSG allowing the traffic logs it to the flow logs.
- NSG Flow Logs are written to storage accounts from where they can be accessed.
- You can export, process, analyze, and visualize Flow Logs using tools like Traffic Analytics, Splunk, Grafana, Stealthwatch, etc.
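For instance, enabling flow logs for an NSG can be scripted. A minimal sketch with placeholder resource names:

```azurecli
# Sketch: enable NSG flow logs, writing them to a storage account.
az network watcher flow-log create \
  --location eastus \
  --resource-group myResourceGroup \
  --name myNsgFlowLog \
  --nsg myNsg \
  --storage-account myStorageAccount \
  --enabled true
```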
postgresql Concepts Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-connection-pool.md
Previously updated : 08/03/2021 Last updated : 05/31/2022

# Azure Database for PostgreSQL – Hyperscale (Citus) connection pooling
actively run in the database doesn't change. Instead, PgBouncer queues excess
connections and runs them when the database is ready. Hyperscale (Citus) is now offering a managed instance of PgBouncer for server
-groups. It supports up to 2,000 simultaneous client connections. To connect
-through PgBouncer, follow these steps:
+groups. It supports up to 2,000 simultaneous client connections. Additionally,
+if a server group has [high availability](concepts-high-availability.md) (HA)
+enabled, then so does its managed PgBouncer.
+
+To connect through PgBouncer, follow these steps:
1. Go to the **Connection strings** page for your server group in the Azure portal.
postgresql Howto Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-connect.md
+
+ Title: Connect to server - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn how to connect to and query a Hyperscale (Citus) server group
Last updated : 05/25/2022
+# Connect to a server group
+
+Choose your database client below to learn how to configure it to connect to
+Hyperscale (Citus).
+
+# [pgAdmin](#tab/pgadmin)
+
+[pgAdmin](https://www.pgadmin.org/) is a popular and feature-rich open source
+administration and development platform for PostgreSQL.
+
+1. [Download](https://www.pgadmin.org/download/) and install pgAdmin.
+
+2. Open the pgAdmin application on your client computer. From the Dashboard,
+ select **Add New Server**.
+
+ ![pgAdmin dashboard](../media/howto-hyperscale-connect/pgadmin-dashboard.png)
+
+3. Choose a **Name** in the General tab. Any name will work.
+
+ ![pgAdmin general connection settings](../media/howto-hyperscale-connect/pgadmin-general.png)
+
+4. Enter connection details in the Connection tab.
+
+ ![pgAdmin db connection settings](../media/howto-hyperscale-connect/pgadmin-connection.png)
+
+ Customize the following fields:
+
+ * **Host name/address**: Obtain this value from the **Overview** page for your
+ server group in the Azure portal. It's listed there as **Coordinator name**.
+ It will be of the form, `c.myservergroup.postgres.database.azure.com`.
+ * **Maintenance database**: use the value `citus`.
+ * **Username**: use the value `citus`.
+ * **Password**: the connection password.
+ * **Save password**: enable if desired.
+
+5. In the SSL tab, set **SSL mode** to **Require**.
+
+ ![pgAdmin ssl settings](../media/howto-hyperscale-connect/pgadmin-ssl.png)
+
+6. Select **Save** to save and connect to the database.
+
+# [psql](#tab/psql)
+
+The [psql utility](https://www.postgresql.org/docs/current/app-psql.html) is a
+terminal-based front-end to PostgreSQL. It enables you to type in queries
+interactively, issue them to PostgreSQL, and see the query results.
+
+1. Install psql. It's included with a [PostgreSQL
+ installation](https://www.postgresql.org/docs/current/tutorial-install.html),
+ or available separately in package managers for several operating systems.
+
+2. Obtain the connection string. In the server group page, select the
+ **Connection strings** menu item.
+
+ ![get connection string](../media/quickstart-connect-psql/get-connection-string.png)
+
+ Find the string marked **psql**. It will be of the form, `psql
+ "host=c.servergroup.postgres.database.azure.com port=5432 dbname=citus
+ user=citus password={your_password} sslmode=require"`
+
+ * Copy the string.
+ * Replace "{your\_password}" with the administrative password you chose earlier.
+ * Notice the hostname starts with a `c.`, for instance
+ `c.demo.postgres.database.azure.com`. This prefix indicates the
+ coordinator node of the server group.
+ * The default dbname and username are `citus` and can't be changed.
+
+3. In a local terminal prompt, paste the psql connection string, *substituting
+ your password for the string `{your_password}`*, then press enter.
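As a quick verification, you can also run a single query non-interactively. A sketch with a placeholder host and password:

```bash
# Verify the connection by running one query and exiting.
psql "host=c.servergroup.postgres.database.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require" \
  -c "SELECT version();"
```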
+++
+**Next steps**
+
+* Troubleshoot [connection issues](howto-troubleshoot-common-connection-issues.md).
+* [Verify TLS](howto-ssl-connection-security.md) certificates in your
+ connections.
+* Now that you can connect to the database, learn how to [build scalable
+ apps](howto-build-scalable-apps-overview.md).
purview Concept Guidelines Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-guidelines-pricing.md
Microsoft Purview enables a unified governance experience by providing a single
## Factors impacting Azure Pricing
-There are **direct** and **indirect** costs that need to be considered while planning the Microsoft Purview budgeting and cost management.
+There are [**direct**](#direct-costs) and [**indirect**](#indirect-costs) costs that need to be considered while planning the Microsoft Purview budgeting and cost management.
-### Direct costs
+## Direct costs
Direct costs impacting Microsoft Purview pricing are based on the following three dimensions:
-- **Elastic data map**
-- **Automated scanning & classification**
-- **Advanced resource sets**
+- [**Elastic data map**](#elastic-data-map)
+- [**Automated scanning & classification**](#automated-scanning-classification-and-ingestion)
+- [**Advanced resource sets**](#advanced-resource-sets)
-#### Elastic data map
+### Elastic data map
- The **Data map** is the foundation of the Microsoft Purview architecture and so needs to be up to date with asset information in the data estate at any given point
Direct costs impacting Microsoft Purview pricing are based on the following thre
- However, the data map scales automatically between the minimal and maximal limits of that elasticity window, to cater to changes in the data map with respect to two key factors - **operation throughput** and **metadata storage**
-##### Operation throughput
+#### Operation throughput
- An event-driven factor based on the Create, Read, Update, and Delete operations performed on the data map
- Some examples of the data map operations would be:
Direct costs impacting Microsoft Purview pricing are based on the following thre
- The **burst duration** is the percentage of the month that such bursts (in elasticity) are expected because of growing metadata or higher number of operations on the data map
-##### Metadata storage
+#### Metadata storage
- If the number of assets in the data estate is reduced, and the assets are then removed from the data map through subsequent incremental scans, the storage component automatically shrinks and the data map scales down
-#### Automated scanning, classification and ingestion
+### Automated scanning, classification, and ingestion
There are two major automated processes that can trigger ingestion of metadata into Microsoft Purview: 1. Automatic scans using native [connectors](azure-purview-connector-overview.md). This process includes three main steps:
There are two major automated processes that can trigger ingestion of metadata i
2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines. This process includes:
   - Ingestion of metadata and lineage into Microsoft Purview if the Microsoft Purview account is connected to any Azure Data Factory or Azure Synapse pipelines.
-##### 1. Automatic scans using native connectors
+#### 1. Automatic scans using native connectors
- A **full scan** processes all assets within a selected scope of a data source whereas an **incremental scan** detects and processes assets, which have been created, modified, or deleted since the previous successful scan - All scans (full or Incremental scans) will pick up **updated, modified, or deleted** assets
There are two major automated processes that can trigger ingestion of metadata i
- Align your scan schedules with the size of your Self-Hosted Integration Runtime (SHIR) virtual machines to avoid extra costs linked to the virtual machines
-##### 2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines
+#### 2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines
- Metadata and lineage are ingested from Azure Data Factory or Azure Synapse pipelines every time the pipelines run in the source system.
-#### Advanced resource sets
+### Advanced resource sets
- Microsoft Purview uses **resource sets** to address the challenge of mapping large numbers of data assets to a single logical resource by providing the ability to scan all the files in the data lake and find patterns (GUID, localization patterns, etc.) to group them as a single asset in the data map
There are two major automated processes that can trigger ingestion of metadata i
- It is important to note that billing for Advanced Resource Sets is based on the compute used by the offline tier to aggregate resource set information and is dependent on the size/number of resource sets in your catalog
-### Indirect costs
+## Indirect costs
Indirect costs impacting Microsoft Purview pricing to be considered are:
remote-rendering System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/system-requirements.md
The following software must be installed:
For development with Unity, install a supported version of Unity [(download)](https://unity3d.com/get-unity/download). We recommend using Unity Hub for managing installations. > [!IMPORTANT]
-> In addition to the supported versions mentioned below, make sure to check out the [Unity known issues page](/mixed-reality/develop/unity/known-issues).
+> In addition to the supported versions mentioned below, make sure to check out the [Unity known issues page](/windows/mixed-reality/develop/unity/known-issues).
Make sure to include the following modules in your Unity installation: * **UWP** - Universal Windows Platform Build Support
For Unity 2020, use latest version of Unity 2020.3.
## Next steps
-* [Quickstart: Render a model with Unity](../quickstarts/render-model.md)
+* [Quickstart: Render a model with Unity](../quickstarts/render-model.md)
remote-rendering Convert Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/quickstarts/convert-model.md
The conversion script generates a *Shared Access Signature (SAS)* URI for the co
The SAS URI created by the conversion script will only be valid for 24 hours. However, after it expired you do not need to convert your model again. Instead, you can create a new SAS in the portal as described in the next steps: 1. Go to the [Azure portal](https://www.portal.azure.com)
-1. Click on your **Storage account** resource:
+2. Click on your **Storage account** resource:
+ ![Screenshot that highlights the selected Storage account resource.](./media/portal-storage-accounts.png)
-1. In the following screen, click on **Storage explorer** in the left panel and find your output model (*.arrAsset* file) in the *arroutput* blob storage container. Right-click on the file and select **Get Shared Access Signature** from the context menu:
-![Signature Access](./media/portal-storage-explorer.png)
-1. A new screen opens where you can select an expiry date. Press **Create**, and copy the URI that is shown in the next dialog. This new URI replaces the temporary URI that the script created.
+
+3. In the following screen, click on **Storage explorer** in the left panel and find your output model (*.arrAsset* file) in the *arroutput* blob storage container. Right-click on the file and select **Get Shared Access Signature** from the context menu:
+
+ ![Signature Access](./media/portal-storage-explorer.png)
+
+4. A new screen opens where you can select an expiry date. Press **Create**, and copy the URI that is shown in the next dialog. This new URI replaces the temporary URI that the script created.
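If you prefer scripting over the portal, a read-only SAS with a custom expiry can also be generated from the Azure CLI. A sketch with placeholder account, blob, and expiry values:

```azurecli
# Sketch: generate a read-only SAS URI for a converted model.
az storage blob generate-sas \
  --account-name <storage-account> \
  --container-name arroutput \
  --name <model>.arrAsset \
  --permissions r \
  --expiry 2025-01-01T00:00:00Z \
  --https-only \
  --full-uri
```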
## Next steps
route-server Vmware Solution Default Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/vmware-solution-default-route.md
If advertising less specific prefixes isn't possible as in the option described
:::image type="content" source="./media/scenarios/vmware-solution-to-on-premises.png" alt-text="Diagram of AVS to on-premises communication with Route Server in two regions.":::
-Note that some sort of encapsulation protocol such as VXLAN or IPsec is required between the NVAs. The reason why encapsulation is needed is because the NVA NICs would learn the routes from Azure Route Server with the NVA as next hop, and create a routing loop.
+Note that some sort of encapsulation protocol, such as VXLAN or IPsec, is required between the NVAs. Encapsulation is needed because the NVA NICs would otherwise learn the routes from Azure Route Server with the NVA as the next hop, creating a routing loop. An alternative to using an overlay is to use secondary NICs in the NVA that don't learn the routes from Azure Route Server, and to configure UDRs so that Azure can route traffic to the remote environment over those NICs. You can find more details in [Enterprise-scale network topology and connectivity for Azure VMware Solution][caf_avs_nw].
The main difference between this dual-VNet design and the previously described single-VNet design is that with two VNets you have full control over what is advertised to each ExpressRoute circuit, which allows for a more dynamic and granular configuration. In comparison, in the single-VNet design described earlier in this document, a common set of supernets or less specific prefixes is sent down both circuits to attract traffic to the VNet. Additionally, in the single-VNet design there is a static configuration component in the UDRs required in the Gateway Subnet. Hence, although less cost-effective (two ExpressRoute gateways and two sets of NVAs are required), the double-VNet design might be a better alternative for very dynamic routing environments.
The main difference between this dual VNet design and the previously described s
* [Learn how Azure Route Server works with ExpressRoute](expressroute-vpn-support.md) * [Learn how Azure Route Server works with a network virtual appliance](resource-manager-template-samples.md)+
+[caf_avs_nw]: /azure/cloud-adoption-framework/scenarios/azure-vmware/eslz-network-topology-connectivity
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-intro.md
The output of AI enrichment is either a [fully text-searchable index](search-wha
### Check content in a knowledge store
-In Azure Storage, a [knowledge store](knowledge-store-concept-intro.md) can assume the following forms: a blob container of JSON documents, a blob container of image objects, or tables in Table Storage. You can use [Storage Browser](knowledge-store-view-storage-explorer.md), [Power BI](knowledge-store-connect-power-bi.md), or any app that connects to Azure Storage to access your content.
+In Azure Storage, a [knowledge store](knowledge-store-concept-intro.md) can assume the following forms: a blob container of JSON documents, a blob container of image objects, or tables in Table Storage. You can use [Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md), [Power BI](knowledge-store-connect-power-bi.md), or any app that connects to Azure Storage to access your content.
+ A blob container captures enriched documents in their entirety, which is useful if you're creating a feed into other processes.
search Knowledge Store Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-concept-intro.md
Title: Knowledge store concepts
-description: Send enriched documents to Azure Storage where you can view, reshape, and consume enriched documents in Azure Cognitive Search and in other applications.
+description: A knowledge store is enriched content created by an Azure Cognitive Search skillset and saved to Azure Storage for use in other apps and non-search scenarios.
Previously updated : 09/02/2021 Last updated : 05/31/2022 # Knowledge store in Azure Cognitive Search
-Knowledge store is a data sink created by a Cognitive Search [AI enrichment pipeline](cognitive-search-concept-intro.md) that stores enriched content in tables and blob containers in Azure Storage for independent analysis or downstream processing in non-search scenarios, like knowledge mining.
+Knowledge store is a data sink created by a [Cognitive Search enrichment pipeline](cognitive-search-concept-intro.md) that stores AI-enriched content in tables and blob containers in Azure Storage for independent analysis or downstream processing in non-search scenarios like knowledge mining.
-If you have used cognitive skills in the past, you already know that *skillsets* move a document through a sequence of enrichments that invoke atomic transformations, such as recognizing entities or translating text. The outcome can be a search index, or projections in a knowledge store. The two outputs, search index and knowledge store, are mutually exclusive products of the same pipeline; derived from the same inputs, but resulting in output that is structured, stored, and used in different applications.
+If you've used cognitive skills in the past, you already know that enriched content is created by *skillsets*. Skillsets move a document through a sequence of enrichments that invoke atomic transformations, such as recognizing entities or translating text.
+
+Output can be a search index, or projections in a knowledge store. The two outputs, search index and knowledge store, are mutually exclusive products of the same pipeline. They are derived from the same inputs, but their content is structured, stored, and used in different applications.
:::image type="content" source="media/knowledge-store-concept-intro/knowledge-store-concept-intro.svg" alt-text="Pipeline with skillset" border="false"::: Physically, a knowledge store is [Azure Storage](../storage/common/storage-account-overview.md), either Azure Table Storage, Azure Blob Storage, or both. Any tool or process that can connect to Azure Storage can consume the contents of a knowledge store.
-Viewed through Storage Browser, a knowledge store looks like any other collection of tables, objects, or files. The following example shows a knowledge store composed of three tables with fields that are either carried forward from the data source, or created through enrichments (see "sentiment score" and "translated_text").
+Viewed through Azure portal, a knowledge store looks like any other collection of tables, objects, or files. The following screenshot shows a knowledge store composed of three tables. You can adopt a naming convention, such as a "kstore" prefix, to keep your content together.
:::image type="content" source="media/knowledge-store-concept-intro/kstore-in-storage-explorer.png" alt-text="Skills read and write from enrichment tree" border="true":::
The type of projection you specify in this structure determines the type of stor
## Create a knowledge store
-To create knowledge store, use the portal or an API. You will need [Azure Storage](../storage/index.yml), a [skillset](cognitive-search-working-with-skillsets.md), and an [indexer](search-indexer-overview.md). Because indexers require a search index, you will also need to provide an index definition.
+To create a knowledge store, use the portal or an API.
+
+You'll need [Azure Storage](../storage/index.yml), a [skillset](cognitive-search-working-with-skillsets.md), and an [indexer](search-indexer-overview.md). Because indexers require a search index, you'll also need to provide an index definition.
Go with the portal approach for the fastest route to a finished knowledge store. Or, choose the REST API for a deeper understanding of how objects are defined and related.
Go with the portal approach for the fastest route to a finished knowledge store.
[**Create your first knowledge store in four steps**](knowledge-store-create-portal.md) using the **Import data** wizard.
-1. [Sign in to Azure portal](https://portal.azure.com).
-
-1. Define your data source.
+1. Define a data source that contains the data you want to enrich.
-1. Define your skillset and specify a knowledge store.
+1. Define a skillset. The skillset specifies enrichment steps and the knowledge store.
-1. Define an index schema. The wizard requires it and can infer one for you.
+1. Define an index schema. You might not need one, but indexers require it. The wizard can infer an index.
-1. Complete the wizard. Extraction, enrichment, and storage occur in this last step.
+1. Complete the wizard. Data extraction, enrichment, and knowledge store creation occur in this last step.
-The wizard automates tasks that you would otherwise have to be handled manually. Specifically, both shaping and projections (definitions of physical data structures in Azure Storage) are created for you.
+The wizard automates several tasks. Specifically, both shaping and projections (definitions of physical data structures in Azure Storage) are created for you.
### [**REST**](#tab/kstore-rest)
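A knowledge store is declared in the `knowledgeStore` property of a skillset definition sent to the REST API. The following is a hedged, minimal sketch of such a request; the service name, admin key, storage connection string, skill, and projection source are placeholders for illustration, not the exact definitions used elsewhere in this documentation:

```bash
# Sketch: a skillset that projects enriched documents into one Azure Storage table.
curl -X PUT "https://<service-name>.search.windows.net/skillsets/my-skillset?api-version=2020-06-30" \
  -H "Content-Type: application/json" \
  -H "api-key: <admin-api-key>" \
  -d '{
    "name": "my-skillset",
    "skills": [
      {
        "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
        "context": "/document",
        "inputs": [ { "name": "text", "source": "/document/content" } ],
        "outputs": [ { "name": "keyPhrases", "targetName": "keyPhrases" } ]
      }
    ],
    "knowledgeStore": {
      "storageConnectionString": "<storage-connection-string>",
      "projections": [
        {
          "tables": [
            {
              "tableName": "kstoreDocuments",
              "generatedKeyName": "DocumentId",
              "source": "/document"
            }
          ],
          "objects": [],
          "files": []
        }
      ]
    }
  }'
```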
For .NET developers, use the [KnowledgeStore Class](/dotnet/api/azure.search.doc
## Connect with apps
-Once the enrichments exist in storage, any tool or technology that connects to Azure Blob or Table Storage can be used to explore, analyze, or consume the contents. The following list is a start:
+Once enriched content exists in storage, any tool or technology that connects to Azure Storage can be used to explore, analyze, or consume the contents. The following list is a start:
-+ [Storage Browser](knowledge-store-view-storage-explorer.md) to view enriched document structure and content. Consider this as your baseline tool for viewing knowledge store contents.
++ [Storage Explorer](../storage/blobs/quickstart-storage-explorer.md) or Storage browser (preview) in Azure portal to view enriched document structure and content. Consider this as your baseline tool for viewing knowledge store contents. + [Power BI](knowledge-store-connect-power-bi.md) for reporting and analysis.
Once the enrichments exist in storage, any tool or technology that connects to A
Each time you run the indexer and skillset, the knowledge store is updated if the skillset or underlying source data has changed. Any changes picked up by the indexer are propagated through the enrichment process to the projections in the knowledge store, ensuring that your projected data is a current representation of content in the originating data source.
-> [!Note]
+> [!NOTE]
> While you can edit the data in the projections, any edits will be overwritten on the next pipeline invocation, assuming the document in source data is updated. ### Changes in source data
search Knowledge Store Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-create-portal.md
Previously updated : 05/11/2022 Last updated : 05/31/2022 # Quickstart: Create a knowledge store in the Azure portal
-[Knowledge store](knowledge-store-concept-intro.md) is a feature of Azure Cognitive Search that accepts output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) and makes it available in Azure Storage for downstream apps and workloads. Enrichments created by the pipeline - such as translated text, OCR text, tagged images, and recognized entities - are projected into tables or blobs, where they can be accessed by any app or workload that connects to Azure Storage.
+[Knowledge store](knowledge-store-concept-intro.md) is a feature of Azure Cognitive Search that accepts output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) and makes it available in Azure Storage for downstream apps and workloads.
-In this quickstart, you'll set up your data and then run the **Import data** wizard to create an enrichment pipeline that also generates a knowledge store. The knowledge store will contain original text content pulled from the source (customer reviews of a hotel), plus AI-generated content that includes a sentiment label, key phrase extraction, and text translation of non-English customer comments.
+In this quickstart, you'll set up some sample data and then run the **Import data** wizard to create an enrichment pipeline that also generates a knowledge store. The knowledge store will contain original text content pulled from the source (customer reviews of a hotel), plus AI-generated content that includes a sentiment label, key phrase extraction, and text translation of non-English customer comments.
> [!NOTE] > This quickstart shows you the fastest route to a finished knowledge store in Azure Storage. For more detailed explanations of each step, see [Create a knowledge store in REST](knowledge-store-create-rest.md) instead.
In this wizard step, configure an indexer that will pull together the data sourc
In the **Overview** page, open the **Indexers** tab in the middle of the page, and then select **hotels-reviews-idxr**. Within a minute or two, status should progress from "In progress" to "Success" with zero errors and warnings.
-## Check tables in Storage Browser
+## Check tables in Azure portal
-In the Azure portal, switch to your Azure Storage account and use **Storage Browser** to view the new tables. You should see three tables, one for each projection that was offered in the "Save enrichments" section of the "Add enrichments" page.
+1. In the Azure portal, [open the Storage account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) used to create the knowledge store.
-+ "hotelReviewssDocuments" contains all of the first-level nodes of a document's enrichment tree that are not collections.
+1. In the storage account's left navigation pane, select **Storage browser (preview)** to view the new tables.
-+ "hotelReviewssKeyPhrases" contains a long list of just the key phrases extracted from all reviews. Skills that output collections (arrays), such as key phrases and entities, will have output sent to a standalone table.
+ You should see three tables, one for each projection that was offered in the "Save enrichments" section of the "Add enrichments" page.
-+ "hotelReviewssPages" contains enriched fields created over each page that was split from the document. In this skillset and data source, page-level enrichments consisting of sentiment labels and translated text. A pages table (or a sentences table if you specify that particular level of granularity) is created when you choose "pages" granularity in the skillset definition.
+ + "hotelReviewssDocuments" contains all of the first-level nodes of a document's enrichment tree that are not collections.
+
+ + "hotelReviewssKeyPhrases" contains a long list of just the key phrases extracted from all reviews. Skills that output collections (arrays), such as key phrases and entities, will have output sent to a standalone table.
+
+ + "hotelReviewssPages" contains enriched fields created over each page that was split from the document. In this skillset and data source, page-level enrichments consisting of sentiment labels and translated text. A pages table (or a sentences table if you specify that particular level of granularity) is created when you choose "pages" granularity in the skillset definition.
All of these tables contain ID columns to support table relationships in other tools and apps. When you open a table, scroll past these fields to view the content fields added by the pipeline.
search Knowledge Store Create Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-create-rest.md
Previously updated : 05/11/2022 Last updated : 05/31/2022 # Create a knowledge store using REST and Postman
-Knowledge store is a feature of Azure Cognitive Search that sends skillset output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) to Azure Storage for subsequent knowledge mining, data analysis, or downstream processing. After the knowledge store is populated, you can use tools like [Storage Browser](knowledge-store-view-storage-explorer.md) or [Power BI](knowledge-store-connect-power-bi.md) to explore the content.
+[Knowledge store](knowledge-store-concept-intro.md) is a feature of Azure Cognitive Search that accepts output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) and makes it available in Azure Storage for downstream apps and workloads. After the knowledge store is populated, use tools like [Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) or [Power BI](knowledge-store-connect-power-bi.md) to explore the content.
-In this article, you'll learn how to use the REST API to ingest, enrich, and explore a set of customer reviews of hotel stays in a knowledge store in Azure Storage. The end result is a knowledge store that contains original text content pulled from the source, plus AI-generated content that includes a sentiment score, key phrase extraction, language detection, and text translation of non-English customer comments.
+In this article, you'll learn how to use the REST API to ingest, enrich, and explore a set of customer reviews of hotel stays in a knowledge store. The knowledge store contains original text content pulled from the source, plus AI-generated content that includes a sentiment score, key phrase extraction, language detection, and text translation of non-English customer comments.
To make the initial data set available, the hotel reviews are first imported into Azure Blob Storage. Post-processing, the results are saved as a knowledge store in Azure Table Storage.

> [!NOTE]
-> The [source code](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/knowledge-store) for this article includes a Postman collection containing all of the requests. If you don't want to use Postman, you can [create the same knowledge store in the Azure portal](knowledge-store-create-portal.md) using the Import data wizard.
+> This article provides detailed explanations of each step. For a faster approach, see [Create a knowledge store in Azure portal](knowledge-store-create-portal.md) instead.
## Prerequisites
To make the initial data set available, the hotel reviews are first imported int
## Load data
-This uses Azure Cognitive Search, Azure Blob Storage, and [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes to provide free processing for up to 20 transactions daily. A small workload means that you can skip creating or attaching a Cognitive Services resource.
+This step uses Azure Cognitive Search, Azure Blob Storage, and [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes to provide free processing for up to 20 transactions daily. A small workload means that you can skip creating or attaching a Cognitive Services resource.
1. [Download HotelReviews_Free.csv](https://knowledgestoredemo.blob.core.windows.net/hotel-reviews/HotelReviews_Free.csv?sp=r&st=2019-11-04T01:23:53Z&se=2025-11-04T16:00:00Z&spr=https&sv=2019-02-02&sr=b&sig=siQgWOnI%2FDamhwOgxmj11qwBqqtKMaztQKFNqWx00AY%3D). This data is hotel review data saved in a CSV file (originates from Kaggle.com) and contains 19 pieces of customer feedback about a single hotel.
-1. In the Azure Storage resource, use **Storage Browser** to create a blob container named **hotel-reviews**.
+1. In Azure portal, on the Azure Storage resource page, use **Storage Browser** to create a blob container named **hotel-reviews**.
1. Select **Upload** at the top of the page to load the **HotelReviews-Free.csv** file that you downloaded in the previous step.
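The container creation and upload can also be scripted. A sketch assuming you're signed in with the Azure CLI and substituting your own storage account name:

```azurecli
# Create the container, then upload the CSV downloaded in step 1.
az storage container create \
  --account-name <storage-account> \
  --name hotel-reviews

az storage blob upload \
  --account-name <storage-account> \
  --container-name hotel-reviews \
  --file HotelReviews_Free.csv \
  --name HotelReviews_Free.csv
```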
After you send each request, the search service should respond with a 201 succes
In the Azure portal, go to the Azure Cognitive Search service's **Overview** page. Select the **Indexers** tab, and then select **hotels-reviews-ixr**. Within a minute or two, status should progress from "In progress" to "Success" with zero errors and warnings.
-## Check tables in Storage Browser
+## Check tables in Azure portal
In the Azure portal, switch to your Azure Storage account and use **Storage Browser** to view the new tables. You should see six tables, one for each projection defined in the skillset.
If you are using a free service, remember that you are limited to three indexes,
## Next steps
-Now that you've enriched your data by using Cognitive Services and projected the results to a knowledge store, you can use Storage Browser or other apps to explore your enriched data set.
-
-To learn how to explore this knowledge store by using Storage Browser, see this walkthrough:
+Now that you've enriched your data by using Cognitive Services and projected the results to a knowledge store, you can use Storage Explorer or other apps to explore your enriched data set.
> [!div class="nextstepaction"]
-> [View with Storage Browser](knowledge-store-view-storage-explorer.md)
+> [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md)
search Knowledge Store Projection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-projection-overview.md
Projections have a lifecycle that is tied to the source data in your data source
After the indexer is run, connect to projections and consume the data in other apps and workloads.
-+ Use [Storage Browser](knowledge-store-view-storage-explorer.md) to verify object creation and content.
++ Use Azure portal to verify object creation and content in Azure Storage. + Use [Power BI for data exploration](knowledge-store-connect-power-bi.md). This tool works best when the data is in Azure Table Storage. Within Power BI, you can manipulate data into new tables that are easier to query and analyze.
search Knowledge Store Projections Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-projections-examples.md
You can process projections by following these steps:
1. [Monitor indexer execution](search-howto-monitor-indexers.md) to check progress and catch any errors.
-1. [Use Storage Browser](knowledge-store-view-storage-explorer.md) to verify object creation in Azure Storage.
+1. Use Azure portal to verify object creation in Azure Storage.
1. If you are projecting tables, [import them into Power BI](knowledge-store-connect-power-bi.md) for table manipulation and visualization. In most cases, Power BI will auto-discover the relationships among tables.
search Knowledge Store View Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-view-storage-explorer.md
- Title: View a knowledge store-
-description: View a knowledge store using the Storage Browser in the Azure portal.
------ Previously updated : 11/03/2021--
-# View a knowledge store with Storage Browser
-
-A [knowledge store](knowledge-store-concept-intro.md) is content created by an Azure Cognitive Search skillset and saved to Azure Storage. In this article, you'll learn how to view the contents of a knowledge store using Storage Browser in the Azure portal.
-
-Start with an existing knowledge store created in the [Azure portal](knowledge-store-create-portal.md) or using the [REST APIs](knowledge-store-create-rest.md). Both the portal and REST walkthroughs create a knowledge store in Azure Table Storage.
-
-## Start Storage Browser
-
-1. In the Azure portal, [open the Storage account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) that you used to create the knowledge store.
-
-1. In the storage account's left navigation pane, select **Storage Browser**.
-
-## View and edit tables
-
-1. Expand **Tables** to find the table projections of your knowledge store. If you used the quickstart or REST article to create the knowledge store, the tables will contain content related to customer reviews of a European hotel.
-
- :::image type="content" source="media/knowledge-store-concept-intro/kstore-in-storage-explorer.png" alt-text="Screenshot of Storage Browser" border="true":::
-
-1. Select a table from the list to views it's contents.
-
-1. To rearrange column order or delete a column, select **Edit columns** at the top of the page.
-
-In Storage Browser, you can only query one table at time using [supported query syntax](/rest/api/storageservices/Querying-Tables-and-Entities). To query across tables, consider using Power BI instead.
-
-## Next steps
-
-Connect this knowledge store to Power BI to build visualizations that include multiple tables.
-
-> [!div class="nextstepaction"]
-> [Connect with Power BI](knowledge-store-connect-power-bi.md)
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
The SharePoint indexer supports both [delegated and application](/graph/auth/aut
+ Application permissions, where the indexer runs under the identity of the SharePoint tenant with access to all sites and files within the SharePoint tenant. The indexer requires a [client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) to access the SharePoint tenant. The indexer will also require [tenant admin approval](../active-directory/manage-apps/grant-admin-consent.md) before it can index any content.
-Note that if your Azure Active Directory organization has [Conditional Access enabled](/active-directory/conditional-access/overview.md) and your administrator is not able to grant any device access for Delegated permissions, you should consider Application permissions instead. For more information, refer to [SharePoint Conditional Access policies](/remove-search-indexer-troubleshooting.md#sharepoint-conditional-access-policies).
+Note that if your Azure Active Directory organization has [Conditional Access enabled](/azure/active-directory/conditional-access/overview) and your administrator is not able to grant any device access for Delegated permissions, you should consider Application permissions instead. For more information, refer to [SharePoint Conditional Access policies](/azure/search/search-indexer-troubleshooting#sharepoint-conditional-access-policies).
### Step 3: Create an Azure AD application
search Search Index Azure Sql Managed Instance With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-index-azure-sql-managed-instance-with-managed-identity.md
Before learning more about this feature, it is recommended that you have an unde
* Azure AD admin role on SQL Managed Instance:
- To assign read permissions on SQL Managed Instance, you must be an Azure Global Admin with a SQL Managed Instance. See [Configure and manage Azure AD authentication with SQL Managed Instance](/azure/azure-sql/managed-instance/authentication-aad-configure) and follow the steps to provision an Azure AD admin (SQL Managed Instance).
+ To assign read permissions on SQL Managed Instance, you must be an Azure Global Admin with a SQL Managed Instance. See [Configure and manage Azure AD authentication with SQL Managed Instance](/azure/azure-sql/database/authentication-aad-configure) and follow the steps to provision an Azure AD admin (SQL Managed Instance).
* [Configure public endpoint and NSG in SQL Managed Instance](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md) to allow connections from Azure Cognitive Search.
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd.md
After the deployment is complete:
- The content stored in your repository is displayed in your Microsoft Sentinel workspace, in the relevant Microsoft Sentinel page. -- The connection details on the **Repositories** page are updated with the link to the connection's deployment logs. For example:
+- The connection details on the **Repositories** page are updated with the link to the connection's deployment logs and the status and time of the last deployment. For example:
- :::image type="content" source="media/ci-cd/deployment-logs-link.png" alt-text="Screenshot of a GitHub repository connection's deployment logs.":::
+ :::image type="content" source="media/ci-cd/deployment-logs-status.png" alt-text="Screenshot of a GitHub repository connection's deployment logs.":::
### Improve deployment performance with smart deployments
sentinel Create Manage Use Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-manage-use-automation-rules.md
Playbook actions within an automation rule may be treated differently under some
| Less than a second | Immediately after playbook is completed | | Less than two minutes | Up to two minutes after playbook began running,<br>but no more than 10 seconds after the playbook is completed | | More than two minutes | Two minutes after playbook began running,<br>regardless of whether or not it was completed |
-|
## Next steps
sentinel Migration Arcsight Detection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-arcsight-detection-rules.md
Learn more about [best practices for migrating detection rules](https://techcomm
1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
-Learn more about analytics rules.
+Learn more about analytics rules:
- [**Create custom analytics rules to detect threats**](detect-threats-custom.md). Use [alert grouping](detect-threats-custom.md#alert-grouping) to reduce alert fatigue by grouping alerts that occur within a given timeframe. - [**Map data fields to entities in Microsoft Sentinel**](map-data-fields-to-entities.md) to enable SOC engineers to define entities as part of the evidence to track during an investigation. Entity mapping also makes it possible for SOC analysts to take advantage of an intuitive [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive) that can help reduce time and effort.
SubjectDomainName
In this article, you learned how to map your migration rules from ArcSight to Microsoft Sentinel. > [!div class="nextstepaction"]
> [Migrate your SOAR automation](migration-arcsight-automation.md)
sentinel Migration Convert Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-convert-dashboards.md
After reviewing your dashboards, do the following to prepare for your dashboard
- Review all of the visualizations in each dashboard. The dashboards in your current SIEM might contain several charts or panels. It's crucial to review the content of your short-listed dashboards to eliminate any unwanted visualizations or data. - Capture the dashboard design and interactivity. - Identify any design elements that are important to your users. For example, the layout of the dashboard, the arrangement of the charts or even the font size or color of the graphs.-- Capture any interactivity such as drilldown, filtering, and others that you need to carry over to Azure Monitor Workbooks. We'll also discuss parameters and user inputs in the next step.
+- Capture any interactivity such as drilldown, filtering, and others that you need to carry over to Azure Monitor Workbooks.
- Identify required parameters or user inputs. In most cases, you need to define parameters for users to perform search, filtering, or scoping the results (for example, date range, account name and others). Hence, it's crucial to capture the details around parameters. Here are some of the key points to help you with collecting the parameter requirements: - The type of parameter for users to perform selection or input. For example, date range, text, or others. - How the parameters are represented, such as drop-down, text box, or others.
Once you've saved your workbook, specify the parameters, if any exist, and valid
In this article, you learned how to convert your dashboards to Azure workbooks. > [!div class="nextstepaction"]
> [Update SOC processes](migration-security-operations-center-processes.md)
sentinel Migration Ingestion Target Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-ingestion-target-platform.md
This article compares target platforms in terms of performance, cost, usability
|**Usability**: |**Great**<br><br>The archive and search options are simple to use and accessible from the Microsoft Sentinel portal. However, the data isn't immediately available for queries. You need to perform a search to retrieve the data, which might take some time, depending on the amount of data being scanned and returned. |**Good**<br><br>Fairly easy to use in the context of Microsoft Sentinel. For example, you can use an Azure workbook to visualize data spread across both Microsoft Sentinel and ADX. You can also query ADX data from the Microsoft Sentinel portal using the [ADX proxy](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md). |**Poor**<br><br>With historical data migrations, you might have to deal with millions of files, and exploring the data becomes a challenge. |**Fair**<br><br>While using the `externaldata` operator is very challenging with large numbers of blobs to reference, using external ADX tables eliminates this issue. The external table definition understands the blob storage folder structure, and allows you to transparently query the data contained in many different blobs and folders. |
|**Management overhead**: |**Fully managed**<br><br>The search and archive options are fully managed and don't add management overhead. |**High**<br><br>ADX is external to Microsoft Sentinel, which requires monitoring and maintenance. |**Low**<br><br>While this platform requires little maintenance, selecting this platform adds monitoring and configuration tasks, such as setting up lifecycle management. |**Medium**<br><br>With this option, you maintain and monitor ADX and Azure Blob Storage, both of which are external components to Microsoft Sentinel. While ADX can be shut down at times, consider the extra management overhead with this option. |
|**Performance**: |**Medium**<br><br>You typically interact with basic logs within the archive using [search jobs](../azure-monitor/logs/search-jobs.md), which are suitable when you want to maintain access to the data, but don't need immediate access to the data. |**High to low**<br><br>• The query performance of an ADX cluster depends on the number of nodes in the cluster, the cluster virtual machine SKU, data partitioning, and more.<br>• As you add nodes to the cluster, the performance improves, with added cost.<br>• If you use ADX, we recommend that you configure your cluster size to balance performance and cost. This configuration depends on your organization's needs, including how fast your migration needs to complete, how often the data is accessed, and the expected response time. |**Low**<br><br>Offers two performance tiers: Premium or Standard. Although both tiers are an option for long-term storage, Standard is more cost-efficient. Learn about [performance and scalability limits](../storage/common/scalability-targets-standard-account.md). |**Low**<br><br>Because the data resides in the Blob Storage, the performance is limited by that platform. |
-|**Cost**: |**High**<br><br>The cost is composed of two components:<br>• **Ingestion cost**. Every GB of data ingested into Basic Logs is subject to Microsoft Sentinel and Azure Monitor Logs ingestion costs, which sum up to approximately $1/GB. See the [pricing details](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).<br>• **Archival cost**. The cost for data in the archive tier sums up to approximately $0.02/GB per month. See the [pricing details](https://azure.microsoft.com/pricing/details/monitor/).<br>In addition to these two cost components, if you need frequent access to the data, extra costs apply when you access data via search jobs. |**High to low**<br><br>• Because ADX is a cluster of virtual machines, you're charged based on compute, storage and networking usage, plus an ADX markup (see the [pricing details](https://azure.microsoft.com/pricing/details/data-explorer/). Therefore, the more nodes you add to your cluster and the more data you store, the higher the cost.<br>• ADX also offers autoscaling capabilities to adapt to workload on demand. ADX can also benefit from Reserved Instance pricing. You can run your own cost calculations in the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/). |**Low**<br><br>With optimal setup, Azure Blob Storage has the lowest costs. In addition, the data works in an automatic lifecycle, so older blobs move into lower-cost access tiers. |**Low**<br><br>The cluster size doesn't affect the cost, because ADX only acts as a proxy. In addition, you need to run the cluster only when you need quick and simple access to the data. |
-|**How to access data**: |[Search jobs](search-jobs.md) |Direct KQL queries |[externaldata](/azure/data-explorer/kusto/query/externaldata-operator) |Modified KQL data |
-|**Scenario**: |**Occasional access**<br><br>Relevant in scenarios where you don't need to run heavy analytics or trigger analytics rules. |**Frequent access**<br><br>Relevant in scenarios where you need to access the data frequently, and need to control how the cluster is sized and configured. |**Compliance/audit**<br><br>• Optimal for storing massive amounts of unstructured data.<br>• Relevant in scenarios where you don't need quick access to the data or high performance, such as for compliance or audits. |**Occasional access**<br><br>Relevant in scenarios where you want to benefit from the low cost of Azure Blob Storage, and maintain relatively quick access to the data. |
+|**Cost**: |**High**<br><br>The cost is composed of two components:<br>• **Ingestion cost**. Every GB of data ingested into Basic Logs is subject to Microsoft Sentinel and Azure Monitor Logs ingestion costs, which sum up to approximately $1/GB. See the [pricing details](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).<br>• **Archival cost**. The cost for data in the archive tier sums up to approximately $0.02/GB per month. See the [pricing details](https://azure.microsoft.com/pricing/details/monitor/).<br>In addition to these two cost components, if you need frequent access to the data, extra costs apply when you access data via search jobs. |**High to low**<br><br>• Because ADX is a cluster of virtual machines, you're charged based on compute, storage, and networking usage, plus an ADX markup (see the [pricing details](https://azure.microsoft.com/pricing/details/data-explorer/)). Therefore, the more nodes you add to your cluster and the more data you store, the higher the cost will be.<br>• ADX also offers autoscaling capabilities to adapt to workload on demand. ADX can also benefit from Reserved Instance pricing. You can run your own cost calculations in the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/). |**Low**<br><br>With optimal setup, Azure Blob Storage has the lowest costs. For greater efficiency and cost savings, [Azure Storage lifecycle management](https://docs.microsoft.com/azure/storage/blobs/lifecycle-management-overview) can be used to automatically place older blobs into cheaper storage tiers. |**Low**<br><br>ADX only acts as a proxy in this case, so the cluster can be small. In addition, the cluster can be shut down when you don't need access to the data and only started when data access is needed. |
+|**How to access data**: |[Search jobs](search-jobs.md) |Direct KQL queries |[externaldata](/azure/data-explorer/kusto/query/externaldata-operator) |Modified KQL queries |
+|**Scenario**: |**Occasional access**<br><br>Relevant in scenarios where you don't need to run heavy analytics or trigger analytics rules, and you only need to access the data occasionally. |**Frequent access**<br><br>Relevant in scenarios where you need to access the data frequently, and need to control how the cluster is sized and configured. |**Compliance/audit**<br><br>• Optimal for storing massive amounts of unstructured data.<br>• Relevant in scenarios where you don't need quick access to the data or high performance, such as for compliance or audit purposes. |**Occasional access**<br><br>Relevant in scenarios where you want to benefit from the low cost of Azure Blob Storage, and maintain relatively quick access to the data. |
|**Complexity**: |Very low |Medium |Low |High |
|**Readiness**: |Public Preview |GA |GA |GA |
To determine the minimum duration of the migration and where the bottleneck coul
In this article, you learned how to map your migration rules from QRadar to Microsoft Sentinel. > [!div class="nextstepaction"]
> [Select a data ingestion tool](migration-ingestion-tool.md)
sentinel Migration Qradar Detection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-qradar-detection-rules.md
Learn more about [best practices for migrating detection rules](https://techcomm
1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
-Learn more about analytics rules.
+Learn more about analytics rules:
- [**Create custom analytics rules to detect threats**](detect-threats-custom.md). Use [alert grouping](detect-threats-custom.md#alert-grouping) to reduce alert fatigue by grouping alerts that occur within a given timeframe. - [**Map data fields to entities in Microsoft Sentinel**](map-data-fields-to-entities.md) to enable SOC engineers to define entities as part of the evidence to track during an investigation. Entity mapping also makes it possible for SOC analysts to take advantage of an intuitive [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive) that can help reduce time and effort.
OfficeActivity
In this article, you learned how to map your migration rules from QRadar to Microsoft Sentinel. > [!div class="nextstepaction"]
> [Migrate your SOAR automation](migration-qradar-automation.md)
sentinel Migration Security Operations Center Processes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-security-operations-center-processes.md
Use one of the following options to access playbooks:
- The Microsoft Sentinel [Content hub](sentinel-solutions-deploy.md) - The Microsoft Sentinel [GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks)
These sources include a wide range of security-oriented playbooks to cover a substantial portion of use cases of varying complexity. To streamline your work with playbooks, use the templates under **Automation > Playbook templates**. Templates allow you to easily deploy playbooks into the Microsoft Sentinel instance, and then modify the playbooks to suit your organization's needs.
See the [SOC Process Framework](https://github.com/Azure/Azure-Sentinel/wiki/SOC-Process-Framework) to map your SOC process to Microsoft Sentinel capabilities. ## Compare SIEM concepts
Use this table to compare the main concepts of your legacy SIEM to Microsoft Sentinel concepts.
| ArcSight | QRadar | Splunk | Microsoft Sentinel |
|--|--|--|--|
Use this table to compare the main concepts of your legacy SIEM to Microsoft Sen
## Next steps
After migration, explore the Microsoft Sentinel resources below to expand your skills and get the most out of Microsoft Sentinel.
Also consider increasing your threat protection by using Microsoft Sentinel alongside [Microsoft 365 Defender](./microsoft-365-defender-sentinel-integration.md) and [Microsoft Defender for Cloud](../security-center/azure-defender.md) for [integrated threat protection](https://www.microsoft.com/security/business/threat-protection). Benefit from the breadth of visibility that Microsoft Sentinel delivers, while diving deeper into detailed threat analysis.
For more information, see:
- [Microsoft Sentinel learning path](/learn/paths/security-ops-sentinel/)
- [SC-200 Microsoft Security Operations Analyst certification](/learn/certifications/exams/sc-200)
- [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/azure-sentinel/become-an-azure-sentinel-ninja-the-complete-level-400-training/ba-p/1246310)
- [Investigate an attack on a hybrid environment with Microsoft Sentinel](https://mslearn.cloudguides.com/guides/Investigate%20an%20attack%20on%20a%20hybrid%20environment%20with%20Azure%20Sentinel)
sentinel Migration Splunk Detection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-splunk-detection-rules.md
Learn more about [best practices for migrating detection rules](https://techcomm
1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
-Learn more about analytics rules.
+Learn more about analytics rules:
- [**Create custom analytics rules to detect threats**](detect-threats-custom.md). Use [alert grouping](detect-threats-custom.md#alert-grouping) to reduce alert fatigue by grouping alerts that occur within a given timeframe. - [**Map data fields to entities in Microsoft Sentinel**](map-data-fields-to-entities.md) to enable SOC engineers to define entities as part of the evidence to track during an investigation. Entity mapping also makes it possible for SOC analysts to take advantage of an intuitive [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive) that can help reduce time and effort.
urldecode("http%3A%2F%2Fwww.splunk.com%2Fdownload%3Fr%3Dheader")
In this article, you learned how to map your migration rules from Splunk to Microsoft Sentinel. > [!div class="nextstepaction"]
> [Migrate your SOAR automation](migration-splunk-automation.md)
sentinel Sentinel Solutions Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-delete.md
+
+ Title: Delete installed Microsoft Sentinel out-of-the-box content and solutions
+description: Remove solutions and content you've deployed in Microsoft Sentinel.
++ Last updated : 05/16/2022+++
+# Delete installed Microsoft Sentinel out-of-the-box content and solutions (public preview)
+
+If you've installed a Microsoft Sentinel out-of-the-box solution, you can remove content items from the solution or delete the installed solution. If you later need to restore deleted content items, select **Reinstall** on the solution. Similarly, you can restore a deleted solution by reinstalling it.
+
+> [!IMPORTANT]
+>
+> Microsoft Sentinel solutions and the Microsoft Sentinel Content Hub are currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Delete content items
+
+Delete content items for an installed solution deployed by the content hub.
+
+1. In the content hub, select an installed solution where the version is 2.0.0 or higher.
+1. On the solutions details page, select **Manage**.
+1. Select the content item or items you want to delete.
+1. Select **Delete items**.
+
+ :::image type="content" source="media/sentinel-solutions-delete/manage-solution-delete-item.png" alt-text="Screenshot of solution with content items selected for deletion.":::
+
+To restore deleted content items, select **Reinstall** on the solution.
+
+## Delete the solution
+
+Delete a solution and the related content templates from the content hub or in the manage solution view. Active, cloned, saved, or custom items associated with a content template aren't deleted.
+
+1. In the content hub, select an installed solution.
+1. On the solutions details page, select **Delete**.
+1. Select **Yes** to delete the solution and the templates.
+
+ :::image type="content" source="media/sentinel-solutions-delete/manage-solution-delete.png" alt-text="Screenshot of the delete confirmation prompt.":::
+
+To restore an out-of-the-box solution from the content hub, select the solution and **Install**.
+
+## Next steps
+
+- [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions (public preview)](sentinel-solutions-deploy.md)
+- [About Microsoft Sentinel content and solutions](sentinel-solutions.md)
+- [Microsoft Sentinel content hub catalog](sentinel-solutions-catalog.md)
sentinel Sentinel Solutions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-deploy.md
Title: Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions
-description: This article shows how customers can easily find and deploy data analysis tools, packaged together with data connectors and other content.
+description: Learn how to find and deploy data analysis tools, packaged together with data connectors and other content.
Previously updated : 11/09/2021 Last updated : 05/06/2022
-# Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions (Public preview)
-
-> [!IMPORTANT]
->
-> Microsoft Sentinel solutions and the Microsoft Sentinel Content Hub are currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions (Public preview)
The Microsoft Sentinel Content hub provides access to Microsoft Sentinel out-of-the-box (built-in) content and solutions, which are packed with content for end-to-end product, domain, or industry needs.
This article describes how to install solutions in your Microsoft Sentinel works
- Install the solution in your workspace when you find one that fits your organization's needs. Make sure to keep it updated with the latest changes.
-> [!TIP]
-> If you are a partner who wants to create your own solution, see the [Microsoft Sentinel Solutions Build Guide](https://aka.ms/sentinelsolutionsbuildguide) for solution authoring and publishing.
+If you're a partner who wants to create your own solution, see the [Microsoft Sentinel Solutions Build Guide](https://aka.ms/sentinelsolutionsbuildguide) for solution authoring and publishing.
+
+> [!IMPORTANT]
>
+> Microsoft Sentinel solutions and the Microsoft Sentinel Content Hub are currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ ## Find a solution 1. From the Microsoft Sentinel navigation menu, under **Content management**, select **Content hub (Preview)**.
This article describes how to install solutions in your Microsoft Sentinel works
Filter the list displayed, either by selecting specific values from the filters, or entering any part of a solution name or description in the **Search** field.
- For more information, see [Microsoft Sentinel out-of-the-box content and solution categories](sentinel-solutions.md#microsoft-sentinel-out-of-the-box-content-and-solution-categories).
+ For more information, see [Categories for Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions.md#categories-for-microsoft-sentinel-out-of-the-box-content-and-solutions).
> [!TIP] > If a solution has been updated since you deployed it, an orange triangle indicates that you have updates to deploy, and the blue triangle at the top of the page reflects this as well.
For example, in the following image, the **Cisco Umbrella** solution shows a cat
For more information, see [Microsoft Sentinel content hub catalog](sentinel-solutions-catalog.md) and [Find your Microsoft Sentinel data connector](data-connectors-reference.md).
+## Enable content items in a solution
+
+Centrally manage content items for an installed solution deployed by the content hub.
+
+1. In the content hub, select an installed solution where the version is 2.0.0 or higher.
+1. On the solutions details page, select **Manage**.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/content-hub-manage-option.png" alt-text="Screenshot of manage button on details page of the Azure Activity content hub solution." lightbox="media/sentinel-solutions-deploy/content-hub-manage-option.png":::
+
+1. Review the list of content items.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-azure-activity.png" alt-text="Screenshot of solution description and list of content items for Azure Activity solution." lightbox="media/sentinel-solutions-deploy/manage-solution-azure-activity.png":::
+
+1. Select a content item to get started. The following steps describe how you can interact with the different solution content types in the content hub.
+
+1. **Data connector** - Select **Open connector page**.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-data-connector-open-connector.png" alt-text="Screenshot of data connector content item for Azure Activity solution where status is disconnected.":::
+
+ Complete the data connector configuration steps. After you configure the data connector, the content item status shows as **Connected**.
+1. **Analytics rule** - View the template in the analytics template gallery. Select **Create rule** and follow the steps to enable the analytics rule. The number of active rules created from the rule template is shown in the **Created content** column for the content item.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-analytics-rule.png" alt-text="Screenshot of analytics rule content item in solution for Azure Activity.":::
+
+1. **Hunting query** - Select **Run query** from the details page. To customize the query, go to the hunting gallery and create a clone of the read-only hunting query template. The number of cloned queries associated with a hunting query is shown in the **Created content** column for the content item.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-hunting-query.png" alt-text="Screenshot of cloned hunting query content item in solution for Azure Activity." lightbox="media/sentinel-solutions-deploy/manage-solution-hunting-query.png":::
+
+1. **Workbook** - Select **View template** to open the workbook and see the visualizations. To create an instance of the workbook template to customize, select **Manage in gallery** > **Save**. View your saved customizable workbook by selecting **1 item** in the **Created content** column.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-workbook.png" alt-text="Screenshot of saved workbook item in solution for Azure Activity." lightbox="media/sentinel-solutions-deploy/manage-solution-workbook.png" :::
+
+1. **Parser** - Select **Load the function code** to open Azure Log Analytics and run the provided function code. Select **Use in editor** to open Azure Log Analytics with the parser.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-parser.png" alt-text="Screenshot of parser content type in a solution.":::
+
+1. **Playbook** - Not yet supported in this view. In Microsoft Sentinel, go to **Playbook** to find and use the solution's playbook.
+ ## Find the support model for your solution Each solution lists details about its support model on the solution's details pane, in the **Support** box, where either **Microsoft** or a partner's name is listed. For example:
In this document, you learned about Microsoft Sentinel solutions and how to find
- Learn more about [Microsoft Sentinel solutions](sentinel-solutions.md). - See the full [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
+- [Delete installed Microsoft Sentinel out-of-the-box content and solutions (public preview)](sentinel-solutions-delete.md)
-Many solutions include data connectors that you'll need to configure so that you can start ingesting your data into Microsoft Sentinel. Each data connector will have it's own set of requirements, detailed on the data connector page in Microsoft Sentinel.
+Many solutions include data connectors that you'll need to configure so that you can start ingesting your data into Microsoft Sentinel. Each data connector will have its own set of requirements, detailed on the data connector page in Microsoft Sentinel.
For more information, see [Connect your data source](data-connectors-reference.md).
sentinel Sentinel Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions.md
Title: About Microsoft Sentinel content and solutions | Microsoft Docs description: This article describes Microsoft Sentinel content and solutions, which customers can use to find data analysis tools packaged together with data connectors.-+ Previously updated : 11/09/2021- Last updated : 05/06/2022+ # About Microsoft Sentinel content and solutions -
-> [!IMPORTANT]
->
-> The Microsoft Sentinel **Content hub** and solutions are currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- Microsoft Sentinel *content* is Security Information and Event Management (SIEM) content that enables customers to ingest data, monitor, alert, hunt, investigate, respond, and connect with different products, platforms, and services in Microsoft Sentinel. Content in Microsoft Sentinel includes any of the following types:
Content in Microsoft Sentinel includes any of the following types:
Microsoft Sentinel *solutions* are packages of Microsoft Sentinel content or Microsoft Sentinel API integrations, which fulfill an end-to-end product, domain, or industry vertical scenario in Microsoft Sentinel.
-> [!TIP]
-> You can either customize out-of-the-box content for your own needs, or you can create your own solution with content to share with others in the community. For more information, see the [Microsoft Sentinel Solutions Build Guide](https://aka.ms/sentinelsolutionsbuildguide) for solutions' authoring and publishing.
+You can either customize out-of-the-box content for your own needs, or you can create your own solution with content to share with others in the community. For more information, see the [Microsoft Sentinel Solutions Build Guide](https://aka.ms/sentinelsolutionsbuildguide) for solution authoring and publishing.
+
+> [!IMPORTANT]
>
+> The Microsoft Sentinel **Content hub** and solutions are currently in **PREVIEW**, as are all individual solution packages. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ ## Discover and manage Microsoft Sentinel content Use the Microsoft Sentinel **Content hub** to centrally discover and install out-of-the-box (built-in) content. The Microsoft Sentinel Content Hub provides in-product discoverability, single-step deployment, and enablement of end-to-end product, domain, and/or vertical out-of-the-box solutions and content in Microsoft Sentinel. -- In the **Content hub**, filter by [categories](#microsoft-sentinel-out-of-the-box-content-and-solution-categories) and other parameters, or use the powerful text search, to find the content that works best for your organization's needs. The **Content hub** also indicates the [support model](#microsoft-sentinel-out-of-the-box-content-and-solution-support-models) applied to each piece of content, as some content is maintained by Microsoft and others are maintained by partners or the community.
+- In the **Content hub**, filter by [categories](#categories-for-microsoft-sentinel-out-of-the-box-content-and-solutions) and other parameters, or use the powerful text search, to find the content that works best for your organization's needs. The **Content hub** also indicates the [support model](#support-models-for-microsoft-sentinel-out-of-the-box-content-and-solutions) applied to each piece of content, as some content is maintained by Microsoft and others are maintained by partners or the community.
Manage [updates for out-of-the-box content](sentinel-solutions-deploy.md#install-or-update-a-solution) via the Microsoft Sentinel **Content hub**, and for custom content via the **Repositories** page.
For more information, see:
- [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md) - [Microsoft Sentinel Content hub catalog](sentinel-solutions-catalog.md)
-## Microsoft Sentinel out-of-the-box content and solution categories
+## Categories for Microsoft Sentinel out-of-the-box content and solutions
Microsoft Sentinel out-of-the-box content can be applied with one or more of the following categories. In the **Content hub**, select the categories you want to view to change the content displayed.
Microsoft Sentinel out-of-the-box content can be applied with one or more of the
| **Retail** | Products, services, and content specific for the retail industry |
-## Microsoft Sentinel out-of-the-box content and solution support models
+## Support models for Microsoft Sentinel out-of-the-box content and solutions
Both Microsoft and other organizations author Microsoft Sentinel out-of-the-box content and solutions. Each piece of out-of-the-box content or solution has one of the following support types:
Both Microsoft and other organizations author Microsoft Sentinel out-of-the-box
|**Partner-supported** | Applies to content/solutions authored by parties other than Microsoft. <br><br> The partner company provides support or maintenance for these pieces of content/solutions. The partner company can be an Independent Software Vendor, a Managed Service Provider (MSP/MSSP), a Systems Integrator (SI), or any organization whose contact information is provided on the Microsoft Sentinel page for the selected content/solutions.<br><br> For any issues with a partner-supported solution, contact the specified support contact.|
|**Community-supported** |Applies to content/solutions authored by Microsoft or partner developers that don't have listed contacts for support and maintenance in Microsoft Sentinel.<br><br> For questions or issues with these solutions, [file an issue](https://github.com/Azure/Azure-Sentinel/issues/new/choose) in the [Microsoft Sentinel GitHub community](https://aka.ms/threathunters). |
+## Content sources for Microsoft Sentinel out-of-the-box content and solutions
+
+Each piece of out-of-the-box content or solution has one of the following content sources:
+
+|Content source |Description |
+|||
+|**Content hub** |Content or solutions deployed by the content hub that support lifecycle management |
+|**Custom** | Content or solutions you've customized in your workspace |
+|**Gallery content** | Content or solutions from the gallery that don't support lifecycle management |
+|**Repository** | Content or solutions from a repository connected to your workspace |
## Next steps
service-bus-messaging Service Bus Auto Forwarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-auto-forwarding.md
Title: Auto-forwarding Azure Service Bus messaging entities description: This article describes how to chain an Azure Service Bus queue or subscription to another queue or topic. Previously updated : 04/23/2021 Last updated : 05/31/2022 # Chaining Service Bus entities with autoforwarding
-The Service Bus *autoforwarding* feature enables you to chain a queue or subscription to another queue or topic that is part of the same namespace. When autoforwarding is enabled, Service Bus automatically removes messages that are placed in the first queue or subscription (source) and puts them in the second queue or topic (destination). It is still possible to send a message to the destination entity directly.
+The Service Bus *autoforwarding* feature enables you to chain a queue or subscription to another queue or topic that is part of the same namespace. When autoforwarding is enabled, Service Bus automatically removes messages that are placed in the first queue or subscription (source) and puts them in the second queue or topic (destination). It's still possible to send a message to the destination entity directly.
> [!NOTE]
-> The basic tier of Service Bus doesn't support the autoforwarding feature. The standard and premium tiers support the feature. For differences between these tiers, see [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/).
+> The basic tier of Service Bus doesn't support the autoforwarding feature. For differences between tiers, see [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/).
-The destination entity must exist at the time the source entity is created. If the destination entity does not exist, Service Bus returns an exception when asked to create the source entity.
+The destination entity must exist at the time the source entity is created. If the destination entity doesn't exist, Service Bus returns an exception when asked to create the source entity.
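To make the ordering requirement concrete, here's a hedged PowerShell sketch that creates the destination before the source. It assumes a recent Az.ServiceBus module in which `New-AzServiceBusQueue` exposes a `-ForwardTo` parameter; on older module versions the same property is set through `Set-AzServiceBusQueue` instead. All names are placeholders.

```powershell
# Create the destination queue first; it must exist before the source is created.
New-AzServiceBusQueue -ResourceGroupName "example-rg" -NamespaceName "example-ns" `
    -Name "destination-queue"

# Create the source queue with autoforwarding enabled toward the destination.
New-AzServiceBusQueue -ResourceGroupName "example-rg" -NamespaceName "example-ns" `
    -Name "source-queue" -ForwardTo "destination-queue"
```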
## Scenarios ### Scale out an individual topic
-You can use autoforwarding to scale out an individual topic. Service Bus limits the [number of subscriptions on a given topic](service-bus-quotas.md) to 2,000. You can accommodate additional subscriptions by creating second-level topics. Even if you are not bound by the Service Bus limitation on the number of subscriptions, adding a second level of topics can improve the overall throughput of your topic.
+You can use autoforwarding to scale out an individual topic. Service Bus limits the [number of subscriptions on a given topic](service-bus-quotas.md) to 2,000. You can accommodate additional subscriptions by creating second-level topics. Even if you aren't bound by the Service Bus limitation on the number of subscriptions, adding a second level of topics can improve the overall throughput of your topic.
![Diagram of an autoforwarding scenario showing a message processed through an Orders Topic that can branch to any of three second-level Orders Topics.][0]
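A hedged sketch of the second-level chaining pictured above: a subscription on the first-level topic forwards everything it receives to a second-level topic. The cmdlet and parameter names assume a recent Az.ServiceBus module, and the topic names are placeholders.

```powershell
# Create the second-level topic; the forwarding destination must exist first.
New-AzServiceBusTopic -ResourceGroupName "example-rg" -NamespaceName "example-ns" `
    -Name "orders-level2"

# Chain a subscription on the first-level topic to the second-level topic.
New-AzServiceBusSubscription -ResourceGroupName "example-rg" -NamespaceName "example-ns" `
    -TopicName "orders" -Name "forward-to-level2" -ForwardTo "orders-level2"
```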
You can also use autoforwarding to decouple message senders from receivers. For
![Diagram of an autoforwarding scenario showing three processing modules sending messages through three corresponding topics to two separate queues.][1]
-If Alice goes on vacation, her personal queue, rather than the ERP topic, fills up. In this scenario, because a sales representative has not received any messages, none of the ERP topics ever reach quota.
+If Alice goes on vacation, her personal queue, rather than the ERP topic, fills up. In this scenario, because a sales representative hasn't received any messages, none of the ERP topics ever reach quota.
> [!NOTE] > When autoforwarding is set up, the value for AutoDeleteOnIdle on **both the Source and the Destination** is automatically set to the maximum value of the data type.
If Alice goes on vacation, her personal queue, rather than the ERP topic, fills
## Autoforwarding considerations
-If the destination entity accumulates too many messages and exceeds the quota, or the destination entity is disabled, the source entity adds the messages to its [dead-letter queue](service-bus-dead-letter-queues.md) until there is space in the destination (or the entity is re-enabled). Those messages continue to live in the dead-letter queue, so you must explicitly receive and process them from the dead-letter queue.
+If the destination entity accumulates too many messages and exceeds the quota, or the destination entity is disabled, the source entity adds the messages to its [dead-letter queue](service-bus-dead-letter-queues.md) until there's space in the destination (or the entity is re-enabled). Those messages continue to live in the dead-letter queue, so you must explicitly receive and process them from the dead-letter queue.
-When chaining together individual topics to obtain a composite topic with many subscriptions, it is recommended that you have a moderate number of subscriptions on the first-level topic and many subscriptions on the second-level topics. For example, a first-level topic with 20 subscriptions, each of them chained to a second-level topic with 200 subscriptions, allows for higher throughput than a first-level topic with 200 subscriptions, each chained to a second-level topic with 20 subscriptions.
+When chaining together individual topics to obtain a composite topic with many subscriptions, it's recommended that you have a moderate number of subscriptions on the first-level topic and many subscriptions on the second-level topics. For example, a first-level topic with 20 subscriptions, each of them chained to a second-level topic with 200 subscriptions, allows for higher throughput than a first-level topic with 200 subscriptions, each chained to a second-level topic with 20 subscriptions.
Service Bus bills one operation for each forwarded message. For example, sending a message to a topic with 20 subscriptions, each of them configured to autoforward messages to another queue or topic, is billed as 21 operations if all first-level subscriptions receive a copy of the message.
To create a subscription that is chained to another queue or topic, the creator
Don't create a chain that exceeds 4 hops. Messages that exceed 4 hops are dead-lettered. ## Next steps
-To learn how to set enable or disable auto forwarding in different ways (Azure portal, PowerShell, CLI, Azure Resource Management template, etc.), see [Enable auto forwarding for queues and subscriptions](enable-auto-forward.md).
+To learn how to enable or disable auto forwarding in different ways (Azure portal, PowerShell, CLI, Azure Resource Management template, etc.), see [Enable auto forwarding for queues and subscriptions](enable-auto-forward.md).
[0]: ./media/service-bus-auto-forwarding/IC628631.gif
service-bus-messaging Service Bus Messaging Sql Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-sql-filter.md
Title: Azure Service Bus Subscription Rule SQL Filter syntax | Microsoft Docs description: This article provides details about SQL filter grammar. A SQL filter supports a subset of the SQL-92 standard. Previously updated : 04/30/2021 Last updated : 05/31/2022 # Subscription Rule SQL Filter Syntax
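As a quick illustration of the grammar this article covers, the sketch below attaches a SQL filter rule (a subset of SQL-92) to a topic subscription. `New-AzServiceBusRule` ships with the Az.ServiceBus module, though parameter names have shifted between module versions; the entity names and the custom properties in the expression are hypothetical.

```powershell
# Add a rule whose SQL filter matches on custom message properties (placeholders).
New-AzServiceBusRule -ResourceGroupName "example-rg" -NamespaceName "example-ns" `
    -TopicName "orders" -SubscriptionName "high-value" -Name "high-value-rule" `
    -SqlExpression "quantity > 100 AND region = 'EMEA'"
```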
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md
Last updated 1/20/2022
# Deploy a Service Fabric managed cluster across availability zones Availability Zones in Azure are a high-availability offering that protects your applications and data from datacenter failures. An Availability Zone is a unique physical location equipped with independent power, cooling, and networking within an Azure region.
-Service Fabric managed cluster supports deployments that span across multiple Availability Zones to provide zone resiliency. This configuration will ensure high-availability of the critical system services and your applications to protect from single-points-of-failure. Azure Availability Zones are only available in select regions. For more information, see [Azure Availability Zones Overview](../availability-zones/az-overview.md).
+Service Fabric managed cluster supports deployments that span multiple Availability Zones to provide zone resiliency. This configuration ensures high availability of the critical system services and your applications, protecting them from single points of failure. Azure Availability Zones are only available in select regions. For more information, see [Azure Availability Zones Overview](/azure/availability-zones/az-overview).
>[!NOTE] >Availability Zone spanning is only available on Standard SKU clusters.
Existing Service Fabric managed clusters which are not spanned across availabili
Requirements: * Standard SKU cluster
-* Three [availability zones in the region](/availability-zones/az-overview.md#azure-regions-with-availability-zones).
+* Three [availability zones in the region](/azure/availability-zones/az-overview#azure-regions-with-availability-zones).
>[!NOTE] >Migration to a zone resilient configuration can cause a brief loss of external connectivity through the load balancer, but will not affect cluster health. This occurs when a new Public IP needs to be created in order to make the networking resilient to Zone failures. Please plan the migration accordingly.
service-fabric Service Fabric Cluster Fabric Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-fabric-settings.md
The following is a list of Fabric settings that you can customize, organized by
|DeployedState |wstring, default is L"Disabled" |Static |2-stage removal of CSS. | |EnableSecretMonitoring|bool, default is FALSE |Static |Must be enabled to use Managed KeyVaultReferences. Default may become true in the future. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](./service-fabric-keyvault-references.md)| |SecretMonitoringInterval|TimeSpan, default is Common::TimeSpan::FromMinutes(15) |Static |The rate at which Service Fabric will poll Key Vault for changes when using Managed KeyVaultReferences. This rate is a best effort, and changes in Key Vault may be reflected in the cluster earlier or later than the interval. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](./service-fabric-keyvault-references.md) |- |UpdateEncryptionCertificateTimeout |TimeSpan, default is Common::TimeSpan::MaxValue |Static |Specify timespan in seconds. The default has changed to TimeSpan::MaxValue; but overrides are still respected. May be deprecated in the future. | ## CentralSecretService/Replication
The following is a list of Fabric settings that you can customize, organized by
|MoveExistingReplicaForPlacement | Bool, default is true |Dynamic|Setting which determines if to move existing replica during placement. | |MovementPerPartitionThrottleCountingInterval | Time in seconds, default is 600 |Static| Specify timespan in seconds. Indicate the length of the past interval for which to track replica movements for each partition (used along with MovementPerPartitionThrottleThreshold). | |MovementPerPartitionThrottleThreshold | Uint, default is 50 |Dynamic| No balancing-related movement will occur for a partition if the number of balancing related movements for replicas of that partition has reached or exceeded MovementPerFailoverUnitThrottleThreshold in the past interval indicated by MovementPerPartitionThrottleCountingInterval. |
-|MoveParentToFixAffinityViolation | Bool, default is true |Dynamic| Setting which determines if parent replicas can be moved to fix affinity constraints.|
+|MoveParentToFixAffinityViolation | Bool, default is false |Dynamic| Setting which determines if parent replicas can be moved to fix affinity constraints.|
|NodeTaggingEnabled | Bool, default is false |Dynamic| If true; NodeTagging feature will be enabled. | |NodeTaggingConstraintPriority | Int, default is 0 |Dynamic| Configurable priority of node tagging. | |PartiallyPlaceServices | Bool, default is true |Dynamic| Determines if all service replicas in cluster will be placed "all or nothing" given limited suitable nodes for them.|
The following is a list of Fabric settings that you can customize, organized by
|PropertyGroup| UserServiceMetricCapacitiesMap, default is None | Static | A collection of user services resource governance limits Needs to be static as it affects AutoDetection logic | ## Next steps
For more information, see [Upgrade the configuration of an Azure cluster](service-fabric-cluster-config-upgrade-azure.md) and [Upgrade the configuration of a standalone cluster](service-fabric-cluster-config-upgrade-windows-server.md).
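As a quick, hedged illustration of overriding one of the dynamic settings listed above on an Azure-hosted cluster (here, `MoveParentToFixAffinityViolation` in the `PlacementAndLoadBalancing` section), the Az.ServiceFabric module offers `Set-AzServiceFabricSetting`. The cluster and resource group names are placeholders.

```powershell
# Override a single fabric setting on an Azure-hosted cluster (placeholder names).
Set-AzServiceFabricSetting -ResourceGroupName "example-rg" -Name "example-cluster" `
    -Section "PlacementAndLoadBalancing" `
    -Parameter "MoveParentToFixAffinityViolation" -Value "true"
```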
service-fabric Service Fabric Tutorial Create Vnet And Windows Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-vnet-and-windows-cluster.md
The following inbound traffic rules are enabled in the **Microsoft.Network/netwo
If other application ports are needed, you'll need to adjust the **Microsoft.Network/loadBalancers** resource and the **Microsoft.Network/networkSecurityGroups** resource to allow the traffic in. ### Windows Defender
-By default, the [Windows Defender antivirus program](/windows/security/threat-protection/windows-defender-antivirus/windows-defender-antivirus-on-windows-server-2016) is installed and functional on Windows Server 2016. The user interface is installed by default on some SKUs, but isn't required. For each node type/VM scale set declared in the template, the [Azure VM Antimalware extension](../virtual-machines/extensions/iaas-antimalware-windows.md) is used to exclude the Service Fabric directories and processes:
+By default, the [Windows Defender antivirus program](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows) is installed and functional on Windows Server 2016. The user interface is installed by default on some SKUs, but isn't required. For each node type/VM scale set declared in the template, the [Azure VM Antimalware extension](../virtual-machines/extensions/iaas-antimalware-windows.md) is used to exclude the Service Fabric directories and processes:
```json {
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer
## Enable SFTP support
-This section shows you how to enable SFTP support for an existing storage account. To view an Azure Resource Manager template that enables SFTP support as part of creating the account, see [Create an Azure Storage Account and Blob Container accessible using SFTP protocol on Azure](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.storage/storage-sftp).
+This section shows you how to enable SFTP support for an existing storage account. To view an Azure Resource Manager template that enables SFTP support as part of creating the account, see [Create an Azure Storage Account and Blob Container accessible using SFTP protocol on Azure](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.storage/storage-sftp). To view the Local User REST APIs and .NET references, see [Local Users](https://docs.microsoft.com/rest/api/storagerp/local-users) and [LocalUser Class](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.storage.models.localuser?view=azure-dotnet).
### [Portal](#tab/azure-portal)
$storageAccountName = "<storage-account>"
Set-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName -EnableSftp $true ```
+ > [!NOTE]
+ > The `-EnableSftp` parameter is currently only available in preview versions of Azure PowerShell. Use the command below to install the preview version:
+ > ```
+ > Install-Module -Name Az.Storage -RequiredVersion 4.1.2-preview -AllowPrerelease
+ > ```
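Once SFTP is enabled, connections are made with local users rather than Azure AD identities. The following is a hedged sketch of creating one with the preview-era Az.Storage cmdlets; cmdlet and parameter names may differ across module versions, and the user name, container, and permissions are placeholders (the `$resourceGroupName` and `$storageAccountName` variables come from the snippet above).

```powershell
# Grant the local user read/write access to one blob container (placeholder name).
$scope = New-AzStorageLocalUserPermissionScope -Permission rw -Service blob `
    -ResourceName "mycontainer"

# Create (or update) the local user with that permission scope and home directory.
Set-AzStorageLocalUser -ResourceGroupName $resourceGroupName `
    -StorageAccountName $storageAccountName -UserName "sftpuser" `
    -HomeDirectory "mycontainer" -PermissionScope $scope

# Generate an SSH password for the local user; note the returned value is shown once.
New-AzStorageLocalUserSshPassword -ResourceGroupName $resourceGroupName `
    -StorageAccountName $storageAccountName -UserName "sftpuser"
```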
### [Azure CLI](#tab/azure-cli)
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Connect-AzAccount
# Define parameters # $StorageAccountName is the name of an existing storage account that you want to join to AD
-# $SamAccountName is an AD object, see https://docs.microsoft.com/en-us/windows/win32/adschema/a-samaccountname
+# $SamAccountName is the name of the AD object to be created, which AD uses as the logon name for the object. See https://docs.microsoft.com/en-us/windows/win32/adschema/a-samaccountname
# for more information. $SubscriptionId = "<your-subscription-id-here>" $ResourceGroupName = "<resource-group-name-here>"
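For context, here's a hedged sketch of how these parameters are typically passed to `Join-AzStorageAccount` from the AzFilesHybrid module later in the script; the OU distinguished name below is a placeholder.

```powershell
# Register the storage account with AD DS as a computer account (a sketch;
# the OU distinguished name is a placeholder for your own OU).
Join-AzStorageAccount `
    -ResourceGroupName $ResourceGroupName `
    -StorageAccountName $StorageAccountName `
    -SamAccountName $SamAccountName `
    -DomainAccountType "ComputerAccount" `
    -OrganizationalUnitDistinguishedName "OU=StorageAccounts,DC=contoso,DC=com"
```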
storage Storage How To Use Files Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-windows.md
description: Learn to use Azure file shares with Windows and Windows Server. Use
Previously updated : 09/10/2021 Last updated : 05/31/2022
To get this script:
1. Select **File shares**. 1. Select the file share you'd like to mount.
- :::image type="content" source="media/storage-how-to-use-files-windows/select-file-shares.png" alt-text="Screenshot of file shares blade, file share is highlighted.":::
+ :::image type="content" source="media/storage-how-to-use-files-windows/select-file-shares.png" alt-text="Screenshot of file shares blade, file share is highlighted." lightbox="media/storage-how-to-use-files-windows/select-file-shares.png":::
1. Select **Connect**.
You have now mounted your Azure file share.
### Mount the Azure file share with File Explorer > [!Note]
> The following instructions are shown on Windows 10 and may differ slightly on older releases.
-1. Open File Explorer. This can be done by opening from the Start Menu, or by pressing Win+E shortcut.
+1. Open File Explorer from the Start Menu, or by pressing the Win+E shortcut.
1. Navigate to **This PC** on the left-hand side of the window. This will change the menus available in the ribbon. Under the Computer menu, select **Map network drive**.
-
- ![A screenshot of the "Map network drive" drop-down menu](./media/storage-how-to-use-files-windows/1_MountOnWindows10.png)
-1. Select the drive letter and enter the UNC path, the UNC path format is `\\<storageAccountName>.file.core.windows.net\<fileShareName>`. For example: `\\anexampleaccountname.file.core.windows.net\example-share-name`.
-
- ![A screenshot of the "Map Network Drive" dialog](./media/storage-how-to-use-files-windows/2_MountOnWindows10.png)
+ :::image type="content" source="media/storage-how-to-use-files-windows/1_MountOnWindows10.png" alt-text="Screenshot of the Map network drive drop-down menu.":::
-1. Use the storage account name prepended with `AZURE\` as the username and a storage account key as the password.
-
- ![A screenshot of the network credential dialog](./media/storage-how-to-use-files-windows/3_MountOnWindows10.png)
+1. Select the drive letter and enter the UNC path to your Azure file share. The UNC path format is `\\<storageAccountName>.file.core.windows.net\<fileShareName>`. For example: `\\anexampleaccountname.file.core.windows.net\file-share-name`. Check the **Connect using different credentials** checkbox. Select **Finish**.
+
+ :::image type="content" source="media/storage-how-to-use-files-windows/2_MountOnWindows10.png" alt-text="Screenshot of the Map Network Drive dialog.":::
+
+1. Select **More choices** > **Use a different account**. Under **Email address**, use the storage account name, and use a storage account key as the password. Select **OK**.
+
+ :::image type="content" source="media/storage-how-to-use-files-windows/credentials-use-a-different-account.png" alt-text="Screenshot of the network credential dialog selecting use a different account.":::
1. Use Azure file share as desired.
-
- ![Azure file share is now mounted](./media/storage-how-to-use-files-windows/4_MountOnWindows10.png)
-1. When you are ready to dismount the Azure file share, you can do so by right-clicking on the entry for the share under the **Network locations** in File Explorer and selecting **Disconnect**.
+ :::image type="content" source="media/storage-how-to-use-files-windows/4_MountOnWindows10.png" alt-text="Screenshot showing that the Azure file share is now mounted.":::
+
+1. When you're ready to dismount the Azure file share, right-click on the entry for the share under the **Network locations** in File Explorer and select **Disconnect**.
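For reference, here's a minimal PowerShell sketch of the same mount and dismount, using the placeholder account and share names from the steps above; the `AZURE\` username convention and the key value are assumptions to replace with your own:

```powershell
# Hypothetical values -- substitute your storage account name, share name, and account key.
$storageAccountName = "anexampleaccountname"
$fileShareName = "file-share-name"
$secureKey = ConvertTo-SecureString -String "<storage-account-key>" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential("AZURE\$storageAccountName", $secureKey)

# Map the share to drive Z: and keep the mapping across sign-ins.
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\$storageAccountName.file.core.windows.net\$fileShareName" -Credential $credential -Persist

# Dismount when you're done.
Remove-PSDrive -Name Z
```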
### Accessing share snapshots from Windows
-If you have taken a share snapshot, either manually or automatically through a script or service like Azure Backup, you can view previous versions of a share, a directory, or a particular file from file share on Windows. You can take a share snapshot using [Azure PowerShell](./storage-how-to-use-files-portal.md), [Azure CLI](./storage-how-to-use-files-portal.md), or the [Azure portal](storage-how-to-use-files-portal.md).
+If you've taken a share snapshot, either manually or automatically through a script or service like Azure Backup, you can view previous versions of a share, a directory, or a particular file from your file share on Windows. You can take a share snapshot using [Azure PowerShell](./storage-how-to-use-files-portal.md), [Azure CLI](./storage-how-to-use-files-portal.md), or the [Azure portal](storage-how-to-use-files-portal.md).
#### List previous versions Browse to the item or parent item that needs to be restored. Double-click to go to the desired directory. Right-click and select **Properties** from the menu.
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
To create your data warehouse solution, you can choose from different kinds of i
| ![Birst](./media/business-intelligence/birst_logo.png) |**Birst**<br>Birst connects the entire organization through a network of interwoven virtualized BI instances on-top of a shared common analytical fabric|[Product page](https://www.birst.com/)<br> | | ![Count](./media/business-intelligence/count-logo.png) |**Count**<br> Count is the next generation SQL editor, giving you the fastest way to explore and share your data with your team. At Count's core is a data notebook built for SQL, allowing you to structure your code, iterate quickly and stay in flow. Visualize your results instantly or customize them to build beautifully detailed charts in just a few clicks. Instantly share anything from one-off queries to full interactive data stories built off any of your Azure Synapse data sources. |[Product page](https://count.co/)<br>| | ![Dremio](./media/business-intelligence/dremio-logo.png) |**Dremio**<br> Analysts and data scientists can discover, explore and curate data using Dremio's intuitive UI, while IT maintains governance and security. Dremio makes it easy to join ADLS with Blob Storage, Azure SQL Database, Azure Synapse SQL, HDInsight, and more. With Dremio, Power BI analysts can search for new datasets stored on ADLS, immediately access that data in Power BI with no preparation by IT, create visualizations, and iteratively refine reports in real-time. And analysts can create new reports that combine data between ADLS and other databases. |[Product page](https://www.dremio.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dremiocorporation.dremio_ce)<br> |
-| ![Dundas](./media/business-intelligence/dundas_software_logo.png) |**Dundas BI**<br>Dundas Data Visualization is a leading, global provider of Business Intelligence and Data Visualization software. Dundas dashboards, reporting, and visual data analytics provide seamless integration into business applications, enabling better decisions and faster insights.|[Product page](https://www.dundas.com/dundas-bi)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dundas.dundas-bi)<br> |
+| ![Dundas](./media/business-intelligence/dundas_software_logo.png) |**Dundas BI**<br>Dundas Data Visualization is a leading, global provider of Business Intelligence and Data Visualization software. Dundas dashboards, reporting, and visual data analytics provide seamless integration into business applications, enabling better decisions and faster insights.|[Product page](https://www.dundas.com/dundas-bi)<br> |
| ![IBM Cognos](./media/business-intelligence/cognos_analytics_logo.png) |**IBM Cognos Analytics**<br>Cognos Analytics includes self-service capabilities that make it simple, clear, and easy to use, whether you're an experienced business analyst examining a vast supply chain, or a marketer optimizing a campaign. Cognos Analytics uses AI and other capabilities to guide data exploration. It makes it easier for users to get the answers they need|[Product page](https://www.ibm.com/products/cognos-analytics)<br>| | ![Information Builders](./media/business-intelligence/informationbuilders_logo.png) |**Information Builders (WebFOCUS)**<br>WebFOCUS business intelligence helps companies use data more strategically across and beyond the enterprise. It allows users and administrators to rapidly create dashboards that combine content from multiple data sources and formats. It also provides robust security and comprehensive governance that enables seamless and secure sharing of any BI and analytics content|[Product page](https://www.informationbuilders.com/products/bi-and-analytics-platform)<br> | | ![Jinfonet](./media/business-intelligence/jinfonet_logo.png) |**Jinfonet JReport**<br>JReport is an embeddable BI solution for the enterprise. The solution offers capabilities such as report creation, dashboards, and data analysis on cloud, big data, and transactional data sources. By visualizing data, you can conduct your own reporting and data discovery for agile, on-the-fly decision making. |[Product page](https://www.logianalytics.com/jreport/)<br> |
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
The following table lists the built-in roles and the actions/permissions that ea
Role|Actions --|--
-Synapse Administrator|workspaces/read</br>workspaces/roleAssignments/write, delete</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/linkedServices/useSecret/action</br>workspaces/credentials/useSecret/action|
+Synapse Administrator|workspaces/read</br>workspaces/roleAssignments/write, delete</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/linkedServices/useSecret/action</br>workspaces/credentials/useSecret/action</br>workspaces/linkConnections/read</br>workspaces/linkConnections/write</br>workspaces/linkConnections/delete</br>workspaces/linkConnections/useCompute/action|
|Synapse Apache Spark Administrator|workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/notebooks/viewOutputs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete| |Synapse SQL Administrator|workspaces/read</br>workspaces/artifacts/read</br>workspaces/sqlScripts/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete|
-|Synapse Contributor|workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action|
+|Synapse Contributor|workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/linkConnections/read</br>workspaces/linkConnections/write</br>workspaces/linkConnections/delete</br>workspaces/linkConnections/useCompute/action|
|Synapse Artifact Publisher|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action| |Synapse Artifact User|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action|
-|Synapse Compute Operator |workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action|
+|Synapse Compute Operator |workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/linkConnections/read</br>workspaces/linkConnections/useCompute/action|
|Synapse Monitoring Operator |workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/bigDataPools/viewLogs/action| |Synapse Credential User|workspaces/read</br>workspaces/linkedServices/useSecret/action</br>workspaces/credentials/useSecret/action| |Synapse Linked Data Manager|workspaces/read</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete|
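To put these built-in roles to use, here's a minimal Az PowerShell sketch assigning one of them; the workspace name and Azure AD object ID are placeholder values:

```powershell
# Placeholder values -- replace with your workspace name and the user's Azure AD object ID.
New-AzSynapseRoleAssignment -WorkspaceName "myWorkspace" `
    -RoleDefinitionName "Synapse Compute Operator" `
    -ObjectId "00000000-0000-0000-0000-000000000000"
```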
workspaces/bigDataPools/useCompute/action|Synapse Administrator</br>Synapse Apac
workspaces/bigDataPools/viewLogs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator workspaces/integrationRuntimes/useCompute/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator workspaces/integrationRuntimes/viewLogs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator
+workspaces/linkConnections/read|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator
+workspaces/linkConnections/useCompute/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator
workspaces/artifacts/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User workspaces/notebooks/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher workspaces/sparkJobDefinitions/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
workspaces/sqlScripts/write, delete|Synapse Administrator</br>Synapse SQL Admini
workspaces/kqlScripts/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher workspaces/dataFlows/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher workspaces/pipelines/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
+workspaces/linkConnections/write, delete|Synapse Administrator</br>Synapse Contributor
workspaces/triggers/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher workspaces/datasets/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher workspaces/libraries/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
The result of this query might look like the following table:
| bb1206963e831f1… | The Use of Convalescent Sera in Immune-E… | `{"title":"The Use of Convalescent…` | `[{"first":"Antonio","last":"Lavazza","suffix":"", …` | | bb378eca9aac649… | Tylosema esculentum (Marama) Tuber and B… | `{"title":"Tylosema esculentum (Ma…` | `[{"first":"Walter","last":"Chingwaru","suffix":"",…` |
-Learn more about analyzing [complex data types in Azure Synapse Link](../how-to-analyze-complex-schema.md) and [nested structures in a serverless SQL pool](query-parquet-nested-types.md).
+Learn more about analyzing [complex data types like Parquet files and containers in Azure Synapse Link for Azure Cosmos DB](../how-to-analyze-complex-schema.md) or [nested structures in a serverless SQL pool](query-parquet-nested-types.md).
> [!IMPORTANT] > If you see unexpected characters in your text like `MÃ©lade` instead of `Mélade`, then your database collation isn't set to [UTF-8](/sql/relational-databases/collations/collation-and-unicode-support#utf8) collation.
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/language-packs.md
Title: Install language packs on Windows 10 VMs in Azure Virtual Desktop - Azure
description: How to install language packs for Windows 10 multi-session VMs in Azure Virtual Desktop. Previously updated : 04/01/2022 Last updated : 06/01/2022
You need the following things to customize your Windows 10 Enterprise multi-sess
- [Windows 10, version 20H2 Inbox Apps ISO](https://software-download.microsoft.com/download/pr/19041.508.200905-1327.vb_release_svc_prod1_amd64fre_InboxApps.iso) - [Windows 10, version 21H1 or 21H2 Inbox Apps ISO](https://software-download.microsoft.com/download/sg/19041.928.210407-2138.vb_release_svc_prod1_amd64fre_InboxApps.iso)
- - If you use Local Experience Pack (LXP) ISO files to localize your images, you will also need to download the appropriate LXP ISO for the best language experience
+ - If you use Local Experience Pack (LXP) ISO files to localize your images, you'll also need to download the appropriate LXP ISO for the best language experience
- If you're using Windows 10, version 1903 or 1909: - [Windows 10, version 1903 or 1909 LXP ISO](https://software-download.microsoft.com/download/pr/Win_10_1903_32_64_ARM64_MultiLng_LngPkAll_LXP_ONLY.iso) - If you're using Windows 10, version 2004, 20H2, or 21H1, use the information in [Adding languages in Windows 10: Known issues](/windows-hardware/manufacture/desktop/language-packs-known-issue) to figure out which of the following LXP ISOs is right for you:
To create the content repository for language packages and FODs and a repository
To create a custom Windows 10 Enterprise multi-session image manually: 1. Deploy an Azure VM, then go to the Azure Gallery and select the current version of Windows 10 Enterprise multi-session you're using.
-2. After you've deployed the VM, connect to it using RDP as a local admin.
-3. Make sure your VM has all the latest Windows Updates. Download the updates and restart the VM, if necessary.
-4. Connect to the language package, FOD, and Inbox Apps file share repository and mount it to a letter drive (for example, drive E).
+1. After you've deployed the VM, connect to it using RDP as a local admin.
+1. Make sure your VM has all the latest Windows Updates. Download the updates and restart the VM, if necessary.
+
+ > [!IMPORTANT]
+ > After you install a language pack, you have to reinstall the latest cumulative update that is installed on your image. If you do not reinstall the latest cumulative update, you may encounter errors. If the latest cumulative update is already installed, Windows Update does not offer it again; you have to manually reinstall it. For more information, see [Languages overview](/windows-hardware/manufacture/desktop/languages-overview.md?view=windows-10&preserve-view=true#considerations).
+
+1. Connect to the language package, FOD, and Inbox Apps file share repository and mount it to a letter drive (for example, drive E).
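After mounting the repository, here's a minimal PowerShell sketch of installing a single language from it, assuming the repository is mounted as drive E: and Spanish (es-es) is the target language; the folder and package file names are illustrative and must match the files in your repository:

```powershell
# Illustrative package paths -- match these to the actual .cab files in your repository.
Add-WindowsPackage -Online -PackagePath "E:\LanguagePacks\Microsoft-Windows-Client-Language-Pack_x64_es-es.cab"
Add-WindowsPackage -Online -PackagePath "E:\FODs\Microsoft-Windows-LanguageFeatures-Basic-es-es-Package~31bf3856ad364e35~amd64~~.cab"

# Make the new language selectable in Settings.
$languageList = Get-WinUserLanguageList
$languageList.Add("es-es")
Set-WinUserLanguageList -LanguageList $languageList -Force
```

Remember to reinstall the latest cumulative update afterwards, as called out in the note above.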
## Create a custom Windows 10 Enterprise multi-session image automatically
If you'd rather install languages through an automated process, you can set up a
```powershell ########################################################
-## Add Languages to running Windows Image for Capture##
+## Add Languages to running Windows Image for Capture ##
######################################################## ##Disable Language Pack Cleanup##
The script might take a while depending on the number of languages you need to i
Once the script is finished running, check to make sure the language packs installed correctly by going to **Start** > **Settings** > **Time & Language** > **Language**. If the language files are there, you're all set.
-After adding additional languages to the Windows image, the inbox apps are also required to be updated to support the added languages. This can be done by refreshing the pre-installed apps with the content from the inbox apps ISO.
+After you've added additional languages to the Windows image, the inbox apps also need to be updated to support the added languages. You can update them by refreshing the pre-installed apps with the content from the inbox apps ISO.
To perform this refresh in an environment where the VM doesn't have internet access, you can use the following PowerShell script template to automate the process and update only installed versions of inbox apps. ```powershell
To run sysprep:
2. Stop the VM, then capture it in a managed image by following the instructions in [Create a managed image of a generalized VM in Azure](../virtual-machines/windows/capture-image-resource.md).
-3. You can now use the customized image to deploy a Azure Virtual Desktop host pool. To learn how to deploy a host pool, see [Tutorial: Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md).
+3. You can now use the customized image to deploy an Azure Virtual Desktop host pool. To learn how to deploy a host pool, see [Tutorial: Create a host pool with the Azure portal](create-host-pools-azure-marketplace.md).
## Enable languages in Windows settings app
virtual-desktop Set Up Customize Master Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-customize-master-image.md
Title: Prepare and customize a VHD image of Azure Virtual Desktop - Azure
description: How to prepare, customize and upload a Azure Virtual Desktop image to Azure. Previously updated : 01/19/2021 Last updated : 06/01/2022
The second option is to create the image locally by downloading the image, provi
### Local image creation
-Once you've downloaded the image to a local location, open **Hyper-V Manager** to create a VM with the VHD you copied. The following instructions are a simple version, but you can find more detailed instructions in [Create a virtual machine in Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v/).
+You can download an image following the instructions in [Export an image version to a managed disk](../virtual-machines/managed-disk-from-image-version.md) and then [Download a Windows VHD from Azure](../virtual-machines/windows/download-vhd.md). Once you've downloaded the image to a local location, open **Hyper-V Manager** to create a VM with the VHD you copied. The following instructions are a simple version, but you can find more detailed instructions in [Create a virtual machine in Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v/).
To create a VM with the copied VHD:
If you create a VM from an existing VHD, it creates a dynamic disk by default. I
> [!div class="mx-imgBorder"] > ![A screenshot of the Edit Disk option.](media/35772414b5a0f81f06f54065561d1414.png)
-You can also run the following PowerShell cmdlet to change the disk to a fixed disk.
+You can also run the following PowerShell command to change the disk to a fixed disk.
```powershell Convert-VHD -Path c:\test\MY-VM.vhdx -DestinationPath c:\test\MY-NEW-VM.vhd -VHDType Fixed
To disable Automatic Updates via local Group Policy:
1. Open **Local Group Policy Editor\\Administrative Templates\\Windows Components\\Windows Update**. 2. Right-click **Configure Automatic Update** and set it to **Disabled**.
-You can also run the following command on a command prompt to disable Automatic Updates.
+You can also run the following command from an elevated PowerShell prompt to disable Automatic Updates.
-```cmd
-reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoUpdate /t REG_DWORD /d 1 /f
+```powershell
+New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" -Name NoAutoUpdate -PropertyType DWORD -Value 1 -Force
``` ### Specify Start layout for Windows 10 PCs (optional)
-Run this command to specify a Start layout for Windows 10 PCs.
+Run the following command from an elevated PowerShell prompt to specify a Start layout for Windows 10 PCs.
-```cmd
-reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer" /v SpecialRoamingOverrideAllowed /t REG_DWORD /d 1 /f
+```powershell
+New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer" -Name SpecialRoamingOverrideAllowed -PropertyType DWORD -Value 1 -Force
``` ### Set up time zone redirection
To redirect time zones:
4. In the **Group Policy Management Editor**, navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**. 5. Enable the **Allow time zone redirection** setting.
-You can also run this command on the master image to redirect time zones:
+You can also run the following command from an elevated PowerShell prompt to redirect time zones:
-```cmd
-reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v fEnableTimeZoneRedirection /t REG_DWORD /d 1 /f
+```powershell
+New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" -Name fEnableTimeZoneRedirection -PropertyType DWORD -Value 1 -Force
``` ### Disable Storage Sense
For Azure Virtual Desktop session hosts that use Windows 10 Enterprise or Window
> [!div class="mx-imgBorder"] > ![A screenshot of the Storage menu under Settings. The "Storage sense" option is turned off.](media/storagesense.png)
-You can also change the setting with the registry by running the following command:
+You can also run the following command from an elevated PowerShell prompt to disable Storage Sense:
-```cmd
-reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy" /v 01 /t REG_DWORD /d 0 /f
+```powershell
+New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy" -Name 01 -PropertyType DWORD -Value 0 -Force
``` ### Include additional language support
This article doesn't cover how to configure language and regional support. For m
### Other applications and registry configuration
-This section covers application and operating system configuration. All configuration in this section is done through registry entries that can be executed by command-line and regedit tools.
-
->[!NOTE]
->You can implement best practices in configuration with either Group Policy Objects (GPOs) or registry imports. The administrator can choose either option based on their organization's requirements.
+This section covers application and operating system configuration. All configuration in this section is done through adding, changing, or removing registry entries.
-For feedback hub collection of telemetry data on Windows 10 Enterprise multi-session, run this command:
+For feedback hub collection of telemetry data on Windows 10 Enterprise multi-session, run the following command from an elevated PowerShell prompt:
-```cmd
-reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection" /v AllowTelemetry /t REG_DWORD /d 3 /f
+```powershell
+New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\DataCollection" -Name AllowTelemetry -PropertyType DWORD -Value 3 -Force
```
-Run the following command to fix Watson crashes:
+To prevent Watson crashes, run the following command from an elevated PowerShell prompt:
-```cmd
-remove CorporateWerServer* from Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting
+```powershell
+Remove-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\Windows Error Reporting" -Name Corporate* -Force -Verbose
```
-Enter the following commands into the registry editor to fix 5k resolution support. You must run the commands before you can enable the side-by-side stack.
+To enable 5k resolution support, run the following commands from an elevated PowerShell prompt. You must run the commands before you can enable the side-by-side stack.
-```cmd
-reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v MaxMonitors /t REG_DWORD /d 4 /f
-reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v MaxXResolution /t REG_DWORD /d 5120 /f
-reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v MaxYResolution /t REG_DWORD /d 2880 /f
-
-reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\rdp-sxs" /v MaxMonitors /t REG_DWORD /d 4 /f
-reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\rdp-sxs" /v MaxXResolution /t REG_DWORD /d 5120 /f
-reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\rdp-sxs" /v MaxYResolution /t REG_DWORD /d 2880 /f
+```powershell
+New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" -Name MaxMonitors -PropertyType DWORD -Value 4 -Force
+New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" -Name MaxXResolution -PropertyType DWORD -Value 5120 -Force
+New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" -Name MaxYResolution -PropertyType DWORD -Value 2880 -Force
+New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\rdp-sxs" -Name MaxMonitors -PropertyType DWORD -Value 4 -Force
+New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\rdp-sxs" -Name MaxXResolution -PropertyType DWORD -Value 5120 -Force
+New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\rdp-sxs" -Name MaxYResolution -PropertyType DWORD -Value 2880 -Force
``` ## Prepare the image for upload to Azure
virtual-machines Dedicated Host Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-migration-guide.md
On Azure portal, go through the following steps:
#### Delete the old Dedicated Host
-Once all VMs have been migrated from your old Dedicated Host to the target Dedicated Host, [delete the old Dedicated Host](dedicated-hosts-how-to.md#deleting-hosts).
+Once all VMs have been migrated from your old Dedicated Host to the target Dedicated Host, [delete the old Dedicated Host](dedicated-hosts-how-to.md#deleting-a-host).
## Help and support
virtual-machines Dedicated Hosts How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts-how-to.md
Tags : {}
-## Deleting hosts
+## Deleting a host
-being charged for your dedicated hosts even when no virtual machines are deployed. You should delete any hosts you're currently not using to save costs.
+You're being charged for your dedicated host even when no virtual machines are deployed on the host. You should delete any hosts you're currently not using to save costs.
You can only delete a host when there are no longer any virtual machines using it.
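If you'd rather script the cleanup, here's a minimal Azure PowerShell sketch; the resource group, host group, and host names are hypothetical:

```powershell
# Hypothetical names -- replace with your own resource group, host group, and host.
$dedicatedHost = Get-AzHost -ResourceGroupName "myResourceGroup" -HostGroupName "myHostGroup" -Name "myOldHost"

# Only delete the host if no virtual machines are still placed on it.
if (-not $dedicatedHost.VirtualMachines) {
    Remove-AzHost -ResourceGroupName "myResourceGroup" -HostGroupName "myHostGroup" -Name "myOldHost"
}
```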
virtual-machines Lasv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lasv3-series.md
+
+ Title: Lasv3-series - Azure Virtual Machines
+description: Specifications for the Lasv3-series of Azure Virtual Machines (Azure VMs).
++++ Last updated : 06/01/2022 +
+
+
+# Lasv3-series
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+The Lasv3-series of Azure Virtual Machines (Azure VMs) features high-throughput, low-latency, directly mapped local NVMe storage. These VMs run on an AMD 3rd Generation EPYC&trade; 7763v processor in a multi-threaded configuration with an L3 cache of up to 256 MB that can achieve a boosted maximum frequency of 3.5 GHz. The Lasv3-series VMs are available in sizes from 8 to 80 vCPUs in a simultaneous multi-threading configuration. There are 8 GiB of memory per vCPU, and one 1.92 TB NVMe SSD device per 8 vCPUs, with up to 19.2 TB (10x1.92 TB) available on the L80as_v3 size.
+
+> [!NOTE]
+> The Lasv3-series VMs are optimized to use the local disk on the node attached directly to the VM rather than using [durable data disks](disks-types.md). This method allows for greater IOPS and throughput for your workloads. The Lsv3, Lasv3, Lsv2, and Ls-series don't support the creation of a local cache to increase the IOPS achievable by durable data disks.
+>
+> The high throughput and IOPS of the local disk makes the Lasv3-series VMs ideal for NoSQL stores such as Apache Cassandra and MongoDB. These stores replicate data across multiple VMs to achieve persistence in the event of the failure of a single VM.
+>
+> To learn more, see how to optimize performance on Lasv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md).
+
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Not Supported
+- [Live Migration](maintenance-and-updates.md): Not Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Supported
+- [VM Generation Support](generation-2.md): Generation 1 and 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported
+
+| Size | vCPU | Memory (GiB) | Temp disk (GiB) | NVMe Disks | NVMe Disk throughput (Read IOPS/MBps) | Uncached data disk throughput (IOPS/MBps) | Max burst uncached data disk throughput (IOPS/MBps)| Max Data Disks | Max NICs | Expected network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|---|---|
+| Standard_L8as_v3 | 8 | 64 | 80 | 1x1.92 TB | 400000/2000 | 12800/200 | 20000/1280 | 16 | 4 | 12500 |
+| Standard_L16as_v3 | 16 | 128 | 160 | 2x1.92 TB | 800000/4000 | 25600/384 | 40000/1280 | 32 | 8 | 12500 |
+| Standard_L32as_v3 | 32 | 256 | 320 | 4x1.92 TB | 1.5M/8000 | 51200/768 | 80000/1600 | 32 | 8 | 16000 |
+| Standard_L48as_v3 | 48 | 384 | 480 | 6x1.92 TB | 2.2M/14000 | 76800/1152 | 80000/2000 | 32 | 8 | 24000 |
+| Standard_L64as_v3 | 64 | 512 | 640 | 8x1.92 TB | 2.9M/16000 | 80000/1280 | 80000/2000 | 32 | 8 | 32000 |
+| Standard_L80as_v3 | 80 | 640 | 800 | 10x1.92 TB | 3.8M/20000 | 80000/1400 | 80000/2000 | 32 | 8 | 32000 |
+
+1. **Temp disk**: Lasv3-series VMs have a standard SCSI-based temp resource disk for use by the OS paging or swap file (`D:` on Windows, `/dev/sdb` on Linux). This disk provides 80 GiB of storage, 4000 IOPS, and 80 MBps transfer rate for every 8 vCPUs. For example, Standard_L80as_v3 provides 800 GiB at 40000 IOPS and 800 MBps. This configuration ensures that the NVMe drives can be fully dedicated to application use. This disk is ephemeral, and all data is lost on stop or deallocation.
+1. **NVMe Disks**: NVMe disk throughput can go higher than the specified numbers. However, higher performance isn't guaranteed. Local NVMe disks are ephemeral. Data is lost on these disks if you stop or deallocate your VM. Local NVMe disks aren't encrypted by [Azure Storage encryption](disk-encryption.md), even if you enable [encryption at host](disk-encryption.md#supported-vm-sizes).
+1. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lasv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on Lasv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization.
+1. **Max burst uncached data disk throughput**: Lasv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time.
+
+> [!NOTE]
+> Lasv3-series VMs don't provide a host cache for the data disk because this configuration doesn't benefit the Lasv3 workloads.
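As a quick sanity check after deployment, here's a PowerShell sketch (using the built-in Storage cmdlets on a Windows-based Lasv3 VM) to confirm the directly mapped NVMe devices:

```powershell
# List locally attached NVMe devices; the count and sizes should match the size table above.
Get-PhysicalDisk |
    Where-Object BusType -eq "NVMe" |
    Select-Object FriendlyName, BusType, @{ Name = "SizeTB"; Expression = { [math]::Round($_.Size / 1TB, 2) } }
```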
++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+Estimate costs with the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/).
+
+For more information on disk types, see [Disk types](./disks-types.md#ultra-disks).
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Linux Vm Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux-vm-connect.md
In Azure, there are multiple ways to connect to a Linux virtual machine. The most common practice is to use the Secure Shell Protocol (SSH) via any standard SSH-aware client commonly found in Linux; on Windows, you can use [Windows Subsystem for Linux](/windows/wsl/about) or any local terminal. You can also use [Azure Cloud Shell](../cloud-shell/overview.md) from any browser.
-This document describes how to connect, via SSH, to a VM that has a public IP. If you need to connect to a VM without a public IP see [Azure Bastion Service](../bastion/bastion-overview.md)
+This document describes how to connect, via SSH, to a VM that has a public IP. If you need to connect to a VM without a public IP, see [Azure Bastion Service](../bastion/bastion-overview.md).
## Prerequisites -- You need an SSH key pair. If you don't already have one Azure will create a key pair during the deployment process. If you need help with creating one manually, see [Create and use an SSH public-private key pair for Linux VMs in Azure](./linux/mac-create-ssh-keys.md).--- In order to connect to a Linux Virtual Machine you need the appropriate port open: normally this will be port 22. The following instructions assume port 22 but the process is the same for other port numbers. You can validate an appropriate port is open for SSH using the troubleshooter or by checking manually in your VM settings. To check if port 22 is open:
+- You need an SSH key pair. If you don't already have one, Azure will create a key pair during the deployment process. If you need help with creating one manually, see [Create and use an SSH public-private key pair for Linux VMs in Azure](./linux/mac-create-ssh-keys.md).
+- You need an existing Network Security Group (NSG). Most VMs will have an NSG by default, but if you don't already have one you can create one and attach it manually. For more information, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md).
+- To connect to a Linux VM, you need the appropriate port open. Typically this is port 22. The following instructions assume port 22, but the process is the same for other port numbers. You can validate that an appropriate port is open for SSH using the troubleshooter or by checking manually in your VM settings (a PowerShell sketch for adding such a rule follows these steps). To check if port 22 is open:
1. On the page for the VM, select **Networking** from the left menu. 1. On the **Networking** page, check to see if there is a rule which allows TCP on port 22 from the IP address of the computer you are using to connect to the VM. If the rule exists, you can move to the next section.
This document describes how to connect, via SSH, to a VM that has a public IP. I
:::image type="content" source="media/linux-vm-connect/check-rule.png" alt-text="Screenshot showing how to check to see if there is already a rule allowing S S H connections."::: 1. If there isn't a rule, add one by selecting **Add inbound port rule**.
- 1. From the **Service** dropdown select **SSH**.
+ 1. For **Service**, select **SSH** from the dropdown.
:::image type="content" source="media/linux-vm-connect/create-rule.png" alt-text="Screenshot showing where to choose S S H when creating a new N S G rule."::: 1. Edit **Priority** and **Source** if necessary 1. For **Name**, type *SSH*.
- 1. When you are done, select **Add**.
+ 1. When you're done, select **Add**.
1. You should now have an SSH rule in the table of inbound port rules. - Your VM must have a public IP address. To check if your VM has a public IP address, select **Overview** from the left menu and look at the **Networking** section. If you see an IP address next to **Public IP address**, then your VM has a public IP
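If you prefer to script the rule instead of using the portal, here's a minimal Azure PowerShell sketch; the resource group and NSG names are hypothetical, and the priority is an example value:

```powershell
# Hypothetical names -- replace with your resource group and NSG.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Name "myNsg"

# Allow inbound TCP 22 from your client IP only, then save the change.
$nsg | Add-AzNetworkSecurityRuleConfig -Name "SSH" -Access Allow -Protocol Tcp -Direction Inbound `
    -Priority 300 -SourceAddressPrefix "<your-client-ip>" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "22" | Set-AzNetworkSecurityGroup
```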
This document describes how to connect, via SSH, to a VM that has a public IP. I
## Connect to the VM
-Once the above prerequisites are met, you are ready to connect to your VM. Open your SSH client of choice.
+Once the above prerequisites are met, you're ready to connect to your VM. Open your SSH client of choice.
-- If you are using Linux or macOS this is most commonly terminal or shell.
+- If you're using Linux or macOS, the SSH client is usually terminal or shell.
- For a Windows machine this might be [WSL](/windows/wsl/about), or any local terminal like [PowerShell](/powershell/scripting/overview). If you do not have an SSH client you can [install WSL](/windows/wsl/install), or consider using [Azure Cloud Shell](../cloud-shell/overview.md). > [!NOTE]
Once the above prerequisites are met, you are ready to connect to your VM. Open
## [WSL, macOS, or native Linux client](#tab/Linux) ### SSH with a new key pair
-1. Ensure your public and private keys are in the correct directory. This is usually the ~/.ssh directory.
+1. Ensure your public and private keys are in the correct directory. The directory is usually `~/.ssh`.
If you generated keys manually or generated them with the CLI, then the keys are probably already there. However, if you downloaded them in pem format from the Azure portal, you may need to move them to the right location. This can be done with the following syntax: `mv PRIVATE_KEY_SOURCE PRIVATE_KEY_DESTINATION` For example, if the key is in the `Downloads` folder, and `myKey.pem` is the name of your SSH key, type: ```bash mv /Downloads/myKey.pem ~/.ssh
- ```
-2. Ensure you have read-only access to the private Key by running
+ ```
+ > [!NOTE]
+ > If you're using WSL, local files are found in the `/mnt/c/` directory. Accordingly, the path to the downloads folder and SSH key would be `/mnt/c/Users/{USERNAME}/Downloads/myKey.pem`.
+
+2. Ensure you have read-only access to the private key by running
```bash chmod 400 ~/.ssh/myKey.pem ```
Once the above prerequisites are met, you are ready to connect to your VM. Open
``` 4. Validate the returned fingerprint.
- If you have never connected to this VM before you will be asked to verify the hosts fingerprint. It is tempting to simply accept the fingerprint presented, however, this exposes you to a potential person in the middle attack. You should always validate the hosts fingerprint. You only need to do this on the first time you connect from a client. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
+ If you've never connected to this VM before, you'll be asked to verify the host's fingerprint. It's tempting to simply accept the fingerprint presented, but that exposes you to a potential person-in-the-middle attack. You should always validate the host's fingerprint. You only need to do this the first time you connect from a client. To get the host fingerprint via the portal, use the Run Command feature to execute the command:
```bash ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}' ```
-5. Success! You should now be connected to your VM. If you are unable to connect, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
+5. Success! You should now be connected to your VM. If you're unable to connect, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
### SSH With existing public key 1. Run the following command in your SSH client. In this example, *20.51.230.13* is the public IP Address of your VM and *azureuser* is the username you created when you created the VM.
Once the above prerequisites are met, you are ready to connect to your VM. Open
ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}' ```
-3. Success! You should now be connected to your VM. If you are unable to connect, see our troubleshooting guide [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
+3. Success! You should now be connected to your VM. If you're unable to connect, see our troubleshooting guide [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
### Password authentication
Once the above prerequisites are met, you are ready to connect to your VM. Open
ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}' ```
-3. Success! You should now be connected to your VM. If you are unable to connect using the correct method above, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
+3. Success! You should now be connected to your VM. If you're unable to connect using the correct method above, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
## [Windows 10 Command Line (cmd.exe, PowerShell etc.)](#tab/Windows)
Once the above prerequisites are met, you are ready to connect to your VM. Open
```bash ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}' ```
-4. Success! You should now be connected to your VM. If you are unable to connect, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
+4. Success! You should now be connected to your VM. If you're unable to connect, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
### Password authentication
Once the above prerequisites are met, you are ready to connect to your VM. Open
ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}' ```
-3. Success! You should now be connected to your VM. If you are unable to connect using the methods above, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
+3. Success! You should now be connected to your VM. If you're unable to connect using the methods above, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
This is the basic template format:
"identity": {}, "properties": { "buildTimeoutInMinutes": <minutes>,
+ "stagingResourceGroup": "/subscriptions/<subscriptionID>/resourceGroups/<stagingResourceGroupName>",
"vmProfile": { "vmSize": "<vmSize>", "proxyVmSize": "<vmSize>",
This is the basic template format:
] }, "source": {},
- "customize": {},
- "distribute": {}
+ "customize": [],
+ "validate": {},
+ "distribute": []
} } ```
There are two ways to add user assigned identities explained below.
### User Assigned Identity for Azure Image Builder image template resource
-Required - For Image Builder to have permissions to read/write images, read in scripts from Azure Storage you must create an Azure User-Assigned Identity, that has permissions to the individual resources. For details on how Image Builder permissions work, and relevant steps, please review the [documentation](image-builder-user-assigned-identity.md).
+Required - For Image Builder to have permissions to read/write images and read in scripts from Azure Storage, you must create an Azure user-assigned identity that has permissions to the individual resources. For details on how Image Builder permissions work, and the relevant steps, review the [documentation](image-builder-user-assigned-identity.md).
```json
For more information on deploying this feature, see [Configure managed identitie
This field is only available in API versions 2021-10-01 and newer.
-Optional - The Image Builder Build VM, that is created by the Image Builder service in your subscription, is used to build and customize the image. For the Image Builder Build VM to have permissions to authenticate with other services like Azure Key Vault in your subscription, you must create one or more Azure User Assigned Identities that have permissions to the individual resources. Azure Image Builder can then associate these User Assigned Identities with the Build VM. Customizer scripts running inside the Build VM can then fetch tokens for these identities and interact with other Azure resources as needed. Please be aware, the user assigned identity for Azure Image Builder must have the "Managed Identity Operator" role assignment on all the user assigned identities for Azure Image Builder to be able to associate them to the build VM.
+Optional - The Image Builder Build VM, which is created by the Image Builder service in your subscription, is used to build and customize the image. For the Image Builder Build VM to have permissions to authenticate with other services like Azure Key Vault in your subscription, you must create one or more Azure user-assigned identities that have permissions to the individual resources. Azure Image Builder can then associate these user-assigned identities with the Build VM. Customizer scripts running inside the Build VM can then fetch tokens for these identities and interact with other Azure resources as needed. Be aware that the user-assigned identity for Azure Image Builder must have the "Managed Identity Operator" role assignment on all the user-assigned identities for Azure Image Builder to be able to associate them to the build VM.
> [!NOTE]
-> Please be aware that multiple identities can be specified for the Image Builder Build VM, including the identity you created for the [image template resource](#user-assigned-identity-for-azure-image-builder-image-template-resource). By default, the identity you created for the image template resource will not automatically be added to the build VM.
+> Be aware that multiple identities can be specified for the Image Builder Build VM, including the identity you created for the [image template resource](#user-assigned-identity-for-azure-image-builder-image-template-resource). By default, the identity you created for the image template resource will not automatically be added to the build VM.
```json "properties": {
The Image Builder Build VM User Assigned Identity:
To learn more, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) and [How to use managed identities for Azure resources on an Azure VM for sign-in](../../active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md).
+## Properties: stagingResourceGroup
+The `stagingResourceGroup` field contains information about the staging resource group that the Image Builder service will create for use during the image build process. The `stagingResourceGroup` is an optional field for anyone who wants more control over the resource group created by Image Builder during the image build process. You can create your own resource group and specify it in the `stagingResourceGroup` section or have Image Builder create one on your behalf.
+
+```json
+ "properties": {
+ "stagingResourceGroup": "/subscriptions/<subscriptionID>/resourceGroups/<stagingResourceGroupName>"
+ }
+```
+
+### Template Creation Scenarios
+
+#### The stagingResourceGroup field is left empty
+If the `stagingResourceGroup` field is not specified or specified with an empty string, the Image Builder service will create a staging resource group with the default name convention "IT_***". The staging resource group will have the default tags applied to it: `createdBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. Also, the default RBAC will be applied to the identity assigned to the Azure Image Builder template resource, which is "Contributor".
+
+#### The stagingResourceGroup field is specified with a resource group that exists
+If the `stagingResourceGroup` field is specified with a resource group that does exist, then the Image Builder service will check to make sure the resource group is empty (no resources inside), in the same region as the image template, and has either "Contributor" or "Owner" RBAC applied to the identity assigned to the Azure Image Builder image template resource. If any of these requirements aren't met, an error is thrown. The staging resource group will have the following tags added to it: `usedBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. Preexisting tags are not deleted.
+
+#### The stagingResourceGroup field is specified with a resource group that DOES NOT exist
+If the `stagingResourceGroup` field is specified with a resource group that does not exist, then the Image Builder service will create a staging resource group with the name provided in the `stagingResourceGroup` field. There will be an error if the given name does not meet Azure naming requirements for resource groups. The staging resource group will have the default tags applied to it: `createdBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. By default the identity assigned to the Azure Image Builder image template resource will have the "Contributor" RBAC applied to it in the resource group.
+
+### Template Deletion
+Any staging resource group created by the Image Builder service will be deleted after the image template is deleted. This includes staging resource groups that were specified in the `stagingResourceGroup` field, but did not exist prior to the image build.
+
+If Image Builder did not create the staging resource group, but it did create resources inside of it, those resources will be deleted after the image template is deleted as long as the Image Builder service has the appropriate permissions or role required to delete resources.
++ ## Properties: source The `source` section contains information about the source image that will be used by Image Builder. Image Builder currently only natively supports creating Hyper-V generation 1 (Gen1) images to the Azure Compute Gallery (SIG) or managed image. If you want to create Gen2 images, you need to use a source Gen2 image and distribute to VHD. Afterwards, you need to create a managed image from the VHD and inject it into the SIG as a Gen2 image.
The `imageId` should be the ResourceId of the managed image. Use `az image list`
Sets the source image as an existing image version in an Azure Compute Gallery. > [!NOTE]
-> The source shared image version must be of a supported OS and the image version must reside in the same region as your Azure Image Builder template, if not, please replicate the image version to the Image Builder Template region.
+> The source shared image version must be of a supported OS and the image version must reside in the same region as your Azure Image Builder template, if not, replicate the image version to the Image Builder Template region.
```json
If you find you need more time for customizations to complete, set this to what
Image Builder supports multiple `customizers`. Customizers are functions that are used to customize your image, such as running scripts, or rebooting servers. When using `customize`: -- You can use multiple customizers, but they must have a unique `name`.
+- You can use multiple customizers.
- Customizers execute in the order specified in the template. - If one customizer fails, then the whole customization component will fail and report back an error.-- It is strongly advised you test the script thoroughly before using it in a template. Debugging the script on your own VM will be easier.
+- It is advised you test the script thoroughly before using it in a template. Debugging the script on your own VM will be easier.
- don't put sensitive data in the scripts. - The script locations need to be publicly accessible, unless you're using [MSI](./image-builder-user-assigned-identity.md).
Customize properties:
* To generate the sha256Checksum, using a terminal on Mac/Linux run: `sha256sum <fileName>` > [!NOTE]
-> Inline commands are stored as part of the image template definition, you can see these when you dump out the image definition. If you have sensitive commands or values (including passwords, SAS token, authentication tokens etc), it is strongly recommended these are moved into scripts, and use a user identity to authenticate to Azure Storage.
+> Inline commands are stored as part of the image template definition; you can see these when you dump out the image definition. If you have sensitive commands or values (including passwords, SAS tokens, authentication tokens, and so on), it's recommended that you move these into scripts and use a user identity to authenticate to Azure Storage.
#### Super user privileges For commands to run with super user privileges, they must be prefixed with `sudo`. You can add these to scripts or use them in inline commands, for example:
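A minimal sketch of an inline Shell customizer using `sudo` (the commands and paths are illustrative):

```json
{
  "type": "Shell",
  "name": "runSudoCommands",
  "inline": [
    "sudo mkdir /buildArtifacts",
    "sudo cp /tmp/index.html /buildArtifacts/index.html"
  ]
}
```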
File customizer properties:
- **sourceUri** - an accessible storage endpoint; this can be GitHub or Azure storage. You can only download one file, not an entire directory. If you need to download a directory, use a compressed file, then uncompress it using the Shell or PowerShell customizers. > [!NOTE]
-> If the sourceUri is an Azure Storage Account, irrespective if the blob is marked public, you will to grant the Managed User Identity permissions to read access on the blob. Please see this [example](./image-builder-user-assigned-identity.md#create-a-resource-group) to set the storage permissions.
+> If the sourceUri is an Azure Storage Account, irrespective of whether the blob is marked public, you need to grant the Managed User Identity permission to read the blob. See this [example](./image-builder-user-assigned-identity.md#create-a-resource-group) to set the storage permissions.
- **destination** - the full destination path and file name. Any referenced path and subdirectories must exist; use the Shell or PowerShell customizers to set these up beforehand. You can use the script customizers to create the path.
If there is an error trying to download the file, or put it in a specified direc
> The file customizer is only suitable for small file downloads, < 20MB. For larger file downloads, use a script or inline command, then use code to download files, such as, Linux `wget` or `curl`, Windows, `Invoke-WebRequest`. ### Windows Update Customizer
-This customizer is built on the [community Windows Update Provisioner](https://packer.io/docs/provisioners/community-supported.html) for Packer, which is an open source project maintained by the Packer community. Microsoft tests and validate the provisioner with the Image Builder service, and will support investigating issues with it, and work to resolve issues, however the open source project is not officially supported by Microsoft. For detailed documentation on and help with the Windows Update Provisioner, please see the project repository.
+This customizer is built on the [community Windows Update Provisioner](https://packer.io/docs/provisioners/community-supported.html) for Packer, an open-source project maintained by the Packer community. Microsoft tests and validates the provisioner with the Image Builder service, will support investigating issues with it, and will work to resolve issues; however, the open-source project isn't officially supported by Microsoft. For detailed documentation on and help with the Windows Update Provisioner, see the project repository.
```json "customize": [
To override the commands, use the PowerShell or Shell script provisioners to cre
* Linux: /tmp/DeprovisioningScript.sh Image Builder reads these commands and writes them out to the AIB log, `customization.log`. See [troubleshooting](image-builder-troubleshoot.md#customization-log) on how to collect logs.+
+## Properties: validate
+You can use the `validate` property to validate platform images and any customized images you create, regardless of whether you used Azure Image Builder to create them.
+
+Azure Image Builder supports a 'Source-Validation-Only' mode that can be set using the `sourceValidationOnly` field. If the `sourceValidationOnly` field is set to true, the image specified in the `source` section is validated directly. No separate build runs to generate and then validate a customized image.
+
+The `inVMValidations` field takes a list of validators that will be performed on the image. Azure Image Builder supports both PowerShell and Shell validators.
+
+The `continueDistributeOnFailure` field controls whether the output image(s) are distributed if validation fails. If validation fails and this field is set to false, the output image(s) aren't distributed (the default behavior). If validation fails and this field is set to true, the output image(s) are still distributed. Use this option with caution because it can result in failed images being distributed for use. In either case (true or false), the end-to-end image run is reported as failed if validation fails. This field has no effect on whether validation succeeds or not.
+
+When using `validate`:
+- You can use multiple validators.
+- Validators execute in the order specified in the template.
+- If one validator fails, then the whole validation component will fail and report back an error.
+- It's advised that you test the script thoroughly before using it in a template. Debugging the script on your own VM will be easier.
+- Don't put sensitive data in the scripts.
+- The script locations need to be publicly accessible, unless you're using [MSI](./image-builder-user-assigned-identity.md).
+
+How to use the `validate` property to validate Windows images:
+
+```json
+{
+ "properties": {
+ "validate": {
+ "continueDistributeOnFailure": false,
+ "sourceValidationOnly": false,
+ "inVMValidations": [
+ {
+ "type": "PowerShell",
+ "name": "test PowerShell validator inline",
+ "inline": [
+ "<command to run inline>"
+ ],
+ "validExitCodes": "<exit code>",
+ "runElevated": <true or false>,
+ "runAsSystem": <true or false>
+ },
+ {
+ "type": "PowerShell",
+ "name": "<name>",
+ "scriptUri": "<path to script>",
+        "runElevated": <true or false>,
+ "sha256Checksum": "<sha256 checksum>"
+ }
+ ]
+    }
+ }
+}
+```
+
+`inVMValidations` properties:
+
+- **type** - PowerShell.
+- **name** - name of the validator.
+- **scriptUri** - URI of the PowerShell script file.
+- **inline** - array of commands to be run, separated by commas.
+- **validExitCodes** - optional; valid codes that can be returned from the script/inline command. This avoids a reported failure of the script/inline command.
+- **runElevated** - optional, boolean; support for running commands and scripts with elevated permissions.
+- **sha256Checksum** - value of the sha256 checksum of the file. You generate this locally, and then Image Builder will checksum and validate.
+  * To generate the sha256Checksum, use [Get-FileHash](/powershell/module/microsoft.powershell.utility/get-filehash) in PowerShell on Windows.
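+For example, a one-line sketch (the script path is a placeholder):
+
+```azurepowershell
+(Get-FileHash -Path .\customizeScript.ps1 -Algorithm SHA256).Hash
+```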
+
+How to use the `validate` property to validate Linux images:
+
+```json
+{
+ "properties": {
+ "validate": {
+ "continueDistributeOnFailure": false,
+ "sourceValidationOnly": false,
+ "inVMValidations": [
+ {
+ "type": "Shell",
+ "name": "<name>",
+ "inline": [
+ "<command to run inline>"
+ ]
+ },
+ {
+ "type": "Shell",
+ "name": "<name>",
+ "scriptUri": "<path to script>",
+ "sha256Checksum": "<sha256 checksum>"
+ }
+ ]
+    }
+ }
+ }
+```
+
+`inVMValidations` properties:
+
+- **type** - Shell.
+- **name** - name of the validator.
+- **scriptUri** - URI of the script file.
+- **inline** - array of commands to be run, separated by commas.
+- **sha256Checksum** - value of the sha256 checksum of the file. You generate this locally, and then Image Builder will checksum and validate.
+  * To generate the sha256Checksum, using a terminal on Mac/Linux run: `sha256sum <fileName>`
## Properties: distribute
az resource invoke-action \
### Cancelling an Image Build If you're running an image build that you believe is incorrect, waiting for user input, or you feel will never complete successfully, then you can cancel the build.
-The build can be canceled any time. If the distribution phase has started you can still cancel, but you will need to clean up any images that may not be completed. The cancel command doesn't wait for cancel to complete, please monitor `lastrunstatus.runstate` for canceling progress, using these status [commands](image-builder-troubleshoot.md#customization-log).
+The build can be canceled at any time. If the distribution phase has started, you can still cancel, but you'll need to clean up any images that may not be completed. The cancel command doesn't wait for the cancellation to complete; monitor `lastrunstatus.runstate` for canceling progress, using these status [commands](image-builder-troubleshoot.md#customization-log).
Examples of `cancel` commands:
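A minimal sketch with the Azure CLI (assuming the `az image builder` command group in your installed CLI; the resource group and template names are placeholders):

```azurecli
az image builder cancel \
    --resource-group <imageResourceGroup> \
    --name <imageTemplateName>
```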
virtual-machines Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/storage-performance.md
Title: Optimize performance on Azure Lsv2-series virtual machines - Storage
-description: Learn how to optimize performance for your solution on the Lsv2-series virtual machines using a Linux example.
------ Previously updated : 08/05/2019---
-# Optimize performance on the Lsv2-series Linux virtual machines
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-Lsv2-series virtual machines support a variety of workloads that need high I/O and throughput on local storage across a wide range of applications and industries. The Lsv2-series is ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases, including Cassandra, MongoDB, Cloudera, and Redis.
+ Title: Optimize performance on Lsv3, Lasv3, and Lsv2-series Linux VMs
+description: Learn how to optimize performance for your solution on the Lsv3, Lasv3, and Lsv2-series Linux virtual machines (VMs) on Azure.
+++++
+ vm-linux
+ Last updated : 06/01/2022+
+
-The design of the Lsv2-series Virtual Machines (VMs) maximizes the AMD EPYCΓäó 7551 processor to provide the best performance between the processor, memory, NVMe devices, and the VMs. Working with partners in Linux, several builds are available Azure Marketplace that are optimized for Lsv2-series performance and currently include:
+# Optimize performance on Lsv3, Lasv3, and Lsv2-series Linux VMs
-- Ubuntu 18.04-- Ubuntu 16.04-- RHEL 8.0-- Debian 9-- Debian 10
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Uniform scale sets
-This article provides tips and suggestions to ensure your workloads and applications achieve the maximum performance designed into the VMs. The information on this page will be continuously updated as more Lsv2 optimized images are added to the Azure Marketplace.
+Lsv3, Lasv3, and Lsv2-series Azure Virtual Machines (Azure VMs) support various workloads that need high I/O and throughput on local storage across a wide range of applications and industries. The L-series is ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases, including Cassandra, MongoDB, Cloudera, and Redis.
-## AMD EPYCΓäó chipset architecture
+Several builds are available in the Azure Marketplace as a result of work with partners in Linux. These builds are optimized for Lsv3, Lasv3, and Lsv2-series performance. Available builds include the following and later versions of:
-Lsv2-series VMs use AMD EYPCΓäó server processors based on the Zen microarchitecture. AMD developed Infinity Fabric (IF) for EYPCΓäó as scalable interconnect for its NUMA model that could be used for on-die, on-package, and multi-package communications. Compared with QPI (Quick-Path Interconnect) and UPI (Ultra-Path Interconnect) used on Intel modern monolithic-die processors, AMDΓÇÖs many-NUMA small-die architecture may bring both performance benefits as well as challenges. The actual impact of memory bandwidth and latency constraints could vary depending on the type of workloads running.
+- Ubuntu 16.04
+- RHEL 8.0 and clones, including CentOS, Rocky Linux, and Alma Linux
+- Debian 9
+- SUSE Linux 15
+- Oracle Linux 8.0
-## Tips to maximize performance
+This article provides tips and suggestions to ensure your workloads and applications achieve the maximum performance designed into the VMs.
-* If you are uploading a custom Linux GuestOS for your workload, note that Accelerated Networking will be **OFF** by default. If you intend to enable Accelerated Networking, enable it at the time of VM creation for best performance.
+## AMD EPYC&trade; chipset architecture
-* The hardware that powers the Lsv2-series VMs utilizes NVMe devices with eight I/O Queue Pairs (QP)s. Every NVMe device I/O queue is actually a pair: a submission queue and a completion queue. The NVMe driver is set up to optimize the utilization of these eight I/O QPs by distributing I/OΓÇÖs in a round robin schedule. To gain max performance, run eight jobs per device to match.
+Lasv3 and Lsv2-series VMs use AMD EPYC&trade; server processors based on the Zen micro-architecture. AMD developed Infinity Fabric (IF) for EPYC&trade; as scalable interconnect for its NUMA model that can be used for on-die, on-package, and multi-package communications. Compared with QPI (Quick-Path Interconnect) and UPI (Ultra-Path Interconnect) used on Intel modern monolithic-die processors, AMD's many-NUMA small-die architecture can bring both performance benefits and challenges. The actual effects of memory bandwidth and latency constraints might vary depending on the type of workloads running.
-* Avoid mixing NVMe admin commands (for example, NVMe SMART info query, etc.) with NVMe I/O commands during active workloads. Lsv2 NVMe devices are backed by Hyper-V NVMe Direct technology, which switches into ΓÇ£slow modeΓÇ¥ whenever any NVMe admin commands are pending. Lsv2 users could see a dramatic performance drop in NVMe I/O performance if that happens.
+## Tips to maximize performance
-* Lsv2 users should not rely on device NUMA information (all 0) reported from within the VM for data drives to decide the NUMA affinity for their apps. The recommended way for better performance is to spread workloads across CPUs if possible.
+* If you're uploading a custom Linux GuestOS for your workload, Accelerated Networking is turned off by default. If you intend to enable Accelerated Networking, enable it at the time of VM creation for best performance.
+* To gain max performance, run multiple jobs with deep queue depth per device.
+* Avoid mixing NVMe admin commands (for example, NVMe SMART info query, etc.) with NVMe I/O commands during active workloads. Lsv3, Lasv3, and Lsv2 NVMe devices are backed by Hyper-V NVMe Direct technology, which switches into "slow mode" whenever any NVMe admin commands are pending. Lsv3, Lasv3, and Lsv2 users might see a dramatic drop in NVMe I/O performance if that happens.
+* Lsv2 users shouldn't rely on device NUMA information (all 0) reported from within the VM for data drives to decide the NUMA affinity for their apps. The recommended way for better performance is to spread workloads across CPUs if possible.
+* The maximum supported queue depth per I/O queue pair for Lsv3, Lasv3, and Lsv2 VM NVMe devices is 1024. It's recommended that users limit their (synthetic) benchmarking workloads to queue depth 1024 or lower to avoid triggering queue full conditions, which can reduce performance.
+* The best performance is obtained when I/O is done directly to each of the raw NVMe devices with no partitioning, no file systems, no RAID config, etc. Before starting a testing session, ensure the configuration is in a known fresh/clean state by running `blkdiscard` on each of the NVMe devices, as shown in the sketch after this list.
-* The maximum supported queue depth per I/O queue pair for Lsv2 VM NVMe device is 1024 (vs. Amazon i3 QD 32 limit). Lsv2 users should limit their (synthetic) benchmarking workloads to queue depth 1024 or lower to avoid triggering queue full conditions, which can reduce performance.
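+A sketch of such a session; the device names, job count, and block size are assumptions to adapt to your VM size:
+
+```console
+# Reset each NVMe device to a clean state first; this destroys all data on the devices
+for dev in /dev/nvme*n1; do sudo blkdiscard $dev; done
+
+# 8 jobs x 128 outstanding I/Os = 1024 total, the per-device queue depth limit
+sudo fio --name=nvme4k --filename=/dev/nvme0n1 --direct=1 --ioengine=libaio --rw=randread --bs=4k --numjobs=8 --iodepth=128 --runtime=60 --time_based --group_reporting
+```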
+## Utilizing local NVMe storage
-## Utilizing local NVMe storage
+Local storage on the 1.92 TB NVMe disk on all Lsv3, Lasv3, and Lsv2 VMs is ephemeral. During a successful standard reboot of the VM, the data on the local NVMe disk persists. The data doesn't persist on the NVMe if the VM is redeployed, de-allocated, or deleted. Data doesn't persist if another issue causes the VM, or the hardware it's running on, to become unhealthy. When this scenario happens, any data on the old host is securely erased.
-Local storage on the 1.92 TB NVMe disk on all Lsv2 VMs is ephemeral. During a successful standard reboot of the VM, the data on the local NVMe disk will persist. The data will not persist on the NVMe if the VM is redeployed, de-allocated, or deleted. Data will not persist if another issue causes the VM, or the hardware it is running on, to become unhealthy. When this happens, any data on the old host is securely erased.
+There are also cases when the VM needs to be moved to a different host machine, for example, during a planned maintenance operation. Planned maintenance operations and some hardware failures can be anticipated with [Scheduled Events](scheduled-events.md). Use Scheduled Events to stay updated on any predicted maintenance and recovery operations.
-There will also be cases when the VM needs to be moved to a different host machine, for example, during a planned maintenance operation. Planned maintenance operations and some hardware failures can be anticipated with [Scheduled Events](scheduled-events.md). Scheduled Events should be used to stay updated on any predicted maintenance and recovery operations.
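+As a sketch, you can poll the Instance Metadata Service for Scheduled Events from inside the VM (the endpoint and API version shown are the standard IMDS values):
+
+```console
+curl -s -H "Metadata:true" "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
+```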
+In the case that a planned maintenance event requires the VM to be recreated on a new host with empty local disks, the data needs to be resynchronized (again, with any data on the old host being securely erased). This scenario occurs because Lsv3, Lasv3, and Lsv2-series VMs don't currently support live migration on the local NVMe disk.
-In the case that a planned maintenance event requires the VM to be recreated on a new host with empty local disks, the data will need to be resynchronized (again, with any data on the old host being securely erased). This occurs because Lsv2-series VMs do not currently support live migration on the local NVMe disk.
+There are two modes for planned maintenance.
-There are two modes for planned maintenance.
+### Standard VM customer-controlled maintenance
-### Standard VM customer-controlled maintenance
+- The VM is moved to an updated host during a 30-day window.
+- Lsv3, Lasv3, and Lsv2 local storage data could be lost, so backing up data prior to the event is recommended.
-- The VM is moved to an updated host during a 30-day window.-- Lsv2 local storage data could be lost, so backing-up data prior to the event is recommended.
+### Automatic maintenance
-### Automatic maintenance
+- Occurs if the customer doesn't execute customer-controlled maintenance, or because of emergency procedures, such as a security zero-day event.
+- Intended to preserve customer data, but there's a small risk of a VM freeze or reboot.
+- Lsv3, Lasv3, and Lsv2 local storage data could be lost, so backing up data prior to the event is recommended.
-- Occurs if the customer does not execute customer-controlled maintenance, or in the event of emergency procedures such as a security zero-day event.-- Intended to preserve customer data, but there is a small risk of a VM freeze or reboot.-- Lsv2 local storage data could be lost, so backing-up data prior to the event is recommended.
+For any upcoming service events, use the controlled maintenance process to select a time most convenient to you for the update. Prior to the event, back up your data in premium storage. After the maintenance event completes, you can return your data to the refreshed Lsv3, Lasv3, and Lsv2 VMs local NVMe storage.
-For any upcoming service events, use the controlled maintenance process to select a time most convenient to you for the update. Prior to the event, you may back up your data in premium storage. After the maintenance event completes, you can return your data to the refreshed Lsv2 VMs local NVMe storage.
+Scenarios that maintain data on local NVMe disks include:
-Scenarios that maintain data on local NVMe disks include:
+- The VM is running and healthy.
+- The VM is rebooted in place (by you or Azure).
+- The VM is paused (stopped without de-allocation).
+- Most of the planned maintenance servicing operations.
-- The VM is running and healthy.-- The VM is rebooted in place (by you or Azure).-- The VM is paused (stopped without de-allocation).-- The majority of the planned maintenance servicing operations.
+Scenarios that securely erase data to protect the customer include:
-Scenarios that securely erase data to protect the customer include:
+- The VM is redeployed, stopped (de-allocated), or deleted (by you).
+- The VM becomes unhealthy and has to service heal to another node due to a hardware issue.
+- A few of the planned maintenance servicing operations that require the VM to be reallocated to another host for servicing.
-- The VM is redeployed, stopped (de-allocated), or deleted (by you).-- The VM becomes unhealthy and has to service heal to another node due to a hardware issue.-- A small number of the planned maintenance servicing operations that requires the VM to be reallocated to another host for servicing.
+## Frequently asked questions
-## Frequently asked questions
+The following are frequently asked questions about these series.
-* **How do I start deploying Lsv2-series VMs?**
- Much like any other VM, use the [Portal](quick-create-portal.md), [Azure CLI](quick-create-cli.md), or [PowerShell](quick-create-powershell.md) to create a VM.
+### How do I start deploying L-series VMs?
-* **Will a single NVMe disk failure cause all VMs on the host to fail?**
- If a disk failure is detected on the hardware node, the hardware is in a failed state. When this occurs, all VMs on the node are automatically de-allocated and moved to a healthy node. For Lsv2-series VMs, this means that the customerΓÇÖs data on the failing node is also securely erased and will need to be recreated by the customer on the new node. As noted, before live migration becomes available on Lsv2, the data on the failing node will be proactively moved with the VMs as they are transferred to another node.
+Much like any other VM, use the [Portal](quick-create-portal.md), [Azure CLI](quick-create-cli.md), or [PowerShell](quick-create-powershell.md) to create a VM.
-* **Do I need to make any adjustments to rq_affinity for performance?**
- The rq_affinity setting is a minor adjustment when using the absolute maximum input/output operations per second (IOPS). Once everything else is working well, then try to set rq_affinity to 0 to see if it makes a difference.
+### Does a single NVMe disk failure cause all VMs on the host to fail?
-* **Do I need to change the blk_mq settings?**
- RHEL/CentOS 7.x automatically uses blk-mq for the NVMe devices. No configuration changes or settings are necessary. The scsi_mod.use_blk_mq setting is for SCSI only and was used during Lsv2 Preview because the NVMe devices were visible in the guest VMs as SCSI devices. Currently, the NVMe devices are visible as NVMe devices, so the SCSI blk-mq setting is irrelevant.
+If a disk failure is detected on the hardware node, the hardware is in a failed state. When this problem occurs, all VMs on the node are automatically de-allocated and moved to a healthy node. For Lsv3, Lasv3, and Lsv2-series VMs, this problem means that the customer's data on the failing node is also securely erased. The customer needs to recreate the data on the new node.
-* **Do I need to change ΓÇ£fioΓÇ¥?**
- To get maximum IOPS with a performance measuring tool like ΓÇÿfioΓÇÖ in the L64v2 and L80v2 VM sizes, set ΓÇ£rq_affinityΓÇ¥ to 0 on each NVMe device. For example, this command line will set ΓÇ£rq_affinityΓÇ¥ to zero for all 10 NVMe devices in an L80v2 VM:
+### Do I need to change the blk_mq settings?
- ```console
- for i in `seq 0 9`; do echo 0 >/sys/block/nvme${i}n1/queue/rq_affinity; done
- ```
+RHEL/CentOS 7.x automatically uses blk-mq for the NVMe devices. No configuration changes or settings are necessary.
- Also note that the best performance is obtained when I/O is done directly to each of the raw NVMe devices with no partitioning, no file systems, no RAID 0 config, etc. Before starting a testing session, ensure the configuration is in a known fresh/clean state by running `blkdiscard` on each of the NVMe devices.
-
-## Next steps
+## Next steps
-* See specifications for all [VMs optimized for storage performance](../sizes-storage.md) on Azure
+See specifications for all [VMs optimized for storage performance](../sizes-storage.md) on Azure.
virtual-machines Lsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv2-series.md
Previously updated : 02/03/2020 Last updated : 06/01/2022 # Lsv2-series
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
The Lsv2-series features high throughput, low latency, directly mapped local NVMe storage running on the [AMD EPYC<sup>TM</sup> 7551 processor](https://www.amd.com/en/products/epyc-7000-series) with an all core boost of 2.55GHz and a max boost of 3.0GHz. The Lsv2-series VMs come in sizes from 8 to 80 vCPU in a simultaneous multi-threading configuration. There is 8 GiB of memory per vCPU, and one 1.92TB NVMe SSD M.2 device per 8 vCPUs, with up to 19.2TB (10x1.92TB) available on the L80s v2.
virtual-machines Lsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv3-series.md
+
+ Title: Lsv3-series - Azure Virtual Machines
+description: Specifications for the Lsv3-series of Azure Virtual Machines (Azure VMs).
++++ Last updated : 06/01/2022+
+
+
+# Lsv3-series
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+The Lsv3-series of Azure Virtual Machines (Azure VMs) features high-throughput, low latency, directly mapped local NVMe storage. These VMs run on the 3rd Generation Intel&reg; Xeon&reg; Platinum 8370C (Ice Lake) processor in a [hyper-threaded configuration](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html). This new processor features an all-core turbo clock speed of 3.5 GHz with [Intel&reg; Turbo Boost Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Advanced-Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html) and [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html).
+
+The Lsv3-series VMs are available in sizes from 8 to 80 vCPUs. There are 8 GiB of memory allocated per vCPU, and one 1.92TB NVMe SSD device allocated per 8 vCPUs, with up to 19.2TB (10x1.92TB) available on the L80s_v3 size.
+
+> [!NOTE]
+> The Lsv3-series VMs are optimized to use the local disk on the node attached directly to the VM rather than using [durable data disks](disks-types.md). This method allows for greater IOPS and throughput for your workloads. The Lsv3, Lasv3, Lsv2, and Ls-series VMs don't support the creation of a host cache to increase the IOPS achievable by durable data disks.
+>
+> The high throughput and IOPS of the local disk makes the Lsv3-series VMs ideal for NoSQL stores such as Apache Cassandra and MongoDB. These stores replicate data across multiple VMs to achieve persistence in the event of the failure of a single VM.
+>
+> To learn more, see how to optimize performance on the Lsv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md).
+
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Not Supported
+- [Live Migration](maintenance-and-updates.md): Not Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Supported
+- [VM Generation Support](generation-2.md): Generation 1 and 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported
+
+| Size | vCPU | Memory (GiB) | Temp disk (GiB) | NVMe Disks | NVMe Disk throughput (Read IOPS/MBps) | Uncached data disk throughput (IOPS/MBps) | Max burst uncached data disk throughput (IOPS/MBps) | Max Data Disks | Max NICs | Expected network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|---|---|
+| Standard_L8s_v3 | 8 | 64 | 80 | 1x1.92 TB | 400000/2000 | 12800/290 | 20000/1200 | 16 | 4 | 12500 |
+| Standard_L16s_v3 | 16 | 128 | 160 | 2x1.92 TB | 800000/4000 | 25600/600 | 40000/1600 | 32 | 8 | 12500 |
+| Standard_L32s_v3 | 32 | 256 | 320 | 4x1.92 TB | 1.5M/8000 | 51200/865 | 80000/2000 | 32 | 8 | 16000 |
+| Standard_L48s_v3 | 48 | 384 | 480 | 6x1.92 TB | 2.2M/14000 | 76800/1315 | 80000/3000 | 32 | 8 | 24000 |
+| Standard_L64s_v3 | 64 | 512 | 640 | 8x1.92 TB | 2.9M/16000 | 80000/1735 | 80000/3000 | 32 | 8 | 30000 |
+| Standard_L80s_v3 | 80 | 640 | 800 | 10x1.92TB | 3.8M/20000 | 80000/2160 | 80000/3000 | 32 | 8 | 32000 |
+
+1. **Temp disk**: Lsv3-series VMs have a standard SCSI-based temp resource disk for use by the OS paging or swap file (`D:` on Windows, `/dev/sdb` on Linux). This disk provides 80 GiB of storage, 4,000 IOPS, and 80 MBps transfer rate for every 8 vCPUs. For example, Standard_L80s_v3 provides 800 GiB at 40,000 IOPS and 800 MBps. This configuration ensures the NVMe drives can be fully dedicated to application use. This disk is ephemeral, and all data is lost on stop or deallocation.
+1. **NVMe Disks**: NVMe disk throughput can go higher than the specified numbers. However, higher performance isn't guaranteed. Local NVMe disks are ephemeral. Data is lost on these disks if you stop or deallocate your VM. Local NVMe disks aren't encrypted by [Azure Storage encryption](disk-encryption.md), even if you enable [encryption at host](disk-encryption.md#supported-vm-sizes).
+1. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lsv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on the Lsv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization.
+1. **Max burst uncached data disk throughput**: Lsv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time.
+
+> [!NOTE]
+> Lsv3-series VMs don't provide a host cache for data disks, as it doesn't benefit the Lsv3 workloads.
++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
+
+More information on Disks Types: [Disk Types](./disks-types.md#ultra-disks)
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Nc A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-a100-v4-series.md
Title: NC A100 v4-series (preview)
+ Title: NC A100 v4-series
description: Specifications for the NC A100 v4-series Azure VMs. These VMs include Linux, Windows, Flexible scale sets, and uniform scale sets.``` Previously updated : 03/01/2022 Last updated : 06/01/2022
-# NC A100 v4-series (Preview)
-
-> [!IMPORTANT]
-> The NC A100 v4-series of Azure virtual machines (VMs) is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-> To use this preview feature, [sign up for the NC A100 v4 series preview](https://aka.ms/AzureNCA100v4Signup).
+# NC A100 v4-series
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
These VMs are ideal for real-world Applied AI workloads, such as:
To get started with NC A100 v4 VMs, refer to [HPC Workload Configuration and Optimization](./workloads/hpc/configure.md) for steps including driver and network configuration.
-Due to increased GPU memory I/O footprint, the NC A100 v4 requires the use of [Generation 2 VMs](./generation-2.md) and marketplace images. The [Azure HPC images](./workloads/hpc/configure.md) are strongly recommended. Azure HPC Ubuntu 18.04, 20.04 and Azure HPC CentOS 7.9 images are supported. Windows Service 2019 and Windows Service 2022 images are supported.
+Due to increased GPU memory I/O footprint, the NC A100 v4 requires the use of [Generation 2 VMs](./generation-2.md) and marketplace images. The [Azure HPC images](./workloads/hpc/configure.md) are strongly recommended. Azure HPC Ubuntu 18.04, 20.04 and Azure HPC CentOS 7.9, CentOS 8.4, RHEL 7.9 and RHEL 8.5 images are supported. Windows Server 2019 and Windows Server 2022 images are supported.
-Key Features:
-- [Premium Storage](premium-storage-performance.md) -- [Premium Storage caching](premium-storage-performance.md) -- [VM Generation 2](generation-2.md) -- [Ephemeral OS Disks](ephemeral-os-disks.md) -- NVIDIA NVLink Interconnect
-These features are not supported:[Live Migration](maintenance-and-updates.md), [Memory Preserving Updates](maintenance-and-updates.md) and [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization) .
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Supported
+- [Ultra Disks](disks-types.md#ultra-disks): Not Supported
+- [Live Migration](maintenance-and-updates.md): Not Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Not Supported
+- [VM Generation Support](generation-2.md): Generation 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- InfiniBand: Not Supported
+- Nvidia NVLink Interconnect: Supported
+- [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported
-> [!IMPORTANT]
-> This VM series is currently in preview. These specifications are subject to change.
->
-| Size | vCPU | Memory: GiB | Temp Storage (with NVMe): GiB | GPU | GPU Memory: GiB | Max data disks | Max uncached disk throughput: IOPS / MBps | Max NICs/network bandwidth (Mbps) |
+| Size | vCPU | Memory: GiB | Temp Storage (with NVMe): GiB | GPU | GPU Memory: GiB | Max data disks | Max uncached disk throughput: IOPS / MBps | Max NICs/network bandwidth (Mbps) |
||||||||||
-| Standard_NC24ads_A100_v4 | 24 | 220 | 1123 | 1 | 80 | 12 | 30000/1000 | 2/20,000 |
-| Standard_NC48ads_A100_v4 | 48 | 440 | 2246 | 2 | 160 | 24 | 60000/2000 | 4/40,000 |
+| Standard_NC24ads_A100_v4 | 24 | 220 | 1123 | 1 | 80 | 12 | 30000/1000 | 2/20,000 |
+| Standard_NC48ads_A100_v4 | 48 | 440 | 2246 | 2 | 160 | 24 | 60000/2000 | 4/40,000 |
| Standard_NC96ads_A100_v4 | 96 | 880 | 4492 | 4 | 320 | 32 | 120000/4000 | 8/80,000 | 1 GPU = one A100 card
virtual-machines Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-snapshot.md
tags: azure-service-management ms.assetid:-+ ms.devlang: azurecli vm-linux
virtual-machines Create Vm From Managed Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-managed-os-disks.md
editor: ramankum
tags: azure-service-management ms.assetid:-+ ms.devlang: azurecli vm-linux
virtual-machines Create Vm From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-snapshot.md
editor: ramankum
tags: azure-service-management ms.assetid:-+ ms.devlang: azurecli vm-linux
virtual-machines Sizes Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-storage.md
- Title: Azure VM sizes - Storage | Microsoft Docs
-description: Lists the different storage optimized sizes available for virtual machines in Azure. Lists information about the number of vCPUs, data disks, and NICs as well as storage throughput and network bandwidth for sizes in this series.
-
+
+ Title: Storage optimized virtual machine sizes
+description: Learn about the different storage optimized sizes available for Azure Virtual Machines (Azure VMs). Find information about the number of vCPUs, data disks, NICs, storage throughput, and network bandwidth for sizes in this series.
-- Previously updated : 02/03/2020---+++ Last updated : 06/01/2022+
+
# Storage optimized virtual machine sizes
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-> [!TIP]
-> Try the **[Virtual machines selector tool](https://aka.ms/vm-selector)** to find other sizes that best fit your workload.
-
-Storage optimized VM sizes offer high disk throughput and IO, and are ideal for Big Data, SQL, NoSQL databases, data warehousing, and large transactional databases. Examples include Cassandra, MongoDB, Cloudera, and Redis. This article provides information about the number of vCPUs, data disks, and NICs as well as local storage throughput and network bandwidth for each optimized size.
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+Storage optimized virtual machine (VM) sizes offer high disk throughput and IO, and are ideal for Big Data, SQL, NoSQL databases, data warehousing, and large transactional databases. Examples include Cassandra, MongoDB, Cloudera, and Redis. This article provides information about the number of vCPUs, data disks, NICs, local storage throughput, and network bandwidth for each optimized size.
-The [Lsv2-series](lsv2-series.md) features high throughput, low latency, directly mapped local NVMe storage running on the [AMD EPYC<sup>TM</sup> 7551 processor](https://www.amd.com/en/products/epyc-7000-series) with an all core boost of 2.55GHz and a max boost of 3.0GHz. The Lsv2-series VMs come in sizes from 8 to 80 vCPU in a simultaneous multi-threading configuration. There is 8 GiB of memory per vCPU, and one 1.92TB NVMe SSD M.2 device per 8 vCPUs, with up to 19.2TB (10x1.92TB) available on the L80s v2.
+> [!TIP]
+> Try the [virtual machines selector tool](https://aka.ms/vm-selector) to find other sizes that best fit your workload.
-## Other sizes
+The Lsv3, Lasv3, and Lsv2-series feature high-throughput, low latency, directly mapped local NVMe storage. These VM series come in sizes from 8 to 80 vCPU. There are 8 GiB of memory per vCPU, and one 1.92TB NVMe SSD device per 8 vCPUs, with up to 19.2TB (10x1.92TB) available on the largest VM sizes.
-- [General purpose](sizes-general.md)-- [Compute optimized](sizes-compute.md)-- [Memory optimized](sizes-memory.md)-- [GPU optimized](sizes-gpu.md)-- [High performance compute](sizes-hpc.md)-- [Previous generations](sizes-previous-gen.md)
+- The [Lsv3-series](lsv3-series.md) runs on the third Generation Intel&reg; Xeon&reg; Platinum 8370C (Ice Lake) processor in a [hyper-threaded configuration](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html). This new processor features an all-core turbo clock speed of 3.5 GHz with [Intel&reg; Turbo Boost Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Advanced-Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html) and [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html).
+- The [Lasv3-series](lasv3-series.md) runs on the AMD 3rd Generation EPYC&trade; 7763v processor. This series runs in a multi-threaded configuration with up to 256 MB L3 cache, which can achieve a boosted maximum frequency of 3.5 GHz.
+- The [Lsv2-series](lsv2-series.md) runs on the [AMD EPYC&trade; 7551 processor](https://www.amd.com/en/products/epyc-7000-series) with an all-core boost of 2.55 GHz and a max boost of 3.0 GHz.
-## Next steps
+## Other sizes
-Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
+- [General purpose](sizes-general.md)
+- [Compute optimized](sizes-compute.md)
+- [Memory optimized](sizes-memory.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
-Learn how to optimize performance on the Lsv2-series virtual machines for [Windows](windows/storage-performance.md) or [Linux](linux/storage-performance.md).
+## Next steps
-For more information on how Azure names its VMs, see [Azure virtual machine sizes naming conventions](./vm-naming-conventions.md).
+- Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
+- Learn how to optimize performance on the Lsv2-series [Windows VMs](windows/storage-performance.md) and [Linux VMs](linux/storage-performance.md).
+- For more information on how Azure names its VMs, see [Azure virtual machine sizes naming conventions](./vm-naming-conventions.md).
virtual-machines Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes.md
Previously updated : 04/04/2022 Last updated : 06/01/2022
This article describes the available sizes and options for the Azure virtual mac
| [General purpose](sizes-general.md) | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dv4, Dsv4, Ddv4, Ddsv4, Dv5, Dsv5, Ddv5, Ddsv5, Dasv5, Dadsv5 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. | | [Compute optimized](sizes-compute.md) | F, Fs, Fsv2, FX | High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers. | | [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Ebdsv5, Ebsv5, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. |
-| [Storage optimized](sizes-storage.md) | Lsv2 | High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases. |
+| [Storage optimized](sizes-storage.md) | Lsv2, Lsv3, Lasv3 | High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases. |
| [GPU](sizes-gpu.md) | NC, NCv2, NCv3, NCasT4_v3, ND, NDv2, NV, NVv3, NVv4, NDasrA100_v4, NDm_A100_v4 | Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs. | | [High performance compute](sizes-hpc.md) | HB, HBv2, HBv3, HC, H | Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA). |
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Previously updated : 05/02/2022 Last updated : 05/31/2022
Azure offers trusted launch as a seamless way to improve the security of [genera
**VM size support**: - B-series-- Dav4-series, Dasv4-series - DCsv2-series - Dv4-series, Dsv4-series, Dsv3-series, Dsv2-series
+- Dav4-series, Dasv4-series
- Ddv4-series, Ddsv4-series - Dv5-series, Dsv5-series - Ddv5-series, Ddsv5-series
Trusted launch now allows images to be created and shared through the Azure Comp
### Does trusted launch support Azure Backup?
-Trusted launch now supports Azure Backup in preview. For more information, see [Support matrix for Azure VM backup](../backup/backup-support-matrix-iaas.md#vm-compute-support).
+Trusted launch now supports Azure Backup. For more information, see [Support matrix for Azure VM backup](../backup/backup-support-matrix-iaas.md#vm-compute-support).
### Does trusted launch support ephemeral OS disks? Trusted launch now supports ephemeral OS disks in preview. Note that, while using ephemeral disks for Trusted Launch VMs, keys and secrets generated or sealed by the vTPM after the creation of the VM may not be persisted across operations like reimaging and platform events like service healing. For more information, see [Trusted Launch for Ephemeral OS disks (Preview)](https://aka.ms/ephemeral-os-disks-support-trusted-launch).
+### How can I find VM sizes that support Trusted launch?
+
+See the list of [Generation 2 VM sizes supporting Trusted launch](trusted-launch.md#limitations).
+
+The following commands can be used to check if a [Generation 2 VM Size](../virtual-machines/generation-2.md#generation-2-vm-sizes) does not support Trusted launch.
+
+#### CLI
+
+```azurecli
+subscription="<yourSubID>"
+region="westus"
+vmSize="Standard_NC12s_v3"
+
+az vm list-skus --resource-type virtualMachines --location $region --query "[?name=='$vmSize'].capabilities" --subscription $subscription
+```
+#### PowerShell
+
+```azurepowershell
+$region = "southeastasia"
+$vmSize = "Standard_M64"
+(Get-AzComputeResourceSku | where {$_.Locations.Contains($region) -and ($_.Name -eq $vmSize) })[0].Capabilities
+```
+
+The response will be similar to the following. `TrustedLaunchDisabled True` in the output indicates that the Generation 2 VM size doesn't support Trusted launch. If it's a Generation 2 VM size and `TrustedLaunchDisabled` isn't part of the output, Trusted launch is supported for that VM size.
+
+```
+Name Value
+- --
+MaxResourceVolumeMB 8192000
+OSVhdSizeMB 1047552
+vCPUs 64
+MemoryPreservingMaintenanceSupported False
+HyperVGenerations V1,V2
+MemoryGB 1000
+MaxDataDiskCount 64
+CpuArchitectureType x64
+MaxWriteAcceleratorDisksAllowed 8
+LowPriorityCapable True
+PremiumIO True
+VMDeploymentTypes IaaS
+vCPUsAvailable 64
+ACUs 160
+vCPUsPerCore 2
+CombinedTempDiskAndCachedIOPS 80000
+CombinedTempDiskAndCachedReadBytesPerSecond 838860800
+CombinedTempDiskAndCachedWriteBytesPerSecond 838860800
+CachedDiskBytes 1318554959872
+UncachedDiskIOPS 40000
+UncachedDiskBytesPerSecond 1048576000
+EphemeralOSDiskSupported True
+EncryptionAtHostSupported True
+CapacityReservationSupported False
+TrustedLaunchDisabled True
+AcceleratedNetworkingEnabled True
+RdmaEnabled False
+MaxNetworkInterfaces 8
+```
+ ### What is VM Guest State (VMGS)? VM Guest State (VMGS) is specific to Trusted Launch VM. It is a blob that is managed by Azure and contains the unified extensible firmware interface (UEFI) secure boot signature databases and other security information. The lifecycle of the VMGS blob is tied to that of the OS Disk.
virtual-machines Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/storage-performance.md
- Title: Optimize performance on Azure Lsv2-series virtual machines
-description: Learn how to optimize performance for your solution on the Lsv2-series virtual machines using a Windows example.
----- Previously updated : 04/17/2019--
+
+ Title: Optimize performance on Lsv3, Lasv3, and Lsv2-series Windows VMs
+description: Learn how to optimize performance for your solution on the Lsv2-series Windows virtual machines (VMs) on Azure.
+++++ Last updated : 06/01/2022+
+
+# Optimize performance on Lsv3, Lasv3, and Lsv2-series Windows VMs
-# Optimize performance on the Lsv2-series Windows virtual machines
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
-Lsv2-series virtual machines support a variety of workloads that need high I/O and throughput on local storage across a wide range of applications and industries. The Lsv2-series is ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases, including Cassandra, MongoDB, Cloudera, and Redis.
+Lsv3, Lasv3, and Lsv2-series Azure Virtual Machines (Azure VMs) support various workloads that need high I/O and throughput on local storage across a wide range of applications and industries. The L-series is ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases, including Cassandra, MongoDB, Cloudera, and Redis.
-The design of the Lsv2-series Virtual Machines (VMs) maximizes the AMD EPYCΓäó 7551 processor to provide the best performance between the processor, memory, NVMe devices, and the VMs. In addition to maximizing the hardware performance, Lsv2-series VMs are designed to work with the needs of Windows and Linux operating systems for better performance with the hardware and the software.
+Lsv3, Lasv3, and Lsv2-series VMs are designed to work with the needs of Windows and Linux operating systems for better performance with the hardware and software.
-Tuning the software and hardware resulted in the optimized version of [Windows Server 2019 Datacenter](https://www.microsoft.com/cloud-platform/windows-server-pricing), released in early December 2018 to the Azure Marketplace, which supports maximum performance on the NVMe devices in Lsv2-series VMs.
+Software and hardware tuning resulted in the optimized version of [Windows Server 2019 Datacenter](https://www.microsoft.com/cloud-platform/windows-server-pricing), released to the Azure Marketplace, and later versions, which support maximum performance on the NVMe devices in L-series VMs.
-This article provides tips and suggestions to ensure your workloads and applications achieve the maximum performance designed into the VMs. The information on this page will be continuously updated as more Lsv2 optimized images are added to the Azure Marketplace.
+This article provides tips and suggestions to ensure your workloads and applications achieve the maximum performance designed into the VMs.
-## AMD EYPCΓäó chipset architecture
+## AMD EPYC&trade; chipset architecture
-Lsv2-series VMs use AMD EYPCΓäó server processors based on the Zen microarchitecture. AMD developed Infinity Fabric (IF) for EYPCΓäó as scalable interconnect for its NUMA model that could be used for on-die, on-package, and multi-package communications. Compared with QPI (Quick-Path Interconnect) and UPI (Ultra-Path Interconnect) used on Intel modern monolithic-die processors, AMDΓÇÖs many-NUMA small-die architecture may bring both performance benefits as well as challenges. The actual impact of memory bandwidth and latency constraints could vary depending on the type of workloads running.
+Lasv3 and Lsv2-series VMs use AMD EPYC&trade; server processors based on the Zen micro-architecture. AMD developed Infinity Fabric (IF) for EPYC&trade; as a scalable interconnect for its NUMA model that can be used for on-die, on-package, and multi-package communications. Compared with QPI (Quick-Path Interconnect) and UPI (Ultra-Path Interconnect), used on Intel modern monolithic-die processors, AMD's many-NUMA small-die architecture can bring both performance benefits and challenges. The actual effects of memory bandwidth and latency constraints can vary depending on the type of workloads.
-## Tips for maximizing performance
+## Tips for maximizing performance
-* The hardware that powers the Lsv2-series VMs utilizes NVMe devices with eight I/O Queue Pairs (QP)s. Every NVMe device I/O queue is actually a pair: a submission queue and a completion queue. The NVMe driver is set up to optimize the utilization of these eight I/O QPs by distributing I/OΓÇÖs in a round robin schedule. To gain max performance, run eight jobs per device to match.
+- To gain max performance, run multiple jobs with deep queue depth per device (see the sketch after this list).
-* Avoid mixing NVMe admin commands (for example, NVMe SMART info query, etc.) with NVMe I/O commands during active workloads. Lsv2 NVMe devices are backed by Hyper-V NVMe Direct technology, which switches into ΓÇ£slow modeΓÇ¥ whenever any NVMe admin commands are pending. Lsv2 users could see a dramatic performance drop in NVMe I/O performance if that happens.
+- Avoid mixing NVMe admin commands (for example, NVMe SMART info query) with NVMe I/O commands during active workloads. Lsv3, Lasv3, and Lsv2 NVMe devices are backed by Hyper-V NVMe Direct technology, which switches into "slow mode" whenever any NVMe admin commands are pending. Lsv3, Lasv3, and Lsv2 users might see a dramatic performance drop in NVMe I/O performance if that scenario happens.
-* Lsv2 users should not rely on device NUMA information (all 0) reported from within the VM for data drives to decide the NUMA affinity for their apps. The recommended way for better performance is to spread workloads across CPUs if possible.
+- It's not recommended for Lsv2 users to rely on device NUMA information (all 0) reported from within the VM for data drives to decide the NUMA affinity for their apps. For better performance, it's recommended to spread workloads across CPUs if possible.
-* The maximum supported queue depth per I/O queue pair for Lsv2 VM NVMe device is 1024 (vs. Amazon i3 QD 32 limit). Lsv2 users should limit their (synthetic) benchmarking workloads to queue depth 1024 or lower to avoid triggering queue full conditions, which can reduce performance.
+- The maximum supported queue depth per I/O queue pair for Lsv3, Lasv3, and Lsv2 VM NVMe devices is 1024. It's recommended that users limit their (synthetic) benchmarking workloads to queue depth 1024 or lower to avoid triggering queue full conditions, which can reduce performance.
-## Utilizing local NVMe storage
+- The best performance is obtained when I/O is done directly to each of the raw NVMe devices with no partitioning, no file systems, no RAID config, etc.
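+A sketch of such a synthetic test with [DiskSpd](https://github.com/microsoft/diskspd); the target path, thread count, and queue depth are assumptions to adapt to your VM size:
+
+```console
+:: 8 threads x 128 outstanding I/Os = 1024 total, the per-device queue depth limit
+diskspd.exe -b4K -t8 -o128 -r -w0 -d60 -Sh -c10G F:\testfile.dat
+```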
+## Utilizing local NVMe storage
-Local storage on the 1.92 TB NVMe disk on all Lsv2 VMs is ephemeral. During a successful standard reboot of the VM, the data on the local NVMe disk will persist. The data will not persist on the NVMe if the VM is redeployed, de-allocated, or deleted. Data will not persist if another issue causes the VM, or the hardware it is running on, to become unhealthy. When this happens, any data on the old host is securely erased.
+Local storage on the 1.92 TB NVMe disk on all Lsv3, Lasv3, and Lsv2 VMs is ephemeral. During a successful standard reboot of the VM, the data on the local NVMe disk persists. The data doesn't persist on the NVMe if the VM is redeployed, deallocated, or deleted. Data doesn't persist if another issue causes the VM, or the hardware on which the VM is running, to become unhealthy. When this scenario happens, any data on the old host is securely erased.
-There will also be cases when the VM needs to be moved to a different host machine, for example, during a planned maintenance operation. Planned maintenance operations and some hardware failures can be anticipated with [Scheduled Events](scheduled-events.md). Scheduled Events should be used to stay updated on any predicted maintenance and recovery operations.
+There are also cases when the VM needs to be moved to a different host machine; for example, during a planned maintenance operation. Planned maintenance operations and some hardware failures can be anticipated with [Scheduled Events](scheduled-events.md). Use Scheduled Events to stay updated on any predicted maintenance and recovery operations.
-In the case that a planned maintenance event requires the VM to be recreated on a new host with empty local disks, the data will need to be resynchronized (again, with any data on the old host being securely erased). This occurs because Lsv2-series VMs do not currently support live migration on the local NVMe disk.
+In the case that a planned maintenance event requires the VM to be recreated on a new host with empty local disks, the data needs to be resynchronized (again, with any data on the old host being securely erased). This scenario occurs because Lsv3, Lasv3, and Lsv2-series VMs don't currently support live migration on the local NVMe disk.
-There are two modes for planned maintenance.
+There are two modes for planned maintenance: [standard VM customer-controlled maintenance](#standard-vm-customer-controlled-maintenance) and [automatic maintenance](#automatic-maintenance).
-### Standard VM customer-controlled maintenance
+For any upcoming service events, use the controlled maintenance process to select a time most convenient to you for the update. Prior to the event, back up your data to premium storage. After the maintenance event completes, return your data to the refreshed Lsv3, Lasv3, or Lsv2 VM's local NVMe storage.
-- The VM is moved to an updated host during a 30-day window.-- Lsv2 local storage data could be lost, so backing-up data prior to the event is recommended.
+Scenarios that maintain data on local NVMe disks include when:
-### Automatic maintenance
+- The VM is running and healthy.
+- The VM is rebooted in place by you or by Azure.
+- The VM is paused (stopped without deallocation).
+- The VM undergoes most planned maintenance servicing operations.
-- Occurs if the customer does not execute customer-controlled maintenance, or in the event of emergency procedures such as a security zero-day event.-- Intended to preserve customer data, but there is a small risk of a VM freeze or reboot.-- Lsv2 local storage data could be lost, so backing-up data prior to the event is recommended.
+Scenarios that securely erase data to protect the customer include when:
-For any upcoming service events, use the controlled maintenance process to select a time most convenient to you for the update. Prior to the event, you may back up your data in premium storage. After the maintenance event completes, you can return your data to the refreshed Lsv2 VMs local NVMe storage.
+- The VM is redeployed, stopped (deallocated), or deleted by you.
+- The VM becomes unhealthy and has to service heal to another node due to a hardware issue.
+- A few of the planned maintenance servicing operations require the VM to be reallocated to another host for servicing.
-Scenarios that maintain data on local NVMe disks include:
+### Standard VM customer-controlled maintenance
-- The VM is running and healthy.-- The VM is rebooted in place (by you or Azure).-- The VM is paused (stopped without de-allocation).-- The majority of the planned maintenance servicing operations.
+In standard VM customer-controlled maintenance, the VM is moved to an updated host during a 30-day window.
-Scenarios that securely erase data to protect the customer include:
+Lsv3, Lasv3, and Lsv2 local storage data might be lost, so backing up your data prior to the event is recommended.
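As one hedged example of that backup step, assuming the ephemeral NVMe storage is mounted at `/mnt/nvme-data` and a premium SSD data disk at `/mnt/premium-backup` (both hypothetical mount points):

```bash
# Hypothetical pre-maintenance backup: copy data from the ephemeral NVMe
# mount to a mounted premium SSD data disk before the maintenance window.
sudo rsync -a --delete /mnt/nvme-data/ /mnt/premium-backup/
```

After the maintenance event completes, the same copy can be run in the opposite direction to restore the data.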
-- The VM is redeployed, stopped (de-allocated), or deleted (by you).-- The VM becomes unhealthy and has to service heal to another node due to a hardware issue.-- A small number of the planned maintenance servicing operations that requires the VM to be reallocated to another host for servicing.
+### Automatic maintenance
-## Frequently asked questions
+Automatic maintenance occurs if the customer doesn't execute customer-controlled maintenance. Automatic maintenance can also occur because of emergency procedures, such as a security zero-day event.
-* **How do I start deploying Lsv2-series VMs?**
- Much like any other VM, use the [Portal](quick-create-portal.md), [Azure CLI](quick-create-cli.md), or [PowerShell](quick-create-powershell.md) to create a VM.
+This type of maintenance is intended to preserve customer data, but there's a small risk of a VM freeze or reboot.
-* **Will a single NVMe disk failure cause all VMs on the host to fail?**
- If a disk failure is detected on the hardware node, the hardware is in a failed state. When this occurs, all VMs on the node are automatically de-allocated and moved to a healthy node. For Lsv2-series VMs, this means that the customer's data on the failing node is also securely erased and will need to be recreated by the customer on the new node. As noted, before live migration becomes available on Lsv2, the data on the failing node will be proactively moved with the VMs as they are transferred to another node.
+Lsv3, Lasv3, and Lsv2 local storage data might be lost, so backing up your data prior to the event is recommended.
-* **Do I need to make polling adjustments in Windows in Windows Server 2012 or Windows Server 2016?**
- NVMe polling is only available on Windows Server 2019 on Azure.
+## Frequently asked questions
-* **Can I switch back to a traditional interrupt service routine (ISR) model?**
- Lsv2-series VMs are optimized for NVMe polling. Updates are continuously provided to improve polling performance.
+The following are frequently asked questions about these series.
-* **Can I adjust the polling settings in Windows Server 2019?**
- The polling settings are not user adjustable.
-
-## Next steps
+### How do I start deploying L-series VMs?
-* See specifications for all [VMs optimized for storage performance](../sizes-storage.md) on Azure
+Much like any other VM, create a VM [using the Azure portal](quick-create-portal.md), [through the Azure Command-Line Interface (Azure CLI)](quick-create-cli.md), or [through PowerShell](quick-create-powershell.md).
+
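As a minimal Azure CLI sketch, the resource group, VM name, image alias, and size below are placeholder assumptions rather than values from this article:

```bash
# Hypothetical deployment of an Lsv3-series VM with the Azure CLI.
# All names and the image alias are placeholders.
az vm create \
  --resource-group myResourceGroup \
  --name myLsv3VM \
  --image Ubuntu2204 \
  --size Standard_L8s_v3 \
  --admin-username azureuser \
  --generate-ssh-keys
```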
+### Does a single NVMe disk failure cause all VMs on the host to fail?
+
+If a disk failure is detected on the hardware node, the hardware is in a failed state. When this problem occurs, all VMs on the node are automatically deallocated and moved to a healthy node. For Lsv3, Lasv3, and Lsv2-series VMs, this scenario means that the customer's data on the failing node is also securely erased. The customer needs to recreate the data on the new node.
+
+### Do I need to make polling adjustments in Windows Server 2012 or Windows Server 2016?
+
+NVMe polling is only available on Windows Server 2019 and later versions on Azure.
+
+### Can I switch back to a traditional interrupt service routine (ISR) model?
+
+Lsv3, Lasv3, and Lsv2-series VMs are optimized for NVMe polling. Updates are continuously provided to improve polling performance.
+
+### Can I adjust the polling settings in Windows Server 2019 or later versions?
+
+The polling settings aren't user adjustable.
+
+## Next steps
+
+See specifications for all [VMs optimized for storage performance](../sizes-storage.md) on Azure.
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
tags: azure-resource-manager
web-application-firewall Afds Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/afds-overview.md
Previously updated : 03/30/2022 Last updated : 05/06/2022
If bot protection is enabled, incoming requests that match bot rules are logged
## Configuration
-You can configure and deploy all WAF rule types using the Azure portal, REST APIs, Azure Resource Manager templates, and Azure PowerShell.
+You can configure and deploy all WAF policies using the Azure portal, REST APIs, Azure Resource Manager templates, and Azure PowerShell. You can also configure and manage Azure WAF policies at scale using Firewall Manager integration (preview). For more information, see [Use Azure Firewall Manager to manage Web Application Firewall policies (preview)](../shared/manage-policies.md).
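As a hedged Azure CLI sketch, assuming the `front-door` CLI extension is available, a Front Door WAF policy might be created like this; all resource names are placeholders:

```bash
# Hypothetical example: create a Front Door WAF policy in Prevention mode.
# The command lives in the front-door Azure CLI extension; names are
# placeholders for illustration.
az extension add --name front-door
az network front-door waf-policy create \
  --name MyFrontDoorWafPolicy \
  --resource-group myResourceGroup \
  --mode Prevention
```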
## Monitoring
web-application-firewall Ag Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/ag-overview.md
description: This article provides an overview of Web Application Firewall (WAF)
Previously updated : 04/21/2022 Last updated : 05/06/2022
There's a threshold of 5 for the Anomaly Score to block traffic. So, a single *Critical* rule match is enough to block a request.
> [!NOTE] > The message that's logged when a WAF rule matches traffic includes the action value "Blocked." But the traffic is actually only blocked for an Anomaly Score of 5 or higher. For more information, see [Troubleshoot Web Application Firewall (WAF) for Azure Application Gateway](web-application-firewall-troubleshoot.md#understanding-waf-logs).
+### Configuration
+
+You can configure and deploy all WAF policies using the Azure portal, REST APIs, Azure Resource Manager templates, and Azure PowerShell. You can also configure and manage Azure WAF policies at scale using Firewall Manager integration (preview). For more information, see [Use Azure Firewall Manager to manage Web Application Firewall policies (preview)](../shared/manage-policies.md).
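Similarly, a minimal Azure CLI sketch for creating an Application Gateway WAF policy; the resource names are placeholder assumptions:

```bash
# Hypothetical example: create an Application Gateway WAF policy.
# Resource names are placeholders for illustration.
az network application-gateway waf-policy create \
  --name MyAppGwWafPolicy \
  --resource-group myResourceGroup
```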
+ ### WAF monitoring Monitoring the health of your application gateway is important. Monitoring the health of your WAF and the applications that it protects is supported by integration with Microsoft Defender for Cloud, Azure Monitor, and Azure Monitor logs.
web-application-firewall Manage Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/shared/manage-policies.md
+
+ Title: Use Azure Firewall Manager to manage Web Application Firewall policies (preview)
+description: Learn about managing Azure Web Application Firewall policies using Azure Firewall Manager
+Last updated : 06/01/2022
+# Use Azure Firewall Manager to manage Web Application Firewall policies (preview)
+
+> [!IMPORTANT]
+> Managing Web Application Firewall policies using Azure Firewall Manager is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Azure Firewall Manager is a central network security policy and route management service that allows administrators and organizations to protect their networks and cloud platforms at scale, all in one central place.
+
+## Create and associate policies
+
+You can use Azure Firewall Manager to centrally create, associate, and manage Web Application Firewall (WAF) policies for your application delivery platforms, including Azure Front Door and Azure Application Gateway.
+
+## Next steps
+
+- [Manage Azure Web Application Firewall policies (preview)](../../firewall-manager/manage-web-application-firewall-policies.md)