Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Partner Bindid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md | Title: Configure Azure Active Directory B2C with Transmit Security + Title: Configure Transmit Security with Azure Active Directory B2C for passwordless authentication -description: Configure Azure Active Directory B2C with Transmit Security for passwordless strong customer authentication +description: Configure Azure AD B2C with Transmit Security BindID for passwordless customer authentication -+ Previously updated : 03/20/2022 Last updated : 04/27/2023 zone_pivot_groups: b2c-policy-type zone_pivot_groups: b2c-policy-type # Configure Transmit Security with Azure Active Directory B2C for passwordless authentication ----In this sample tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Transmit Security's](https://www.transmitsecurity.com/bindid) passwordless authentication solution **BindID**. BindID is a passwordless authentication service that uses strong Fast Identity Online (FIDO2) biometric authentication for a reliable omni-channel authentication experience. The solution ensures a smooth sign in experience for all customers across every device and channel, and it eliminates fraud, phishing, and credential reuse. -+In this tutorial, learn to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Transmit Security](https://www.transmitsecurity.com/bindid) BindID, a passwordless authentication solution. BindID uses strong Fast Identity Online (FIDO2) biometric authentication for reliable omni-channel authentication. The solution ensures a smooth sign in experience for customers across devices and channels, while reducing fraud, phishing, and credential reuse. ## Scenario description -The following architecture diagram shows the implementation. +The following architecture diagram illustrates the implementation. - + -|Step | Description | -|:--| :--| -| 1. | User opens Azure AD B2C's sign in page, and then signs in or signs up by entering their username. -| 2. | Azure AD B2C redirects the user to BindID using an OpenID Connect (OIDC) request. -| 3. | BindID authenticates the user using appless FIDO2 biometrics, such as fingerprint. -| 4. | A decentralized authentication response is returned to BindID. -| 5. | The OIDC response is passed on to Azure AD B2C. -| 6. | User is either granted or denied access to the customer application based on the verification results. +1. User opens the Azure AD B2C sign in page, and signs in or signs up. +2. Azure AD B2C redirects the user to BindID using an OpenID Connect (OIDC) request. +3. BindID authenticates the user using appless FIDO2 biometrics, such as fingerprint. +4. A decentralized authentication response is returned to BindID. +5. The OIDC response passes to Azure AD B2C. +6. User is granted or denied access to the application, based on verification results. ## Prerequisites -To get started, you'll need: --- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).--- An [Azure AD B2C tenant](./tutorial-create-tenant.md) that's linked to your Azure subscription.+To get started, you need: -- A BindID tenant. 
You can [sign up for free.](https://www.transmitsecurity.com/developer?utm_signup=dev_hub#try)+* An Azure AD subscription + * If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/) +* An Azure AD B2C tenant linked to the Azure subscription + * See, [Tutorial: Create an Azure Active Directory B2C tenant](./tutorial-create-tenant.md) +* A BindID tenant + * Go to transmitsecurity.com to [get started](https://www.transmitsecurity.com/developer?utm_signup=dev_hub#try) +* Register a web application in the Azure portal + * [Tutorial: Register a web application in Azure Active Directory B2C](./tutorial-register-applications.md) +* Azure AD B2C custom policies + * If you can't use the policies, see [Tutorial: Create user flows and custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy) -- If you haven't already done so, [register](./tutorial-register-applications.md) a web application in the Azure portal.+## Register an app in BindID +To get started: -- Ability to use Azure AD B2C custom policies. If you can't, complete the steps in [Get started with custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy) to learn how to use custom policies.---## Step 1: Register an app in BindID --Follow the steps in [Configure Your Application](https://developer.bindid.io/docs/guides/quickstart/topics/quickstart_web#step-1-configure-your-application) to add you an application in [BindID Admin Portal](https://admin.bindid-sandbox.io/console/). The following information is needed: +1. Go to developer.bindid.io to [Configure Your Application](https://developer.bindid.io/docs/guides/quickstart/topics/quickstart_web#step-1-configure-your-application). +2. Add an application in [BindID Admin Portal](https://admin.bindid-sandbox.io/console/). Sign-in is required. | Property | Description | |:|:|-| Name | Name of your application such as `Azure AD B2C BindID app`| -| Domain | Enter `your-B2C-tenant-name.onmicrosoft.com`. Replace `your-B2C-tenant` with the name of your Azure AD B2C tenant.| +| Name | Application name| +| Domain | Enter `your-B2C-tenant-name.onmicrosoft.com`. Replace `your-B2C-tenant` with your Azure AD B2C tenant.| | Redirect URIs | [https://jwt.ms/](https://jwt.ms/)-| Redirect URLs | Enter `https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-B2C-tenant` with the name of your Azure AD B2C tenant. If you use a custom domain, replace `your-B2C-tenant-name.b2clogin.com` with your custom domain such as `contoso.com`.| ---After you register the app in BindID, you'll get a **Client ID** and a **Client Secret**. Record the values as you'll need them later to configure BindID as an identity provider in Azure AD B2C. ---## Step 2: Configure BindID as an identity provider in Azure AD B2C --1. Sign in to the [Azure portal](https://portal.azure.com/#home) as the global administrator of your Azure AD B2C tenant. --1. Make sure you're using the directory that contains your Azure AD B2C tenant: -- 1. Select the **Directories + subscriptions** icon in the portal toolbar. -- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**. --1. In the top-left corner of the Azure portal, select **All services**, and then search for and select **Azure AD B2C**. --1. Select **Identity providers**, and then select **New OpenID Connect provider**. --1. 
Enter a **Name**. For example, enter `Login with BindID`. --1. For **Metadata url**, enter `https://signin.bindid-sandbox.io/.well-known/openid-configuration`. --1. For **Client ID**, enter the client ID that you previously recorded in [step 1](#step-1-register-an-app-in-bindid). --1. For **Client secret**, enter the Client secret that you previously recorded in [step 1](#step-1-register-an-app-in-bindid). --1. For the **Scope**, enter the `openid email`. --1. For **Response type**, select **code**. --1. For **Response mode**, select **form_post**. --1. Under **Identity provider claims mapping**, select the following claims: - - 1. **User ID**: `sub` - 1. **Email**: `email` --1. Select **Save**. --## Step 3: Create a user flow +| Redirect URLs | Enter `https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-B2C-tenant` with your Azure AD B2C tenant. For a custom domain, replace `your-B2C-tenant-name.b2clogin.com` with your custom domain.| ++3. Upon registration, a **Client ID** and **Client Secret** appear. +4. Record the values to use later. ++## Configure BindID as an identity provider in Azure AD B2C ++For the following instructions, use the directory with your Azure AD B2C tenant. ++1. Sign in to the [Azure portal](https://portal.azure.com/#home) as Global Administrator. +2. In the portal toolbar, select **Directories + subscriptions**. +3. On the **Portal settings | Directories + subscriptions** page, in the **Directory name** list, find the Azure AD B2C directory. +4. Select **Switch**. +5. In the top-left corner of the Azure portal, select **All services**. +6. Search for and select **Azure AD B2C**. +7. Select **Identity providers**. +8. Select **New OpenID Connect provider**. +9. Enter a **Name**. +10. For **Metadata URL**, enter `https://signin.bindid-sandbox.io/.well-known/openid-configuration`. +11. For **Client ID**, enter the Client ID you recorded. +12. For **Client secret**, enter the Client Secret you recorded. +13. For the **Scope**, enter the `openid email`. +14. For **Response type**, select **code**. +15. For **Response mode**, select **form_post**. +16. Under **Identity provider claims mapping**, for **User ID**, select `sub`. +17. For **Email**, select `email`. +18. Select **Save**. ++## Create a user flow 1. In your Azure AD B2C tenant, under **Policies**, select **User flows**. +2. Select **New user flow**. +3. Select **Sign up and sign in** user flow type. +4. Select **Create**. +5. Enter a **Name**. +6. Under **Identity providers**, for **Local Accounts**, select **None**. This action disables email and password-based authentication. +7. For **Custom identity providers**, select the created BindID Identity provider such as **Login with BindID**. +8. Select **Create**. -1. Select **New user flow**. --1. Select **Sign up and sign in** user flow type,and then select **Create**. --1. Enter a **Name** for your user flow such as `signupsignin`. --1. Under **Identity providers**: - - 1. For **Local Accounts**, select **None** to disable email and password-based authentication. - - 1. For **Custom identity providers**, select your newly created BindID Identity provider such as **Login with BindID**. --1. Select **Create** --## Step 4: Test your user flow --1. In your Azure AD B2C tenant, select **User flows**. --1. Select the newly created user flow such as **B2C_1_signupsignin**. --1. For **Application**, select the web application that you previously registered as part of this article's prerequisites. 
The **Reply URL** should show `https://jwt.ms`. --1. Select the **Run user flow** button. Your browser should be redirected to the BindID sign in page. --1. Enter the registered account email and authenticates using appless FIDO2 biometrics, such as fingerprint. Once the authentication challenge is accepted, your browser should be redirect to `https://jwt.ms` which displays the contents of the token returned by Azure AD B2C. -+## Test the user flow +1. In the Azure AD B2C tenant, select **User flows**. +2. Select the created user flow, such as **B2C_1_signupsignin**. +3. For **Application**, select the web application you registered. The **Reply URL** is `https://jwt.ms`. +4. Select **Run user flow**. +5. The browser is redirected to the BindID sign in page. +6. Enter the registered account email. +7. Authenticates using appless FIDO2 biometrics, such as fingerprint. +8. The browser is redirect to `https://jwt.ms`. The contents appear for the token returned by Azure AD B2C. -## Step 2: Create a BindID policy key +## Create a BindID policy key -Add your BindID application's client Secret as a policy key: +Add the BindID application Client Secret as a policy key. For the following instructions, use the directory with your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/).--1. Make sure you're using the directory that contains your Azure AD B2C tenant: - 1. Select the **Directories + subscriptions** icon in the portal toolbar. -- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**. --1. On the Overview page, under **Policies**, select **Identity Experience Framework**. --1. Select **Policy Keys** and then select **Add**. --1. For **Options**, choose `Manual`. --1. Enter a **Name** for the policy key. For example, `BindIDClientSecret`. The prefix `B2C_1A_` is added automatically to the name of your key. --1. In **Secret**, enter your client secret that you previously recorded in [step 1](#step-1-register-an-app-in-bindid). --1. For **Key usage**, select `Signature`. --1. Select **Create**. --## Step 3: Configure BindID as an Identity provider --To enable users to sign in using BindID, you need to define BindID as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated using digital identity available on their device, proving the userΓÇÖs identity. --Use the following steps to add BindID as a claims provider: --1. Get the custom policy starter packs from GitHub, then update the XML files in the SocialAndLocalAccounts starter pack with your Azure AD B2C tenant name: +2. In the portal toolbar, select **Directories + subscriptions**. +3. On the **Portal settings | Directories + subscriptions** page, in the **Directory name** list, locate the Azure AD B2C directory. +4. Select **Switch**. +5. On the Overview page, under **Policies**, select **Identity Experience Framework**. +6. Select **Policy Keys**. +7. Select **Add**. +8. For **Options**, select **Manual**. +9. Enter a **Name**. The prefix `B2C_1A_` appends to the key name. +10. In **Secret**, enter the Client Secret you recorded. +11. For **Key usage**, select **Signature**. +12. Select **Create**. ++## Configure BindID as an identity provider ++To enable sign in with BindID, define BindID as a claims provider that Azure AD B2C communicates with through an endpoint. 
The endpoint provides claims used by Azure AD B2C to verify a user authenticated with digital identity on a device. ++Add BindID as a claims provider. To get started, obtain the custom policy starter packs from GitHub, then update the XML files in the SocialAndLocalAccounts starter pack with your Azure AD B2C tenant name: - 1. [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository: - ``` - git clone https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack - ``` +1. Open the zip folder [active-directory-b2c-custom-policy-starterpack-main.zip](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository: + + ``` + git clone https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack + ``` - 1. In all of the files in the **LocalAccounts** directory, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is `contoso`, all instances of `yourtenant.onmicrosoft.com` become `contoso.onmicrosoft.com`. --1. Open the `LocalAccounts/ TrustFrameworkExtensions.xml`. --1. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element. --1. Add a new **ClaimsProvider** similar to the one shown below: +2. In the files in the **LocalAccounts** directory, replace the string `yourtenant` with the Azure AD B2C tenant name. +3. Open the `LocalAccounts/ TrustFrameworkExtensions.xml`. +4. Find the **ClaimsProviders** element. If it doesn't appear, add it under the root element. +5. Add a new **ClaimsProvider** similar to the following example: ```xml <ClaimsProvider> Use the following steps to add BindID as a claims provider: </ClaimsProvider> ``` -1. Set **client_id** with the BindID Application ID that you previously recorded in [step 1](#step-1-register-an-app-in-bindid). --1. Save the changes. --## Step 4: Add a user journey --At this point, you've set up the identity provider, but it's not yet available in any of the sign in pages. If you've your own custom user journey continue to [step 5](#step-5-add-the-identity-provider-to-a-user-journey), otherwise, create a duplicate of an existing template user journey as follows: --1. Open the `LocalAccounts/ TrustFrameworkBase.xml` file from the starter pack. +6. Set **client_id** with the BindID Application ID you recorded. +7. Select **Save**. -1. Find and copy the entire contents of the **UserJourney** element that includes `Id=SignUpOrSignIn`. +## Add a user journey -1. Open the `LocalAccounts/ TrustFrameworkExtensions.xml` and find the **UserJourneys** element. If the element doesn't exist, add one. +The identity provider isn't on the sign-in pages. If you have a custom user journey, continue to **Add the identity provider to a user journey**, otherwise, create a duplicate template user journey: -1. Paste the entire content of the UserJourney element that you copied as a child of the UserJourneys element. +1. From the starter pack, open the `LocalAccounts/ TrustFrameworkBase.xml` file. +2. Find and copy the contents of the **UserJourney** element that includes `Id=SignUpOrSignIn`. +3. Open the `LocalAccounts/ TrustFrameworkExtensions.xml`. +4. Find the **UserJourneys** element. If there's no element, add one. +5. Paste the UserJourney element as a child of the UserJourneys element. +6. Rename the user journey **ID**. -1. Rename the `Id` of the user journey. 
For example, `Id=CustomSignUpSignIn` +## Add the identity provider to a user journey -## Step 5: Add the identity provider to a user journey +Add the new identity provider to the user journey. -Now that you have a user journey, add the new identity provider to the user journey. +1. Find the orchestration step element that includes `Type=CombinedSignInAndSignUp`, or `Type=ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element has an identity provider list that users sign in with. The order of the elements controls the order of the sign in buttons. +2. Add a **ClaimsProviderSelection** XML element. +3. Set the value of **TargetClaimsExchangeId** to a friendly name. +4. Add a **ClaimsExchange** element. +5. Set the **Id** to the value of the target claims exchange ID. This action links the BindID button to `BindID-SignIn`. +6. Update the **TechnicalProfileReferenceId** value to the technical profile ID you created. -1. Find the orchestration step element that includes `Type=CombinedSignInAndSignUp`, or `Type=ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can sign in with. The order of the elements controls the order of the sign in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name, such as `BindIDExchange`. --1. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID to link the BindID button to `BindID-SignIn` action. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier while adding the claims provider. --The following XML demonstrates orchestration steps of a user journey with the identity provider: +The following XML demonstrates orchestration user journey with the identity provider. ```xml <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin"> The following XML demonstrates orchestration steps of a user journey with the id </OrchestrationStep> ``` -## Step 6: Configure the relying party policy +## Configure the relying party policy ++The relying party policy, for example SignUpOrSignIn.xml, specifies the user journey Azure AD B2C executes. You can control claims passed to your application by adjusting the **OutputClaims** element of the **PolicyProfile** TechnicalProfile element. In this tutorial, the application receives the user attributes such as display name, given name, surname, email, objectId, identity provider, and tenantId. -The relying party policy, for example [SignUpOrSignIn.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/master/LocalAccounts/SignUpOrSignin.xml), specifies the user journey which Azure AD B2C will execute. You can also control what claims are passed to your application by adjusting the **OutputClaims** element of the **PolicyProfile** TechnicalProfile element. In this sample, the application receives the user attributes such as display name, given name, surname, email, objectId, identity provider, and tenantId. 
+See, [Azure-Samples/active-directory-b2c-custom-policy-starterpack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/master/LocalAccounts/SignUpOrSignin.xml) ```xml <RelyingParty> The relying party policy, for example [SignUpOrSignIn.xml](https://github.com/Az </RelyingParty> ``` -## Step 7: Upload the custom policy +## Upload the custom policy 1. Sign in to the [Azure portal](https://portal.azure.com).--1. Make sure you're using the directory that contains your Azure AD B2C tenant: -- 1. Select the **Directories + subscriptions** icon in the portal toolbar. -- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**. --1. In the [Azure portal](https://portal.azure.com), search for and select **Azure AD B2C**. --1. Under **Policies**, select **Identity Experience Framework**. --1. Select **Upload Custom Policy**, and then upload the files in the **LocalAccounts** starter pack in the following order: the base policy, for example `TrustFrameworkBase.xml`, the localization policy, for example `TrustFrameworkLocalization.xml`, the extension policy, for example `TrustFrameworkExtensions.xml`, and the relying party policy, such as `SignUpOrSignIn.xml`. ---## Step 8: Test your custom policy ---1. In your Azure AD B2C tenant blade, and under **Policies**, select **Identity Experience Framework**. - -1. Under **Custom policies**, select **B2C_1A_signup_signin**. ---1. For **Application**, select the web application that you previously registered as part of this article's prerequisites. The **Reply URL** should show `https://jwt.ms`. --1. Select **Run now**. Your browser should be redirected to the BindID sign in page. --1. Enter the registered account email and authenticates using appless FIDO2 biometrics, such as fingerprint. Once the authentication challenge is accepted, your browser should be redirect to `https://jwt.ms` which displays the contents of the token returned by Azure AD B2C. -+2. In the portal toolbar, select **Directories + subscriptions**. +3. On the **Portal settings | Directories + subscriptions** page, in the **Directory name** list, find the Azure AD B2C directory. +4. Select **Switch**. +5. In the Azure portal, search for and select **Azure AD B2C**. +6. Under **Policies**, select **Identity Experience Framework**. +7. Select **Upload Custom Policy**. +8. Upload the files in the **LocalAccounts** starter pack in the following order: ++ * Base policy, for example `TrustFrameworkBase.xml` + * Localization policy, for example `TrustFrameworkLocalization.xml` + * Extension policy, for example `TrustFrameworkExtensions.xml` + * Relying party policy, such as `SignUpOrSignIn.xml` ++## Test your custom policy ++For the following instructions, use the directory with your Azure AD B2C tenant. ++1. In the Azure AD B2C tenant, and under **Policies**, select **Identity Experience Framework**. +2. Under **Custom policies**, select **B2C_1A_signup_signin**. +3. For **Application**, select the web application you registered. The **Reply URL** is `https://jwt.ms`. +4. Select **Run now**. +5. The browser is redirected to the BindID sign in page. +6. Enter the registered account email. +7. Authenticate using appless FIDO2 biometrics, such as fingerprint. +8. The browser is redirect to `https://jwt.ms`. The token contents, returned by Azure AD B2C, appear. 
## Next steps For additional information, review the following articles: -- [Custom policies in Azure AD B2C](custom-policy-overview.md)--- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)--- [Sample custom policies for BindID and Azure AD B2C integration](https://github.com/TransmitSecurity/azure-ad-b2c-bindid-integration)+- [Azure AD B2C custom policy overview](custom-policy-overview.md) +- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy) +- [TransmitSecurity/azure-ad-b2c-bindid-integration](https://github.com/TransmitSecurity/azure-ad-b2c-bindid-integration) See, Azure AD B2C Integration |
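The change details above configure BindID as an OpenID Connect provider by pointing Azure AD B2C at a metadata URL. The following sketch is not part of the documented procedure; it shows one way to fetch that discovery document and confirm the values the provider registration depends on (the `code` response type and the `openid` and `email` scopes). It assumes the BindID sandbox metadata URL quoted above and the third-party `requests` package; which fields the issuer publishes depends on the issuer.

```python
import json
import requests

# OIDC discovery endpoint quoted in the change details above (BindID sandbox).
METADATA_URL = "https://signin.bindid-sandbox.io/.well-known/openid-configuration"

def inspect_oidc_metadata(url: str) -> None:
    """Fetch the OpenID Connect discovery document and print the fields
    Azure AD B2C relies on when the OIDC provider is registered."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    metadata = response.json()

    # Standard discovery keys published by OIDC-compliant issuers.
    for key in ("issuer", "authorization_endpoint", "token_endpoint",
                "jwks_uri", "scopes_supported", "response_types_supported"):
        print(f"{key}: {json.dumps(metadata.get(key))}")

if __name__ == "__main__":
    inspect_oidc_metadata(METADATA_URL)
```

If `code` is missing from `response_types_supported`, the **Response type** setting described in the steps above can't work for that issuer, so this check is a quick sanity test before saving the identity provider configuration.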
active-directory | Define Conditional Rules For Provisioning User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md | Scoping filters are configured as part of the attribute mappings for each Azure f. **IS NOT NULL**. Clause returns "true" if the evaluated attribute isn't empty. - g. **REGEX MATCH**. Clause returns "true" if the evaluated attribute matches a regular expression pattern. For example: ([1-9][0-9]) matches any number between 10 and 99 (case sensitive). + g. **REGEX MATCH**. Clause returns "true" if the evaluated attribute matches a regular expression pattern. For example: `([1-9][0-9])` matches any number between 10 and 99 (case sensitive). h. **NOT REGEX MATCH**. Clause returns "true" if the evaluated attribute doesn't match a regular expression pattern. It will return "false" if the attribute is null / empty. Scoping filters are configured as part of the attribute mappings for each Azure >[!IMPORTANT] > Saving a new scoping filter triggers a new full sync for the application, where all users in the source system are evaluated again against the new scoping filter. If a user in the application was previously in scope for provisioning, but falls out of scope, their account is disabled or deprovisioned in the application. To override this default behavior, refer to [Skip deletion for user accounts that go out of scope](../app-provisioning/skip-out-of-scope-deletions.md). - ## Common scoping filters | Target Attribute| Operator | Value | Description| |-|-|-|-|-|userPrincipalName|REGEX MATCH|.\*@domain.com |All users with userPrincipal that has the domain @domain.com will be in scope for provisioning| -|userPrincipalName|NOT REGEX MATCH|.\*@domain.com|All users with userPrincipal that has the domain @domain.com will be out of scope for provisioning| -|department|EQUALS|sales|All users from the sales department are in scope for provisioning| -|workerID|REGEX MATCH|(1[0-9][0-9][0-9][0-9][0-9][0-9])| All employees with workerIDs between 1000000 and 2000000 are in scope for provisioning.| +|userPrincipalName|REGEX MATCH|`.\*@domain.com`|All users with userPrincipal that has the domain @domain.com will be in scope for provisioning| +|userPrincipalName|NOT REGEX MATCH|`.\*@domain.com`|All users with userPrincipal that has the domain @domain.com will be out of scope for provisioning| +|department|EQUALS|`sales`|All users from the sales department are in scope for provisioning| +|workerID|REGEX MATCH|`(1[0-9][0-9][0-9][0-9][0-9][0-9])`| All employees with workerIDs between 1000000 and 2000000 are in scope for provisioning.| ## Related articles * [Automate user provisioning and deprovisioning to SaaS applications](../app-provisioning/user-provisioning.md) |
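To see how the REGEX MATCH patterns in the scoping-filter table above behave, here is a small, hypothetical Python check. It approximates the operator with `re.fullmatch`; the provisioning service's exact anchoring and case-sensitivity rules are as documented above, the sample attribute values are invented, and the backslash escaping shown in the markdown table (`.\*@domain.com`) is dropped because it is only markdown escaping.

```python
import re

# Patterns taken from the common scoping filters table above.
DOMAIN_PATTERN = r".*@domain.com"
WORKER_ID_PATTERN = r"(1[0-9][0-9][0-9][0-9][0-9][0-9])"

def regex_match(pattern: str, value: str) -> bool:
    """Approximate the REGEX MATCH operator: true when the whole
    attribute value matches the pattern (case sensitive)."""
    return re.fullmatch(pattern, value) is not None

# Hypothetical attribute values, for illustration only.
print(regex_match(DOMAIN_PATTERN, "alice@domain.com"))   # True  -> in scope
print(regex_match(DOMAIN_PATTERN, "bob@contoso.com"))    # False -> out of scope
print(regex_match(WORKER_ID_PATTERN, "1234567"))          # True  -> workerID in the 1xxxxxx range
print(regex_match(WORKER_ID_PATTERN, "2000001"))          # False
```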
active-directory | Sap Successfactors Integration Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md | -[Azure Active Directory user provisioning service](../app-provisioning/user-provisioning.md) integrates with [SAP SuccessFactors Employee Central](https://www.successfactors.com/products-services/core-hr-payroll/employee-central.html) to manage the identity life cycle of users. Azure Active Directory offers three pre-built integrations: +[Azure Active Directory user provisioning service](../app-provisioning/user-provisioning.md) integrates with [SAP SuccessFactors Employee Central](https://www.successfactors.com/products-services/core-hr-payroll/employee-central.html) to manage the identity life cycle of users. Azure Active Directory offers three prebuilt integrations: * [SuccessFactors to on-premises Active Directory user provisioning](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md) * [SuccessFactors to Azure Active Directory user provisioning](../saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md) Based on the attribute-mapping, during full sync Azure AD provisioning service s >| OData customPageSize query parameter | `100` | > [!NOTE]-> During the full initial sync, both active and terminated workers from SAP SuccessFactors will be fetched. +> During the full initial sync, both active and terminated workers from SAP SuccessFactors are fetched. For each SuccessFactors user, the provisioning service looks for an account in the target (Azure AD/on-premises Active Directory) using the matching attribute defined in the mapping. For example: if *personIdExternal* maps to *employeeId* and is set as the matching attribute, then the provisioning service uses the *personIdExternal* value to search for the user with *employeeId* filter. If a user match is found, then it updates the target attributes. If no match is found, then it creates a new entry in the target. -To validate the data returned by your OData API endpoint for a specific `personIdExternal`, update the `SuccessFactorsAPIEndpoint` in the API query with your API data center server URL and use a tool like [Postman](https://www.postman.com/downloads/) to invoke the query. If the "in" filter does not work, you can try the "eq" filter. +To validate the data returned by your OData API endpoint for a specific `personIdExternal`, update the `SuccessFactorsAPIEndpoint` in the API query with your API data center server URL and use a tool like [Postman](https://www.postman.com/downloads/) to invoke the query. If the "in" filter doesn't work, you can try the "eq" filter. ``` https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson?$format=json& employmentNav/jobInfoNav/employmentTypeNav,employmentNav/jobInfoNav/employeeClas After full sync, Azure AD provisioning service maintains `LastExecutionTimestamp` and uses it to create delta queries for retrieving incremental changes. The timestamp attributes present in each SuccessFactors entity, such as `lastModifiedDateTime`, `startDate`, `endDate`, and `latestTerminationDate`, are evaluated to see if the change falls between the `LastExecutionTimestamp` and `CurrentExecutionTime`. If yes, then the entry change is considered to be effective and processed for sync. -Here is the OData API request template that Azure AD uses to query SuccessFactors for incremental changes. 
You can update the variables `SuccessFactorsAPIEndpoint`, `LastExecutionTimestamp` and `CurrentExecutionTime` in the request template use a tool like [Postman](https://www.postman.com/downloads/) to check what data is returned. Alternatively, you can also retrieve the actual request payload from SuccessFactors by [enabling OData API Audit logs](#enabling-odata-api-audit-logs-in-successfactors). +Here's the OData API request template that Azure AD uses to query SuccessFactors for incremental changes. You can update the variables `SuccessFactorsAPIEndpoint`, `LastExecutionTimestamp` and `CurrentExecutionTime` in the request template use a tool like [Postman](https://www.postman.com/downloads/) to check what data is returned. Alternatively, you can also retrieve the actual request payload from SuccessFactors by [enabling OData API Audit logs](#enabling-odata-api-audit-logs-in-successfactors). ``` https://[SuccessFactorsAPIEndpoint]/odata/v2/PerPerson/$count?$format=json&$filter=(personEmpTerminationInfoNav/activeEmploymentsCount ne null) and To retrieve more attributes, follow the steps listed: 1. Click on **Edit attribute list for SuccessFactors**. > [!NOTE] - > If the **Edit attribute list for SuccessFactors** option does not show in the Azure portal, use the URL *https://portal.azure.com/?Microsoft_AAD_IAM_forceSchemaEditorEnabled=true* to access the page. + > If the **Edit attribute list for SuccessFactors** option doesn't show in the Azure portal, use the URL *https://portal.azure.com/?Microsoft_AAD_IAM_forceSchemaEditorEnabled=true* to access the page. 1. The **API expression** column in this view displays the JSONPath expressions used by the connector. This section covers how you can customize the provisioning app for the following ### Retrieving more attributes -The default Azure AD SuccessFactors provisioning app schema ships with [90+ pre-defined attributes](sap-successfactors-attribute-reference.md). +The default Azure AD SuccessFactors provisioning app schema ships with [90+ predefined attributes](sap-successfactors-attribute-reference.md). To add more SuccessFactors attributes to the provisioning schema, use the steps listed: 1. Use the OData query to retrieve data for a valid test user from Employee Central. To add more SuccessFactors attributes to the provisioning schema, use the steps ### Retrieving custom attributes -By default, the following custom attributes are pre-defined in the Azure AD SuccessFactors provisioning app: +By default, the following custom attributes are predefined in the Azure AD SuccessFactors provisioning app: * *custom01-custom15* from the User (userNav) entity * *customString1-customString15* from the EmpEmployment (employmentNav) entity called *empNavCustomString1-empNavCustomString15* * *customString1-customString15* from the EmpJobInfo (jobInfoNav) entity called *empJobNavCustomString1-empNavJobCustomString15* Extending this scenario: ### Mapping employment status to account status By default, the Azure AD SuccessFactors connector uses the `activeEmploymentsCount` field of the `PersonEmpTerminationInfo` object to set account status. You may encounter one of the following issues with this attribute. -1. There is a known SAP SuccessFactors issue documented in [knowledge base article 3047486](https://launchpad.support.sap.com/#/notes/3047486) that at times this may disable the account of a terminated worker one day prior to the termination on the last day of work. -1. 
If the `PersonEmpTerminationInfo` object gets set to null, during termination, then AD account disabling will not work, as the provisioning engine filters out records where `personEmpTerminationInfoNav` object is set to null. +1. There's a known issue where the connector may disable the account of a terminated worker one day prior to the termination on the last day of work. The issue is documented in [knowledge base article 3047486](https://launchpad.support.sap.com/#/notes/3047486). +1. If the `PersonEmpTerminationInfo` object gets set to null, during termination, then AD account disabling doesn't work because the provisioning engine filters out records where the `personEmpTerminationInfoNav` object is set to null. -If you are running into any of these issues or prefer mapping employment status to account status, you can update the mapping to expand the `emplStatus` field and use the employment status code present in the field `emplStatus.externalCode`. Based on [SAP support note 2505526](https://launchpad.support.sap.com/#/notes/2505526), here is a list of employment status codes that you can retrieve in the provisioning app. +If you're running into any of these issues or prefer mapping employment status to account status, you can update the mapping to expand the `emplStatus` field and use the employment status code present in the field `emplStatus.externalCode`. Based on [SAP support note 2505526](https://launchpad.support.sap.com/#/notes/2505526), here's a list of employment status codes that you can retrieve in the provisioning app. * A = Active * D = Dormant * U = Unpaid Leave Use the steps to update your mapping to retrieve these codes. 1. Open the attribute-mapping blade of your SuccessFactors provisioning app. 1. Under **Show advanced options**, click on **Edit SuccessFactors attribute list**. -1. Find the attribute `emplStatus` and update the JSONPath to `$.employmentNav.results[0].jobInfoNav.results[0].emplStatusNav.externalCode`. This will enable the connector to retrieve the employment status codes in the table. +1. Find the attribute `emplStatus` and update the JSONPath to `$.employmentNav.results[0].jobInfoNav.results[0].emplStatusNav.externalCode`. The update makes the connector retrieve the employment status codes in the table. 1. Save the changes. 1. In the attribute mapping blade, update the expression mapping for the account status flag. Use the steps to update your mapping to retrieve these codes. If your HR process uses Option 1, then no changes are required to the provisioning schema. If your HR process uses Option 2, then Employee Central adds a new *EmpEmployment* entity along with a new *User* entity for the same *Person* entity. -To handle both these scenarios so that the new employment data shows up when a conversion or rehire occurs, you can bulk update the provisioning app schema using the steps listed: +You can handle both scenarios so that the new employment data shows up when a conversion or rehire occurs. Bulk update the provisioning app schema using the steps listed: 1. Open the attribute-mapping blade of your SuccessFactors provisioning app. 1. Scroll down and click **Show advanced options**. To handle both these scenarios so that the new employment data shows up when a c 1. After confirming that sync works as expected, restart the provisioning job. 
> [!NOTE]-> The approach described above only works if SAP SuccessFactors returns the employment objects in ascending order, where the latest employment record is always the last record in the *employmentNav* results array. The order in which multiple employment records are returned is not guaranteed by SuccessFactors. If your SuccessFactors instance has multiple employment records corresponding to a worker and you always want to retrieve attributes associated with the active employment record, use steps described in the next section. +> The approach described above only works if SAP SuccessFactors returns the employment objects in ascending order, where the latest employment record is always the last record in the *employmentNav* results array. The order in which multiple employment records are returned isn't guaranteed by SuccessFactors. If your SuccessFactors instance has multiple employment records corresponding to a worker and you always want to retrieve attributes associated with the active employment record, use steps described in the next section. ### Retrieving current active employment record This section describes how you can update the JSONPath settings to definitely re 1. Click on the link **Review your schema here** to open the schema editor. 1. Click on the **Download** link to save a copy of the schema before editing. 1. In the schema editor, press Ctrl-H key to open the find-replace control.-1. Perform the following find replace operations. Ensure there is no leading or trailing space when performing the find-replace operations. If you are using `[-1:]` index instead of `[0]`, then update the *string-to-find* field accordingly. +1. Perform the following find replace operations. Ensure there's no leading or trailing space when performing the find-replace operations. If you're using `[-1:]` index instead of `[0]`, then update the *string-to-find* field accordingly. | **String to find** | **String to use for replace** | **Purpose** | | | -- | |- | `$.employmentNav.results\[0\].<br>jobInfoNav.results\[0\].emplStatus` | `$.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P' )\].emplStatusNav.externalCode` | With this find-replace, we are adding the ability to expand emplStatusNav OData object. | - | `$.employmentNav.results\[0\].<br>jobInfoNav.results\[0\]` | `$.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\]` | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors EmpJobInfo record. Attributes associated with terminated/inactive records in SuccessFactors will be ignored. | - | `$.employmentNav.results\[0\]` | `$.employmentNav..results\[?(@.jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\])\]` | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors Employment record. Attributes associated with terminated/inactive records in SuccessFactors will be ignored. 
| + | `$.employmentNav.results\[0\].<br>jobInfoNav.results\[0\].emplStatus` | `$.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P' )\].emplStatusNav.externalCode` | With this find-replace, we're adding the ability to expand emplStatusNav OData object. | + | `$.employmentNav.results\[0\].<br>jobInfoNav.results\[0\]` | `$.employmentNav..jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\]` | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors EmpJobInfo record. Attributes associated with terminated/inactive records in SuccessFactors are ignored. | + | `$.employmentNav.results\[0\]` | `$.employmentNav..results\[?(@.jobInfoNav..results\[?(@.emplStatusNav.externalCode == 'A' \|\| @.emplStatusNav.externalCode == 'U' \|\| @.emplStatusNav.externalCode == 'P')\])\]` | With this find-replace, we instruct the connector to always retrieve attributes associated with the active SuccessFactors Employment record. Attributes associated with terminated/inactive records in SuccessFactors are ignored. | 1. Save the schema. 1. The above process updates all JSONPath expressions. -1. For pre-hire processing to work, the JSONPath associated with `startDate` attribute must use either `[0]` or `[-1:]` index. Under **Show advanced options**, click on **Edit SuccessFactors attribute list**. Find the attribute `startDate` and set it to the value `$.employmentNav.results[-1:].startDate` +1. For prehire processing to work, the JSONPath associated with `startDate` attribute must use either `[0]` or `[-1:]` index. Under **Show advanced options**, click on **Edit SuccessFactors attribute list**. Find the attribute `startDate` and set it to the value `$.employmentNav.results[-1:].startDate` 1. Save the schema. 1. To ensure that terminations are processed as expected, you can use one of the following settings in the attribute mapping section. To fetch attributes belonging to both jobs, use the steps listed: 1. Open the attribute-mapping blade of your SuccessFactors provisioning app. 1. Scroll down and click **Show advanced options**. 1. Click on **Edit attribute list for SuccessFactors**.-1. Let's say you want to pull the department associated with job 1 and job 2. The pre-defined attribute *department* already fetches the value of department for the first job. You can define a new attribute called *secondJobDepartment* and set the JSONPath expression to `$.employmentNav.results[1].jobInfoNav.results[0].departmentNav.name_localized` +1. Let's say you want to pull the department associated with job 1 and job 2. The predefined attribute *department* already fetches the value of department for the first job. You can define a new attribute called *secondJobDepartment* and set the JSONPath expression to `$.employmentNav.results[1].jobInfoNav.results[0].departmentNav.name_localized` 1. You can now either flow both department values to Active Directory attributes or selectively flow a value using expression mapping. 1. Save the mapping. 1. Test the configuration using [provision on demand](provision-on-demand.md). The SuccessFactors connector supports expansion of the position object. 
To expan | positionNameDE | $.employmentNav.results[0].jobInfoNav.results[0].positionNav.externalName_de_DE | ### Provisioning users in the Onboarding module-Inbound user provisioning from SAP SuccessFactors to on premises Active Directory and Azure AD now supports advance provisioning of prehires present in the SAP SuccessFactors Onboarding 2.0 module. When the Azure AD provisioning service encounters a new hire profile with a future start date, it queries SAP SuccessFactors to get new hires with one of the following status codes: `active`, `inactive`, `active_external_suite`. The status code `active_external_suite` corresponds to pre-hires present in the SAP SuccessFactors Onboarding 2.0 module. For a description of these status codes, refer to [SAP support note 2736579](https://launchpad.support.sap.com/#/notes/0002736579). +Inbound user provisioning from SAP SuccessFactors to on premises Active Directory and Azure AD now supports advance provisioning of prehires present in the SAP SuccessFactors Onboarding 2.0 module. When the Azure AD provisioning service encounters a new hire profile with a future start date, it queries SAP SuccessFactors to get new hires with one of the following status codes: `active`, `inactive`, `active_external_suite`. The status code `active_external_suite` corresponds to prehires present in the SAP SuccessFactors Onboarding 2.0 module. For a description of these status codes, refer to [SAP support note 2736579](https://launchpad.support.sap.com/#/notes/0002736579). -The default behavior of the provisioning service is to process pre-hires in the Onboarding module. +The default behavior of the provisioning service is to process prehires in the Onboarding module. -If you want to exclude processing of pre-hires in the Onboarding module, update your provisioning job configuration as follows: +If you want to exclude processing of prehires in the Onboarding module, update your provisioning job configuration as follows: 1. Open the attribute-mapping blade of your SuccessFactors provisioning app. 1. Under show advanced options, edit the SuccessFactors attribute list to add a new attribute called `userStatus`. 1. Set the JSONPath API expression for this attribute as: `$.employmentNav.results[0].userNav.status` 1. Save the schema to return back to the attribute mapping blade. -1. Edit the Source Object scope to apply a scoping filter `userStatus NOT EQUALS -----` +1. Edit the Source Object scope to apply a scoping filter `userStatus NOT EQUALS` 1. Save the mapping and validate that the scoping filter works using provisioning on demand. ### Enabling OData API Audit logs in SuccessFactors--The Azure AD SuccessFactors connector uses SuccessFactors OData API to retrieve changes and provision users. If you observe issues with the provisioning service and want to confirm what data was retrieved from SuccessFactors, you can enable OData API Audit logs in SuccessFactors by following steps documented in [SAP support note 2680837](https://userapps.support.sap.com/sap/support/knowledge/en/2680837). From these audit logs you can retrieve the request payload sent by Azure AD. To troubleshoot, you can copy this request payload in a tool like "Postman", set it up to use the same API user that is used by the connector and see if it returns the desired changes from SuccessFactors. -+The Azure AD SuccessFactors connector uses SuccessFactors OData API to retrieve changes and provision users. 
If you observe issues with the provisioning service and want to confirm what data was retrieved from SuccessFactors, you can enable OData API Audit logs in SuccessFactors. To enable audit logs, follow the steps documented in [SAP support note 2680837](https://userapps.support.sap.com/sap/support/knowledge/en/2680837). Retrieve the request payload sent by Azure AD from the audit logs. To troubleshoot, you can copy this request payload in a tool like [Postman](https://www.postman.com/downloads/), set it up to use the same API user that is used by the connector and see if it returns the desired changes from SuccessFactors. ## Writeback scenarios- This section covers different write-back scenarios. It recommends configuration approaches based on how email and phone number is set up in SuccessFactors. ### Supported scenarios for phone and email write-back - | \# | Scenario requirement | Email primary <br> flag value | Business phone <br> primary flag value | Cell phone <br> primary flag value | Business phone <br> mapping | Cell phone <br> mapping | |--|--|--|--|--|--|--| | 1 | * Only set business email as primary. <br> * Don't set phone numbers. | true | true | false | \[Not Set\] | \[Not Set\] | | 2 | * In SuccessFactors, business email and business phone is primary <br> * Always flow Azure AD telephone number to business phone and mobile to cell phone. | true | true | false | telephoneNumber | mobile | | 3 | * In SuccessFactors, business email and cell phone is primary <br> * Always flow Azure AD telephone number to business phone and mobile to cell phone | true | false | true | telephoneNumber | mobile | -| 4 | * In SuccessFactors business email is primary <br> * In Azure AD, check if work telephone number is present, if present, then check if mobile number is also present, mark work telephone number as primary only if mobile number is not present. | true | Use expression mapping: `IIF(IsPresent([telephoneNumber]), IIF(IsPresent([mobile]),"false", "true"), "false")` | Use expression mapping: `IIF(IsPresent([mobile]),"false", "true")` | telephoneNumber | mobile | +| 4 | * In SuccessFactors business email is primary. <br> * In Azure AD, check if work telephone number is present, if present, then check if mobile number is also present. Mark work telephone number as primary only if mobile number isn't present. | true | Use expression mapping: `IIF(IsPresent([telephoneNumber]), IIF(IsPresent([mobile]),"false", "true"), "false")` | Use expression mapping: `IIF(IsPresent([mobile]),"false", "true")` | telephoneNumber | mobile | | 5 | * In SuccessFactors business email and business phone is primary. <br> * In Azure AD, if mobile is available, then set it as the business phone, else use telephoneNumber. | true | true | false | `IIF(IsPresent([mobile]), [mobile], [telephoneNumber])` | \[Not Set\] | -* If there is no mapping for phone number in the write-back attribute-mapping, then only email is included in the write-back. -* During new hire onboarding in Employee Central, business email and phone number may not be available. If setting business email and business phone as primary is mandatory during onboarding, you can set a dummy value for business phone and email during new hire creation, which will eventually be updated by the write-back app. +* If there's no mapping for phone number in the write-back attribute-mapping, then only email is included in the write-back. +* During new hire onboarding in Employee Central, business email and phone number may not be available. 
If setting business email and business phone as primary is mandatory during onboarding, you can set a dummy value for business phone and email during new hire creation. After some time, the write-back app updates the value. ### Enabling writeback with UserID- The SuccessFactors Writeback app uses the following logic to update the User object attributes: -* As a first step, it looks for *userId* attribute in the change set. If it is present, then it uses "UserId" for making the SuccessFactors API call. -* If *userId* is not found, then it defaults to using the *personIdExternal* attribute value. +* As a first step, it looks for *userId* attribute in the changeset. If it's present, then it uses "UserId" for making the SuccessFactors API call. +* If *userId* isn't found, then it defaults to using the *personIdExternal* attribute value. -Usually the *personIdExternal* attribute value in SuccessFactors matches the *userId* attribute value. However, in scenarios such as rehiring and worker conversion, an employee in SuccessFactors may have two employment records, one active and one inactive. In such scenarios, to ensure that write-back updates the active user profile, please update the configuration of the SuccessFactors provisioning apps as described. This configuration ensures that *userId* is always present in the change set visible to the connector and is used in the SuccessFactors API call. +Usually the *personIdExternal* attribute value in SuccessFactors matches the *userId* attribute value. However, in scenarios such as rehiring and worker conversion, an employee in SuccessFactors may have two employment records, one active and one inactive. In such scenarios, to ensure that write-back updates the active user profile, update the configuration of the SuccessFactors provisioning apps as described. This configuration ensures that *userId* is always present in the changeset visible to the connector and is used in the SuccessFactors API call. 1. Open the SuccessFactors to Azure AD user provisioning app or SuccessFactors to on-premises AD user provisioning app. -1. Ensure that an extensionAttribute *(extensionAttribute1-15)* in Azure AD always stores the *userId* of every worker's active employment record. This can be achieved by mapping SuccessFactors *userId* attribute to an extensionAttribute in Azure AD. +1. Ensure that `extensionAttribute[1-15]` in Azure AD always stores the `userId` of every worker's active employment record. The record maps SuccessFactors `userId` attribute to `extensionAttribute[1-15]` in Azure AD. > [!div class="mx-imgBorder"] >  1. For guidance regarding JSONPath settings, refer to the section [Handling worker conversion and rehiring scenarios](#handling-worker-conversion-and-rehiring-scenarios) to ensure the *userId* value of the active employment record flows into Azure AD. 1. Save the mapping. 1. Run the provisioning job to ensure that the *userId* values flow into Azure AD. > [!NOTE]- > If you are using SuccessFactors to on-premises Active Directory user provisioning, configure AAD Connect to sync the *userId* attribute value from on-premises Active Directory to Azure AD. + > If you're using SuccessFactors to on-premises Active Directory user provisioning, configure AAD Connect to sync the *userId* attribute value from on-premises Active Directory to Azure AD. 1. Open the SuccessFactors Writeback app in the Azure portal. 1. Map the desired *extensionAttribute* that contains the userId value to the SuccessFactors *userId* attribute. 
> [!div class="mx-imgBorder"] Usually the *personIdExternal* attribute value in SuccessFactors matches the *us 1. Save the mapping. 1. Go to *Attribute mapping -> Advanced -> Review Schema* to open the JSON schema editor. 1. Download a copy of the schema as backup. -1. In the schema editor, hit Ctrl-F and search for the JSON node containing the userId mapping, where it is mapped to a source Azure AD attribute. +1. In the schema editor, hit Ctrl-F and search for the JSON node containing the userId mapping, where it's mapped to a source Azure AD attribute. 1. Update the flowBehavior attribute from "FlowWhenChanged" to "FlowAlways" as shown. > [!div class="mx-imgBorder"] >  1. Save the mapping and test the write-back scenario with provisioning-on-demand. ### Unsupported scenarios for phone and email write-back--* In Employee Central, during onboarding personal email and personal phone is set as primary. The write-back app cannot switch this setting and set business email and business phone as primary. -* In Employee Central, business phone is set as primary. The write-back app cannot change this and set cell phone as primary. -* The write-back app cannot read the current primary flag settings and use the same values for the write operation. The flag values configured in the attribute-mapping will always be used. +* In Employee Central, during onboarding personal email and personal phone is set as primary. The write-back app can't switch this setting and set business email and business phone as primary. +* In Employee Central, business phone is set as primary. The write-back app can't change this and set cell phone as primary. +* The write-back app can't read the current primary flag settings and use the same values for the write operation. The flag values configured in the attribute-mapping are always be used. ## Next steps- * [Learn how to configure SuccessFactors to Active Directory provisioning](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md) * [Learn how to configure writeback to SuccessFactors](../saas-apps/sap-successfactors-writeback-tutorial.md) * [Learn more about supported SuccessFactors Attributes for inbound provisioning](sap-successfactors-attribute-reference.md) |
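The JSONPath rewrites in the SuccessFactors change details above select only job records whose `emplStatusNav.externalCode` is `A`, `U`, or `P`. The following sketch mimics that filter over a deliberately simplified, hypothetical `employmentNav` payload; it is not the connector's implementation, and a real Get_Workers/OData response carries many more fields.

```python
# Simplified, hypothetical fragment of a SuccessFactors OData worker payload.
worker = {
    "employmentNav": {
        "results": [
            {   # older, terminated employment record
                "startDate": "2015-03-01",
                "jobInfoNav": {"results": [
                    {"department": "Sales", "emplStatusNav": {"externalCode": "T"}}]},
            },
            {   # active employment record after rehire
                "startDate": "2023-01-01",
                "jobInfoNav": {"results": [
                    {"department": "Marketing", "emplStatusNav": {"externalCode": "A"}}]},
            },
        ]
    }
}

# Status codes the JSONPath expressions above treat as active.
ACTIVE_CODES = {"A", "U", "P"}

def active_job_infos(worker: dict) -> list:
    """Mimic the recursive JSONPath filter
    $.employmentNav..jobInfoNav..results[?(@.emplStatusNav.externalCode == 'A' ...)]
    by returning only job records whose employment status is active-like."""
    matches = []
    for employment in worker["employmentNav"]["results"]:
        for job in employment["jobInfoNav"]["results"]:
            if job["emplStatusNav"]["externalCode"] in ACTIVE_CODES:
                matches.append(job)
    return matches

print(active_job_infos(worker))  # -> only the Marketing record with status 'A'
```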
active-directory | Workday Integration Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-integration-reference.md | -[Azure Active Directory user provisioning service](../app-provisioning/user-provisioning.md) integrates with [Workday HCM](https://www.workday.com) to manage the identity life cycle of users. Azure Active Directory offers three pre-built integrations: +[Azure Active Directory user provisioning service](../app-provisioning/user-provisioning.md) integrates with [Workday HCM](https://www.workday.com) to manage the identity life cycle of users. Azure Active Directory offers three prebuilt integrations: * [Workday to on-premises Active Directory user provisioning](../saas-apps/workday-inbound-tutorial.md) * [Workday to Azure Active Directory user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md) To further secure the connectivity between Azure AD provisioning service and Wor 1. Copy all IP address ranges listed within the element *addressPrefixes* and use the range to build your IP address list. 1. Sign in to Workday admin portal. 1. Access the **Maintain IP Ranges** task to create a new IP range for Azure data centers. Specify the IP ranges (using CIDR notation) as a comma-separated list. -1. Access the **Manage Authentication Policies** task to create a new authentication policy. In the authentication policy, use the authentication allow list to specify the Azure AD IP range and the security group that will be allowed access from this IP range. Save the changes. +1. Access the **Manage Authentication Policies** task to create a new authentication policy. In the authentication policy, use the authentication allowlist to specify the Azure AD IP range and the security group that will be allowed access from this IP range. Save the changes. 1. Access the **Activate All Pending Authentication Policy Changes** task to confirm changes. ### Limiting access to worker data in Workday using constrained security groups The default steps to [configure the Workday integration system user](../saas-apps/workday-inbound-tutorial.md#configure-integration-system-user-in-workday) grants access to retrieve all users in your Workday tenant. In certain integration scenarios, you may want to limit the access, so that users belonging only to certain supervisory organizations are returned by the Get_Workers API call and processed by the Workday Azure AD connector. -You can fulfill this requirement by working with your Workday admin and configuring constrained integration system security groups. For more information, refer to [this Workday community article](https://community.workday.com/forums/customer-questions/620393) (*Workday Community access required for this article*) +You can limit access by working with your Workday admin and configuring constrained integration system security groups. For more information about Workday, see [Workday community](https://community.workday.com/forums/customer-questions/620393) (*Workday Community access required for this article*). This strategy of limiting access using constrained ISSG (Integration System Security Groups) is useful in the following scenarios: * **Phased rollout scenario**: You have a large Workday tenant and plan to perform a phased rollout of Workday to Azure AD automated provisioning. 
In this scenario, rather than excluding users who aren't in scope of the current phase with Azure AD scoping filters, we recommend configuring constrained ISSG so that only in-scope workers are visible to Azure AD. The *Get_Workers* API can return different data sets associated with a worker. D The table below provides guidance on mapping configuration to use to retrieve a specific data set. -| \# | Workday Entity | Included by default | XPATH pattern to specify in mapping to fetch non-default entities | +| \# | Workday Entity | Included by default | XPATH pattern to specify in mapping to fetch nondefault entities | |-|--||-| | 1 | Personal Data | Yes | `wd:Worker_Data/wd:Personal_Data` | | 2 | Employment Data | Yes | `wd:Worker_Data/wd:Employment_Data` | The above data sets aren't included by default. To retrieve these data sets: 1. Sign in to the Azure portal and open your Workday to AD/Azure AD user provisioning app. 1. In the Provisioning blade, edit the mappings and open the Workday attribute list from the advanced section. -1. Add the following attributes definitions and mark them as "Required". These attributes will not be mapped to any attribute in AD or Azure AD. They just serve as signals to the connector to retrieve the Cost Center, Cost Center Hierarchy and Pay Group information. +1. Add the following attribute definitions and mark them as "Required". These attributes won't be mapped to any attribute in AD or Azure AD. They just serve as signals to the connector to retrieve the Cost Center, Cost Center Hierarchy and Pay Group information. > [!div class="mx-tdCol2BreakAll"] >| Attribute Name | XPATH API expression | This section describes the Azure AD provisioning service support for scenarios w #### Scenario 1: Backdated conversion from FTE to CW or vice versa Your HR team may backdate a worker conversion transaction in Workday for valid business reasons, such as payroll processing, budget compliance, legal requirements or benefits management. Here's an example to illustrate how provisioning is handled for this scenario. -* It's January 15, 2022 and Jane Doe is employed as a contingent worker. HR offers Jane a full-time position. -* The terms of Jane's contract change require backdating the transaction so it aligns with the start of the current month. HR initiates a backdated worker conversion transaction Workday on January 15, 2022 with effective date as January 1, 2022. Now there are two worker profiles in Workday for Jane. The CW profile is inactive, while the FTE profile is active. -* The Azure AD provisioning service will detect this change in the Workday transaction log on January 15, 2022 and automatically provision attributes of the new FTE profile in the next sync cycle. +* It's January 15, 2023 and Jane Doe is employed as a contingent worker. HR offers Jane a full-time position. +* The terms of Jane's contract change require backdating the transaction so it aligns with the start of the current month. HR initiates a backdated worker conversion transaction in Workday on January 15, 2023, with an effective date of January 1, 2023. Now there are two worker profiles in Workday for Jane. The CW profile is inactive, while the FTE profile is active. +* The Azure AD provisioning service will detect this change in the Workday transaction log on January 15, 2023 and automatically provision attributes of the new FTE profile in the next sync cycle. * No changes are required in the provisioning app configuration to handle this scenario. 
#### Scenario 2: Worker employed as CW/FTE today, will change to FTE/CW today This scenario is similar to the above scenario, except that instead of backdatin #### Scenario 3: Worker employed as CW/FTE is terminated, rejoins as FTE/CW after a significant gap It's common for workers to start work at a company as a contingent worker, leave the company and then rejoin after several months as a full-time employee. Here's an example to illustrate how provisioning is handled for this scenario. -* It's January 1, 2022 and John Smith starts work at as a contingent worker. As there's no AD account associated with John's *WorkerID* (matching attribute), the provisioning service creates a new AD account and links John's contingent worker *WID (WorkdayID)* to John's AD account. -* John's contract ends on January 31, 2022. In the provisioning cycle that runs after end of day January 31, John's AD account is disabled. -* John applies for another position and decides to rejoin the company as full-time employee effective May 1, 2022. HR enters John's information as a pre-hire on April 15, 2022. Now there are two worker profiles in Workday for John. The CW profile is inactive, while the FTE profile is active. The two records have the same *WorkerID* but different *WID*s. -* On April 15, during incremental cycle, the Azure AD provisioning service automatically transfers ownership of the AD account to the active worker profile. In this case, it de-links the contingent worker profile from the AD account and establishes a new link between John's active employee worker profile and John's AD account. +* It's January 1, 2023 and John Smith starts work as a contingent worker. As there's no AD account associated with John's *WorkerID* (matching attribute), the provisioning service creates a new AD account and links John's contingent worker *WID (WorkdayID)* to John's AD account. +* John's contract ends on January 31, 2023. In the provisioning cycle that runs after end of day January 31, John's AD account is disabled. +* John applies for another position and decides to rejoin the company as a full-time employee effective May 1, 2023. HR enters John's information as a prehire employee on April 15, 2023. Now there are two worker profiles in Workday for John. The CW profile is inactive, while the FTE profile is active. The two records have the same *WorkerID* but different *WID*s. +* On April 15, during an incremental cycle, the Azure AD provisioning service automatically transfers ownership of the AD account to the active worker profile. In this case, it unlinks the contingent worker profile from the AD account and establishes a new link between John's active employee worker profile and John's AD account. * No changes are required in the provisioning app configuration to handle this scenario. #### Scenario 4: Future-dated conversion, when worker is an active CW/FTE Sometimes, a worker may already be an active contingent worker, when HR initiates a future-dated worker conversion transaction. Here's an example to illustrate how provisioning is handled for this scenario and what configuration changes are required to support this scenario. -* It's January 1, 2022 and John Smith starts work at as a contingent worker. As there's no AD account associated with John's *WorkerID* (matching attribute), the provisioning service creates a new AD account and links John's contingent worker *WID (WorkdayID)* to John's AD account. 
-* On January 15, HR initiates a transaction to convert John from contingent worker to full-time employee effective February 1, 2022. -* Since Azure AD provisioning service automatically processes future-dated hires, it will process John's new full-time employee worker profile on January 15, and update John's profile in AD with full-time employment details even though he is still a contingent worker. -* To avoid this behavior and ensure that John's FTE details get provisioned on February 1, 2022, perform the following configuration changes. +* It's January 1, 2023 and John Smith starts work as a contingent worker. As there's no AD account associated with John's *WorkerID* (matching attribute), the provisioning service creates a new AD account and links John's contingent worker *WID (WorkdayID)* to John's AD account. +* On January 15, HR initiates a transaction to convert John from contingent worker to full-time employee effective February 1, 2023. +* Since Azure AD provisioning service automatically processes future-dated hires, it will process John's new full-time employee worker profile on January 15, and update John's profile in AD with full-time employment details even though he's still a contingent worker. +* To avoid this behavior and ensure that John's FTE details get provisioned on February 1, 2023, perform the following configuration changes. **Configuration changes** 1. Engage your Workday admin to create a provisioning group called "Future-dated conversions". |
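The Workday entry above also describes downloading the Azure IP Ranges and Service Tags file and entering the ranges as a comma-separated CIDR list in Workday's **Maintain IP Ranges** task. The sketch below shows one way that preparation step could be scripted; it assumes the downloaded JSON uses the usual Service Tags layout (`values[].name` and `values[].properties.addressPrefixes`), and the file name and tag name are assumptions to adjust for the file and tag you actually need.

```python
import json

# Sketch: build the comma-separated CIDR list for Workday's Maintain IP Ranges
# task from the downloaded "Azure IP Ranges and Service Tags" JSON file.
# The file name and service tag name are assumptions; pick the tag your scenario requires.
SERVICE_TAG = "AzureActiveDirectory"

with open("ServiceTags_Public.json", encoding="utf-8") as f:
    service_tags = json.load(f)

prefixes = []
for tag in service_tags.get("values", []):
    if tag.get("name") == SERVICE_TAG:
        prefixes = tag["properties"]["addressPrefixes"]
        break

# Workday expects the ranges as a comma-separated list in CIDR notation.
print(", ".join(prefixes))
```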
active-directory | Access Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md | If the application has custom signing keys as a result of using the [claims-mapp ### Claims based authorization -The business logic of an application determines how authorization should be handled. The general approach to authorization based on token claims, and which claims should be used, is described in the following sections. --After a token is validated with the correct `aud` claim, the token tenant, subject, actor must be authorized. --#### Tenant --First, always check that the `tid` in a token matches the tenant ID used to store data with the application. When information is stored for an application in the context of a tenant, it should only be accessed again later in the same tenant. Never allow data in one tenant to be accessed from another tenant. --#### Subject --Next, to determine if the token subject, such as the user (or app itself for an app-only token), is authorized, either check for specific `sub` or `oid` claims, or check that the subject belongs to an appropriate role or group with the `roles`, `groups`, `wids` claims. --For example, use the immutable claim values `tid` and `oid` as a combined key for application data and determining whether a user should be granted access. --The `roles`, `groups` or `wids` claims can also be used to determine if the subject has authorization to perform an operation. For example, an administrator may have permission to write to an API, but not a normal user, or the user may be in a group allowed to do some action. --> [!WARNING] -> Never use `email` or `upn` claim values to store or determine whether the user in an access token should have access to data. Mutable claim values like these can change over time, making them insecure and unreliable for authorization. --#### Actor --Lastly, when an app is acting for a user, this client app (the actor), must also be authorized. Use the `scp` claim (scope) to validate that the app has permission to perform an operation. --The application defines the scopes and the absence of the `scp` claim means full actor permissions. --> [!NOTE] -> An application may handle app-only tokens (requests from applications without users, such as daemon apps) and want to authorize a specific application across multiple tenants, rather than individual service principal IDs. In that case, check for an app-only token using the `idtyp` optional claim and use the `appid` claim (for v1.0 tokens) or the `azp` claim (for v2.0 tokens) along with `tid` to determine authorization based on tenant and application ID. -+For more information about validating the claims in a token to ensure security, see [Secure applications and APIs by validating claims](claims-validation.md) ## Token revocation |
active-directory | App Objects And Service Principals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md | |
active-directory | Claims Validation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/claims-validation.md | + + Title: Secure applications and APIs by validating claims +description: Learn about securing the business logic of your applications and APIs by validating claims in tokens. ++++++++ Last updated : 04/21/2023+++++# Secure applications and APIs by validating claims ++Interacting with tokens is a core piece of building applications to authorize users. In accordance with the [Zero Trust principle](zero-trust-for-developers.md) for least privileged access, it's essential that applications validate the values of certain claims present in the access token when performing authorization. ++Claims based authorization allows applications to ensure that the token contains the correct values for things such as the tenant, subject, and actor present in the token. That being said, claims based authorization can seem complex given the various methods to utilize and scenarios to keep track of. This article intends to simplify the claims based authorization process so that you can ensure your applications adhere to the most secure practices. ++To make sure that your authorization logic is secure, you must validate the following information in claims: ++* The appropriate audience is specified for the token. +* The tenant ID of the token matches the ID of the tenant where data is stored. +* The subject of the token is appropriate. +* The actor (client app) is authorized. ++> [!NOTE] +>Access tokens are only validated in the web APIs for which they were acquired by a client. The client should not validate access tokens. ++For more information about the claims mentioned in this article, see [Microsoft identity platform access tokens](access-tokens.md). ++## Validate the audience ++The `aud` claim identifies the intended audience of the token. Before validating claims, you must always verify that the value of the `aud` claim contained in the access token matches the Web API. The value can depend on how the client requested the token. The audience in the access token depends on the endpoint: ++* For v2.0 tokens, the audience is the client ID of the web API. It's a GUID. +* For v1.0 tokens, the audience is one of the appID URIs declared in the web API that validates the token. For example, +`api://{ApplicationID}`, or a unique name starting with a domain name (if the domain name is associated with a tenant). ++For more information about the appID URI of an application, see [Application ID URI](security-best-practices-for-app-registration.md#application-id-uri). ++## Validate the tenant ++Always check that the `tid` in a token matches the tenant ID used to store data with the application. When information is stored for an application in the context of a tenant, it should only be accessed again later in the same tenant. Never allow data in one tenant to be accessed from another tenant. ++## Validate the subject ++Determine if the token subject, such as the user (or application itself for an app-only token), is authorized. ++You can either check for specific `sub` or `oid` claims. ++Or, ++You can check that the subject belongs to an appropriate role or group with the `roles`, `groups`, `wids` claims. For example, use the immutable claim values `tid` and `oid` as a combined key for application data and determining whether a user should be granted access. 
++The `roles`, `groups` or `wids` claims can also be used to determine if the subject has authorization to perform an operation. For example, an administrator may have permission to write to an API, but not a normal user, or the user may be in a group allowed to do some action. The `wids` claim represents the tenant-wide roles assigned to the user from the roles present in the Azure AD built-in roles. For more information, see [Azure AD built-in roles](../roles/permissions-reference.md). ++> [!WARNING] +> Never use claims like `email`, `preferred_username` or `unique_name` to store or determine whether the user in an access token should have access to data. These claims are not unique and can be controlled by tenant administrators or sometimes users, which makes them unsuitable for authorization decisions. They are only usable for display purposes. Also don't use the `upn` claim for authorization. While the UPN is unique, it often changes over the lifetime of a user principal, which makes it unreliable for authorization. ++## Validate the actor ++A client application that's acting on behalf of a user (referred to as the *actor*) must also be authorized. Use the `scp` claim (scope) to validate that the application has permission to perform an operation. The permissions in `scp` should be limited to what the user actually needs and follow the principles of [least privilege](secure-least-privileged-access.md). ++However, there are known scenarios where `scp` isn't present in the token: ++* Daemon apps / app only permission - validate the role claims instead of the `scp` claim. +* A separate role-based access control system is used - validate roles instead of `scp`. ++For more information about scopes and permissions, see [Scopes and permissions in the Microsoft identity platform](scopes-oidc.md). ++> [!NOTE] +> An application may handle app-only tokens (requests from applications without users, such as daemon apps) and want to authorize a specific application across multiple tenants, rather than individual service principal IDs. In that case, the `appid` claim (for v1.0 tokens) or the `azp` claim (for v2.0 tokens) can be used for subject authorization. However, when using these claims, the application must ensure that the token was issued directly for the application by validating the `idtyp` optional claim. Only tokens of type `app` can be authorized this way, as delegated user tokens can potentially be obtained by entities other than the application. ++## Next steps ++* Learn more about tokens and claims in [Security tokens](security-tokens.md) |
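The claims-validation article captured above walks through audience, tenant, subject, and actor checks in order; the following sketch shows that order for a delegated (user) token. It assumes the token has already been signature-validated and decoded into a dictionary of claims, and every expected value (client ID, tenant ID, allowed roles, required scope) is a placeholder you would take from your own app registration rather than anything prescribed by the article.

```python
# Sketch of claims-based authorization for a delegated token, in the order the
# article describes: audience -> tenant -> subject -> actor. Signature validation
# is assumed to have already happened; all expected values are placeholders.

def authorize(claims: dict, *, expected_audience: str, expected_tenant: str,
              allowed_roles: set, required_scope: str) -> bool:
    if claims.get("aud") != expected_audience:        # audience: token issued for this API
        return False
    if claims.get("tid") != expected_tenant:          # tenant: data stays in its own tenant
        return False
    has_role = not allowed_roles or bool(allowed_roles.intersection(claims.get("roles", [])))
    if not claims.get("oid") or not has_role:         # subject: oid plus role/group membership
        return False
    scopes = claims.get("scp", "").split()            # actor: client app holds the right scope
    return required_scope in scopes

claims = {"aud": "11111111-2222-3333-4444-555555555555",
          "tid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
          "oid": "user-object-id", "roles": ["Reader"], "scp": "Files.Read"}
print(authorize(claims, expected_audience="11111111-2222-3333-4444-555555555555",
                expected_tenant="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
                allowed_roles={"Reader"}, required_scope="Files.Read"))  # True
```

For app-only tokens, the article notes that `scp` is absent and role claims (plus the `idtyp` check) take its place, so the last step would be swapped accordingly.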
active-directory | Custom Extension Configure Saml App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-configure-saml-app.md | Test that the token is being enriched for users signing in to the application: [Troubleshoot your custom claims provider API](custom-extension-troubleshoot.md). -View the [Authentication Events Trigger for Azure Functions sample app](https://github.com/Azure/microsoft-azure-webJobs-extensions-authentication-events). +View the [Authentication Events Trigger for Azure Functions sample app](https://github.com/Azure/azure-docs-sdk-dotnet/blob/live/api/overview/azure/preview/microsoft.azure.webjobs.extensions.authenticationevents-readme.md). <!-- For information on the HTTP request and response formats, read the [protocol reference](custom-claims-provider-protocol-reference.md). --> |
active-directory | Howto Configure Publisher Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md | |
active-directory | Mark App As Publisher Verified | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md | |
active-directory | Publisher Verification Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md | |
active-directory | Troubleshoot Publisher Verification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md | |
active-directory | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md | Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 03/31/2023 Last updated : 04/28/2023 +## April 2023 ++### Updated articles ++- [Allow or block domains](allow-deny-list.md) Screenshots were updated. +- [Authentication and Conditional Access](authentication-conditional-access.md) Links to other articles were updated. +- [Code and Azure PowerShell samples](code-samples.md) Minor text updates. +- [Azure Active Directory](azure-ad-account.md) Minor text updates. + ## March 2023 ### Updated articles Welcome to what's new in Azure Active Directory External Identities documentatio - [Azure Active Directory External Identities: What's new](whats-new-docs.md) - [Authentication and Conditional Access for External Identities](authentication-conditional-access.md) - [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)--## January 2023 --### Updated articles --- [Federation with SAML/WS-Fed identity providers for guest users](direct-federation.md)-- [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md)-- [Add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)-- [Add Facebook as an identity provider for External Identities](facebook-federation.md)-- [Leave an organization as an external user](leave-the-organization.md)-- [External Identities in Azure Active Directory](external-identities-overview.md)-- [External Identities documentation](index.yml) |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md | Azure AD receives improvements on an ongoing basis. To stay up to date with the This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md). +## April 2023 ++### Public Preview - Custom attributes for Azure Active Directory Domain Services ++**Type:** New feature +**Service category:** Azure Active Directory Domain Services +**Product capability:** Azure Active Directory Domain Services ++Azure Active Directory Domain Services will now support synchronizing custom attributes from Azure AD for on-premises accounts. For more information, see: [Custom attributes for Azure Active Directory Domain Services](/azure/active-directory-domain-services/concepts-custom-attributes). ++++### General Availability - Enablement of combined security information registration for MFA and self-service password reset (SSPR) ++**Type:** New feature +**Service category:** MFA +**Product capability:** Identity Security & Protection ++Last year we announced the combined registration user experience for MFA and self-service password reset (SSPR) was rolling out as the default experience for all organizations. We're happy to announce that the combined security information registration experience is now fully rolled out. This change doesn't affect tenants located in the China region. For more information, see: [Combined security information registration for Azure Active Directory overview](../authentication/concept-registration-mfa-sspr-combined.md). ++++### General Availability - PIM alert: Alert on active-permanent role assignments in Azure or assignments made outside of PIM ++**Type:** Fixed +**Service category:** Privileged Identity Management +**Product capability:** Privileged Identity Management ++[Alert on Azure subscription role assignments made outside of Privileged Identity Management (PIM)](../privileged-identity-management/pim-resource-roles-configure-alerts.md) provides an alert in PIM for Azure subscription assignments made outside of PIM. An owner or User Access Administrator can take a quick remediation action to remove those assignments. ++++### Public Preview - Enhanced Create User and Invite User Experiences ++**Type:** Changed feature +**Service category:** User Management +**Product capability:** User Management ++Admins can now define more properties when creating and inviting a user in the Entra admin portal. These improvements bring our UX to parity with our [Create User APIS](/graph/api/user-post-users). Additionally, admins can now add users to a group or administrative unit, as well as assign roles. For more information, see: [Add or delete users using Azure Active Directory](../fundamentals/add-users-azure-active-directory.md). ++++### Public Preview - Azure AD Conditional Access protected actions ++**Type:** Changed feature +**Service category:** RBAC +**Product capability:** Access Control ++The protected actions public preview introduces the ability to apply Conditional Access to select permissions. When a user performs a protected action, they must satisfy Conditional Access policy requirements. For more information, see: [What are protected actions in Azure AD? (preview)](../roles/protected-actions-overview.md). 
++++### Public Preview - Token Protection for Sign-in Sessions ++**Type:** New feature +**Service category:** Conditional Access +**Product capability:** User Authentication ++Token Protection for sign-in sessions is our first release on a road-map to combat attacks involving token theft and replay. It provides conditional access enforcement of token proof-of-possession for supported clients and services that ensures that access to specified resources is only from a device to which the user has signed in. For more information, see: [Conditional Access: Token protection (preview)](../conditional-access/concept-token-protection.md). ++++### General Availability- New limits on number and size of group secrets starting June 2023 ++**Type:** Plan for change +**Service category:** Group Management +**Product capability:** Directory ++Starting in June 2023, the secrets stored on a single group can't exceed 48 individual secrets, or have a total size greater than 10KB across all secrets on a single group. Groups with more than 10KB of secrets will immediately stop working in June 2023. In June, groups exceeding 48 secrets are unable to increase the number of secrets they have, though they may still update or delete those secrets. We highly recommend reducing to fewer than 48 secrets by January 2024. ++Group secrets are typically created when a group is assigned credentials to an app using Password-based single sign-on. To reduce the number of secrets assigned to a group, we recommend creating additional groups, and splitting up group assignments to your Password-based SSO applications across those new groups. For more information, see: [Add password-based single sign-on to an application](../manage-apps/configure-password-single-sign-on-non-gallery-applications.md). ++++### Public Preview - Authenticator Lite in Outlook ++**Type:** New feature +**Service category:** Microsoft Authenticator App +**Product capability:** User Authentication ++Authenticator Lite is an additional surface for AAD users to complete multifactor authentication using push notifications on their Android or iOS device. With Authenticator Lite, users can satisfy a multifactor authentication requirement from the convenience of a familiar app. Authenticator Lite is currently enabled in the Outlook mobile app. Users may receive a notification in their Outlook mobile app to approve or deny, or use the Outlook app to generate an OATH verification code that can be entered during sign-in. The *'Microsoft managed'* setting for this feature will be set to enabled on May 26th, 2023. This will enable the feature for all users in tenants where the feature is set to Microsoft managed. If you wish to change the state of this feature, please do so before May 26th, 2023. For more information, see: [How to enable Microsoft Authenticator Lite for Outlook mobile (preview)](../authentication/how-to-mfa-authenticator-lite.md). ++++### General Availability - Updated look and feel for Per-user MFA ++**Type:** Plan for change +**Service category:** MFA +**Product capability:** Identity Security & Protection ++As part of ongoing service improvements, we are making updates to the per-user MFA admin configuration experience to align with the look and feel of Azure. This change does not include any changes to the core functionality and will only include visual improvements.  For more information, see: [Enable per-user Azure AD Multi-Factor Authentication to secure sign-in events](../authentication/howto-mfa-userstates.md). 
++++### General Availability - Additional terms of use audit logs will be turned off ++**Type:** Fixed +**Service category:** Terms of Use +**Product capability:** AuthZ/Access Delegation ++Due to a technical issue, we have recently started to emit additional audit logs for terms of use. The additional audit logs will be turned off by the first of May and are tagged with the core directory service and the agreement category. If you have built a dependency on the additional audit logs, you must switch to the regular audit logs tagged with the terms of use service. ++++### General Availability - New Federated Apps available in Azure AD Application gallery - April 2023 ++++**Type:** New feature +**Service category:** Enterprise Apps +**Product capability:** 3rd Party Integration ++In April 2023 we've added the following 10 new applications in our App gallery with Federation support: ++[iTel Alert](https://www.itelalert.nl/), [goFLUENT](../saas-apps/gofluent-tutorial.md), [StructureFlow](https://app.structureflow.co/), [StructureFlow AU](https://au.structureflow.co/), [StructureFlow CA](https://ca.structureflow.co/), [StructureFlow EU](https://eu.structureflow.co/), [StructureFlow USA](https://us.structureflow.co/), [Predict360 SSO](../saas-apps/predict360-sso-tutorial.md), [Cegid Cloud](https://www.cegid.com/fr/nos-produits/), [HashiCorp Cloud Platform (HCP)](../saas-apps/hashicorp-cloud-platform-hcp-tutorial.md), [O'Reilly learning platform](../saas-apps/oreilly-learning-platform-tutorial.md), [LeftClick Web Services – RoomGuide](https://www.leftclick.cloud/digital_signage), [LeftClick Web Services – Sharepoint](https://www.leftclick.cloud/digital_signage), [LeftClick Web Services – Presence](https://www.leftclick.cloud/presence), [LeftClick Web Services - Single Sign-On](https://www.leftclick.cloud/presence), [InterPrice Technologies](http://www.interpricetech.com/), [WiggleDesk SSO](https://wiggledesk.com/), [Application Experience with Mist](https://www.mist.com/), [Connect Plans 360](https://connectplans360.com.au/), [Proactis Rego Source-to-Contract](../saas-apps/proactis-rego-source-to-contract-tutorial.md), [Danomics](https://www.danomics.com/), [Fountain](../saas-apps/fountain-tutorial.md), [Theom](../saas-apps/theom-tutorial.md), [DDC Web](../saas-apps/ddc-web-tutorial.md), [Dozuki](../saas-apps/dozuki-tutorial.md). +++You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial. ++For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest ++++### Public Preview - New provisioning connectors in the Azure AD Application Gallery - April 2023 ++**Type:** New feature +**Service category:** App Provisioning +**Product capability:** 3rd Party Integration + ++We've added the following new applications in our App gallery with Provisioning support. 
You can now automate creating, updating, and deleting of user accounts for these newly integrated apps: ++- [Alvao](../saas-apps/alvao-provisioning-tutorial.md) +- [Better Stack](../saas-apps/better-stack-provisioning-tutorial.md) +- [BIS](../saas-apps/bis-provisioning-tutorial.md) +- [Connecter](../saas-apps/connecter-provisioning-tutorial.md) +- [Howspace](../saas-apps/howspace-provisioning-tutorial.md) +- [Kno2fy](../saas-apps/kno2fy-provisioning-tutorial.md) +- [Netsparker Enterprise](../saas-apps/netsparker-enterprise-provisioning-tutorial.md) +- [uniFLOW Online](../saas-apps/uniflow-online-provisioning-tutorial.md) +++For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md). +++++### Public Preview - New PIM Azure resource picker ++**Type:** Changed feature +**Service category:** Privileged Identity Management +**Product capability:** End User Experiences ++With this new experience, PIM now automatically manages any type of resource in a tenant, so discovery and activation are no longer required. With the new resource picker, users can directly choose the scope they want to manage from the Management Group down to the resources themselves, making it faster and easier to locate the resources they need to administer. For more information, see: [Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md). ++++### General availability - Self Service Password Reset (SSPR) now supports PIM eligible users and indirect group role assignment ++**Type:** Changed feature +**Service category:** Self Service Password Reset +**Product capability:** Identity Security & Protection ++Self Service Password Reset (SSPR) can now evaluate PIM eligible users and group-based memberships, along with direct memberships, when checking if a user is in a particular administrator role. This capability provides more accurate SSPR policy enforcement by validating if users are in scope for the default SSPR admin policy or your organization's SSPR user policy. +++For more information, see: ++- [Administrator reset policy differences](../authentication/concept-sspr-policy.md#administrator-reset-policy-differences). +- [Create a role-assignable group in Azure Active Directory](../roles/groups-create-eligible.md) ++++ ## March 2023 |
active-directory | Howto Manage Inactive User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md | The following details relate to the `lastSignInDateTime` property. - Each interactive sign-in attempt results in an update of the underlying data store. Typically, sign-ins show up in the related sign-in report within 6 hours. -- To generate a `lastSignInDateTime` timestamp, you an attempted sign-in. The value of the `lastSignInDateTime` property may be blank if:+- To generate a `lastSignInDateTime` timestamp, you must attempt a sign-in. Either a failed or successful sign-in attempt, as long as it is recorded in the [Azure AD sign-in logs](concept-all-sign-ins.md), will generate a `lastSignInDateTime` timestamp. The value of the `lastSignInDateTime` property may be blank if: - The last attempted sign-in of a user took place before April 2020. - The affected user account was never used for a sign-in attempt. |
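To show how the `lastSignInDateTime` guidance in the entry above might be applied when hunting for inactive accounts, here is a small sketch. It assumes the user records, including the sign-in activity timestamp, were already retrieved (for example, through the Microsoft Graph `signInActivity` property); the field names on the sample records are placeholders, and a blank value is treated as "no recorded sign-in", matching the blank cases the entry lists.

```python
from datetime import datetime, timedelta, timezone

# Sketch: flag accounts whose last attempted sign-in is older than a cutoff, or
# was never recorded at all. Field names are placeholders on pre-fetched records.
CUTOFF = datetime.now(timezone.utc) - timedelta(days=90)

users = [
    {"displayName": "Avery", "lastSignInDateTime": "2023-04-20T08:15:00Z"},
    {"displayName": "Blake", "lastSignInDateTime": None},  # never used, or last sign-in before April 2020
]

def is_inactive(user: dict) -> bool:
    stamp = user.get("lastSignInDateTime")
    if not stamp:  # blank: no sign-in attempt was ever recorded for this account
        return True
    last_sign_in = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
    return last_sign_in < CUTOFF

for user in users:
    print(user["displayName"], "inactive" if is_inactive(user) else "active")
```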
active-directory | Alchemer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alchemer-tutorial.md | + + Title: Azure Active Directory SSO integration with Alchemer +description: Learn how to configure single sign-on between Azure Active Directory and Alchemer. ++++++++ Last updated : 04/27/2023+++++# Azure Active Directory SSO integration with Alchemer ++In this article, you learn how to integrate Alchemer with Azure Active Directory (Azure AD). Alchemer offers the worldΓÇÖs most flexible feedback and data collection platform that allows organizations to close the loop with their customers and employees quickly and effectively. When you integrate Alchemer with Azure AD, you can: ++* Control in Azure AD who has access to Alchemer. +* Enable your users to be automatically signed-in to Alchemer with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You configure and test Azure AD single sign-on for Alchemer in a test environment. Alchemer supports both **SP** and **IDP** initiated single sign-on and Just In Time user provisioning. ++## Prerequisites ++To integrate Azure Active Directory with Alchemer, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Alchemer single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Alchemer application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Alchemer from the Azure AD gallery ++Add Alchemer from the Azure AD application gallery to configure single sign-on with Alchemer. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Alchemer** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. 
In the **Identifier** textbox, type a URL using the following pattern: + `https://app.alchemer.com/login/getsamlxml/idp/<INSTANCE>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://app.alchemer.com/login/ssologin/idp/<INSTANCE>` ++1. If you wish to configure the application in **SP** initiated mode, then perform the following step: ++ In the **Sign on URL** textbox, type a URL using the following pattern: + `https://app.alchemer.com/login/initiatelogin/idp/<INSTANCE>` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Alchemer Client support team](mailto:support@alchemer.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up Alchemer** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure Alchemer SSO ++To configure single sign-on on **Alchemer** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Alchemer support team](mailto:support@alchemer.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Alchemer test user ++In this section, a user called B.Simon is created in Alchemer. Alchemer supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Alchemer, a new one is commonly created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++#### SP initiated: ++* Click on **Test this application** in Azure portal. This will redirect to Alchemer Sign-on URL where you can initiate the login flow. ++* Go to Alchemer Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in Azure portal and you should be automatically signed in to the Alchemer for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the Alchemer tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Alchemer for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Alchemer you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Alvao Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alvao-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and ALVAO](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure ALVAO to support provisioning with Azure AD-1. Find your **Tenant SCIM Endpoint URL**, which is in the form: {ALVAO REST API address}/scim, for example, https://app.contoso.com/alvaorestapi/scim. +1. Find your **Tenant SCIM Endpoint URL**, which should have the format `{ALVAO REST API address}/scim` (for example, https://app.contoso.com/alvaorestapi/scim). 1. Generate a new **Secret Token** in **WebApp - Administration - Settings - [Active Directory and Azure Active Directory](https://doc.alvao.com/en/11.1/list-of-windows/alvao-webapp/administration/settings/activedirectory)** and copy its value. ## Step 3. Add ALVAO from the Azure AD application gallery |
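Before wiring the ALVAO SCIM endpoint from the entry above into the Azure AD provisioning app, it can help to smoke-test the endpoint URL and secret token with a single authenticated request. The sketch below assumes the endpoint exposes the standard SCIM `/Users` resource and accepts the secret token as a bearer token; the URL mirrors the example format in the entry, and the token value is a placeholder.

```python
import requests

# Sketch: quick check that the ALVAO SCIM endpoint and secret token work before
# configuring the Azure AD provisioning app. URL and token are placeholders;
# the standard SCIM /Users resource and bearer-token auth are assumed.
SCIM_ENDPOINT = "https://app.contoso.com/alvaorestapi/scim"
SECRET_TOKEN = "<secret-token-generated-in-ALVAO-administration>"

response = requests.get(
    f"{SCIM_ENDPOINT}/Users?count=1",
    headers={"Authorization": f"Bearer {SECRET_TOKEN}"},
    timeout=30,
)
print(response.status_code)  # expect 200 when the URL and token are valid
```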
active-directory | Cmd Ctrl Base Camp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cmd-ctrl-base-camp-tutorial.md | + + Title: Azure Active Directory SSO integration with CMD+CTRL Base Camp +description: Learn how to configure single sign-on between Azure Active Directory and CMD+CTRL Base Camp. ++++++++ Last updated : 04/27/2023+++++# Azure Active Directory SSO integration with CMD+CTRL Base Camp ++In this article, you learn how to integrate CMD+CTRL Base Camp with Azure Active Directory (Azure AD). CMD+CTRL Base Camp is a unique learning platform that combines our modes of software security training courses, labs, and cyber ranges into an engaging and effective integrated learner experience. When you integrate CMD+CTRL Base Camp with Azure AD, you can: ++* Control in Azure AD who has access to CMD+CTRL Base Camp. +* Enable your users to be automatically signed-in to CMD+CTRL Base Camp with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You configure and test Azure AD single sign-on for CMD+CTRL Base Camp in a test environment. CMD+CTRL Base Camp supports **SP** initiated single sign-on and **Just In Time** user provisioning. ++## Prerequisites ++To integrate Azure Active Directory with CMD+CTRL Base Camp, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* CMD+CTRL Base Camp single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the CMD+CTRL Base Camp application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add CMD+CTRL Base Camp from the Azure AD gallery ++Add CMD+CTRL Base Camp from the Azure AD application gallery to configure single sign-on with CMD+CTRL Base Camp. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **CMD+CTRL Base Camp** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. 
On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `urn:auth0:cmdnctrl:<ConnectionName>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://login.cmdnctrl.net/login/callback?connection=<ConnectionName>` ++ c. In the **Sign on URL** textbox, type the URL: + `https://login.cmdnctrl.net` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [CMD+CTRL Base Camp Client support team](mailto:support@cmdnctrl.net) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up CMD+CTRL Base Camp** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure CMD+CTRL Base Camp SSO ++To configure single sign-on on **CMD+CTRL Base Camp** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CMD+CTRL Base Camp support team](mailto:support@cmdnctrl.net). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create CMD+CTRL Base Camp test user ++In this section, a user called B.Simon is created in CMD+CTRL Base Camp. CMD+CTRL Base Camp supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in CMD+CTRL Base Camp, a new one is commonly created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to CMD+CTRL Base Camp Sign-on URL where you can initiate the login flow. ++* Go to CMD+CTRL Base Camp Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the CMD+CTRL Base Camp tile in the My Apps, this will redirect to CMD+CTRL Base Camp Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure CMD+CTRL Base Camp you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Document360 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/document360-tutorial.md | + + Title: Azure Active Directory SSO integration with Document360 +description: Learn how to configure single sign-on between Azure Active Directory and Document360. ++++++++ Last updated : 04/27/2023+++++# Azure Active Directory SSO integration with Document360 ++In this article, you learn how to integrate Document360 with Azure Active Directory (Azure AD). Document360 is an online self-service knowledge base software. When you integrate Document360 with Azure AD, you can: ++* Control in Azure AD who has access to Document360. +* Enable your users to be automatically signed-in to Document360 with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You configure and test Azure AD single sign-on for Document360 in a test environment. Document360 supports **SP** and **IDP** initiated single sign-on. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with Document360, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Document360 single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Document360 application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Document360 from the Azure AD gallery ++Add Document360 from the Azure AD application gallery to configure single sign-on with Document360. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Document360** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. 
In the **Identifier** textbox, type one of the following URLs: ++ | **Identifier** | + |--| + | `https://identity.document360.io/saml` | + | `https://identity.us.document360.io/saml` | ++ b. In the **Reply URL** textbox, type a URL using one of the following patterns: ++ | **Reply URL** | + | -| + | `https://identity.document360.io/signin-saml-<ID>` | + | `https://identity.us.document360.io/signin-saml-<ID>` | ++1. If you wish to configure the application in **SP** initiated mode, then perform the following step: ++ In the **Sign on URL** textbox, type one of the following URLs: ++ | **Sign on URL** | + |--| + | `https://identity.document360.io ` | + | `https://identity.us.document360.io` | ++ > [!NOTE] + > The Reply URL is not real. Update this value with the actual Reply URL. Contact [Document360 Client support team](mailto:support@document360.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up Document360** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure Document360 SSO ++To configure single sign-on on **Document360** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Document360 support team](mailto:support@document360.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Document360 test user ++In this section, you create a user called Britta Simon at Document360. Work with [Document360 support team](mailto:support@document360.com) to add the users in the Document360 platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++#### SP initiated: ++* Click on **Test this application** in Azure portal. This will redirect to Document360 Sign-on URL where you can initiate the login flow. ++* Go to Document360 Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in Azure portal and you should be automatically signed in to the Document360 for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the Document360 tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Document360 for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Document360 you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Efidigitalstorefront Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/efidigitalstorefront-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. `https://<COMPANY_NAME>.myprintdesk.net/DSF/asp4/` > [!NOTE]- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [EFI Digital StoreFront Client support team](https://www.efi.com/products/productivity-software/ecommerce-web-to-print/efi-digital-storefront/support/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [EFI Digital StoreFront Client support team](https://www.efi.com/support-and-downloads/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer. In this section, you'll enable B.Simon to use Azure single sign-on by granting a ## Configure EFI Digital StoreFront SSO -To configure single sign-on on **EFI Digital StoreFront** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [EFI Digital StoreFront Client support team](https://www.efi.com/products/productivity-software/ecommerce-web-to-print/efi-digital-storefront/support/). They set this setting to have the SAML SSO connection set properly on both sides. +To configure single sign-on on **EFI Digital StoreFront** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [EFI Digital StoreFront Client support team](https://www.efi.com/support-and-downloads/). They set this setting to have the SAML SSO connection set properly on both sides. ### Create EFI Digital StoreFront test user -In this section, you create a user called Britta Simon in EFI Digital StoreFront. Work with [EFI Digital StoreFront support team](https://www.efi.com/products/productivity-software/ecommerce-web-to-print/efi-digital-storefront/support/) to add the users in the EFI Digital StoreFront platform. Users must be created and activated before you use single sign-on. +In this section, you create a user called Britta Simon in EFI Digital StoreFront. Work with [EFI Digital StoreFront support team](https://www.efi.com/support-and-downloads/) to add the users in the EFI Digital StoreFront platform. Users must be created and activated before you use single sign-on. ## Test SSO |
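The **Federation Metadata XML** downloaded above is what the EFI Digital StoreFront support team uses to configure their side. A quick way to verify the file before sending it is to parse it and print the entity ID and single sign-on endpoints. The sketch below uses only the Python standard library and assumes the file was saved as `federationmetadata.xml` (an illustrative name).

```python
import xml.etree.ElementTree as ET

MD = "{urn:oasis:names:tc:SAML:2.0:metadata}"

# Illustrative filename; point this at the Federation Metadata XML you downloaded.
root = ET.parse("federationmetadata.xml").getroot()

print("entityID:", root.get("entityID"))
for sso in root.iter(MD + "SingleSignOnService"):
    print(sso.get("Binding"), "->", sso.get("Location"))
```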
active-directory | Gaggleamp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gaggleamp-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. In the **Identifier** text box, type the URL: `https://accounts.gaggleamp.com/auth/saml/callback` -5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: -- In the **Sign-on URL** text box, type a URL using the following pattern: - `https://gaggleamp.com/i/<customerid>` -- > [!NOTE] - > The value is not real. Update the value with the actual Sign-on URL. Contact [GaggleAMP Client support team](mailto:sales@gaggleamp.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. - 6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.  |
active-directory | Grovo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/grovo-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. `https://<subdomain>.grovo.com/sso/saml2/saml-assertion` > [!NOTE]- > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL and Relay State. Contact [Grovo Client support team](https://www.grovo.com/contact-us) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL and Relay State. Contact Grovo Client support team to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. In this section, you'll enable B.Simon to use Azure single sign-on by granting a In this section, a user called B.Simon is created in Grovo. Grovo supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Grovo, a new one is created after authentication. -> [!Note] -> If you need to create a user manually, Contact [Grovo support team](https://www.grovo.com/contact-us). - ## Test SSO In this section, you test your Azure AD single sign-on configuration using the Access Panel. |
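Because Grovo provisions users just-in-time from the SAML assertion, it can be useful during a test sign-in to confirm which attributes Azure AD actually sends. A hedged sketch: capture the Base64 `SAMLResponse` value posted to the Reply URL (for example, with browser developer tools) and decode it locally; the `saml_response_b64` value below is a placeholder.

```python
import base64
import xml.etree.ElementTree as ET

SAML = "{urn:oasis:names:tc:SAML:2.0:assertion}"

# Placeholder: paste the SAMLResponse form value captured from a test sign-in.
saml_response_b64 = "<base64-encoded SAMLResponse>"

xml_bytes = base64.b64decode(saml_response_b64)
root = ET.fromstring(xml_bytes)

# List every attribute name and value in the assertion's AttributeStatement.
for attribute in root.iter(SAML + "Attribute"):
    values = [value.text for value in attribute.iter(SAML + "AttributeValue")]
    print(attribute.get("Name"), "=", values)
```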
active-directory | Sign In Enterprise Host Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sign-in-enterprise-host-provisioning-tutorial.md | Title: 'Tutorial: Configure Sign In Enterprise Host Provisioning for automatic user provisioning with Azure Active Directory' -description: Learn how to automatically provision and de-provision user accounts from Azure AD to Sign In Enterprise Host Provisioning. + Title: 'Tutorial: Configure Sign In Enterprise for automatic host provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision hosts from Azure AD to Sign In Enterprise. writer: twimmers Last updated 04/27/2023 -# Tutorial: Configure Sign In Enterprise Host Provisioning for automatic user provisioning +# Tutorial: Configure Sign In Enterprise for automatic host provisioning -This tutorial describes the steps you need to perform in both Sign In Enterprise Host Provisioning and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Sign In Enterprise Host Provisioning](https://signinenterprise.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +This tutorial describes the steps you need to perform in both Sign In Enterprise and Azure Active Directory (Azure AD) to configure automatic host provisioning. When configured, Azure AD automatically provisions and de-provisions hosts and host groups to [Sign In Enterprise](https://signinenterprise.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). ## Supported capabilities > [!div class="checklist"]-> * Create users in Sign In Enterprise Host Provisioning. -> * Remove users in Sign In Enterprise Host Provisioning when they do not require access anymore. -> * Keep user attributes synchronized between Azure AD and Sign In Enterprise Host Provisioning. -> * Provision groups and group memberships in Sign In Enterprise Host Provisioning. +> * Create hosts in Sign In Enterprise. +> * Provision host groups and their memberships in Sign In Enterprise. +> * Mark hosts as invisible in Sign In Enterprise that are unassigned from the application. +> * Delete host groups that are unassigned from the application. ## Prerequisites The scenario outlined in this tutorial assumes that you already have the followi * [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).-* A user account in Sign In Enterprise Host Provisioning with Admin permissions. +* A user account in Sign In Enterprise with Admin permissions. ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 1. 
Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).-1. Determine what data to [map between Azure AD and Sign In Enterprise Host Provisioning](../app-provisioning/customize-application-attributes.md). +1. Determine what data to [map between Azure AD and Sign In Enterprise](../app-provisioning/customize-application-attributes.md). -## Step 2. Configure Sign In Enterprise Host Provisioning to support provisioning with Azure AD -Contact Sign In Enterprise Host support to configure Sign In Enterprise Host to support provisioning with Azure AD. +## Step 2. Gather SCIM Host Provisioning information from Sign In Enterprise ++1. Click on the gear icon in the top-right corner of your Sign In Enterprise account. +1. Click **Preferences**. +1. In the **General tab**, scroll down until you get to the **SCIM Host Provisioning** section. You will then need to copy both the URL and the Token, which will be needed in Step 5 below. ## Step 3. Add Sign In Enterprise Host Provisioning from the Azure AD application gallery -Add Sign In Enterprise Host Provisioning from the Azure AD application gallery to start managing provisioning to Sign In Enterprise Host Provisioning. If you have previously setup Sign In Enterprise Host Provisioning for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). +Add Sign In Enterprise Host Provisioning from the Azure AD application gallery to start managing provisioning to Sign In Enterprise. If you have previously setup Sign In Enterprise for SSO you can't use the same application. It's required that you create a separate app for Sign In Enterprise Host Provisioning. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ## Step 4. Define who will be in scope for provisioning The Azure AD provisioning service allows you to scope who will be provisioned ba * If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. -## Step 5. Configure automatic user provisioning to Sign In Enterprise Host Provisioning +## Step 5. Configure automatic user provisioning to Sign In Enterprise. This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. This section guides you through the steps to configure the Azure AD provisioning  -1. Under the **Admin Credentials** section, input your Sign In Enterprise Host Provisioning Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Sign In Enterprise Host Provisioning. If the connection fails, ensure your Sign In Enterprise Host Provisioning account has Admin permissions and try again. +1. Under the **Admin Credentials** section, input your Sign In Enterprise Tenant URL and Token you copied in Step 2. Click **Test Connection** to ensure Azure AD can connect to Sign In Enterprise. If the connection fails, ensure your and try again.  This section guides you through the steps to configure the Azure AD provisioning 1. Select **Save**. -1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Sign In Enterprise Host Provisioning**. +1. 
Under the **Mappings** section, select **Provision Azure Active Directory Users**. 1. Review the user attributes that are synchronized from Azure AD to Sign In Enterprise Host Provisioning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Sign In Enterprise Host Provisioning for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Sign In Enterprise Host Provisioning API supports filtering users based on that attribute. Select the **Save** button to commit any changes. This section guides you through the steps to configure the Azure AD provisioning |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|| |emails[type eq "other"].value|String|| -1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Sign In Enterprise Host Provisioning**. +1. Under the **Mappings** section, select **Provision Azure Active Directory Groups**. -1. Review the group attributes that are synchronized from Azure AD to Sign In Enterprise Host Provisioning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Sign In Enterprise Host Provisioning for update operations. Select the **Save** button to commit any changes. +1. Review the group attributes that are synchronized from Azure AD to Sign In Enterprise in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Sign In Enterprise for update operations. Select the **Save** button to commit any changes. |Attribute|Type|Supported for filtering|Required by Sign In Enterprise Host Provisioning| ||||| This section guides you through the steps to configure the Azure AD provisioning  -1. Define the users and/or groups that you would like to provision to Sign In Enterprise Host Provisioning by choosing the desired values in **Scope** in the **Settings** section. +1. Define the users and/or groups that you would like to provision to Sign In Enterprise by choosing the desired values in **Scope** in the **Settings** section.  |
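Before running the first provisioning cycle, you can confirm that the SCIM URL and Token copied from Sign In Enterprise in Step 2 are usable. The sketch below is a rough equivalent of what **Test Connection** validates; it assumes the `requests` package, and `scim_base_url` and `scim_token` are placeholders for the values from the **SCIM Host Provisioning** section.

```python
import requests

# Placeholders for the URL and Token copied from Preferences > General > SCIM Host Provisioning.
scim_base_url = "https://<your-sign-in-enterprise-scim-endpoint>"
scim_token = "<token>"

response = requests.get(
    f"{scim_base_url.rstrip('/')}/Users",
    headers={
        "Authorization": f"Bearer {scim_token}",
        "Accept": "application/scim+json",
    },
    params={"count": 1},   # ask for a single resource; this is only a connectivity check
    timeout=30,
)

print(response.status_code)                          # 200 indicates the endpoint accepted the token
print(response.json().get("totalResults", "n/a"))    # SCIM list responses report a total count
```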
active-directory | Soc Sst Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/soc-sst-tutorial.md | + + Title: Azure Active Directory SSO integration with SOC SST +description: Learn how to configure single sign-on between Azure Active Directory and SOC SST. ++++++++ Last updated : 04/27/2023+++++# Azure Active Directory SSO integration with SOC SST ++In this article, you learn how to integrate SOC SST with Azure Active Directory (Azure AD). The SOC complies with the mandatory legal documentation, which can be managed within the software by public and private companies that have registered employees (CLT). When you integrate SOC SST with Azure AD, you can: ++* Control in Azure AD who has access to SOC SST. +* Enable your users to be automatically signed-in to SOC SST with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You configure and test Azure AD single sign-on for SOC SST in a test environment. SOC SST supports **SP** and **IDP** initiated single sign-on. ++## Prerequisites ++To integrate Azure Active Directory with SOC SST, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* SOC SST single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the SOC SST application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add SOC SST from the Azure AD gallery ++Add SOC SST from the Azure AD application gallery to configure single sign-on with SOC SST. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **SOC SST** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using the following pattern: + `<InstanceName>.soc.com.br` ++ b. 
In the **Reply URL** textbox, type a URL using the following pattern: + `https://sistema.soc.com.br/WebSoc/sso/<CustomerID>/saml/finalize.action` ++1. If you wish to configure the application in **SP** initiated mode, then perform the following step: ++ In the **Sign on URL** textbox, type a URL using the following pattern: + `https://sistema.soc.com.br/WebSoc/sp/<CustomerID>/login` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [SOC SST Client support team](mailto:suporte@soc.com.br) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up SOC SST** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure SOC SST SSO ++To configure single sign-on on **SOC SST** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to [SOC SST support team](mailto:suporte@soc.com.br). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create SOC SST test user ++In this section, you create a user called Britta Simon at SOC SST. Work with [SOC SST support team](mailto:suporte@soc.com.br) to add the users in the SOC SST platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++#### SP initiated: ++* Click on **Test this application** in Azure portal. This will redirect to SOC SST Sign-on URL where you can initiate the login flow. ++* Go to SOC SST Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in Azure portal and you should be automatically signed in to the SOC SST for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the SOC SST tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the SOC SST for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure SOC SST you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
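For readers curious what the SP-initiated flow in the test section exchanges on the wire, the sketch below builds a SAML HTTP-Redirect authentication request the way a service provider such as SOC SST would. It is illustrative only: `idp_sso_url` stands in for the Login URL from the **Set up SOC SST** section, and the issuer uses the Identifier pattern documented above.

```python
import base64
import datetime
import urllib.parse
import uuid
import zlib

# Placeholders: the IdP login URL from "Set up SOC SST" and the Identifier pattern above.
idp_sso_url = "https://login.microsoftonline.com/<tenant-id>/saml2"
issuer = "<InstanceName>.soc.com.br"

authn_request = (
    '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    f'ID="_{uuid.uuid4()}" Version="2.0" '
    f'IssueInstant="{datetime.datetime.utcnow().replace(microsecond=0).isoformat()}Z">'
    '<saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
    f"{issuer}</saml:Issuer></samlp:AuthnRequest>"
)

# HTTP-Redirect binding: raw DEFLATE, then Base64, then URL-encode as the SAMLRequest parameter.
compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
deflated = compressor.compress(authn_request.encode()) + compressor.flush()
saml_request = urllib.parse.quote_plus(base64.b64encode(deflated).decode())

print(f"{idp_sso_url}?SAMLRequest={saml_request}")
```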
active-directory | Veda Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/veda-cloud-tutorial.md | + + Title: Azure Active Directory SSO integration with VEDA Cloud +description: Learn how to configure single sign-on between Azure Active Directory and VEDA Cloud. ++++++++ Last updated : 04/27/2023+++++# Azure Active Directory SSO integration with VEDA Cloud ++In this article, you learn how to integrate VEDA Cloud with Azure Active Directory (Azure AD). This application enables Azure AD to act as SAML IdP for authenticating users to your VEDA HR Cloud Solutions. When you integrate VEDA Cloud with Azure AD, you can: ++* Control in Azure AD who has access to VEDA Cloud. +* Enable your users to be automatically signed-in to VEDA Cloud with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You configure and test Azure AD single sign-on for VEDA Cloud in a test environment. VEDA Cloud supports **SP** initiated single sign-on. ++## Prerequisites ++To integrate Azure Active Directory with VEDA Cloud, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* VEDA Cloud single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the VEDA Cloud application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add VEDA Cloud from the Azure AD gallery ++Add VEDA Cloud from the Azure AD application gallery to configure single sign-on with VEDA Cloud. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **VEDA Cloud** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. 
In the **Identifier** textbox, type a URL using one of the following patterns: ++ | **Identifier** | + |--| + | `https://vedacustomers.b2clogin.com/<CUSTOMER>/B2C_1A_VEDASIGNIN` | + | `https://vedacustomersprod.b2clogin.com/<CUSTOMER>/B2C_1A_VEDASIGNIN` | + | `https://login.veda.net/<ID>/B2C_1A_VEDASIGNIN` | ++ b. In the **Reply URL** textbox, type a URL using one of the following patterns: ++ | **Reply URL** | + | -| + | `https://vedacustomers.b2clogin.com/<CUSTOMER>/B2C_1A_VEDASIGNIN/samlp/sso/assertionconsumer` | + | `https://vedacustomersprod.b2clogin.com/<CUSTOMER>/B2C_1A_VEDASIGNIN/samlp/sso/assertionconsumer` | + | `https://login.veda.net/<ID>/B2C_1A_VEDASIGNIN/samlp/sso/assertionconsumer` | ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<DOMAIN>.veda.net/<INSTANCE>` + + > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [VEDA Cloud Client support team](mailto:peoplemanagement@veda.net) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. VEDA Cloud application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ++  ++1. In addition to above, VEDA Cloud application expects few more attributes to be passed back in SAML response, which are shown. These attributes are also pre populated but you can review them as per your requirements. ++ | Name | Source Attribute| + | | | + | displayname | user.displayname | ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. ++  ++## Configure VEDA Cloud SSO ++To configure single sign-on on **VEDA Cloud** side, you need to send the **App Federation Metadata Url** to [VEDA Cloud support team](mailto:peoplemanagement@veda.net). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create VEDA Cloud test user ++In this section, you create a user called Britta Simon at VEDA Cloud. Work with [VEDA Cloud support team](mailto:peoplemanagement@veda.net) to add the users in the VEDA Cloud platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to VEDA Cloud Sign-on URL where you can initiate the login flow. ++* Go to VEDA Cloud Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the VEDA Cloud tile in the My Apps, this will redirect to VEDA Cloud Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure VEDA Cloud you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. 
[Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
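Because VEDA Cloud is configured from the **App Federation Metadata Url** rather than a downloaded file, it is worth confirming the URL resolves before sending it to the support team. A small sketch, with the URL shown as a placeholder:

```python
import requests
import xml.etree.ElementTree as ET

# Placeholder: paste the App Federation Metadata Url copied in the step above.
metadata_url = "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"

response = requests.get(metadata_url, timeout=30)
response.raise_for_status()                      # any HTTP error means the URL is not usable yet

root = ET.fromstring(response.content)
print("entityID:", root.get("entityID"))         # should identify your Azure AD tenant
```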
active-directory | Memo 22 09 Other Areas Zero Trust | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-other-areas-zero-trust.md | -The other articles in this guidance set address the identity pillar of Zero Trust principles, as described in the US federal government's Office of Management and Budget (OMB) [memorandum 22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). This article covers areas of the Zero Trust maturity model that are beyond the identity pillar. --This article addresses the following cross-cutting themes: --* Visibility and analytics +The other articles in this guidance address the identity pillar of Zero Trust principles, as described in the US Office of Management and Budget (OMB) [M 22-09 Memorandum for the Heads of Executive Departments and Agencies](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). This article covers Zero Trust maturity model areas beyond the identity pillar, and it addresses the following themes: +* Visibility +* Analytics * Automation and orchestration- * Governance ## Visibility -It's important to monitor your Azure Active Directory (Azure AD) tenant. You must adopt an "assume breach" mindset and meet compliance standards in memorandum 22-09 and [memorandum 21-31](https://www.whitehouse.gov/wp-content/uploads/2021/08/M-21-31-Improving-the-Federal-Governments-Investigative-and-Remediation-Capabilities-Related-to-Cybersecurity-Incidents.pdf). Three primary log types are used for security analysis and ingestion: --* [Azure audit logs](../reports-monitoring/concept-audit-logs.md). Used for monitoring operational activities of the directory itself, such as creating, deleting, updating objects like users or groups. Also used for making changes to configurations of Azure AD, like modifications to a conditional access policy. --* [Azure AD sign-in logs](../reports-monitoring/concept-all-sign-ins.md). Used for monitoring all sign-in activities associated with users, applications, and service principals. The sign-in logs contain specific categories of sign-ins for easy differentiation: +It's important to monitor your Azure Active Directory (Azure AD) tenant. Assume a breach mindset and meet compliance standards in memorandum 22-09 and [Memorandum 21-31](https://www.whitehouse.gov/wp-content/uploads/2021/08/M-21-31-Improving-the-Federal-Governments-Investigative-and-Remediation-Capabilities-Related-to-Cybersecurity-Incidents.pdf). Three primary log types are used for security analysis and ingestion: - * Interactive sign-ins: Shows user successful and failed sign-ins for failures, the policies that might have been applied, and other relevant metadata. +* **Azure audit logs** to monitor operational activities of the directory, such as creating, deleting, updating objects like users or groups + * Use also to make changes to Azure AD configurations, like modifications to a Conditional Access policy + * See, [Audit logs in Azure AD](../reports-monitoring/concept-audit-logs.md) +* **Provisioning logs** have information about objects synchronized from Azure AD to applications like Service Now with Microsoft Identity Manager + * See, [Provisioning logs in Azure Active Directory](../reports-monitoring/concept-provisioning-logs.md) +* **Azure AD sign-in logs** to monitor sign-in activities associated with users, applications, and service principals. 
+ * Sign-in logs have categories for differentiation + * Interactive sign-ins show successful and failed sign-ins, policies applied, and other metadata + * Non-interactive user sign-ins show no interaction during sign-in: clients signing in on behalf of the user, such as mobile applications or email clients + * Service principal sign-ins show service principal or application sign-in: services or applications accessing services, applications, or the Azure AD directory through the REST API + * Managed identities for Azure resource sign-in: Azure resources or applications accessing Azure resources, such as a web application service authenticating to an Azure SQL back end. + * See, [Sign-in logs in Azure Active Directory (preview)](../reports-monitoring/concept-all-sign-ins.md) - * Non-interactive user sign-ins: Shows sign-ins where a user did not perform an interaction during sign-in. These sign-ins are typically clients signing in on behalf of the user, such as mobile applications or email clients. +In Azure AD free tenants, log entries are stored for seven days. Tenants with an Azure AD premium license retain log entries for 30 days. - * Service principal sign-ins: Shows sign-ins by service principals or applications. Typically, these are headless and done by services or applications that are accessing other services, applications, or the Azure AD directory itself through the REST API. +Ensure a security information and event management (SIEM) tool ingests logs. Use sign-in and audit events to correlate with application, infrastructure, data, device, and network logs. - * Managed identities for Azure resource sign-ins: Shows sign-ins from resources with Azure managed identities. Typically, these are Azure resources or applications that are accessing other Azure resources, such as a web application service authenticating to an Azure SQL back end. +We recommend you integrate Azure AD logs with Microsoft Sentinel. Configure a connector to ingest Azure AD tenant logs. -* [Provisioning logs](../reports-monitoring/concept-provisioning-logs.md). Shows information about objects synchronized from Azure AD to applications like Service Now by using Microsoft Identity Manager. +Learn more: -Log entries are stored for 7 days in Azure AD free tenants. Tenants with an Azure AD premium license retain log entries for 30 days. +* [What is Microsoft Sentinel?](../../sentinel/overview.md) +* [Connect Azure AD to Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md) -It's important to ensure that your logs are ingested by a security information and event management (SIEM) tool. Using a SIEM tool allows sign-in and audit events to be correlated with application, infrastructure, data, device, and network logs for a holistic view of your systems. +For the Azure AD tenant, you can configure the diagnostic settings to send the data to an Azure Storage account, Azure Event Hubs, or a Log Analytics workspace. Use these storage options to integrate other SIEM tools to collect data. -We recommend that you integrate your Azure AD logs with [Microsoft Sentinel](../../sentinel/overview.md) by configuring a connector to ingest your Azure AD tenant logs. For more information, see [Connect Azure Active Directory to Microsoft Sentinel](../../sentinel/connect-azure-active-directory.md). +Learn more: -You can also configure the [diagnostic settings](../reports-monitoring/overview-monitoring.md) on your Azure AD tenant to send the data to an Azure Storage account, Azure Event Hubs, or a Log Analytics workspace. 
These storage options allow you to integrate other SIEM tools to collect the data. For more information, see [Plan an Azure Active Directory reporting and monitoring deployment](../reports-monitoring/plan-monitoring-and-reporting.md). +* [What is Azure AD monitoring?](../reports-monitoring/overview-monitoring.md) +* [Azure AD reporting and monitoring deployment dependencies](../reports-monitoring/plan-monitoring-and-reporting.md) ## Analytics You can use analytics in the following tools to aggregate information from Azure AD and show trends in your security posture in comparison to your baseline. You can also use analytics to assess and look for patterns or threats across Azure AD. -* [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) actively analyzes sign-ins and other telemetry sources for risky behavior. Identity Protection assigns a risk score to a sign-in event. You can prevent sign-ins, or force a step-up authentication, to access a resource or application based on risk score. --* [Microsoft Sentinel](../../sentinel/get-visibility.md) offers the following ways to analyze information from Azure AD: -- * Microsoft Sentinel has [User and Entity Behavior Analytics (UEBA)](../../sentinel/identify-threats-with-entity-behavior-analytics.md). UEBA delivers high-fidelity, actionable intelligence on potential threats that involve user, host, IP address, and application entities. This intelligence enhances events across the enterprise to help detect anomalous behavior in users and systems. -- * You can use specific analytics rule templates that hunt for threats and alerts found in your Azure AD logs. Your security or operation analyst can then triage and remediate threats. -- * Microsoft Sentinel has [workbooks](../../sentinel/top-workbooks.md) that help you visualize multiple Azure AD data sources. These workbooks can show aggregate sign-ins by country, or applications that have the most sign-ins. You can also create or modify existing workbooks to view information or threats in a dashboard to gain insights. --* [Azure AD usage and insights reports](../reports-monitoring/concept-usage-insights-report.md) show information similar to Azure Sentinel workbooks, including which applications have the highest usage or sign-in trends over a time period. The reports are useful for understanding aggregate trends in your enterprise that might indicate an attack or other events. +* **Azure AD Identity Protection** analyzes sign-ins and other telemetry sources for risky behavior + * Identity Protection assigns a risk score to sign-in events + * Prevent sign-ins, or force a step-up authentication, to access a resource or application based on risk score + * See, [What is Identity Protection?](../identity-protection/overview-identity-protection.md) +* **Azure AD usage and insights reports** have information similar to Azure Sentinel workbooks, including applications with highest usage or sign-in trends. + * Use reports to understand aggregate trends that might indicate an attack or other events + * See, [Usage and insights in Azure AD](../reports-monitoring/concept-usage-insights-report.md) +* **Microsoft Sentinel** analyze information from Azure AD: + * Microsoft Sentinel User and Entity Behavior Analytics (UEBA) delivers intelligence on potential threats from user, host, IP address, and application entities. + * Use analytics rule templates to hunt for threats and alerts in your Azure AD logs. Your security or operation analyst can triage and remediate threats. 
+ * Microsoft Sentinel workbooks help visualize Azure AD data sources. See sign-ins by country, region, or applications. + * See, [Commonly used Microsoft Sentinel workbooks](../../sentinel/top-workbooks.md) + * See, [Visualize collected data](../../sentinel/get-visibility.md) + * See, [Identify advanced threats with UEBA in Microsoft Sentinel](../../sentinel/identify-threats-with-entity-behavior-analytics.md) ## Automation and orchestration -Automation is an important aspect of Zero Trust, particularly in remediation of alerts that occur because of threats or security changes in your environment. In Azure AD, automation integrations are possible to help remediate alerts or perform actions that can improve your security posture. Automations are based on information received from monitoring and analytics. +Automation in Zero Trust helps remediate alerts due to threats or security changes. In Azure AD, automation integrations help clarify actions to improve your security posture. Automation is based on information received from monitoring and analytics. ++Use Microsoft Graph API REST calls to access Azure AD programmatically. This access requires an Azure AD identity with authorizations and scope. With the Graph API, integrate other tools. -[Microsoft Graph API](/graph/overview) REST calls are the most common way to programmatically access Azure AD. This API-based access requires an Azure AD identity with the necessary authorizations and scope. With the Graph API, you can integrate Microsoft's and other tools. Follow the principles outlined in this article when you're performing the integration. +We recommend you set up an Azure function or an Azure logic app to use a system-assigned managed identity. The logic app or function has steps or code to automate actions. Assign permissions to the managed identity to grant the service principal directory permissions to perform actions. Grant managed identities minimum rights. -We recommend that you set up an Azure function or an Azure logic app to use a [system-assigned managed identity](../managed-identities-azure-resources/overview.md). Your logic app or function contains the steps or code necessary to automate the desired actions. You assign permissions to the managed identity to grant the service principal the necessary directory permissions to perform the required actions. Grant managed identities only the minimum rights necessary. +Learn more: [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) -Another automation integration point is [Azure AD PowerShell](/powershell/azure/active-directory/overview) modules. PowerShell is a useful automation tool for administrators and IT integrators who are performing common tasks or configurations in Azure AD. PowerShell can also be incorporated into Azure functions or Azure Automation runbooks. +Another automation integration point is Azure AD PowerShell modules. Use PowerShell to perform common tasks or configurations in Azure AD, or incorporate into Azure functions or Azure Automation runbooks. ## Governance -It's important that you understand and document clear processes for how you intend to operate your Azure AD environment. Azure AD has features that allow for governance-like functionality to be applied to scopes within Azure AD. Consider the following guidance to help with governance via Azure AD: +Document your processes for operating the Azure AD environment. Use Azure AD features for governance functionality applied to scopes in Azure AD. 
-* [Azure Active Directory governance operations reference guide](../fundamentals/active-directory-ops-guide-govern.md). -* [Azure Active Directory security operations guide](../fundamentals/security-operations-introduction.md). It can help you secure your operations and understand how security and governance overlap. +Learn more: -After you understand operational governance, you can use [governance features](../governance/identity-governance-overview.md) to implement portions of your governance controls. These include features mentioned in [Meet authorization requirements of memorandum 22-09](memo-22-09-authorization.md). +* [Azure AD governance operations reference guide](../fundamentals/active-directory-ops-guide-govern.md) +* [Azure AD security operations guide](../fundamentals/security-operations-introduction.md) +* [What is Microsoft Entra Identity Governance?](../governance/identity-governance-overview.md) +* [Meet authorization requirements of memorandum 22-09](memo-22-09-authorization.md). ## Next steps -The following articles are part of this documentation set: --[Meet identity requirements of memorandum 22-09](memo-22-09-meet-identity-requirements.md) --[Enterprise-wide identity management system](memo-22-09-enterprise-wide-identity-management-system.md) --[Multifactor authentication](memo-22-09-multi-factor-authentication.md) --[Authorization](memo-22-09-authorization.md) --For more information about Zero Trust, see: --[Securing identity with Zero Trust](/security/zero-trust/deploy/identity) +* [Meet identity requirements of memorandum 22-09 with Azure AD](memo-22-09-meet-identity-requirements.md) +* [Enterprise-wide identity management system](memo-22-09-enterprise-wide-identity-management-system.md) +* [Meet multifactor authentication requirements of memorandum 22-09](memo-22-09-multi-factor-authentication.md) +* [Meet authorization requirements of memorandum 22-09](memo-22-09-authorization.md) +* [Securing identity with Zero Trust](/security/zero-trust/deploy/identity) |
active-directory | Nist Authenticator Assurance Level 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-1.md | To achieve AAL1, you can use any NIST single-factor or multifactor [permitted au |Azure AD authentication method|NIST authenticator type | | - | - | |Password |Memorized Secret |-|Phone (SMS): Not recommended | Out-of-band | -|Microsoft Authenticator App for iOS (Passwordless) <br> Microsoft Authenticator App for Android (Passwordless)|Multi-factor Out-of-band | -|Single-factor certificate | Single-factor crypto software | +|Phone (SMS): Not recommended | Single-factor out-of-band | +|Microsoft Authenticator App (Passwordless)|Multi-factor out-of-band | +|Single-factor software certificate | Single-factor crypto software | |Multi-factor Software Certificate (PIN Protected) <br> Windows Hello for Business with software TPM <br> | Multi-factor crypto software | -|Windows Hello for Business with hardware TPM <br> Hardware protected certificate (smartcard/security key/TPM) <br> FIDO 2 security key | Multi-factor crypto hardware +|Hardware protected certificate (smartcard/security key/TPM) <br> FIDO 2 security key <br> Windows Hello for Business with hardware TPM <br> | Multi-factor crypto hardware > [!TIP] |
active-directory | Nist Authenticator Assurance Level 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-2.md | -The National Institute of Standards and Technology (NIST) develops technical requirements for US federal agencies implementing identity solutions. Organizations working with federal agencies must meet these requirements. +The National Institute of Standards and Technology (NIST) develops technical requirements for US federal agencies implementing identity solutions. Organizations working with federal agencies must meet these requirements. Before starting authenticator assurance level 2 (AAL2), you can see the following resources: Before starting authenticator assurance level 2 (AAL2), you can see the followin The following table has authenticator types permitted for AAL2: -| Azure AD authentication method| NIST authenticator type | +| Azure AD authentication method| NIST authenticator type | | - | - |-| **Recommended methods** | | -| Microsoft Authenticator app for iOS (passwordless) <br> Windows Hello for Business with software Trusted Platform Module (TPM) | Multi-factor crypto software | -| FIDO 2 security key <br> Microsoft Authenticator app for Android (passwordless) <br> Windows Hello for Business with hardware TPM <br>Smartcard (Active Directory Federation Services) | Multi-factor crypto hardware | +| **Recommended methods** | | +| Multi-factor Software Certificate (PIN Protected) <br> Windows Hello for Business with software Trusted Platform Module (TPM)| Multi-factor crypto software | +| Hardware protected certificate (smartcard/security key/TPM) <br> FIDO 2 security key <br> Windows Hello for Business with hardware TPM | Multi-factor crypto hardware | +|Microsoft Authenticator app (Passwordless) | Multi-factor out-of-band | **Additional methods** | |-| Password and phone (SMS) | Memorized secret and out-of-band | -| Password and Microsoft Authenticator app one-time password (OTP) <br> Password and single-factor OTP | Memorized secret and single-factor OTP| -| Password and Azure AD joined with software TPM <br> Password and compliant mobile device <br> Password and Hybrid Azure AD Joined with software TPM <br> Password and Microsoft Authenticator app (Notification) | Memorized secret and single-factor crypto software | -| Password and Azure AD joined with hardware TPM <br> Password and Hybrid Azure AD joined with hardware TPM | Memorized secret and single-factor crypto hardware | +| Password <br> **AND** <br>- Microsoft Authenticator app (Push Notification) <br>- **OR** <br>- Phone (SMS) | Memorized secret <br>**AND**<br> Single-factor out-of-band | +| Password <br> **AND** <br>- OATH hardware tokens (preview) <br>- **OR**<br>- Microsoft Authenticator app (OTP)<br>- **OR**<br>- OATH software tokens | Memorized secret <br>**AND** <br>Single-factor OTP| +| Password <br>**AND** <br>- Single-factor software certificate <br>- **OR**<br>- Azure AD joined with software TPM <br>- **OR**<br>- Hybrid Azure AD joined with software TPM <br>- **OR**<br>- Compliant mobile device | Memorized secret <br>**AND**<br> Single-factor crypto software | +| Password <br>**AND**<br>- Azure AD joined with hardware TPM <br>- **OR**<br>- Hybrid Azure AD joined with hardware TPM| Memorized secret <br>**AND**<br>Single-factor crypto hardware | > [!NOTE]-> In Conditional Access policy, the Authenticator is verifier impersonation resistance, if you require a device to be compliant or Hybrid Azure AD joined. 
+> Today, Microsoft Authenticator by itself is not phishing resistant. To gain protection from external phishing threats when using Microsoft Authenticator you must additionally configure conditional access policy requiring a managed device. ### AAL2 recommendations -For AAL2, use multi-factor cryptographic hardware or software authenticators. Passwordless authentication eliminates the greatest attack surface (the password), and offers users a streamlined method to authenticate. +For AAL2, use multi-factor cryptographic hardware or software authenticators. Passwordless authentication eliminates the greatest attack surface (the password), and offers users a streamlined method to authenticate. For guidance on selecting a passwordless authentication method, see [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md). See also, [Windows Hello for Business deployment guide](/windows/security/identity-protection/hello-for-business/hello-deployment-guide) Government agency cryptographic authenticators are validated for FIPS 140 Level * Windows Hello for Business with software or with hardware TPM -* Smartcard (Active Directory Federation Services) +* Certificate stored in software or hardware (smartcard/security key/TPM) -Although Microsoft Authenticator app (in notification, OTP, and passwordless modes) uses FIPS 140-approved cryptography, it's not validated for FIPS 140 Level 1. +Microsoft Authenticator app (Push Notification/OTP/passwordless) on iOS uses FIPS 140 level 1 validated cryptographic module and is FIPS 140 compliant. While Microsoft Authenticator app on Android (Push Notification/OTP/passwordless) uses FIPS 140 approved cryptography, it is not FIPS compliant. -FIDO 2 security key providers are in various stages of FIPS certification. We recommend you review the list of [supported FIDO 2 key vendors](../authentication/concept-authentication-passwordless.md#fido2-security-key-providers). Consult with your provider for current FIPS validation status. +For OATH hardware tokens and smartcards we recommend you consult with your provider for current FIPS validation status. +FIDO 2 security key providers are in various stages of FIPS certification. We recommend you review the list of [supported FIDO 2 key vendors](../authentication/concept-authentication-passwordless.md#fido2-security-key-providers). Consult with your provider for current FIPS validation status. -## Reauthentication +## Reauthentication For AAL2, the NIST requirement is reauthentication every 12 hours, regardless of user activity. Reauthentication is required after a period of inactivity of 30 minutes or longer. Because the session secret is something you have, presenting something you know, or are, is required. -To meet the requirement for reauthentication, regardless of user activity, Microsoft recommends configuring [user sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md) to 12 hours. +To meet the requirement for reauthentication, regardless of user activity, Microsoft recommends configuring [user sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md) to 12 hours. With NIST you can use compensating controls to confirm subscriber presence: With NIST you can use compensating controls to confirm subscriber presence: * Time out regardless of activity: Run a scheduled task (Configuration Manager, GPO, or Intune) to lock the machine after 12 hours, regardless of activity. 
-## Man-in-the-middle resistance +## Man-in-the-middle resistance Communications between the claimant and Azure AD are over an authenticated, protected channel. This configuration provides resistance to man-in-the-middle (MitM) attacks and satisfies the MitM resistance requirements for AAL1, AAL2, and AAL3. Communications between the claimant and Azure AD are over an authenticated, prot Azure AD authentication methods at AAL2 use nonce or challenges. The methods resist replay attacks because the verifier detects replayed authentication transactions. Such transactions won't contain needed nonce or timeliness data. -## Next steps +## Next steps [NIST overview](nist-overview.md) |
active-directory | Nist Authenticator Assurance Level 3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-3.md | -Use the information in this article for National Institute of Standards and Technology (NIST) authenticator assurance level 3 (AAL3). +Use the information in this article for National Institute of Standards and Technology (NIST) authenticator assurance level 3 (AAL3). Before obtaining AAL2, you can review the following resources: Before obtaining AAL2, you can review the following resources: ## Permitted authenticator types -Use Microsoft authentication methods to meet required NIST authenticator types. +Use Microsoft authentication methods to meet required NIST authenticator types. | Azure AD authentication methods| NIST authenticator type | | - | -| | **Recommended methods**| |-| FIDO 2 security key, or <br> Smart card (Active Directory Federation Services), or <br> Windows Hello for Business with hardware TPM| Multi-factor cryptographic hardware | -| **Additional methods**| | -| Password and <br> Hybrid Azure AD joined with hardware TPM or, <br> Azure AD joined with hardware TPM)| Memorized secret and <br> single-factor cryptographic hardware | -| Password and <br> single-factor one-time password OTP hardware, from an OTP manufacturer and <br> Hybrid Azure AD joined with software TPM or <br> Azure AD joined with software TPM or <br> [Compliant managed device](/mem/intune/protect/device-compliance-get-started))| Memorized secret and <br> single-factor OTP hardware and <br> single-factor cryptographic software | +| Hardware protected certificate (smartcard/security key/TPM) <br> FIDO 2 security key<br>Windows Hello for Business with hardware TPM| Multi-factor cryptographic hardware | +| **Additional methods**|| +|Password<br>**AND**<br>- Azure AD joined with hardware TPM <br>- **OR**<br>- Hybrid Azure AD joined with hardware TPM|Memorized secret <br>**AND**<br>Single-factor cryptographic hardware| +|Password<br>**AND**<br>OATH hardware tokens (Preview) <br>**AND**<br>- Single-factor software certificate<br>- **OR**<br>- Hybrid Azure AD Joined or compliant device with software TPM |Memorized secret<br>**AND**<br>Single-factor OTP hardware <br>**AND**<br>Single-factor cryptographic software| -### Recommendations +### Recommendations -For AAL3, we recommend using a multi-factor cryptographic hardware authenticator. Passwordless authentication eliminates the greatest attack surface, the password. Users have a streamlined authentication method. If your organization is cloud based, we recommend you use FIDO 2 security keys. --> [!NOTE] -> Windows Hello for Business is not validated at the required FIPS 140 Security Level. Federal customers should conduct risk assessments and evaluation before accepting this service as AAL3. +For AAL3, we recommend using a multi-factor cryptographic hardware authenticator that provides passwordless authentication eliminating the greatest attack surface, the password. For guidance, see [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md). See also [Windows Hello for Business deployment guide](/windows/security/identity-protection/hello-for-business/hello-deployment-guide). Azure AD uses the Windows FIPS 140 Level 1 overall validated cryptographic modul ### Authenticator requirements -Single-factor and multi-factor cryptographic hardware authenticator requirements. 
+Single-factor and multi-factor cryptographic hardware authenticator requirements. #### Single-factor cryptographic hardware Authenticators are required to be: * FIPS 140 Level 3 Physical Security, or higher -Azure AD joined and Hybrid Azure AD joined devices meet this requirement when: +Azure AD joined and Hybrid Azure AD joined devices meet this requirement when: * You run [Windows in a FIPS-140 approved mode](/windows/security/threat-protection/fips-140-validation) Consult your mobile device vendor to learn about their adherence with FIPS 140. #### Multi-factor cryptographic hardware -Authenticators are required to be: +Authenticators are required to be: * FIPS 140 Level 2 Overall, or higher FIDO 2 security keys, smart cards, and Windows Hello for Business can help you m **Windows Hello for Business** -FIPS 140 requires the cryptographic boundary, including software, firmware, and hardware, to be in scope for evaluation. Windows operating systems can be paired with thousands of combinations of hardware. Microsoft can't maintain FIPS certifications for each combination. +FIPS 140 requires the cryptographic boundary, including software, firmware, and hardware, to be in scope for evaluation. Windows operating systems can be paired with thousands of these combinations. As such, it is not feasible for Microsoft to have Windows Hello for Business validated at FIPS 140 Security Level 2. Federal customers should conduct risk assessments and evaluate each of the following component certifications as part of their risk acceptance before accepting this service as AAL3: -Evaluate the following component certifications in your risk assessment of using Windows Hello for Business as an AAL3 authenticator: +* **Windows 10 and Windows Server** use the [US Government Approved Protection Profile for General Purpose Operating Systems Version 4.2.1](https://www.niap-ccevs.org/Profile/Info.cfm?PPID=442&id=442) from the National Information Assurance Partnership (NIAP). This organization oversees a national program to evaluate commercial off-the-shelf (COTS) information technology products for conformance with the international Common Criteria. -* **Windows 10 and Windows Server** use the [US Government Approved Protection Profile for General Purpose Operating Systems Version 4.2.1](https://www.niap-ccevs.org/Profile/Info.cfm?PPID=442&id=442) from the National Information Assurance Partnership (NIAP). This organization oversees a national program to evaluate commercial off-the-shelf (COTS) information technology products for conformance with the international Common Criteria. +* **Windows Cryptographic Library** [has FIPS Level 1 Overall in the NIST Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program/Certificate/3544) (CMVP), a joint effort between NIST and the Canadian Center for Cyber Security. This organization validates cryptographic modules against FIPS standards. -* **Windows Cryptographic Library** [has FIPS Level 1 Overall in the NIST Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program/Certificate/3544) (CMVP), a joint effort between NIST and the Canadian Center for Cyber Security. This organization validates cryptographic modules against FIPS standards. --* Choose a **Trusted Platform Module (TPM)** that's FIPS 140 Level 2 Overall, and FIPS 140 Level 3 Physical Security. Your organization ensures hardware TPM meets the AAL level requirements you want. 
+* Choose a **Trusted Platform Module (TPM)** that's FIPS 140 Level 2 Overall, and FIPS 140 Level 3 Physical Security. Your organization ensures hardware TPM meets the AAL level requirements you want. To determine the TPMs that meet current standards, go to [NIST Computer Security Resource Center Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program/validated-modules/Search). In the **Module Name** box, enter **Trusted Platform Module** for a list of hardware TPMs that meet standards. -## Reauthentication +## Reauthentication For AAL3, NIST requirements are reauthentication every 12 hours, regardless of user activity. Reauthentication is required after a period of inactivity 15 minutes or longer. Presenting both factors is required. -To meet the requirement for reauthentication, regardless of user activity, Microsoft recommends configuring [user sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md) to 12 hours. +To meet the requirement for reauthentication, regardless of user activity, Microsoft recommends configuring [user sign-in frequency](../conditional-access/howto-conditional-access-session-lifetime.md) to 12 hours. -Use NIST for compensating controls to confirm subscriber presence: +NIST allows for compensating controls to confirm subscriber presence: * Set a session inactivity time out of 15 minutes: Lock the device at the OS level by using Microsoft Configuration Manager, Group Policy Object (GPO), or Intune. For the subscriber to unlock it, require local authentication. * Set timeout, regardless of activity, by running a scheduled task using Configuration Manager, GPO, or Intune. Lock the machine after 12 hours, regardless of activity. -## Man-in-the-middle resistance +## Man-in-the-middle resistance Communications between the claimant and Azure AD are over an authenticated, protected channel for resistance to man-in-the-middle (MitM) attacks. This configuration satisfies the MitM resistance requirements for AAL1, AAL2, and AAL3. Azure AD authentication methods that meet AAL3 use cryptographic authenticators All Azure AD authentication methods that meet AAL3: -- Use a cryptographic authenticator that requires the verifier store a public key corresponding to a private key held by the authenticator-- Store the expected authenticator output by using FIPS-140 validated hash algorithms+* Use a cryptographic authenticator that requires the verifier store a public key corresponding to a private key held by the authenticator +* Store the expected authenticator output by using FIPS-140 validated hash algorithms For more information, see [Azure AD Data Security Considerations](https://aka.ms/AADDataWhitepaper). Azure AD authentication methods that meet AAL3 use nonce or challenges. These me ## Authentication intent -Authentication makes it more difficult for directly connected physical authenticators, like multi-factor cryptographic devices, to be used without the subject's knowledge. For example, by malware on the endpoint. --Use NIST compensating controls for mitigating malware risk. Any Intune-compliant device that runs Windows Defender System Guard and Windows Defender ATP meets this mitigation requirement. +Requiring authentication intent makes it more difficult for directly connected physical authenticators, like multi-factor cryptographic hardware, to be used without the subject's knowledge (for example, by malware on the endpoint). 
Azure AD methods that meet AAL3 require user entry of a PIN or biometric, demonstrating authentication intent. -## Next steps +## Next steps [NIST overview](nist-overview.md) |
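The 12-hour reauthentication guidance above maps to a Conditional Access sign-in frequency control. The following is a minimal sketch of creating such a policy through Microsoft Graph with the Azure CLI, not an excerpt from the article; the policy name, target group ID, and report-only state are placeholder assumptions, and the signed-in account needs permission to manage Conditional Access policies.

```bash
# Sketch: a report-only Conditional Access policy that enforces a 12-hour
# sign-in frequency. All identifiers are placeholders.
az rest --method POST \
  --uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --headers "Content-Type=application/json" \
  --body '{
    "displayName": "AAL3 - 12 hour sign-in frequency (sketch)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "clientAppTypes": ["all"],
      "applications": { "includeApplications": ["All"] },
      "users": { "includeGroups": ["<aal3-users-group-object-id>"] }
    },
    "sessionControls": {
      "signInFrequency": { "isEnabled": true, "type": "hours", "value": 12 }
    }
  }'
```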
active-directory | Nist Authenticator Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-types.md | The authentication process begins when a claimant asserts its control of one of |NIST authenticator type| Azure AD authentication method| | - | - | |Memorized secret <br> (something you know)| Password: Cloud accounts, federated, password hash sync, passthrough authentication|-|Look-up secret <br> (something you have)| None: A look-up secret is data not held in a system| -|Out-of-band <br>(something you have)| Phone (SMS): Not recommended | -|Single-factor one-time password (OTP) <br> (something you have)| Microsoft Authenticator App OTP <br> Single-factor OTP (OTP manufacturers) <sup data-htmlnode="">1</sup>| -|Multi-factor OTP <br> (something you have, know, or are)| Multi-factor OTP (OTP manufacturers) <sup data-htmlnode="">1</sup>| -|Single-factor crypto software <br> (something you have)|Compliant mobile device <br> Microsoft Authenticator App (notification) <br> Hybrid Azure AD joined <sup data-htmlnode="">2</sup> with software TPM <br> Azure AD joined <sup data-htmlnode="">2</sup> with software TPM | +|Look-up secret <br> (something you have)| None| +|Single-factor out-of-band <br>(something you have)| Microsoft Authenticator App (Push Notification) <br> Phone (SMS): Not recommended | +Multi-factor Out-of-band <br> (something you have + something you know/are) | Microsoft Authenticator App (Passwordless) | +|Single-factor one-time password (OTP) <br> (something you have)| Microsoft Authenticator App (OTP) <br> Single-factor Hardware/Software OTP<sup data-htmlnode="">1</sup>| +|Multi-factor OTP <br> (something you have + something you know/are)| Treated as single-factor OTP| +|Single-factor crypto software <br> (something you have)|Single-factor software certificate <br> Azure AD joined <sup data-htmlnode="">2</sup> with software TPM <br> Hybrid Azure AD joined <sup data-htmlnode="">2</sup> with software TPM <br> Compliant mobile device | |Single-factor crypto hardware <br> (something you have) | Azure AD joined <sup data-htmlnode="">2</sup> with hardware TPM <br> Hybrid Azure AD joined <sup data-htmlnode="">2</sup> with hardware TPM|-|Multi-factor crypto software <br> (something you have, know, or are) | Microsoft Authenticator app for iOS (passwordless) <br> Windows Hello for Business with software TPM | -|Multi-factor crypto hardware <br> (something you have, you know, or are) |Microsoft Authenticator app for Android (passwordless) <br> Windows Hello for Business with hardware TPM <br> Smartcard (Federated identity provider) <br> FIDO 2 security key| +|Multi-factor crypto software <br> (something you have + something you know/are) | Multi-factor Software Certificate (PIN Protected) <br> Windows Hello for Business with software TPM | +|Multi-factor crypto hardware <br> (something you have + something you know/are) |Hardware protected certificate (smartcard/security key/TPM) <br> Windows Hello for Business with hardware TPM <br> FIDO 2 security key| <sup data-htmlnode="">1</sup> 30-second or 60-second OATH-TOTP SHA-1 token <sup data-htmlnode="">2</sup> For more information on device join states, see [Azure AD device identity](../devices/index.yml) -## SMS isn't recommended +## Public Switch Telephone Network (PSTN) SMS/Voice are not recommended -SMS text messages meet the NIST standard, but NIST doesn't recommend them. The risks of device swap, SIM changes, number porting, and other behaviors can cause issues. 
If these actions are malicious, they can result in an insecure experience. Although SMS text messages aren't recommended, they're better than using only a password, because they require more effort for hackers. +NIST does not recommend SMS or voice. The risks of device swap, SIM changes, number porting, and other behaviors can cause issues. If these actions are malicious, they can result in an insecure experience. Although SMS/Voice are not recommended, they are better than using only a password, because they require more effort for hackers. -## Next steps +## Next steps [NIST overview](nist-overview.md) |
aks | Azure Disk Csi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md | Title: Use Container Storage Interface (CSI) driver for Azure Disks on Azure Kubernetes Service (AKS) -description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disks in an Azure Kubernetes Service (AKS) cluster. + Title: Use Container Storage Interface (CSI) driver for Azure Disk on Azure Kubernetes Service (AKS) +description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disk in an Azure Kubernetes Service (AKS) cluster. Last updated 04/19/2023 -# Use the Azure Disks Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS) +# Use the Azure Disk Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS) -The Azure Disks Container Storage Interface (CSI) driver is a [CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md)-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure Disks. +The Azure Disks Container Storage Interface (CSI) driver is a [CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md)-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure Disk. The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, AKS now can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes. Using CSI drivers in AKS avoids having to touch the core Kubernetes code and wait for its release cycles. |
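As a rough illustration of the driver described above, the following sketch requests a dynamically provisioned disk through a PersistentVolumeClaim. It assumes the AKS built-in `managed-csi` storage class is present; the claim name and size are arbitrary.

```bash
# Sketch: dynamically provision an Azure Disk through the CSI driver by
# creating a PersistentVolumeClaim against the built-in managed-csi class.
# The claim name and requested size are illustrative.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
EOF

# Confirm that the claim binds to a newly provisioned disk.
kubectl get pvc azure-disk-pvc
```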
aks | Concepts Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-identity.md | Title: Concepts - Access and identity in Azure Kubernetes Services (AKS) description: Learn about access and identity in Azure Kubernetes Service (AKS), including Azure Active Directory integration, Kubernetes role-based access control (Kubernetes RBAC), and roles and bindings. Previously updated : 09/27/2022 Last updated : 04/28/2023 There are two levels of access needed to fully operate an AKS cluster: With Azure RBAC, you can provide your users (or identities) with granular access to AKS resources across one or more subscriptions. For example, you could use the [Azure Kubernetes Service Contributor role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role) to scale and upgrade your cluster. Meanwhile, another user with the [Azure Kubernetes Service Cluster Admin role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) only has permission to pull the Admin `kubeconfig`. -Alternatively, you could give your user the general [Contributor](../role-based-access-control/built-in-roles.md#contributor) role. With the general Contributor role, users can perform the above permissions and every action possible on the AKS resource, except managing permissions. - [Use Azure RBAC to define access to the Kubernetes configuration file in AKS](control-kubeconfig-access.md). ### Azure RBAC for Kubernetes Authorization |
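The split between the Contributor and Cluster Admin roles described above can be illustrated with role assignments scoped to the cluster resource. A minimal sketch, assuming placeholder user, resource group, and cluster names:

```bash
# Sketch: assign the built-in AKS roles discussed above at cluster scope.
# The user, resource group, and cluster names are placeholders.
AKS_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query id -o tsv)

# Can scale and upgrade the cluster, but can't manage permissions.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure Kubernetes Service Contributor Role" \
  --scope "$AKS_ID"

# Can pull the admin kubeconfig for the cluster.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure Kubernetes Service Cluster Admin Role" \
  --scope "$AKS_ID"
```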
aks | Manage Azure Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-azure-rbac.md | az group delete -n myResourceGroup To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azure RBAC, see: -* [Access and identity options for AKS](./concepts-identity.md) +* [Access and identity options for AKS](/azure/aks/concepts-identity) * [What is Azure RBAC?](../role-based-access-control/overview.md) * [Microsoft.ContainerService operations](../role-based-access-control/resource-provider-operations.md#microsoftcontainerservice) To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azur [install-azure-cli]: /cli/azure/install-azure-cli [az-role-definition-create]: /cli/azure/role/definition#az_role_definition_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get-credentials-[kubernetes-rbac]: ./concepts-identity.md#azure-rbac-for-kubernetes-authorization +[kubernetes-rbac]: /azure/aks/concepts-identity#azure-rbac-for-kubernetes-authorization |
aks | Operator Best Practices Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-storage.md | Title: Best practices for storage and backup description: Learn the cluster operator best practices for storage, data encryption, and backups in Azure Kubernetes Service (AKS) Previously updated : 11/30/2022 Last updated : 04/28/2023 This best practices article focuses on storage considerations for cluster operat ## Choose the appropriate storage type > **Best practice guidance**-> +> > Understand the needs of your application to pick the right storage. Use high performance, SSD-backed storage for production workloads. Plan for network-based storage when you need multiple concurrent connections. Applications often require different types and speeds of storage. Determine the most appropriate storage type by asking the following questions. Both Azure Files and Azure Disks are available in Standard and Premium performan - Backed by regular spinning disks (HDDs). - Good for archival or infrequently accessed data. +While the default storage tier for the Azure Disk CSI driver is Premium SSD, your custom StorageClass can use Premium SSD, Standard SSD, or Standard HDD. + Understand the application performance needs and access patterns to choose the appropriate storage tier. For more information about Managed Disks sizes and performance tiers, see [Azure Managed Disks overview][managed-disks]. ### Create and use storage classes to define application needs Work with your application development team to understand their storage capacity For more information about available VM sizes, see [Sizes for Linux virtual machines in Azure][vm-sizes]. -- ## Dynamically provision volumes > **Best practice guidance** |
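To make the tier guidance above concrete, the following is a sketch of a custom StorageClass that pins dynamically provisioned disks to Standard SSD. The class name is arbitrary and the `skuName` value assumes the Azure Disk CSI provisioner; neither comes from the article.

```bash
# Sketch: a custom StorageClass for the Azure Disk CSI driver that uses
# Standard SSD rather than the default Premium SSD. Adjust the name and
# skuName to your workload.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-standard-ssd
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```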
aks | Support Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md | Microsoft doesn't provide technical support for the following examples: * Third-party closed-source software. This software can include security scanning tools and networking devices or software. * Network customizations other than the ones listed in the [AKS documentation](./index.yml). * Custom or 3rd-party CNI plugins used in [BYOCNI](use-byo-cni.md) mode.-+* Stand-by and proactive scenarios. Microsoft Support provides reactive support to help solve active issues in a timely and professional manner. However, standby or proactive support to help you eliminate operational risks, increase availability, and optimize performance isn't covered. [Eligible customers](https://www.microsoft.com/unifiedsupport) can contact their account team to get nominated for the [Azure Event Management service](https://devblogs.microsoft.com/premier-developer/proactively-plan-for-your-critical-event-in-azure-with-enhanced-support-and-engineering-services/). It's a paid service delivered by Microsoft support engineers that includes a proactive solution risk assessment and coverage during the event. ## AKS support coverage for agent nodes |
aks | Use Azure Ad Pod Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md | Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 03/23/2023 Last updated : 04/28/2023 # Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview) Azure Active Directory (Azure AD) pod-managed identities use Kubernetes primitiv > Kubernetes native capabilities to federate with any external identity providers on behalf of the > application. >-> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022. The AKS Managed add-on is still supported. +> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022 and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). +> The AKS Managed add-on will begin deprecation in Sept. 2023. ## Before you begin |
aks | Use Oidc Issuer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-oidc-issuer.md | Title: Create an OpenID Connect provider for your Azure Kubernetes Service (AKS) cluster description: Learn how to configure the OpenID Connect (OIDC) provider for a cluster in Azure Kubernetes Service (AKS) Previously updated : 04/04/2023 Last updated : 04/28/2023 ++ # Create an OpenID Connect provider on Azure Kubernetes Service (AKS) [OpenID Connect][open-id-connect-overview] (OIDC) extends the OAuth 2.0 authorization protocol for use as an additional authentication protocol issued by Azure Active Directory (Azure AD). You can use OIDC to enable single sign-on (SSO) between your OAuth-enabled applications, on your Azure Kubernetes Service (AKS) cluster, by using a security token called an ID token. With your AKS cluster, you can enable OpenID Connect (OIDC) Issuer, which allows Azure Active Directory (Azure AD) or other cloud provider identity and access management platform, to discover the API server's public signing keys. To get the OIDC Issuer URL, run the [az aks show][az-aks-show] command. Replace az aks show -n myAKScluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv ``` -### Rotate the OIDC key +## Rotate the OIDC key To rotate the OIDC key, run the [az aks oidc-issuer][az-aks-oidc-issuer] command. Replace the default values for the cluster name and the resource group name. az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup > [!IMPORTANT] > Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice. Then key2 and key3 are valid, and key1 is invalid. +## Check the OIDC keys ++### Get the OIDC Issuer URL +To get the OIDC Issuer URL, run the [az aks show][az-aks-show] command. Replace the default values for the cluster name and the resource group name. ++```azurecli-interactive +az aks show -n myAKScluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv +``` ++The output should resemble the following: ++```output +https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/ +``` ++### Get the discovery document ++To get the discovery document, copy the URL `https://(OIDC issuer URL).well-known/openid-configuration` and open it in browser. ++The output should resemble the following: ++```output +{ + "issuer": "https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/", + "jwks_uri": "https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/00000000-0000-0000-0000-000000000000/openid/v1/jwks", + "response_types_supported": [ + "id_token" + ], + "subject_types_supported": [ + "public" + ], + "id_token_signing_alg_values_supported": [ + "RS256" + ] +} +``` ++### Get the JWK Set document ++To get the JWK Set document, copy the `jwks_uri` from the discovery document and past it in your browser's address bar. ++The output should resemble the following: +```output +{ + "keys": [ + { + "use": "sig", + "kty": "RSA", + "kid": "xxx", + "alg": "RS256", + "n": "xxxx", + "e": "AQAB" + }, + { + "use": "sig", + "kty": "RSA", + "kid": "xxx", + "alg": "RS256", + "n": "xxxx", + "e": "AQAB" + } + ] +} +``` ++During key rotation, there is one additional key present in the discovery document. 
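The checks above assume the cluster already exposes an issuer. As a hedged sketch, the issuer can be enabled on an existing cluster and the discovery and JWK Set documents fetched from the command line instead of a browser; the cluster and resource group names are placeholders, and `jq` is assumed to be available.

```bash
# Sketch: enable the OIDC issuer on an existing cluster, then retrieve the
# discovery document and JWK Set described above. Names are placeholders.
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-oidc-issuer

ISSUER_URL=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
  --query "oidcIssuerProfile.issuerUrl" -o tsv)

# Discovery document (the issuer URL already ends with a trailing slash).
curl -s "${ISSUER_URL}.well-known/openid-configuration"

# JWK Set document, assuming jq is installed to extract jwks_uri.
JWKS_URI=$(curl -s "${ISSUER_URL}.well-known/openid-configuration" | jq -r '.jwks_uri')
curl -s "$JWKS_URI"
```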
+ ## Next steps * See [configure creating a trust relationship between an app and an external identity provider](../active-directory/develop/workload-identity-federation-create-trust.md) to understand how a federated identity credential creates a trust relationship between an application on your cluster and an external identity provider. |
application-gateway | Application Gateway Private Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-private-deployment.md | This error can be ignored and will be clarified in a future release. ## Next steps -- See [Azure security baseline for Application Gateway](/security/benchmark/azure/baselines/application-gateway-security-baseline.md) for more security best practices.+- See [Azure security baseline for Application Gateway](/security/benchmark/azure/baselines/application-gateway-security-baseline) for more security best practices. |
application-gateway | Ingress Controller Autoscale Pods | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-autoscale-pods.md | As incoming traffic increases, it becomes crucial to scale up your applications In the following tutorial, we explain how you can use Application Gateway's `AvgRequestCountPerHealthyHost` metric to scale up your application. `AvgRequestCountPerHealthyHost` measures average requests sent to a specific backend pool and backend HTTP setting combination. -We are going to use following two components: +Use following two components: -* [`Azure Kubernetes Metric Adapter`](https://github.com/Azure/azure-k8s-metrics-adapter) - We will use the metric adapter to expose Application Gateway metrics through the metric server. The Azure Kubernetes Metric Adapter is an open source project under Azure, similar to the Application Gateway Ingress Controller. -* [`Horizontal Pod Autoscaler`](../aks/concepts-scale.md#horizontal-pod-autoscaler) - We will use HPA to use Application Gateway metrics and target a deployment for scaling. +* [`Azure Kubernetes Metric Adapter`](https://github.com/Azure/azure-k8s-metrics-adapter) - We use the metric adapter to expose Application Gateway metrics through the metric server. The Azure Kubernetes Metric Adapter is an open source project under Azure, similar to the Application Gateway Ingress Controller. +* [`Horizontal Pod Autoscaler`](../aks/concepts-scale.md#horizontal-pod-autoscaler) - We use HPA to use Application Gateway metrics and target a deployment for scaling. ## Setting up Azure Kubernetes Metric Adapter -1. We will first create an Azure AAD service principal and assign it `Monitoring Reader` access over Application Gateway's resource group. +1. First, create an Azure AD service principal and assign it `Monitoring Reader` access over Application Gateway's resource group. ```azurecli applicationGatewayGroupName="<application-gateway-group-id>" We are going to use following two components: az ad sp create-for-rbac -n "azure-k8s-metric-adapter-sp" --role "Monitoring Reader" --scopes applicationGatewayGroupId ``` -1. Now, We will deploy the [`Azure Kubernetes Metric Adapter`](https://github.com/Azure/azure-k8s-metrics-adapter) using the AAD service principal created above. +1. Now, deploy the [`Azure Kubernetes Metric Adapter`](https://github.com/Azure/azure-k8s-metrics-adapter) using the Azure AD service principal created previously. ```bash kubectl create namespace custom-metrics- # use values from service principal created above to create secret + # use values from service principal created previously to create secret kubectl create secret generic azure-k8s-metrics-adapter -n custom-metrics \ --from-literal=azure-tenant-id=<tenantid> \ --from-literal=azure-client-id=<clientid> \ We are going to use following two components: kubectl apply -f kubectl apply -f https://raw.githubusercontent.com/Azure/azure-k8s-metrics-adapter/master/deploy/adapter.yaml -n custom-metrics ``` -1. We will create an `ExternalMetric` resource with name `appgw-request-count-metric`. This resource will instruct the metric adapter to expose `AvgRequestCountPerHealthyHost` metric for `myApplicationGateway` resource in `myResourceGroup` resource group. You can use the `filter` field to target a specific backend pool and backend HTTP setting in the Application Gateway. +1. Create an `ExternalMetric` resource with name `appgw-request-count-metric`. 
This resource instructs the metric adapter to expose `AvgRequestCountPerHealthyHost` metric for `myApplicationGateway` resource in `myResourceGroup` resource group. You can use the `filter` field to target a specific backend pool and backend HTTP setting in the Application Gateway. ```yaml apiVersion: azure.com/v1alpha2 kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/appg ## Using the new metric to scale up the deployment -Once we are able to expose `appgw-request-count-metric` through the metric server, we are ready to use [`Horizontal Pod Autoscaler`](../aks/concepts-scale.md#horizontal-pod-autoscaler) to scale up our target deployment. +Once we're able to expose `appgw-request-count-metric` through the metric server, we're ready to use [`Horizontal Pod Autoscaler`](../aks/concepts-scale.md#horizontal-pod-autoscaler) to scale up our target deployment. -In following example, we will target a sample deployment `aspnet`. We will scale up Pods when `appgw-request-count-metric` > 200 per Pod up to a max of `10` Pods. +In following example, we target a sample deployment `aspnet`. We scale up Pods when `appgw-request-count-metric` > 200 per Pod up to a max of `10` Pods. Replace your target deployment name and apply the following auto scale configuration: ```yaml |
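The autoscale manifest itself isn't reproduced in the excerpt above. The following is a sketch of what a HorizontalPodAutoscaler driven by the external `appgw-request-count-metric` could look like, written against the current `autoscaling/v2` API; the article's own sample may use a different API version, and the deployment name and thresholds are taken from the description above.

```bash
# Sketch: scale the aspnet deployment on the external Application Gateway
# metric, up to 10 replicas, when the average request count per pod
# exceeds 200. The article's manifest may differ.
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: aspnet-appgw-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: aspnet
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: appgw-request-count-metric
        target:
          type: AverageValue
          averageValue: "200"
EOF
```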
application-gateway | Ingress Controller Expose Service Over Http Https | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-expose-service-over-http-https.md | These tutorials help illustrate the usage of [Kubernetes Ingress Resources](http - Installed `ingress-azure` helm chart. - [**Greenfield Deployment**](ingress-controller-install-new.md): If you're starting from scratch, refer to these installation instructions, which outlines steps to deploy an AKS cluster with Application Gateway and install application gateway ingress controller on the AKS cluster. - [**Brownfield Deployment**](ingress-controller-install-existing.md): If you have an existing AKS cluster and Application Gateway, refer to these instructions to install application gateway ingress controller on the AKS cluster.-- If you want to use HTTPS on this application, you'll need an x509 certificate and its private key.+- If you want to use HTTPS on this application, you need an x509 certificate and its private key. ## Deploy `guestbook` application -The guestbook application is a canonical Kubernetes application that composes of a Web UI frontend, a backend and a Redis database. By default, `guestbook` exposes its application through a service with name `frontend` on port `80`. Without a Kubernetes Ingress Resource, the service isn't accessible from outside the AKS cluster. We'll use the application and setup Ingress Resources to access the application through HTTP and HTTPS. +The guestbook application is a canonical Kubernetes application that composes of a Web UI frontend, a backend and a Redis database. By default, `guestbook` exposes its application through a service with name `frontend` on port `80`. Without a Kubernetes Ingress Resource, the service isn't accessible from outside the AKS cluster. We use the application and set up Ingress Resources to access the application through HTTP and HTTPS. -Follow the instructions below to deploy the guestbook application. +Use the following instructions to deploy the guestbook application. 1. Download `guestbook-all-in-one.yaml` from [here](https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/all-in-one/guestbook-all-in-one.yaml) 1. Deploy `guestbook-all-in-one.yaml` into your AKS cluster by running Now, the `guestbook` application has been deployed. ## Expose services over HTTP -In order to expose the guestbook application, we'll be using the following ingress resource: +To expose the guestbook application, use the following ingress resource: ```yaml apiVersion: extensions/v1beta1 spec: servicePort: 80 ``` -This ingress will expose the `frontend` service of the `guestbook-all-in-one` deployment +This ingress exposes the `frontend` service of the `guestbook-all-in-one` deployment as a default backend of the Application Gateway. Save the above ingress resource as `ing-guestbook.yaml`. Now the `guestbook` application should be available. You can check availability ### Without specified hostname -Without specifying hostname, the guestbook service will be available on all the host-names pointing to the application gateway. +Without specifying hostname, the guestbook service is available on all the host-names pointing to the application gateway. 1. Before deploying ingress, you need to create a kubernetes secret to host the certificate and private key. You can create a kubernetes secret by running Without specifying hostname, the guestbook service will be available on all the 1. 
Check the log of the ingress controller for deployment status. -Now the `guestbook` application will be available on both HTTP and HTTPS. +Now the `guestbook` application is available on both HTTP and HTTPS. ### With specified hostname You can also specify the hostname on the ingress in order to multiplex TLS configurations and services.-By specifying hostname, the guestbook service will only be available on the specified host. +By specifying hostname, the guestbook service is only available on the specified host. 1. Define the following ingress. In the ingress, specify the name of the secret in the `secretName` section and replace the hostname in the `hosts` section accordingly. By specifying hostname, the guestbook service will only be available on the spec 1. Check the log of the ingress controller for deployment status. -Now the `guestbook` application will be available on both HTTP and HTTPS only on the specified host (`<guestbook.contoso.com>` in this example). +Now the `guestbook` application is available on both HTTP and HTTPS only on the specified host (`<guestbook.contoso.com>` in this example). ## Integrate with other services -The following ingress will allow you to add other paths into this ingress and redirect those paths to other +The following ingress allows you to add other paths into this ingress and redirect those paths to other ```yaml apiVersion: extensions/v1beta1 |
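Pulling the HTTPS steps above together, the following sketch creates the TLS secret and a host-scoped ingress for the guestbook `frontend` service. It uses the `networking.k8s.io/v1` API rather than the `extensions/v1beta1` version shown in the article's snippets, and the certificate file names and hostname are placeholders.

```bash
# Sketch: TLS secret plus a host-scoped AGIC ingress for the guestbook
# frontend service. File names and hostname are placeholders.
kubectl create secret tls guestbook-secret --key tls.key --cert tls.crt

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: guestbook-https
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
    - hosts:
        - guestbook.contoso.com
      secretName: guestbook-secret
  rules:
    - host: guestbook.contoso.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
EOF
```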
application-gateway | Ingress Controller Install Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md | Apply the Helm changes: ``` As a result, your AKS has a new instance of `AzureIngressProhibitedTarget` called `prohibit-all-targets`:-```bash -kubectl get AzureIngressProhibitedTargets prohibit-all-targets -o yaml -``` + ```bash + kubectl get AzureIngressProhibitedTargets prohibit-all-targets -o yaml + ``` The object `prohibit-all-targets`, as the name implies, prohibits AGIC from changing config for *any* host and path. Helm install with `appgw.shared=true` deploys AGIC, but doesn't make any changes to Application Gateway. |
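By contrast with the `prohibit-all-targets` default above, a prohibited target can also be scoped to a single hostname so AGIC leaves only that site's configuration alone. A sketch, with a placeholder hostname and the CRD group used by the AGIC project:

```bash
# Sketch: an AzureIngressProhibitedTarget that protects a single hostname
# instead of all targets. The hostname is a placeholder.
cat <<EOF | kubectl apply -f -
apiVersion: appgw.ingress.k8s.io/v1
kind: AzureIngressProhibitedTarget
metadata:
  name: prohibit-manually-configured-site
spec:
  hostname: prod.contoso.com
EOF

# List the prohibited targets AGIC will honor.
kubectl get AzureIngressProhibitedTargets -o wide
```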
application-gateway | Ingress Controller Install New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md | Kubernetes. We'll use it to install the `application-gateway-kubernetes-ingress` Values: - `verbosityLevel`: Sets the verbosity level of the AGIC logging infrastructure. See [Logging Levels](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/463a87213bbc3106af6fce0f4023477216d2ad78/docs/troubleshooting.md#logging-levels) for possible values.- - `appgw.environment`: Sets cloud environment. Possbile values: `AZURECHINACLOUD`, `AZUREGERMANCLOUD`, `AZUREPUBLICCLOUD`, `AZUREUSGOVERNMENTCLOUD` + - `appgw.environment`: Sets cloud environment. Possible values: `AZURECHINACLOUD`, `AZUREGERMANCLOUD`, `AZUREPUBLICCLOUD`, `AZUREUSGOVERNMENTCLOUD` - `appgw.subscriptionId`: The Azure Subscription ID in which Application Gateway resides. Example: `a123b234-a3b4-557d-b2df-a0bc12de1234` - `appgw.resourceGroup`: Name of the Azure Resource Group in which Application Gateway was created. Example: `app-gw-resource-group` - `appgw.name`: Name of the Application Gateway. Example: `applicationgatewayd0f0` Alternatively you can: * Download the YAML file above: -```bash -curl https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml -o aspnetapp.yaml -``` + ```bash + curl https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml -o aspnetapp.yaml + ``` * Apply the YAML file: -```bash -kubectl apply -f aspnetapp.yaml -``` + ```bash + kubectl apply -f aspnetapp.yaml + ``` ## Other Examples |
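As a rough illustration of the values listed above, the following sketch writes a minimal `helm-config.yaml` and installs the chart with it. Every identifier is a placeholder, and the authentication section assumes AAD pod identity values rather than prescribing one method.

```bash
# Sketch: minimal AGIC Helm configuration using the values described above.
# All identifiers are placeholders; the repo name assumes the
# application-gateway-kubernetes-ingress Helm repo added earlier in the article.
cat <<EOF > helm-config.yaml
verbosityLevel: 3
appgw:
  environment: AZUREPUBLICCLOUD
  subscriptionId: a123b234-a3b4-557d-b2df-a0bc12de1234
  resourceGroup: app-gw-resource-group
  name: applicationgatewayd0f0
armAuth:
  type: aadPodIdentity
  identityResourceID: <identity-resource-id>
  identityClientID: <identity-client-id>
rbac:
  enabled: true
EOF

helm install ingress-azure application-gateway-kubernetes-ingress/ingress-azure -f helm-config.yaml
```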
application-gateway | Ingress Controller Letsencrypt Certificate Application Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway.md | -This section configures your AKS to use [LetsEncrypt.org](https://letsencrypt.org/) and automatically obtain a TLS/SSL certificate for your domain. The certificate will be installed on Application Gateway, which will perform SSL/TLS termination for your AKS cluster. The setup described here uses the [cert-manager](https://github.com/jetstack/cert-manager) Kubernetes add-on, which automates the creation and management of certificates. +This section configures your AKS to use [LetsEncrypt.org](https://letsencrypt.org/) and automatically obtain a TLS/SSL certificate for your domain. The certificate is installed on Application Gateway, which performs SSL/TLS termination for your AKS cluster. The setup described here uses the [cert-manager](https://github.com/jetstack/cert-manager) Kubernetes add-on, which automates the creation and management of certificates. -Follow the steps below to install [cert-manager](https://docs.cert-manager.io) on your existing AKS cluster. +Use the following steps to install [cert-manager](https://docs.cert-manager.io) on your existing AKS cluster. 1. Helm Chart - Run the following script to install the `cert-manager` helm chart. This will: + Run the following script to install the `cert-manager` helm chart. The script performs the following actions: - - create a new `cert-manager` namespace on your AKS - - create the following CRDs: Certificate, Challenge, ClusterIssuer, Issuer, Order - - install cert-manager chart (from [docs.cert-manager.io)](https://cert-manager.io/docs/installation/compatibility/) + - creates a new `cert-manager` namespace on your AKS + - creates the following CRDs: Certificate, Challenge, ClusterIssuer, Issuer, Order + - installs cert-manager chart (from [docs.cert-manager.io)](https://cert-manager.io/docs/installation/compatibility/) ```bash #!/bin/bash Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o 2. ClusterIssuer Resource - Create a `ClusterIssuer` resource. It's required by `cert-manager` to represent the `Lets Encrypt` certificate - authority where the signed certificates will be obtained. + Create a `ClusterIssuer` resource. This is required by `cert-manager` to represent the `Lets Encrypt` certificate authority where the signed certificate is obtained. - Using the non-namespaced `ClusterIssuer` resource, cert-manager will issue certificates that can be consumed from - multiple namespaces. `LetΓÇÖs Encrypt` uses the ACME protocol to verify that you control a given domain name and to issue - you a certificate. More details on configuring `ClusterIssuer` properties - [here](https://docs.cert-manager.io/en/latest/tasks/issuers/https://docsupdatetracker.net/index.html). `ClusterIssuer` will instruct `cert-manager` - to issue certificates using the `Lets Encrypt` staging environment used for testing (the root certificate not present - in browser/client trust stores). + Using the non-namespaced `ClusterIssuer` resource, cert-manager issues certificates that can be consumed from multiple namespaces. `LetΓÇÖs Encrypt` uses the ACME protocol to verify that you control a given domain name and to issue a certificate. More details on configuring `ClusterIssuer` properties [here](https://docs.cert-manager.io/en/latest/tasks/issuers/https://docsupdatetracker.net/index.html). 
`ClusterIssuer` instructs `cert-manager` to issue certificates using the `Lets Encrypt` staging environment used for testing (the root certificate not present in browser/client trust stores). - The default challenge type in the YAML below is `http01`. Other challenges are documented on [letsencrypt.org - Challenge Types](https://letsencrypt.org/docs/challenge-types/) + The default challenge type in the following YAML is `http01`. Other challenges are documented on [letsencrypt.org - Challenge Types](https://letsencrypt.org/docs/challenge-types/) > [!IMPORTANT] - > Update `<YOUR.EMAIL@ADDRESS>` in the YAML below + > Update `<YOUR.EMAIL@ADDRESS>` in the following YAML. ```bash #!/bin/bash Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o spec: acme: # You must replace this email address with your own.- # Let's Encrypt will use this to contact you about expiring + # Let's Encrypt uses this to contact you about expiring # certificates, and issues related to your account. email: <YOUR.EMAIL@ADDRESS> # ACME server URL for LetΓÇÖs EncryptΓÇÖs staging environment. Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o tagged Ingress resource. > [!IMPORTANT] - > Update `<PLACEHOLDERS.COM>` in the YAML below with your own domain (or the Application Gateway one, for example + > Update `<PLACEHOLDERS.COM>` in the following YAML with your own domain (or the Application Gateway one, for example 'kh-aks-ingress.westeurope.cloudapp.azure.com') ```bash Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o 5. Certificate Expiration and Renewal - Before the `Lets Encrypt` certificate expires, `cert-manager` will automatically update the certificate in the Kubernetes secret store. At that point, Application Gateway Ingress Controller will apply the updated secret referenced in the ingress resources it's using to configure the Application Gateway. + Before the `Lets Encrypt` certificate expires, `cert-manager` automatically updates the certificate in the Kubernetes secret store. At that point, Application Gateway Ingress Controller applies the updated secret referenced in the ingress resources it's using to configure the Application Gateway. |
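To confirm that issuance and renewal are working as described above, cert-manager's intermediate resources and the resulting secret can be inspected. The certificate and secret names below are placeholders that depend on your ingress manifest.

```bash
# Sketch: verify that cert-manager completed issuance for the ingress.
# Resource names are placeholders; adjust them to match your manifests.
kubectl get clusterissuer
kubectl get certificate,certificaterequest,order,challenge --all-namespaces

# Inspect the issued certificate and the secret referenced by the ingress.
kubectl describe certificate guestbook-tls
kubectl get secret guestbook-tls -o yaml
```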
application-gateway | Ingress Controller Private Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-private-ip.md | -This feature allows to expose the ingress endpoint within the `Virtual Network` using a private IP. +This feature exposes the ingress endpoint within the `Virtual Network` using a private IP. -## Pre-requisites +## Prerequisites Application Gateway with a [Private IP configuration](./configure-application-gateway-with-private-frontend-ip.md) There are two ways to configure the controller to use Private IP for ingress, To expose a particular ingress over Private IP, use annotation [`appgw.ingress.k appgw.ingress.kubernetes.io/use-private-ip: "true" ``` -For Application Gateways without a Private IP, Ingresses annotated with `appgw.ingress.kubernetes.io/use-private-ip: "true"` will be ignored. This will be indicated in the ingress event and AGIC pod log. +For Application Gateways without a Private IP, Ingresses annotated with `appgw.ingress.kubernetes.io/use-private-ip: "true"` is ignored. This is indicated in the ingress event and AGIC pod log. * Error as indicated in the Ingress Event - ```bash + ```output Events: Type Reason Age From Message - - - - For Application Gateways without a Private IP, Ingresses annotated with `appgw.i * Error as indicated in AGIC Logs - ```bash + ```output E0730 18:57:37.914749 1 prune.go:65] Ingress default/hello-world-ingress requires Application Gateway applicationgateway3026 has a private IP address ``` appgw: usePrivateIP: true ``` -This will make the ingress controller filter the IP address configurations for a Private IP when configuring the frontend listeners on the Application Gateway. -AGIC will panic and crash if `usePrivateIP: true` and no Private IP is assigned. +This makes the ingress controller filter the IP address configurations for a Private IP when configuring the frontend listeners on the Application Gateway. +AGIC can panic and crash if `usePrivateIP: true` and no Private IP is assigned. > [!NOTE] > Application Gateway v2 SKU requires a Public IP. Should you require Application Gateway to be private, Attach a [`Network Security Group`](../virtual-network/network-security-groups-overview.md) to the Application Gateway's subnet to restrict traffic. |
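A sketch of the per-ingress option described above, with placeholder service and ingress names; the annotation asks AGIC to configure the listener on the gateway's private frontend IP.

```bash
# Sketch: an ingress that AGIC exposes on the Application Gateway private
# frontend IP by using the use-private-ip annotation described above.
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/use-private-ip: "true"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-service
                port:
                  number: 80
EOF
```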
application-gateway | Ingress Controller Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md | The following conditions must be in place for AGIC to function as expected: 1. AKS must have one or more healthy **pods**. Verify this configuration from [Cloud Shell](https://shell.azure.com/) with `kubectl get pods -o wide --show-labels` If you have a Pod with an `apsnetapp`, your output may look like this:- ```bash + ```output delyan@Azure:~$ kubectl get pods -o wide --show-labels NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS The following conditions must be in place for AGIC to function as expected: 2. One or more **services**, referencing the pods above via matching `selector` labels. Verify this configuration from [Cloud Shell](https://shell.azure.com/) with `kubectl get services -o wide`- ```bash + ```output delyan@Azure:~$ kubectl get services -o wide --show-labels NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR LABELS The following conditions must be in place for AGIC to function as expected: 3. **Ingress**, annotated with `kubernetes.io/ingress.class: azure/application-gateway`, referencing the service above Verify this configuration from [Cloud Shell](https://shell.azure.com/) with `kubectl get ingress -o wide --show-labels`- ```bash + ```output delyan@Azure:~$ kubectl get ingress -o wide --show-labels NAME HOSTS ADDRESS PORTS AGE LABELS The following conditions must be in place for AGIC to function as expected: ``` 4. View annotations of the ingress above: `kubectl get ingress aspnetapp -o yaml` (substitute `aspnetapp` with the name of your ingress)- ```bash + ```output delyan@Azure:~$ kubectl get ingress aspnetapp -o yaml apiVersion: extensions/v1beta1 |
application-gateway | Ingress Controller Update Ingress Controller | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-update-ingress-controller.md | Before beginning the upgrade procedure, ensure that you've added the required re Sample response: - ```bash + ```output NAME CHART VERSION APP VERSION DESCRIPTION application-gateway-kubernetes-ingress/ingress-azure 0.7.0-rc1 0.7.0-rc1 Use Azure Application Gateway as the ingress for an Azure... application-gateway-kubernetes-ingress/ingress-azure 0.6.0 0.6.0 Use Azure Application Gateway as the ingress for an Azure... Before beginning the upgrade procedure, ensure that you've added the required re Sample response: - ```bash + ```output NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE odd-billygoat 22 Fri Jun 21 15:56:06 2019 FAILED ingress-azure-0.7.0-rc1 0.7.0-rc1 default ``` If the Helm deployment fails, you can roll back to a previous release. Sample output: - ```bash + ```output REVISION UPDATED STATUS CHART DESCRIPTION 1 Mon Jun 17 13:49:42 2019 DEPLOYED ingress-azure-0.6.0 Install complete 2 Fri Jun 21 15:56:06 2019 FAILED ingress-azure-xx xxxx |
application-gateway | Quick Create Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md | |
application-gateway | Redirect Http To Https Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-cli.md | -You can use the Azure CLI to create an [application gateway](overview.md) with a certificate for TLS/SSL termination. A routing rule is used to redirect HTTP traffic to the HTTPS port in your application gateway. In this example, you also create a [virtual machine scale set](../virtual-machine-scale-sets/overview.md) for the backend pool of the application gateway that contains two virtual machine instances. +You can use the Azure CLI to create an [application gateway](overview.md) with a certificate for TLS/SSL termination. A routing rule is used to redirect HTTP traffic to the HTTPS port in your application gateway. In this example, you also create a [Virtual Machine Scale Set](../virtual-machine-scale-sets/overview.md) for the backend pool of the application gateway that contains two virtual machine instances. In this article, you learn how to: In this article, you learn how to: * Set up a network * Create an application gateway with the certificate * Add a listener and redirection rule-* Create a virtual machine scale set with the default backend pool +* Create a Virtual Machine Scale Set with the default backend pool [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] az group create --name myResourceGroupAG --location eastus ## Create network resources -Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* that's needed by the backend servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip). +Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* needed by the backend servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip). ```azurecli-interactive az network vnet create \ az network application-gateway rule create \ --redirect-config httpToHttps ``` -## Create a virtual machine scale set +## Create a Virtual Machine Scale Set -In this example, you create a virtual machine scale set named *myvmss* that provides servers for the backend pool in the application gateway. The virtual machines in the scale set are associated with *myBackendSubnet* and *appGatewayBackendPool*. To create the scale set, you can use [az vmss create](/cli/azure/vmss#az-vmss-create). +In this example, you create a Virtual Machine Scale Set named *myvmss* that provides servers for the backend pool in the application gateway. The virtual machines in the scale set are associated with *myBackendSubnet* and *appGatewayBackendPool*. To create the scale set, you can use [az vmss create](/cli/azure/vmss#az-vmss-create). ```azurecli-interactive az vmss create \ |
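For orientation, the following sketch shows the kind of commands involved in the HTTP-to-HTTPS redirect this article describes: an HTTP frontend port and listener, a permanent redirect to the HTTPS listener, and a rule tying them together. The names and the rule priority are placeholders, and this isn't the article's exact listing.

```bash
# Sketch: redirect HTTP traffic to the existing HTTPS listener.
# Resource names are placeholders and assume the gateway created earlier.
az network application-gateway frontend-port create \
  --resource-group myResourceGroupAG --gateway-name myAppGateway \
  --name httpPort --port 80

az network application-gateway http-listener create \
  --resource-group myResourceGroupAG --gateway-name myAppGateway \
  --name httpListener --frontend-port httpPort --frontend-ip appGatewayFrontendIP

az network application-gateway redirect-config create \
  --resource-group myResourceGroupAG --gateway-name myAppGateway \
  --name httpToHttps --type Permanent \
  --target-listener appGatewayHttpListener \
  --include-path true --include-query-string true

# Priority is required on newer v2 gateways; omit it for SKUs without rule priority.
az network application-gateway rule create \
  --resource-group myResourceGroupAG --gateway-name myAppGateway \
  --name redirectRule --rule-type Basic --priority 20 \
  --http-listener httpListener --redirect-config httpToHttps
```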
application-gateway | Redirect Internal Site Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-cli.md | |
application-gateway | Self Signed Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/self-signed-certificates.md | |
application-gateway | Tutorial Manage Web Traffic Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-cli.md | Title: Manage web traffic - Azure CLI -description: Learn how to create an application gateway with a virtual machine scale set to manage web traffic using the Azure CLI. +description: Learn how to create an application gateway with a Virtual Machine Scale Set to manage web traffic using the Azure CLI. Previously updated : 07/20/2019 Last updated : 04/27/2023 # Manage web traffic with an application gateway using the Azure CLI -Application gateway is used to manage and secure web traffic to servers that you maintain. You can use the Azure CLI to create an [application gateway](overview.md) that uses a [virtual machine scale set](../virtual-machine-scale-sets/overview.md) for backend servers. In this example, the scale set contains two virtual machine instances. The scale set is added to the default backend pool of the application gateway. +Application gateway is used to manage and secure web traffic to servers that you maintain. You can use the Azure CLI to create an [application gateway](overview.md) that uses a [Virtual Machine Scale Set](../virtual-machine-scale-sets/overview.md) for backend servers. In this example, the scale set contains two virtual machine instances. The scale set is added to the default backend pool of the application gateway. In this article, you learn how to: * Set up the network * Create an application gateway-* Create a virtual machine scale set with the default backend pool +* Create a Virtual Machine Scale Set with the default backend pool If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-manage-web-traffic-powershell.md). A resource group is a logical container into which Azure resources are deployed The following example creates a resource group named *myResourceGroupAG* in the *eastus* location. -```azurecli-interactive -az group create --name myResourceGroupAG --location eastus -``` + ```azurecli-interactive + az group create --name myResourceGroupAG --location eastus + ``` ## Create network resources -Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* that's needed by the backend servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip). +Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* needed by the backend servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip). 
-```azurecli-interactive -az network vnet create \ + ```azurecli-interactive + az network vnet create \ --name myVNet \ --resource-group myResourceGroupAG \ --location eastus \ az network vnet create \ --subnet-name myAGSubnet \ --subnet-prefix 10.0.1.0/24 -az network vnet subnet create \ + az network vnet subnet create \ --name myBackendSubnet \ --resource-group myResourceGroupAG \ --vnet-name myVNet \ --address-prefix 10.0.2.0/24 -az network public-ip create \ + az network public-ip create \ --resource-group myResourceGroupAG \ --name myAGPublicIPAddress \ --allocation-method Static \ --sku Standard-``` + ``` ## Create an application gateway Use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myPublicIPAddress* that you previously created. -```azurecli-interactive -az network application-gateway create \ + ```azurecli-interactive + az network application-gateway create \ --name myAppGateway \ --location eastus \ --resource-group myResourceGroupAG \ az network application-gateway create \ --http-settings-port 80 \ --http-settings-protocol Http \ --public-ip-address myAGPublicIPAddress-``` + ``` - It may take several minutes for the application gateway to be created. After the application gateway is created, you'll see these new features: +It may take several minutes for the application gateway to be created. After the application gateway is created, you'll see these new features: - *appGatewayBackendPool* - An application gateway must have at least one backend address pool. - *appGatewayBackendHttpSettings* - Specifies that port 80 and an HTTP protocol is used for communication. az network application-gateway create \ ## Create a Virtual Machine Scale Set -In this example, you create a virtual machine scale set that provides servers for the backend pool in the application gateway. The virtual machines in the scale set are associated with *myBackendSubnet* and *appGatewayBackendPool*. To create the scale set, use [az vmss create](/cli/azure/vmss#az-vmss-create). +In this example, you create a Virtual Machine Scale Set that provides servers for the backend pool in the application gateway. The virtual machines in the scale set are associated with *myBackendSubnet* and *appGatewayBackendPool*. To create the scale set, use [az vmss create](/cli/azure/vmss#az-vmss-create). -```azurecli-interactive -az vmss create \ + ```azurecli-interactive + az vmss create \ --name myvmss \ --resource-group myResourceGroupAG \ --image UbuntuLTS \ az vmss create \ --upgrade-policy-mode Automatic \ --app-gateway myAppGateway \ --backend-pool-name appGatewayBackendPool-``` + ``` ### Install NGINX -Now you can install NGINX on the virtual machine scale set so you can test HTTP connectivity to the backend pool. +Now you can install NGINX on the Virtual Machine Scale Set so you can test HTTP connectivity to the backend pool. 
-```azurecli-interactive -az vmss extension set \ + ```azurecli-interactive + az vmss extension set \ --publisher Microsoft.Azure.Extensions \ --version 2.0 \ --name CustomScript \ --resource-group myResourceGroupAG \ --vmss-name myvmss \ --settings '{ "fileUris": ["https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/application-gateway/iis/install_nginx.sh"], "commandToExecute": "./install_nginx.sh" }'-``` + ``` ## Test the application gateway To get the public IP address of the application gateway, use [az network public-ip show](/cli/azure/network/public-ip). Copy the public IP address, and then paste it into the address bar of your browser. -```azurecli-interactive -az network public-ip show \ + ```azurecli-interactive + az network public-ip show \ --resource-group myResourceGroupAG \ --name myAGPublicIPAddress \ --query [ipAddress] \ --output tsv-``` + ```  az network public-ip show \ When no longer needed, remove the resource group, application gateway, and all related resources. -```azurecli-interactive -az group delete --name myResourceGroupAG --location eastus -``` + ```azurecli-interactive + az group delete --name myResourceGroupAG --location eastus + ``` ## Next steps |
application-gateway | Tutorial Multiple Sites Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-cli.md | In this article, you learn how to: * Create an application gateway * Create backend listeners * Create routing rules-* Create virtual machine scale sets with the backend pools +* Create Virtual Machine Scale Sets with the backend pools * Create a CNAME record in your domain :::image type="content" source="./media/tutorial-multiple-sites-cli/scenario.png" alt-text="Multi-site Application Gateway"::: az network application-gateway rule create \ ``` -## Create virtual machine scale sets +## Create Virtual Machine Scale Sets -In this example, you create three virtual machine scale sets that support the three backend pools in the application gateway. The scale sets that you create are named *myvmss1*, *myvmss2*, and *myvmss3*. Each scale set contains two virtual machine instances on which you install IIS. +In this example, you create three Virtual Machine Scale Sets that support the three backend pools in the application gateway. The scale sets that you create are named *myvmss1*, *myvmss2*, and *myvmss3*. Each scale set contains two virtual machine instances on which you install IIS. ```azurecli-interactive for i in `seq 1 2`; do |
application-gateway | Tutorial Ssl Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-cli.md | -You can use the Azure CLI to create an [application gateway](overview.md) with a certificate for [TLS termination](ssl-overview.md). For backend servers, you can use a [virtual machine scale set](../virtual-machine-scale-sets/overview.md). In this example, the scale set contains two virtual machine instances that are added to the default backend pool of the application gateway. +You can use the Azure CLI to create an [application gateway](overview.md) with a certificate for [TLS termination](ssl-overview.md). For backend servers, you can use a [Virtual Machine Scale Set](../virtual-machine-scale-sets/overview.md). In this example, the scale set contains two virtual machine instances that are added to the default backend pool of the application gateway. In this article, you learn how to: * Create a self-signed certificate * Set up a network * Create an application gateway with the certificate-* Create a virtual machine scale set with the default backend pool +* Create a Virtual Machine Scale Set with the default backend pool If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-ssl-powershell.md). az group create --name myResourceGroupAG --location eastus ## Create network resources -Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* that's needed by the backend servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip). +Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* needed by the backend servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip). ```azurecli-interactive az network vnet create \ az network application-gateway create \ - *appGatewayFrontendIP* - Assigns *myAGPublicIPAddress* to *appGatewayHttpListener*. - *rule1* - The default routing rule that is associated with *appGatewayHttpListener*. -## Create a virtual machine scale set +## Create a Virtual Machine Scale Set -In this example, you create a virtual machine scale set that provides servers for the default backend pool in the application gateway. The virtual machines in the scale set are associated with *myBackendSubnet* and *appGatewayBackendPool*. To create the scale set, you can use [az vmss create](/cli/azure/vmss#az-vmss-create). +In this example, you create a Virtual Machine Scale Set that provides servers for the default backend pool in the application gateway. The virtual machines in the scale set are associated with *myBackendSubnet* and *appGatewayBackendPool*. To create the scale set, you can use [az vmss create](/cli/azure/vmss#az-vmss-create). ```azurecli-interactive az vmss create \ |
application-gateway | Tutorial Url Redirect Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-cli.md | |
application-gateway | Tutorial Url Route Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-cli.md | -As an IT administrator managing web traffic, you want to help your customers or users get the information they need as quickly as possible. One way you can optimize their experience is by routing different kinds of web traffic to different server resources. This article shows you how to use the Azure CLI to set up and configure Application Gateway routing for different types of traffic from your application. The routing then directs the traffic to different server pools based on the URL. +As an IT administrator managing web traffic, you want to help your customers and users get the information they need as quickly as possible. One way you can optimize their experience is by routing different kinds of web traffic to different server resources. This article shows you how to use the Azure CLI to set up and configure Application Gateway routing for different types of traffic from your application. The routing then directs the traffic to different server pools based on the URL.  In this article, you learn how to: -* Create a resource group for the network resources youΓÇÖll need +* Create a resource group for the network resources you need * Create the network resources * Create an application gateway for the traffic coming from your application * Specify server pools and routing rules for the different types of traffic az group create --name myResourceGroupAG --location eastus ## Create network resources -Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using `az network vnet create`. Then add a subnet named *myBackendSubnet* that's needed by the backend servers using `az network vnet subnet create`. Create the public IP address named *myAGPublicIPAddress* using `az network public-ip create`. +Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using `az network vnet create`. Then add a subnet named *myBackendSubnet* needed by the backend servers using `az network vnet subnet create`. Create the public IP address named *myAGPublicIPAddress* using `az network public-ip create`. ```azurecli-interactive az network vnet create \ az network application-gateway create \ |Feature |Description | ||| |appGatewayBackendPool |An application gateway must have at least one backend address pool.|-|appGatewayBackendHttpSettings |Specifies that port 80 and an HTTP protocol is used for communication.| +|appGatewayBackendHttpSettings |Specifies that port 80 and an HTTP protocol are used for communication.| |appGatewayHttpListener |The default listener associated with appGatewayBackendPool| |appGatewayFrontendIP |Assigns myAGPublicIPAddress to appGatewayHttpListener.| |rule1 |The default routing rule that is associated with appGatewayHttpListener.| az network application-gateway rule create \ --priority 200 ``` -## Create virtual machine scale sets +## Create Virtual Machine Scale Sets -In this article, you create three virtual machine scale sets that support the three backend pools you created. You create scale sets named *myvmss1*, *myvmss2*, and *myvmss3*. Each scale set contains two virtual machine instances where you install NGINX. +In this article, you create three Virtual Machine Scale Sets that support the three backend pools you created. You create scale sets named *myvmss1*, *myvmss2*, and *myvmss3*. Each scale set contains two virtual machine instances where you install NGINX. 
```azurecli-interactive for i in `seq 1 3`; do |
applied-ai-services | Form Recognizer Disconnected Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-disconnected-containers.md | The following example shows the formatting for the `docker run` command to use w | `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` | | `{LICENSE_MOUNT}` | The path where the license is downloaded, and mounted. | `/host/license:/path/to/license/directory` | | `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |-| `{API_KEY}` | The key for your Text Analytics resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`{string}`| +| `{API_KEY}` | The key for your Form Recognizer resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`{string}`| | `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` | **Example `docker run` command** After you've configured the container, use the next section to run the container ## Form Recognizer container models and configuration -After you've [configured the container](#configure-the-container-to-be-run-in-a-disconnected-environment), the values for the downloaded translation models and container configuration will be generated and displayed in the container output: --```bash --e MODELS= /path/to/model1/, /path/to/model2/--e TRANSLATORSYSTEMCONFIG=/path/to/model/config/translatorsystemconfig.json-``` +After you've [configured the container](#configure-the-container-to-be-run-in-a-disconnected-environment), the values for the downloaded form recognizer models and container configuration will be generated and displayed in the container output. ## Run the container in a disconnected environment Run the container with an output mount and logging enabled. These settings enabl ## Next steps * [Deploy the Sample Labeling tool to an Azure Container Instance (ACI)](../deploy-label-tool.md#deploy-with-azure-container-instances-aci)-* [Change or end a commitment plan](../../../cognitive-services/containers/disconnected-containers.md#purchase-a-different-commitment-plan-for-disconnected-containers) +* [Change or end a commitment plan](../../../cognitive-services/containers/disconnected-containers.md#purchase-a-different-commitment-plan-for-disconnected-containers) |
applied-ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md | Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied- [!INCLUDE [Models](includes/model-type-name.md)] +## Video: Form Recognizer models ++The following video introduces Form Recognizer models and their associated output to help you choose the best model to address your document scenario needs.</br></br> ++ > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fX1b] + ## Which Form Recognizer model should I use? This section helps you decide which **Form Recognizer v3.0** supported model you should use for your application: |
azure-arc | Azure Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md | For a conceptual overview of this feature, see [Azure RBAC on Azure Arc-enabled - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. > [!NOTE]-> You can't set up this feature for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to the API server of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](../../aks/manage-azure-rbac.md) and doesn't require the AKS cluster to be connected to Azure Arc. For AKS on Azure Stack HCI, see [Use Azure RBAC for AKS hybrid clusters (preview)](/azure/aks/hybrid/azure-rbac-aks-hybrid). +> You can't set up this feature for Red Hat OpenShift, or for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to the API server of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](../../aks/manage-azure-rbac.md) and doesn't require the AKS cluster to be connected to Azure Arc. For AKS on Azure Stack HCI, see [Use Azure RBAC for AKS hybrid clusters (preview)](/azure/aks/hybrid/azure-rbac-aks-hybrid). ## Set up Azure AD applications |
azure-arc | Quick Start Connect Vcenter To Arc Using Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md | To start using the Azure Arc-enabled VMware vSphere (preview) features, you need First, the script deploys a virtual appliance called [Azure Arc resource bridge (preview)](../resource-bridge/overview.md) in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between vCenter Server and Azure Arc. +> [!IMPORTANT] +> This article describes a way to connect a generic vCenter Server to Azure Arc. If you are trying to enable Arc for Azure VMware Solution (AVS) private cloud, please follow this guide instead - [Deploy Arc for Azure VMware Solution](../../azure-vmware/deploy-arc-for-azure-vmware-solution.md). With the Arc for AVS onboarding process you will need to provide fewer inputs and Arc capabilities are better integrated into the AVS private cloud portal experience. + ## Prerequisites ### Azure |
azure-functions | Create First Function Vs Code Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md | In this section, you use Visual Studio Code to create a local Azure Functions pr |**Provide a function name**|Type `HttpExample`.| |**Select how you would like to open your project**|Choose `Open in current window`| - Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Azure Functions JavaScript developer guide](functions-reference-node.md). + Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Azure Functions JavaScript developer guide](functions-reference-node.md?tabs=javascript). ::: zone-end [!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)] |
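For orientation, an HTTP-triggered JavaScript function of the kind this quickstart produces looks roughly like the following sketch, assuming the v4 programming model; the exact file Visual Studio Code generates may differ.

```javascript
// Minimal sketch of an HTTP-triggered function in the v4 programming model.
// Illustrative only; not the literal file the Azure Functions extension generates.
const { app } = require('@azure/functions');

app.http('HttpExample', {
    methods: ['GET', 'POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        context.log(`Http function processed request for url "${request.url}"`);
        return { body: 'Hello, world!' };
    }
});
```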
azure-functions | Create First Function Vs Code Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md | In this section, you use Visual Studio Code to create a local Azure Functions pr |**Provide a function name**|Type `HttpExample`.| |**Select how you would like to open your project**|Choose `Open in current window`| - Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Azure Functions TypeScript developer guide](functions-reference-node.md). + Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Azure Functions TypeScript developer guide](functions-reference-node.md?tabs=typescript). ::: zone-end [!INCLUDE [functions-run-function-test-local-vs-code](../../includes/functions-run-function-test-local-vs-code.md)] |
azure-functions | Functions Add Output Binding Azure Sql Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-azure-sql-vs-code.md | You've updated your HTTP triggered function to write data to Azure SQL Database. ::: zone pivot="programming-language-javascript" + [Examples of complete Function projects in JavaScript](/samples/browse/?products=azure-functions&languages=javascript). -+ [Azure Functions JavaScript developer guide](functions-reference-node.md) ++ [Azure Functions JavaScript developer guide](functions-reference-node.md?tabs=javascript) ::: zone-end ::: zone pivot="programming-language-python" + [Examples of complete Function projects in Python](/samples/browse/?products=azure-functions&languages=python). -+ [Azure Functions Python developer guide](functions-reference-node.md) ++ [Azure Functions Python developer guide](functions-reference-python.md) ::: zone-end |
azure-functions | Functions Add Output Binding Cosmos Db Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md | You've updated your HTTP triggered function to write JSON documents to an Azure ::: zone pivot="programming-language-javascript" + [Examples of complete Function projects in JavaScript](/samples/browse/?products=azure-functions&languages=javascript). -+ [Azure Functions JavaScript developer guide](functions-reference-node.md) ++ [Azure Functions JavaScript developer guide](functions-reference-node.md?tabs=javascript) ::: zone-end ::: zone pivot="programming-language-python" + [Examples of complete Function projects in Python](/samples/browse/?products=azure-functions&languages=python). -+ [Azure Functions Python developer guide](functions-reference-node.md) ++ [Azure Functions Python developer guide](functions-reference-python.md) ::: zone-end |
azure-functions | Functions Add Output Binding Storage Queue Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-cli.md | You've updated your HTTP triggered function to write data to a Storage queue. No ::: zone pivot="programming-language-javascript" + [Examples of complete Function projects in JavaScript](/samples/browse/?products=azure-functions&languages=javascript). -+ [Azure Functions JavaScript developer guide](functions-reference-node.md) ++ [Azure Functions JavaScript developer guide](functions-reference-node.md?tabs=javascript) [previous-quickstart]: create-first-function-cli-javascript.md ::: zone-end ::: zone pivot="programming-language-typescript" + [Examples of complete Function projects in TypeScript](/samples/browse/?products=azure-functions&languages=typescript). -+ [Azure Functions TypeScript developer guide](functions-reference-node.md#typescript) ++ [Azure Functions TypeScript developer guide](functions-reference-node.md?tabs=typescript) [previous-quickstart]: create-first-function-cli-typescript.md ::: zone-end |
azure-functions | Functions Add Output Binding Storage Queue Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md | You've updated your HTTP triggered function to write data to a Storage queue. No ::: zone pivot="programming-language-javascript" * [Examples of complete Function projects in JavaScript](/samples/browse/?products=azure-functions&languages=javascript). -* [Azure Functions JavaScript developer guide](functions-reference-node.md) +* [Azure Functions JavaScript developer guide](functions-reference-node.md?tabs=javascript) ::: zone-end ::: zone pivot="programming-language-java" * [Examples of complete Function projects in Java](/samples/browse/?products=azure-functions&languages=java). You've updated your HTTP triggered function to write data to a Storage queue. No ::: zone pivot="programming-language-typescript" * [Examples of complete Function projects in TypeScript](/samples/browse/?products=azure-functions&languages=typescript). -* [Azure Functions TypeScript developer guide](functions-reference-node.md#typescript) +* [Azure Functions TypeScript developer guide](functions-reference-node.md?tabs=typescript) ::: zone-end ::: zone pivot="programming-language-python" * [Examples of complete Function projects in Python](/samples/browse/?products=azure-functions&languages=python). |
azure-functions | Functions App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md | Valid values: | `dotnet` | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# (script)](functions-reference-csharp.md) | | `dotnet-isolated` | [C# (isolated worker process)](dotnet-isolated-process-guide.md) | | `java` | [Java](functions-reference-java.md) |-| `node` | [JavaScript](functions-reference-node.md)<br/>[TypeScript](functions-reference-node.md#typescript) | +| `node` | [JavaScript](functions-reference-node.md?tabs=javascript)<br/>[TypeScript](functions-reference-node.md?tabs=typescript) | | `powershell` | [PowerShell](functions-reference-powershell.md) | | `python` | [Python](functions-reference-python.md) | | `custom` | [Other](functions-custom-handlers.md) | |
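For illustration only (not part of the original settings table), a Node.js function can confirm the worker runtime it runs under by reading the corresponding environment variable at execution time; the handler below is a hypothetical v3-style function.

```javascript
// Hypothetical v3-style handler that logs the configured worker runtime.
// For JavaScript and TypeScript function apps this value is "node".
module.exports = async function (context) {
    context.log(`FUNCTIONS_WORKER_RUNTIME: ${process.env["FUNCTIONS_WORKER_RUNTIME"]}`);
};
```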
azure-functions | Functions Develop Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md | The extension can be used with the following languages, which are supported by t * [C# compiled](functions-dotnet-class-library.md) * [C# script](functions-reference-csharp.md)<sup>*</sup>-* [JavaScript](functions-reference-node.md) +* [JavaScript](functions-reference-node.md?tabs=javascript) * [Java](functions-reference-java.md) * [PowerShell](functions-reference-powershell.md) * [Python](functions-reference-python.md)-* [TypeScript](functions-reference-node.md#typescript) +* [TypeScript](functions-reference-node.md?tabs=typescript) <sup>*</sup>Requires that you [set C# script as your default project language](#c-script-projects). |
azure-functions | Functions Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md | Use the following resources to get started. | **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=javascript%2ctypescript&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=JavaScript%2CTypeScript) | | **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Build Serverless APIs with Azure Functions](/training/modules/build-api-azure-functions/)<li>[Create serverless logic with Azure Functions](/training/modules/create-serverless-logic-with-azure-functions/)<li>[Refactor Node.js and Express APIs to Serverless APIs with Azure Functions](/training/modules/shift-nodejs-express-apis-serverless/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions).| | **Review best practices** |<li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=javascript)<li>[Security](./security-concepts.md)|-| **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [JavaScript](./functions-reference-node.md) or [TypeScript](./functions-reference-node.md#typescript) language reference| +| **Learn more in-depth** | <li>Learn how functions [automatically increase or decrease](./functions-scale.md) instances to match demand<li>Explore the different [deployment methods](./functions-deployment-technologies.md) available<li>Use built-in [monitoring tools](./functions-monitoring.md) to help analyze your functions<li>Read the [JavaScript](./functions-reference-node.md?tabs=javascript) or [TypeScript](./functions-reference-node.md?tabs=typescript) language reference| ::: zone-end ::: zone pivot="programming-language-powershell" |
azure-functions | Functions Reference Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md | Title: JavaScript developer reference for Azure Functions -description: Understand how to develop functions by using JavaScript. + Title: Node.js developer reference for Azure Functions +description: Understand how to develop functions by using Node.js. ms.assetid: 45dedd78-3ff9-411f-bb4b-16d29a11384c Last updated 02/24/2022+ms.devlang: javascript, typescript zone_pivot_groups: functions-nodejs-model -# Azure Functions JavaScript developer guide +# Azure Functions Node.js developer guide This guide is an introduction to developing Azure Functions using JavaScript or TypeScript. The article assumes that you have already read the [Azure Functions developer guide](functions-reference.md). > [!IMPORTANT] > The content of this article changes based on your choice of the Node.js programming model in the selector at the top of this page. The version you choose should match the version of the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package you are using in your app. If you do not have that package listed in your `package.json`, the default is v3. Learn more about the differences between v3 and v4 in the [upgrade guide](./functions-node-upgrade-v4.md). -As a JavaScript developer, you might also be interested in one of the following articles: +As a Node.js developer, you might also be interested in one of the following articles: | Getting started | Concepts| Guided learning | ||||-| <ul><li>[Node.js function using Visual Studio Code](./create-first-function-vs-code-node.md)</li><li>[Node.js function with terminal/command prompt](./create-first-function-cli-node.md)</li><li>[Node.js function using the Azure portal](functions-create-function-app-portal.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[TypeScript functions](#typescript)</li><li>[Performance considerations](functions-best-practices.md)</li></ul> | <ul><li>[Create serverless applications](/training/paths/create-serverless-applications/)</li><li>[Refactor Node.js and Express APIs to Serverless APIs](/training/modules/shift-nodejs-express-apis-serverless/)</li></ul> | +| <ul><li>[Node.js function using Visual Studio Code](./create-first-function-vs-code-node.md)</li><li>[Node.js function with terminal/command prompt](./create-first-function-cli-node.md)</li><li>[Node.js function using the Azure portal](functions-create-function-app-portal.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance considerations](functions-best-practices.md)</li></ul> | <ul><li>[Create serverless applications](/training/paths/create-serverless-applications/)</li><li>[Refactor Node.js and Express APIs to Serverless APIs](/training/modules/shift-nodejs-express-apis-serverless/)</li></ul> | [!INCLUDE [Programming Model Considerations](../../includes/functions-nodejs-model-considerations.md)] The following table shows each version of the Node.js programming model along wi ::: zone pivot="nodejs-model-v3" +# [JavaScript](#tab/javascript) + The required folder structure for a JavaScript project looks like the following example: ``` The main project folder, *<project_root>*, can contain the following files: - **local.settings.json**: Used to store app settings and connection strings when it's running locally. 
This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file). - **package.json**: Contains configuration options like a list of package dependencies, the main entrypoint, and scripts. +# [TypeScript](#tab/typescript) ++The required folder structure for a TypeScript project looks like the following example: ++``` +<project_root>/ + | - .vscode/ + | - dist/ + | - node_modules/ + | - myFirstFunction/ + | | - index.ts + | | - function.json + | - mySecondFunction/ + | | - index.ts + | | - function.json + | - .funcignore + | - host.json + | - local.settings.json + | - package.json + | - tsconfig.json +``` ++The main project folder, *<project_root>*, can contain the following files: ++- **.vscode/**: (Optional) Contains the stored Visual Studio Code configuration. To learn more, see [Visual Studio Code settings](https://code.visualstudio.com/docs/getstarted/settings). +- **dist/**: Contains the compiled JavaScript code after you run a build. The name of this folder can be configured in your "tsconfig.json" file, and should match the `scriptFile` property in your "function.json" files. +- **myFirstFunction/function.json**: Contains configuration for the function's trigger, inputs, and outputs. The name of the directory determines the name of your function. For TypeScript projects, this file must contain a `scriptFile` property pointing to your compiled JavaScript. +- **myFirstFunction/index.ts**: Stores your function code. To change this default file path, see [using scriptFile](#using-scriptfile). +- **.funcignore**: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains *.vscode/* to ignore your editor setting, *test/* to ignore test cases, and *local.settings.json* to prevent local app settings being published. +- **host.json**: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md). +- **local.settings.json**: Used to store app settings and connection strings when it's running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file). +- **package.json**: Contains configuration options like a list of package dependencies, the main entrypoint, and scripts. +- **tsconfig.json**: Contains TypeScript compiler options like the output directory. +++ ::: zone-end ::: zone pivot="nodejs-model-v4" +# [JavaScript](#tab/javascript) + The recommended folder structure for a JavaScript project looks like the following example: ``` The main project folder, *<project_root>*, can contain the following files: - **local.settings.json**: Used to store app settings and connection strings when it's running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file). - **package.json**: Contains configuration options like a list of package dependencies, the main entrypoint, and scripts. 
+# [TypeScript](#tab/typescript) ++The recommended folder structure for a TypeScript project looks like the following example: ++``` +<project_root>/ + | - .vscode/ + | - dist/ + | - node_modules/ + | - src/ + | | - functions/ + | | | - myFirstFunction.ts + | | | - mySecondFunction.ts + | - test/ + | | - functions/ + | | | - myFirstFunction.test.ts + | | | - mySecondFunction.test.ts + | - .funcignore + | - host.json + | - local.settings.json + | - package.json + | - tsconfig.json +``` ++The main project folder, *<project_root>*, can contain the following files: ++- **.vscode/**: (Optional) Contains the stored Visual Studio Code configuration. To learn more, see [Visual Studio Code settings](https://code.visualstudio.com/docs/getstarted/settings). +- **dist/**: Contains the compiled JavaScript code after you run a build. The name of this folder can be configured in your "tsconfig.json" file. +- **src/functions/**: The default location for all functions and their related triggers and bindings. +- **test/**: (Optional) Contains the test cases of your function app. +- **.funcignore**: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains *.vscode/* to ignore your editor setting, *test/* to ignore test cases, and *local.settings.json* to prevent local app settings being published. +- **host.json**: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md). +- **local.settings.json**: Used to store app settings and connection strings when it's running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file). +- **package.json**: Contains configuration options like a list of package dependencies, the main entrypoint, and scripts. +- **tsconfig.json**: Contains TypeScript compiler options like the output directory. +++ ::: zone-end <a name="exporting-an-async-function"></a> The main project folder, *<project_root>*, can contain the following files: ::: zone pivot="nodejs-model-v3" -The v3 model registers a function based on the existence of two files. First, you need a `function.json` file located in a folder one level down from the root of your app. The name of the folder determines the function's name and the file contains configuration for your function's inputs/outputs. Second, you need a JavaScript file containing your code. By default, the model looks for an `index.js` file in the same folder as your `function.json`. Your code must export a function using [`module.exports`](https://nodejs.org/api/modules.html#modules_module_exports) (or [`exports`](https://nodejs.org/api/modules.html#modules_exports)). To customize the file location or export name of your function, see [configuring your function's entry point](functions-reference-node.md#configure-function-entry-point). +The v3 model registers a function based on the existence of two files. First, you need a `function.json` file located in a folder one level down from the root of your app. Second, you need a JavaScript file that [exports](https://nodejs.org/api/modules.html#modules_module_exports) your function. By default, the model looks for an `index.js` file in the same folder as your `function.json`. If you're using TypeScript, you must use the [`scriptFile`](#using-scriptfile) property in `function.json` to point to the compiled JavaScript file. 
To customize the file location or export name of your function, see [configuring your function's entry point](functions-reference-node.md#configure-function-entry-point). The function you export should always be declared as an [`async function`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/async_function) in the v3 model. You can export a synchronous function, but then you must call [`context.done()`](#contextdone) to signal that your function is completed, which is deprecated and not recommended. Your function is passed an [invocation `context`](#invocation-context) as the fi The following example is a simple function that logs that it was triggered and responds with `Hello, world!`: +# [JavaScript](#tab/javascript) + ```json { "bindings": [ module.exports = async function (context, request) { }; ``` +# [TypeScript](#tab/typescript) ++```json +{ + "bindings": [ + { + "type": "httpTrigger", + "direction": "in", + "name": "req", + "authLevel": "anonymous", + "methods": [ + "get", + "post" + ] + }, + { + "type": "http", + "direction": "out", + "name": "res" + } + ], + "scriptFile": "../dist/HttpTrigger1/index.js" +} +``` ++```typescript +import { AzureFunction, Context, HttpRequest } from "@azure/functions"; ++const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + context.log('Http function was triggered.'); + context.res = { body: 'Hello, world!' }; +}; ++export default httpTrigger; +``` +++ ::: zone-end ::: zone pivot="nodejs-model-v4" Registering a function can be done from any file in your project, as long as tha The following example is a simple function that logs that it was triggered and responds with `Hello, world!`: +# [JavaScript](#tab/javascript) + ```javascript const { app } = require('@azure/functions'); -app.http('httpTrigger1', { +app.http('helloWorld1', { methods: ['POST', 'GET'], handler: async (request, context) => { context.log('Http function was triggered.'); app.http('httpTrigger1', { }); ``` +# [TypeScript](#tab/typescript) ++```typescript +import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions"; ++async function helloWorld1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> { + context.log('Http function was triggered.'); + return { body: 'Hello, world!' }; +}; ++app.http('helloWorld1', { + methods: ['GET', 'POST'], + handler: helloWorld1 +}); +``` +++ ::: zone-end <a name="bindings"></a> Inputs can be accessed in several ways: - **_[Recommended]_ As arguments passed to your function:** Use the arguments in the same order that they're defined in `function.json`. The `name` property defined in `function.json` doesn't need to match the name of your argument, although it's recommended for the sake of organization. + # [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context, myTrigger, myInput, myOtherInput) { ... }; ``` + # [TypeScript](#tab/typescript) ++ ```typescript + const httpTrigger: AzureFunction = async function (context: Context, myTrigger: HttpRequest, myInput: any, myOtherInput: any): Promise<void> { + ``` ++ + - **As properties of [`context.bindings`](#contextbindings):** Use the key matching the `name` property defined in `function.json`. 
+ # [JavaScript](#tab/javascript) + ```javascript- module.exports = async function (context) { + module.exports = async function (context) { context.log("This is myTrigger: " + context.bindings.myTrigger); context.log("This is myInput: " + context.bindings.myInput); context.log("This is myOtherInput: " + context.bindings.myOtherInput); }; ``` + # [TypeScript](#tab/typescript) ++ ```typescript + import { AzureFunction, Context } from "@azure/functions"; ++ const httpTrigger: AzureFunction = async function (context: Context): Promise<void> { + context.log("This is myTrigger: " + context.bindings.myTrigger); + context.log("This is myInput: " + context.bindings.myInput); + context.log("This is myOtherInput: " + context.bindings.myOtherInput); + } ++ export default httpTrigger; + ``` ++ + <a name="returning-from-the-function"></a> ### Outputs Outputs are bindings with `direction` set to `out` and can be set in several way } ``` + # [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context, request) { return { Outputs are bindings with `direction` set to `out` and can be set in several way } ``` + # [TypeScript](#tab/typescript) ++ ```typescript + import { AzureFunction, Context, HttpRequest, HttpResponseSimple } from "@azure/functions"; ++ const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<HttpResponseSimple> { + return { + body: "Hello, world!" + }; + }; ++ export default httpTrigger; + ``` ++ + - **_[Recommended for multiple outputs]_ Return an object containing all outputs:** If you're using an async function, you can return an object with a property matching the name of each binding in your `function.json`. The following example uses output bindings named "httpResponse" and "queueOutput": ```json Outputs are bindings with `direction` set to `out` and can be set in several way } ``` + # [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context, request) { let message = 'Hello, world!'; Outputs are bindings with `direction` set to `out` and can be set in several way }; ``` + # [TypeScript](#tab/typescript) ++ ```typescript + import { AzureFunction, Context, HttpRequest } from "@azure/functions"; ++ const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<any> { + let message = 'Hello, world!'; + return { + httpResponse: { + body: message + }, + queueOutput: message + }; + }; ++ export default httpTrigger; + ``` ++ + - **Set values on `context.bindings`:** If you're not using an async function or you don't want to use the previous options, you can set values directly on `context.bindings`, where the key matches the name of the binding. 
The following example uses output bindings named "httpResponse" and "queueOutput": ```json Outputs are bindings with `direction` set to `out` and can be set in several way } ``` + # [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context, request) { let message = 'Hello, world!'; Outputs are bindings with `direction` set to `out` and can be set in several way }; ``` + # [TypeScript](#tab/typescript) ++ ```typescript + import { AzureFunction, Context, HttpRequest } from "@azure/functions"; ++ const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + let message = 'Hello, world!'; + context.bindings.httpResponse = { + body: message + }; + context.bindings.queueOutput = message; + }; ++ export default httpTrigger; + ``` ++ + ### Bindings data type You can use the `dataType` property on an input binding to change the type of your input, however it has some limitations: In the following example of a [storage queue trigger](./functions-bindings-stora } ``` +# [JavaScript](#tab/javascript) + ```javascript const { Buffer } = require('node:buffer'); module.exports = async function (context, myQueueItem) { }; ``` +# [TypeScript](#tab/typescript) ++```typescript +import { AzureFunction, Context } from "@azure/functions"; +import { Buffer } from 'node:buffer'; ++const queueTrigger1: AzureFunction = async function (context: Context, myQueueItem: string | Buffer): Promise<void> { + if (typeof myQueueItem === 'string') { + context.log('myQueueItem is a string'); + } else if (Buffer.isBuffer(myQueueItem)) { + context.log('myQueueItem is a buffer'); + } +}; ++export default queueTrigger1; +``` +++ ::: zone-end ::: zone pivot="nodejs-model-v4" Your function is required to have exactly one primary input called the trigger. The trigger is the only required input or output. For most trigger types, you register a function by using a method on the `app` object named after the trigger type. You can specify configuration specific to the trigger directly on the `options` argument. For example, an HTTP trigger allows you to specify a route. During execution, the value corresponding to this trigger is passed in as the first argument to your handler. +# [JavaScript](#tab/javascript) + ```javascript const { app } = require('@azure/functions'); app.http('helloWorld1', { route: 'hello/world',- handler: async (request, ...) => { + handler: async (request, context) => { ... } }); ``` +# [TypeScript](#tab/typescript) ++```typescript +import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions"; ++async function helloWorld1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> { + ... +}; ++app.http('helloWorld1', { + route: 'hello/world', + handler: helloWorld1 +}); +``` +++ ### Return output The return output is optional, and in some cases configured by default. For example, an HTTP trigger registered with `app.http` is configured to return an HTTP response output automatically. For most output types, you specify the return configuration on the `options` argument with the help of the `output` object exported from the `@azure/functions` module. During execution, you set this output by returning it from your handler. 
The following example uses a [timer trigger](./functions-bindings-timer.md) and a [storage queue output](./functions-bindings-storage-queue-output.md): +# [JavaScript](#tab/javascript) + ```javascript const { app, output } = require('@azure/functions'); app.timer('timerTrigger1', {- ... + schedule: '0 */5 * * * *', return: output.storageQueue({ connection: 'storage_APPSETTING', ... }),- handler: () => { + handler: (myTimer, context) => { return { hello: 'world' } } }); ``` +# [TypeScript](#tab/typescript) ++```typescript +import { app, InvocationContext, Timer, output } from "@azure/functions"; ++async function timerTrigger1(myTimer: Timer, context: InvocationContext): Promise<any> { + return { hello: 'world' } +} ++app.timer('timerTrigger1', { + schedule: '0 */5 * * * *', + return: output.storageQueue({ + connection: 'storage_APPSETTING', + ... + }), + handler: timerTrigger1 +}); +``` +++ ### Extra inputs and outputs In addition to the trigger and return, you may specify extra inputs or outputs on the `options` argument when registering a function. The `input` and `output` objects exported from the `@azure/functions` module provide type-specific methods to help construct the configuration. During execution, you get or set the values with `context.extraInputs.get` or `context.extraOutputs.set`, passing in the original configuration object as the first argument. The following example is a function triggered by a [storage queue](./functions-bindings-storage-queue-trigger.md), with an extra [storage blob input](./functions-bindings-storage-blob-input.md) that is copied to an extra [storage blob output](./functions-bindings-storage-blob-output.md). The queue message should be the name of a file and replaces `{queueTrigger}` as the blob name to be copied, with the help of a [binding expression](./functions-bindings-expressions-patterns.md). +# [JavaScript](#tab/javascript) + ```javascript const { app, input, output } = require('@azure/functions'); app.storageQueue('copyBlob1', { }); ``` +# [TypeScript](#tab/typescript) ++```typescript +import { app, InvocationContext, input, output } from "@azure/functions"; ++const blobInput = input.storageBlob({ + connection: 'storage_APPSETTING', + path: 'helloworld/{queueTrigger}', +}); ++const blobOutput = output.storageBlob({ + connection: 'storage_APPSETTING', + path: 'helloworld/{queueTrigger}-copy', +}); ++async function copyBlob1(queueItem: unknown, context: InvocationContext): Promise<void> { + const blobInputValue = context.extraInputs.get(blobInput); + context.extraOutputs.set(blobOutput, blobInputValue); +} ++app.storageQueue('copyBlob1', { + queueName: 'copyblobqueue', + connection: 'storage_APPSETTING', + extraInputs: [blobInput], + extraOutputs: [blobOutput], + handler: copyBlob1 +}); +``` +++ ### Generic inputs and outputs The `app`, `trigger`, `input`, and `output` objects exported by the `@azure/functions` module provide type-specific methods for most types. For all the types that aren't supported, a `generic` method has been provided to allow you to manually specify the configuration. The `generic` method can also be used if you want to change the default settings provided by a type-specific method. The following example is a simple HTTP triggered function using generic methods instead of type-specific methods. 
+# [JavaScript](#tab/javascript) + ```javascript const { app, output, trigger } = require('@azure/functions'); app.generic('helloWorld1', { trigger: trigger.generic({ type: 'httpTrigger',- methods: ['GET'] + methods: ['GET', 'POST'] }), return: output.generic({ type: 'http' app.generic('helloWorld1', { }); ``` +# [TypeScript](#tab/typescript) +++```typescript +import { app, InvocationContext, HttpRequest, HttpResponseInit, output, trigger } from "@azure/functions"; ++async function helloWorld1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> { + context.log(`Http function processed request for url "${request.url}"`); ++ return { body: `Hello, world!` }; +} ++app.generic('helloWorld1', { + trigger: trigger.generic({ + type: 'httpTrigger', + methods: ['GET', 'POST'] + }), + return: output.generic({ + type: 'http' + }), + handler: helloWorld1 +}); +``` +++ ::: zone-end <a name="context-object"></a> The `context.bindings` object is used to read inputs or set outputs. The followi } ``` +# [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context, myQueueItem) { const blobValue = context.bindings.myInput; module.exports = async function (context, myQueueItem) { }; ``` +# [TypeScript](#tab/typescript) ++```typescript +import { AzureFunction, Context } from "@azure/functions"; ++const queueTrigger1: AzureFunction = async function (context: Context, myQueueItem: string): Promise<void> { + const blobValue = context.bindings.myInput; + context.bindings.myOutput = blobValue; +}; ++export default queueTrigger1; +``` +++ <a name="contextdone-method"></a> ### context.done The `context.done` method is deprecated. Before async functions were supported, you would signal your function is done by calling `context.done()`: +# [JavaScript](#tab/javascript) + ```javascript module.exports = function (context, request) { context.log("this pattern is now deprecated"); module.exports = function (context, request) { }; ``` +# [TypeScript](#tab/typescript) ++```typescript +import { AzureFunction, Context, HttpRequest } from "@azure/functions"; ++const httpTrigger: AzureFunction = function (context: Context, request: HttpRequest): void { + context.log("this pattern is now deprecated"); + context.done(); +}; ++export default httpTrigger; +``` +++ Now, it's recommended to remove the call to `context.done()` and mark your function as async so that it returns a promise (even if you don't `await` anything). As soon as your function finishes (in other words, the returned promise resolves), the v3 model knows your function is done. +# [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context, request) { context.log("you don't need context.done or an awaited call") }; ``` +# [TypeScript](#tab/typescript) ++```typescript +import { AzureFunction, Context, HttpRequest } from "@azure/functions"; ++const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + context.log("you don't need context.done or an awaited call") +}; ++export default httpTrigger; +``` +++ ::: zone-end ::: zone pivot="nodejs-model-v4" For more information, see [`retry-policies`](./functions-bindings-errors.md#retr In Azure Functions, it's recommended to use `context.log()` to write logs. Azure Functions integrates with Azure Application Insights to better capture your function app logs. 
Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application logs and your trace outputs. To learn more, see [monitoring Azure Functions](functions-monitoring.md). -> [!NOTE] +> [!NOTE] > If you use the alternative Node.js `console.log` method, those logs are tracked at the app-level and will *not* be associated with any specific function. It is *highly recommended* to use `context` for logging instead of `console` so that all logs are associated with a specific function. The following example writes a log at the default "information" level, including the invocation ID: +# [JavaScript](#tab/javascript) + ```javascript-context.log(`Something has happened. Invocation ID: "${context.invocationId}"`); +context.log(`Something has happened. Invocation ID: "${context.invocationId}"`); ``` +# [TypeScript](#tab/typescript) ++```typescript +context.log(`Something has happened. Invocation ID: "${context.invocationId}"`); +``` +++ <a name="trace-levels"></a> ### Log levels Azure Functions lets you define the threshold level to be used when tracking and ## Track custom data -By default, Azure Functions writes output as traces to Application Insights. For more control, you can instead use the [Application Insights Node.js SDK](https://github.com/microsoft/applicationinsights-node.js) to send custom data to your Application Insights instance. +By default, Azure Functions writes output as traces to Application Insights. For more control, you can instead use the [Application Insights Node.js SDK](https://github.com/microsoft/applicationinsights-node.js) to send custom data to your Application Insights instance. ++# [JavaScript](#tab/javascript) ```javascript const appInsights = require("applicationinsights"); module.exports = async function (context, request) { }; ``` +# [TypeScript](#tab/typescript) ++```typescript +import { AzureFunction, Context, HttpRequest } from "@azure/functions"; +import * as appInsights from 'applicationinsights'; ++appInsights.setup(); +const client = appInsights.defaultClient; ++const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + // Use this with 'tagOverrides' to correlate custom logs to the parent function invocation. + var operationIdOverride = {"ai.operation.id":context.traceContext.traceparent}; ++ client.trackEvent({name: "my custom event", tagOverrides:operationIdOverride, properties: {customProperty2: "custom property value"}}); + client.trackException({exception: new Error("handled exceptions can be logged with this method"), tagOverrides:operationIdOverride}); + client.trackMetric({name: "custom metric", value: 3, tagOverrides:operationIdOverride}); + client.trackTrace({message: "trace message", tagOverrides:operationIdOverride}); + client.trackDependency({target:"http://dbname", name:"select customers proc", data:"SELECT * FROM Customers", duration:231, resultCode:0, success: true, dependencyTypeName: "ZSQL", tagOverrides:operationIdOverride}); + client.trackRequest({name:"GET /customers", url:"http://myserver/customers", duration:309, resultCode:200, success:true, tagOverrides:operationIdOverride}); +}; ++export default httpTrigger; +``` +++ The `tagOverrides` parameter sets the `operation_Id` to the function's invocation ID. This setting enables you to correlate all of the automatically generated and custom logs for a given function invocation. 
::: zone-end The request can be accessed in several ways: - **As the second argument to your function:** + # [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context, request) { context.log(`Http function processed request for url "${request.url}"`); ``` -- **From the `context.req` property:** + # [TypeScript](#tab/typescript) ++ ```typescript + const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + context.log(`Http function processed request for url "${request.url}"`); + ``` ++ ++- **From the `context.req` property:** ++ # [JavaScript](#tab/javascript) ```javascript module.exports = async function (context, request) { context.log(`Http function processed request for url "${context.req.url}"`); ``` -- **From the named input bindings:** This option works the same as any non HTTP binding. The binding name in `function.json` must match the key on `context.bindings`, or "request1" in the following example: + # [TypeScript](#tab/typescript) ++ ```typescript + const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + context.log(`Http function processed request for url "${context.req.url}"`); + ``` ++ ++- **From the named input bindings:** This option works the same as any non HTTP binding. The binding name in `function.json` must match the key on `context.bindings`, or "request1" in the following example: ```json { The request can be accessed in several ways: ] } ```++ # [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context, request) { context.log(`Http function processed request for url "${context.bindings.request1.url}"`); ``` + # [TypeScript](#tab/typescript) ++ ```typescript + const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + context.log(`Http function processed request for url "${context.bindings.request1.url}"`); + ``` ++ + The `HttpRequest` object has the following properties: | Property | Type | Description | The `HttpRequest` object has the following properties: The request can be accessed as the first argument to your handler for an HTTP triggered function. +# [JavaScript](#tab/javascript) + ```javascript async (request, context) => { context.log(`Http function processed request for url "${request.url}"`); ``` +# [TypeScript](#tab/typescript) ++```typescript +async function helloWorld1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> { + context.log(`Http function processed request for url "${request.url}"`); +``` +++ The `HttpRequest` object has the following properties: | Property | Type | Description | In order to access a request or response's body, the following methods can be us | **`json()`** | `Promise<unknown>` | | **`text()`** | `Promise<string>` | -> [!NOTE] +> [!NOTE] > The body functions can be run only once; subsequent calls will resolve with empty strings/ArrayBuffers. 
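As a quick illustration of the body methods listed above, the following minimal sketch (v4 programming model) reads the request body once with `text()` and echoes it back; the function name is illustrative and not taken from the original article.

```javascript
// Minimal sketch (v4 model): read the request body once with text() and echo it back.
const { app } = require('@azure/functions');

app.http('echoBody1', {
    methods: ['POST'],
    handler: async (request, context) => {
        // Body methods such as text() and json() can be consumed only once per request.
        const bodyText = await request.text();
        context.log(`Received ${bodyText.length} characters in the request body.`);
        return { body: bodyText };
    }
});
```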
::: zone-end In order to access a request or response's body, the following methods can be us The response can be set in several ways: -- **Set the `context.res` property:** +- **Set the `context.res` property:** ++ # [JavaScript](#tab/javascript) ```javascript module.exports = async function (context, request) { context.res = { body: `Hello, world!` }; ``` + # [TypeScript](#tab/typescript) ++ ```typescript + const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + context.res = { body: `Hello, world!` }; + ``` ++ + - **Return the response:** If your function is async and you set the binding name to `$return` in your `function.json`, you can return the response directly instead of setting it on `context`. ```json The response can be set in several ways: } ``` + # [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context, request) { return { body: `Hello, world!` }; ``` + # [TypeScript](#tab/typescript) ++ ```typescript + const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<HttpResponseSimple> { + return { body: `Hello, world!` }; + ``` ++ + - **Set the named output binding:** This option works the same as any non HTTP binding. The binding name in `function.json` must match the key on `context.bindings`, or "response1" in the following example: ```json The response can be set in several ways: } ``` + # [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context, request) { context.bindings.response1 = { body: `Hello, world!` }; ``` + # [TypeScript](#tab/typescript) ++ ```typescript + const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + context.bindings.response1 = { body: `Hello, world!` }; + ``` ++ + - **Call `context.res.send()`:** This option is deprecated. It implicitly calls `context.done()` and can't be used in an async function. + # [JavaScript](#tab/javascript) + ```javascript module.exports = function (context, request) { context.res.send(`Hello, world!`); ``` + # [TypeScript](#tab/typescript) ++ ```typescript + const httpTrigger: AzureFunction = function (context: Context, request: HttpRequest): void { + context.res.send(`Hello, world!`); + ``` ++ + If you create a new object when setting the response, that object must match the `HttpResponseSimple` interface, which has the following properties: | Property | Type | Description | The response can be set in several ways: - **As a simple interface with type `HttpResponseInit`:** This option is the most concise way of returning responses. + # [JavaScript](#tab/javascript) + ```javascript return { body: `Hello, world!` }; ``` + # [TypeScript](#tab/typescript) ++ ```typescript + return { body: `Hello, world!` }; + ``` ++ + The `HttpResponseInit` interface has the following properties: | Property | Type | Description | The response can be set in several ways: - **As a class with type `HttpResponse`:** This option provides helper methods for reading and modifying various parts of the response like the headers. 
+ # [JavaScript](#tab/javascript) + ```javascript const response = new HttpResponse({ body: `Hello, world!` }); response.headers.set('content-type', 'application/json'); return response; ``` + # [TypeScript](#tab/typescript) ++ ```typescript + const response = new HttpResponse({ body: `Hello, world!` }); + response.headers.set('content-type', 'application/json'); + return response; + ``` ++ + The `HttpResponse` class accepts an optional `HttpResponseInit` as an argument to its constructor and has the following properties:- + | Property | Type | Description | | -- | - | -- | | **`status`** | `number` | HTTP response status code. | The following example logs the `WEBSITE_SITE_NAME` environment variable: ::: zone pivot="nodejs-model-v3" +# [JavaScript](#tab/javascript) + ```javascript module.exports = async function (context) { context.log(`WEBSITE_SITE_NAME: ${process.env["WEBSITE_SITE_NAME"]}`); } ``` +# [TypeScript](#tab/typescript) ++```typescript +const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + context.log(`WEBSITE_SITE_NAME: ${process.env["WEBSITE_SITE_NAME"]}`); +} +``` +++ ::: zone-end ::: zone pivot="nodejs-model-v4" +# [JavaScript](#tab/javascript) + ```javascript async function timerTrigger1(myTimer, context) { context.log(`WEBSITE_SITE_NAME: ${process.env["WEBSITE_SITE_NAME"]}`); } ``` +# [TypeScript](#tab/typescript) ++```typescript +async function timerTrigger1(myTimer: Timer, context: InvocationContext): Promise<void> { + context.log(`WEBSITE_SITE_NAME: ${process.env["WEBSITE_SITE_NAME"]}`); +} +``` +++ ::: zone-end ### In local development environment -When you run locally, your functions project includes a [`local.settings.json` file](./functions-run-local.md), where you store your environment variables in the `Values` object. +When you run locally, your functions project includes a [`local.settings.json` file](./functions-run-local.md), where you store your environment variables in the `Values` object. ```json { When you run locally, your functions project includes a [`local.settings.json` f ### In Azure cloud environment -When you run in Azure, the function app lets you set and use [Application settings](functions-app-settings.md), such as service connection strings, and exposes these settings as environment variables during execution. +When you run in Azure, the function app lets you set and use [Application settings](functions-app-settings.md), such as service connection strings, and exposes these settings as environment variables during execution. [!INCLUDE [Function app settings](../../includes/functions-app-settings.md)] To use ES modules in a function, change its filename to use a `.mjs` extension. 
::: zone pivot="nodejs-model-v3" -```js +# [JavaScript](#tab/javascript) ++```javascript import { v4 as uuidv4 } from 'uuid'; async function httpTrigger1(context, request) { context.res.body = uuidv4(); }; -export default httpTrigger1; +export default httpTrigger; +``` ++# [TypeScript](#tab/typescript) ++```typescript +import { AzureFunction, Context } from "@azure/functions"; +import { v4 as uuidv4 } from 'uuid'; ++const httpTrigger: AzureFunction = async function (context: Context, request: HttpRequest): Promise<void> { + context.res.body = uuidv4(); +}; ++export default httpTrigger; ``` ++ ::: zone-end ::: zone pivot="nodejs-model-v4" -```js +# [JavaScript](#tab/javascript) ++```javascript import { v4 as uuidv4 } from 'uuid'; async function httpTrigger1(request, context) {- context.res.body = uuidv4(); + return { body: uuidv4() }; };++app.http('httpTrigger1', { + methods: ['GET', 'POST'], + handler: httpTrigger1 +}); ``` +# [TypeScript](#tab/typescript) ++```typescript +import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions"; +import { v4 as uuidv4 } from 'uuid'; ++async function httpTrigger1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> { + return { body: uuidv4() }; +}; ++app.http('httpTrigger1', { + methods: ['GET', 'POST'], + handler: httpTrigger1 +}); +``` +++ ::: zone-end ::: zone pivot="nodejs-model-v3" ## Configure function entry point -The `function.json` properties `scriptFile` and `entryPoint` can be used to configure the location and name of your exported function. These properties can be important when your JavaScript is transpiled. +The `function.json` properties `scriptFile` and `entryPoint` can be used to configure the location and name of your exported function. The `scriptFile` property is required when you're using TypeScript and should point to the compiled JavaScript. ### Using `scriptFile` In the v3 model, a function must be exported using `module.exports` in order to } ``` +# [JavaScript](#tab/javascript) + ```javascript-async function logHello(context) { - context.log('Hello, world!'); +async function logHello(context) { + context.log('Hello, world!'); } module.exports = { logHello }; ``` --## Local debugging --It's recommended to use VS Code for local debugging, which starts your Node.js process in debug mode automatically and attaches to the process for you. For more information, see [run the function locally](./create-first-function-vs-code-node.md#run-the-function-locally). --If you're using a different tool for debugging or want to start your Node.js process in debug mode manually, add `"languageWorkers__node__arguments": "--inspect"` under `Values` in your [local.settings.json](./functions-develop-local.md#local-settings-file). The `--inspect` argument tells Node.js to listen for a debug client, on port 9229 by default. For more information, see the [Node.js debugging guide](https://nodejs.org/en/docs/guides/debugging-getting-started). --## TypeScript --Both [Azure Functions for Visual Studio Code](./create-first-function-cli-typescript.md) and the [Azure Functions Core Tools](functions-run-local.md) let you create function apps using a template that supports TypeScript function app projects. The template generates `package.json` and `tsconfig.json` project files that make it easier to transpile, run, and publish JavaScript functions from TypeScript code with these tools. --A generated `.funcignore` file is used to indicate which files are excluded when a project is published to Azure. 
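To show how `scriptFile` and `entryPoint` fit together for transpiled code, here's a rough `function.json` sketch for the `logHello` example above. Only `scriptFile`, `entryPoint`, and the `logHello` name come from the surrounding text; the `../dist/logHello/index.js` path and the HTTP bindings are assumptions made for illustration.

```json
{
  "scriptFile": "../dist/logHello/index.js",
  "entryPoint": "logHello",
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```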
---TypeScript files (.ts) are transpiled into JavaScript files (.js) in the `dist` output directory. TypeScript templates use the [`scriptFile` parameter](#using-scriptfile) in `function.json` to indicate the location of the corresponding .js file in the `dist` folder. The setting `outDir` in your `tsconfig.json` file controls the output location. If you change this setting or the name of the folder, the runtime isn't able to find the code to run. ---The way that you locally develop and deploy from a TypeScript project depends on your development tool. --### Visual Studio Code --The [Azure Functions for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) extension lets you develop your functions using TypeScript. The Core Tools is a requirement of the Azure Functions extension. --To create a TypeScript function app in Visual Studio Code, choose `TypeScript` as your language when you create a function app. --When you press **F5** to run the app locally, transpilation is done before the host (func.exe) is initialized. --When you deploy your function app to Azure using the **Deploy to function app...** button, the Azure Functions extension first generates a production-ready build of JavaScript files from the TypeScript source files. +# [TypeScript](#tab/typescript) -### Azure Functions Core Tools +```typescript +import { AzureFunction, Context } from "@azure/functions"; -There are several ways in which a TypeScript project differs from a JavaScript project when using the Core Tools. --#### Create project --To create a TypeScript function app project using Core Tools, you must specify the TypeScript language option when you create your function app. You can create an app in one of the following ways: ---- Run the `func init` command, select `node` as your language stack, and then select `typescript`.-- Run the `func init --worker-runtime typescript` command.----- Run the `func init --model v4` command, select `node` as your language stack, and then select `typescript`.-- Run the `func init --model v4 --worker-runtime typescript` command.---#### Run local --To run your function app code locally using Core Tools, use the following commands instead of `func host start`: +const logHello: AzureFunction = async function (context: Context): Promise<void> { + context.log('Hello, world!'); +}; -```command -npm install -npm start +export default { logHello }; ``` -The `npm start` command is equivalent to the following commands: --- `npm run build`-- `tsc`-- `func start`--#### Publish to Azure + -Before you use the [`func azure functionapp publish`] command to deploy to Azure, you create a production-ready build of JavaScript files from the TypeScript source files. -The following commands prepare and publish your TypeScript project using Core Tools: +## Local debugging -```command -npm run build -func azure functionapp publish <APP_NAME> -``` +It's recommended to use VS Code for local debugging, which starts your Node.js process in debug mode automatically and attaches to the process for you. For more information, see [run the function locally](./create-first-function-vs-code-node.md#run-the-function-locally). -In this command, replace `<APP_NAME>` with the name of your function app. +If you're using a different tool for debugging or want to start your Node.js process in debug mode manually, add `"languageWorkers__node__arguments": "--inspect"` under `Values` in your [local.settings.json](./functions-develop-local.md#local-settings-file). 
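A minimal `local.settings.json` sketch with that setting in place might look like the following; only the `languageWorkers__node__arguments` entry comes from the guidance above, and the storage and runtime values are placeholders.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "languageWorkers__node__arguments": "--inspect"
  }
}
```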
The `--inspect` argument tells Node.js to listen for a debug client, on port 9229 by default. For more information, see the [Node.js debugging guide](https://nodejs.org/en/docs/guides/debugging-getting-started). <a name="considerations-for-javascript-functions"></a> This section describes several impactful patterns for Node.js apps that we recom ### Choose single-vCPU App Service plans -When you create a function app that uses the App Service plan, we recommend that you select a single-vCPU plan rather than a plan with multiple vCPUs. Today, Functions runs JavaScript functions more efficiently on single-vCPU VMs, and using larger VMs doesn't produce the expected performance improvements. When necessary, you can manually scale out by adding more single-vCPU VM instances, or you can enable autoscale. For more information, see [Scale instance count manually or automatically](../azure-monitor/autoscale/autoscale-get-started.md?toc=/azure/app-service/toc.json). +When you create a function app that uses the App Service plan, we recommend that you select a single-vCPU plan rather than a plan with multiple vCPUs. Today, Functions runs Node.js functions more efficiently on single-vCPU VMs, and using larger VMs doesn't produce the expected performance improvements. When necessary, you can manually scale out by adding more single-vCPU VM instances, or you can enable autoscale. For more information, see [Scale instance count manually or automatically](../azure-monitor/autoscale/autoscale-get-started.md?toc=/azure/app-service/toc.json). <a name="cold-start"></a> ### Run from a package file -When you develop Azure Functions in the serverless hosting model, cold starts are a reality. *Cold start* refers to the first time your function app starts after a period of inactivity, taking longer to start up. For JavaScript apps with large dependency trees in particular, cold start can be significant. To speed up the cold start process, [run your functions as a package file](run-functions-from-deployment-package.md) when possible. Many deployment methods use this model by default, but if you're experiencing large cold starts you should check to make sure you're running this way. +When you develop Azure Functions in the serverless hosting model, cold starts are a reality. *Cold start* refers to the first time your function app starts after a period of inactivity, taking longer to start up. For Node.js apps with large dependency trees in particular, cold start can be significant. To speed up the cold start process, [run your functions as a package file](run-functions-from-deployment-package.md) when possible. Many deployment methods use this model by default, but if you're experiencing large cold starts you should check to make sure you're running this way. <a name="connection-limits"></a> When you use a service-specific client in an Azure Functions application, don't ### Use `async` and `await` -When writing Azure Functions in JavaScript, you should write code using the `async` and `await` keywords. Writing code using `async` and `await` instead of callbacks or `.then` and `.catch` with Promises helps avoid two common problems: +When writing Azure Functions in Node.js, you should write code using the `async` and `await` keywords. 
Writing code using `async` and `await` instead of callbacks or `.then` and `.catch` with Promises helps avoid two common problems: - Throwing uncaught exceptions that [crash the Node.js process](https://nodejs.org/api/process.html#process_warning_using_uncaughtexception_correctly), potentially affecting the execution of other functions. - Unexpected behavior, such as missing logs from `context.log`, caused by asynchronous calls that aren't properly awaited. In the following example, the asynchronous method `fs.readFile` is invoked with an error-first callback function as its second parameter. This code causes both of the issues previously mentioned. An exception that isn't explicitly caught in the correct scope can crash the entire process (issue #1). Calling the deprecated `context.done()` method outside of the scope of the callback can signal the function is finished before the file is read (issue #2). In this example, calling `context.done()` too early results in missing log entries starting with `Data from file:`. +# [JavaScript](#tab/javascript) + ```javascript // NOT RECOMMENDED PATTERN const fs = require('fs'); module.exports = function (context) { } ``` +# [TypeScript](#tab/typescript) ++```typescript +// NOT RECOMMENDED PATTERN +import { AzureFunction, Context } from "@azure/functions"; +import * as fs from 'fs'; ++const trigger1: AzureFunction = function (context: Context): void { + fs.readFile('./hello.txt', (err, data) => { + if (err) { + context.log.error('ERROR', err); + // BUG #1: This will result in an uncaught exception that crashes the entire process + throw err; + } + context.log(`Data from file: ${data}`); + // context.done() should be called here + }); + // BUG #2: Data is not guaranteed to be read before the Azure Function's invocation ends + context.done(); +} ++export default trigger1; +``` +++ Use the `async` and `await` keywords to help avoid both of these issues. Most APIs in the Node.js ecosystem have been converted to support promises in some form. For example, starting in v14, Node.js provides an `fs/promises` API to replace the `fs` callback API. In the following example, any unhandled exceptions thrown during the function execution only fail the individual invocation that raised the exception. The `await` keyword means that steps following `readFile` only execute after it's complete. With `async` and `await`, you also don't need to call the `context.done()` callback. +# [JavaScript](#tab/javascript) + ```javascript // Recommended pattern const fs = require('fs/promises'); module.exports = async function (context) { } ``` +# [TypeScript](#tab/typescript) ++```typescript +// Recommended pattern +import { AzureFunction, Context } from "@azure/functions"; +import * as fs from 'fs/promises'; ++const trigger1: AzureFunction = async function (context: Context): Promise<void> { + let data: Buffer; + try { + data = await fs.readFile('./hello.txt'); + } catch (err) { + context.log.error('ERROR', err); + // This rethrown exception will be handled by the Functions Runtime and will only fail the individual invocation + throw err; + } + context.log(`Data from file: ${data}`); +} ++export default trigger1; +``` +++ ::: zone-end ## Next steps |
azure-functions | Functions Run Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md | There are no other considerations for PowerShell. + To use a `--worker-runtime` value of `node`, specify the `--language` as `typescript`. -+ See the [TypeScript section in the JavaScript developer reference](functions-reference-node.md#typescript) for `func init` behaviors specific to TypeScript. - ## Register extensions |
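For the Core Tools change noted above, the two flags combine into a single init call; this is a representative invocation rather than a command taken verbatim from the article:

```command
func init --worker-runtime node --language typescript
```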
azure-functions | Supported Languages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/supported-languages.md | To learn more about how to develop functions in the supported languages, see the + [C# class library developer reference](functions-dotnet-class-library.md) + [C# script developer reference](functions-reference-csharp.md) + [Java developer reference](functions-reference-java.md)-+ [JavaScript developer reference](functions-reference-node.md) ++ [JavaScript developer reference](functions-reference-node.md?tabs=javascript) + [PowerShell developer reference](functions-reference-powershell.md) + [Python developer reference](functions-reference-python.md)-+ [TypeScript developer reference](functions-reference-node.md#typescript) ++ [TypeScript developer reference](functions-reference-node.md?tabs=typescript) |
azure-monitor | Azure Monitor Agent Troubleshoot Linux Vm Rsyslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md | Linux AMA buffers events to `/var/opt/microsoft/azuremonitoragent/events` prior ### Confirming the issue of Full Disk The `df` command shows almost no space available on `/dev/sda1`, as shown below: +```bash + df -h ```-$ df -h +```output Filesystem Size Used Avail Use% Mounted on udev 63G 0 63G 0% /dev tmpfs 13G 720K 13G 1% /run tmpfs 13G 0 13G 0% /run/user/1000 The `du` command can be used to inspect the disk to determine which files are causing the disk to be full. For example: +```bash + cd /var/log + du -h syslog* ```-/var/log$ du -h syslog* +```output 6.7G syslog 18G syslog.1 ``` In some cases, `du` may not report any significantly large files/directories. It may be possible that a [file marked as (deleted) is taking up the space](https://unix.stackexchange.com/questions/182077/best-way-to-free-disk-space-from-deleted-files-that-are-held-open). This issue can happen when some other process has attempted to delete a file, but there remains a process with the file still open. The `lsof` command can be used to check for such files. In the example below, we see that `/var/log/syslog` is marked as deleted, but is taking up 3.6 GB of disk space. It hasn't been deleted because a process with PID 1484 still has the file open. -``` -$ sudo lsof +L1 +```bash + sudo lsof +L1 +``` ++```output COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME none 849 root txt REG 0,1 8632 0 16764 / (deleted) rsyslogd 1484 syslog 14w REG 8,1 3601566564 0 35280 /var/log/syslog (deleted) AMA doesn't rely on syslog events being logged to `/var/log/syslog`. Instead, it If you're sending a high log volume through rsyslog, consider modifying the default rsyslog config to avoid logging these events to this location `/var/log/syslog`. The events for this facility would still be forwarded to AMA because of the config in `/etc/rsyslog.d/10-azuremonitoragent.conf`. 1. For example, to remove local4 events from being logged at `/var/log/syslog`, change this line in `/etc/rsyslog.d/50-default.conf` from this:- ``` + ```config *.*;auth,authpriv.none -/var/log/syslog ``` To this (add local4.none;): - ``` + ```config *.*;local4.none;auth,authpriv.none -/var/log/syslog ``` 2. `sudo systemctl restart rsyslog` |
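In the same spirit as the `df` and `du` checks above, if you suspect the agent's own event buffer is part of the growth, a quick size check of the buffer directory mentioned at the start of this entry can help. This is a suggested extra step, not one from the article:

```bash
# Size of the Azure Monitor Agent event buffer directory (path quoted earlier in this entry).
du -sh /var/opt/microsoft/azuremonitoragent/events
```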
azure-monitor | Itsm Connector Secure Webhook Connections Azure Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md | Title: IT Service Management Connector - Secure Webhook in Azure Monitor - Azure Configurations -description: This article shows you how to configure Azure in order to connect your ITSM products/services with Secure Webhook in Azure Monitor to centrally monitor and manage ITSM work items. + Title: 'IT Service Management Connector: Secure Webhook in Azure Monitor - Azure configurations' +description: This article shows you how to configure Azure to connect your ITSM products or services with Secure Webhook in Azure Monitor to centrally monitor and manage ITSM work items. Last updated 04/28/2022 -# Configure Azure to connect ITSM tools using Secure Webhook +# Configure Azure to connect ITSM tools by using Secure Webhook This article describes the required Azure configurations for using Secure Webhook.+ ## Register with Azure Active Directory -Follow these steps to register the application with Azure AD: +To register the application with Azure Active Directory (Azure AD): 1. Follow the steps in [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).-2. In Azure AD, select **Expose application**. -3. Select **Set** for **Application ID URI**. +1. In Azure AD, select **Expose application**. +1. Select **Set** for **Application ID URI**. ++ [](media/itsm-connector-secure-webhook-connections-azure-configuration/azure-ad-expand.png#lightbox) +1. Select **Save**. - [](media/itsm-connector-secure-webhook-connections-azure-configuration/azure-ad-expand.png#lightbox) -4. Select **Save**. +## Define a service principal -## Define service principal +The action group service is a first-party application. It has permission to acquire authentication tokens from your Azure AD application to authenticate with ServiceNow. -The Action group service is a first party application, and has permission to acquire authentication tokens from your Azure AD application in order to authenticate with ServiceNow. -As an optional step you can define application role in the created appΓÇÖs manifest, which can allow you to further restrict, access in a way that only certain applications with that specific role can send messages. This role has to be then assigned to the Action Group service principal (Requires tenant admin privileges). +As an optional step, you can define an application role in the created app's manifest. This way, you can further restrict access so that only certain applications with that specific role can send messages. This role has to be then assigned to the Action Group service principal. Tenant admin privileges are required. -This step can be done through the same [PowerShell commands](../alerts/action-groups.md#secure-webhook-powershell-script). +You can do this step by using the same [PowerShell commands](../alerts/action-groups.md#secure-webhook-powershell-script). ## Create a Secure Webhook action group -After your application is registered with Azure AD, you can create work items in your ITSM tool based on Azure alerts, by using the Secure Webhook action in action groups. +After your application is registered with Azure AD, you can create work items in your ITSM tool based on Azure alerts by using the Secure Webhook action in action groups. 
++Action groups provide a modular and reusable way of triggering actions for Azure alerts. You can use action groups with metric alerts, activity log alerts, and Log Analytics alerts in the Azure portal. -Action groups provide a modular and reusable way of triggering actions for Azure alerts. You can use action groups with metric alerts, Activity Log alerts, and Azure Log Analytics alerts in the Azure portal. To learn more about action groups, see [Create and manage action groups in the Azure portal](../alerts/action-groups.md). > [!NOTE] To learn more about action groups, see [Create and manage action groups in the A To add a webhook to an action, follow these instructions for Secure Webhook: 1. In the [Azure portal](https://portal.azure.com/), search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view.-2. Select **Alerts** > **Manage actions**. -3. Select [Add action group](../alerts/action-groups.md#create-an-action-group-by-using-the-azure-portal), and fill in the fields. -4. Enter a name in the **Action group name** box, and enter a name in the **Short name** box. The short name is used in place of a full action group name when notifications are sent using this group. -5. Select **Secure Webhook**. -6. Select these details: - 1. Select the object ID of the Azure Active Directory instance that you registered. - 2. For the URI, paste in the webhook URL that you copied from the [ITSM tool environment](#configure-the-itsm-tool-environment). - 3. Set **Enable the common Alert Schema** to **Yes**. +1. Select **Alerts** > **Manage actions**. +1. Select [Add action group](../alerts/action-groups.md#create-an-action-group-by-using-the-azure-portal) and fill in the fields. +1. Enter a name in the **Action group name** box and enter a name in the **Short name** box. The short name is used in place of a full action group name when notifications are sent by using this group. +1. Select **Secure Webhook**. +1. Select these details: + 1. Select the object ID of the Azure AD instance that you registered. + 1. For the URI, paste in the webhook URL that you copied from the [ITSM tool environment](#configure-the-itsm-tool-environment). + 1. Set **Enable the common Alert Schema** to **Yes**. The following image shows the configuration of a sample Secure Webhook action: To add a webhook to an action, follow these instructions for Secure Webhook: ## Configure the ITSM tool environment Secure Webhook supports connections with the following ITSM tools:- * [ServiceNow](./itsmc-secure-webhook-connections-servicenow.md) - * [BMC Helix](./itsmc-secure-webhook-connections-bmc.md) + * [ServiceNow](./itsmc-secure-webhook-connections-servicenow.md) + * [BMC Helix](./itsmc-secure-webhook-connections-bmc.md) To configure the ITSM tool environment:-1. Get the URI for the secure Webhook definition. -2. Create definitions based on ITSM tool flow. ++1. Get the URI for the Secure Webhook definition. +1. Create definitions based on ITSM tool flow. + ## Next steps -* [ServiceNow Secure Webhook Configuration](./itsmc-secure-webhook-connections-servicenow.md) -* [BMC Secure Webhook Configuration](./itsmc-secure-webhook-connections-bmc.md) +* [ServiceNow Secure Webhook configuration](./itsmc-secure-webhook-connections-servicenow.md) +* [BMC Secure Webhook configuration](./itsmc-secure-webhook-connections-bmc.md) |
azure-monitor | Itsmc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md | Depending on your integration, start connecting to your ITSM tool with these ste - For ServiceNow ITOM events or BMC Helix, use the secure webhook action: 1. [Register your app with Azure Active Directory](./itsm-connector-secure-webhook-connections-azure-configuration.md#register-with-azure-active-directory).- 1. [Define a service principal](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-service-principal). + 1. [Define a service principal](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-a-service-principal). 1. [Create a secure webhook action group](./itsm-connector-secure-webhook-connections-azure-configuration.md#create-a-secure-webhook-action-group). 1. Configure your partner environment. Secure Export supports connections with the following ITSM tools: - [ServiceNow ITOM](./itsmc-secure-webhook-connections-servicenow.md) |
azure-monitor | Itsmc Secure Webhook Connections Bmc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-bmc.md | Title: IT Service Management Connector - Secure Webhook in Azure Monitor - Configuration with BMC -description: This article shows you how to connect your ITSM products/services with BMC on Secure Webhook in Azure Monitor. + Title: 'IT Service Management Connector: Secure Webhook in Azure Monitor - Configuration with BMC' +description: This article shows you how to connect your ITSM products or services with BMC on Secure Webhook in Azure Monitor. Last updated 03/30/2022 The following sections provide details about how to connect your BMC Helix produ Ensure that you've met the following prerequisites: -* Azure AD is registered. +* Azure Active Directory is registered. * You have the supported version of BMC Helix Multi-Cloud Service Management (version 19.08 or later). ## Configure the BMC Helix connection -1. Use the following procedure in the BMC Helix environment in order to get the URI for the secure Webhook: +1. Use the following procedure in the BMC Helix environment to get the URI for the Secure Webhook: - 1. Log in to Integration Studio. + 1. Sign in to Integration Studio. 1. Search for the **Create Incident from Azure Alerts** flow.- 1. Copy the webhook URL . + 1. Copy the webhook URL. -  +  -2. Follow the instructions according to the version: - * [Enabling prebuilt integration with Azure Monitor for version 20.02](https://docs.bmc.com/docs/multicloud/enabling-prebuilt-integration-with-azure-monitor-879728195.html). - * [Enabling prebuilt integration with Azure Monitor for version 19.11](https://docs.bmc.com/docs/multicloudprevious/enabling-prebuilt-integration-with-azure-monitor-904157623.html). +1. Follow the instructions according to the version: + * [Enabling prebuilt integration with Azure Monitor for version 20.02](https://docs.bmc.com/docs/multicloud/enabling-prebuilt-integration-with-azure-monitor-879728195.html) + * [Enabling prebuilt integration with Azure Monitor for version 19.11](https://docs.bmc.com/docs/multicloudprevious/enabling-prebuilt-integration-with-azure-monitor-904157623.html) -3. As a part of the configuration of the connection in BMC Helix, go into your integration BMC instance and follow these instructions: +1. As a part of the configuration of the connection in BMC Helix, go into your integration BMC instance and follow these instructions: 1. Select **catalog**.- 2. Select **Azure alerts**. - 3. Select **connectors**. - 4. Select **configuration**. - 5. Select the **add new connection** configuration. - 6. Fill in the information for the configuration section: + 1. Select **Azure alerts**. + 1. Select **connectors**. + 1. Select **configuration**. + 1. Select the **add new connection** configuration. + 1. Fill in the information for the configuration section: - **Name**: Make up your own. - **Authorization type**: **NONE** - **Description**: Make up your own. Ensure that you've met the following prerequisites: - **Check**: Selected by default to enable usage. - The Azure tenant ID and Azure application ID are taken from the application that you defined earlier. - +  |
azure-monitor | Itsmc Synced Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-synced-data.md | Title: Data synced from your ITSM product to LA Workspace -description: This article provides an overview of Data synced from your ITSM product to LA Workspace. +description: This article provides an overview of data synced from your ITSM product to LA Workspace. Last updated 2/23/2022 -Incidents and change requests are synced from [ServiceNow](./itsmc-connections-servicenow.md) to your Log Analytics workspace, based on the connection's configuration using the "Sync Data" field: +Incidents and change requests are synced from [ServiceNow](./itsmc-connections-servicenow.md) to your Log Analytics workspace based on the connection's configuration by using the **Sync Data** field. + ## Synced data -This section shows some examples of data gathered by ITSMC. +This section shows some examples of data gathered by ITSM Connector. -The fields in **ServiceDesk_CL** vary depending on the work item type that you import into Log Analytics. Here's a list of fields for two work item types: +The fields in **ServiceDesk_CL** vary depending on the work item type that you import into Log Analytics. Here are fields for two work item types: -**Work item:** **Incidents** +**Work item:** **Incidents** ServiceDeskWorkItemType_s="Incident" **Fields** ServiceDeskWorkItemType_s="ChangeRequest" | Title_s| Short description | | Description_s| Notes | | CreatedDate_t| Opened |-| ClosedDate_t| closed| +| ClosedDate_t| Closed| | ResolvedDate_t|Resolved| | Computer | Configuration item | ServiceDeskWorkItemType_s="ChangeRequest" | WorkStartDate_t | Actual start date | | WorkEndDate_t | Actual end date| | Description_s | Description |-| Computer | Configuration Item | +| Computer | Configuration item | ## Next steps -* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md) +[Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md) |
azure-monitor | Proactive Arm Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-arm-config.md | Title: Smart detection rule settings - Azure Application Insights -description: Automate management and configuration of Azure Application Insights smart detection rules with Azure Resource Manager Templates + Title: 'Smart detection rule settings: Application Insights' +description: Automate management and configuration of Application Insights smart detection rules with Azure Resource Manager templates. Last updated 02/14/2021 -# Manage Application Insights smart detection rules using Azure Resource Manager templates +# Manage Application Insights smart detection rules by using Azure Resource Manager templates >[!NOTE]->You can migrate your Application Insight resources to alerts-bases smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections. +>You can migrate your Application Insight resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. After you create the rules, you can manage and configure them like any other Azure Monitor alert rules. You can also configure action groups for these rules to enable multiple methods of taking actions or triggering notification on new detections. >-> See [Smart Detection Alerts migration](./alerts-smart-detections-migration.md) for more details on the migration process and the behavior of smart detection after the migration. -> +> For more information on the migration process and the behavior of smart detection after the migration, see [Smart detection alerts migration](./alerts-smart-detections-migration.md). +> ++ You can manage and configure smart detection rules in Application Insights by using [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md). -Smart detection rules in Application Insights can be managed and configured using [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md). -This method can be used when deploying new Application Insights resources with Azure Resource Manager automation, or for modifying the settings of existing resources. +You can use this method when you deploy new Application Insights resources with Resource Manager automation or when you modify the settings of existing resources. ## Smart detection rule configuration You can configure the following settings for a smart detection rule:-- If the rule is enabled (the default is **true**.)-- If emails should be sent to users associated to the subscriptionΓÇÖs [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) and [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) roles when a detection is found (the default is **true**.)-- Any additional email recipients who should get a notification when a detection is found.- - Email configuration is not available for Smart Detection rules marked as _preview_. +- If the rule is enabled. (The default is **true**.) 
+- If emails should be sent to users associated to the subscription's [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) and [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) roles when a detection is found. (The default is **true**.) +- Any other email recipients who should get a notification when a detection is found. + - Email configuration isn't available for smart detection rules marked as _preview_. ++To allow configuring the rule settings via Resource Manager, the smart detection rule configuration is available as an inner resource within the Application Insights resource. It's named **ProactiveDetectionConfigs**. -To allow configuring the rule settings via Azure Resource Manager, the smart detection rule configuration is now available as an inner resource within the Application Insights resource, named **ProactiveDetectionConfigs**. -For maximal flexibility, each smart detection rule can be configured with unique notification settings. +For maximal flexibility, you can configure each smart detection rule with unique notification settings. ## Examples -Below are a few examples showing how to configure the settings of smart detection rules using Azure Resource Manager templates. -All samples refer to an Application Insights resource named _ΓÇ£myApplicationΓÇ¥_, and to the "long dependency duration smart detection rule", which is internally named _ΓÇ£longdependencydurationΓÇ¥_. -Make sure to replace the Application Insights resource name, and to specify the relevant smart detection rule internal name. Check the table below for a list of the corresponding internal Azure Resource Manager names for each smart detection rule. +The following examples show how to configure the settings of smart detection rules by using Resource Manager templates. ++All samples refer to an Application Insights resource named _"myApplication"_. They also refer to the "long dependency duration smart detection rule." It's internally named _"longdependencyduration"_. ++Make sure to replace the Application Insights resource name and to specify the relevant smart detection rule internal name. Check the following table for a list of the corresponding internal Resource Manager names for each smart detection rule. ### Disable a smart detection rule Make sure to replace the Application Insights resource name, and to specify the } ``` -### Add additional email recipients for a smart detection rule +### Add more email recipients for a smart detection rule ```json { Make sure to replace the Application Insights resource name, and to specify the ``` - ## Smart detection rule names -Below is a table of smart detection rule names as they appear in the portal, along with their internal names, that should be used in the Azure Resource Manager template. +The following table shows smart detection rule names as they appear in the portal. The table also shows their internal names to use in the Resource Manager template. > [!NOTE]-> Smart detection rules marked as _preview_ donΓÇÖt support email notifications. Therefore, you can only set the _enabled_ property for these rules. +> Smart detection rules marked as _preview_ don't support email notifications. You can only set the _enabled_ property for these rules. 
| Azure portal rule name | Internal name |:|:| Below is a table of smart detection rule names as they appear in the portal, alo ### Failure Anomalies alert rule -This Azure Resource Manager template demonstrates configuring a Failure Anomalies alert rule with a severity of 2. +This Resource Manager template demonstrates how to configure a Failure Anomalies alert rule with a severity of 2. > [!NOTE]-> Failure Anomalies is a global service therefore rule location is created on the global location. +> Failure Anomalies is a global service, so rule location is created on the global location. ```json { This Azure Resource Manager template demonstrates configuring a Failure Anomalie ``` > [!NOTE]-> This Azure Resource Manager template is unique to the Failure Anomalies alert rule and is different from the other classic Smart Detection rules described in this article. If you want to manage Failure Anomalies manually this is done in Azure Monitor Alerts whereas all other Smart Detection rules are managed in the Smart Detection pane of the UI. +> This Resource Manager template is unique to the Failure Anomalies alert rule and is different from the other classic smart detection rules described in this article. If you want to manage Failure Anomalies manually, use Azure Monitor alerts. All other smart detection rules are managed in the **Smart Detection** pane of the UI. -## Next Steps +## Next steps Learn more about automatically detecting: - [Failure anomalies](./proactive-failure-diagnostics.md)-- [Memory Leaks](./proactive-potential-memory-leak.md)+- [Memory leaks](./proactive-potential-memory-leak.md) - [Performance anomalies](./smart-detection-performance.md) |
azure-monitor | Proactive Potential Memory Leak | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-potential-memory-leak.md | Title: Detect memory leak - Azure Application Insights smart detection -description: Monitor applications with Azure Application Insights for potential memory leaks. + Title: 'Detect memory leak: Application Insights smart detection' +description: Monitor applications with Application Insights for potential memory leaks. Last updated 12/12/2017 ->You can migrate your Application Insight resources to alerts-bases smart detection (preview). The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections. +>You can migrate your Application Insight resources to alerts-based smart detection (preview). The migration creates alert rules for the different smart detection modules. After you create the rules, you can manage and configure them like any other Azure Monitor alert rules. You can also configure action groups for these rules to enable multiple methods of taking actions or triggering notification on new detections. >-> For more information, see [Smart Detection Alerts migration](./alerts-smart-detections-migration.md). +> For more information, see [Smart detection alerts migration](./alerts-smart-detections-migration.md). -Smart detection automatically analyzes the memory consumption of each process in your application, and can warn you about potential memory leaks or increased memory consumption. +Smart detection automatically analyzes the memory consumption of each process in your application. It can warn you about potential memory leaks or increased memory consumption. -This feature requires no special setup, other than [configuring performance counters](../app/performance-counters.md) for your app. It's active when your app generates enough memory performance counters telemetry (for example, Private Bytes). +This feature requires no special setup other than [configuring performance counters](../app/performance-counters.md) for your app. It's active when your app generates enough memory performance counters telemetry (for example, Private Bytes). ## When would I get this type of smart detection notification?-A typical notification will follow a consistent increase in memory consumption, over a long period of time, in one or more processes or machines, which are part of your application. Machine learning algorithms are used for detecting increased memory consumption that matches the pattern of a memory leak. +A typical notification follows a consistent increase: ++- In memory consumption over a long period of time. +- In one or more processes or machines that are part of your application. ++Machine learning algorithms are used to detect increased memory consumption that matches the pattern of a memory leak. ## Does my app really have a problem?-A notification doesn't mean that your app definitely has a problem. Although memory leak patterns many times indicate an application issue, these patterns could be typical to your specific process, or could have a natural business justification. In such case the notification can be ignored. +A notification doesn't mean that your app definitely has a problem. 
Although memory leak patterns might indicate an application issue, these patterns might be typical to your specific process. Memory leak patterns might also have a natural business justification. In such cases, you can ignore the notification. ## How do I fix it? The notifications include diagnostic information to support in the diagnostic analysis process:-1. **Triage.** The notification shows you the amount of memory increase (in GB), and the time range in which the memory has increased. This information can help you assign a priority to the problem. -2. **Scope.** How many machines exhibited the memory leak pattern? How many exceptions were triggered during the potential memory leak? This information can be obtained from the notification. -3. **Diagnose.** The detection contains the memory leak pattern, showing memory consumption of the process over time. You can also use the related items and reports linking to supporting information, to help you further diagnose the issue. +1. **Triage:** The notification shows you the amount of memory increase (in GB) and the time range in which the memory has increased. This information can help you assign a priority to the problem. +1. **Scope:** How many machines exhibited the memory leak pattern? How many exceptions were triggered during the potential memory leak? You can obtain this information from the notification. +1. **Diagnose:** The detection contains the memory leak pattern and shows memory consumption of the process over time. You can also use the related items and reports linking to supporting information to help you further diagnose the issue. |
azure-monitor | Api Custom Events Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md | The recommended way to send request telemetry is where the request acts as an <a You can correlate telemetry items together by associating them with operation context. The standard request-tracking module does this for exceptions and other events that are sent while an HTTP request is being processed. In [Search](./diagnostic-search.md) and [Analytics](../logs/log-query-overview.md), you can easily find any events associated with the request by using its operation ID. -For more information on correlation, see [Telemetry correlation in Application Insights](./correlation.md). +For more information on correlation, see [Telemetry correlation in Application Insights](distributed-tracing-telemetry-correlation.md). When you track telemetry manually, the easiest way to ensure telemetry correlation is by using this pattern: |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Application Insights provides other features including, but not limited to: - [Usage](usage-overview.md): Understand which features are popular with users and how users interact and use your application. - [Smart detection](proactive-diagnostics.md): Detect failures and anomalies automatically through proactive telemetry analysis. -Application Insights supports [distributed tracing](distributed-tracing.md), which is also known as distributed component correlation. This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a specific execution or transaction. The ability to trace activity from end to end is important for applications that were built as distributed components or [microservices](/azure/architecture/guide/architecture-styles/microservices). +Application Insights supports [distributed tracing](distributed-tracing-telemetry-correlation.md), which is also known as distributed component correlation. This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a specific execution or transaction. The ability to trace activity from end to end is important for applications that were built as distributed components or [microservices](/azure/architecture/guide/architecture-styles/microservices). The [Application Map](app-map.md) allows a high-level, top-down view of the application architecture and at-a-glance visual references to component health and responsiveness. |
azure-monitor | App Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md | To provide feedback, use the feedback option. ## Next steps -* To learn more about how correlation works in Application Insights, see [Telemetry correlation](correlation.md). +* To learn more about how correlation works in Application Insights, see [Telemetry correlation](distributed-tracing-telemetry-correlation.md). * The [end-to-end transaction diagnostic experience](transaction-diagnostics.md) correlates server-side telemetry from across all your Application Insights-monitored components into a single view. * For advanced correlation scenarios in ASP.NET Core and ASP.NET, see [Track custom operations](custom-operations-tracking.md). |
azure-monitor | Custom Operations Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md | This article provides guidance on how to track custom operations with the Applic ## Overview -An operation is a logical piece of work run by an application. It has a name, start time, duration, result, and a context of execution like user name, properties, and result. If operation A was initiated by operation B, then operation B is set as a parent for A. An operation can have only one parent, but it can have many child operations. For more information on operations and telemetry correlation, see [Application Insights telemetry correlation](correlation.md). +An operation is a logical piece of work run by an application. It has a name, start time, duration, result, and a context of execution like user name, properties, and result. If operation A was initiated by operation B, then operation B is set as a parent for A. An operation can have only one parent, but it can have many child operations. For more information on operations and telemetry correlation, see [Application Insights telemetry correlation](distributed-tracing-telemetry-correlation.md). In the Application Insights .NET SDK, the operation is described by the abstract class [OperationTelemetry](https://github.com/microsoft/ApplicationInsights-dotnet/blob/7633ae849edc826a8547745b6bf9f3174715d4bd/BASE/src/Microsoft.ApplicationInsights/Extensibility/Implementation/OperationTelemetry.cs) and its descendants [RequestTelemetry](https://github.com/microsoft/ApplicationInsights-dotnet/blob/7633ae849edc826a8547745b6bf9f3174715d4bd/BASE/src/Microsoft.ApplicationInsights/DataContracts/RequestTelemetry.cs) and [DependencyTelemetry](https://github.com/microsoft/ApplicationInsights-dotnet/blob/7633ae849edc826a8547745b6bf9f3174715d4bd/BASE/src/Microsoft.ApplicationInsights/DataContracts/DependencyTelemetry.cs). Each Application Insights operation (request or dependency) involves `Activity`. ## Next steps -- Learn the basics of [telemetry correlation](correlation.md) in Application Insights.+- Learn the basics of [telemetry correlation](distributed-tracing-telemetry-correlation.md) in Application Insights. - Check out how correlated data powers [transaction diagnostics experience](./transaction-diagnostics.md) and [Application Map](./app-map.md). - See the [data model](./data-model-complete.md) for Application Insights types and data model. - Report custom [events and metrics](./api-custom-events-metrics.md) to Application Insights. |
azure-monitor | Data Model Complete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md | The following types of telemetry are used to monitor the execution of your app. * [Request](#request): Generated to log a request received by your app. For example, the Application Insights web SDK automatically generates a Request telemetry item for each HTTP request that your web app receives. - An *operation* is made up of the threads of execution that process a request. You can also [write code](./api-custom-events-metrics.md#trackrequest) to monitor other types of operation, such as a "wake up" in a web job or function that periodically processes data. Each operation has an ID. The ID can be used to [group](./correlation.md) all telemetry generated while your app is processing the request. Each operation either succeeds or fails and has a duration of time. + An *operation* is made up of the threads of execution that process a request. You can also [write code](./api-custom-events-metrics.md#trackrequest) to monitor other types of operation, such as a "wake up" in a web job or function that periodically processes data. Each operation has an ID. The ID can be used to [group](distributed-tracing-telemetry-correlation.md) all telemetry generated while your app is processing the request. Each operation either succeeds or fails and has a duration of time. * [Exception](#exception): Typically represents an exception that causes an operation to fail. * [Dependency](#dependency): Represents a call from your app to an external service or storage, such as a REST API or SQL. In ASP.NET, dependency calls to SQL are defined by `System.Data`. Calls to HTTP endpoints are defined by `System.Net`. Every telemetry item can define the [context information](#context) like applica You can use session ID to calculate an outage or an issue impact on users. Calculating the distinct count of session ID values for a specific failed dependency, error trace, or critical exception gives you a good understanding of an impact. -The Application Insights telemetry model defines a way to [correlate](./correlation.md) telemetry to the operation of which it's a part. For example, a request can make a SQL Database call and record diagnostics information. You can set the correlation context for those telemetry items that tie it back to the request telemetry. +The Application Insights telemetry model defines a way to [correlate](distributed-tracing-telemetry-correlation.md) telemetry to the operation of which it's a part. For example, a request can make a SQL Database call and record diagnostics information. You can set the correlation context for those telemetry items that tie it back to the request telemetry. ## Schema improvements The Application Insights web SDK sends a request name "as is" about letter case. ### ID -ID is the identifier of a request call instance. It's used for correlation between the request and other telemetry items. The ID should be globally unique. For more information, see [Telemetry correlation in Application Insights](./correlation.md). +ID is the identifier of a request call instance. It's used for correlation between the request and other telemetry items. The ID should be globally unique. For more information, see [Telemetry correlation in Application Insights](distributed-tracing-telemetry-correlation.md). **Maximum length:** 128 characters URL is the request URL with all query string parameters. ### Source -Source is the source of the request. 
Examples are the instrumentation key of the caller or the IP address of the caller. For more information, see [Telemetry correlation in Application Insights](./correlation.md). +Source is the source of the request. Examples are the instrumentation key of the caller or the IP address of the caller. For more information, see [Telemetry correlation in Application Insights](distributed-tracing-telemetry-correlation.md). **Maximum length:** 1,024 characters This field is the name of the command initiated with this dependency call. It ha ### ID -ID is the identifier of a dependency call instance. It's used for correlation with the request telemetry item that corresponds to this dependency call. For more information, see [Telemetry correlation in Application Insights](./correlation.md). +ID is the identifier of a dependency call instance. It's used for correlation with the request telemetry item that corresponds to this dependency call. For more information, see [Telemetry correlation in Application Insights](distributed-tracing-telemetry-correlation.md). ### Data This field is the dependency type name. It has a low cardinality value for logic ### Target -This field is the target site of a dependency call. Examples are server name and host address. For more information, see [Telemetry correlation in Application Insights](./correlation.md). +This field is the target site of a dependency call. Examples are server name and host address. For more information, see [Telemetry correlation in Application Insights](distributed-tracing-telemetry-correlation.md). ### Duration Originally, this field was used to indicate the type of the device the user of t ### Operation ID -This field is the unique identifier of the root operation. This identifier allows grouping telemetry across multiple components. For more information, see [Telemetry correlation](./correlation.md). Either a request or a page view creates the operation ID. All other telemetry sets this field to the value for the containing request or page view. +This field is the unique identifier of the root operation. This identifier allows grouping telemetry across multiple components. For more information, see [Telemetry correlation](distributed-tracing-telemetry-correlation.md). Either a request or a page view creates the operation ID. All other telemetry sets this field to the value for the containing request or page view. **Maximum length:** 128 ### Parent operation ID -This field is the unique identifier of the telemetry item's immediate parent. For more information, see [Telemetry correlation](./correlation.md). +This field is the unique identifier of the telemetry item's immediate parent. For more information, see [Telemetry correlation](distributed-tracing-telemetry-correlation.md). **Maximum length:** 128 |
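Because this entry repeatedly points at the Operation ID and Parent operation ID fields, a small hedged sketch may help; the helper and the ID values below are illustrative only, not part of the SDK:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

var client = new TelemetryClient(TelemetryConfiguration.CreateDefault());

// Illustrative IDs: in a real app these come from the containing request or page view.
TrackCorrelatedTrace(client, rootOperationId: "example-operation-id", parentId: "example-parent-id");

static void TrackCorrelatedTrace(TelemetryClient client, string rootOperationId, string parentId)
{
    var trace = new TraceTelemetry("Processing work item", SeverityLevel.Information);

    // Operation ID groups every telemetry item produced for the logical operation.
    trace.Context.Operation.Id = rootOperationId;

    // Parent operation ID points at the item's immediate parent (request or dependency).
    trace.Context.Operation.ParentId = parentId;

    client.TrackTrace(trace);
}
```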
azure-monitor | Ip Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md | Content-Length: 54 The PowerShell 'Update-AzApplicationInsights' cmdlet can disable IP masking with the `DisableIPMasking` parameter. ```powershell-Update-AzApplicationInsights -Name "aiName" -ResourceGroupName "rgName" -DisableIPMasking $false +Update-AzApplicationInsights -Name "aiName" -ResourceGroupName "rgName" -DisableIPMasking:$true ``` For more information on the 'Update-AzApplicationInsights' cmdlet, see [Update-AzApplicationInsights](https://learn.microsoft.com/powershell/module/az.applicationinsights/update-azapplicationinsights) |
azure-monitor | Kubernetes Codeless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md | Troubleshoot the following issue. ## Next steps * Learn more about [Azure Monitor](../overview.md) and [Application Insights](./app-insights-overview.md).-* Get an overview of [distributed tracing](./distributed-tracing.md) and see what [Application Map](./app-map.md?tabs=net) can do for your business. +* Get an overview of [distributed tracing](distributed-tracing-telemetry-correlation.md) and see what [Application Map](./app-map.md?tabs=net) can do for your business. |
azure-monitor | Monitor Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md | To collect custom telemetry from services such as Redis, Memcached, and MongoDB, ## Next steps * Read more instructions and information about [monitoring Azure Functions](../../azure-functions/functions-monitoring.md).-* Get an overview of [distributed tracing](./distributed-tracing.md). +* Get an overview of [distributed tracing](distributed-tracing-telemetry-correlation.md). * See what [Application Map](./app-map.md?tabs=net) can do for your business. * Read about [requests and dependencies for Java apps](./java-in-process-agent.md). * Learn more about [Azure Monitor](../overview.md) and [Application Insights](./app-insights-overview.md). |
azure-monitor | Opencensus Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md | For information on sampling in OpenCensus, see [Sampling in OpenCensus](sampling #### Log correlation -For information on how to enrich your logs with trace context data, see OpenCensus Python [logs integration](./correlation.md#log-correlation). +For information on how to enrich your logs with trace context data, see OpenCensus Python [logs integration](distributed-tracing-telemetry-correlation.md#log-correlation). #### Modify telemetry For information on sampling in OpenCensus, see [Sampling in OpenCensus](sampling #### Trace correlation -For more information on telemetry correlation in your trace data, see OpenCensus Python [telemetry correlation](./correlation.md#telemetry-correlation-in-opencensus-python). +For more information on telemetry correlation in your trace data, see OpenCensus Python [telemetry correlation](distributed-tracing-telemetry-correlation.md#telemetry-correlation-in-opencensus-python). #### Modify telemetry |
azure-monitor | Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md | In [`ApplicationInsights.config`](./configuration-with-applicationinsights-confi * `<MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>` - The target rate of [logical operations](./correlation.md#data-model-for-telemetry-correlation) that the adaptive algorithm aims to collect **on each server host**. If your web app runs on many hosts, reduce this value so as to remain within your target rate of traffic at the Application Insights portal. + The target rate of [logical operations](distributed-tracing-telemetry-correlation.md#data-model-for-telemetry-correlation) that the adaptive algorithm aims to collect **on each server host**. If your web app runs on many hosts, reduce this value so as to remain within your target rate of traffic at the Application Insights portal. * `<EvaluationInterval>00:00:15</EvaluationInterval>` |
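The `<MaxTelemetryItemsPerSecond>` setting called out above also has a code-based equivalent for apps that configure adaptive sampling programmatically; the following is a minimal sketch of that option (standard Application Insights SDK APIs, with the target rate as an assumption):

```csharp
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;

// Code-based equivalent of <MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>.
var configuration = TelemetryConfiguration.CreateDefault();
var builder = configuration.DefaultTelemetrySink.TelemetryProcessorChainBuilder;

// Target roughly five logical operations per second on each server host.
builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond: 5);
builder.Build();
```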
azure-monitor | Sdk Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md | Connection string: `APPLICATIONINSIGHTS_CONNECTION_STRING` ### Code samples -# [.NET/.NetCore](#tab/net) +# [.NET 5.0+](#tab/dotnet5) ++1. Set the instrumentation key in the `appsettings.json` file: ++ ```json + { + "ApplicationInsights": { + "InstrumentationKey" : "InstrumentationKey=00000000-0000-0000-0000-000000000000;" + } + } + ``` ++2. Retrieve the instrumentation key in `Program.cs` when registering the `ApplicationInsightsTelemetry` service: ++ ```csharp + var options = new ApplicationInsightsServiceOptions { ConnectionString = app.Configuration["ApplicationInsights:InstrumentationKey"] }; + builder.Services.AddApplicationInsightsTelemetry(options: options); + ``` ++> [!NOTE] +> When deploying applications to Azure in production scenarios, consider placing instrumentation keys or other configuration secrets in secure locations such as App Service configuration settings or Azure Key Vault. Avoid including secrets in your application code or checking them into source control where they might be exposed or misused. The preceding code example will also work if the instrumentation key is stored in App Service configuration settings. Learn more about [configuring App Service settings](/azure/app-service/configure-common). ++# [.NET Framework](#tab/dotnet-framework) Set the property [TelemetryConfiguration.ConnectionString](https://github.com/microsoft/ApplicationInsights-dotnet/blob/add45ceed35a817dc7202ec07d3df1672d1f610d/BASE/src/Microsoft.ApplicationInsights/Extensibility/TelemetryConfiguration.cs#L271-L274) or [ApplicationInsightsServiceOptions.ConnectionString](https://github.com/microsoft/ApplicationInsights-dotnet/blob/81288f26921df1e8e713d31e7e9c2187ac9e6590/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs#L66-L69). -.NET explicitly set: +Explicitly set the instrumentation key in code: + ```csharp var configuration = new TelemetryConfiguration { var configuration = new TelemetryConfiguration }; ``` -.NET config file: +Set the instrumentation key using a configuration file: ```xml <?xml version="1.0" encoding="utf-8"?> var configuration = new TelemetryConfiguration </ApplicationInsights> ``` -.NET Core explicitly set: -```csharp -public void ConfigureServices(IServiceCollection services) -{ - var options = new ApplicationInsightsServiceOptions { ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;" }; - services.AddApplicationInsightsTelemetry(options: options); -} -``` --.NET Core config.json: --```json -{ - "ApplicationInsights": { - "ConnectionString" : "InstrumentationKey=00000000-0000-0000-0000-000000000000;" - } - } -``` - # [Java](#tab/java) You can set the connection string in the `applicationinsights.json` configuration file: |
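Alongside the `appsettings.json` approach shown in this entry, the connection string can also be read from the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable named at the top of the entry; a hedged sketch (the error handling is illustrative) looks like this:

```csharp
using System;
using Microsoft.ApplicationInsights.Extensibility;

// Sketch: read the connection string from the environment instead of hard-coding it.
var configuration = TelemetryConfiguration.CreateDefault();
configuration.ConnectionString =
    Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING")
    ?? throw new InvalidOperationException("APPLICATIONINSIGHTS_CONNECTION_STRING is not set.");
```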
azure-monitor | Transaction Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md | This behavior is by design. All the related items, across all components, are al ### Is there a way to see fewer events per transaction when I use the Application Insights JavaScript SDK? -The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-complete.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a single-page application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated. As a result, many events might be correlated to the same operation. +The transaction diagnostics experience shows all telemetry in a [single operation](distributed-tracing-telemetry-correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-complete.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a single-page application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated. As a result, many events might be correlated to the same operation. In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your SPA. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so that a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation ID, call `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event also resets the Operation ID. If all calls were instrumented, in process is the likely root cause for the time ### What if I see the message ***Error retrieving data*** while navigating Application Insights in the Azure portal? -This error indicates that the browser was unable to call into a required API or the API returned a failure response. To troubleshoot the behavior, open a browser [InPrivate window](https://support.microsoft.com/microsoft-edge/browse-inprivate-in-microsoft-edge-cd2c9a48-0bc4-b98e-5e46-ac40c84e27e2) and [disable any browser extensions](https://support.microsoft.com/microsoft-edge/add-turn-off-or-remove-extensions-in-microsoft-edge-9c0ec68c-2fbc-2f2c-9ff0-bdc76f46b026) that are running, then identify if you can still reproduce the portal behavior. If the portal error still occurs, try testing with other browsers, or other machines, investigate DNS or other network related issues from the client machine where the API calls are failing. If the portal error persists and requires further investigations, then [collect a browser network trace](../../azure-portal/capture-browser-trace.md#capture-a-browser-trace-for-troubleshooting) while you reproduce the unexpected portal behavior and open a support case from the Azure portal. +This error indicates that the browser was unable to call into a required API or the API returned a failure response. 
To troubleshoot the behavior, open a browser [InPrivate window](https://support.microsoft.com/microsoft-edge/browse-inprivate-in-microsoft-edge-cd2c9a48-0bc4-b98e-5e46-ac40c84e27e2) and [disable any browser extensions](https://support.microsoft.com/microsoft-edge/add-turn-off-or-remove-extensions-in-microsoft-edge-9c0ec68c-2fbc-2f2c-9ff0-bdc76f46b026) that are running, and then check whether you can still reproduce the portal behavior. If the portal error still occurs, try testing with other browsers or machines, and investigate DNS or other network-related issues from the client machine where the API calls are failing. If the portal error persists and requires further investigation, then [collect a browser network trace](../../azure-portal/capture-browser-trace.md#capture-a-browser-trace-for-troubleshooting) while you reproduce the unexpected portal behavior and open a support case from the Azure portal. |
azure-monitor | Worker Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md | Run your application. The workers from all the preceding examples make an HTTP c Application Insights collects these ILogger logs, with a severity of Warning or above by default, and dependencies. They're correlated to `RequestTelemetry` with a parent-child relationship. Correlation also works across process/network boundaries. For example, if the call was made to another monitored component, it's correlated to this parent as well. -This custom operation of `RequestTelemetry` can be thought of as the equivalent of an incoming web request in a typical web application. It isn't necessary to use an operation, but it fits best with the [Application Insights correlation data model](./correlation.md). `RequestTelemetry` acts as the parent operation and every telemetry generated inside the worker iteration is treated as logically belonging to the same operation. +This custom operation of `RequestTelemetry` can be thought of as the equivalent of an incoming web request in a typical web application. It isn't necessary to use an operation, but it fits best with the [Application Insights correlation data model](distributed-tracing-telemetry-correlation.md). `RequestTelemetry` acts as the parent operation and every telemetry generated inside the worker iteration is treated as logically belonging to the same operation. This approach also ensures all the telemetry generated, both automatic and manual, will have the same `operation_id`. Because sampling is based on `operation_id`, the sampling algorithm either keeps or drops all the telemetry from a single iteration. |
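To make the worker pattern in this entry concrete, here is a hedged sketch (the class, queue, and delay are illustrative, not from the article) in which each iteration becomes one `RequestTelemetry` operation so that all of its telemetry shares a single `operation_id`:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.Extensions.Hosting;

// Illustrative worker: each loop iteration is wrapped in its own operation.
public class QueueWorker : BackgroundService
{
    private readonly TelemetryClient _telemetryClient;

    public QueueWorker(TelemetryClient telemetryClient) => _telemetryClient = telemetryClient;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // RequestTelemetry is the parent; ILogger logs and dependency calls made
            // inside the block share the same operation_id, so sampling keeps or drops
            // the whole iteration together.
            using (_telemetryClient.StartOperation<RequestTelemetry>("Process queue item"))
            {
                await DoWorkAsync(stoppingToken);
            }

            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }

    private static Task DoWorkAsync(CancellationToken token) => Task.CompletedTask; // placeholder work
}
```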
azure-monitor | Container Insights Analyze | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-analyze.md | You can [split](../essentials/metrics-charts.md#apply-splitting) a metric to vie When you switch to the **Nodes**, **Controllers**, and **Containers** tabs, a property pane automatically displays on the right side of the page. It shows the properties of the item selected, which includes the labels you defined to organize Kubernetes objects. When a Linux node is selected, the **Local Disk Capacity** section also shows the available disk space and the percentage used for each disk presented to the node. Select the **>>** link in the pane to view or hide the pane. -As you expand the objects in the hierarchy, the properties pane updates based on the object selected. From the pane, you also can view Kubernetes container logs (stdout/stderror), events, and pod metrics by selecting the **Live Events** tab at the top of the pane. For more information about the configuration required to grant and control access to view this data, see [Set up the Live Data (preview)](container-insights-livedata-setup.md). +As you expand the objects in the hierarchy, the properties pane updates based on the object selected. From the pane, you also can view Kubernetes container logs (stdout/stderror), events, and pod metrics by selecting the **Live Events** tab at the top of the pane. For more information about the configuration required to grant and control access to view this data, see [Set up the Live Data](container-insights-livedata-setup.md). While you review cluster resources, you can see this data from the container in real time. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real time](container-insights-livedata-overview.md). The icons in the status field indicate the online statuses of pods, as described ## Monitor and visualize network configurations -Azure Network Policy Manager includes informative Prometheus metrics that you can use to monitor and better understand your network configurations. It provides built-in visualizations in either the Azure portal or Grafana Labs. For more information, see [Monitor and visualize network configurations with Azure NPM](../../virtual-network/kubernetes-network-policies.md#monitor-and-visualize-network-configurations-with-azure-npm). +Azure Network Policy Manager includes informative Prometheus metrics that you can use to monitor and better understand your network configurations. It provides built-in visualizations in either the Azure portal or Grafana Labs. For more information, see [Monitor and visualize network configurations with Azure npm](../../virtual-network/kubernetes-network-policies.md#monitor-and-visualize-network-configurations-with-azure-npm). ## Workbooks |
azure-monitor | Container Insights Livedata Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-metrics.md | -With Container insights Live Data (preview), you can visualize metrics about node and pod state in a cluster in real time. The feature emulates direct access to the `kubectl top nodes`, `kubectl get pods ΓÇôall-namespaces`, and `kubectl get nodes` commands to call, parse, and visualize the data in performance charts that are included with this insight. +With Container insights Live Data, you can visualize metrics about node and pod state in a cluster in real time. The feature emulates direct access to the `kubectl top nodes`, `kubectl get pods --all-namespaces`, and `kubectl get nodes` commands to call, parse, and visualize the data in performance charts that are included with this insight. This article provides a detailed overview and helps you understand how to use this feature. >[!NOTE] >Azure Kubernetes Service (AKS) clusters enabled as [private clusters](https://azure.microsoft.com/updates/aks-private-cluster/) aren't supported with this feature. This feature relies on directly accessing the Kubernetes API through a proxy server from your browser. Enabling networking security to block the Kubernetes API from this proxy will block this traffic. -For help with setting up or troubleshooting the Live Data (preview) feature, review the [setup guide](container-insights-livedata-setup.md). +For help with setting up or troubleshooting the Live Data feature, review the [setup guide](container-insights-livedata-setup.md). ## How it works -The Live Data (preview) feature directly accesses the Kubernetes API. For more information about the authentication model, see [The Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/). +The Live Data feature directly accesses the Kubernetes API. For more information about the authentication model, see [The Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/). This feature performs a polling operation against the metrics endpoints including `/api/v1/nodes`, `/apis/metrics.k8s.io/v1beta1/nodes`, and `/api/v1/pods`. The interval is every five seconds by default. This data is cached in your browser and charted in four performance charts included in Container insights. Each subsequent poll is charted into a rolling five-minute visualization window. To see the charts, select **Go Live (preview)** and then select the **Cluster** tab. - The polling interval is configured from the **Set interval** dropdown list. Use this dropdown list to set polling for new data every 1, 5, 15, and 30 seconds. Nodes are reported either in a **Ready** or **Not Ready** state and they're coun ### Active pod count -This performance chart maps to an equivalent of invoking `kubectl get pods ΓÇôall-namespaces` and maps the **STATUS** column the chart grouped by status types. +This performance chart maps to an equivalent of invoking `kubectl get pods --all-namespaces` and maps the **STATUS** column to the chart, grouped by status types.  |
azure-monitor | Container Insights Livedata Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md | +> [!NOTE] +> AKS uses [Kubernetes cluster-level logging architectures](https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures). You can use tools such as Fluentd or Fluent Bit to collect logs. + This article provides an overview of this feature and helps you understand how to use it. For help with setting up or troubleshooting the Live Data feature, see the [Setup guide](container-insights-livedata-setup.md). This feature directly accesses the Kubernetes API. For more information about the authentication model, see [The Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/). ## View AKS resource live logs -To view the live logs for pods, deployments, and replica sets with or without Container insights from the AKS resource view: +To view the live logs for pods, deployments, replica sets, stateful sets, daemon sets, and jobs with or without Container insights from the AKS resource view: 1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource. 1. Select **Workloads** in the **Kubernetes resources** section of the menu. -1. Select a pod, deployment, or replica set from the respective tab. +1. Select a pod, deployment, replica set, stateful set, daemon set, or job from the respective tab. 1. Select **Live Logs** from the resource's menu. 1. Select a pod to start collecting the live data. - [](./media/container-insights-livedata-overview/live-data-deployment.png#lightbox) + :::image type="content" source="./media/container-insights-livedata-overview/live-data-deployment.png" alt-text="Screenshot that shows the deployment of live logs." lightbox="./media/container-insights-livedata-overview/live-data-deployment.png"::: ## View logs You can view real-time log data as it's generated by the container engine on the 1. Select the **Nodes**, **Controllers**, or **Containers** tab. -1. Select an object from the performance grid. In the **Properties** pane on the right side, select **View live data**. If the AKS cluster is configured with single sign-on by using Azure Active Directory (Azure AD), you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure. +1. Select an object from the performance grid. In the **Properties** pane on the right side, select the **Live Logs** tab. If the AKS cluster is configured with single sign-on by using Azure Active Directory (Azure AD), you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure. >[!NOTE]- >To view the data from your Log Analytics workspace, select **View in analytics** in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md). --After successful authentication, the Live Data console pane appears below the performance data grid. You can view log data here in a continuous stream. 
If the fetch status indicator shows a green check mark at the far right, it means data can be retrieved, and it begins streaming to your console. + >To view the data from your Log Analytics workspace, select **View in Log analytics** in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Stateful Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. The log search results for **Stateful Sets** shows the data for the pods in a stateful set. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md). - +After successful authentication, if data can be retrieved, it begins streaming to the Live Logs tab. You can view log data here in a continuous stream. -The pane title shows the name of the pod the container is grouped with. ## View events -You can view real-time event data as it's generated by the container engine on the **Nodes**, **Controllers**, **Containers**, or **Deployments** view when a container, pod, node, ReplicaSet, DaemonSet, job, CronJob, or Deployment is selected. To view events: +You can view real-time event data as it's generated by the container engine on the **Nodes**, **Controllers**, **Containers**, or **Deployments** view when a container, pod, node, ReplicaSet, StatefulSet, DaemonSet, job, CronJob, or Deployment is selected. To view events: 1. In the Azure portal, browse to the AKS cluster resource group and select your AKS resource. You can view real-time event data as it's generated by the container engine on t 1. Select the **Nodes**, **Controllers**, **Containers**, or **Deployments** tab. -1. Select an object from the performance grid. In the **Properties** pane on the right side, select **View live data**. If the AKS cluster is configured with single sign-on by using Azure AD, you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure. +1. Select an object from the performance grid. In the **Properties** pane on the right side, select the **Live Events** tab. If the AKS cluster is configured with single sign-on by using Azure AD, you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure. >[!NOTE]- >To view the data from your Log Analytics workspace, select **View in analytics** in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md). --After successful authentication, the Live Data console pane appears below the performance data grid. If the fetch status indicator shows a green check mark at the far right, it means data can be retrieved, and it begins streaming to your console. --If the object you selected was a container, select the **Events** option in the pane. If you selected a node, pod, or controller, viewing events is automatically selected. 
+ >To view the data from your Log Analytics workspace, select **View in Log Analytics** in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Stateful Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. The log search results for **Stateful Sets** shows the data for the pods in a stateful set. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md). - +After successful authentication, if data can be retrieved, it begins streaming to the Live Events tab. -The pane title shows the name of the Pod the container is grouped with. ### Filter events -While you view events, you can also limit the results by using the **Filter** pill found to the right of the search bar. Depending on the resource you select, the pill lists a pod, namespace, or cluster to choose from. +While you view events, you can also limit the results by using the **Filter** pill found below the search bar. Depending on the resource you select, the pill lists a node, pod, namespace, or cluster to choose from. ## View metrics You can view real-time metric data as it's generated by the container engine fro 1. Select either the **Nodes** or **Controllers** tab. -1. Select a **Pod** object from the performance grid. In the **Properties** pane on the right side, select **View live data**. If the AKS cluster is configured with single sign-on by using Azure AD, you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure. +1. Select a **Pod** object from the performance grid. In the **Properties** pane on the right side, select the **Live Metrics** tab. If the AKS cluster is configured with single sign-on by using Azure AD, you're prompted to authenticate on first use during that browser session. Select your account and finish authentication with Azure. >[!NOTE]- >To view the data from your Log Analytics workspace, select the **View in analytics** option in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md). + >To view the data from your Log Analytics workspace, select the **View in Log Analytics** option in the **Properties** pane. The log search results potentially show **Nodes**, **Daemon Sets**, **Replica Sets**, **Stateful Sets**, **Jobs**, **Cron Jobs**, **Pods**, and **Containers**. These logs might no longer exist. The log search results for **Stateful Sets** shows the data for the pods in a stateful set. Attempting to search logs for a container that isn't available in `kubectl` will also fail here. To learn more about viewing historical logs, events, and metrics, see [How to query logs from Container insights](container-insights-log-query.md). -After successful authentication, the Live Data console pane appears below the performance data grid. Metric data is retrieved and begins streaming to your console for presentation in the two charts. 
The pane title shows the name of the pod the container is grouped with. +After successful authentication, metric data is retrieved and begins streaming to the Live Metrics tab for presentation in the two charts. - ## Use live data views The following sections describe functionality that you can use in the different ### Search -The Live Data feature includes search functionality. In the **Search** box, you can filter results by entering a keyword or term. Any matching results are highlighted to allow quick review. While you view the events, you can also limit the results by using the **Filter** feature to the right of the search bar. Depending on what resource you've selected, you can choose from a pod, namespace, or cluster. +The Live Data feature includes search functionality. In the **Search** box, you can filter results by entering a keyword or term. Any matching results are highlighted to allow quick review. While you view the events, you can also limit the results by using the **Filter** feature below the search bar. Depending on what resource you've selected, you can choose from a node, pod, namespace, or cluster. - - ### Scroll lock and pause -To suspend autoscroll and control the behavior of the pane so that you can manually scroll through the new data read, select the **Scroll** option. To re-enable autoscroll, select **Scroll** again. You can also pause retrieval of log or event data by selecting the **Pause** option. When you're ready to resume, select **Play**. +To suspend autoscroll and control the behavior of the tab so that you can manually scroll through the new data read, select the **Scroll** option. To re-enable autoscroll, select **Scroll** again. You can also pause retrieval of log or event data by selecting the **Pause** option. When you're ready to resume, select **Play**. - - Suspend or pause autoscroll for only a short period of time while you're troubleshooting an issue. These requests might affect the availability and throttling of the Kubernetes API on your cluster. |
azure-monitor | Container Insights Metric Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md | To disable custom alert rules, use the same ARM template to create the rule, but -## Migrate from metric rules to Prometheus rules (preview) +## Migrate from metric rules to Prometheus rules (preview) If you're using metric alert rules to monitor your Kubernetes cluster, you should transition to Prometheus recommended alert rules (preview) before March 14, 2026 when metric alerts are retired. -1. Follow the steps at [Enable Prometheus alert rules](#enable-prometheus-alert-rules) to configure Prometheus recommended alert rules (preview). -2. Follow the steps at [Disable metric alert rules](#disable-metric-alert-rules) to remove metric alert rules from your clusters. +1. Follow the steps at [Enable Prometheus alert rules](#enable-prometheus-alert-rules) to configure Prometheus recommended alert rules (preview). +2. Follow the steps at [Disable metric alert rules](#disable-metric-alert-rules) to remove metric alert rules from your clusters. ## Alert rule details Source code for the recommended alerts can be found in [GitHub](https://github.c | Average Persistent Volume Usage % | Average Persistent Volume Usage % | Calculates average persistent volume usage per pod. | 80% | | Average Working set memory % | Average Working set memory % | Calculates average Working set memory for a node. | 80% | | Restarting container count | Restarting container count | Calculates number of restarting containers. | 0 |-| Failed Pod Counts | Failed Pod Counts | Calculates number of restarting containers. | 0 | +| Failed Pod Counts | Failed Pod Counts | Calculates number of pods in failed state. | 0 | | Node NotReady status | Node NotReady status | Calculates if any node is in NotReady state. | 0 | | OOM Killed Containers | OOM Killed Containers | Calculates number of OOM killed containers. | 0 | | Pods ready % | Pods ready % | Calculates the average ready state of pods. | 80% | |
azure-monitor | Container Insights Prometheus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus.md | -[Prometheus](https://aka.ms/azureprometheus-promio) is a popular open-source metric monitoring solution and is the most common monitoring tool used to monitor Kubernetes clusters. Container insights uses its containerized agent to collect much of the same data that is typically collected from the cluster by Prometheus without requiring a Prometheus server. This data is presented in Container insights views and available to other Azure Monitor features such as [log queries](container-insights-log-query.md) and [log alerts](container-insights-log-alerts.md). +[Prometheus](https://aka.ms/azureprometheus-promio) is a popular open-source metric monitoring solution and is the most common monitoring tool used to monitor Kubernetes clusters. Container insights uses its containerized agent to collect much of the same data that Prometheus typically collects from the cluster without requiring a Prometheus server. This data is presented in Container insights views and available to other Azure Monitor features such as [log queries](container-insights-log-query.md) and [log alerts](container-insights-log-alerts.md). Container insights can also scrape Prometheus metrics from your cluster and send the data to either Azure Monitor Logs or to Azure Monitor managed service for Prometheus. This requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring one of the addons for the Azure Monitor agent used by Container insights as shown the following diagram. Container insights can also scrape Prometheus metrics from your cluster and send ## Send data to Azure Monitor managed service for Prometheus-[Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) is a fully managed Prometheus-compatible service that supports industry standard features such as PromQL, Grafana dashboards, and Prometheus alerts. This requires configuring the *metrics addon* for the Azure Monitor agent, which sends data to Prometheus. +[Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) is a fully managed Prometheus-compatible service that supports industry standard features such as PromQL, Grafana dashboards, and Prometheus alerts. This service requires configuring the *metrics addon* for the Azure Monitor agent, which sends data to Prometheus. > [!NOTE] > The metrics addon used to collect Prometheus metrics for Managed Prometheus currently only supports AKS clusters and cannot be used as an Arc enabled Kubernetes extension. To collect Prometheus metrics from Kubernetes clusters that are running self-managed Prometheus we recommend looking at the [remote write capabilities of Managed Prometheus](../essentials/prometheus-remote-write.md). Container insights can also scrape Prometheus metrics from your cluster and send > You don't need to enable Container insights to configure your AKS cluster to send data to managed Prometheus. See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on how to configure your cluster without enabling Container insights. -Use the following procedure to add Promtheus collection to your cluster that's already using Container insights. +Use the following procedure to add Prometheus collection to your cluster that's already using Container insights. 1. 
Open the **Kubernetes services** menu in the Azure portal and select your AKS cluster. 2. Click **Insights**. Use the following procedure to add Promtheus collection to your cluster that's a See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on [verifying your deployment](../essentials/prometheus-metrics-enable.md#verify-deployment) and [limitations](../essentials/prometheus-metrics-enable.md#limitations) ## Send metrics to Azure Monitor Logs-You may want to collect additional data in addition to the predefined set of data collected by Container insights. This data isn't used by Container insights views but is available for log queries and alerts like the other data it collects. This requires configuring the *monitoring addon* for the Azure Monitor agent, which is the one currently used by Container insights to send data to a Log Analytics workspace. +You may want to collect more data in addition to the predefined set of data collected by Container insights. This data isn't used by Container insights views but is available for log queries and alerts like the other data it collects. This requires configuring the *monitoring addon* for the Azure Monitor agent, which is the one currently used by Container insights to send data to a Log Analytics workspace. ### Prometheus scraping settings When a URL is specified, Container insights only scrapes the endpoint. When Kube | Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. | | | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) | | | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["http://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`|-| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` | +| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent scrapes Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` | | | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. | | | `prometheus.io/scheme` | String | http | Defaults to scraping over HTTP. | | | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |-| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it will default to 9102. | +| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it defaults to 9102. 
| | | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` | | Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) | | Node-wide or cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, and h. | The configuration change can take a few minutes to finish before taking effect. To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n=kube-system`. -If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example: +If there are configuration errors from the Azure Monitor Agent pods, the output shows errors similar to the following example: ``` ***************Start Config Processing******************** Errors related to applying configuration changes are also available for review. - From an agent pod logs using the same `kubectl logs` command. -- From Live Data (preview). Live Data (preview) logs show errors similar to the following example:+- From Live Data. Live Data logs show errors similar to the following example: ``` 2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host ``` -- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.+- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table has data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour. - For Azure Red Hat OpenShift v3.x and v4.x, check the Azure Monitor Agent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled. Errors prevent Azure Monitor Agent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml`. |
azure-monitor | Collect Custom Metrics Linux Telegraf | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md | In the **Connect to virtual machine** page, keep the default options to connect ```cmd ssh azureuser@XXXX.XX.XXX ```- Paste the SSH connection command into a shell, such as Azure Cloud Shell or Bash on Ubuntu on Windows, or use an SSH client of your choice to create the connection. ## Install and configure Telegraf To install the Telegraf Debian package onto the VM, run the following commands from your SSH session: +# [Ubuntu, Debian](#tab/ubuntu) ++Add the repository: + ```bash # download the package to the VM curl -s https://repos.influxdata.com/influxdb.key | sudo apt-key add - source /etc/lsb-release-echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list +sudo echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list +sudo curl -fsSL https://repos.influxdata.com/influxdata-archive_compat.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg add ```+Install the package: ++```bash + apt-get update + apt-get install telegraf +``` +# [RHEL, CentOS, Oracle Linux](#tab/redhat) ++Add the repository: +```bash +cat <<EOF | sudo tee /etc/yum.repos.d/influxdb.repo +[influxdb] +name = InfluxDB Repository - RHEL $releasever +baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable +enabled = 1 +gpgcheck = 1 +gpgkey = https://repos.influxdata.com/influxdata-archive_compat.key +EOF +``` +Install the package: ++```bash + sudo yum -y install telegraf +``` + Telegraf's configuration file defines Telegraf's operations. By default, an example configuration file is installed at the path **/etc/telegraf/telegraf.conf**. The example configuration file lists all possible input and output plug-ins. However, we'll create a custom configuration file and have the agent use it by running the following commands: ```bash Finally, to have the agent start using the new configuration, we force the agent ```bash # stop the telegraf agent on the VM sudo systemctl stop telegraf -# start the telegraf agent on the VM to ensure it picks up the latest configuration -sudo systemctl start telegraf +# start and enable the telegraf agent on the VM to ensure it picks up the latest configuration +sudo systemctl enable --now telegraf ``` Now the agent will collect metrics from each of the input plug-ins specified and emit them to Azure Monitor. |
azure-monitor | Prometheus Metrics Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md | Use `az aks update` with the `-enable-azuremonitormetrics` option to install the - **Create a new default Azure Monitor workspace.**<br> If no Azure Monitor workspace is specified, a default Azure Monitor workspace is created in the `DefaultRG-<cluster_region>` following the format `DefaultAzureMonitorWorkspace-<mapped_region>`. This Azure Monitor workspace is in the region specified in [Region mappings](#region-mappings).- + ```azurecli az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> ``` If the Azure Monitor workspace is linked to one or more Grafana workspaces, the - **Use an existing Azure Monitor workspace and link with an existing Grafana workspace.**<br> This option creates a link between the Azure Monitor workspace and the Grafana workspace.- + ```azurecli az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id> ``` In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor workspace resource ID provided in the parameters file. +## [Terraform](#tab/terraform) ++### Prerequisites ++- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`. +- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider by following [this documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). +- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created. +- The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace. +- Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Data Reader role directly by deploying the template. ++### Retrieve required values for a Grafana resource ++On the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**. ++If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace. Update the azure_monitor_workspace_integrations block(shown below) in main.tf with the list of grafana integrations. ++```.tf + azure_monitor_workspace_integrations { + resource_id = var.monitor_workspace_id[var.monitor_workspace_id1, var.monitor_workspace_id2] + } +``` ++### Download and edit the templates ++If you are deploying a new AKS cluster using Terraform with managed Prometheus addon enabled, follow the steps below. ++1. Please download all files under [AddonTerraformTemplate](https://aka.ms/AAkm357). +2. Edit the variables in variables.tf file with the correct parameter values. +3. 
Run `terraform init -upgrade` to initialize the Terraform deployment. +4. Run `terraform plan -out main.tfplan` to create a Terraform execution plan. +5. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure. +++Note: Pass the variables for `annotations_allowed` and `labels_allowed` keys in main.tf only when those values exist. These are optional blocks. ++**NOTE** +- Please edit the main.tf file appropriately before running the terraform template +- Please add in any existing azure_monitor_workspace_integrations values to the grafana resource before running the template; otherwise, the older values will get deleted and replaced with what is there in the template at the time of deployment +- Users with 'User Access Administrator' role in the subscription of the AKS cluster can enable 'Monitoring Data Reader' role directly by deploying the template. +- Please edit the grafanaSku parameter if you are using a non-standard SKU. +- Please run this template in the Grafana Resources RG. + ## [Azure Policy](#tab/azurepolicy) ### Prerequisites As of version 6.4.0-main-02-22-2023-3ee44b9e, Windows metric collection has been ``` The number of pods should be equal to the number of nodes on the cluster. The output should resemble the following example:- + ``` User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE As of version 6.4.0-main-02-22-2023-3ee44b9e, Windows metric collection has been ``` kubectl get ds ama-metrics-win-node --namespace=kube-system ```- + The output should resemble the following example:- + ``` User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE As of version 6.4.0-main-02-22-2023-3ee44b9e, Windows metric collection has been ``` kubectl get rs --namespace=kube-system ```- + The output should resemble the following example:- + ``` User@aksuser:~$kubectl get rs --namespace=kube-system NAME DESIRED CURRENT READY AGE Currently, the Azure CLI is the only option to remove the metrics add-on and sto ``` az extension add --name aks-preview ```- + For more information on installing a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).- + > [!NOTE] > Upgrade your az cli version to the latest version and ensure that the aks-preview version you're using is at least '0.5.132'. Find your current version by using the `az version`.- + ```azurecli az extension add --name aks-preview ``` |
azure-monitor | Profiler Aspnetcore Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-aspnetcore-linux.md | Title: Enable Profiler for ASP.NET Core web applications hosted in Linux on App Services | Microsoft Docs -description: Learn how to enable Profiler on your ASP.NET Core web application hosted in Linux on App Services. + Title: Enable Profiler for ASP.NET Core web apps hosted in Linux on App Service | Microsoft Docs +description: Learn how to enable Profiler on your ASP.NET Core web application hosted in Linux on Azure App Service. ms.devlang: csharp Last updated 07/18/2022 -# Enable Profiler for ASP.NET Core web applications hosted in Linux on App Services +# Enable Profiler for ASP.NET Core web apps hosted in Linux on App Service -Using Profiler, you can track how much time is spent in each method of your live ASP.NET Core web apps that are hosted in Linux on Azure App Service. While this guide focuses on web apps hosted in Linux, you can experiment using Linux, Windows, and Mac development environments. +By using Profiler, you can track how much time is spent in each method of your live ASP.NET Core web apps that are hosted in Linux on Azure App Service. This article focuses on web apps hosted in Linux. You can also experiment by using Linux, Windows, and Mac development environments. -In this guide, you'll: +In this article, you: > [!div class="checklist"] > - Set up and deploy an ASP.NET Core web application hosted on Linux. > - Add Application Insights Profiler to the ASP.NET Core web application.- + ## Prerequisites -- Install the [latest and greatest .NET Core SDK](https://dotnet.microsoft.com/download/dotnet).-- Install Git by following the instructions at [Getting Started - Installing Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).+- Install the [latest .NET Core SDK](https://dotnet.microsoft.com/download/dotnet). +- Install Git by following the instructions at [Getting started: Installing Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). - Review the following samples for context: - [Enable Service Profiler for containerized ASP.NET Core Application (.NET 6)](https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore/tree/main/examples/EnableServiceProfilerForContainerAppNet6)- - [Application Insights Profiler for Worker Service Example](https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore/tree/main/examples/ServiceProfilerInWorkerNet6) + - [Application Insights Profiler for Worker Service example](https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore/tree/main/examples/ServiceProfilerInWorkerNet6) ## Set up the project locally -1. Open a Command Prompt window on your machine. +1. Open a command prompt window on your machine. 1. Create an ASP.NET Core MVC web application: In this guide, you'll: ## Create the Linux web app to host your project -1. In the Azure portal, create a web app environment by using App Service on Linux: +1. In the Azure portal, create a web app environment by using App Service on Linux. - :::image type="content" source="./media/profiler-aspnetcore-linux/create-web-app.png" alt-text="Screenshot of creating the Linux web app."::: + :::image type="content" source="./media/profiler-aspnetcore-linux/create-web-app.png" alt-text="Screenshot that shows creating the Linux web app."::: -1. Go to your new web app resource and select **Deployment Center** > **FTPS credentials** to create the deployment credentials. 
Make note of your credentials to use later. +1. Go to your new web app resource and select **Deployment Center** > **FTPS credentials** to create the deployment credentials. Make a note of your credentials to use later. - :::image type="content" source="./media/profiler-aspnetcore-linux/credentials.png" alt-text="Screenshot of creating the deployment credentials."::: + :::image type="content" source="./media/profiler-aspnetcore-linux/credentials.png" alt-text="Screenshot that shows creating the deployment credentials."::: -1. Click **Save**. -1. Select the **Settings** tab. -1. In the drop-down, select **Local Git** to set up a local Git repository in the web app. +1. Select **Save**. +1. Select the **Settings** tab. +1. In the dropdown, select **Local Git** to set up a local Git repository in the web app. - :::image type="content" source="./media/profiler-aspnetcore-linux/deployment-options.png" alt-text="Screenshot of view deployment options in a drop-down."::: + :::image type="content" source="./media/profiler-aspnetcore-linux/deployment-options.png" alt-text="Screenshot that shows view deployment options in a dropdown."::: -1. Click **Save** to create a Git repository with a Git Clone Uri. +1. Select **Save** to create a Git repository with a Git clone URI. - :::image type="content" source="./media/profiler-aspnetcore-linux/local-git-repo.png" alt-text="Screenshot of setting up the local Git repository."::: + :::image type="content" source="./media/profiler-aspnetcore-linux/local-git-repo.png" alt-text="Screenshot that shows setting up the local Git repository."::: - For more deployment options, see [App Service documentation](../../app-service/deploy-best-practices.md). + For more deployment options, see the [App Service documentation](../../app-service/deploy-best-practices.md). ## Deploy your project -1. In your Command Prompt window, browse to the root folder for your project. Add a Git remote repository to point to the repository on App Service: +1. In your command prompt window, browse to the root folder for your project. Add a Git remote repository to point to the repository on App Service: ```console git remote add azure https://<username>@<app_name>.scm.azurewebsites.net:443/<app_name>.git In this guide, you'll: * Use the **username** that you used to create the deployment credentials. * Use the **app name** that you used to create the web app by using App Service on Linux. -2. Deploy the project by pushing the changes to Azure: +1. Deploy the project by pushing the changes to Azure: ```console git push azure main In this guide, you'll: ## Add Application Insights to monitor your web app -You can add Application Insights to your web app either via: +You have three options to add Application Insights to your web app: -- The Application Insights pane in the Azure portal,-- The Configuration pane in the Azure portal, or -- Manually adding to your web app settings.+- By using the **Application Insights** pane in the Azure portal. +- By using the **Configuration** pane in the Azure portal. +- By manually adding to your web app settings. # [Application Insights pane](#tab/enablement) -1. In your web app on the Azure portal, select **Application Insights** in the left side menu. -1. Click **Turn on Application Insights**. +1. In your web app on the Azure portal, select **Application Insights** on the left pane. +1. Select **Turn on Application Insights**. 
- :::image type="content" source="./media/profiler-aspnetcore-linux/turn-on-app-insights.png" alt-text="Screenshot of turning on Application Insights."::: + :::image type="content" source="./media/profiler-aspnetcore-linux/turn-on-app-insights.png" alt-text="Screenshot that shows turning on Application Insights."::: 1. Under **Application Insights**, select **Enable**. - :::image type="content" source="./media/profiler-aspnetcore-linux/enable-app-insights.png" alt-text="Screenshot of enabling Application Insights."::: + :::image type="content" source="./media/profiler-aspnetcore-linux/enable-app-insights.png" alt-text="Screenshot that shows enabling Application Insights."::: -1. Under **Link to an Application Insights resource**, either create a new resource or select an existing resource. For this example, we'll create a new resource. +1. Under **Link to an Application Insights resource**, either create a new resource or select an existing resource. For this example, we create a new resource. - :::image type="content" source="./media/profiler-aspnetcore-linux/link-app-insights.png" alt-text="Screenshot of linking your Application Insights to a new or existing resource."::: + :::image type="content" source="./media/profiler-aspnetcore-linux/link-app-insights.png" alt-text="Screenshot that shows linking Application Insights to a new or existing resource."::: -1. Click **Apply** > **Yes** to apply and confirm. +1. Select **Apply** > **Yes** to apply and confirm. # [Configuration pane](#tab/config) -1. [Create an Application Insights resource](../app/create-workspace-resource.md) in the same Azure subscription as your App Service. -1. Navigate to the Application Insights resource. +1. [Create an Application Insights resource](../app/create-workspace-resource.md) in the same Azure subscription as your App Service instance. +1. Go to the Application Insights resource. 1. Copy the **Instrumentation Key** (iKey).-1. In your web app on the Azure portal, select **Configuration** in the left side menu. -1. Click **New application setting**. +1. In your web app in the Azure portal, select **Configuration** on the left pane. +1. Select **New application setting**. - :::image type="content" source="./media/profiler-aspnetcore-linux/new-setting-configuration.png" alt-text="Screenshot of adding new application setting in the configuration pane."::: + :::image type="content" source="./media/profiler-aspnetcore-linux/new-setting-configuration.png" alt-text="Screenshot that shows adding a new application setting in the Configuration pane."::: -1. Add the following settings in the **Add/Edit application setting** pane, using your saved iKey: +1. Add the following settings in the **Add/Edit application setting** pane by using your saved iKey: | Name | Value | | - | -- | | APPINSIGHTS_INSTRUMENTATIONKEY | [YOUR_APPINSIGHTS_KEY] | - :::image type="content" source="./media/profiler-aspnetcore-linux/add-ikey-settings.png" alt-text="Screenshot of adding iKey to the settings pane."::: + :::image type="content" source="./media/profiler-aspnetcore-linux/add-ikey-settings.png" alt-text="Screenshot that shows adding the iKey to the Settings pane."::: -1. Click **OK**. +1. Select **OK**. 
- :::image type="content" source="./media/profiler-aspnetcore-linux/save-app-insights-key.png" alt-text="Screenshot of saving the application insights key settings."::: + :::image type="content" source="./media/profiler-aspnetcore-linux/save-app-insights-key.png" alt-text="Screenshot that shows saving the Application Insights key settings."::: -1. Click **Save**. +1. Select **Save**. # [Web app settings](#tab/appsettings) -1. [Create an Application Insights resource](../app/create-workspace-resource.md) in the same Azure subscription as your App Service. -1. Navigate to the Application Insights resource. +1. [Create an Application Insights resource](../app/create-workspace-resource.md) in the same Azure subscription as your App Service instance. +1. Go to the Application Insights resource. 1. Copy the **Instrumentation Key** (iKey).-1. In your preferred code editor, navigate to your ASP.NET Core project's `appsettings.json` file. -1. Add the following and insert your copied iKey: +1. In your preferred code editor, go to your ASP.NET Core project's `appsettings.json` file. +1. Add the following code and insert your copied iKey: ```json "ApplicationInsights": You can add Application Insights to your web app either via: ## Next steps-Learn how to... + > [!div class="nextstepaction"] > [Generate load and view Profiler traces](./profiler-data.md) |
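The Configuration-pane option above (adding `APPINSIGHTS_INSTRUMENTATIONKEY` as an app setting) can also be scripted instead of done in the portal. A minimal sketch, assuming hypothetical resource-group and web-app names and your copied iKey:

```azurecli
# Set the instrumentation key as an App Service application setting
# (equivalent to "New application setting" in the Configuration pane).
az webapp config appsettings set \
  --resource-group <resource-group-name> \
  --name <app-name> \
  --settings APPINSIGHTS_INSTRUMENTATIONKEY=<your-appinsights-key>
```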
azure-monitor | Profiler Bring Your Own Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-bring-your-own-storage.md | Title: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger -description: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger + Title: Configure BYOS for Profiler and Snapshot Debugger +description: Configure Bring Your Own Storage (BYOS) for Profiler and Snapshot Debugger. Last updated 08/18/2022 -# Configure Bring Your Own Storage (BYOS) for Application Insights Profiler and Snapshot Debugger +# Configure BYOS for Application Insights Profiler and Snapshot Debugger -## What is Bring Your Own Storage (BYOS) and why might I need it? +This article shows you how to configure Bring Your Own Storage (BYOS) for Application Insights Profiler and Snapshot Debugger. -When you use Application Insights Profiler or Snapshot Debugger, artifacts generated by your application are uploaded into Azure storage accounts over the public Internet. For these artifacts and storage accounts, Microsoft controls and covers the cost for: +## What is BYOS and why might I need it? ++When you use Application Insights Profiler or Snapshot Debugger, artifacts generated by your application are uploaded into Azure Storage accounts over the public internet. For these artifacts and storage accounts, Microsoft controls and covers the cost for: * Processing and analysis. * Encryption-at-rest and lifetime management policies. -When you configure Bring Your Own Storage (BYOS), artifacts are uploaded into a storage account that you control. That means you control and are responsible for the cost of: +When you configure BYOS, artifacts are uploaded into a storage account that you control. That means you control and are responsible for the cost of: * The encryption-at-rest policy and the Lifetime management policy. * Network access. > [!NOTE]-> BYOS is required if you are enabling Private Link or Customer-Managed Keys. -+> BYOS is required if you're enabling Azure Private Link or customer-managed keys. +> > * [Learn more about Private Link for Application Insights](../logs/private-link-security.md).-> * [Learn more about Customer-Managed Keys for Application Insights](../logs/customer-managed-keys.md). +> * [Learn more about customer-managed keys for Application Insights](../logs/customer-managed-keys.md). -## How will my storage account be accessed? +## How is my storage account accessed? -1. Agents running in your Virtual Machines or App Service will upload artifacts (profiles, snapshots, and symbols) to blob containers in your account. +1. Agents running in your virtual machines or Azure App Service upload artifacts (profiles, snapshots, and symbols) to blob containers in your account. - This process involves contacting the Profiler or Snapshot Debugger service to obtain a Shared Access Signature (SAS) token to a new blob in your storage account. + This process involves contacting Profiler or Snapshot Debugger to obtain a shared access signature token to a new blob in your storage account. -1. The Profiler or Snapshot Debugger service will: +1. Profiler or Snapshot Debugger will: - 1. Analyze the incoming blob. - 1. Write back the analysis results and log files into blob storage. + - Analyze the incoming blob. + - Write back the analysis results and log files into blob storage. - Depending on available compute capacity, this process may occur anytime after upload. 
+ Depending on available compute capacity, this process might occur anytime after upload. -1. When you view the Profiler traces or Snapshot Debugger analysis, the service fetches the analysis results from blob storage. +1. When you view Profiler traces or Snapshot Debugger analysis, the service fetches the analysis results from blob storage. ## Prerequisites -* Create your Storage Account in the same location as your Application Insights resource. +* Create your storage account in the same location as your Application Insights resource. - For example, if your Application Insights resource is in West US 2, your Storage Account must be also in West US 2. + For example, if your Application Insights resource is in West US 2, your storage account must also be in West US 2. -* Grant the `Storage Blob Data Contributor` role to the Azure AD application named `Diagnostic Services Trusted Storage Access` via the [Access Control (IAM)](../../role-based-access-control/role-assignments-portal.md) page in your storage account. +* Grant the `Storage Blob Data Contributor` role to the Azure Active Directory (Azure AD) application named `Diagnostic Services Trusted Storage Access` via the [Access Control (IAM)](../../role-based-access-control/role-assignments-portal.md) page in your storage account. * If Private Link is enabled, allow connection to our Trusted Microsoft Service from your virtual network. ## Enable BYOS -### Grant Access to Diagnostic Services to your Storage Account +This section shows you how to enable BYOS. -A BYOS storage account will be linked to an Application Insights resource. There may be only one storage account per Application Insights resource and both must be in the same location. You may use the same storage account with more than one Application Insights resource. +### Grant access to Diagnostic Services to your storage account -First, the Application Insights Profiler, and Snapshot Debugger service needs to be granted access to the storage account. To grant access, add the role `Storage Blob Data Contributor` to the Azure AD application named `Diagnostic Services Trusted Storage Access` via the Access Control (IAM) page in your storage account as shown in Figure 1.0. +A BYOS storage account is linked to an Application Insights resource. There can be only one storage account per Application Insights resource, and both must be in the same location. You can use the same storage account with more than one Application Insights resource. -Steps: +First, Application Insights Profiler and Snapshot Debugger must be granted access to the storage account. To grant access, add the role `Storage Blob Data Contributor` to the Azure AD application named `Diagnostic Services Trusted Storage Access` via the **Access Control (IAM)** page in your storage account. 1. Select **Access control (IAM)**. -1. Select **Add** > **Add role assignment** to open the Add role assignment page. +1. Select **Add** > **Add role assignment** to open the **Add role assignment** page. -1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). +1. Assign the following role. For more information, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md). 
| Setting | Value | | | | Steps: | Assign access to | User, group, or service principal | | Members | Diagnostic Services Trusted Storage Access | - :::image type="content" source="media/profiler-bring-your-own-storage/add-role-assignment-page.png" alt-text="Screenshot showing how to add role assignment page in Azure portal."::: - *Figure 1.0* --After you added the role, it will appear under the "**Role assignments**" section, like the below Figure 1.1. - :::image type="content" source="media/profiler-bring-your-own-storage/figure-11.png" alt-text="Screenshot showing the IAM screen after Role assignments."::: - *Figure 1.1* --If you're also using Private Link, it's required one additional configuration to allow connection to our Trusted Microsoft Service from your Virtual Network. or more information, see [Storage Network Security documentation](../../storage/common/storage-network-security.md#trusted-microsoft-services). + :::image type="content" source="media/profiler-bring-your-own-storage/add-role-assignment-page.png" alt-text="Screenshot that shows the Add role assignment page in the Azure portal."::: + + After you add the role, it appears under the **Role assignments** section. + :::image type="content" source="media/profiler-bring-your-own-storage/figure-11.png" alt-text="Screenshot that shows the IAM screen after Role assignments."::: + +If you're also using Private Link, one more configuration is required to allow connection to our Trusted Microsoft Service from your virtual network. For more information, see [Storage network security documentation](../../storage/common/storage-network-security.md#trusted-microsoft-services). -### Link your Storage Account with your Application Insights resource +### Link your storage account with your Application Insights resource -To configure BYOS for code-level diagnostics (Profiler/Debugger), there are three options: +To configure BYOS for code-level diagnostics (Profiler/Snapshot Debugger), there are three options: -* Using Azure PowerShell cmdlets. -* Using the Azure CLI. -* Using Azure Resource Manager templates. +* Use Azure PowerShell cmdlets. +* Use the Azure CLI. +* Use Azure Resource Manager templates. #### [PowerShell](#tab/azure-powershell) -1. Make sure you have installed Az PowerShell 4.2.0 or greater. +1. Make sure you've installed Az PowerShell 4.2.0 or greater. - To install Azure PowerShell, refer to the [Official Azure PowerShell documentation](/powershell/azure/install-az-ps). + To install Azure PowerShell, see the [Azure PowerShell documentation](/powershell/azure/install-az-ps). 1. Install the Application Insights PowerShell extension. To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre Connect-AzAccount -Subscription "{subscription_id}" ``` - For more information on how to sign in, refer to the [Connect-AzAccount documentation](/powershell/module/az.accounts/connect-azaccount). + For more information on how to sign in, see the [Connect-AzAccount documentation](/powershell/module/az.accounts/connect-azaccount). -1. Remove previous Storage Account linked to your Application Insights resource. +1. Remove any previous storage account linked to your Application Insights resource. Pattern: To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre Remove-AzApplicationInsightsLinkedStorageAccount -ResourceId $appInsights.Id ``` -1. Connect your Storage Account with your Application Insights resource. +1. Connect your storage account with your Application Insights resource. 
Pattern: To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre #### [Azure CLI](#tab/azure-cli) -1. Make sure you have installed Azure CLI. +1. Make sure you've installed the Azure CLI. - To install Azure CLI, refer to the [Official Azure CLI documentation](/cli/azure/install-azure-cli). + To install the Azure CLI, see the [Azure CLI documentation](/cli/azure/install-azure-cli). 1. Install the Application Insights CLI extension. To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre az extension add -n application-insights ``` -1. Connect your Storage Account with your Application Insights resource. +1. Connect your storage account with your Application Insights resource. Pattern: To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre ``` > [!NOTE]- > For performing updates on the linked Storage Accounts to your Application Insights resource, refer to the [Application Insights CLI documentation](/cli/azure/monitor/app-insights/component/linked-storage). + > For performing updates on the linked storage accounts to your Application Insights resource, see the [Application Insights CLI documentation](/cli/azure/monitor/app-insights/component/linked-storage). -#### [Resource Manager Template](#tab/azure-resource-manager) +#### [Resource Manager template](#tab/azure-resource-manager) 1. Create an Azure Resource Manager template file with the following content (*byos.template.json*): To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre } ``` -1. Run the following PowerShell command to deploy the above template: +1. Run the following PowerShell command to deploy the preceding template: Syntax: To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre New-AzResourceGroupDeployment -ResourceGroupName "byos-test" -TemplateFile "D:\Docs\byos.template.json" ``` -1. Provide the following parameters when prompted in the PowerShell console: +1. Provide the following parameters when you're prompted in the PowerShell console: | Parameter | Description | |-|--| | `application_insights_name` | The name of the Application Insights resource to enable BYOS. |- | `storage_account_name` | The name of the Storage Account resource that you'll use as your BYOS. | + | `storage_account_name` | The name of the storage account resource that you'll use as your BYOS. | Expected output: To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre DeploymentDebugLogLevel : ``` -1. Enable code-level diagnostics (Profiler/Debugger) on the workload of interest through the Azure portal. In this example, **App Service** > **Application Insights**. +1. Enable code-level diagnostics (Profiler/Snapshot Debugger) on the workload of interest through the Azure portal. In this example, it's **App Service** > **Application Insights**. - :::image type="content" source="media/profiler-bring-your-own-storage/figure-20.png" alt-text="Screenshot showing the code level diagnostics on Azure portal."::: - *Figure 2.0* + :::image type="content" source="media/profiler-bring-your-own-storage/figure-20.png" alt-text="Screenshot that shows the code-level diagnostics in the Azure portal."::: -## Troubleshoot +## Troubleshooting ++This section offers troubleshooting tips for common issues. ### Template schema '{schema_uri}' isn't supported -* Make sure that the `$schema` property of the template is valid. 
It must follow the following pattern: -`https://schema.management.azure.com/schemas/{schema_version}/deploymentTemplate.json#` +* Make sure that the `$schema` property of the template is valid. It must follow this pattern: +`https://schema.management.azure.com/schemas/{schema_version}/deploymentTemplate.json#`. * Make sure that the `schema_version` of the template is within valid values: `2014-04-01-preview, 2015-01-01, 2018-05-01, 2019-04-01, 2019-08-01`.+ Error message: ```powershell To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre * Make sure that the `apiVersion` of the resource `microsoft.insights/components` is `2015-05-01`. * Make sure that the `apiVersion` of the resource `linkedStorageAccount` is `2020-03-01-preview`.+ Error message: ```powershell To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre }' ``` -### Storage Account location should match AI component location +### Storage account location should match AI component location -* Make sure that the location of the Application Insights resource is the same as the Storage Account. +* Make sure that the location of the Application Insights resource is the same as the storage account. + Error message: ```powershell To configure BYOS for code-level diagnostics (Profiler/Debugger), there are thre }' ``` -For general Profiler troubleshooting, refer to the [Profiler Troubleshoot documentation](profiler-troubleshooting.md). +For general Profiler troubleshooting, see the [Profiler troubleshooting documentation](profiler-troubleshooting.md). -For general Snapshot Debugger troubleshooting, refer to the [Snapshot Debugger Troubleshoot documentation](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot). +For general Snapshot Debugger troubleshooting, see the [Snapshot Debugger troubleshooting documentation](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot). ## Frequently asked questions -### If I have enabled Profiler/Snapshot Debugger and BYOS, will my data be migrated into my Storage Account? +This section provides answers to common questions. ++### If I've enabled Profiler/Snapshot Debugger and BYOS, will my data be migrated into my storage account? - *No, it won't.* + No, it won't. -### Will BYOS work with encryption-at-rest and Customer-Managed Key? +### Will BYOS work with encryption-at-rest and customer-managed keys? - *Yes, to be precise, BYOS is a requisite to have Profiler/Snapshot Debugger enabled with Customer-Manager Keys.* + Yes. To be precise, BYOS is a requirement to have Profiler/Snapshot Debugger enabled with customer-managed keys. -### Will BYOS work in an environment isolated from the Internet? +### Will BYOS work in an environment isolated from the internet? - *Yes, BYOS is a requirement for isolated network scenarios.* + Yes. BYOS is a requirement for isolated network scenarios. -### Will BYOS work with both Customer-Managed Keys and Private Link enabled? +### Will BYOS work with both customer-managed keys and Private Link enabled? - *Yes, it can be possible.* + Yes, it's possible. -### If I have enabled BYOS, can I go back using Diagnostic Services storage accounts to store my data collected? +### If I've enabled BYOS, can I go back to using Diagnostic Services storage accounts to store my collected data? - *Yes, you can, but we don't currently support data migration from your BYOS.* + Yes, you can, but we don't currently support data migration from your BYOS. 
-### After enabling BYOS, will I take over of all the related costs of storage and networking? +### After I enable BYOS, will I take over all the related costs of storage and networking? - *Yes.* + Yes. |
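The portal steps in the BYOS row above (granting the role to `Diagnostic Services Trusted Storage Access` and linking the storage account) can also be done with the Azure CLI. A minimal sketch, assuming hypothetical resource names and that the `application-insights` CLI extension is installed; the exact parameter names should be confirmed with `az monitor app-insights component linked-storage link --help`:

```azurecli
# Look up the object ID of the first-party app "Diagnostic Services Trusted Storage Access".
SP_ID=$(az ad sp list --display-name "Diagnostic Services Trusted Storage Access" --query "[0].id" -o tsv)

# Grant Storage Blob Data Contributor on the BYOS storage account.
STORAGE_ID=$(az storage account show -g <resource-group> -n <storage-account> --query id -o tsv)
az role assignment create --assignee "$SP_ID" --role "Storage Blob Data Contributor" --scope "$STORAGE_ID"

# Link the storage account to the Application Insights resource.
az monitor app-insights component linked-storage link \
  -g <resource-group> -a <app-insights-name> -s "$STORAGE_ID"
```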
azure-monitor | Profiler Cloudservice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-cloudservice.md | Last updated 07/15/2022 # Enable Profiler for Azure Cloud Services -Receive performance traces for your [Azure Cloud Service](../../cloud-services-extended-support/overview.md) by enabling the Application Insights Profiler. The Profiler is installed on your Cloud Service via the [Azure Diagnostics extension](../agents/diagnostics-extension-overview.md). +Receive performance traces for your instance of [Azure Cloud Services](../../cloud-services-extended-support/overview.md) by enabling the Application Insights Profiler. Profiler is installed on your instance of Azure Cloud Services via the [Azure Diagnostics extension](../agents/diagnostics-extension-overview.md). -In this article, you will: +In this article, you: -- Enable your Cloud Service to send diagnostics data to Application Insights.+- Enable your instance of Azure Cloud Services to send diagnostics data to Application Insights. - Configure the Azure Diagnostics extension within your solution to install Profiler.-- Deploy your service and generate traffic to view Profiler traces. +- Deploy your service and generate traffic to view Profiler traces. -## Pre-requisites +## Prerequisites -- Make sure you've [set up diagnostics for Azure Cloud Services](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines).-- Use [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or newer. - - If you're using [OS Family 4](../../cloud-services/cloud-services-guestos-update-matrix.md#family-4-releases), install .NET Framework 4.6.1 or newer with a [startup task](../../cloud-services/cloud-services-dotnet-install-dotnet.md). - - [OS Family 5](../../cloud-services/cloud-services-guestos-update-matrix.md#family-5-releases) includes a compatible version of .NET Framework by default. +- Make sure you've [set up diagnostics for your instance of Azure Cloud Services](/visualstudio/azure/vs-azure-tools-diagnostics-for-cloud-services-and-virtual-machines). +- Use [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or newer. + - If you're using [OS Family 4](../../cloud-services/cloud-services-guestos-update-matrix.md#family-4-releases), install .NET Framework 4.6.1 or newer with a [startup task](../../cloud-services/cloud-services-dotnet-install-dotnet.md). + - [OS Family 5](../../cloud-services/cloud-services-guestos-update-matrix.md#family-5-releases) includes a compatible version of .NET Framework by default. ## Track requests with Application Insights -When publishing your CloudService to Azure portal, add the [Application Insights SDK to Azure Cloud Services](../app/azure-web-apps-net-core.md). +When you publish your instance of Azure Cloud Services to the Azure portal, add the [Application Insights SDK to Azure Cloud Services](../app/azure-web-apps-net-core.md). -Once you've added the SDK and published your Cloud Service to the Azure portal, track requests using Application Insights. 
+After you've added the SDK and published your instance of Azure Cloud Services to the Azure portal, track requests by using Application Insights: -- **For ASP.NET web roles**, Application Insights tracks the requests automatically.-- **For worker roles**, you need to [add code manually to your application to track requests](profiler-trackrequests.md).+- **For ASP.NET web roles**: Application Insights tracks the requests automatically. +- **For worker roles**: You need to [add code manually to your application to track requests](profiler-trackrequests.md). ## Configure the Azure Diagnostics extension -Locate the Azure Diagnostics *diagnostics.wadcfgx* file for your application role: +Locate the Azure Diagnostics *diagnostics.wadcfgx* file for your application role. -Add the following `SinksConfig` section as a child element of `WadCfg`: +Add the following `SinksConfig` section as a child element of `WadCfg`: ```xml <WadCfg> Add the following `SinksConfig` section as a child element of `WadCfg`: ``` > [!NOTE]-> The instrumentation keys that are used by the application and the ApplicationInsightsProfiler sink need to match each other. +> The instrumentation keys that are used by the application and the `ApplicationInsightsProfiler` sink must match each other. -Deploy your service with the new Diagnostics configuration. Application Insights Profiler is now configured to run on your Cloud Service. +Deploy your service with the new Diagnostics configuration. Application Insights Profiler is now configured to run on your instance of Azure Cloud Services. ## Next steps -Learn how to... > [!div class="nextstepaction"] > [Generate load and view Profiler traces](./profiler-data.md) |
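After the diagnostics configuration for the cloud service is deployed and traffic is flowing, you can confirm that request telemetry is reaching the Application Insights resource from the command line. A minimal sketch, assuming the `application-insights` CLI extension and hypothetical resource names:

```azurecli
# Confirm that request telemetry is arriving for the instrumented roles.
az monitor app-insights query \
  --app <app-insights-name> \
  --resource-group <resource-group> \
  --analytics-query "requests | where timestamp > ago(1h) | summarize count() by cloud_RoleName"
```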
azure-monitor | Profiler Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-containers.md | Title: Profile Azure Containers with Application Insights Profiler -description: Enable Application Insights Profiler for Azure Containers. + Title: Profile Azure containers with Application Insights Profiler +description: Enable Application Insights Profiler for Azure containers. ms.contributor: charles.weininger Last updated 07/15/2022-You can enable the Application Insights Profiler for ASP.NET Core application running in your container almost without code. To enable the Application Insights Profiler on your container instance, you'll need to: +You can enable the Application Insights Profiler for ASP.NET Core application running in your container almost without code. To enable the Application Insights Profiler on your container instance, you need to: * Add the reference to the `Microsoft.ApplicationInsights.Profiler.AspNetCore` NuGet package. * Set the environment variables to enable it. -In this article, you'll learn the various ways you can: -- Install the NuGet package in the project. -- Set the environment variable via the orchestrator (like Kubernetes). -- Learn security considerations around production deployment, like protecting your Application Insights Instrumentation key.+In this article, you learn about the various ways that you can: -## Pre-requisites +- Install the NuGet package in the project. +- Set the environment variable via the orchestrator (like Kubernetes). +- Learn security considerations around production deployment, like protecting your Application Insights instrumentation key. ++## Prerequisites - [An Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource). Make note of the instrumentation key.-- [Docker Desktop](https://www.docker.com/products/docker-desktop/) to build docker images.+- [Docker Desktop](https://www.docker.com/products/docker-desktop/) to build Docker images. - [.NET 6 SDK](https://dotnet.microsoft.com/download/dotnet/6.0) installed. ## Set up the environment In this article, you'll learn the various ways you can: git clone https://github.com/microsoft/ApplicationInsights-Profiler-AspNetCore.git ``` -1. Navigate to the Container App example: +1. Go to the Container App example: ```bash cd examples/EnableServiceProfilerForContainerAppNet6 ``` -1. This example is a bare bone project created by calling the following CLI command: +1. This example is a barebones project created by calling the following CLI command: ```powershell dotnet new mvc -n EnableServiceProfilerForContainerApp In this article, you'll learn the various ways you can: dotnet add package Microsoft.ApplicationInsights.Profiler.AspNetCore ``` -1. Enable Application Insights and Profiler: - +1. Enable Application Insights and Profiler. + ### [ASP.NET Core 6 and later](#tab/net-core-new) Add `builder.Services.AddApplicationInsightsTelemetry()` and `builder.Services.AddServiceProfiler()` after the `WebApplication.CreateBuilder()` method in `Program.cs`: In this article, you'll learn the various ways you can: var app = builder.Build(); ``` - + ### [ASP.NET Core 5 and earlier](#tab/net-core-old) Add `services.AddApplicationInsightsTelemetry()` and `services.AddServiceProfiler()` to the `ConfigureServices()` method in `Startup.cs`: In this article, you'll learn the various ways you can: ## Pull the latest ASP.NET Core build/runtime images -1. Navigate to the .NET Core 6.0 example directory. +1. 
Go to the .NET Core 6.0 example directory: ```bash cd examples/EnableServiceProfilerForContainerAppNet6 ``` -1. Pull the latest ASP.NET Core images +1. Pull the latest ASP.NET Core images: ```shell docker pull mcr.microsoft.com/dotnet/sdk:6.0 In this article, you'll learn the various ways you can: ``` > [!TIP]-> Find the official images for Docker [SDK](https://hub.docker.com/_/microsoft-dotnet-sdk) and [runtime](https://hub.docker.com/_/microsoft-dotnet-aspnet). +> Find the official images for the Docker [SDK](https://hub.docker.com/_/microsoft-dotnet-sdk) and [runtime](https://hub.docker.com/_/microsoft-dotnet-aspnet). ## Add your Application Insights key 1. Via your Application Insights resource in the Azure portal, take note of your Application Insights instrumentation key. - :::image type="content" source="./media/profiler-containerinstances/application-insights-key.png" alt-text="Screenshot of finding instrumentation key in Azure portal."::: + :::image type="content" source="./media/profiler-containerinstances/application-insights-key.png" alt-text="Screenshot that shows finding the instrumentation key in the Azure portal."::: 1. Open `appsettings.json` and add your Application Insights instrumentation key to this code section: In this article, you'll learn the various ways you can: ## Build and run the Docker image -1. Review the `Dockerfile`. +1. Review the Docker file. 1. Build the example image: In this article, you'll learn the various ways you can: ## View the container via your browser -To hit the endpoint, either: +To hit the endpoint, you have two options: -- Visit `http://localhost:8080/weatherforecast` in your browser, or+- Visit `http://localhost:8080/weatherforecast` in your browser. - Use curl: ```terraform curl http://localhost:8080/weatherforecast ``` - ## Inspect the logs Optionally, inspect the local log to see if a session of profiling finished: Service Profiler session finished. # A profiling session is complet ## View the Service Profiler traces -1. Wait for 2-5 minutes so the events can be aggregated to Application Insights. -1. Open the **Performance** pane in your Application Insights resource. -1. Once the trace process is complete, you'll see the Profiler Traces button like it below: -- :::image type="content" source="./media/profiler-containerinstances/profiler-traces.png" alt-text="Screenshot of Profile traces in the performance pane."::: -+1. Wait for 2 to 5 minutes so that the events can be aggregated to Application Insights. +1. Open the **Performance** pane in your Application Insights resource. +1. After the trace process is finished, the **Profiler Traces** button appears. + :::image type="content" source="./media/profiler-containerinstances/profiler-traces.png" alt-text="Screenshot that shows the Profiler traces button in the Performance pane."::: ## Clean up resources Run the following command to stop the example project: docker rm -f testapp ``` -## Next Steps -Learn how to... +## Next steps + > [!div class="nextstepaction"] > [Generate load and view Profiler traces](./profiler-data.md) |
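The build/run/verify/clean-up loop described in the container row above can be condensed into a few commands. A minimal sketch, assuming the sample's Dockerfile, a hypothetical image tag, and that the ASP.NET Core 6 container listens on port 80 internally:

```console
# Build the example image from the sample's Dockerfile.
docker build -t profiler-example:latest .

# Run it, mapping local port 8080 to the container's port 80.
docker run -d -p 8080:80 --name testapp profiler-example:latest

# Hit the endpoint, then check the logs for a completed profiling session.
curl http://localhost:8080/weatherforecast
docker logs testapp

# Clean up.
docker rm -f testapp
```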
azure-monitor | Profiler Servicefabric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-servicefabric.md | Title: Enable Profiler for Azure Service Fabric applications -description: Profile live Azure Service Fabric apps with Application Insights +description: Profile live Azure Service Fabric apps with Application Insights. Last updated 07/15/2022 Last updated 07/15/2022 # Enable Profiler for Azure Service Fabric applications -Application Insights Profiler is included with Azure Diagnostics. You can install the Azure Diagnostics extension by using an Azure Resource Manager template for your Service Fabric cluster. Get a [template that installs Azure Diagnostics on a Service Fabric Cluster](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/ServiceFabricCluster.json). +Application Insights Profiler is included with Azure Diagnostics. You can install the Azure Diagnostics extension by using an Azure Resource Manager template (ARM template) for your Azure Service Fabric cluster. Get a [template that installs Azure Diagnostics on a Service Fabric cluster](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/ServiceFabricCluster.json). -In this article, you will: +In this article, you: -- Add the Application Insights Profiler property to your Azure Resource Manager template.+- Add the Application Insights Profiler property to your ARM template. - Deploy your Service Fabric cluster with the Application Insights Profiler instrumentation key. - Enable Application Insights on your Service Fabric application. - Redeploy your Service Fabric cluster to enable Profiler. In this article, you will: - Confirm that the deployed OS is `Windows Server 2012 R2` or later. - [An Azure Service Fabric managed cluster](../../service-fabric/quickstart-managed-cluster-portal.md). -## Create deployment template +## Create a deployment template -1. In your Service Fabric managed cluster, navigate to where you've implemented the [Azure Resource Manager template](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/ServiceFabricCluster.json). +1. In your Service Fabric managed cluster, go to where you implemented the [ARM template](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/ServiceFabricCluster.json). 1. Locate the `WadCfg` tags in the [Azure Diagnostics](../agents/diagnostics-extension-overview.md) extension in the deployment template file. -1. Add the following `SinksConfig` section as a child element of `WadCfg`. Replace the `ApplicationInsightsProfiler` property value with your own Application Insights instrumentation key: -- ```json - "settings": { - "WadCfg": { - "SinksConfig": { - "Sink": [ - { - "name": "MyApplicationInsightsProfilerSinkVMSS", - "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY" - } - ] +1. Add the following `SinksConfig` section as a child element of `WadCfg`. 
Replace the `ApplicationInsightsProfiler` property value with your own Application Insights instrumentation key: + + ```json + "settings": { + "WadCfg": { + "SinksConfig": { + "Sink": [ + { + "name": "MyApplicationInsightsProfilerSinkVMSS", + "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY" + } + ] + }, },- }, - } - ``` + } + ``` - For information about adding the Diagnostics extension to your deployment template, see [Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates](../../virtual-machines/extensions/diagnostics-template.md). + For information about how to add the Diagnostics extension to your deployment template, see [Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates](../../virtual-machines/extensions/diagnostics-template.md). ## Deploy your Service Fabric cluster -After updating the `WadCfg` with your instrumentation key, deploy your Service Fabric cluster. - -Application Insights Profiler will be installed and enabled when the Azure Diagnostics extension is installed. +After you update `WadCfg` with your instrumentation key, deploy your Service Fabric cluster. ++Application Insights Profiler is installed and enabled when the Azure Diagnostics extension is installed. ## Enable Application Insights on your Service Fabric application -For Profiler to collect profiles for your requests, your application must be tracking operations with Application Insights. +For Profiler to collect profiles for your requests, your application must be tracking operations with Application Insights. -- **For stateless APIs**, you can refer to instructions for [tracking requests for profiling](./profiler-trackrequests.md). -- **For tracking custom operations in other kinds of apps**, see [track custom operations with Application Insights .NET SDK](../app/custom-operations-tracking.md).+- **For stateless APIs**: See the instructions for [tracking requests for profiling](./profiler-trackrequests.md). +- **For tracking custom operations in other kinds of apps**: See [Track custom operations with Application Insights .NET SDK](../app/custom-operations-tracking.md). -Redeploy your application once you've enabled Application Insights. +After you enable Application Insights, redeploy your application. ## Generate traffic and view Profiler traces -1. Launch an [availability test](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) to generate traffic to your application. +1. Launch an [availability test](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) to generate traffic to your application. 1. Wait 10 to 15 minutes for traces to be sent to the Application Insights instance.-1. View the [Profiler traces](./profiler-overview.md) via the Application Insights instance the Azure portal. +1. View the [Profiler traces](./profiler-overview.md) via the Application Insights instance in the Azure portal. ## Next steps -Learn how to... > [!div class="nextstepaction"] > [Generate load and view Profiler traces](./profiler-data.md) - [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] |
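After the `SinksConfig` section with your instrumentation key is added, the updated ARM template can be deployed from the command line as well as from Visual Studio. A minimal sketch, with hypothetical resource-group and parameter-file names and the sample template referenced above:

```azurecli
# Deploy the Service Fabric cluster template that now includes the Profiler sink.
az deployment group create \
  --resource-group <resource-group> \
  --template-file ServiceFabricCluster.json \
  --parameters @ServiceFabricCluster.parameters.json
```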
azure-monitor | Profiler Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md | Title: Troubleshoot the Application Insights Profiler -description: Walk through troubleshooting steps and information to enable and use Azure Application Insights Profiler. + Title: Troubleshoot Application Insights Profiler +description: Walk through troubleshooting steps and information to enable and use Application Insights Profiler. Last updated 07/21/2022 -# Troubleshoot the Application Insights Profiler +# Troubleshoot Application Insights Profiler -## Make sure you're using the appropriate Profiler Endpoint +This article presents troubleshooting steps and information to enable you to use Application Insights Profiler. ++## Are you using the appropriate Profiler endpoint? Currently, the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide). -|App Setting | US Government Cloud | China Cloud | +|App setting | US Government Cloud | China Cloud | |||-| |ApplicationInsightsProfilerEndpoint | `https://profiler.monitor.azure.us` | `https://profiler.monitor.azure.cn` | |ApplicationInsightsEndpoint | `https://dc.applicationinsights.us` | `https://dc.applicationinsights.azure.cn` | -## Make sure your app is running on the right versions +## Is your app running on the right version? Profiler is supported on the [.NET Framework later than 4.6.2](https://dotnet.microsoft.com/download/dotnet-framework). If your web app is an ASP.NET Core application, it must be running on the [latest supported ASP.NET Core runtime](https://dotnet.microsoft.com/en-us/download/dotnet/6.0). -## Make sure you're using the right Azure service plan +## Are you using the right Azure service plan? Profiler isn't currently supported on free or shared app service plans. Upgrade to one of the basic plans for Profiler to start working. > [!NOTE] > The Azure Functions consumption plan isn't supported. See [Profile live Azure Functions app with Application Insights](./profiler-azure-functions.md). -## Make sure you're searching for Profiler data within the right timeframe +## Are you searching for Profiler data within the right time frame? -If the data you're trying to view is older than a couple of weeks, try limiting your time filter and try again. Traces are deleted after seven days. +If the data you're trying to view is older than two weeks, try limiting your time filter and try again. Traces are deleted after seven days. -## Make sure you can access the gateway +## Can you access the gateway? -Check that proxies or a firewall isn't blocking your access to https://gateway.azureserviceprofiler.net. +Check that a firewall or proxies aren't blocking your access to [this webpage](https://gateway.azureserviceprofiler.net). -## Make sure the Profiler is running +## Is Profiler running? -Profiling data is uploaded only when it can be attached to a request that happened while Profiler was running. The Profiler collects data for two minutes each hour. You can also trigger the Profiler by [starting a profiling session](./profiler-settings.md#profile-now). +Profiling data is uploaded only when it can be attached to a request that happened while Profiler was running. Profiler collects data for two minutes each hour. You can also trigger Profiler by [starting a profiling session](./profiler-settings.md#profile-now). 
Profiler writes trace messages and custom events to your Application Insights resource. You can use these events to see how Profiler is running. -Search for trace messages and custom events sent by Profiler to your Application Insights resource. +Search for trace messages and custom events sent by Profiler to your Application Insights resource. -1. In your Application Insights resource, select **Search** from the top menu bar. +1. In your Application Insights resource, select **Search**. - :::image type="content" source="./media/profiler-troubleshooting/search-trace-messages.png" alt-text="Screenshot of selecting the search button from the Application Insights resource."::: + :::image type="content" source="./media/profiler-troubleshooting/search-trace-messages.png" alt-text="Screenshot that shows selecting the Search button from the Application Insights resource."::: 1. Use the following search string to find the relevant data: Search for trace messages and custom events sent by Profiler to your Application stopprofiler OR startprofiler OR upload OR ServiceProfilerSample ``` - :::image type="content" source="./media/profiler-troubleshooting/search-results.png" alt-text="Screenshot of the search results from aforementioned search string."::: + :::image type="content" source="./media/profiler-troubleshooting/search-results.png" alt-text="Screenshot that shows the search results from aforementioned search string."::: - The search results above include two examples of searches from two AI resources: + The preceding search results include two examples of searches from two AI resources: - - If the application isn't receiving requests while Profiler is running, the message explains that the upload was canceled because of no activity. + - If the application isn't receiving requests while Profiler is running, the message explains that the upload was canceled because of no activity. - Profiler started and sent custom events when it detected requests that happened while Profiler was running. If the `ServiceProfilerSample` custom event is displayed, it means that a profile was captured and is available in the **Application Insights Performance** pane. - If no records are displayed, Profiler isn't running. Make sure you've [enabled Profiler on your Azure service](./profiler.md). + If no records are displayed, Profiler isn't running. Make sure you've [enabled Profiler on your Azure service](./profiler.md). ## Double counting in parallel threads -When two or more parallel threads are associated with a request, the total time metric in the stack viewer may be more than the duration of the request. In that case, the total thread time is more than the actual elapsed time. --For example, one thread might be waiting on the other to be completed. The viewer tries to detect this situation and omits the uninteresting wait. In doing so, it errs on the side of displaying too much information, rather than omitting what might be critical information. +When two or more parallel threads are associated with a request, the total time metric in the stack viewer might be more than the duration of the request. In that case, the total thread time is more than the actual elapsed time. -When you see parallel threads in your traces, determine which threads are waiting so that you can identify the hot path for the request. Usually, the thread that quickly goes into a wait state is simply waiting on the other threads. Concentrate on the other threads, and ignore the time in the waiting threads. 
+For example, one thread might be waiting on the other to be completed. The viewer tries to detect this situation and omits the uninteresting wait. In doing so, it errs on the side of displaying too much information rather than omitting what might be critical information. +When you see parallel threads in your traces, determine which threads are waiting so that you can identify the hot path for the request. Usually, the thread that quickly goes into a wait state is waiting on the other threads. Concentrate on the other threads and ignore the time in the waiting threads. ## Troubleshoot Profiler on your specific Azure service +The following sections walk you through troubleshooting steps for using Profiler on Azure App Service or Azure Cloud Services. + ### Azure App Service For Profiler to work properly, make sure: -- Your web app has [Application Insights enabled](./profiler.md) with the [right settings](./profiler.md#for-application-insights-and-app-service-in-different-subscriptions)+- Your web app has [Application Insights enabled](./profiler.md) with the [right settings](./profiler.md#for-application-insights-and-app-service-in-different-subscriptions). - The [**ApplicationInsightsProfiler3** WebJob]() is running. To check the webjob:- 1. Go to [Kudu](https://github.com/projectkudu/kudu/wiki/Accessing-the-kudu-service). From the Azure portal: - 1. In your App Service, select **Advanced Tools** from the left side menu. + 1. Go to [Kudu](https://github.com/projectkudu/kudu/wiki/Accessing-the-kudu-service). In the Azure portal: + 1. In your App Service instance, select **Advanced Tools** on the left pane. 1. Select **Go**.- 1. In the top menu, select **Tools** > **WebJobs dashboard**. - The **WebJobs** pane opens. + 1. On the top menu, select **Tools** > **WebJobs dashboard**. + The **WebJobs** pane opens. - :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job.png" alt-text="Screenshot of the WebJobs pane, which displays the name, status, and last run time of jobs."::: + :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job.png" alt-text="Screenshot that shows the WebJobs pane, which displays the name, status, and last runtime of jobs."::: - 1. To view the details of the webjob, including the log, select the **ApplicationInsightsProfiler3** link. + 1. To view the details of the WebJob, including the log, select the **ApplicationInsightsProfiler3** link. The **Continuous WebJob Details** pane opens. - :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job-log.png" alt-text="Screenshot of the Continuous WebJob Details pane."::: + :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job-log.png" alt-text="Screenshot that shows the Continuous WebJob Details pane."::: -If Profiler still isn't working for you, you can download the log and [submit an Azure support ticket](https://azure.microsoft.com/support/). +If Profiler still isn't working for you, download the log and [submit an Azure support ticket](https://azure.microsoft.com/support/). -#### Check the Diagnostic Services site extension' status page +#### Check the Diagnostic Services site extension status page -If Profiler was enabled through the [Application Insights pane](profiler.md) in the portal, it was enabled by the Diagnostic Services site extension. 
You can check the status page of this extension by going to the following url: -`https://{site-name}.scm.azurewebsites.net/DiagnosticServices` +If Profiler was enabled through the [Application Insights pane](profiler.md) in the portal, it was enabled by the Diagnostic Services site extension. You can check the status page of this extension by going to +`https://{site-name}.scm.azurewebsites.net/DiagnosticServices`. > [!NOTE]-> The domain of the status page link will vary depending on the cloud. This domain will be the same as the Kudu management site for App Service. +> The domain of the status page link varies depending on the cloud. This domain is the same as the Kudu management site for App Service. ++The status page shows the installation state of the Profiler and [Snapshot Debugger](../snapshot-debugger/snapshot-debugger.md) agents. If there was an unexpected error, it appears along with steps on how to fix it. -The status page shows the installation state of the Profiler and [Snapshot Debugger](../snapshot-debugger/snapshot-debugger.md) agents. If there was an unexpected error, it will be displayed and show how to fix it. +You can use the Kudu management site for App Service to get the base URL of this status page: -You can use the Kudu management site for App Service to get the base url of this status page: 1. Open your App Service application in the Azure portal.-2. Select **Advanced Tools**. -3. Select **Go**. -4. Once you are on the Kudu management site: +1. Select **Advanced Tools**. +1. Select **Go**. +1. On the Kudu management site: 1. Append `/DiagnosticServices` to the URL.- 1. Press enter. + 1. Select Enter. -It will end like this: `https://<kudu-url>/DiagnosticServices`. +It ends like `https://<kudu-url>/DiagnosticServices`. -It will display a status page similar to: +A status page appears similar to the following example. - + > [!NOTE]-> Codeless installation of Application Insights Profiler follows the .NET Core support policy. For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). +> Codeless installation of Application Insights Profiler follows the .NET Core support policy. For more information about supported runtimes, see [.NET Core support policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). #### Manual installation -When you configure Profiler, updates are made to the web app's settings. If necessary, you can [apply the updates manually](./profiler.md#verify-the-always-on-setting-is-enabled). +When you configure Profiler, updates are made to the web app's settings. If necessary, you can [apply the updates manually](./profiler.md#verify-the-always-on-setting-is-enabled). #### Too many active profiling sessions -You can enable Profiler on a maximum of four Web Apps that are running in the same service plan. If you've more than four, Profiler might throw the following error: +You can enable Profiler on a maximum of four web apps that are running in the same service plan. If you have more than four, Profiler might throw the following error: -*Microsoft.ServiceProfiler.Exceptions.TooManyETWSessionException*. +`Microsoft.ServiceProfiler.Exceptions.TooManyETWSessionException` To solve it, move some web apps to a different service plan. To solve it, move some web apps to a different service plan. 
If you're redeploying your web app to a Web Apps resource with Profiler enabled, you might see the following message: -*Directory Not Empty 'D:\\home\\site\\wwwroot\\App_Data\\jobs'* +"Directory Not Empty 'D:\\home\\site\\wwwroot\\App_Data\\jobs'" -This error occurs if you run Web Deploy from scripts or from the Azure Pipelines. Resolve by adding the following deployment parameters to the Web Deploy task: +This error occurs if you run Web Deploy from scripts or from Azure Pipelines. Resolve it by adding the following deployment parameters to the Web Deploy task: ``` -skip:Directory='.*\\App_Data\\jobs\\continuous\\ApplicationInsightsProfiler.*' -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs\\continuous$' -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs$' -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data$' This error occurs if you run Web Deploy from scripts or from the Azure Pipelines These parameters delete the folder used by Application Insights Profiler and unblock the redeploy process. They don't affect the Profiler instance that's currently running. -#### Is the Profiler running? +#### Is Application Insights Profiler running? -Profiler runs as a continuous webjob in the web app. You can open the web app resource in the [Azure portal](https://portal.azure.com). In the **WebJobs** pane, check the status of **ApplicationInsightsProfiler**. If it isn't running, open **Logs** to get more information. +Profiler runs as a continuous WebJob in the web app. You can open the web app resource in the [Azure portal](https://portal.azure.com). In the **WebJobs** pane, check the status of **ApplicationInsightsProfiler**. If it isn't running, open **Logs** to get more information. -### VMs and Cloud Services +### VMs and Azure Cloud Services To see whether Profiler is configured correctly by Azure Diagnostics:- -1. Verify that the content of the Azure Diagnostics configuration deployed is what you expect. -1. Make sure the Azure Diagnostics passes the proper iKey on the Profiler command line. +1. Verify that the content of the Azure Diagnostics configuration deployed is what you expect. -1. Check the Profiler log file to see whether Profiler ran but returned an error. +1. Make sure Azure Diagnostics passes the proper iKey on the Profiler command line. ++1. Check the Profiler log file to see whether Profiler ran but returned an error. To check the settings that were used to configure Azure Diagnostics: 1. Sign in to the virtual machine (VM). -1. Open the log file at this location. The plugin version may be newer on your machine. +1. Open the log file at this location. The plug-in version might be newer on your machine. For VMs: ``` c:\WindowsAzure\logs\Plugins\Microsoft.Azure.Diagnostics.PaaSDiagnostics\1.11.3.12\DiagnosticsPlugin.log ``` - For Cloud + For Azure Cloud ``` c:\logs\Plugins\Microsoft.Azure.Diagnostics.PaaSDiagnostics\1.11.3.12\DiagnosticsPlugin.log ``` To check the settings that were used to configure Azure Diagnostics: 1. Check to see whether the iKey used by the Profiler sink is correct. -1. Check the command line that's used to start Profiler. The arguments that are used to launch Profiler are in the following file (the drive could be `c:` or `d:` and the directory may be hidden): +1. Check the command line that's used to start Profiler. 
The arguments that are used to launch Profiler are in the following file (the drive could be `c:` or `d:` and the directory might be hidden): For VMs: ``` C:\ProgramData\ApplicationInsightsProfiler\config.json ``` - for Cloud + For Azure Cloud ``` D:\ProgramData\ApplicationInsightsProfiler\config.json ``` -1. Make sure that the iKey on the Profiler command line is correct. +1. Make sure that the iKey on the Profiler command line is correct. ++1. By using the path found in the preceding *config.json* file, check the Profiler log file, called `BootstrapN.log`. It displays: -1. Using the path found in the preceding *config.json* file, check the Profiler log file, called `BootstrapN.log`. It displays: - - The debug information that indicates the settings that Profiler is using. - - Status and error messages from Profiler. + - The debug information that indicates the settings that Profiler is using. + - Status and error messages from Profiler. You can find the file: To check the settings that were used to configure Azure Diagnostics: C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\1.17.0.6\ApplicationInsightsProfiler ``` - For Cloud + For Azure Cloud ``` C:\Logs\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\1.17.0.6\ApplicationInsightsProfiler ``` -1. If Profiler is running while your application is receiving requests, the following message is displayed: *Activity detected from iKey*. +1. If Profiler is running while your application is receiving requests, the following message appears: "Activity detected from iKey." -1. When the trace is being uploaded, the following message is displayed: *Start to upload trace*. +1. When the trace is being uploaded, the following message appears: "Start to upload trace." ### Edit network proxy or firewall rules -If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Profiler service. --The IPs used by Application Insights Profiler are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md). --## If all else fails... +If your application connects to the internet via a proxy or a firewall, you might need to update the rules to communicate with Profiler. -Submit a support ticket in the Azure portal. Include the correlation ID from the error message. +The IPs used by Application Insights Profiler are included in the Azure Monitor service tag. For more information, see [Service tags documentation](../../virtual-network/service-tags-overview.md). +## Support +If you still need help, submit a support ticket in the Azure portal. Include the correlation ID from the error message. |
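For the "Directory Not Empty" redeployment error covered above, the article lists the Web Deploy skip parameters but not where to place them. The following is a hedged Azure Pipelines sketch of one way to pass them through the `AzureRmWebAppDeployment` task's `AdditionalArguments` input; the task version, service connection, app name, and package path are assumptions rather than values from the article.

```yaml
# Hypothetical deployment step; subscription, app name, and package path are placeholders.
- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: 'my-service-connection'      # assumed service connection name
    appType: 'webApp'
    WebAppName: 'my-profiled-web-app'               # assumed App Service name
    packageForLinux: '$(Pipeline.Workspace)/drop/app.zip'
    enableCustomDeployment: true
    DeploymentType: 'webDeploy'
    # Skip rules so redeployment doesn't try to delete the Profiler WebJob folder.
    AdditionalArguments: >-
      -skip:Directory='.*\\App_Data\\jobs\\continuous\\ApplicationInsightsProfiler.*'
      -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs\\continuous$'
      -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs$'
      -skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data$'
```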
azure-resource-manager | Bicep Config Linter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md | The following example shows the rules that are available for configuration. "level": "warning" }, "use-recent-api-versions": {- "level": "warning" + "level": "warning", + "maxAllowedAgeInDays": 730 }, "use-resource-id-functions": { "level": "warning" |
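For context on the `maxAllowedAgeInDays` value added in this diff, here is a minimal bicepconfig.json sketch showing where the rule lives in the analyzer configuration; only this one rule is shown, and the values are the defaults described in the related linter-rule article.

```json
{
  "analyzers": {
    "core": {
      "enabled": true,
      "rules": {
        "use-recent-api-versions": {
          "level": "warning",
          "maxAllowedAgeInDays": 730
        }
      }
    }
  }
}
```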
azure-resource-manager | Bicep Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md | Title: Bicep config file description: Describes the configuration file for your Bicep deployments Previously updated : 02/21/2023 Last updated : 04/28/2023 # Configure your Bicep environment You can enable preview features by adding: } ``` +> [!WARNING] +> To use experimental features, you need the latest version of [Azure CLI](./install.md#azure-cli). + The preceding sample enables `userDefinedTypes` and `extensibility`. The available experimental features include: - **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider. See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md). |
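The warning above applies to experimental features enabled in bicepconfig.json. As a hedged sketch (the `experimentalFeaturesEnabled` section and flag names are assumed and may vary by Bicep version), enabling the two features the article mentions might look like this:

```json
{
  "experimentalFeaturesEnabled": {
    "userDefinedTypes": true,
    "extensibility": true
  }
}
```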
azure-resource-manager | Linter Rule Use Recent Api Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-recent-api-versions.md | Title: Linter rule - use recent API versions description: Linter rule - use recent API versions Previously updated : 02/13/2023 Last updated : 04/28/2023 # Linter rule - use recent API versions Use the following value in the [Bicep configuration file](bicep-config-linter.md `use-recent-api-versions` +The rule includes a configuration property named `maxAllowedAgeInDays`, with a default value of **730** days (equivalent to 2 years). A value of **0** indicates that the apiVersion must be the latest non-preview version available or the latest preview version if only previews are available. + ## Solution Use the most recent API version, or one that is no older than 730 days. |
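To make the rule's effect concrete, here is a small hedged Bicep sketch; the resource type, API version, and names are illustrative only, and whether a given apiVersion passes depends on its age relative to `maxAllowedAgeInDays` when the linter runs.

```bicep
param location string = resourceGroup().location

// Flagged by use-recent-api-versions if the chosen API version is older than maxAllowedAgeInDays.
resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'stlinterdemo001'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```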
azure-resource-manager | User Defined Data Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md | The valid type expressions include: } ``` + Decorators may be used on properties. `*` may be used to make all values require a constraint. Additional properties may still be defined when using `*`. This example creates an object that requires a key of type int named `id`, and that all other entries in the object must be a string value at least 10 characters long. ++ ```bicep + type obj = { + @description('The object ID') + id: int ++ @description('Additional properties') + @minLength(10) + *: string + } + ``` + **Recursion** Object types may use direct or indirect recursion so long as at least one leg of the path to the recursion point is optional. For example, the `myObjectType` definition in the following example is valid because the directly recursive `recursiveProp` property is optional: |
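To show how the `obj` type from the example above might be consumed, here is a hedged usage sketch; the parameter name and values are invented, and user-defined types must be enabled in your Bicep tooling for it to compile.

```bicep
// id must be an int; every other property must be a string of at least 10 characters (the *: string constraint).
param settings obj = {
  id: 42
  displayName: 'longer than ten characters'
}
```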
azure-signalr | Signalr Quickstart Azure Functions Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-python.md | You can use this sample function as a template for your own functions. { "type": "http", "direction": "out",- "name": "$result" + "name": "$return" } ] } You can use this sample function as a template for your own functions. 1. Edit *index/\__init\__.py* and replace the contents with the following code: - ```javascript + ```python import os import azure.functions as func You can use this sample function as a template for your own functions. def main(req: func.HttpRequest) -> func.HttpResponse: f = open(os.path.dirname(os.path.realpath(__file__)) + '/../content/https://docsupdatetracker.net/index.html') return func.HttpResponse(f.read(), mimetype='text/html')- ``` + ``` ### Create the negotiate function |
azure-video-indexer | Indexing Configuration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md | Title: Indexing configuration guide description: This article explains the configuration options of indexing process with Azure Video Indexer. Previously updated : 11/01/2022 Last updated : 04/27/2023 Below are the indexing type options with details of their insights provided. To ### Audio only -- **Basic**: Indexes and extract insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles.-- **Standard**: Indexes and extract insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, and topics. -- **Advanced**: Indexes and extract insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, named entities (brands, locations, people), sentiments, speakers, and articles. +- **Basic**: Indexes and extract insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions). +- **Standard**: Indexes and extract insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, topic extraction, and textual content moderation. +- **Advanced**: Indexes and extract insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, audio event detection, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, topic extraction, and textual content moderation. ### Video only -- **Standard**: Indexes and extract insights by using video only (ignoring audio) and provides the following insights: labels (OCR), named entities (OCR - brands, locations, people), OCR, people, scenes (keyframes and shots), and topics (OCR). -- **Advanced**: Indexes and extract insights by using video only (ignoring audio) and provides the following insights: labels (OCR), matched person (preview), named entities (OCR - brands, locations, people), OCR, observed people (preview), people, scenes (keyframes and shots), and topics (OCR). +- **Standard**: Indexes and extract insights by using video only (ignoring audio) and provides the following insights: labels (OCR), named entities (OCR - brands, locations, people), OCR, people, scenes (keyframes and shots), black frames, visual content moderation, and topic extraction (OCR). 
+- **Advanced**: Indexes and extract insights by using video only (ignoring audio) and provides the following insights: labels (OCR), matched person (preview), named entities (OCR - brands, locations, people), OCR, observed people (preview), people, scenes (keyframes and shots), clapperboard detection, digital pattern detection, featured clothing insight, textless slate detection, textual logo detection, black frames, visual content moderation, and topic extraction (OCR). ### Audio and Video -- **Standard**: Indexes and extract insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, named entities (brands, locations, people), OCR, people, sentiments, speakers, and topics. -- **Advanced**: Indexes and extract insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, matched person (preview), named entities (brands, locations, people), OCR, observed people (preview), people, sentiments, speakers, and topics. +- **Standard**: Indexes and extract insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, emotions, keywords, named entities (brands, locations, people), OCR, scenes (keyframes and shots), black frames, visual content moderation, people, sentiments, speakers, topic extraction, and textual content moderation. +- **Advanced**: Indexes and extract insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles (closed captions), automatic language detection, textual content moderation, audio event detection, emotions, keywords, matched person, named entities (brands, locations, people), OCR, observed people (preview), people, clapperboard detection, digital pattern detection, featured clothing insight, textless slate detection, sentiments, speakers, scenes (keyframes and shots), textual logo detection, black frames, visual content moderation, and topic extraction. ### Streaming quality options |
azure-video-indexer | Video Indexer Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md | To learn about compliance, privacy and security in Azure Video Indexer please vi You're ready to get started with Azure Video Indexer. For more information, see the following articles: +- [Indexing and configuration guide](indexing-configuration-guide.md) - [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/) - [Get started with the Azure Video Indexer website](video-indexer-get-started.md). - [Process content with Azure Video Indexer REST API](video-indexer-use-apis.md). |
baremetal-infrastructure | Solution Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md | The following table describes the network topologies supported by each network f The following table describes what's supported for each network features configuration: -|Features |Supported | +|Features |Basic network features | | :- | -: | |Delegated subnet per VNet |1| |[Network Security Groups](../../../virtual-network/network-security-groups-overview.md) on NC2 on Azure-delegated subnets|No| |[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets|No|-|Connectivity to [private endpoints](../../../private-link/private-endpoint-overview.md)|No| +|Connectivity from UVMs on NC2 nodes to Azure resources|Yes| +|Connectivity to [private endpoints](../../../private-link/private-endpoint-overview.md) from resources on Azure-delegated subnets|No| |Load balancers for NC2 on Azure traffic|No| |Dual stack (IPv4 and IPv6) virtual network|IPv4 only supported| |
baremetal-infrastructure | Supported Instances And Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md | NC2 on Azure supports the following regions using AN36P: * Southeast Asia * Australia East * UK South+* West Europe ## Next steps |
batch | Batch Pool Compute Intensive Sizes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-compute-intensive-sizes.md | Title: Use compute-intensive Azure VMs with Batch description: How to take advantage of HPC and GPU virtual machine sizes in Azure Batch pools. Learn about OS dependencies and see several scenario examples. Previously updated : 03/20/2023 Last updated : 04/26/2023 # Use RDMA or GPU instances in Batch pools The RDMA or GPU capabilities of compute-intensive sizes in Batch are supported o | Size | Capability | Operating systems | Required software | Pool settings | | -- | -- | -- | -- | -- |-| [H16r, H16mr, A8, A9](../virtual-machines/sizes-hpc.md)<br/>[NC24r, NC24rs_v2, NC24rs_v3, ND24rs<sup>*</sup>](../virtual-machines/linux/n-series-driver-setup.md#rdma-network-connectivity) | RDMA | Ubuntu 16.04 LTS, or<br/>CentOS-based HPC<br/>(Azure Marketplace) | Intel MPI 5<br/><br/>Linux RDMA drivers | Enable inter-node communication, disable concurrent task execution | -| [NC, NCv2, NCv3, NDv2 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla GPU (varies by series) | Ubuntu 16.04 LTS, or<br/>CentOS 7.3 or 7.4<br/>(Azure Marketplace) | NVIDIA CUDA or CUDA Toolkit drivers | N/A | -| [NV, NVv2 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla M60 GPU | Ubuntu 16.04 LTS, or<br/>CentOS 7.3<br/>(Azure Marketplace) | NVIDIA GRID drivers | N/A | +| [H16r, H16mr, A8, A9](../virtual-machines/sizes-hpc.md)<br/>[NC24r, NC24rs_v2, NC24rs_v3, ND24rs<sup>*</sup>](../virtual-machines/linux/n-series-driver-setup.md#rdma-network-connectivity) | RDMA | Ubuntu 22.04 LTS, or<br/>CentOS-based HPC<br/>(Azure Marketplace) | Intel MPI 5<br/><br/>Linux RDMA drivers | Enable inter-node communication, disable concurrent task execution | +| [NC, NCv2, NCv3, NDv2 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla GPU (varies by series) | Ubuntu 22.04 LTS, or<br/>CentOS 7.3 or 7.4<br/>(Azure Marketplace) | NVIDIA CUDA or CUDA Toolkit drivers | N/A | +| [NV, NVv2 series](../virtual-machines/linux/n-series-driver-setup.md) | NVIDIA Tesla M60 GPU | Ubuntu 22.04 LTS, or<br/>CentOS 7.3<br/>(Azure Marketplace) | NVIDIA GRID drivers | N/A | <sup>*</sup>RDMA-capable N-series sizes also include NVIDIA Tesla GPUs +> [!Important] +> This document references a release version of Linux that is nearing or at, End of Life(EOL). Please consider updating to a more current version. ++ ### Windows pools - Virtual Machine Configuration | Size | Capability | Operating systems | Required software | Pool settings | To run CUDA applications on a pool of Windows NC nodes, you need to install NVDI To run CUDA applications on a pool of Linux NC nodes, you need to install necessary NVIDIA Tesla GPU drivers from the CUDA Toolkit. The following sample steps create and deploy a custom Ubuntu 16.04 LTS image with the GPU drivers: -1. Deploy an Azure NC-series VM running Ubuntu 16.04 LTS. For example, create the VM in the US South Central region. +1. Deploy an Azure NC-series VM running Ubuntu 22.04 LTS. For example, create the VM in the US South Central region. 2. Add the [NVIDIA GPU Drivers extension](../virtual-machines/extensions/hpccompute-gpu-linux.md) to the VM by using the Azure portal, a client computer that connects to the Azure subscription, or Azure Cloud Shell. Alternatively, follow the steps to connect to the VM and [install CUDA drivers](../virtual-machines/linux/n-series-driver-setup.md) manually. 3. 
Follow the steps to create an [Azure Compute Gallery image](batch-sig-images.md) for Batch. 4. Create a Batch account in a region that supports NC VMs. To run CUDA applications on a pool of Linux NC nodes, you need to install necess | - | - | | **Image Type** | Custom Image | | **Custom Image** | *Name of the image* |-| **Node agent SKU** | batch.node.ubuntu 16.04 | +| **Node agent SKU** | batch.node.ubuntu 22.04 | | **Node size** | NC6 Standard | ## Example: Microsoft MPI on a Windows H16r VM pool |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | The following tables show the Microsoft Security Response Center (MSRC) updates ## April 2023 Guest OS ->[!NOTE] -->The April Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the April Guest OS. This list is subject to change. - | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |-| Rel 23-04 | [5025228] | Latest Cumulative Update(LCU) | 5.80 | Apr 11, 2023 | -| Rel 23-04 | [5022835] | IE Cumulative Updates | 2.136, 3.123, 4.116 | Feb 14, 2023 | -| Rel 23-04 | [5025230] | Latest Cumulative Update(LCU) | 7.24 | Apr 11, 2023 | -| Rel 23-04 | [5025229] | Latest Cumulative Update(LCU) | 6.56 | Apr 11, 2023 | -| Rel 23-04 | [5022523] | .NET Framework 3.5 Security and Quality Rollup LKG  | 2.136 | Feb 14, 2023 | -| Rel 23-04 | [5022515] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 2.136 | Feb 14, 2023 | -| Rel 23-04 | [5022525] | .NET Framework 3.5 Security and Quality Rollup LKG  | 4.116 | Feb 14, 2023 | -| Rel 23-04 | [5022513] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 4.116 | Feb 14, 2023 | -| Rel 23-04 | [5022574] | .NET Framework 3.5 Security       and Quality Rollup LKG  | 3.123 | Feb 14, 2023 | -| Rel 23-04 | [5022512] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 3.123 | Feb 14, 2023 | -| Rel 23-04 | [5022511] | . NET Framework 4.7.2 Cumulative Update LKG  | 6.56 | Feb 14, 2023 | -| Rel 23-04 | [5022507] | .NET Framework 4.8 Security and Quality Rollup LKG  | 7.24 | Feb 14, 2023 | -| Rel 23-04 | [5025279] | Monthly Rollup  | 2.136 | Apr 11, 2023 | -| Rel 23-04 | [5025287] | Monthly Rollup  | 3.123 | Apr 11, 2023 | -| Rel 23-04 | [5025285] | Monthly Rollup  | 4.116 | Apr 11, 2023 | -| Rel 23-04 | [5023791] | Servicing Stack Update LKG  | 3.123 | Mar 14, 2023 | -| Rel 23-04 | [5023790] | Servicing Stack Update LKG  | 4.116 | Mar 14, 2022 | -| Rel 23-04 | [4578013] | OOB Standalone Security Update  | 4.116 | Aug 19, 2020 | -| Rel 23-04 | [5023788] | Servicing Stack Update LKG  | 5.80 | Mar 14, 2023 | -| Rel 23-04 | [5017397] | Servicing Stack Update LKG  | 2.136 | Sep 13, 2022 | -| Rel 23-04 | [4494175] | Microcode  | 5.80 | Sep 1, 2020 | -| Rel 23-04 | [4494174] | Microcode  | 6.56 | Sep 1, 2020 | -| Rel 23-04 | 5025314 | Servicing Stack Update  | 7.24 | | +| Rel 23-04 | [5025228] | Latest Cumulative Update(LCU) | [5.80] | Apr 11, 2023 | +| Rel 23-04 | [5022835] | IE Cumulative Updates | [2.136], [3.124], [4.116] | Feb 14, 2023 | +| Rel 23-04 | [5025230] | Latest Cumulative Update(LCU) | [7.24] | Apr 11, 2023 | +| Rel 23-04 | [5025229] | Latest Cumulative Update(LCU) | [6.56] | Apr 11, 2023 | +| Rel 23-04 | [5022523] | .NET Framework 3.5 Security and Quality Rollup LKG  | [2.136] | Feb 14, 2023 | +| Rel 23-04 | [5022515] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | [2.136] | Feb 14, 2023 | +| Rel 23-04 | [5022525] | .NET Framework 3.5 Security and Quality Rollup LKG  | [4.116] | Feb 14, 2023 | +| Rel 23-04 | [5022513] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | [4.116] | Feb 14, 2023 | +| Rel 23-04 | [5022574] | .NET Framework 3.5 Security       and Quality 
Rollup LKG  | [3.124] | Feb 14, 2023 | +| Rel 23-04 | [5022512] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | [3.124] | Feb 14, 2023 | +| Rel 23-04 | [5022511] | . NET Framework 4.7.2 Cumulative Update LKG  | [6.56] | Feb 14, 2023 | +| Rel 23-04 | [5022507] | .NET Framework 4.8 Security and Quality Rollup LKG  | [7.24] | Feb 14, 2023 | +| Rel 23-04 | [5025279] | Monthly Rollup  | [2.136] | Apr 11, 2023 | +| Rel 23-04 | [5025287] | Monthly Rollup  | [3.124] | Apr 11, 2023 | +| Rel 23-04 | [5025285] | Monthly Rollup  | [4.116] | Apr 11, 2023 | +| Rel 23-04 | [5023791] | Servicing Stack Update LKG  | [3.124] | Mar 14, 2023 | +| Rel 23-04 | [5023790] | Servicing Stack Update LKG  | [4.116] | Mar 14, 2022 | +| Rel 23-04 | [4578013] | OOB Standalone Security Update  | [4.116] | Aug 19, 2020 | +| Rel 23-04 | [5023788] | Servicing Stack Update LKG  | [5.80] | Mar 14, 2023 | +| Rel 23-04 | [5017397] | Servicing Stack Update LKG  | [2.136] | Sep 13, 2022 | +| Rel 23-04 | [4494175] | Microcode  | [5.80] | Sep 1, 2020 | +| Rel 23-04 | [4494174] | Microcode  | [6.56] | Sep 1, 2020 | +| Rel 23-04 | 5025314 | Servicing Stack Update  | [7.24] | | [5025228]: https://support.microsoft.com/kb/5025228 [5022835]: https://support.microsoft.com/kb/5022835 The following tables show the Microsoft Security Response Center (MSRC) updates [5017397]: https://support.microsoft.com/kb/5017397 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174+[2.136]: ./cloud-services-guestos-update-matrix.md#family-2-releases +[3.124]: ./cloud-services-guestos-update-matrix.md#family-3-releases +[4.116]: ./cloud-services-guestos-update-matrix.md#family-4-releases +[5.80]: ./cloud-services-guestos-update-matrix.md#family-5-releases +[6.56]: ./cloud-services-guestos-update-matrix.md#family-6-releases +[7.24]: ./cloud-services-guestos-update-matrix.md#family-7-releases ## March 2023 Guest OS |
cloud-services | Cloud Services Guestos Update Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md | Unsure about how to update your Guest OS? Check [this][cloud updates] out. ## News updates +###### **April 27, 2023** +The April Guest OS has released. + ###### **March 28, 2023** The March Guest OS has released. The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-7.24_202304-01 | April 27, 2023 | Post 7.26 | | WA-GUEST-OS-7.23_202303-01 | March 28, 2023 | Post 7.25 |-| WA-GUEST-OS-7.22_202302-01 | March 1, 2023 | Post 7.24 | +|~~WA-GUEST-OS-7.22_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-7.21_202301-01~~| January 31, 2023 | March 28, 2023 | |~~WA-GUEST-OS-7.20_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-7.19_202211-01~~| December 12, 2022 | January 31, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-6.56_202304-01 | April 27, 2023 | Post 6.58 | | WA-GUEST-OS-6.55_202303-01 | March 28, 2023 | Post 6.57 |-| WA-GUEST-OS-6.54_202302-01 | March 1, 2023 | Post 6.56 | +|~~WA-GUEST-OS-6.54_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-6.53_202301-01~~| January 31, 2023 | March 28, 2023 | |~~WA-GUEST-OS-6.52_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-6.51_202211-01~~| December 12, 2022 | January 31, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-5.80_202304-01 | April 27, 2023 | Post 5.82 | | WA-GUEST-OS-5.79_202303-01 | March 28, 2023 | Post 5.81 | -| WA-GUEST-OS-5.78_202302-01 | March 1, 2023 | Post 5.80 | +|~~WA-GUEST-OS-5.78_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-5.77_202301-01~~| January 31, 2023 | March 28, 2023 | |~~WA-GUEST-OS-5.76_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-5.75_202211-01~~| December 12, 2022 | January 31, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-4.116_202304-01 | April 27, 2023 | Post 4.118 | | WA-GUEST-OS-4.115_202303-01 | March 28, 2023 | Post 4.117 |-| WA-GUEST-OS-4.114_202302-01 | March 1, 2023 | Post 4.116 | +|~~WA-GUEST-OS-4.114_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-4.113_202301-01~~| January 31, 2023 | March 28, 2023 | |~~WA-GUEST-OS-4.112_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-4.111_202211-01~~| December 12, 2022 | January 31, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |-| WA-GUEST-OS-3.122_202303-01 | March 28, 2023 | Post 3.124 | -| WA-GUEST-OS-3.121_202302-01 | March 1, 2023 | Post 3.123 | +| WA-GUEST-OS-3.124_202304-02 | April 27, 2023 | Post 3.126 | +| WA-GUEST-OS-3.122_202303-01 | March 28, 2023 | Post 3.125 | +|~~WA-GUEST-OS-3.121_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-3.120_202301-01~~| January 31, 2023 | March 28, 2023 | |~~WA-GUEST-OS-3.119_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-3.118_202211-01~~| December 12, 2022 | January 31, 2023 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-2.136_202304-01 | April 27, 2023 | Post 2.138 | | WA-GUEST-OS-2.135_202303-01 | March 28, 2023 | Post 2.137 |-| WA-GUEST-OS-2.134_202302-01 | March 1, 2023 | Post 2.136 | +|~~WA-GUEST-OS-2.134_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-2.133_202301-01~~| January 31, 2023 | March 28, 2023 | |~~WA-GUEST-OS-2.132_202212-01~~| January 19, 2023 | March 1, 2023 | |~~WA-GUEST-OS-2.131_202211-01~~| December 12, 2022 | January 31, 2023 | |
cloud-shell | Msi Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/msi-authorization.md | has expired. - There's an allowlist of resources that Cloud Shell tokens can be provided for. When you try to use a token with a service that is not listed, you may see the following error message: - ``` + ```output "error":{"code":"AudienceNotSupported","message":"Audience https://newservice.azure.com/ isn't a supported MSI token audience...."} ``` |
cognitive-services | Language Identification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md | -# Language identification (preview) +# Language identification Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). For speech recognition, the initial latency is higher with language identificati ## Configuration options > [!IMPORTANT]-> Language Identification (preview) APIs have been simplified in the Speech SDK version 1.25. The +> Language Identification APIs are simplified with the Speech SDK version 1.25 and later. The `SpeechServiceConnection_SingleLanguageIdPriority` and `SpeechServiceConnection_ContinuousLanguageIdPriority` properties have been removed and replaced by a single property `SpeechServiceConnection_LanguageIdMode`. Prioritizing between low latency and high accuracy is no longer necessary following recent model improvements. Now, you only need to select whether to run at-start or continuous Language Identification when doing continuous speech recognition or translation. |
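Because the diff above replaces the old priority properties with `SpeechServiceConnection_LanguageIdMode`, a minimal Python sketch of requesting continuous language identification follows; it assumes Speech SDK 1.25 or later, and the key, region, and candidate languages are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
# Select "Continuous" (or "AtStart") language identification; this replaces the removed priority properties.
speech_config.set_property(
    speechsdk.PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous")

auto_detect_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE"])
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_config)
# recognizer.start_continuous_recognition() would then run continuous recognition with language ID.
```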
cognitive-services | Disconnected Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md | Containers enable you to run Cognitive Services APIs in your own environment, an * [Custom Speech-to-Text](../speech-service/speech-container-howto.md?tabs=cstt) * [Neural Text-to-Speech](../speech-service/speech-container-howto.md?tabs=ntts) * [Text Translation (Standard)](../translator/containers/translator-disconnected-containers.md)-* [Language Understanding (LUIS)](../LUIS/luis-container-howto.md) * Azure Cognitive Service for Language * [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md) * [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md) |
cognitive-services | Use Autolabeling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/use-autolabeling.md | - Title: How to use autolabeling in custom named entity recognition- -description: Learn how to use autolabeling in custom named entity recognition. ------- Previously updated : 03/20/2023----# How to use autolabeling for Custom Named Entity Recognition --[Labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires both time and effort, you can use the autolabeling feature to automatically label your entities. You can start autolabeling jobs based on a model you've previously trained or using GPT models. With autolabeling based on a model you've previously trained, you can start labeling a few of your documents, train a model, then create an autolabeling job to produce entity labels for other documents based on that model. With autolabeling with GPT, you may immediately trigger an autolabeling job without any prior model training. This feature can save you the time and effort of manually labeling your entities. --## Prerequisites --### [Autolabel based on a model you've trained](#tab/autolabel-model) --Before you can use autolabeling based on a model you've trained, you need: -* A successfully [created project](create-project.md) with a configured Azure blob storage account. -* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account. -* [Labeled data](tag-data.md) -* A [successfully trained model](train-model.md) ---### [Autolabel with GPT](#tab/autolabel-gpt) -Before you can use autolabeling with GPT, you need: -* A successfully [created project](create-project.md) with a configured Azure blob storage account. -* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account. -* Entity names that are meaningful. The GPT models label entities in your documents based on the name of the entity you've provided. -* [Labeled data](tag-data.md) isn't required. -* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md). ----## Trigger an autolabeling job --### [Autolabel based on a model you've trained](#tab/autolabel-model) --When you trigger an autolabeling job based on a model you've trained, there's a monthly limit of 5,000 text records per month, per resource. This means the same limit applies on all projects within the same resource. --> [!TIP] -> A text record is calculated as the ceiling of (Number of characters in a document / 1,000). For example, if a document has 8921 characters, the number of text records is: -> -> `ceil(8921/1000) = ceil(8.921)`, which is 9 text records. --1. From the left navigation menu, select **Data labeling**. -2. Select the **Autolabel** button under the Activity pane to the right of the page. --- :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job." lightbox="../media/trigger-autotag.png"::: - -3. Choose Autolabel based on a model you've trained and click on Next. -- :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png"::: - -4. Choose a trained model. It's recommended to check the model performance before using it for autolabeling. 
-- :::image type="content" source="../media/choose-model-trained.png" alt-text="A screenshot showing how to choose trained model for autotagging." lightbox="../media/choose-model-trained.png"::: --5. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. You can see the total labels, precision and recall of each entity. It's recommended to include entities that perform well to ensure the quality of the automatically labeled entities. -- :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png"::: --6. Choose the documents you want to be automatically labeled. The number of text records of each document is displayed. When you select one or more documents, you should see the number of texts records selected. It's recommended to choose the unlabeled documents from the filter. -- > [!NOTE] - > * If an entity was automatically labeled, but has a user defined label, only the user defined label is used and visible. - > * You can view the documents by clicking on the document name. - - :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png"::: --7. Select **Autolabel** to trigger the autolabeling job. -You should see the model used, number of documents included in the autolabeling job, number of text records and entities to be automatically labeled. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included. -- :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png"::: --### [Autolabel with GPT](#tab/autolabel-gpt) --When you trigger an autolabeling job with GPT, you're charged to your Azure OpenAI resource as per your consumption. You're charged an estimate of the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token of different models. --1. From the left navigation menu, select **Data labeling**. -2. Select the **Autolabel** button under the Activity pane to the right of the page. -- :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png"::: --4. Choose Autolabel with GPT and click on Next. -- :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png"::: --5. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed. -- :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png"::: - -6. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. Having descriptive names for labels, and including examples for each label is recommended to achieve good quality labeling with GPT. 
-- :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png"::: - -7. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter. -- > [!NOTE] - > * If an entity was automatically labeled, but has a user defined label, only the user defined label is used and visible. - > * You can view the documents by clicking on the document name. - - :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png"::: --8. Select **Start job** to trigger the autolabeling job. -You should be directed to the autolabeling page displaying the autolabeling jobs initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included. -- :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png"::: -----## Review the auto labeled documents --When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied. ---Entities that have been automatically labeled appear with a dotted line. These entities have two selectors (a checkmark and an "X") that allow you to accept or reject the automatic label. --Once an entity is accepted, the dotted line changes to a solid one, and the label is included in any further model training becoming a user defined label. --Alternatively, you can accept or reject all automatically labeled entities within the document, using **Accept all** or **Reject all** in the top right corner of the screen. --After you accept or reject the labeled entities, select **Save labels** to apply the changes. --> [!NOTE] -> * We recommend validating automatically labeled entities before accepting them. -> * All labels that were not accepted are be deleted when you train your model. ---## Next steps --* Learn more about [labeling your data](tag-data.md). |
cognitive-services | Use Autolabeling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/use-autolabeling.md | - Title: How to use autolabeling in custom text classification- -description: Learn how to use autolabeling in custom text classification. ------- Previously updated : 3/15/2023----# How to use autolabeling for Custom Text Classification --[Labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires much time and effort, you can use the autolabeling feature to automatically label your documents with the classes you want to categorize them into. You can currently start autolabeling jobs based on a model using GPT models where you may immediately trigger an autolabeling job without any prior model training. This feature can save you the time and effort of manually labeling your documents. --## Prerequisites --Before you can use autolabeling with GPT, you need: -* A successfully [created project](create-project.md) with a configured Azure blob storage account. -* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account. -* Class names that are meaningful. The GPT models label documents based on the names of the classes you've provided. -* [Labeled data](tag-data.md) isn't required. -* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md). ----## Trigger an autolabeling job --When you trigger an autolabeling job with GPT, you're charged to your Azure OpenAI resource as per your consumption. You're charged an estimate of the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token of different models. --1. From the left navigation menu, select **Data labeling**. -2. Select the **Autolabel** button under the Activity pane to the right of the page. -- :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png"::: --4. Choose Autolabel with GPT and click on Next. -- :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png"::: --5. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed. -- :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png"::: - -6. Select the classes you want to be included in the autolabeling job. By default, all classes are selected. Having descriptive names for classes, and including examples for each class is recommended to achieve good quality labeling with GPT. -- :::image type="content" source="../media/choose-classes.png" alt-text="A screenshot showing which labels to be included in autotag job." lightbox="../media/choose-classes.png"::: - -7. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter. -- > [!NOTE] - > * If a document was automatically labeled, but this label was already user defined, only the user defined label is used. 
- > * You can view the documents by clicking on the document name. - - :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png"::: --8. Select **Start job** to trigger the autolabeling job. -You should be directed to the autolabeling page displaying the autolabeling jobs initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included. -- :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png"::: -----## Review the auto labeled documents --When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied. ---Documents that have been automatically classified have suggested labels in the activity pane highlighted in purple. Each suggested label has two selectors (a checkmark and a cancel icon) that allow you to accept or reject the automatic label. --Once a label is accepted, the purple color changes to the default blue one, and the label is included in any further model training becoming a user defined label. --After you accept or reject the labels for the autolabeled documents, select **Save labels** to apply the changes. --> [!NOTE] -> * We recommend validating automatically labeled documents before accepting them. -> * All labels that were not accepted are deleted when you train your model. ---## Next steps --* Learn more about [labeling your data](tag-data.md). |
communication-services | Call Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md | The Call Automation events are sent to the web hook callback URI specified when To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation/actions-for-call-control.md) that provides code samples as well as sequence diagrams for various call control flows. +To learn how to secure the callback event delivery, refer to [this guide](../../how-tos/call-automation/secure-webhook-endpoint.md). + ## Known issues 1. Using the incorrect IdentifierType for endpoints for `Transfer` requests (like using CommunicationUserIdentifier to specify a phone number) returns a 500 error instead of a 400 error code. Solution: Use the correct type, CommunicationUserIdentifier for Communication Users and PhoneNumberIdentifier for phone numbers. |
communication-services | Secure Webhook Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/secure-webhook-endpoint.md | + + Title: Azure Communication Services Call Automation how-to for securing webhook endpoint ++description: Provides a how-to guide on securing the delivery of incoming call and callback events +++++ Last updated : 04/13/2023+++++++# How to secure webhook endpoint +++Securing the delivery of messages from end to end is crucial for ensuring the confidentiality, integrity, and trustworthiness of sensitive information transmitted between systems. Your ability and willingness to trust information received from a remote system relies on the sender providing their identity. Call Automation has two ways of communicating events that can be secured: the shared IncomingCall event sent by Azure Event Grid, and all other mid-call events sent by the Call Automation platform via webhook. ++## Incoming Call Event +Azure Communication Services relies on Azure Event Grid subscriptions to deliver the [IncomingCall event](../../concepts/call-automation/incoming-call-notification.md). You can refer to the Azure Event Grid team for their [documentation about how to secure a webhook subscription](../../../event-grid/secure-webhook-delivery.md). ++## Call Automation webhook events ++[Call Automation events](../../concepts/call-automation/call-automation.md#call-automation-webhook-events) are sent to the webhook callback URI specified when you answer a call, or place a new outbound call. Your callback URI must be a public endpoint with a valid HTTPS certificate, DNS name, and IP address with the correct firewall ports open to enable Call Automation to reach it. This anonymous public webserver could create a security risk if you don't take the necessary steps to secure it from unauthorized access. ++A common way you can improve this security is by implementing an API key mechanism. Your webserver can generate the key at runtime and provide it in the callback URI as a query parameter when you answer or create a call. Your webserver can verify the key in the webhook callback from Call Automation before allowing access. Some customers require more security measures. In these cases, a perimeter network device may verify the inbound webhook, separate from the webserver or application itself. The API key mechanism alone may not be sufficient. ++## Improving Call Automation webhook callback security ++Each mid-call webhook callback sent by Call Automation uses a signed JSON Web Token (JWT) in the Authentication header of the inbound HTTPS request. You can use standard Open ID Connect (OIDC) JWT validation techniques to ensure the integrity of the token as follows. The lifetime of the JWT is five (5) minutes and a new token is created for every event sent to the callback URI. ++1. Obtain the Open ID configuration URL: https://acscallautomation.communication.azure.com/calling/.well-known/acsopenidconfiguration +2. Install the [Microsoft.AspNetCore.Authentication.JwtBearer NuGet](https://www.nuget.org/packages/Microsoft.AspNetCore.Authentication.JwtBearer) package. +3. Configure your application to validate the JWT using the NuGet package and the configuration of your ACS resource. You need the `audience` value, as it is present in the JWT payload. +4. Validate the issuer, audience, and the JWT token. + - The audience is your ACS resource ID you used to set up your Call Automation client. 
Refer [here](../../quickstarts/voice-video-calling/get-resource-id.md) about how to get it. + - The JSON Web Key Set (JWKS) endpoint in the OpenId configuration contains the keys used to validate the JWT token. When the signature is valid and the token hasn't expired (within 5 minutes of generation), the client can use the token for authorization. ++This sample code demonstrates how to use `Microsoft.IdentityModel.Protocols.OpenIdConnect` to validate webhook payload +## [csharp](#tab/csharp) +```csharp +using Microsoft.AspNetCore.Authentication.JwtBearer; +using Microsoft.IdentityModel.Protocols; +using Microsoft.IdentityModel.Protocols.OpenIdConnect; +using Microsoft.IdentityModel.Tokens; ++var builder = WebApplication.CreateBuilder(args); ++builder.Services.AddEndpointsApiExplorer(); +builder.Services.AddSwaggerGen(); ++// Add ACS CallAutomation OpenID configuration +var configurationManager = new ConfigurationManager<OpenIdConnectConfiguration>( + builder.Configuration["OpenIdConfigUrl"], + new OpenIdConnectConfigurationRetriever()); +var configuration = configurationManager.GetConfigurationAsync().Result; ++builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) + .AddJwtBearer(options => + { + options.Configuration = configuration; + options.TokenValidationParameters = new TokenValidationParameters + { + ValidAudience = builder.Configuration["AllowedAudience"] + }; + }); ++builder.Services.AddAuthorization(); ++var app = builder.Build(); ++// Configure the HTTP request pipeline. +if (app.Environment.IsDevelopment()) +{ + app.UseSwagger(); + app.UseSwaggerUI(); +} ++app.UseHttpsRedirection(); ++app.MapPost("/api/callback", (CloudEvent[] events) => +{ + // Your implemenation on the callback event + return Results.Ok(); +}) +.RequireAuthorization() +.WithOpenApi(); ++app.UseAuthentication(); +app.UseAuthorization(); ++app.Run(); ++``` ++## Next steps +- Learn more about [How to control and steer calls with Call Automation](../call-automation/actions-for-call-control.md). |
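The article above describes an API key mechanism for the callback URI but doesn't show it in code. The following hedged C# sketch illustrates that idea alongside the JWT validation already shown; the routes, configuration keys, and host name are assumptions rather than part of the original article, and the incoming-call wiring is simplified.

```csharp
using Azure.Communication.CallAutomation;
using Azure.Messaging;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Hypothetical shared secret; in practice, generate and persist one per call or rotate it regularly.
var expectedApiKey = builder.Configuration["CallbackApiKey"] ?? Guid.NewGuid().ToString("N");
var client = new CallAutomationClient(builder.Configuration["AcsConnectionString"]);

app.MapPost("/api/incomingCall", async (string incomingCallContext) =>
{
    // Append the key to the callback URI so Call Automation echoes it back on every mid-call event.
    var callbackUri = new Uri($"https://contoso.example.com/api/callback?apiKey={expectedApiKey}");
    await client.AnswerCallAsync(incomingCallContext, callbackUri);
    return Results.Ok();
});

app.MapPost("/api/callback", (HttpRequest request, CloudEvent[] events) =>
{
    // Reject callbacks that don't carry the expected key; keep the JWT validation shown above as well.
    if (request.Query["apiKey"] != expectedApiKey)
    {
        return Results.Unauthorized();
    }
    return Results.Ok();
});

app.Run();
```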
communication-services | Get Started Video Effects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-video-effects.md | Last updated 01/09/2023 +zone_pivot_groups: acs-plat-web-android-windows # QuickStart: Add video effects to your video calls+ [!INCLUDE [Video effects with JavaScript](./includes/video-effects/video-effects-javascript.md)]++ ## Next steps |
confidential-computing | Use Cases Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/use-cases-scenarios.md | + This article provides an overview of several common scenarios for Azure confidential computing. The recommendations in this article serve as a starting point as you develop your application using confidential computing services and frameworks. In this secure multi-party computation example, multiple banks share data with e Through confidential computing, these financial institutions can increase fraud detection rates, address money laundering scenarios, reduce false positives, and continue learning from larger data sets. + ### Drug development in healthcare Partnered health facilities contribute private health data sets to train an ML m ### Protecting privacy with IoT and smart-building solutions -Many countries have strict privacy laws about gathering and using data on people's presence and movements inside buildings. This may include data that is directly personally identifiable data from CCTV or security badge swipes. Or, indirectly identifiable where different sets of sensor data could be considered personally identifiable when grouped together. +Many countries have strict privacy laws about gathering and using data on people's presence and movements inside buildings. This may include data that is directly personally identifiable data from CCTV or security badge scans. Or, indirectly identifiable where different sets of sensor data could be considered personally identifiable when grouped together. Privacy needs to be balanced with cost & environmental needs where organizations are keen to understand occupancy/movement in order to provide the most efficient use of energy to heat and light a building. Confidential compute is used here by placing the analysis application (in this e The aggregate data-sets from many types of sensor and data feed are managed in an Azure SQL Always Encrypted with Enclaves database, which protects in-use queries by encrypting them in-memory. This prevents a server administrator from being able to access the aggregate data set while it is being queried and analyzed. +[](media/use-cases-scenarios/iot-sensors.jpg#lightbox) +++## Legal or jurisdictional requirements ++Commonly applicable to FSI and healthcare where there are legal or regulatory requirements that limit where certain workloads can be processed and be stored at-rest. ++In this use-case we use a combination of Azure Confidential Compute technologies with Azure Policy, Network Security Groups (NSGs) and Azure Active Directory Conditional Access to ensure that the following protection goals are met for the 'lift & shift' of an existing application: ++- Application is protected from the cloud operator whilst in-use using Confidential Compute +- Application resources can only be deployed in the West Europe Azure region +- Consumers of the application authenticating with modern authentication protocols can be mapped to the sovereign region they're connecting from, and denied access unless they are in an allowed region. +- Access using administrative protocols (RDP, SSH etc.) is limited to access from the Azure Bastion service that is integrated with Privileged Identity Management (PIM). The PIM policy requires a Conditional Access Policy that validates which sovereign region the administrator is accessing from. +- All services log actions to Azure Monitor. 
++[](media/use-cases-scenarios/restricted-workload.jpg#lightbox) ++## Manufacturing – IP Protection ++Manufacturing organizations protect the IP around their manufacturing processes and technologies. Often, manufacturing is outsourced to third parties who handle the physical production processes, which could be considered 'hostile' environments where there are active threats to steal that IP. ++In this example, Tailspin Toys is developing a new toy line. The specific dimensions and innovative designs of its toys are company proprietary, and the company wants to keep them safe, whilst remaining flexible over which company it chooses to physically produce its prototypes. ++Contoso, a high-quality 3D printing and testing company, provides the systems that physically print prototypes at large scale and run them through the safety tests required for safety approvals. ++Contoso deploys customer-managed containerized applications and data within the Contoso tenant, which uses their 3D printing machinery via an IoT-type API. ++Contoso uses the telemetry from the physical manufacturing systems to drive its billing, scheduling, and materials ordering systems, whilst Tailspin Toys uses telemetry from its application suite to determine how successfully its toys can be manufactured, and the defect rates. ++Contoso operators are able to load the Tailspin Toys application suite into the Contoso tenant using the provided container images over the Internet. ++The Tailspin Toys configuration policy mandates deployment on Confidential Compute enabled hardware, so that all Tailspin application servers and databases are protected while in use from Contoso administrators, even though they are running in the Contoso tenant. ++If, for example, a rogue admin at Contoso tries moving the Tailspin Toys provided containers to general x86 compute hardware that isn't able to provide a Trusted Execution Environment, it could mean potential exposure of confidential IP. ++In this case, the Azure Container Instance policy engine would refuse to release the decryption keys or start containers if the attestation call reveals that the policy requirements can't be met, ensuring Tailspin Toys IP is protected in-use and at-rest. ++The Tailspin Toys application itself is coded to periodically call the attestation service and report the results back to Tailspin Toys over the Internet, to ensure there's a continual heartbeat of security status. ++The attestation service returns cryptographically signed details from the hardware supporting the Contoso tenant to validate that the workload is running inside a confidential enclave as expected. The attestation is outside the control of the Contoso administrators and is based on the hardware root of trust that Confidential Compute provides. ++[](media/use-cases-scenarios/manufacturing-ip-protection.jpg#lightbox) + ## Enhanced customer data privacy Confidential computing goes in this direction by allowing customers incremental ### Data sovereignty -In Government and public agencies, Azure confidential computing is a solution to raise the degree of trust towards the ability to protect data sovereignty in the public cloud. Moreover, thanks to the increasingly adoption of confidential computing capabilities into PaaS services in Azure, a higher degree of trust can be achieved with a reduced impact to the innovation ability provided by public cloud services. 
This combination of protecting data sovereignty with a reduced impact to the innovation ability makes Azure confidential computing a very effective response to the needs of sovereignty and digital transformation of Government services. +In Government and public agencies, Azure confidential computing is a solution to raise the degree of trust towards the ability to protect data sovereignty in the public cloud. Moreover, thanks to the increasing adoption of confidential computing capabilities into PaaS services in Azure, a higher degree of trust can be achieved with a reduced impact to the innovation ability provided by public cloud services. This combination of protecting data sovereignty with a reduced impact to the innovation ability makes Azure confidential computing a very effective response to the needs of sovereignty and digital transformation of Government services. ### Reduced chain of trust |
confidential-ledger | Verify Write Transaction Receipts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/verify-write-transaction-receipts.md | The third step is to verify that the cryptographic signature produced over the r ### Verify signing node certificate endorsement -In addition to the above, it's also required to verify that the signing node certificate is endorsed (that is, signed) by the current ledger certificate. This step doesn't depend on the other three previous steps and can be carried out independently from the others. +In addition to the previous step, it's also required to verify that the signing node certificate is endorsed (that is, signed) by the current ledger certificate. This step doesn't depend on the other three previous steps and can be carried out independently from the others. It's possible that the current service identity that issued the receipt is different from the one that endorsed the signing node (for example, due to a certificate renewal). In this case, it's required to verify the chain of certificate trust from the signing node certificate (that is, the `cert` field in the receipt) up to the trusted root Certificate Authority (CA) (that is, the current service identity certificate) through other previous service identities (that is, the `serviceEndorsements` list field in the receipt). The `serviceEndorsements` list is provided as an ordered list from the oldest to the latest service identity. Certificate endorsement needs to be verified for the entire chain and follows the exact same digital signature verification process outlined in the previous subsection. There are popular open-source cryptographic libraries (for example, [OpenSSL](https://www.openssl.org/)) that can typically be used to carry out a certificate endorsement step. +### Verify application claims digest ++As an optional step, in case application claims are attached to a receipt, it's possible to compute the claims digest from the exposed claims (following a specific algorithm) and verify that the digest matches the `claimsDigest` contained in the receipt payload. To compute the digest from the exposed claim objects, it's required to iterate through each application claim object in the list and check its `kind` field. ++If the claim object is of kind `LedgerEntry`, the ledger collection ID (`collectionId`) and contents (`contents`) of the claim should be extracted and used to compute their HMAC digests using the secret key (`secretKey`) specified in the claim object. These two digests are then concatenated and the SHA-256 hash of the concatenation is computed. The protocol (`protocol`) and the resulting claim data digest are then concatenated and another SHA-256 hash of the concatenation is computed to get the final digest. ++If the claim object is of kind `ClaimDigest`, the claim digest (`value`) should be extracted, concatenated with the protocol (`protocol`), and the SHA-256 hash of the concatenation is computed to get the final digest. ++After computing each single claim digest, it's necessary to concatenate all the computed digests from each application claim object (in the same order they're presented in the receipt). The concatenation should then be prepended with the number of claims processed. The SHA-256 hash of the previous concatenation produces the final claims digest, which should match the `claimsDigest` present in the receipt object. 
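To make the algorithm concrete, here is a minimal Python sketch of the computation described above. It's illustrative only: the field names (`kind`, `ledgerEntry`, `digest`, and so on) match the receipt payload documented in this article, but details such as the UTF-8 string encoding and the byte representation of the claims count are assumptions, so prefer the verification utilities in the Azure Confidential Ledger client library for real workloads.

```python
import hmac
from base64 import b64decode
from hashlib import sha256


def compute_claims_digest(application_claims: list) -> str:
    """Compute the hexadecimal claims digest from a list of exposed application claims."""
    claim_digests = []
    for claim in application_claims:
        if claim["kind"] == "LedgerEntry":
            ledger_entry = claim["ledgerEntry"]
            secret_key = b64decode(ledger_entry["secretKey"])
            # HMAC-SHA256 digests of the collection ID and the contents, keyed with the secret key
            collection_digest = hmac.new(secret_key, ledger_entry["collectionId"].encode(), sha256).digest()
            contents_digest = hmac.new(secret_key, ledger_entry["contents"].encode(), sha256).digest()
            claim_data_digest = sha256(collection_digest + contents_digest).digest()
            # Concatenate the protocol with the claim data digest and hash again
            claim_digests.append(sha256(ledger_entry["protocol"].encode() + claim_data_digest).digest())
        elif claim["kind"] == "ClaimDigest":
            digest = claim["digest"]
            claim_digests.append(sha256(digest["protocol"].encode() + bytes.fromhex(digest["value"])).digest())
        else:
            raise ValueError(f"Unknown claim kind: {claim['kind']}")
    # Prepend the number of claims to the concatenation of the single claim digests
    # (a 4-byte little-endian count is assumed here) and hash the result.
    count = len(claim_digests).to_bytes(4, byteorder="little")
    return sha256(count + b"".join(claim_digests)).hexdigest()
```

The returned hexadecimal string can then be compared with the `claimsDigest` value in the `leafComponents` field of the receipt.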
+ ### More resources For more information about the content of an Azure Confidential Ledger write transaction receipt and explanation of each field, see the [dedicated article](write-transaction-receipts.md#write-transaction-receipt-content). The [CCF documentation](https://microsoft.github.io/CCF) also contains more information about receipt verification and other related resources at the following links: For more information about the content of an Azure Confidential Ledger write tra * [Merkle Tree](https://microsoft.github.io/CCF/main/architecture/merkle_tree.html) * [Cryptography](https://microsoft.github.io/CCF/main/architecture/cryptography.html) * [Certificates](https://microsoft.github.io/CCF/main/operations/certificates.html)+* [Application Claims](https://microsoft.github.io/CCF/main/use_apps/verify_tx.html#application-claims) +* [User-Defined Claims in Receipts](https://microsoft.github.io/CCF/main/build_apps/example_cpp.html#user-defined-claims-in-receipts) ## Verify write transaction receipts -### Setup and pre-requisites +### Receipt verification utilities ++The [Azure Confidential Ledger client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger/azure-confidentialledger) provides utility functions to verify write transaction receipts and compute the claims digest from a list of application claims. For more information on how to use the Data Plane SDK and the receipt-specific utilities, see [this section](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger/azure-confidentialledger#verify-write-transaction-receipts) and [this sample code](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/confidentialledger/azure-confidentialledger/samples/get_and_verify_receipt.py). ++### Setup and prerequisites -For reference purposes, we provide sample code in Python to fully verify Azure Confidential Ledger write transaction receipts following the steps outlined above. +For reference purposes, we provide sample code in Python to fully verify Azure Confidential Ledger write transaction receipts following the steps outlined in the previous section. To run the full verification algorithm, the current service network certificate and a write transaction receipt from a running Confidential Ledger resource are required. Refer to [this article](write-transaction-receipts.md#get-write-transaction-receipts) for details on how to fetch a write transaction receipt and the service certificate from a Confidential Ledger instance. ### Code walkthrough -The following code can be used to initialize the required objects and run the receipt verification algorithm. A separate utility (`verify_receipt`) is used to run the full verification algorithm, and accepts input the content of the `receipt` field in a `GET_RECEIPT` response as a dictionary and the service certificate as a simple string. The function throws an exception if the receipt isn't valid or if any error was encountered during the processing. +The following code can be used to initialize the required objects and run the receipt verification algorithm. A separate utility (`verify_receipt`) is used to run the full verification algorithm, and accepts the content of the `receipt` field in a `GET_RECEIPT` response as a dictionary and the service certificate as a simple string. The function throws an exception if the receipt isn't valid or if any error was encountered during the processing. 
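As a quick orientation before the detailed walkthrough, the following is a minimal sketch of how the utility might be invoked. It assumes that `verify_receipt` is defined as described in the rest of this section and that the receipt file contains the full `GET_RECEIPT` JSON response; the file names are placeholders.

```python
import json

# Placeholder file names; point these at your own service certificate and receipt files.
service_certificate_file_name = "service_cert.pem"
receipt_file_name = "receipt.json"

with open(service_certificate_file_name) as service_certificate_file, open(
    receipt_file_name
) as receipt_file:
    # Only the content of the "receipt" field is passed to the verification utility.
    receipt = json.loads(receipt_file.read())["receipt"]
    service_certificate = service_certificate_file.read()

    try:
        # verify_receipt raises an exception if any verification step fails.
        verify_receipt(receipt, service_certificate)
        print("Receipt verification succeeded")
    except Exception as error:
        print(f"Receipt verification failed: {error}")
```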
It's assumed that both the receipt and the service certificate can be loaded from files. Make sure to update both the `service_certificate_file_name` and `receipt_file_name` constants with the respective files names of the service certificate and receipt you would like to verify. leaf_node_hex = compute_leaf_node( The `compute_leaf_node` function accepts as parameters the leaf components of the receipt (the `claimsDigest`, the `commitEvidence`, and the `writeSetDigest`) and returns the leaf node hash in hexadecimal form. -As detailed above, we compute the digest of `commitEvidence` (using the SHA256 `hashlib` function). Then, we convert both `writeSetDigest` and `claimsDigest` into arrays of bytes. Finally, we concatenate the three arrays, and we digest the result using the SHA256 function. +As detailed previously, we compute the digest of `commitEvidence` (using the SHA-256 `hashlib` function). Then, we convert both `writeSetDigest` and `claimsDigest` into arrays of bytes. Finally, we concatenate the three arrays, and we digest the result using the SHA256 function. ```python def compute_leaf_node( The last step of receipt verification is validating the certificate that was use check_endorsements(node_cert, service_cert, service_endorsements_certs) ``` -Likewise, we can use the CCF utility `check_endorsements` to validate that the certificate of the signing node is endorsed by the service identity. The certificate chain could be composed of previous service certificates, so we should validate that the endorsement is applied transitively if `serviceEndorsements` isn't an empty list. +Likewise, we can use the CCF utility `check_endorsements` to validate that the service identity endorses the signing node. The certificate chain could be composed of previous service certificates, so we should validate that the endorsement is applied transitively if `serviceEndorsements` isn't an empty list. ```python def check_endorsement(endorsee: Certificate, endorser: Certificate): def verify_openssl_certificate( ### Sample code -The full sample code used in the code walkthrough can be found below. +The full sample code used in the code walkthrough is provided. #### Main program |
confidential-ledger | Write Transaction Receipts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/write-transaction-receipts.md | More details about how a Merkle Tree is used in a Confidential Ledger can be fou ## Get write transaction receipts -### Setup and pre-requisites +### Setup and prerequisites Azure Confidential Ledger users can get a receipt for a specific transaction by using the [Azure Confidential Ledger client library](quickstart-python.md#use-the-data-plane-client-library). The following example shows how to get a write receipt using the [client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger/azure-confidentialledger), but the steps are the same with any other supported SDK for Azure Confidential Ledger. get_receipt_result = get_receipt_poller.result() ### Sample code -The full sample code used in the code walkthrough can be found below. +The full sample code used in the code walkthrough is provided. ```python import json The JSON response contains the following fields at the root level. * **state**: The status of the returned JSON response. The following are the possible values allowed: * `Ready`: The receipt returned in the response is available- * `Loading`: The receipt isn't yet available to be retrieved and the request will have to be retried + * `Loading`: The receipt isn't yet available to be retrieved and the request has to be retried * **transactionId**: The transaction ID associated with the write transaction receipt. The `receipt` field contains the following fields. -* **cert**: String with the [PEM](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail) public key certificate of the CCF node that signed the write transaction. The certificate of the signing node should always be endorsed by the service identity certificate. See also more details about how transactions get regularly signed and how the signature transactions are appended to the ledger in CCF at the following [link](https://microsoft.github.io/CCF/main/architecture/merkle_tree.html). +* **cert**: String with the [PEM](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail) public key certificate of the CCF node that signed the write transaction. The service identity certificate should always endorse the certificate of the signing node. For more details about how transactions get regularly signed and how the signature transactions are appended to the ledger in CCF, see the following [link](https://microsoft.github.io/CCF/main/architecture/merkle_tree.html). * **nodeId**: Hexadecimal string representing the [SHA-256](https://en.wikipedia.org/wiki/SHA-2) hash digest of the public key of the signing CCF node. The `leafComponents` field contains the following fields. * **writeSetDigest**: Hexadecimal string representing the SHA-256 hash digest of the [Key-Value store](https://microsoft.github.io/CCF/main/build_apps/kv/index.html), which contains all the keys and values written at the time the transaction was completed. For more information about the write set, see the related [CCF documentation](https://microsoft.github.io/CCF/main/overview/glossary.html#term-Write-Set). +## Application claims +Azure Confidential Ledger applications can attach arbitrary data, called [application claims](https://microsoft.github.io/CCF/main/use_apps/verify_tx.html#application-claims), to write transactions. These claims represent the actions executed during a write operation. 
When attached to a transaction, the SHA-256 digest of the claims object is included in the ledger and committed as part of the write transaction. The inclusion of the claim in the write transaction guarantees that the claim digest is signed in place and can't be tampered with. ++Later, application claims can be revealed in their plain format in the receipt payload corresponding to the same transaction where they were added. The exposed claims allow users to recompute the same claims digest that was attached and signed in place by the ledger during the transaction. The claims digest can be used as part of the write transaction receipt verification process, providing an offline way for users to fully verify the authenticity of the recorded claims. ++Application claims are currently supported in the preview API version `2023-01-18-preview`. ++### Write transaction receipt content with application claims ++Here's an example of a JSON response payload returned by an Azure Confidential Ledger instance that recorded application claims, when calling the `GET_RECEIPT` endpoint. ++```json +{ + "applicationClaims": [ + { + "kind": "LedgerEntry", + "ledgerEntry": { + "collectionId": "subledger:0", + "contents": "Hello world", + "protocol": "LedgerEntryV1", + "secretKey": "Jde/VvaIfyrjQ/B19P+UJCBwmcrgN7sERStoyHnYO0M=" + } + } + ], + "receipt": { + "cert": "--BEGIN CERTIFICATE--\nMIIBxTCCAUygAwIBAgIRAMR89lUNeIghDUfpyHi3QzIwCgYIKoZIzj0EAwMwFjEU\nMBIGA1UEAwwLQ0NGIE5ldHdvcmswHhcNMjMwNDI1MTgxNDE5WhcNMjMwNzI0MTgx\nNDE4WjATMREwDwYDVQQDDAhDQ0YgTm9kZTB2MBAGByqGSM49AgEGBSuBBAAiA2IA\nBB1DiBUBr9/qapmvAIPm1o3o3LRViSOkfFVI4oPrw3SodLlousHrLz+HIe+BqHoj\n4nBjt0KAS2C0Av6Q+Xg5Po6GCu99GQSoSfajGqmjy3j3bwjsGJi5wHh1pNbPmMm/\nTqNhMF8wDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUCPaDohOGjVgQ2Lb8Pmubg7Y5\nDJAwHwYDVR0jBBgwFoAU25KejcEmXDNnKvSLUwW/CQZIVq4wDwYDVR0RBAgwBocE\nfwAAATAKBggqhkjOPQQDAwNnADBkAjA8Ci9myzieoLoIy+7mUswVEjUG3wrEXtxA\nDRmt2PK9bTDo2m3aJ4nCQJtCWQRUlN0CMCMOsXL4NnfsSxaG5CwAVkDwLBUPv7Zy\nLfSh2oZ3Wn4FTxL0UfnJeFOz/CkDUtJI1A==\n--END CERTIFICATE--\n", + "leafComponents": { + "claimsDigest": "d08d8764437d09b2d4d07d52293cddaf40f44a3ea2176a0528819a80002df9f6", + "commitEvidence": "ce:2.13:850a25da46643fa41392750b6ca03c7c7d117c27ae14e3322873de6322aa7cd3", + "writeSetDigest": "6637eddb8741ab54cc8a44725be67fd9be390e605f0537e5a278703860ace035" + }, + "nodeId": "0db9a22e9301d1167a2a81596fa234642ad24bc742451a415b8d653af056795c", + "proof": [ + { + "left": "bcce25aa51854bd15257cfb0c81edc568a5a5fa3b81e7106c125649db93ff599" + }, + { + "left": "cc82daa27e76b7525a1f37ed7379bb80f6aab99f2b36e2e06c750dd9393cd51b" + }, + { + "left": "c53a15cbcc97e30ce748c0f44516ac3440e3e9cc19db0852f3aa3a3d5554dfae" + } + ], + "signature": "MGYCMQClZXVAFn+vflIIikwMz64YZGoH71DKnfMr3LXkQ0lhljSsvDrmtmi/oWwOsqy28PsCMQCMe4n9aXXK4R+vY0SIfRWSCCfaADD6teclFCkVNK4317ep+5ENM/5T/vDJf3V4IvI=" + }, + "state": "Ready", + "transactionId": "2.13" +} +``` ++Compared to the receipt example shown in the previous section, the JSON response contains another `applicationClaims` field that represents the list of application claims recorded by the ledger during the write transaction. Each object inside the `applicationClaims` list contains the following fields. ++* **kind**: It represents the kind of the application claim. The value indicates how to parse the application claim object for the provided type. ++* **ledgerEntry**: It represents an application claim derived from ledger entry data. 
The claim would contain the data recorded by the application during a write transaction (for example, the collection ID and the contents provided by the user) and the required information to compute the digest corresponding to the single claim object. ++* **digest**: It represents an application claim in digested form. This claim object would contain the precomputed digest by the application and the protocol used for the computation. ++The `ledgerEntry` field contains the following fields. ++* **protocol**: It represents the protocol to be used to compute the digest of a claim from the given claim data. ++* **collectionId**: The identifier of the collection written during the corresponding write transaction. ++* **contents**: The contents of the ledger written during the corresponding write transaction. ++* **secretKey**: A base64-encoded secret key. This key is to be used in the HMAC algorithm with the values provided in the application claim to obtain the claim digest. ++The `digest` field contains the following fields. ++* **protocol**: It represents the protocol used to compute the digest of the given claim. ++* **value**: The digest of the application claim, in hexadecimal form. This value would have to be hashed with the `protocol` value to compute the complete digest of the application claim. + ### More resources For more information about write transaction receipts and how CCF ensures the integrity of each transaction, see the following links: For more information about write transaction receipts and how CCF ensures the in * [Merkle Tree](https://microsoft.github.io/CCF/main/architecture/merkle_tree.html) * [Cryptography](https://microsoft.github.io/CCF/main/architecture/cryptography.html) * [Certificates](https://microsoft.github.io/CCF/main/operations/certificates.html)+* [Application Claims](https://microsoft.github.io/CCF/main/use_apps/verify_tx.html#application-claims) +* [User-Defined Claims in Receipts](https://microsoft.github.io/CCF/main/build_apps/example_cpp.html#user-defined-claims-in-receipts) ## Next steps |
container-apps | Manage Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md | Secrets Key Vault references aren't supported in PowerShell. +> [!NOTE] +> If you're using [UDR With Azure Firewall](./networking.md#user-defined-routes-udrpreview), you will need to add the `AzureKeyVault` service tag and the *login.microsoft.com* FQDN to the allow list for your firewall. To learn more, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewallpreview). + #### Key Vault secret URI and secret rotation The Key Vault secret URI must be in one of the following formats: |
container-apps | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md | User Defined Routes (UDR) and controlled egress through NAT Gateway are supporte ### User defined routes (UDR) - preview -You can use UDR on the workload profiles architecture to restrict outbound traffic from your container app through Azure Firewall or other network appliances. Configuring UDR is done outside of the Container Apps environment scope. +> [!NOTE] +> When using UDR with Azure Firewall in Azure Container Apps, you will need to add certain FQDNs and service tags to the allowlist for the firewall. To learn more, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewallpreview). ++You can use UDR on the workload profiles architecture to restrict outbound traffic from your container app through Azure Firewall or other network appliances. Configuring UDR is done outside of the Container Apps environment scope. UDR isn't supported for external environments. :::image type="content" source="media/networking/udr-architecture.png" alt-text="Diagram of how UDR is implemented for Container Apps."::: -Important notes for configuring UDR with Azure Firewall: +Azure creates a default route table for your virtual networks upon creation. By implementing a user-defined route table, you can control how traffic is routed within your virtual network. For example, you can create a UDR that routes all traffic to the firewall. ++#### Configuring UDR with Azure Firewall - preview -- You need to allow the `MicrosoftContainerRegistry` and its dependency `AzureFrontDoor.FirstParty` service tags to your Azure Firewall. Alternatively, you can add the following FQDNs: *mcr.microsoft.com* and **.data.mcr.microsoft.com*.+UDR is only supported on the workload profiles architecture. For a guide on how to set up UDR with Container Apps to restrict outbound traffic with Azure Firewall, visit the [how to for Container Apps and Azure Firewall](./user-defined-routes.md). ++The following FQDNs and service tags must be added to the allowlist for your firewall depending on which resources you're using: ++- For all scenarios, you need to allow the `MicrosoftContainerRegistry` and its dependency `AzureFrontDoor.FirstParty` service tags through your Azure Firewall. Alternatively, you can add the following FQDNs: *mcr.microsoft.com* and **.data.mcr.microsoft.com*. - If you're using Azure Container Registry (ACR), you need to add the `AzureContainerRegistry` service tag and the **.blob.core.windows.net* FQDN in the Azure Firewall. - If you're using [Docker Hub registry](https://docs.docker.com/desktop/allow-list/) and want to access it through the firewall, you need to add the following FQDNs to your firewall: *hub.docker.com*, *registry-1.docker.io*, and *production.cloudflare.docker.com*. - If you're using [Azure Key Vault references](./manage-secrets.md#reference-secret-from-key-vault), you will need to add the `AzureKeyVault` service tag and the *login.microsoft.com* FQDN to the allow list for your firewall.-- External environments aren't supported.--Azure creates a default route table for your virtual networks upon create. By implementing a user-defined route table, you can control how traffic is routed within your virtual network. For example, you can create a UDR that routes all traffic to the firewall. 
For a guide on how to set up UDR with Container Apps to restrict outbound traffic with Azure Firewall, visit the [how to for Container Apps and Azure Firewall](./user-defined-routes.md). ### NAT gateway integration - preview |
container-apps | User Defined Routes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/user-defined-routes.md | Your virtual networks in Azure have default route tables in place when you creat ## Configure firewall policies +> [!NOTE] +> When using UDR with Azure Firewall in Azure Container Apps, you will need to add certain FQDNs and service tags to the allowlist for the firewall. For example, the FQDNs *mcr.microsoft.com* and **.data.mcr.microsoft.com* are required for all scenarios. To learn more, see [configuring UDR with Azure Firewall](./networking.md#configuring-udr-with-azure-firewallpreview). + Now, all outbound traffic from your container app is routed to the firewall. Currently, the firewall still allows all outbound traffic through. In order to manage what outbound traffic is allowed or denied, you need to configure firewall policies. 1. In your *Azure Firewall* resource on the *Overview* page, select **Firewall policy** |
cosmos-db | Merge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md | To enroll in the preview, your Azure Cosmos DB account must meet all the followi - Azure Data Factory - Azure Stream Analytics - Logic Apps- - Azure Functions + - Azure Functions < 4.0.0 - Azure Search - Azure Cosmos DB Spark connector < 4.18.0 - Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET v3 SDK >= v3.27.0 or Java v4 SDK >= 4.42.0 If you enroll in the preview, the following connectors fail. - Azure Data Factory ¹ - Azure Stream Analytics ¹ - Logic Apps ¹-- Azure Functions ¹+- Azure Functions < 4.0.0 - Azure Search ¹ - Azure Cosmos DB Spark connector < 4.18.0 - Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET v3 SDK >= v3.27.0 or Java v4 SDK >= 4.42.0 |
cost-management-billing | Overview Azure Hybrid Benefit Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/overview-azure-hybrid-benefit-scope.md | description: Azure Hybrid Benefit is a licensing benefit that lets you bring you keywords: Previously updated : 04/20/2023 Last updated : 04/28/2023 -You can centrally manage your Azure Hybrid Benefit for SQL Server across the scope of an entire Azure subscription or overall billing account. +You can centrally manage your Azure Hybrid Benefit for SQL Server across the scope of an entire Azure subscription or overall billing account. To quickly learn how it works, watch the following video. ++>[!VIDEO https://www.youtube.com/embed/ReoLB9N76Lo] To use centrally managed licenses, you must have a specific role assigned to you, depending on your Azure agreement type: |
databox-online | Azure Stack Edge Gpu Configure Tls Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-configure-tls-settings.md | If you want to set system-wide TLS 1.2 for your environment, follow the guidelin - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 You can also add these cipher suites by directly editing the registry settings.+ The variable `$HklmSoftwarePath` must be defined before you run the following command, for example: + $HklmSoftwarePath = 'HKLM:\SOFTWARE' ```azurepowershell New-ItemProperty -Path "$HklmSoftwarePath\Policies\Microsoft\Cryptography\Configuration\SSL\00010002" -Name "Functions" -PropertyType String -Value ("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384") If you want to set system-wide TLS 1.2 for your environment, follow the guidelin ## Next steps -[Connect to Azure Resource Manager](./azure-stack-edge-gpu-connect-resource-manager.md) +[Connect to Azure Resource Manager](./azure-stack-edge-gpu-connect-resource-manager.md) |
databox-online | Azure Stack Edge Pro 2 Deploy Configure Network Compute Web Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md | Follow these steps to configure advanced network settings such as creating a swi 1. Select a **Virtual switch** to which you'll add a virtual network. 1. Provide a **Name** for the virtual network.- 1. Supply a unique number from 1-4096 as your **VLAN ID**. + 1. Supply a unique number from 1-4094 as your **VLAN ID**. You must specify a valid VLAN that's configured on the network. 1. Enter a **Subnet mask** and a **Gateway** depending on the configuration of your physical network in the environment. 1. Select **Apply**. You can add or delete virtual networks associated with your virtual switches. To 1. Select a virtual switch for which you want to create a virtual network. 1. Provide a **Name** for your virtual network.- 1. Enter a **VLAN ID** as a unique number in 1-4094 range. + 1. Enter a **VLAN ID** as a unique number in the 1-4094 range. You must specify a valid VLAN that's configured on the network. 1. Specify the **Subnet mask** and **Gateway** for your virtual LAN network as per the physical network configuration. 1. Select **Apply**. |
deployment-environments | How To Configure Catalog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md | Title: Add and configure a catalog -description: Learn how to add and configure a catalog in your Azure Deployment Environments dev center to provide deployment templates for your development teams. Catalogs are specialized repositories stored in GitHub or Azure DevOps. +description: Learn how to add a catalog in your dev center to provide environment templates for your developers. Catalogs are repositories stored in GitHub or Azure DevOps. |
deployment-environments | How To Configure Project Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-project-admin.md | Title: Provide administrative access to projects -description: Learn how to configure administrative access for dev managers by using the DevCenter Project Admin built-in role. +description: Learn how to configure administrative access for dev team leads by using the DevCenter Project Admin built-in role. Last updated 04/25/2023 -# Provide access for dev managers to Deployment Environments projects +# Provide access for dev team leads to Deployment Environments projects In Azure Deployment Environments, you can create multiple projects associated with the dev center to align with each team's requirements. By using the built-in DevCenter Project Admin role, you can delegate project administration to a member of a team. DevCenter Project Admin users can configure [project environment types](concept-environments-key-concepts.md#project-environment-types) to enable developers to create various types of [environments](concept-environments-key-concepts.md#environments) and apply settings to each environment type. -You can assign the DevCenter Project Admin role to a dev manager at either the project level or the environment type level. +You can assign the DevCenter Project Admin role to a dev team lead at either the project level or the environment type level. Based on the scope of access that you allow, a DevCenter Project Admin user can: Based on the scope of access that you allow, a DevCenter Project Admin user can: When you assign the role at the project level, the user can perform the preceding actions on all environment types at the project level. When you assign the role to specific environment types, the user can perform the actions only on the respective environment types. -## Assign permissions to dev managers for a project +## Assign permissions to dev team leads for a project 1. Select the project that you want your development team members to be able to access. 1. Select **Access control (IAM)** from the left menu. When you assign the role at the project level, the user can perform the precedin The users can now view the project and manage all the environment types that you've enabled within it. DevCenter Project Admin users can also [create environments from the Azure CLI](./quickstart-create-access-environments.md). -## Assign permissions to dev managers for an environment type +## Assign permissions to dev team leads for an environment type 1. Select the project that you want your development team members to be able to access. 2. Select **Environment types**, and then select the ellipsis (**...**) beside the specific environment type. |
deployment-environments | Quickstart Create And Configure Devcenter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md | Title: Create and configure a dev center -description: Learn how to create and configure a dev center in Azure Deployment Environments. In the quickstart, you create a dev center, attach an identity, attach a catalog, and create environment types. +description: Learn how to configure a dev center in Deployment Environments. You'll create a dev center, attach an identity, attach a catalog, and create environment types. |
deployment-environments | Tutorial Deploy Environments In Cicd Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/tutorial-deploy-environments-in-cicd-github.md | + + Title: 'Tutorial: Deploy environments in CI/CD with GitHub' +description: Learn how to integrate Azure Deployment Environments into your CI/CD pipeline by using GitHub Actions. ++++ Last updated : 04/13/2023+++# Tutorial: Deploy environments in CI/CD with GitHub +Continuous integration and continuous delivery (CI/CD) is a software development approach that helps teams automate the process of building, testing, and deploying software changes. CI/CD enables you to release software changes more frequently and with greater confidence. ++In this tutorial, you learn how to integrate Azure Deployment Environments into your CI/CD pipeline by using GitHub Actions. You use a workflow that features three branches: main, dev, and test. ++- The *main* branch is always considered production. +- You create feature branches from the *main* branch. +- You create pull requests to merge feature branches into *main*. ++This workflow is a small example for the purposes of this tutorial. Real-world workflows may be more complex. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Create and configure a dev center +> * Create a key vault +> * Create and configure a GitHub repository +> * Connect the catalog to your dev center +> * Configure deployment identities +> * Configure GitHub environments +> * Test the CI/CD pipeline ++## Prerequisites ++- An Azure account with an active subscription. + - [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- Owner permissions on the Azure subscription. +- A GitHub account. + - If you don't have one, sign up for [free](https://github.com/join). +- Install [Git](https://github.com/git-guides/install-git). +- Install the [Azure CLI](/cli/azure/install-azure-cli). +++## 1. Create and configure a dev center ++In this section, you create a dev center and a project with three environment types: Dev, Test, and Prod. ++- The Prod environment type contains the single production environment +- A new environment is created in Dev for each feature branch +- A new environment is created in Test for each pull request +### 1.1 Set up the Azure CLI ++To begin, sign in to Azure. Run the following command, and follow the prompts to complete the authentication process. ++```azurecli +az login +``` ++Next, install the Azure Dev Center extension for the CLI. ++```azurecli +az extension add --name devcenter --upgrade +``` ++Now that the extension is installed, register the `Microsoft.DevCenter` namespace. ++```azurecli +az provider register --namespace Microsoft.DevCenter +``` ++> [!TIP] +> Throughout this tutorial, you'll save several values as environment variables to use later. You may also want to record these values elsewhere to ensure they're available when needed. ++Get your user's ID and set it as an environment variable for later: ++```azurecli +MY_AZURE_ID=$(az ad signed-in-user show --query id -o tsv) +``` ++Retrieve the subscription ID for your current subscription. ++```azurecli +AZURE_SUBSCRIPTION_ID=$(az account show --query id --output tsv) +``` ++Retrieve the tenant ID for your current tenant. 
++```azurecli +AZURE_TENANT_ID=$(az account show --query tenantId --output tsv) +``` ++Set the following environment variables: ++```azurecli +LOCATION="eastus" +AZURE_RESOURCE_GROUP="my-dev-center-rg" +AZURE_DEVCENTER="my-dev-center" +AZURE_PROJECT="my-project" +AZURE_KEYVAULT="myuniquekeyvaultname" +``` ++### 1.2 Create a dev center ++A dev center is a collection of projects and environments that have similar settings. Dev centers provide access to a catalog of templates and artifacts that can be used to create environments. Dev centers also provide a way to manage access to environments and projects. ++Create a Resource Group ++```azurecli +az group create \ + --name $AZURE_RESOURCE_GROUP \ + --location $LOCATION +``` ++Create a new Dev Center ++```azurecli +az devcenter admin devcenter create \ + --name $AZURE_DEVCENTER \ + --identity-type SystemAssigned \ + --resource-group $AZURE_RESOURCE_GROUP \ + --location $LOCATION +``` ++The previous command outputs JSON. Save the values for `id` and `identity.principalId` as environment variables to use later. ++```azurecli +AZURE_DEVCENTER_ID=<id> +AZURE_DEVCENTER_PRINCIPAL_ID=<identity.principalId> +``` ++### 1.3 Assign dev center identity owner role on subscription ++A dev center needs permissions to assign roles on subscriptions associated with environment types. ++To reduce unnecessary complexity, in this tutorial, you use a single subscription for the dev center and all environment types. In practice, the dev center and target deployment subscriptions would likely be separate subscriptions with different policies applied. ++```azurecli +az role assignment create \ + --scope /subscriptions/$AZURE_SUBSCRIPTION_ID \ + --role Owner \ + --assignee-object-id $AZURE_DEVCENTER_PRINCIPAL_ID \ + --assignee-principal-type ServicePrincipal +``` ++### 1.4 Create the environment types ++ At the dev center level, environment types define the environments that development teams can create, like dev, test, sandbox, preproduction, or production. ++Create three new Environment types: **Dev**, **Test**, and **Prod**. ++```azurecli +az devcenter admin environment-type create \ + --name Dev \ + --resource-group $AZURE_RESOURCE_GROUP \ + --dev-center $AZURE_DEVCENTER +``` ++```azurecli +az devcenter admin environment-type create \ + --name Test \ + --resource-group $AZURE_RESOURCE_GROUP \ + --dev-center $AZURE_DEVCENTER +``` ++```azurecli +az devcenter admin environment-type create \ + --name Prod \ + --resource-group $AZURE_RESOURCE_GROUP \ + --dev-center $AZURE_DEVCENTER +``` ++### 1.5 Create a project ++A project is the point of access for the development team. Each project is associated with a dev center. ++Create a new Project ++```azurecli +az devcenter admin project create \ + --name $AZURE_PROJECT \ + --resource-group $AZURE_RESOURCE_GROUP \ + --location $LOCATION \ + --dev-center-id $AZURE_DEVCENTER_ID +``` ++The previous command outputs JSON. Save the `id` value as an environment variable to use later. ++```azurecli +AZURE_PROJECT_ID=<id> +``` ++Assign yourself the "DevCenter Project Admin" role on the project ++```azurecli +az role assignment create \ + --scope "$AZURE_PROJECT_ID" \ + --role "DevCenter Project Admin" \ + --assignee-object-id $MY_AZURE_ID \ + --assignee-principal-type User +``` ++### 1.6 Create project environment types ++At the project level, dev infra admins specify which environment types are appropriate for the development team. 
++Create a new project environment type for each of the environment types you created on the dev center. ++```azurecli +az devcenter admin project-environment-type create \ + --name Dev \ + --roles "{\"b24988ac-6180-42a0-ab88-20f7382dd24c\":{}}" \ + --deployment-target-id /subscriptions/$AZURE_SUBSCRIPTION_ID \ + --resource-group $AZURE_RESOURCE_GROUP \ + --location $LOCATION \ + --project $AZURE_PROJECT \ + --identity-type SystemAssigned \ + --status Enabled +``` ++```azurecli +az devcenter admin project-environment-type create \ + --name Test \ + --roles "{\"b24988ac-6180-42a0-ab88-20f7382dd24c\":{}}" \ + --deployment-target-id /subscriptions/$AZURE_SUBSCRIPTION_ID \ + --resource-group $AZURE_RESOURCE_GROUP \ + --location $LOCATION \ + --project $AZURE_PROJECT \ + --identity-type SystemAssigned \ + --status Enabled +``` ++```azurecli +az devcenter admin project-environment-type create \ + --name Prod \ + --roles "{\"b24988ac-6180-42a0-ab88-20f7382dd24c\":{}}" \ + --deployment-target-id /subscriptions/$AZURE_SUBSCRIPTION_ID \ + --resource-group $AZURE_RESOURCE_GROUP \ + --location $LOCATION \ + --project $AZURE_PROJECT \ + --identity-type SystemAssigned \ + --status Enabled +``` ++## 2. Create a key vault ++In this section, you create a new key vault. You use this key vault later in the tutorial to save a [personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#fine-grained-personal-access-tokens) from GitHub. ++```azurecli +az keyvault create \ + --name $AZURE_KEYVAULT \ + --resource-group $AZURE_RESOURCE_GROUP \ + --location $LOCATION \ + --enable-rbac-authorization true +``` ++Again, save the `id` from the previous command's JSON output as an environment variable. ++```azurecli +AZURE_KEYVAULT_ID=<id> +``` ++Give yourself the "Key Vault Administrator" role on the new key vault. ++```azurecli +az role assignment create \ + --scope $AZURE_KEYVAULT_ID \ + --role "Key Vault Administrator" \ + --assignee-object-id $MY_AZURE_ID \ + --assignee-principal-type User +``` ++Assign the dev center's identity the "Key Vault Secrets User" role. ++```azurecli +az role assignment create \ + --scope $AZURE_KEYVAULT_ID \ + --role "Key Vault Secrets User" \ + --assignee-object-id $AZURE_DEVCENTER_PRINCIPAL_ID \ + --assignee-principal-type ServicePrincipal +``` ++## 3. Create and configure a GitHub repository ++In this section, you create a new GitHub repository to store a catalog. Azure Deployment Environments supports both GitHub and Azure DevOps repositories. In this tutorial, you use GitHub. +### 3.1 Create a new GitHub repository ++In this step, you create a new repository in your GitHub account that has a predefined directory structure, branches, and files. These items are generated from a sample template repository. ++1. Use this link to generate a new GitHub repository from the [sample template](https://github.com/Azure-Samples/deployment-environments-cicd-tutorial/generate). + :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-generate-from-template.png" alt-text="Screenshot showing the GitHub create repository from template page."::: ++1. If you don't have a paid GitHub account, set your repository to **Public**. ++1. Select **Create repository from template**. ++1. On the **Actions** tab, notice that the Create Environment action fails. This behavior is expected; you can proceed to the next step. 
++### 3.2 Protect the repository's *main* branch ++You can protect important branches by setting branch protection rules. Protection rules define whether collaborators can delete or force push to the branch. They also set requirements for any pushes to the branch, such as passing status checks or a linear commit history. ++> [!NOTE] +> Protected branches are available in public repositories with GitHub Free and GitHub Free for organizations, and in public and private repositories with GitHub Pro, GitHub Team, GitHub Enterprise Cloud, and GitHub Enterprise Server. For more information, see "[GitHub's products](https://docs.github.com/en/get-started/learning-about-github/githubs-products)". ++1. If it's not already open, navigate to the main page of your repository. ++1. Under your repository name, select **Settings**. If you can't see the "Settings" tab, select the **...** dropdown menu, then select **Settings**. + :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-repo-settings.png" alt-text="Screenshot showing the GitHub repository page with settings highlighted."::: ++1. In the **Code and automation** section of the sidebar, select **Branches**. ++ :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-branches-protect.png" alt-text="Screenshot showing the settings page, with branches highlighted."::: ++1. Under **Branch protection rules**, select **Add branch protection rule**. + :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-protect-rule.png" alt-text="Screenshot showing the branch protection rule page, with Add branch protection rule highlighted. "::: ++1. Under **Branch name pattern**, enter *main*. + :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-branch-name-pattern.png" alt-text="Screenshot showing the branch name pattern text box, with main highlighted."::: + +1. Under **Protect matching branches**, select **Require a pull request before merging**. ++ :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-require-pull-request.png" alt-text="Screenshot showing protect matching branches with Require a pull request before merging selected and highlighted."::: ++1. Optionally, you can enable [more protection rules](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/managing-a-branch-protection-rule#creating-a-branch-protection-rule). ++1. Select **Create**. ++### 3.3 Configure repository variables ++> [!NOTE] +> Configuration variables for GitHub Actions are in beta and subject to change. ++1. In the **Security** section of the sidebar, select **Secrets and variables**, then select **Actions**. ++1. Select the **Variables** tab. + :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-security-menu.png" alt-text="Screenshot showing the Security section of the sidebar with Actions highlighted."::: ++1. For each item in the table: ++ 1. Select **New repository variable**. + 2. In the **Name** field, enter the variable name. + 3. In the **Value** field, enter the value described in the table. + 4. Select **Add variable**. 
++ | Variable name | Variable value | + | | - | + | AZURE_DEVCENTER | Dev center name | + | AZURE_PROJECT | Project name | + | AZURE_CATALOG | Set to: _Environments_ | + | AZURE_CATALOG_ITEM | Set to: _FunctionApp_ | + | AZURE_SUBSCRIPTION_ID | Azure subscription ID (GUID) | + | AZURE_TENANT_ID | Azure tenant ID (GUID) | ++ :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-variables.png" alt-text="Screenshot showing the variables page with the variables table."::: ++### 3.4 Create a GitHub personal access token ++Next, create a [fine-grained personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#fine-grained-personal-access-tokens) to enable your dev center to connect to your repository and consume the environment catalog. ++> [!NOTE] +> Fine-grained personal access tokens are currently in beta and subject to change. To leave feedback, see the [feedback discussion](https://github.com/community/community/discussions/36441). ++1. In the upper-right corner of any page on GitHub.com, select your profile photo, then select **Settings**. ++1. In the left sidebar, select **Developer settings**. ++1. In the left sidebar, under **Personal access tokens**, select **Fine-grained tokens**, and then select **Generate new token**. + :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-fine-grained-personal-access-token.png" alt-text="Screenshot showing the GitHub personal access token options, with Fine-grained tokens and Generate new token highlighted."::: ++1. On the New fine-grained personal access token page, under **Token name**, enter a name for the token. ++1. Under **Expiration**, select an expiration for the token. ++1. Select your GitHub user under **Resource owner**. ++1. Under **Repository access**, select **Only select repositories**, then in the **Selected repositories** dropdown, search for and select the repository you created. + :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-repo-access.png" alt-text="Screenshot showing GitHub repository access options, with Only select repositories highlighted."::: ++1. Under **Permissions**, select **Repository permissions**, and change **Contents** to **Read-only**. ++ :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-repo-permissions.png" alt-text="Screenshot showing GitHub repository permissions with Contents highlighted."::: ++1. Select **Generate token**. + +1. Copy your personal access token now. You cannot view it again. ++### 3.5 Save personal access token to key vault ++Next, save the personal access token as a key vault secret named _pat_. ++```azurecli +az keyvault secret set \ + --name pat \ + --vault-name $AZURE_KEYVAULT \ + --value "github_pat_..." +``` ++## 4. Connect the catalog to your dev center ++A catalog is a repository that contains a set of catalog items. Catalog items consist of an IaC template and a manifest file. The template defines the environment, and the manifest provides metadata about the template. Development teams use catalog items from the catalog to create environments. ++The template you used to create your GitHub repository contains a catalog in the _Environments_ folder. ++#### Add the catalog to your dev center ++In the following command, replace `< Organization/Repository >` with your GitHub organization and repository name. 
++```azurecli +az devcenter admin catalog create \ + --name Environments \ + --resource-group $AZURE_RESOURCE_GROUP \ + --dev-center $AZURE_DEVCENTER \ + --git-hub path="/Environments" branch="main" secret-identifier="https://$AZURE_KEYVAULT.vault.azure.net/secrets/pat" uri="https://github.com/< Organization/Repository >.git" +``` ++## 5. Configure deployment identities ++[OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is an authentication method that uses short-lived tokens to offer hardened security. It's the recommended way to authenticate GitHub Actions to Azure. ++You can also authenticate a service principal directly using a secret, but that is out of scope for this tutorial. +### 5.1 Generate deployment identities ++1. Register [new Active Directory applications and service principals](../active-directory/develop/howto-create-service-principal-portal.md) for each of the three environment types. ++ Create the Active Directory application for **Dev**. ++ ```azurecli + az ad app create --display-name "$AZURE_PROJECT-Dev" + ``` ++ This command outputs JSON with an `id` that you use when creating federated credentials with Graph API, and an `appId` (also called client ID). ++ Set the following environment variables: ++ ```azurecli + DEV_AZURE_CLIENT_ID=<appId> + DEV_APPLICATION_ID=<id> + ``` ++ Repeat for **Test**: ++ ```azurecli + az ad app create --display-name "$AZURE_PROJECT-Test" + ``` ++ ```azurecli + TEST_AZURE_CLIENT_ID=<appId> + TEST_APPLICATION_ID=<id> + ``` ++ And **Prod**: ++ ```azurecli + az ad app create --display-name "$AZURE_PROJECT-Prod" + ``` ++ ```azurecli + PROD_AZURE_CLIENT_ID=<appId> + PROD_APPLICATION_ID=<id> + ``` ++2. Create a service principal for each application. ++ Run the following command to create a new service principal for **Dev**. ++ ```azurecli + az ad sp create --id $DEV_AZURE_CLIENT_ID + ``` ++ This command generates JSON output with a different `id` that's used in the next step. ++ Set the following environment variables: ++ ```azurecli + DEV_SERVICE_PRINCIPAL_ID=<id> + ``` ++ Repeat for **Test**: ++ ```azurecli + az ad sp create --id $TEST_AZURE_CLIENT_ID + ``` ++ ```azurecli + TEST_SERVICE_PRINCIPAL_ID=<id> + ``` ++ And **Prod**: ++ ```azurecli + az ad sp create --id $PROD_AZURE_CLIENT_ID + ``` ++ ```azurecli + PROD_SERVICE_PRINCIPAL_ID=<id> + ``` ++3. Run the following commands to [create new federated identity credentials](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for each Active Directory application. ++ In each of the three following commands, replace `< Organization/Repository >` with your GitHub organization and repository name. 
++ Create the federated identity credential for **Dev**: ++ ```azurecli + az rest --method POST \ + --uri "https://graph.microsoft.com/beta/applications/$DEV_APPLICATION_ID/federatedIdentityCredentials" \ + --body '{"name":"ADEDev","issuer":"https://token.actions.githubusercontent.com","subject":"repo:< Organization/Repository >:environment:Dev","description":"Dev","audiences":["api://AzureADTokenExchange"]}' + ``` ++ For **Test**: ++ ```azurecli + az rest --method POST \ + --uri "https://graph.microsoft.com/beta/applications/$TEST_APPLICATION_ID/federatedIdentityCredentials" \ + --body '{"name":"ADETest","issuer":"https://token.actions.githubusercontent.com","subject":"repo:< Organization/Repository >:environment:Test","description":"Test","audiences":["api://AzureADTokenExchange"]}' + ``` ++ And **Prod**: ++ ```azurecli + az rest --method POST \ + --uri "https://graph.microsoft.com/beta/applications/$PROD_APPLICATION_ID/federatedIdentityCredentials" \ + --body '{"name":"ADEProd","issuer":"https://token.actions.githubusercontent.com","subject":"repo:< Organization/Repository >:environment:Prod","description":"Prod","audiences":["api://AzureADTokenExchange"]}' + ``` ++### 5.2 Assign roles to deployment identities ++1. Assign each deployment identity the Reader role on the project. ++ ```azurecli + az role assignment create \ + --scope "$AZURE_PROJECT_ID" \ + --role Reader \ + --assignee-object-id $DEV_SERVICE_PRINCIPAL_ID \ + --assignee-principal-type ServicePrincipal + ``` ++ ```azurecli + az role assignment create \ + --scope "$AZURE_PROJECT_ID" \ + --role Reader \ + --assignee-object-id $TEST_SERVICE_PRINCIPAL_ID \ + --assignee-principal-type ServicePrincipal + ``` ++ ```azurecli + az role assignment create \ + --scope "$AZURE_PROJECT_ID" \ + --role Reader \ + --assignee-object-id $PROD_SERVICE_PRINCIPAL_ID \ + --assignee-principal-type ServicePrincipal + ``` ++1. Assign each deployment identity the "Deployment Environments User" role on its corresponding environment type. ++ ```azurecli + az role assignment create \ + --scope "$AZURE_PROJECT_ID/environmentTypes/Dev" \ + --role "Deployment Environments User" \ + --assignee-object-id $DEV_SERVICE_PRINCIPAL_ID \ + --assignee-principal-type ServicePrincipal + ``` ++ ```azurecli + az role assignment create \ + --scope "$AZURE_PROJECT_ID/environmentTypes/Test" \ + --role "Deployment Environments User" \ + --assignee-object-id $TEST_SERVICE_PRINCIPAL_ID \ + --assignee-principal-type ServicePrincipal + ``` ++ ```azurecli + az role assignment create \ + --scope "$AZURE_PROJECT_ID/environmentTypes/Prod" \ + --role "Deployment Environments User" \ + --assignee-object-id $PROD_SERVICE_PRINCIPAL_ID \ + --assignee-principal-type ServicePrincipal + ``` ++## 6. Configure GitHub environments ++With GitHub environments, you can configure environments with protection rules and secrets. A workflow job that references an environment must follow any protection rules for the environment before running or accessing the environment's secrets. ++Create three environments: Dev, Test, and Prod to map to the project's environment types. ++> [!NOTE] +> Environments, environment secrets, and environment protection rules are available in public repositories for all products. For access to environments, environment secrets, and deployment branches in **private** or **internal** repositories, you must use GitHub Pro, GitHub Team, or GitHub Enterprise. 
For access to other environment protection rules in **private** or **internal** repositories, you must use GitHub Enterprise. For more information, see "[GitHubΓÇÖs products.](https://docs.github.com/en/get-started/learning-about-github/githubs-products)" ++### 6.1 Create the Dev environment ++1. On GitHub.com, navigate to the main page of your repository. ++1. Under your repository name, select **Settings**. If you can't see the "Settings" tab, select the **...** dropdown menu, then select **Settings**. ++1. In the left sidebar, select **Environments**. + +1. Select **New environment** and enter _Dev_ for the environment name, then select **Configure environment**. + + :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-create-environment.png" alt-text="Screenshot showing the Environments Add pane, with the environment name Dev, and Configure Environment highlighted. "::: ++1. Under **Environment secrets**, select **Add Secret** and enter _AZURE_CLIENT_ID_ for **Name**. ++ :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-secret.png" alt-text="Screenshot showing the Environment Configure Dev pane, with Add secret highlighted."::: ++1. For **Value**, enter the client ID (`appId`) for the **Dev** Azure AD app you created earlier (saved as the `$DEV_AZURE_CLIENT_ID` environment variable). + + :::image type="content" source="media/tutorial-deploy-environments-in-cicd-github/github-add-secret.png" alt-text="Screenshot of the Add secret box with the name AZURE CLIENT ID, the value set to an ID number, and add secret highlighted."::: ++1. Select **Add secret**. ++### 6.2 Create the Test environment ++Return to the main environments page by selecting **Environments** in the left sidebar. ++1. Select **New environment** and enter _Test_ for the environment name, then select **Configure environment**. ++2. Under **Environment secrets**, select **Add Secret** and enter _AZURE_CLIENT_ID_ for **Name**. ++3. For **Value**, enter the client ID (`appId`) for the **Test** Azure AD app you created earlier (saved as the `$TEST_AZURE_CLIENT_ID` environment variable). ++4. Select **Add secret**. ++### 6.3 Create the Prod environment ++Once more, return to the main environments page by selecting **Environments** in the left sidebar ++1. Select **New environment** and enter _Prod_ for the environment name, then select **Configure environment**. ++2. Under **Environment secrets**, select **Add Secret** and enter _AZURE_CLIENT_ID_ for **Name**. ++3. For **Value**, enter the client ID (`appId`) for the **Prod** Azure AD app you created earlier (saved as the `$PROD_AZURE_CLIENT_ID` environment variable). ++4. Select **Add secret**. ++Next, set yourself as a [required reviewer](https://docs.github.com/en/actions/managing-workflow-runs/reviewing-deployments) for this environment. When attempting to deploy to Prod, the GitHub Actions wait for an approval before starting. While a job is awaiting approval, it has a status of "Waiting". If a job isn't approved within 30 days, it automatically fails. ++For more information about environments and required approvals, see "[Using environments for deployment](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment)." ++1. Select **Required reviewers**. ++2. Search for and select your GitHub user. You may enter up to six people or teams. Only one of the required reviewers needs to approve the job for it to proceed. ++3. 
Select **Save protection rules**. ++Finally configure *main* as the deployment branch: ++1. In the **Deployment branches dropdown**, choose **Selected branches**. ++2. Select **Add deployment branch rule** and enter *main* for the **Branch name pattern**. ++3. Select **Add rule**. ++## 7. Test the CI/CD pipeline ++In this section, you make some changes to the repository and test the CI/CD pipeline. +### 7.1 Clone the repository ++1. In your terminal, cd into a folder where you'd like to clone your repository locally. ++2. Clone the repository. Be sure to replace `< Organization/Repository >` in the following command with your GitHub organization and repository name. ++ ```azurecli + git clone https://github.com/< Organization/Repository >.git + ``` ++3. Navigate into the cloned directory. ++ ```azurecli + cd Repository + ``` ++4. Next, create a new branch and publish it remotely. ++ ```azurecli + git checkout -b feature1 + ``` ++ ```azurecli + git push -u origin feature1 + ``` ++ A new environment is created in Azure specific to this branch. ++5. Go to [GitHub](https://github.com) and navigate to the main page of your newly created repository. ++6. Under your repository name, select **Actions**. ++ You should see a new Create Environment workflow running. ++### 7.2 Make a change to the code ++1. Open the locally cloned repo in VS Code. ++1. In the ADE.Tutorial folder, make a change to a file. ++1. Save your change. ++### 7.3 Push your changes to update the environment ++1. Stage your changes and push to the `feature1` branch. ++ ``` azurecli + git add . + git commit -m '<commit message>' + git push + ``` ++1. On your repository's **Actions** page, you see a new Update Environment workflow running. ++### 7.4 Create a pull request ++1. Create a pull request on GitHub.com `main <- feature1`. ++1. On your repository's **Actions** page, you see a new workflow is started to create an environment specific to the PR using the Test environment type. ++### 7.5 Merge the PR ++1. On [GitHub](https://github.com), navigate to the pull request you created. ++1. Merge the PR. ++ Your changes are published into the production environment, and delete the branch and pull request environments. ++## Clean up resources +++## Next steps ++- Learn more about managing your environments by using the CLI in [Create and access an environment by using the Azure CLI](how-to-create-access-environments.md). +- For complete command listings, refer to the [Microsoft Deployment Environments and Azure Deployment Environments Azure CLI documentation](https://aka.ms/CLI-reference). |
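The identity steps in the Deployment Environments entry above ask you to copy the `appId` and `id` values out of the raw JSON output by hand. As a convenience, the following is a minimal sketch, assuming the same `$AZURE_PROJECT` variable, that captures those values directly with `--query` expressions instead of manual copy; verify the query paths against the JSON your CLI version actually returns.

```azurecli
# Sketch: capture the Dev application's client ID, object ID, and service principal
# object ID without copying them from raw JSON. Repeat for Test and Prod by changing
# the display-name suffix.
DEV_AZURE_CLIENT_ID=$(az ad app create --display-name "$AZURE_PROJECT-Dev" --query appId -o tsv)
DEV_APPLICATION_ID=$(az ad app show --id "$DEV_AZURE_CLIENT_ID" --query id -o tsv)
DEV_SERVICE_PRINCIPAL_ID=$(az ad sp create --id "$DEV_AZURE_CLIENT_ID" --query id -o tsv)

echo "Dev client ID: $DEV_AZURE_CLIENT_ID"
```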
dns | Dns Reverse Dns Hosting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-hosting.md | To configure reverse DNS for an Azure-owned IP address assigned to your Azure se Before reading this article, you should familiarize yourself with the [overview of reverse DNS](dns-reverse-dns-overview.md) and it's supported in Azure. -In this article, you'll learn how to create your first reverse lookup DNS zone and record by using the Azure portal, Azure PowerShell, Azure classic CLI, and Azure CLI. +In this article, you learn how to create your first reverse lookup DNS zone and record by using the Azure portal, Azure PowerShell, Azure classic CLI, and Azure CLI. ## Create a reverse lookup DNS zone In this article, you'll learn how to create your first reverse lookup DNS zone a | **Subscription** | Select a subscription to create the DNS zone in.| | **Resource group** | Select or create a new resource group. To learn more about resource groups, read the [Resource Manager](../azure-resource-manager/management/overview.md?toc=%2fazure%2fdns%2ftoc.json#resource-groups) overview article.| | **Name** | Enter a name for the DNS zone. The name of the zone is crafted differently for IPv4 and IPv6 prefixes. Use the instructions for [IPv4](#ipv4) or [IPv6](#ipv6) to name your zone. |- | **Location** | Select the location for the resource group. The location will already be selected if you're using a previously created resource group. | + | **Location** | Select the location for the resource group. The location is already be selected if you're using a previously created resource group. | 1. Select **Review + create**, and then select **Create** once validation has passed. The name of an IPv4 reverse lookup zone is based on the IP range that it represe > [!NOTE] > When you're creating classless reverse DNS lookup zones in Azure DNS, you must use a hyphen (`-`) instead of a forward slash (`/`) in the zone name. >-> For example, for the IP range of 192.0.2.128/26, you'll use `128-26.2.0.192.in-addr.arpa` as the zone name instead of `128/26.2.0.192.in-addr.arpa`. +> For example, for the IP range of 192.0.2.128/26, use `128-26.2.0.192.in-addr.arpa` as the zone name instead of `128/26.2.0.192.in-addr.arpa`. > > Although the DNS standards support both methods, Azure DNS doesn't support DNS zone names that contain the forward slash (`/`) character. az network dns zone create -g mydnsresourcegroup -n 0.0.0.0.d.c.b.a.8.b.d.0.1.0. Once the reverse DNS lookup zone gets created, you then need to make sure the zone gets delegated from the parent zone. DNS delegation enables the DNS name resolution process to find the name servers that host your reverse DNS lookup zone. Those name servers can then answer DNS reverse queries for the IP addresses in your address range. -For forward lookup zones, the process of delegating a DNS zone is described in [Delegate your domain to Azure DNS](dns-delegate-domain-azure-dns.md). Delegation for reverse lookup zones works the same way. The only difference is that you'll need to configure the name servers with the ISP. The ISP manages your IP range, that's why they need to update the name servers instead of domain name registrar. +For forward lookup zones, the process of delegating a DNS zone is described in [Delegate your domain to Azure DNS](dns-delegate-domain-azure-dns.md). Delegation for reverse lookup zones works the same way. The only difference is that you need to configure the name servers with the ISP. 
The ISP manages your IP range, that's why they need to update the name servers instead of domain name registrar. ## Create a DNS PTR record The following example explains the process of creating a PTR record for a revers :::image type="content" source="./media/dns-reverse-dns-hosting/create-record-set-ipv4.png" alt-text="Screenshot of create IPv4 pointer record set."::: -1. The name of the record set for a PTR record will be the rest of the IPv4 address in reverse order. +1. The name of the record set for a PTR record is the rest of the IPv4 address in reverse order. - In this example, the first three octets are already populated as part of the zone name `.2.0.192`. That's why only the last octet is needed in the **Name** box. For example, you'll give your record set the name of **15** for a resource whose IP address is `192.0.2.15`. + In this example, the first three octets are already populated as part of the zone name `.2.0.192`. That's why only the last octet is needed in the **Name** box. For example, give your record set the name of **15** for a resource whose IP address is `192.0.2.15`. :::image type="content" source="./media/dns-reverse-dns-hosting/create-ipv4-ptr.png" alt-text="Screenshot of create IPv4 pointer record."::: The following example explains the process of creating new PTR record for IPv6. :::image type="content" source="./media/dns-reverse-dns-hosting/create-record-set-ipv6.png" alt-text="Screenshot of create IPv6 pointer record set."::: -1. The name of the record set for a PTR record will be the rest of the IPv6 address in reverse order. It must not include any zero compression. +1. The name of the record set for a PTR record is the rest of the IPv6 address in reverse order. It must not include any zero compression. - In this example, the first 64 bits of the IPv6 gets populated as part of the zone name (0.0.0.0.c.d.b.a.8.b.d.0.1.0.0.2.ip6.arpa). That's why only the last 64 bits are supplied in the **Name** box. The last 64 bits of the IP address gets entered in reverse order, with a period as the delimiter between each hexadecimal number. You'll name your record set **e.5.0.4.9.f.a.1.c.b.0.1.4.2.5.f** if you have a resource whose IP address is 2001:0db8:abdc:0000:f524:10bc:1af9:405e. + In this example, the first 64 bits of the IPv6 gets populated as part of the zone name (0.0.0.0.c.d.b.a.8.b.d.0.1.0.0.2.ip6.arpa). That's why only the last 64 bits are supplied in the **Name** box. The last 64 bits of the IP address gets entered in reverse order, with a period as the delimiter between each hexadecimal number. Name your record set **e.5.0.4.9.f.a.1.c.b.0.1.4.2.5.f** if you have a resource whose IP address is 2001:0db8:abdc:0000:f524:10bc:1af9:405e. :::image type="content" source="./media/dns-reverse-dns-hosting/create-ipv6-ptr.png" alt-text="Screenshot of create IPv6 pointer record."::: To view the records that you created, browse to your DNS zone in the Azure porta ### IPv4 -The **DNS zone** page will show the IPv4 PTR record: +The **DNS zone** page shows the IPv4 PTR record: :::image type="content" source="./media/dns-reverse-dns-hosting/view-ipv4-ptr-record.png" alt-text="Screenshot of IPv4 pointer record on overview page." lightbox="./media/dns-reverse-dns-hosting/view-ipv4-ptr-record-expanded.png"::: |
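The PTR record steps in the entry above use the Azure portal. As a hedged alternative, the Azure CLI can create the same IPv4 record; the sketch below assumes the `mydnsresourcegroup` resource group and `2.0.192.in-addr.arpa` zone from this article's examples, and `nsX.contoso.com` is a placeholder target host name.

```azurecli
# Sketch: create the PTR record for 192.0.2.15 in the 2.0.192.in-addr.arpa zone.
az network dns record-set ptr add-record \
  --resource-group mydnsresourcegroup \
  --zone-name 2.0.192.in-addr.arpa \
  --record-set-name 15 \
  --ptrdname nsX.contoso.com
```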
dns | Dns Reverse Dns Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-overview.md | |
external-attack-surface-management | Deploying The Defender Easm Azure Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/deploying-the-defender-easm-azure-resource.md | Before you create a Defender EASM resource group, we recommend that you are fami 3. Select or enter the following property values: - - **Subscription**: Select an Azure subscription. - - **Resource Group**: Give the resource group a name. - - **Region**: Specify an Azure location. This location is where the resource group stores metadata about the resource. For compliance reasons, you may want to specify where that metadata is stored. In general, we recommend that you specify a location where most of your resources will be. Using the same location can simplify your template. The following regions are supported: -- - southcentralus - - eastus - - australiaeast - - westus3 - - swedencentral - - eastasia - - japaneast - - westeurope - - northeurope - - switzerlandnorth +- **Subscription**: Select an Azure subscription. +- **Resource Group**: Give the resource group a name. +- **Region**: Specify an Azure location. This location is where the resource group stores metadata about the resource. For compliance reasons, you may want to specify where that metadata is stored. In general, we recommend that you specify a location where most of your resources will be. Using the same location can simplify your template. The following regions are supported: ++- southcentralus +- eastus +- australiaeast +- westus3 +- swedencentral +- eastasia +- japaneast +- westeurope +- northeurope +- switzerlandnorth  After you create a resource group, you can create EASM resources within the grou - [Understanding dashboards](understanding-dashboards.md) ++ |
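The Defender EASM entry above lists the regions that support the resource. If you prefer to stage the resource group from the command line before deploying, a minimal sketch follows; the group name is a placeholder, and the region must be one of the supported values listed above.

```azurecli
# Sketch: create the resource group in a supported region before deploying Defender EASM.
az group create --name myEasmResourceGroup --location southcentralus
```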
firewall | Compliance Certifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/compliance-certifications.md | -Azure Firewall is Payment Card Industry (PCI), Service Organization Controls (SOC), International Organization for Standardization (ISO), and HITRUST compliant. +To help you meet your own compliance obligations across regulated industries and markets worldwide, Azure maintains the largest compliance portfolio in the industry both in terms of breadth (total number of offerings) and depth (number of [customer-facing services](https://azure.microsoft.com/services/) in assessment scope). For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). -The following certifications are for global Azure and Azure Government. +## Azure Firewall audit scope -## Global Azure certifications +Microsoft retains independent, third-party auditing firms to conduct audits of Microsoft cloud services. The resulting compliance assurances are applicable to both Azure and Azure Government cloud environments. Compliance offerings are grouped into four segments: globally applicable, US government, industry specific, and region/country specific. Azure compliance certificates and audit reports state clearly which cloud services are in scope for independent third-party audits. Different audits may have different cloud services in audit scope. -The following Azure Firewall certifications are for global Azure: --- 23 NYCRR 500-- AFM and DNB (Netherlands)-- AMF and ACPR (France)-- APRA(Australia)-- Argentina PDPA-- Australia IRAP-- CDSA-- CFTC 1.31-- CSA STAR Attestation-- CSA STAR Certification-- CSA STAR Self-Assessment-- Canadian Privacy Laws-- DPP(UK)-- EU ENISA IAF-- EU Model Clauses-- European Banking Authority-- FCA and PRA (UK)-- FERPA (US)-- FFIEC(US)-- FINMA (Switzerland)-- FSA (Denmark)-- GLBA (US)-- Germany C5-- GxP (FDA 21 CFR Part 11)-- HIPAA-- HITECH Act (US)-- HITRUST-- ISO 20000-1:2011-- ISO 22301:2012-- ISO 27001:2013-- ISO 27017:2015-- ISO 27018:2014-- ISO 9001:2015-- Japan My Number Act-- K-ISMS-- KNF(Poland)-- MAS and ABS (Singapore)-- MPAA(US)-- NBB and FSMA (Belgium)-- NEN 7510:2011 (Netherlands)-- NHS IG Toolkit (UK)-- Netherlands BIR 2012-- OSFI(Canada)-- PCI DSS Level 1-- RBI and IRDAI (India)-- SOC 1 Type 2-- SOC 2 Type 2-- SOC 3-- SOX (US)-- Spain DPA-- TISAX-- TruSight-- UK G-Cloud-- WCAG 2.0---## Azure Government certifications --The following Azure Firewall certifications are for Azure Government: --- CJIS-- CNSSI 1253-- CSA STAR Attestation-- DFARS-- DoD DISA SRG Level 2-- DoE 10 CFR Part 810-- EAR-- FIPS 140-2-- FedRAMP High-- HIPAA-- HITECH Act (US)-- HITRUST-- IRS 1075-- ITAR-- MARS-E (US)-- NERC-- NIST Cybersecurity Framework-- NIST SP 800-171-- SOC 1 Type 2-- SOC 2 Type 2-- SOC 3-- SOX (US)-- Section 508 VPATs+Azure Firewall is included in many Azure compliance audits such as CSA STAR, ISO, SOC, PCI DSS, HITRUST, FedRAMP, DoD, and others. For the latest insight into Azure Firewall compliance audit scope, see [Cloud services in audit scope](/azure/compliance/offerings/cloud-services-in-audit-scope). ## Next steps -For more information about Microsoft compliance, see the following information. +For more information about Azure compliance, see the following information. 
-- [Microsoft Compliance Guide](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuide)-- [Overview of Microsoft Azure compliance](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942)+- [Azure compliance](../compliance/index.yml) +- [Azure and other Microsoft services compliance offerings](/azure/compliance/offerings/) |
firewall | Enable Top Ten And Flow Trace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/enable-top-ten-and-flow-trace.md | Azure Firewall has two new diagnostics logs you can use to help monitor your fir The Top flows log (known in the industry as Fat Flows), shows the top connections that are contributing to the highest throughput through the firewall. +Because of the CPU impact, enable Top flows only when you need to troubleshoot a specific issue. The recommendation is to enable Top flows no longer than one week at a time. + ### Prerequisites - Enable [structured logs](firewall-structured-logs.md#enabledisable-structured-logs) There are a few ways to verify the update was successful, but you can navigate t ## Flow trace -Currently, the firewall logs show traffic through the firewall in the first attempt of a TCP connection, known as the *syn* packet. However, this doesn't show the full journey of the packet in the TCP handshake. As a result, it's difficult to troubleshoot if a packet is dropped, or asymmetric routing has occurred. +Currently, the firewall logs show traffic through the firewall in the first attempt of a TCP connection, known as the *syn* packet. However, this doesn't show the full journey of the packet in the TCP handshake. As a result, it's difficult to troubleshoot if a packet is dropped, or asymmetric routing has occurred. ++Because of the disk impact, enable Flow trace only when you need to troubleshoot a specific issue. The recommendation is to enable Flow trace no longer than one week at a time. The following additional properties can be added: -- SYN-ACK -- FIN -- FIN-ACK -- RST +- SYN-ACK ++ Ack flag that indicates acknowledgment of the SYN packet. +- FIN ++ Finished flag of the original packet flow. No more data is transmitted in the TCP flow. +- FIN-ACK ++ Ack flag that indicates acknowledgment of the FIN packet. ++- RST ++ Reset flag that indicates the original sender won't receive more data. + - INVALID (flows) + Indicates the packet can't be identified or doesn't have any state; the TCP packet lands on a Virtual Machine Scale Sets instance that doesn't have any prior history for this packet. + ### Prerequisites - Enable [structured logs](firewall-structured-logs.md#enabledisable-structured-logs) Select-AzSubscription -Subscription <subscription_id> or <subscription_name> Register-AzProviderFeature -FeatureName AFWEnableTcpConnectionLogging -ProviderNamespace Microsoft.Network Register-AzResourceProvider -ProviderNamespace Microsoft.Network ```++It can take several minutes for this to take effect. Once the feature is completely registered, consider performing an update on Azure Firewall for the change to take effect immediately. ++To check the status of the AzResourceProvider registration, you can run the Azure PowerShell command: ++`Get-AzProviderFeature -FeatureName "AFWEnableTcpConnectionLogging" -ProviderNamespace "Microsoft.Network"` + ### Create a diagnostic setting and enable Resource Specific Table 1. In the Diagnostic settings tab, select **Add diagnostic setting**. |
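The feature-registration check in the entry above uses Azure PowerShell. A hedged Azure CLI equivalent is sketched below; it assumes the same `AFWEnableTcpConnectionLogging` feature flag and re-registers the `Microsoft.Network` provider once the feature shows as registered.

```azurecli
# Sketch: check the AFWEnableTcpConnectionLogging feature state with the Azure CLI.
# Re-run until the state reads "Registered", then re-register the resource provider.
az feature show \
  --namespace Microsoft.Network \
  --name AFWEnableTcpConnectionLogging \
  --query properties.state -o tsv

az provider register --namespace Microsoft.Network
```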
firewall | Firewall Sftp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-sftp.md | |
firewall | Premium Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md | The Azure Firewall signatures/rulesets include: - 20 to 40+ new rules are released each day. - Low false positive rating by using state-of-the-art malware detection techniques such as global sensor network feedback loop. -IDPS allows you to detect attacks in all ports and protocols for non-encrypted traffic. However, when HTTPS traffic needs to be inspected, Azure Firewall can use its TLS inspection capability to decrypt the traffic and better detect malicious activities. +IDPS allows you to detect attacks in all ports and protocols for nonencrypted traffic. However, when HTTPS traffic needs to be inspected, Azure Firewall can use its TLS inspection capability to decrypt the traffic and better detect malicious activities. The IDPS Bypass List allows you to not filter traffic to any of the IP addresses, ranges, and subnets specified in the bypass list. You can view traffic that has been filtered by **Web categories** in the Applica ### Category exceptions -You can create exceptions to your web category rules. Create a separate allow or deny rule collection with a higher priority within the rule collection group. For example, you can configure a rule collection that allows `www.linkedin.com` with priority 100, with a rule collection that denies **Social networking** with priority 200. This creates the exception for the pre-defined **Social networking** web category. +You can create exceptions to your web category rules. Create a separate allow or deny rule collection with a higher priority within the rule collection group. For example, you can configure a rule collection that allows `www.linkedin.com` with priority 100, with a rule collection that denies **Social networking** with priority 200. This creates the exception for the predefined **Social networking** web category. ### Web category search Under the **Web Categories** tab in **Firewall Policy Settings**, you can reques :::image type="content" source="media/premium-features/firewall-category-change.png" alt-text="Firewall category report dialog"::: +### Web categories that don't support TLS termination ++Due to privacy and compliance reasons, certain web traffic that is encrypted can't be decrypted using TLS termination. For example, employee health data transmitted through web traffic over a corporate network shouldn't be TLS terminated due to privacy reasons. ++As a result, the following Web Categories don't support TLS termination: +- Education +- Finance +- Government +- Health and medicine ++As a workaround, if you want a specific URL to support TLS termination, you can manually add the URL(s) with TLS termination in application rules. For example, you can add `www.princeton.edu` to application rules to allow this website. + ## Supported regions For the supported regions for Azure Firewall, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-firewall). |
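The TLS termination workaround in the entry above adds individual URLs as application rules. One possible CLI sketch follows; the resource group, policy, and rule collection group names are placeholders, the exact flags can vary by CLI version, and this is only one way to express the rule.

```azurecli
# Hypothetical names: myResourceGroup, myFirewallPolicy, MyAppRuleCollectionGroup.
# Adds an application rule that explicitly allows www.princeton.edu, the example URL
# used above for a site in a web category that doesn't support TLS termination.
az network firewall policy rule-collection-group collection add-filter-collection \
  --resource-group myResourceGroup \
  --policy-name myFirewallPolicy \
  --rule-collection-group-name MyAppRuleCollectionGroup \
  --name AllowCategoryExceptions \
  --collection-priority 100 \
  --action Allow \
  --rule-name AllowPrinceton \
  --rule-type ApplicationRule \
  --protocols Https=443 \
  --source-addresses "*" \
  --target-fqdns www.princeton.edu
```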
frontdoor | How To Enable Private Link Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-storage-account.md | In this section, you'll map the Private Link service to a private endpoint creat 1. Then select **Add** to save your configuration. Then select **Update** to save the origin group settings. +> [!NOTE] +> Ensure the **origin path** in your routing rule is configured correctly with the storage container file path so file requests can be acquired. +> + ## Approve private endpoint connection from the storage account 1. Go to the storage account you configure Private Link for in the last section. Select **Networking** under **Settings**. |
hdinsight | Domain Joined Authentication Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/domain-joined-authentication-issues.md | Title: Authentication issues in Azure HDInsight description: Authentication issues in Azure HDInsight Previously updated : 03/31/2022 Last updated : 04/28/2023 # Authentication issues in Azure HDInsight This article describes troubleshooting steps and possible resolutions for issues when interacting with Azure HDInsight clusters. -On secure clusters backed by Azure Data Lake (Gen1 or Gen2), when domain users sign in to the cluster services through HDI Gateway (like signing in to the Apache Ambari portal), HDI Gateway will try to obtain an OAuth token from Azure Active Directory (Azure AD) first, and then get a Kerberos ticket from Azure AD DS. Authentication can fail in either of these stages. This article is aimed at debugging some of those issues. +On secure clusters backed by Azure Data Lake (Gen1 or Gen2), when domain users sign in to the cluster services through HDI Gateway (like signing in to the Apache Ambari portal), HDI Gateway tries to obtain an OAuth token from Azure Active Directory (Azure AD) first, and then get a Kerberos ticket from Azure AD DS. Authentication can fail in either of these stages. This article is aimed at debugging some of those issues. -When the authentication fails, you will get prompted for credentials. If you cancel this dialog, the error message will be printed. Here are some of the common error messages: +When the authentication fails, you gets prompted for credentials. If you cancel this dialog, the error message is printed. Here are some of the common error messages: ## invalid_grant or unauthorized_client, 50126 Reason: Bad Request, Detailed Response: {"error":"invalid_grant","error_descript ### Cause -Azure AD error code 50126 means the `AllowCloudPasswordValidation` policy has not been set by the tenant. +Azure AD error code 50126 means the `AllowCloudPasswordValidation` policy not set by the tenant. ### Resolution The Global Administrator of the Azure AD tenant should enable Azure AD to use pa Sign in fails with error code 50034. Error message is similar to: ```-{"error":"invalid_grant","error_description":"AADSTS50034: The user account Microsoft.AzureAD.Telemetry.Diagnostics.PII does not exist in the 0c349e3f-1ac3-4610-8599-9db831cbaf62 directory. To sign into this application, the account must be added to the directory.\r\nTrace ID: bbb819b2-4c6f-4745-854d-0b72006d6800\r\nCorrelation ID: b009c737-ee52-43b2-83fd-706061a72b41\r\nTimestamp: 2019-04-29 15:52:16Z", "error_codes":[50034],"timestamp":"2019-04-29 15:52:16Z","trace_id":"bbb819b2-4c6f-4745-854d-0b72006d6800", "correlation_id":"b009c737-ee52-43b2-83fd-706061a72b41"} +{"error":"invalid_grant","error_description":"AADSTS50034: The user account Microsoft.AzureAD.Telemetry.Diagnostics.PII doesn't exist in the 0c349e3f-1ac3-4610-8599-9db831cbaf62 directory. To sign into this application, the account must be added to the directory.\r\nTrace ID: bbb819b2-4c6f-4745-854d-0b72006d6800\r\nCorrelation ID: b009c737-ee52-43b2-83fd-706061a72b41\r\nTimestamp: 2019-04-29 15:52:16Z", "error_codes":[50034],"timestamp":"2019-04-29 15:52:16Z","trace_id":"bbb819b2-4c6f-4745-854d-0b72006d6800", "correlation_id":"b009c737-ee52-43b2-83fd-706061a72b41"} ``` ### Cause -User name is incorrect (does not exist). The user is not using the same username that is used in Azure portal. +User name is incorrect (doesn't exist). 
The user isn't using the same username that is used in Azure portal. ### Resolution Receive error message `interaction_required`. ### Cause -The conditional access policy or MFA is being applied to the user. Since interactive authentication is not supported yet, the user or the cluster needs to be exempted from MFA / Conditional access. If you choose to exempt the cluster (IP address based exemption policy), then make sure that the AD `ServiceEndpoints` are enabled for that vnet. +The conditional access policy or MFA is being applied to the user. Since interactive authentication isn't supported yet, the user or the cluster needs to be exempted from MFA / Conditional access. If you choose to exempt the cluster (IP address based exemption policy), then make sure that the AD `ServiceEndpoints` are enabled for that vnet. ### Resolution -Use conditional access policy and exempt the HDInisght clusters from MFA as shown in [Configure a HDInsight cluster with Enterprise Security Package by using Azure Active Directory Domain Services](./apache-domain-joined-configure-using-azure-adds.md). +Use conditional access policy and exempt the HDInsight clusters from MFA as shown in [Configure a HDInsight cluster with Enterprise Security Package by using Azure Active Directory Domain Services](./apache-domain-joined-configure-using-azure-adds.md). Use conditional access policy and exempt the HDInisght clusters from MFA as show ### Issue -Sign in is denied. +Sign in denied. ### Cause -To get to this stage, your OAuth authentication is not an issue, but Kerberos authentication is. If this cluster is backed by ADLS, OAuth sign in has succeeded before Kerberos auth is attempted. On WASB clusters, OAuth sign in is not attempted. There could be many reasons for Kerberos failure - like password hashes are out of sync, user account locked out in Azure AD DS, and so on. Password hashes sync only when the user changes password. When you create the Azure AD DS instance, it will start syncing passwords that are changed after the creation. It won't retroactively sync passwords that were set before its inception. +To get to this stage, your OAuth authentication isn't an issue, but Kerberos authentication is. If this cluster is backed by ADLS, OAuth sign in has succeeded before Kerberos auth is attempted. On WASB clusters, OAuth sign in isn't attempted. There could be many reasons for Kerberos failure - like password hashes are out of sync, user account locked out in Azure AD DS, and so on. Password hashes sync only when the user changes password. When you create the Azure AD DS instance, it will start syncing passwords that are changed after the creation. It can't retroactively sync passwords that were set before its inception. ### Resolution If you think passwords may not be in sync, try changing the password and wait for a few minutes to sync. -Try to SSH into a You will need to try to authenticate (kinit) using the same user credentials, from a machine that is joined to the domain. SSH into the head / edge node with a local user and then run kinit. +Try to SSH into a You need to try to authenticate (kinit) using the same user credentials, from a machine that is joined to the domain. SSH into the head / edge node with a local user and then run kinit. Varies. ### Resolution -For kinit to succeed, you need to know your `sAMAccountName` (this is the short account name without the realm). `sAMAccountName` is usually the account prefix (like bob in `bob@contoso.com`). For some users, it could be different. 
You will need the ability to browse / search the directory to learn your `sAMAccountName`. +For kinit to succeed, you need to know your `sAMAccountName` (this is the short account name without the realm). `sAMAccountName` is usually the account prefix (like bob in `bob@contoso.com`). For some users, it could be different. You need the ability to browse / search the directory to learn your `sAMAccountName`. Ways to find `sAMAccountName`: Incorrect username or password. ### Resolution -Check your username and password. Also check for other properties described above. To enable verbose debugging, run `export KRB5_TRACE=/tmp/krb.log` from the session before trying kinit. +Check your username and password. Also check for other properties described. To enable verbose debugging, run `export KRB5_TRACE=/tmp/krb.log` from the session before trying kinit. Job / HDFS command fails due to `TokenNotFoundException`. ### Cause -The required OAuth access token was not found for the job / command to succeed. The ADLS / ABFS driver will try to retrieve the OAuth access token from the credential service before making storage requests. This token gets registered when you sign in to the Ambari portal using the same user. +The required OAuth access token wasn't found for the job / command to succeed. The ADLS / ABFS driver tries to retrieve the OAuth access token from the credential service before making storage requests. This token gets registered when you sign in to the Ambari portal using the same user. ### Resolution |
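The HDInsight authentication entry above describes checking Kerberos sign-in with kinit and enabling verbose tracing. A minimal sketch of that check, run from an SSH session on a head or edge node, follows; `bob` is a placeholder `sAMAccountName`.

```azurecli
# Sketch: verify Kerberos authentication for a domain user from a head/edge node.
export KRB5_TRACE=/tmp/krb.log   # enable verbose Kerberos tracing before kinit
kinit bob                        # "bob" is a placeholder sAMAccountName; prompts for the password
klist                            # confirm a ticket-granting ticket was issued
cat /tmp/krb.log                 # inspect the trace if kinit fails
```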
hdinsight | Apache Hadoop Use Mapreduce Curl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-mapreduce-curl.md | description: Learn how to remotely run MapReduce jobs with Apache Hadoop on HDIn Previously updated : 01/13/2020 Last updated : 04/28/2023 # Run MapReduce jobs with Apache Hadoop on HDInsight using REST |
hdinsight | Hdinsight Troubleshoot Soft Lockup Cpu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-soft-lockup-cpu.md | Title: Watchdog BUG soft lockup CPU error from Azure HDInsight cluster description: Watchdog BUG soft lockup CPU appears in kernel syslogs from Azure HDInsight cluster Previously updated : 08/05/2019 Last updated : 04/28/2023 # Scenario: "watchdog: BUG: soft lockup - CPU" error from an Azure HDInsight cluster Apply kernel patch. The script below upgrades the linux kernel and reboots the m ## Next steps |
hdinsight | Apache Hbase Rest Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-rest-sdk.md | description: Use the HBase .NET SDK to create and delete tables, and to read and Previously updated : 12/02/2019 Last updated : 04/28/2023 # Use the .NET SDK for Apache HBase finally ## Next steps * [Get started with an Apache HBase example in HDInsight](apache-hbase-tutorial-get-started-linux.md)-* Build an end-to-end application with [Analyze real-time Twitter sentiment with Apache HBase](./apache-hbase-tutorial-get-started-linux.md) +* Build an end-to-end application with [Analyze real-time Twitter sentiment with Apache HBase](./apache-hbase-tutorial-get-started-linux.md) |
hdinsight | Hdinsight Grafana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hdinsight-grafana.md | Title: Use Grafana on Azure HDInsight description: Learn how to access the Grafana dashboard with Apache Hadoop clusters in Azure HDInsight Previously updated : 12/27/2019 Last updated : 04/28/2023 # Access Grafana in Azure HDInsight |
hdinsight | Spark Cruise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/spark-cruise.md | Title: Use SparkCruise on Azure HDInsight to speed up Apache Spark queries description: Learn how to use the SparkCruise optimization platform to improve efficiency of Apache Spark queries. Previously updated : 07/27/2020 Last updated : 04/28/2023 # Customer intent: As an Apache Spark developer, I would like to learn about the tools and features to optimize my Spark workloads on Azure HDInsight. |
healthcare-apis | How To Run A Reindex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-run-a-reindex.md | POST {{FHIR_URL}}/$reindex } ``` -Leave the `"parameter": []` field blank (as shown) if you don't need to tweak the compute resources allocated to the reindex job. If the request is successful, you will receive a **201 Created** status code in addition to a `Parameters` resource in response: +Leave the `"parameter": []` field blank (as shown) if you don't need to tweak the compute resources allocated to the reindex job. ++If the request is successful, you will receive a **201 Created** status code in addition to a `Parameters` resource in response: ```json HTTP/1.1 201 Created Content-Location: https://{{FHIR URL}}/_operations/reindex/560c7c61-2c70-4c54-b8 ] } ```+If you need to run the reindex job against a specific custom search parameter, use the following `POST` call with the JSON-formatted `Parameters` resource in the request body: ++```json +POST {{FHIR_URL}}/$reindex ++{ ++"resourceType": "Parameters", ++"parameter": [ + { + "name": "targetSearchParameterTypes", + "valueString": "{URL of the custom search parameter. For multiple custom search parameters, the URL list can be comma separated.}" + } ++] ++} + ``` + > [!NOTE] > To check the status of a reindex job or to cancel the job, you'll need the reindex ID. This is the `"id"` carried in the `"parameter"` value returned in the response. In the example above, the ID for the reindex job would be `560c7c61-2c70-4c54-b86d-c53a9d29495e`. POST {{FHIR_URL}}/$reindex ] } ```- ## Next steps In this article, you've learned how to perform a reindex job in your FHIR service. To learn how to define custom search parameters, see |
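The `$reindex` calls in the entry above show only the request bodies. As a hedged end-to-end sketch, the same request can be issued with curl; the service URL format and the `TOKEN` variable are placeholders for your FHIR service endpoint and an OAuth 2.0 access token.

```azurecli
# Sketch: submit a reindex job with curl. Replace the placeholders with your
# FHIR service URL and a valid access token.
FHIR_URL="https://<workspace>-<fhirservice>.fhir.azurehealthcareapis.com"
curl -X POST "$FHIR_URL/\$reindex" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/fhir+json" \
  -d '{"resourceType": "Parameters", "parameter": []}'
```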
healthcare-apis | Concepts Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-machine-learning.md | Title: The MedTech service and Azure Machine Learning Service - Azure Health Data Services -description: In this article, you'll learn how to use the MedTech service and the Azure Machine Learning Service +description: Learn how to use the MedTech service and the Azure Machine Learning Service Previously updated : 02/27/2023 Last updated : 04/28/2023 -In this article, we'll explore using the MedTech service and Azure Machine Learning Service. +In this article, we explore using the MedTech service and the Azure Machine Learning Service. ## The MedTech service and Azure Machine Learning Service reference architecture -The MedTech service enables IoT devices seamless integration with FHIR services. This reference architecture is designed to accelerate adoption of Internet of Medical Things (IoMT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure ML Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment. +The MedTech service enables IoT devices to seamless integration with FHIR services. This reference architecture is designed to accelerate adoption of Internet of Things (IoT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure Machine Learning Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment. The four line colors show the different parts of the data journey. The four line colors show the different parts of the data journey. 1. Data from IoT device or via device gateway sent to Azure IoT Hub/Azure IoT Edge. 2. Data from Azure IoT Edge sent to Azure IoT Hub. 3. Copy of raw IoT device data sent to a secure storage environment for device administration.-4. PHI IoMT payload moves from Azure IoT Hub to the MedTech service. Multiple Azure services are represented by the MedTech service icon. +4. PHI IoT payload moves from Azure IoT Hub to the MedTech service. The MedTech service icon represents multiple Azure services. 5. Three parts to number 5: a. The MedTech service requests Patient resource from the FHIR service. b. The FHIR service sends Patient resource back to the MedTech service. The four line colors show the different parts of the data journey. **Machine Learning and AI Data Route ΓÇô Steps 6 through 11** 6. Normalized ungrouped data stream sent to an Azure Function (ML Input).-7. Azure Function (ML Input) requests Patient resource to merge with IoMT payload. -8. IoMT payload with PHI is sent to an event hub for distribution to Machine Learning compute and storage. -9. PHI IoMT payload is sent to Azure Data Lake Storage Gen 2 for scoring observation over longer time windows. -10. PHI IoMT payload is sent to Azure Databricks for windowing, data fitting, and data scoring. +7. Azure Function (ML Input) requests Patient resource to merge with IoT payload. +8. IoT payload with PHI is sent to an event hub for distribution to Machine Learning compute and storage. +9. PHI IoT payload is sent to Azure Data Lake Storage Gen 2 for scoring observation over longer time windows. +10. PHI IoT payload is sent to Azure Databricks for windowing, data fitting, and data scoring. 11. The Azure Databricks requests more patient data from data lake as needed. a. Azure Databricks also sends a copy of the scored data to the data lake. 
**Notification and Care Coordination ΓÇô Steps 12 - 18** The four line colors show the different parts of the data journey. **Hot path** 12. Azure Databricks sends a payload to an Azure Function (ML Output).-13. RiskAssessment and/or Flag resource submitted to FHIR service. a. For each observation window, a RiskAssessment resource will be submitted to the FHIR service. b. For observation windows where the risk assessment is outside the acceptable range a Flag resource should also be submitted to the FHIR service. +13. RiskAssessment and/or Flag resource submitted to FHIR service. a. For each observation window, a RiskAssessment resource is submitted to the FHIR service. b. For observation windows where the risk assessment is outside the acceptable range a Flag resource should also be submitted to the FHIR service. 14. Scored data sent to data repository for routing to appropriate care team. Azure SQL Server is the data repository used in this design because of its native interaction with Power BI. 15. Power BI Dashboard is updated with Risk Assessment output in under 15 minutes. |
healthcare-apis | Concepts Power Bi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-power-bi.md | Title: MedTech service Microsoft Power BI - Azure Health Data Services -description: In this article, you'll learn how to use the MedTech service and Power BI +description: Learn how to use the MedTech service and Power BI Previously updated : 02/27/2023 Last updated : 04/28/2023 -In this article, we'll explore using the MedTech service and Microsoft Power Business Intelligence (BI). +In this article, we explore using the MedTech service and Microsoft Power Business Intelligence (BI). ## The MedTech service and Power BI reference architecture -This reference architecture shows the basic components of using the Microsoft cloud services to enable Power BI on top of Internet of Medical Things (IoMT) and FHIR data. +This reference architecture shows the basic components of using the Microsoft cloud services to enable Power BI on top of Internet of Things (IoT) and FHIR data. You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams). |
healthcare-apis | Concepts Teams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-teams.md | Title: The MedTech service and Teams notifications - Azure Health Data Services -description: In this article, you'll learn how to use the MedTech service and Teams notifications +description: Learn how to use the MedTech service and Teams notifications Previously updated : 02/27/2023 Last updated : 04/28/2023 -In this article, we'll explore using the MedTech service and Microsoft Teams for notifications. +In this article, we explore using the MedTech service and Microsoft Teams for notifications. ## The MedTech service and Teams notifications reference architecture -When combining the MedTech service, a FHIR service, and Teams, you can enable multiple care solutions. +When combining the MedTech service, the FHIR service, and Teams, you can enable multiple care solutions. The diagram is a MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, the FHIR service, and the Teams Patient App. |
healthcare-apis | Deploy Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-arm-template.md | To implement infrastructure as code for your Azure solutions, use Azure Resource In this quickstart, learn how to: -- Open an ARM template in the Azure portal.-- Configure the ARM template for your deployment.-- Deploy the ARM template. +* Open an ARM template in the Azure portal. +* Configure the ARM template for your deployment. +* Deploy the ARM template. > [!TIP] > To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md) To begin your deployment and complete the quickstart, you must have the followin - An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). -- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)+- **Owner** or **Contributor and User Access Administrator** role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) - The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). When you have these prerequisites, you're ready to configure the ARM template by using the **Deploy to Azure** button. -## Review the ARM template - Optional +## Review the ARM template (Optional) The ARM template used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/). To begin deployment in the Azure portal, select the **Deploy to Azure** button: 1. In the Azure portal, on the Basics tab of the Azure Quickstart Template, select or enter the following information for your deployment: - - **Subscription** - The Azure subscription to use for the deployment. + * **Subscription** - The Azure subscription to use for the deployment. - - **Resource group** - An existing resource group, or you can create a new resource group. + * **Resource group** - An existing resource group, or you can create a new resource group. - - **Region** - The Azure region of the resource group that's used for the deployment. Region autofills by using the resource group region. + * **Region** - The Azure region of the resource group that's used for the deployment. Region autofills by using the resource group region. - - **Basename** - A value that's appended to the name of the Azure resources and services that are deployed. + * **Basename** - A value that's appended to the name of the Azure resources and services that are deployed. - - **Location** - Use the drop-down list to select a supported Azure region for the Azure Health Data Services (the value could be the same or different region than your resource group). 
+ * **Location** - Use the drop-down list to select a supported Azure region for the Azure Health Data Services (the value could be the same or different region than your resource group). - - **Device Mapping** - Don't change the default values for this quickstart. + * **Device Mapping** - Don't change the default values for this quickstart. - - **Destination Mapping** - Don't change the default values for this quickstart. + * **Destination Mapping** - Don't change the default values for this quickstart. :::image type="content" source="media\deploy-arm-template\iot-deploy-quickstart-options.png" alt-text="Screenshot of Azure portal page displaying deployment options for the Azure Health Data Service MedTech service." lightbox="media\deploy-arm-template\iot-deploy-quickstart-options.png"::: To begin deployment in the Azure portal, select the **Deploy to Azure** button: :::image type="content" source="media\deploy-arm-template\iot-deployment-complete-banner.png" alt-text="Screenshot that shows a green checkmark and the message Your deployment is complete."::: > [!IMPORTANT]- > If you're going to allow access from multiple services to the device message event hub, it's required that each service has its own event hub consumer group. + > If you're going to allow access from multiple services to the event hub, it's required that each service has its own event hub consumer group. > > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). > > Examples: >- > * Two MedTech services accessing the same device message event hub. + > * Two MedTech services accessing the same event hub. >- > * A MedTech service and a storage writer application accessing the same device message event hub. + > * A MedTech service and a storage writer application accessing the same event hub. ## Review deployed resources and access permissions When deployment is completed, the following resources and access roles are creat * Event Hubs namespace and event hub. In this deployment, the event hub is named *devicedata*. - * An event hub consumer group. In this deployment, the consumer group is named *$Default*. + * Event hub consumer group. In this deployment, the consumer group is named *$Default*. - * An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). + * **Azure Event Hubs Data Sender** role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). -* A Health Data Services workspace. +* Health Data Services workspace. -* A Health Data Services Fast Healthcare Interoperability Resources FHIR service. +* Health Data Services Fast Healthcare Interoperability Resources FHIR service. 
-* A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles: +* Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles: - * For the event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub. + * For the event hub, the **Azure Event Hubs Data Receiver** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub. - * For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. + * For the FHIR service, the **FHIR Data Writer** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. > [!IMPORTANT] > In this quickstart, the ARM template configures the MedTech service to operate in **Create** mode. A patient resource and a device resource are created for each device that sends data to your FHIR service. >-> To learn more about the MedTech service resolution types Create and Lookup, see [Destination properties](deploy-new-config.md#destination-properties). +> To learn about the MedTech service resolution types **Create** and **Lookup**, see [Configure the Destination tab](deploy-manual-portal.md#configure-the-destination-tab). ## Post-deployment mappings |
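The MedTech quickstart above deploys through the **Deploy to Azure** button. As an alternative sketch, the same *azuredeploy.json* can be deployed from the Azure CLI; the raw template URI is inferred from the repository path referenced in the entry, the resource group name is a placeholder, and `basename`/`location` mirror the values entered on the Basics tab.

```azurecli
# Sketch: deploy the MedTech service quickstart template from the CLI.
# The resource group name is a placeholder; the template URI is inferred from the
# azure-quickstart-templates path referenced in this entry.
az group create --name myMedTechResourceGroup --location southcentralus

az deployment group create \
  --resource-group myMedTechResourceGroup \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json" \
  --parameters basename=abc123 location=southcentralus
```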
healthcare-apis | Deploy Bicep Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-bicep-powershell-cli.md | Title: Deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI - Azure Health Data Services -description: In this article, you'll learn how to deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI. +description: Learn how to deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI. Previously updated : 04/14/2023 Last updated : 04/28/2023 -In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file. +In this quickstart, learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file. > [!TIP] > To learn more about Bicep, see [What is Bicep?](../../azure-resource-manager/bicep/overview.md?tabs=bicep) In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to To begin your deployment and complete the quickstart, you must have the following prerequisites: -- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).+* An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). -- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)+* **Owner** or **Contributor and User Access Administrator** role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) -- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).+* The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). -- [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally.- - For Azure PowerShell, you'll also need to install [Bicep CLI](../../azure-resource-manager/bicep/install.md#windows) to deploy the Bicep file used in this quickstart. +* [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally. + * For Azure PowerShell, install the [Bicep CLI](../../azure-resource-manager/bicep/install.md#windows) to deploy the Bicep file used in this quickstart. When you have these prerequisites, you're ready to deploy the Bicep file. 
-## Review the Bicep file - Optional +## Review the Bicep file (Optional) The Bicep file used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) by using the *main.bicep* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/). ## Save the Bicep file locally -Save the Bicep file locally as *main.bicep*. You'll need to have the working directory of your Azure PowerShell or the Azure CLI console pointing to the location where this file is saved. +Save the Bicep file locally as *main.bicep*. You need to have the working directory of your Azure PowerShell or the Azure CLI console pointing to the location where this file is saved. ## Deploy the MedTech service with the Bicep file and Azure PowerShell Complete the following five steps to deploy the MedTech service using Azure Powe For example: `New-AzResourceGroupDeployment -ResourceGroupName BicepTestDeployment -TemplateFile main.bicep -basename abc123 -location southcentralus` > [!IMPORTANT]- > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. + > If you're going to allow access from multiple services to the event hub, it is highly recommended that each service has its own event hub consumer group. > > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). > > Examples: >- > - Two MedTech services accessing the same device message event hub. + > * Two MedTech services accessing the same event hub. >- > - A MedTech service and a storage writer application accessing the same device message event hub. + > * A MedTech service and a storage writer application accessing the same event hub. ## Deploy the MedTech service with the Bicep file and the Azure CLI Complete the following five steps to deploy the MedTech service using the Azure > > Examples: > - > - Two MedTech services accessing the same device message event hub. + > * Two MedTech services accessing the same event hub. >- > - A MedTech service and a storage writer application accessing the same device message event hub. + > * A MedTech service and a storage writer application accessing the same event hub. ## Review deployed resources and access permissions When deployment is completed, the following resources and access roles are created in the Bicep file deployment: -- Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*.+* Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*. - - An event hub consumer group. In this deployment, the consumer group is named *$Default*. + * Event hub consumer group. In this deployment, the consumer group is named *$Default*. - - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). 
To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). + * **Azure Event Hubs Data Sender** role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). -- A Health Data Services workspace.+* Health Data Services workspace. -- A Health Data Services FHIR service.+* Health Data Services FHIR service. -- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:+* Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles: - - For the device message event hub, the Azure Events Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub. + * For the event hub, the **Azure Event Hubs Data Receiver** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub. - - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. + * For the FHIR service, the **FHIR Data Writer** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. > [!IMPORTANT] > In this quickstart, the ARM template configures the MedTech service to operate in **Create** mode. A Patient resource and a Device resource are created for each device that sends data to your FHIR service. >-> To learn more about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-new-config.md#destination-properties). +> To learn more about the MedTech service resolution types **Create** and **Lookup**, see [Configure the Destination tab](deploy-manual-portal.md#configure-the-destination-tab). ## Post-deployment mappings -After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings. +After you successfully deploy an instance of the MedTech service, you still need to provide conforming and valid device and FHIR destination mappings. -- To learn about the device mapping, see [Overview of the device mapping](overview-of-device-mapping.md).+* To learn about the device mapping, see [Overview of the device mapping](overview-of-device-mapping.md). -- To learn about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md).+* To learn about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md).
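For reference, the Azure CLI path follows the same pattern as the Azure PowerShell example shown earlier. A minimal sketch, assuming the *main.bicep* file is saved in the working directory and reusing the resource group name and parameter values from the PowerShell example:

```azurecli
# Create the resource group if it doesn't already exist.
az group create --name BicepTestDeployment --location southcentralus

# Deploy the locally saved Bicep file with the basename and location parameters.
az deployment group create \
  --resource-group BicepTestDeployment \
  --template-file main.bicep \
  --parameters basename=abc123 location=southcentralus
```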
## Clean up Azure PowerShell deployed resources In this quickstart, you learned about how to use Azure PowerShell or the Azure C To learn about other methods for deploying the MedTech service, see > [!div class="nextstepaction"]-> [Choose a deployment method for the MedTech service](deploy-new-choose.md) +> [Choose a deployment method for the MedTech service](deploy-choose-method.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Choose Method | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-choose-method.md | Title: Choose a deployment method for the MedTech service - Azure Health Data Services -description: In this article, learn about the different methods for deploying the MedTech service. +description: Learn about the different methods for deploying the MedTech service. Previously updated : 04/25/2023 Last updated : 04/28/2023 The MedTech service provides multiple methods for deployment into Azure. Each de In this quickstart, learn about these deployment methods: * Azure Resource Manager template (ARM template) including an Azure IoT Hub using the **Deploy to Azure** button. -* ARM template using the **Deploy to Azure** button. -* ARM template using Azure PowerShell or the Azure CLI. -* Bicep file using Azure PowerShell or the Azure CLI. -* Manually in the Azure portal. +* ARM template using the **Deploy to Azure** button +* ARM template using Azure PowerShell or the Azure CLI +* Bicep file using Azure PowerShell or the Azure CLI +* Azure portal ## Deployment overview Using a Bicep file with Azure PowerShell or the Azure CLI is a more advanced dep To learn more about deploying the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI, see [Deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI](deploy-bicep-powershell-cli.md). -## Manually in the Azure portal +## Azure portal -Using the Azure portal manual deployment allows you to see the details of each deployment step. The manual deployment has many steps, but it provides valuable technical information that may be useful for customizing and troubleshooting your MedTech service. +Using the Azure portal allows you to see the details of each deployment step. The Azure portal deployment has many steps, but it provides valuable technical information that may be useful for customizing and troubleshooting your MedTech service. -To learn more about deploying the MedTech service manually using the Azure portal, see [Deploy the MedTech service manually using the Azure portal](deploy-manual-prerequisites.md). +To learn more about deploying the MedTech service using the Azure portal, see [Deploy the MedTech service using the Azure portal](deploy-manual-portal.md). > [!IMPORTANT]-> If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. +> If you're going to allow access from multiple services to the event hub, it is highly recommended that each service has its own event hub consumer group. > > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). > > Examples: >-> * Two MedTech services accessing the same device message event hub. +> * Two MedTech services accessing the same event hub. >-> * A MedTech service and a storage writer application accessing the same device message event hub. +> * A MedTech service and a storage writer application accessing the same event hub. ## Next steps |
healthcare-apis | Deploy Json Powershell Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-json-powershell-cli.md | Title: Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI - Azure Health Data Services -description: In this article, you'll learn how to deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI +description: Learn how to deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI. Previously updated : 04/14/2023 Last updated : 04/28/2023 -In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using an Azure Resource Manager template (ARM template). +In this quickstart, learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using an Azure Resource Manager template (ARM template). > [!TIP] > To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md) In this quickstart, you'll learn how to use Azure PowerShell or the Azure CLI to To begin your deployment and complete the quickstart, you must have the following prerequisites: -- An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).+* An active Azure subscription account. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). -- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)+* **Owner** or **Contributor and User Access Administrator** role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) -- The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).+* The Microsoft.HealthcareApis and Microsoft.EventHub resource providers registered with your Azure subscription. To learn more about registering resource providers, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). -- [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally.+* [Azure PowerShell](/powershell/azure/install-az-ps) and/or the [Azure CLI](/cli/azure/install-azure-cli) installed locally. When you have these prerequisites, you're ready to deploy the ARM template. -## Review the ARM template - Optional +## Review the ARM template (Optional) The ARM template used to deploy the resources in this quickstart is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/). 
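Before running the deployment commands in the next sections, you can optionally validate the template against a resource group. A hedged Azure CLI sketch, reusing the resource group name, template URI, and parameter values from the examples that follow:

```azurecli
# Create the resource group used by the quickstart examples.
az group create --name ArmTestDeployment --location southcentralus

# Validate the quickstart ARM template without deploying any resources.
az deployment group validate \
  --resource-group ArmTestDeployment \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json \
  --parameters basename=abc123 location=southcentralus
```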
Complete the following five steps to deploy the MedTech service using Azure Powe For example: `New-AzResourceGroupDeployment -ResourceGroupName ArmTestDeployment -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json -basename abc123 -location southcentralus` > [!IMPORTANT]- > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. + > If you're going to allow access from multiple services to the event hub, it is highly recommended that each service has its own event hub consumer group. > > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). > > Examples: >- > - Two MedTech services accessing the same device message event hub. + > * Two MedTech services accessing the same event hub. >- > - A MedTech service and a storage writer application accessing the same device message event hub. + > * A MedTech service and a storage writer application accessing the same event hub. ## Deploy the MedTech service with the Azure Resource Manager template and the Azure CLI Complete the following five steps to deploy the MedTech service using the Azure For example: `az deployment group create --resource-group ArmTestDeployment --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json --parameters basename=abc123 location=southcentralus` > [!IMPORTANT]- > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. + > If you're going to allow access from multiple services to the event hub, it is highly recommended that each service has its own event hub consumer group. > > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). > > Examples: >- > - Two MedTech services accessing the same device message event hub. + > * Two MedTech services accessing the same event hub. >- > - A MedTech service and a storage writer application accessing the same device message event hub. + > * A MedTech service and a storage writer application accessing the same event hub. ## Review deployed resources and access permissions When deployment is completed, the following resources and access roles are created in the ARM template deployment: -- Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*.+* Azure Event Hubs namespace and device message event hub. In this deployment, the device message event hub is named *devicedata*. - - An event hub consumer group. In this deployment, the consumer group is named *$Default*. + * Event hub consumer group. In this deployment, the consumer group is named *$Default*. - - An Azure Event Hubs Data Sender role. 
In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). + * **Azure Event Hubs Data Sender** role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). -- A Health Data Services workspace.+* Health Data Services workspace. -- A Health Data Services FHIR service.+* Health Data Services FHIR service. -- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:+* Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles: - - For the device message event hub, the Azure Events Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub. + * For the event hub, the **Azure Event Hubs Data Receiver** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub. - - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. + * For the FHIR service, the **FHIR Data Writer** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. > [!IMPORTANT] > In this quickstart, the ARM template configures the MedTech service to operate in **Create** mode. A Patient resource and a Device resource are created for each device that sends data to your FHIR service. >-> To learn more about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-new-config.md#destination-properties). +> To learn more about the MedTech service resolution types **Create** and **Lookup**, see [Configure the Destination tab](deploy-manual-portal.md#configure-the-destination-tab). ## Post-deployment mappings -After you've successfully deployed an instance of the MedTech service, you'll still need to provide conforming and valid device and FHIR destination mappings. +After you successfully deploy an instance of the MedTech service, you still need to provide conforming and valid device and FHIR destination mappings. + * To learn about the device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md). + * To learn about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md).
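If you want to confirm the two role assignments after the deployment finishes, one option is to list them with the Azure CLI. This sketch isn't part of the quickstart; the subscription, resource group, and namespace values are placeholders you replace with your own.

```azurecli
# Placeholder scope for the devicedata event hub created by the template.
EVENTHUB_SCOPE="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace-name>/eventhubs/devicedata"

# List who holds the Azure Event Hubs Data Receiver role on the event hub.
# The same pattern works for the FHIR service scope with the "FHIR Data Writer" role.
az role assignment list --scope "$EVENTHUB_SCOPE" --role "Azure Event Hubs Data Receiver" --output table
```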
## Clean up Azure PowerShell resources In this quickstart, you learned how to use Azure PowerShell or Azure CLI to depl To learn about other methods for deploying the MedTech service, see > [!div class="nextstepaction"]-> [Choose a deployment method for the MedTech service](deploy-new-choose.md) +> [Choose a deployment method for the MedTech service](deploy-choose-method.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Manual Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-config.md | - Title: Configure the MedTech service for deployment using the Azure portal - Azure Health Data Services -description: In this article, you'll learn how to configure the MedTech service for manual deployment using the Azure portal. ---- Previously updated : 04/14/2023----# Quickstart: Part 2: Configure the MedTech service for manual deployment using the Azure portal --> [!NOTE] -> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. --Before you can manually deploy the MedTech service, you must complete the following configuration tasks: --## Set up the MedTech service configuration --Start with these three steps to begin configuring the MedTech service so it will be ready to accept your tabbed configuration input: --1. Start by going to the Health Data Services workspace you created in the manual deployment [Prerequisites](deploy-new-manual.md#part-1-prerequisites) section. Select the **Create MedTech service** box. --2. This step will take you to the **Add MedTech service** button. Select the button. --3. This step will take you to the **Create MedTech service** page. This page has five tabs you need to fill out: --- Basics-- Device mapping-- Destination mapping-- Tags (optional)-- Review + create--## Configure the Basics tab --Follow these six steps to fill in the Basics tab configuration: --1. Enter the **MedTech service name**. -- The **MedTech service name** is a friendly, unique name for your MedTech service. For this example, we'll name the MedTech service `mt-azuredocsdemo`. --2. Enter the **Event Hubs Namespace**. -- The Event Hubs Namespace is the name of the **Event Hubs Namespace** that you previously deployed. For this example, we'll use `eh-azuredocsdemo` with our MedTech service device messages. -- > [!TIP] - > For information about deploying an Azure Event Hubs Namespace, see [Create an Event Hubs Namespace](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace). - > - > For more information about Azure Event Hubs Namespaces, see [Namespace](../../event-hubs/event-hubs-features.md?WT.mc_id=Portal-Microsoft_Healthcare_APIs#namespace) in the Features and terminology in Azure Event Hubs document. --3. Enter the **Events Hubs name**. -- The Event Hubs name is the name of the event hub that you previously deployed within the Event Hubs Namespace. For this example, we'll use `devicedata` with our MedTech service device messages. -- > [!TIP] - > For information about deploying an Azure event hub, see [Create an event hub](../../event-hubs/event-hubs-create.md#create-an-event-hub). --4. Enter the **Consumer group**. -- The Consumer group name is located by going to the **Overview** page of the Event Hubs Namespace and selecting the event hub to be used for the MedTech service device messages. In this example, the event hub is named `devicedata`. --5. When you're inside the event hub, select the **Consumer groups** button under **Entities** to display the name of the consumer group to be used by your MedTech service. --6. By default, a consumer group named **$Default** is created during the deployment of an event hub. Use this consumer group for your MedTech service deployment. 
-- > [!IMPORTANT] - > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group. - > - > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). - > - > Examples: - > - > - Two MedTech services accessing the same device message event hub. - > - > - A MedTech service and a storage writer application accessing the same device message event hub. --The Basics tab should now look like this after you've filled it out: -- :::image type="content" source="media\deploy-manual-config\select-device-mapping-button.png" alt-text="Screenshot of Basics tab filled out correctly." lightbox="media\deploy-manual-config\select-device-mapping-button.png"::: --You're now ready to select the Device mapping tab and begin setting up the device mappings for your MedTech service. --## Configure the Device mapping tab --You need to configure device mappings so that your instance of the MedTech service can normalize the incoming device data. The device data will first be sent to your event hub instance and then picked up by the MedTech service. --The easiest way to configure the Device mapping tab is to use the Internet of Medical Things (IoMT) Connector Data Mapper tool to visualize, edit, and test your device mapping. This open source tool is available from [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper). --To begin configuring the device mapping tab, go to the Create MedTech service page and select the **Device mapping** tab. Then follow these two steps: --1. Go to the IoMT Connector Data Mapper and get the appropriate JSON code. --2. Return to the Create MedTech service page. Enter the JSON code for the template you want to use into the **Device mapping** tab. After you enter the template code, the Device mapping code will be displayed on the screen. --3. If the Device code is correct, select the **Next: Destination >** tab to enter the destination properties you want to use with your MedTech service. Your device configuration data will be saved for this session. --For more information regarding device mappings, see the relevant GitHub open source documentation at [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#device-content-mapping). --For Azure docs information about the device mapping, see [How to configure the MedTech service device mapping](how-to-configure-device-mappings.md). --## Configure the Destination tab --In order to configure the **Destination** tab, you can use the [Mapping debugger](how-to-use-mapping-debugger.md) tool to create, edit, and test the FHIR destination mapping. You need to configure FHIR destination mapping so that your instance of MedTech service can send transformed device data to the FHIR service. --To begin configuring FHIR destination mapping, go to the **Create** MedTech service page and select the **Destination mapping** tab. There are two parts of the tab you must fill out: -- 1. Destination properties - 2. 
JSON template request --### Destination properties --Under the **Destination** tab, use these values to enter the destination properties for your MedTech service instance: --- First, enter the name of your **FHIR server** using the following four steps:-- 1. The **FHIR Server** name (also known as the **FHIR service**) can be located by using the **Search** bar at the top of the screen. - 1. To connect to your FHIR service instance, enter the name of the FHIR service you used in the manual deploy configuration article at [Deploy the FHIR service](deploy-new-manual.md#deploy-the-fhir-service). - 1. Then select the **Properties** button. - 1. Next, Copy and paste the **Name** string into the **FHIR Server** text field. In this example, the **FHIR Server** name is `fs-azuredocsdemo`. --- Next, enter the **Destination Name**.-- The **Destination Name** is a friendly name for the destination. Enter a unique name for your destination. In this example, the **Destination Name** is -- `fs-azuredocsdemo`. --- Then, select the **Resolution Type**.-- **Resolution Type** specifies how MedTech service will resolve missing data when reading from the FHIR service. MedTech reads device and patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/patient-definitions.html#Patient.identifier). -- Missing data can be resolved by choosing a **Resolution Type** of **Create** and **Lookup**: -- - **Create** -- If **Create** was selected, and device or patient resources are missing when you're reading data, new resources will be created, containing just the identifier. -- - **Lookup** - - If **Lookup** was selected, and device or patient resources are missing, an error will occur, and the data won't be processed. The errors **DeviceNotFoundException** and/or a **PatientNotFoundException** error will be generated, depending on the type of resource not found. --For more information regarding destination mapping, see the FHIR service GitHub documentation at [FHIR mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#fhir-mapping). --For Azure docs information about the FHIR destination mapping, see [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md). --### JSON template request --Before you can complete the FHIR destination mapping, you must get a FHIR destination mapping code. Follow these four steps: --1. Go to the [Mapping debugger](how-to-use-mapping-debugger.md) and get the JSON template for your FHIR destination. -1. Go back to the Destination tab of the Create MedTech service page. -1. Go to the large box below the boxes for FHIR server name, Destination name, and Resolution type. Enter the JSON template request in that box. -1. You'll then receive the FHIR Destination mapping code, which will be saved as part of your configuration. --## Configure the Tags tab (optional) --Before you complete your configuration in the **Review + create** tab, you may want to configure tabs. You can do this step by selecting the **Next: Tags >** tabs. --Tags are name and value pairs used for categorizing resources. This step is an optional step when you may have many resources and want to sort them. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md). --Follow these steps if you want to create tags: --1. 
Under the **Tags** tab, enter the tag properties associated with the MedTech service. -- - Enter a **Name**. - - Enter a **Value**. --2. Once you've entered your tag(s), you're ready to do the last step of your configuration. --## Select the Review + create tab to validate your deployment request --To begin the validation process of your MedTech service deployment, select the **Review + create** tab. There will be a short delay and then you should see a screen that displays a **Validation success** message. Below the message, you should see the following values for your deployment. --**Basics** -- MedTech service name-- Event Hubs name-- Consumer group-- Event Hubs namespace----**Destination** -- FHIR server-- Destination name-- Resolution type--Your validation screen should look something like this: -- :::image type="content" source="media\deploy-manual-config\validate-and-review-medtech-service.png" alt-text="Screenshot of validation success with details displayed." lightbox="media\deploy-manual-config\validate-and-review-medtech-service.png"::: --If your MedTech service didn't validate, review the validation failure message, and troubleshoot the issue. Check all properties under each MedTech service tab that you've configured. Go back and try again. --## Continue on to Part 3: Deployment and post-deployment --After your configuration is successfully completed, you can go on to Part 3: Deployment and post deployment. See **Next steps**. --## Next steps --When you're ready to begin Part 3 of Manual Deployment, see --> [!div class="nextstepaction"] -> [Part 3: Manual deployment and post-deployment of MedTech service](deploy-new-deploy.md) --FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Manual Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-portal.md | + + Title: Deploy the MedTech service using the Azure portal - Azure Health Data Services +description: Learn how to deploy the MedTech service using the Azure portal. ++++ Last updated : 04/28/2023++++# Quickstart: Deploy the MedTech service using the Azure portal ++> [!NOTE] +> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ++You may prefer to deploy the MedTech service using the Azure portal if you: ++* Need to track every step of the provisioning process. +* Want to customize or troubleshoot your deployment. ++In this quickstart, the MedTech service deployment using the Azure portal is divided into the following three sections: ++* [Deploy prerequisite resources](#deploy-prerequisite-resources) +* [Configure and deploy the MedTech service](#configure-and-deploy-the-medtech-service) +* [Post-deployment](#post-deployment) ++As a prerequisite, you need an Azure subscription and the proper permissions to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. You can also combine the steps and complete them with Azure PowerShell, the Azure CLI, or REST API scripts. +++> [!TIP] +> See the MedTech service article, [Choose a deployment method for the MedTech service](deploy-choose-method.md), for a description of the different deployment methods that can help to simplify and automate the deployment of the MedTech service. ++## Deploy prerequisite resources ++The first step is to deploy the MedTech service prerequisite resources: ++* Azure resource group. +* Azure Event Hubs namespace and event hub. +* Azure Health Data Services workspace. +* Azure Health Data Services FHIR service. ++Once the prerequisite resources are available, deploy: + +* Azure Health Data Services MedTech service. ++### Deploy a resource group ++Deploy a [resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md) to contain the prerequisite resources and the MedTech service. ++### Deploy an Event Hubs namespace and event hub ++Deploy an Event Hubs namespace into the resource group. Event Hubs namespaces are logical containers for event hubs. Once the namespace is deployed, you can deploy an event hub, which the MedTech service reads from. For information about deploying Event Hubs namespaces and event hubs, see [Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md). ++### Deploy a workspace ++Deploy a [workspace](../workspace-overview.md) into the resource group. After you create a workspace using the [Azure portal](../healthcare-apis-quickstart.md), a FHIR service and MedTech service can be deployed from the workspace. ++### Deploy a FHIR service ++Deploy a [FHIR service](../fhir/fhir-portal-quickstart.md) into your resource group using your workspace. The MedTech service persists transformed device data into the FHIR service. ++## Configure and deploy the MedTech service ++If you have successfully deployed the prerequisite resources, you're now ready to deploy the MedTech service. ++Before you can deploy the MedTech service, you must complete the following steps: ++### Set up the MedTech service configuration ++Start with these three steps to begin configuring the MedTech service: ++1. 
Start by going to your Azure Health Data Services workspace and select the **Create MedTech service** box. ++2. This step takes you to the **Add MedTech service** button. Select the button. ++3. This step takes you to the **Create MedTech service** page. This page has five tabs you need to fill out: ++* Basics +* Device mapping +* Destination mapping +* Tags (Optional) +* Review + create ++### Configure the Basics tab ++Follow these four steps to fill in the **Basics** tab configuration: ++1. Enter the **MedTech service name**. ++ The **MedTech service name** is a friendly, unique name for your MedTech service. For this example, we have named the MedTech service *mt-azuredocsdemo*. ++2. Select the **Event Hubs Namespace**. ++ The **Event Hubs Namespace** is the name of the *Event Hubs namespace* that you previously deployed. For this example, we're using *eh-azuredocsdemo* for our MedTech service device messages. ++3. Select the **Event Hubs name**. ++ The **Event Hubs name** is the name of the event hub that you previously deployed within the Event Hubs Namespace. For this example, we're using *devicedata* for our MedTech service device messages. ++4. Select the **Consumer group**. ++ By default, a consumer group named *$Default* is created during the deployment of an event hub. Use this consumer group for your MedTech service deployment. ++ > [!IMPORTANT] + > If you're going to allow access from multiple services to the event hub, it is highly recommended that each service has its own event hub consumer group. + > + > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). + > + > Examples: + > + > * Two MedTech services accessing the same event hub. + > + > * A MedTech service and a storage writer application accessing the same event hub. ++The **Basics** tab should now look something like this after you've filled it out: +++### Configure the Device mapping tab ++For the purposes of this quickstart, accept the default **Device mapping** and move to the **Destination** tab. The device mapping is addressed in the [Post-deployment](#post-deployment) section of this quickstart. ++### Configure the Destination tab ++Under the **Destination** tab, use these values to enter the destination properties for your MedTech service instance: ++* First, select the name of your **FHIR server**. ++* Next, enter the **Destination name**. ++ The **Destination name** is a friendly name for the destination. Enter a unique name for your destination. In this example, the **Destination name** is + *fs-azuredocsdemo*. ++* Next, select the **Resolution type**. ++ **Resolution type** specifies how MedTech service associates device data with FHIR Device resources and FHIR Patient resources. MedTech reads device and patient resources from the FHIR service using [device identifiers](https://www.hl7.org/fhir/device-definitions.html#Device.identifier) and [patient identifiers](https://www.hl7.org/fhir/patient-definitions.html#Patient.identifier). ++ Device and Patient resources can be resolved by choosing a **Resolution type** of **Create** or **Lookup**: ++ - **Create** ++ If **Create** was selected, and device or patient resources are missing when you're reading data, new resources are created using the identifiers included in the device message. 
++ - **Lookup** + + If **Lookup** was selected, and device or patient resources are missing, an error occurs, and the data isn't processed. A **DeviceNotFoundException** and/or a **PatientNotFoundException** error is generated, depending on the type of resource not found. ++ * For the **Destination mapping** field, accept the default **Destination mapping**. The FHIR destination mapping is addressed in the [Post-deployment](#post-deployment) section of this quickstart. ++The **Destination** tab should now look something like this after you've filled it out: +++### Configure the Tags tab (Optional) ++Before you complete your configuration in the **Review + create** tab, you may want to configure tags. You can do this step by selecting the **Next: Tags >** tab. ++Tags are name and value pairs used for categorizing resources and are an optional step. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md). ++### Validate your deployment ++To begin the validation process of your MedTech service deployment, select the **Review + create** tab. There's a short delay and then you should see a screen that displays a **Validation success** message. ++Your validation screen should look something like this: +++If your deployment didn't validate, review the validation failure message(s), and troubleshoot the issue(s). Check all properties under each MedTech service tab that you've configured and then try the validation process again. ++### Create deployment ++1. Select the **Create** button to begin the deployment. ++2. The deployment process may take several minutes. The screen displays a message saying that your deployment is in progress. ++3. When Azure finishes deploying, a "Your Deployment is complete" message appears and displays the following information: ++* Deployment name +* Subscription +* Resource group +* Deployment details ++Your screen should look something like this: +++## Post-deployment ++### Grant resource access to the MedTech service system-assigned managed identity ++There are two post-deployment access steps you must perform; otherwise, the MedTech service can't read data from the event hub or write data to the FHIR service. ++These steps are: ++* Grant the MedTech service system-assigned managed identity **Azure Event Hubs Data Receiver** access to the [event hub](../../event-hubs/authorize-access-azure-active-directory.md). +* Grant the MedTech service system-assigned managed identity **FHIR Data Writer** access to the [FHIR service](../configure-azure-rbac.md). ++These two steps are needed because MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and access control of your Azure resources. A scripted alternative to the portal role assignments is sketched after the mapping overviews below. ++### Provide device and FHIR destination mappings ++Valid and conforming device and FHIR destination mappings have to be provided to your MedTech service for it to be fully functional. For an overview and sample device and FHIR destination mappings, see: ++* [Overview of the MedTech service device mapping](overview-of-device-mapping.md). ++* [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md). 
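As mentioned in the access section above, the two role grants can also be scripted rather than assigned in the portal. A hedged Azure CLI sketch; the object ID and resource IDs are placeholders you copy from your MedTech service identity and from your event hub and FHIR service:

```azurecli
# Placeholders: the MedTech service system-assigned managed identity object ID
# and the full resource IDs of the event hub and the FHIR service.
MEDTECH_PRINCIPAL_ID="<medtech-service-principal-id>"
EVENTHUB_ID="<event-hub-resource-id>"
FHIR_SERVICE_ID="<fhir-service-resource-id>"

# Grant the identity read access to the event hub.
az role assignment create --assignee-object-id "$MEDTECH_PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Azure Event Hubs Data Receiver" --scope "$EVENTHUB_ID"

# Grant the identity read/write access to the FHIR service.
az role assignment create --assignee-object-id "$MEDTECH_PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "FHIR Data Writer" --scope "$FHIR_SERVICE_ID"
```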
++> [!TIP] +> You can use the MedTech service [Mapping debugger](how-to-use-mapping-debugger.md) for assistance creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations. ++## Next steps ++This article described the deployment steps needed to get started using the MedTech service. ++To learn about other methods of deploying the MedTech service, see ++> [!div class="nextstepaction"] +> [Choose a deployment method for the MedTech service](deploy-choose-method.md) ++For an overview of the MedTech service device data processing stages, see ++> [!div class="nextstepaction"] +> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md) ++For frequently asked questions (FAQs) about the MedTech service, see ++> [!div class="nextstepaction"] +> [Frequently asked questions about the MedTech service](frequently-asked-questions.md) |
healthcare-apis | Deploy Manual Post | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-post.md | - Title: Manual deployment and post-deployment of the MedTech service using the Azure portal - Azure Health Data Services -description: In this article, you'll learn how to manually create a deployment and post-deployment of the MedTech service in the Azure portal. ---- Previously updated : 04/25/2023----# Quickstart: Part 3: Manual deployment and post-deployment of the MedTech service --> [!NOTE] -> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. --When you're satisfied with your configuration and it has been successfully validated, you can complete the deployment and post-deployment process. --## Create your manual deployment --1. Select the **Create** button to begin the deployment. --2. The deployment process may take several minutes. The screen will display a message saying that your deployment is in progress. --3. When Azure has finished deploying, a message will appear will say, "Your Deployment is complete" and will also display the following information: --- Deployment name-- Subscription-- Resource group-- Deployment details--Your screen should look something like this: -- :::image type="content" source="media\deploy-manual-post\created-medtech-service.png" alt-text="Screenshot of the MedTech service deployment completion." lightbox="media\deploy-manual-post\created-medtech-service.png"::: --## Manual post-deployment requirements --There are two post-deployment steps you must perform or the MedTech service can't: --1. Read device data from the device message event hub. -2. Read or write to the FHIR service. --These steps are: --1. Grant access to the device message event hub. -2. Grant access to the FHIR service. --These two other steps are needed because MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and control of your MedTech service assets. --### Grant access to the device message event hub --Follow these steps to grant access to the device message event hub: --1. In the **Search** bar at the top center of the Azure portal, enter and select the name of your **Event Hubs Namespace** that was previously created for your MedTech service device messages. --2. Select the **Event Hubs** button under **Entities**. --3. Select the event hub that will be used for your MedTech service device messages. For this example, the device message event hub is named **devicedata**. --4. Select the **Access control (IAM)** button. --5. Select the **Add role assignment** button. --6. On the **Add role assignment** page, select the **View** button directly across from the **Azure Event Hubs Data Receiver** role. The Azure Event Hubs Data Receiver role allows the MedTech service to receive device message data from this event hub. For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md). --7. Select the **Select role** button. --8. Select the **Next** button. --9. In the **Add role assignment** page, select **Managed identity** next to **Assign access to** and **+ Select members** next to **Members**. --10. 
When the **Select managed identities** box opens, under the **Managed identity** box, select **MedTech service,** and find your MedTech service system-assigned managed identity under the **Select** box. Once the system-assigned managed identity for your MedTech service is found, select it, and then select the **Select** button. -- The system-assigned managed identity name for your MedTech service is a concatenation of the workspace name and the name of your MedTech service, using the format: **"your workspace name"/"your MedTech service name"** or **"your workspace name"/iotconnectors/"your MedTech service name"**. For example: **azuredocsdemo/mt-azuredocsdemo** or **azuredocsdemo/iotconnectors/mt-azuredocsdemo**. --11. On the **Add role assignment** page, select the **Review + assign** button. --12. On the **Add role assignment** confirmation page, select the **Review + assign** button. --13. After the role assignment has been successfully added to the event hub, a notification will display on your screen with a green check mark. This notification indicates that your MedTech service can now read from your device message event hub. It should look like this: -- :::image type="content" source="media\deploy-manual-post\validate-medtech-service-managed-identity-added-to-event-hub.png" alt-text="Screenshot of the MedTech service system-assigned managed identity being successfully granted access to the event hub with a red box around the message." lightbox="media\deploy-manual-post\validate-medtech-service-managed-identity-added-to-event-hub.png"::: --For more information about authorizing access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md). --### Grant access to the FHIR service --The process for granting your MedTech service system-assigned managed identity access to your **FHIR service** requires the same 13 steps that you used to grant access to your device message event hub. There are two exceptions. The first is that, instead of navigating to the **Access Control (IAM)** menu from within your event hub (as outlined in steps 1-4), you should navigate to the equivalent **Access Control (IAM)** menu from within your **FHIR service**. The second exception is that, in step 6, your MedTech service system-assigned managed identity will require you to select the **View** button directly across from **FHIR Data Writer** access instead of the button across from **Azure Event Hubs Data Receiver**. --The **FHIR Data Writer** role provides read and write access to your FHIR service, which your MedTech service uses to access or persist data. Because the MedTech service is deployed as a separate resource, the FHIR service will receive requests from the MedTech service. If the FHIR service doesn't know who's making the request, it will deny the request as unauthorized. --For more information about assigning roles to the FHIR service, see [Configure Azure Role-based Access Control (RBAC)](.././configure-azure-rbac.md). --For more information about application roles, see [Authentication & Authorization for Azure Health Data Services](.././authentication-authorization.md). --Now that you have granted access to the device message event hub and the FHIR service, your manual deployment is complete. Your MedTech service is now ready to receive data from a device and process it into a FHIR Observation resource. 
--## Next steps --In this article, you learned how to perform the manual deployment and post-deployment steps to implement your MedTech service. --To learn about other methods for deploying the MedTech service, see --> [!div class="nextstepaction"] -> [Choose a deployment method for the MedTech service](deploy-new-choose.md) --FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Deploy Manual Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-prerequisites.md | - Title: Deploy the MedTech service manually using the Azure portal - Azure Health Data Services -description: In this article, you'll learn how to deploy the MedTech service manually using the Azure portal. ---- Previously updated : 04/19/2022----# Quickstart: Deploy the MedTech service manually using the Azure portal --> [!NOTE] -> [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. --You may prefer to manually deploy the MedTech service if you need to track every step of the developmental process. Manual deployment might be necessary if you have to customize or troubleshoot your deployment. Manual deployment will help you by providing all the details for implementing each task. --The explanation of the MedTech service manual deployment using the Azure portal is divided into three parts that cover each of key tasks required: --- Part 1: Prerequisites (see Prerequisites below)-- Part 2: Configuration (see [Configure for manual deployment](deploy-new-config.md))-- Part 3: Deployment and Post Deployment (see [Manual deployment and post-deployment](deploy-new-deploy.md))--If you need a diagram with information on the MedTech service deployment, there's an overview at [Choose a deployment method](deploy-new-choose.md#deployment-overview). This diagram shows the steps of deployment and how MedTech service processes device data into FHIR Observations. --## Part 1: Prerequisites --Before you can begin configuring to deploy MedTech services, you need to have the following five prerequisites: --- A valid Azure subscription-- A resource group deployed in the Azure portal-- A workspace deployed in Azure Health Data Services-- An event hub deployed in a namespace-- FHIR service deployed in Azure Health Data Services--## Open your Azure account --The first thing you need to do is determine if you have a valid Azure subscription. If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). --## Deploy a resource group in the Azure portal --When you sign in to your Azure account, go to the Azure portal and select the **Create a resource** button. Enter "Azure Health Data Services" in the "Search services and marketplace" box. This step should take you to the Azure Health Data Services page. --## Deploy a workspace in Azure Health Data Services --The first resource you must create is a workspace to contain your Azure Health Data Services resources. Start by selecting Create from the Azure Health Data Services resource page. This step will take you to the first page of Create Azure Health Data Services workspace, when you need to do the following eight steps: --1. Fill in the resource group you want to use or create a new one. --2. Give the workspace a unique name. --3. Select the region you want to use. --4. Select the Networking button at the bottom to continue. --5. Choose whether you want a public or private endpoint. --6. Create tags if you want to use them. They're optional. --7. When you're ready to continue, select the Review + create tab. --8. Select the Create button to deploy your workspace. --After a short delay, you'll start to see information about your new workspace. Make sure you wait until all parts of the screen are displayed. 
If your initial deployment was successful, you should see: --- "Your deployment is complete"-- Deployment name-- Subscription name-- Resource group name--## Deploy an event hub in the Azure portal using a namespace --An event hub is the next prerequisite you need to create. It's an important step because the event hub receives the data flow from a device and stores it until the MedTech service picks up the device data. Once the MedTech service picks up the device data, it can begin the transformation of the device data into a FHIR service Observation resource. Because Internet propagation times are indeterminate, the event hub is needed to buffer the data and store it for as much as 24 hours before expiring. --Before you can create an event hub, you must create a namespace in Azure portal to contain it. For more information on how To create a namespace and an event hub, see [Azure Event Hubs namespace and event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md). --## Deploy the FHIR service --The last prerequisite you need to do before you can configure and deploy MedTech service, is to deploy the FHIR service. --There are three ways to deploy FHIR service: --1. Using portal. See [Deploy a FHIR service within Azure Health Data Services - using portal](../fhir/fhir-portal-quickstart.md). --2. Using Bicep. See [Deploy a FHIR service within Azure Health Data Services using Bicep](../fhir/fhir-service-bicep.md). --3. Using an ARM template. See [Deploy a FHIR service within Azure Health Data Services - using ARM template](../fhir/fhir-service-resource-manager-template.md). --After you have deployed FHIR service, it will be ready to receive the data processed by MedTech and persist it as a FHIR service Observation. --## Continue on to Part 2: Configuration --After your prerequisites are successfully completed, you can go on to Part 2: Configuration. See **Next steps**. --## Next steps --When you're ready to begin Part 2 of Manual Deployment, see --> [!div class="nextstepaction"] -> [Part 2: Configure the MedTech service for manual deployment using the Azure portal](deploy-new-config.md) --FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Device Messages Through Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md | Title: Receive device messages through Azure IoT Hub - Azure Health Data Services -description: Learn how to deploy Azure IoT Hub with message routing to send device messages to the MedTech service in Azure Health Data Services. The tutorial uses an Azure Resource Manager template in the Azure portal and Visual Studio Code with the Azure IoT Hub extension. +description: Learn how to deploy Azure IoT Hub with message routing to send device messages to the MedTech service. The tutorial uses an Azure Resource Manager template and Visual Studio Code with the Azure IoT Hub extension. Previously updated : 04/25/2023 Last updated : 04/28/2023 To begin your deployment and complete the tutorial, you must have the following - An active Azure subscription account. If you don't have an Azure subscription, see the [subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). -- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)+- **Owner** or **Contributor and User Access Administrator** role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) -- The `Microsoft.HealthcareApis`, `Microsoft.EventHub`, and `Microsoft.Devices` resource providers registered with your Azure subscription. To learn more, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).+- The Microsoft.HealthcareApis, Microsoft.EventHub, and Microsoft.Devices resource providers registered with your Azure subscription. To learn more, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). - [Visual Studio Code](https://code.visualstudio.com/Download) installed locally. To begin deployment in the Azure portal, select the **Deploy to Azure** button: :::image type="content" source="media\device-messages-through-iot-hub\deployment-complete-banner.png" alt-text="Screenshot that shows a green checkmark and the message Your deployment is complete."::: > [!IMPORTANT]- > If you're going to allow access from multiple services to the device message event hub, it's required that each service has its own event hub consumer group. + > If you're going to allow access from multiple services to the event hub, it's required that each service has its own event hub consumer group. > > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups). > > Examples: >- > * Two MedTech services accessing the same device message event hub. + > * Two MedTech services accessing the same event hub. >- > * A MedTech service and a storage writer application accessing the same device message event hub. + > * A MedTech service and a storage writer application accessing the same event hub. 
## Review deployed resources and access permissions When deployment is completed, the following resources and access roles are created in the template deployment: -* An Event Hubs namespace and event hub. In this deployment, the event hub is named *devicedata*. +* Event Hubs namespace and event hub. In this deployment, the event hub is named *devicedata*. - * An event hub consumer group. In this deployment, the consumer group is named *$Default*. + * Event hub consumer group. In this deployment, the consumer group is named *$Default*. - * An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). The Azure Event Hubs Data Sender role isn't used in this tutorial. + * **Azure Event Hubs Data Sender** role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the event hub using a shared access signature (SAS). To learn more about authorizing access using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). The **Azure Event Hubs Data Sender** role isn't used in this tutorial. -* An IoT hub with [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) configured to send device messages to the event hub. +* IoT hub with [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) configured to send device messages to the event hub. -* A [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md), which provides send access from the IoT hub to the event hub. The managed identity has the Azure Event Hubs Data Sender role in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub. +* [User-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md), which provides send access from the IoT hub to the event hub. The managed identity has the **Azure Event Hubs Data Sender** role in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub. -- A Health Data Services workspace.+* Health Data Services workspace. -- A Health Data Services FHIR service.+* Health Data Services FHIR service. -- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:+* Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles: - - For the event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub. + * For the event hub, the **Azure Event Hubs Data Receiver** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the event hub. - - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. 
+ * For the FHIR service, the **FHIR Data Writer** role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service. -- Conforming and valid MedTech service [device](overview-of-device-mapping.md) and [FHIR destination mappings](overview-of-fhir-destination-mapping.md). **Resolution type** is set to **Create**.+* Conforming and valid MedTech service [device](overview-of-device-mapping.md) and [FHIR destination mappings](overview-of-fhir-destination-mapping.md). **Resolution type** is set to **Create**. > [!IMPORTANT] > In this tutorial, the ARM template configures the MedTech service to operate in **Create** mode. A Patient resource and a Device resource are created for each device that sends data to your FHIR service. >-> To learn about the MedTech service resolution types **Create** and **Lookup**, see [Destination properties](deploy-manual-config.md#destination-properties). +> To learn about the MedTech service resolution types **Create** and **Lookup**, see [Configure the Destination tab](deploy-manual-portal.md#configure-the-destination-tab). ## Create a device and send a test message You complete the steps by using Visual Studio Code with the Azure IoT Hub extension. > [!NOTE] > > :::image type="content" source="media\device-messages-through-iot-hub\iot-hub-enriched-device-message.png" alt-text="Screenshot of an Azure IoT Hub enriched device message." lightbox="media\device-messages-through-iot-hub\iot-hub-enriched-device-message.png"::: >- > `patientIdExpression` is only required for MedTech services in the **Create** mode, however, if **Lookup** is being used, a Device resource with a matching Device Identifier must exist in the FHIR service. This example assumes your MedTech service is in a **Create** mode. The **Resolution type** for this tutorial set to **Create**. For more information on the **Destination properties**: **Create** and **Lookup**, see [Configure Destination properties](deploy-new-config.md#destination-properties). + > `patientIdExpression` is only required for MedTech services in **Create** mode; however, if **Lookup** is being used, a Device resource with a matching Device Identifier must exist in the FHIR service. This example assumes your MedTech service is in **Create** mode. The **Resolution type** for this tutorial is set to **Create**. For more information on the **Destination properties**: **Create** and **Lookup**, see [Configure the Destination tab](deploy-manual-portal.md#configure-the-destination-tab). ## Review metrics from the test message |
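To make the test-message step above more concrete, the following is a minimal, hypothetical device message payload of the kind the tutorial routes from the IoT hub to the *devicedata* event hub. The property names (`HeartRate`, `PatientId`) and values are illustrative assumptions, not the tutorial's exact sample; in **Create** mode, a `patientIdExpression` such as `$.PatientId` (also an assumption) would resolve the patient identity from a field like this.

```json
{
  "HeartRate": 78,
  "RespiratoryRate": 12,
  "BodyTemperature": 36.8,
  "PatientId": "patient-01",
  "Timestamp": "2023-04-28T10:15:00Z"
}
```

Any payload shaped roughly like this is normalized by the MedTech service device mapping and then transformed into a FHIR Observation by the FHIR destination mapping; the exact fields depend on your own device mapping.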
healthcare-apis | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md | As a prerequisite, you need an Azure subscription and have been granted the prop ## Deploy resources -After you obtain the required subscription prerequisites, the first step is to create and deploy the MedTech service prerequisite resources: +After you obtain the required subscription prerequisites, the first step is to deploy the MedTech service prerequisite resources: * Azure resource group. * Azure Event Hubs namespace and event hub. Deploy a [FHIR service](../fhir/fhir-portal-quickstart.md) into your resource gr ### Deploy a MedTech service -If you have successfully deployed the prerequisite resources, you're now ready to deploy a [MedTech service](deploy-manual-prerequisites.md) using your workspace. +If you have successfully deployed the prerequisite resources, you're now ready to deploy the [MedTech service](deploy-manual-prerequisites.md) using your workspace. ## Next steps -This article described the basic steps needed to get started using the MedTech service. +This article described the basic steps needed to get started deploying the MedTech service. To learn about methods of deploying the MedTech service, see |
healthcare-apis | Overview Of Device Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-mapping.md | The normalization process validates the device mapping before allowing it to be |values[].required |True |True | > [!NOTE] -> `values[].valueName, values[].valueExpression`, `values[].required` and elements are only required if you have a value entry in the array. It's valid to have no values mapped. These elements are used when the telemetry being sent is an event. +> The `values[].valueName`, `values[].valueExpression`, and `values[].required` elements are only required if you have a value entry in the array. It's valid to have no values mapped. These elements are used when the telemetry being sent is an event. > > For example, some scenarios may require creating a FHIR Observation in the FHIR service that does not contain a value. |
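As a rough sketch of how the `values[]` elements described above fit into a device mapping, the following JSON shows one CalculatedContent template with a single value entry. The type name, JSONPath expressions, and telemetry field names are illustrative assumptions, not taken from the article.

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "CalculatedContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartRate)]",
        "deviceIdExpression": "$.matchedToken.deviceId",
        "timestampExpression": "$.matchedToken.endDate",
        "values": [
          {
            "valueName": "hr",
            "valueExpression": "$.matchedToken.heartRate",
            "required": true
          }
        ]
      }
    }
  ]
}
```

If the telemetry being sent is an event with no measurement to extract, the `values` array can be left empty, which matches the note above.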
healthcare-apis | Overview Of Fhir Destination Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-fhir-destination-mapping.md | The MedTech service requires two types of [JSON](https://www.json.org/) mappings ## FHIR destination mapping basics -The FHIR destination mapping controls how the normalized data extracted from a device message is mapped into a FHIR observation. +The FHIR destination mapping controls how the normalized data extracted from a device message is mapped into a FHIR Observation. -- Should an observation be created for a point in time or over a period of an hour?-- What codes should be added to the observation?-- Should the value be represented as [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) or a [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity)?+* Should an observation be created for a point in time or over a period of an hour? +* What codes should be added to the observation? +* Should the value be represented as [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) or a [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity)? These data types are all options the FHIR destination mapping configuration controls. This diagram provides an illustration of what happens during the transformation > [!NOTE] > The FHIR Observation in this diagram is not the complete resource. See [Example](#example) in this overview for the entire FHIR Observation. -## FHIR destination mapping validations --The validation process validates the FHIR destination mapping before allowing them to be saved for use. These elements are required in the FHIR destination mapping. --**FHIR destination mapping** --|Element|Required| -|:|:-| -|typeName|True| --> [!NOTE] -> The 'typeName' element is used to link a FHIR destination mapping template to one or more device mapping templates. Device mapping templates with the same 'typeName' element generate normalized data that will be evaluated with a FHIR destination mapping template that has the same 'typeName'. - ## CollectionFhir CollectionFhir is the root template type used by the MedTech service FHIR destination mapping. CollectionFhir is a list of all templates that are used during the transformation stage. You can define one or more templates within CollectionFhir, with each normalized message evaluated against all templates. ### CodeValueFhir -CodeValueFhir is currently the only template supported in FHIR destination mapping at this time. It allows you to define codes, the effective period, and the value of the observation. Multiple value types are supported: [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept), [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity), and [String](https://www.hl7.org/fhir/datatypes.html#primitive). Along with these configurable values, the identifier for the Observation resource and linking to the proper Device and Patient resources are handled automatically. --> [!NOTE] -> --|Property|Description| -|:-|--| -|**typeName**| The type of measurement this template should bind to. There should be at least one Device mapping template that outputs this type. -|**periodInterval**|The period of time the observation created should represent. Supported values are 0 (an instance), 60 (an hour), 1440 (a day). Note: `periodInterval` is required when the Observation type is "SampledData" and is ignored for any other Observation types. 
-|**category**|Any number of [CodeableConcepts](http://hl7.org/fhir/datatypes-definitions.html#codeableconcept) to classify the type of observation created. -|**codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created. -|**codes[].code**|The code for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). -|**codes[].system**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). -|**codes[].display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). -|**value**|The value to extract and represent in the observation. For more information, see [Value type codes](#value-type-codes). -|**components**|*Optional:* One or more components to create on the observation. -|**components[].codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the component. -|**components[].value**|The value to extract and represent in the component. For more information, see [Value type codes](#value-type-codes). +CodeValueFhir is currently the only template supported in FHIR destination mapping. It allows you to define codes, the effective period, and the value of the observation. Multiple value types are supported: [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept), [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity), and [string](https://www.hl7.org/fhir/datatypes.html#string). Along with these configurable values, the identifier for the Observation resource and linking to the proper Device and Patient resources are handled automatically. ++|Element|Description|Required| +|:|:-|:-| +|**typeName**| The type of measurement this template should bind to. There should be at least one device mapping template that has this same `typeName`.|TBD| +|**periodInterval**|The period of time the observation created should represent. Supported values are 0 (an instance), 60 (an hour), 1440 (a day).|TBD Note: `periodInterval` is required when the Observation type is "SampledData" and is ignored for any other Observation types.| +|**category**|Any number of [CodeableConcepts](http://hl7.org/fhir/datatypes-definitions.html#codeableconcept) to classify the type of observation created.|TBD| +|**codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created.|TBD| +|**codes[].code**|The code for a [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding) in the `codes` property.|TBD| +|**codes[].system**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).|TBD| +|**codes[].display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding).|TBD| +|**value**|The value to extract and represent in the observation. For more information, see [Value types](#value-types).|TBD| +|**components**|*Optional:* One or more components to create on the observation.|TBD| +|**components[].codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the component.|TBD| +|**components[].value**|The value to extract and represent in the component. 
For more information, see [Value types](#value-types).|TBD| :::image type="content" source="media/overview-of-fhir-destination-mapping/fhir-destination-mapping-templates-diagram.png" alt-text="Diagram showing MedTech service FHIR destination mapping template and code architecture." lightbox="media/overview-of-fhir-destination-mapping/fhir-destination-mapping-templates-diagram.png"::: -### Value type codes +### Value types ++All CodeValueFhir templates' `value` element contains these elements: ++|Element|Description|Required| +|:|:-|:-| +|**valueType**|Type of the value. This value would be "SampledData", "Quantity", "CodeableConcept", or "string" depending on the value type.|TBD| +|**valueName**|Name of the value.|TBD| -The supported value type codes for the MedTech service FHIR destination mapping: +These value types are supported in the MedTech service FHIR destination mapping: -### SampledData +#### SampledData -Represents the [SampledData](http://hl7.org/fhir/datatypes.html#SampledData) FHIR data type. Observation measurements are written to a value stream starting at a point in time and incrementing forward using the period defined. If no value is present, an `E` is written into the data stream. If the period is such that two more values occupy the same position in the data stream, the latest value is used. The same logic is applied when an observation using the SampledData is updated. +Represents the [SampledData](http://hl7.org/fhir/datatypes.html#SampledData) FHIR data type. Observation measurements are written to a value stream starting at a point in time and incrementing forward using the period defined. If no value is present, an `E` is written into the data stream. If the period is such that two or more values occupy the same position in the data stream, the latest value is used. The same logic is applied when an observation using the SampledData is updated. For a CodeValueFhir template with the SampleData value type, the template's `value` element contains the following elements: -| Property | Description -| | -|**DefaultPeriod**|The default period in milliseconds to use. -|**Unit**|The unit to set on the origin of the SampledData. +|Element|Description|Required| +|:|:-|:-| +|**defaultPeriod**|The default period in milliseconds to use.|TBD| +|**unit**|The unit to set on the origin of the SampledData. |TBD| -### Quantity +#### Quantity -Represents the [Quantity](http://hl7.org/fhir/datatypes.html#Quantity) FHIR data type. This type creates a single, point in time, Observation. If a new value arrives that contains the same device identifier, measurement type, and timestamp, the previous Observation is updated to the new value. +Represents the [Quantity](http://hl7.org/fhir/datatypes.html#Quantity) FHIR data type. This type creates a single, point in time, Observation. If a new value arrives that contains the same device identifier, measurement type, and timestamp, the previous Observation is updated to the new value. For a CodeValueFhir template with the Quantity value type, the template's `value` element contains the following elements: -| Property | Description -| | -|**Unit**| Unit representation. -|**Code**| Coded form of the unit. -|**System**| System that defines the coded unit form. 
+|Element|Description|Required| +|:|:-|:-| +|**unit**|Unit representation.|TBD| +|**code**|Coded form of the unit.|TBD| +|**system**|System that defines the coded unit form.|TBD| -### CodeableConcept +#### CodeableConcept -Represents the [CodeableConcept](http://hl7.org/fhir/datatypes.html#CodeableConcept) FHIR data type. The value in the normalized data model isn't used, and instead when this type of data is received, an Observation is created with a specific code representing that an observation was recorded at a specific point in time. +Represents the [CodeableConcept](http://hl7.org/fhir/datatypes.html#CodeableConcept) FHIR data type. The value in the normalized data model isn't used, and instead when this type of data is received, an Observation is created with a specific code representing that an observation was recorded at a specific point in time. For a CodeValueFhir template with the CodeableConcept value type, the template's `value` element contains the following elements: -| Property | Description -| | -|**Text**|Plain text representation. -|**Codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created. -|**Codes[].Code**|The code for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). -|**Codes[].System**|The system for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). -|**Codes[].Display**|The display for the [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding). +|Element|Description|Required| +|:|:-|:-| +|**text**|Plain text representation.|TBD| +|**codes**|One or more [Codings](http://hl7.org/fhir/datatypes-definitions.html#coding) to apply to the observation created.|TBD| +|**codes[].code**|The code for a [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding) in the `codes` property.|TBD| +|**codes[].system**|The system for a [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding) in the `codes` property.|TBD| +|**codes[].display**|The display for a [Coding](http://hl7.org/fhir/datatypes-definitions.html#coding) in the `codes` property.|TBD| -### String +#### String -Represents the [string](https://www.hl7.org/fhir/datatypes.html#string) FHIR data type. This type creates a single, point in time, Observation. If new value arrives that contains the same device identifier, measurement type, and timestamp, the previous Observation is updated to the new value. +Represents the [string](https://www.hl7.org/fhir/datatypes.html#string) FHIR data type. This type creates a single, point in time, Observation. If a new value arrives that contains the same device identifier, measurement type, and timestamp, the previous Observation is updated to the new value. No other elements are defined. ### Example |
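To tie the CodeValueFhir elements and value types above together, here is a minimal FHIR destination mapping sketch that emits a Quantity-valued Observation. It assumes a device mapping that produces normalized data with a matching `typeName` of `heartrate`; the LOINC and UCUM codes shown are illustrative choices, not prescribed by the article.

```json
{
  "templateType": "CollectionFhir",
  "template": [
    {
      "templateType": "CodeValueFhir",
      "template": {
        "typeName": "heartrate",
        "codes": [
          {
            "code": "8867-4",
            "system": "http://loinc.org",
            "display": "Heart rate"
          }
        ],
        "value": {
          "valueType": "Quantity",
          "valueName": "hr",
          "unit": "count/min",
          "code": "/min",
          "system": "http://unitsofmeasure.org"
        }
      }
    }
  ]
}
```

Because the value type here is Quantity rather than SampledData, no `periodInterval` is set; each normalized measurement produces, or updates, a single point-in-time Observation.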
iot-develop | About Getting Started Device Development | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-getting-started-device-development.md | -This article shows how to quickly get started with Azure IoT device development. As a prerequisite, see the introductory articles [What is Azure IoT device and application development?](about-iot-develop.md) and [Overview of Azure IoT Device SDKs](about-iot-sdks.md). These articles summarize key development options, tools, and SDKs available to device developers. +This article shows how to get started quickly with Azure IoT device development. As a prerequisite, see the introductory articles [What is Azure IoT device and application development?](about-iot-develop.md) and [Overview of Azure IoT Device SDKs](about-iot-sdks.md). These articles summarize key development options, tools, and SDKs available to device developers. -In this article, you'll select from a set of device quickstarts to get started with hands-on development. +In this article, you can select from a set of device quickstarts to get started with hands-on development. ## Quickstarts for general devices-To start using the Azure IoT device SDKs to connect general, unconstrained MPU devices to Azure IoT, see the following articles. These quickstarts provide simulators and don't require you to have a physical device. +See the following articles to start using the Azure IoT device SDKs to connect general, microprocessor unit (MPU) devices to Azure IoT. Examples of general MPU devices with larger compute and memory resources include PCs, servers, Raspberry Pi devices, and smartphones. The following quickstarts all provide device simulators and don't require you to have a physical device. Each quickstart shows how to set up a code sample and tools, run a temperature controller sample, and connect it to Azure. After the device is connected, you perform several common operations. Each quickstart shows how to set up a code sample and tools, run a temperature c |[Send telemetry from a device to Azure IoT Hub (Java)](quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java)|[Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java)| ## Quickstarts for embedded devices-To start using the Azure IoT embedded device SDKs to connect embedded, resource-constrained MCU devices to Azure IoT, see the following articles. These quickstarts require you to have one of the listed devices. +See the following articles to start using the Azure IoT embedded device SDKs to connect embedded, resource-constrained microcontroller unit (MCU) devices to Azure IoT. Examples of constrained MCU devices with compute and memory limitations include sensors and special purpose hardware modules or boards. The following quickstarts require you to have the listed MCU devices. Each quickstart shows how to set up a code sample and tools, flash the device, and connect it to Azure. After the device is connected, you perform several common operations. |
iot-develop | About Iot Develop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/about-iot-develop.md | -Azure IoT is a collection of managed and platform services that connect, monitor, and control your IoT devices. Azure IoT offers developers a comprehensive set of options. Your options include device platforms, supporting cloud services, SDKs, and tools for building device-enabled cloud applications. +Azure IoT is a collection of managed and platform services that connect, monitor, and control your IoT devices. Azure IoT offers developers a comprehensive set of options. Your options include device platforms, supporting cloud services, SDKs, MQTT support, and tools for building device-enabled cloud applications. This article overviews several key considerations for developers who are getting started with Azure IoT. - [Understanding device development paths](#device-development-paths) This article discusses two common device development paths. Each path includes a > [!NOTE] > If your device is able to run a general-purpose operating system, we recommend following the [Device application development](#device-application-development) path. It provides a richer set of development options. -* **Embedded device development:** Describes development targeting resource constrained devices. A resource constrained device will often be used to reduce per unit costs, power consumption, or device size. These devices have direct control over the hardware platform they execute on. +* **Embedded device development:** Describes development targeting resource constrained devices. Often you use a resource-constrained device to reduce per unit costs, power consumption, or device size. These devices have direct control over the hardware platform they execute on. ### Device application development Device application developers are adapting existing devices to connect to the cloud and integrate into their IoT solutions. These devices can support higher-order languages, such as C# or Python, and often support a robust general purpose operating system such as Windows or Linux. Common target devices include PCs, Containers, Raspberry Pis, and mobile devices. -Rather than develop constrained devices at scale, device application developers focus on enabling a specific IoT scenario required by their cloud solution. Some developers will also work on constrained devices for their cloud solution. For developers working with resource constrained devices, see the [Embedded Device Development](#embedded-device-development) path. +Rather than develop constrained devices at scale, device application developers focus on enabling a specific IoT scenario required by their cloud solution. Some developers also work on constrained devices for their cloud solution. For developers working with resource constrained devices, see the [Embedded Device Development](#embedded-device-development) path. > [!IMPORTANT] > For information on SDKs to use with device application development, see the [Device SDKs](about-iot-sdks.md#device-sdks). Embedded development targets constrained devices that have limited memory and pr Embedded devices typically use a real-time operating system (RTOS), or no operating system at all. Embedded devices have full control over their hardware, due to the lack of a general purpose operating system. That fact makes embedded devices a good choice for real-time systems. -The current embedded SDKs target the **C** language. 
The embedded SDKs provide either no operating system, or Azure RTOS support. They're designed with embedded targets in mind. The design considerations include the need for a minimal footprint, and a non-memory allocating design. +The current embedded SDKs target the **C** language. The embedded SDKs provide either no operating system, or Azure RTOS support. They're designed with embedded targets in mind. The design considerations include the need for a minimal footprint, and a nonmemory allocating design. > [!IMPORTANT] > For information on SDKs to use with embedded device development, see the [Embedded device SDKs](about-iot-sdks.md#embedded-device-sdks). Azure IoT devices are the basic building blocks of an IoT solution and are respo For more information on the difference between devices types covered in this article, see [About IoT Device Types](concepts-iot-device-types.md). ## Choosing an SDK-As an Azure IoT device developer, you have a diverse set of SDKs to help you build device-enabled cloud applications. The SDKs streamline your development effort and simplify much of the complexity of connecting and managing devices. +As an Azure IoT device developer, you have a diverse set of SDKs, protocols and tools to help build device-enabled cloud applications. ++There are two main options to connect devices and communicate with IoT Hub: +- **Use the Azure IoT SDKs**. In most cases, we recommend that you use the Azure IoT SDKs versus using MQTT directly. The SDKs streamline your development effort and simplify the complexity of connecting and managing devices. IoT Hub supports the [MQTT v3.1.1](https://mqtt.org/) protocol, and the IoT SDKs simplify the process of using MQTT to communicate with IoT Hub. +- **Use the MQTT protocol directly**. There are some advantages of building an IoT Hub solution to use MQTT directly. For example, a solution that uses MQTT directly without the SDKs can be built on the open MQTT standard. A standards-based approach makes the solution more portable, and gives you more control over how devices connect and communicate. However, IoT Hub isn't a full-featured MQTT broker and doesn't support all behaviors specified in the MQTT v3.1.1 standard. The partial support for MQTT v3.1.1 adds development cost and complexity. Device developers should weigh the trade-offs of using the IoT device SDKs versus using MQTT directly. For more information, see [Communicate with an IoT hub using the MQTT protocol](../iot/iot-mqtt-connect-to-iot-hub.md). There are three sets of IoT SDKs for device development: - Device SDKs (for using higher order languages to connect existing general purpose devices to IoT applications) To learn more about choosing an Azure IoT device or service SDK, see [Overview o ## Selecting a service A key step in the development process is selecting a service to connect your devices to. There are two primary Azure IoT service options for connecting and managing devices: IoT Hub, and IoT Central. -- [Azure IoT Hub](../iot-hub/about-iot-hub.md). You can use Iot Hub to host IoT applications and connect devices. IoT Hub is a platform-as-a-service (PaaS) application that acts as a central message hub for bi-directional communication between IoT applications and connected devices. IoT Hub can scale to support millions of devices. Compared to other Azure IoT services, IoT Hub offers the greatest control and customization over your application design. 
It also offers the most developer tool options for working with the service, at the cost of some increase in development and management complexity.+- [Azure IoT Hub](../iot-hub/about-iot-hub.md). Use IoT Hub to host IoT applications and connect devices. IoT Hub is a platform-as-a-service (PaaS) application that acts as a central message hub for bi-directional communication between IoT applications and connected devices. IoT Hub can scale to support millions of devices. Compared to other Azure IoT services, IoT Hub offers the greatest control and customization over your application design. It also offers the most developer tool options for working with the service, at the cost of some increase in development and management complexity. - [Azure IoT Central](../iot-central/core/overview-iot-central.md). IoT Central is designed to simplify the process of working with IoT solutions. You can use it as a proof of concept to evaluate your IoT solutions. IoT Central is a software-as-a-service (SaaS) application that provides a web UI to simplify the tasks of creating applications, and connecting and managing devices. IoT Central uses IoT Hub to create and manage applications, but keeps most details transparent to the user. ## Tools to connect and manage devices |
iot-edge | Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md | Modules built as Linux containers can be deployed to either Linux or Windows dev All Windows operating systems must be minimum build 17763 with all current cumulative updates installed. +> [!NOTE] +> [Standard support for Ubuntu 18.04 LTS ends on May 31st, 2023](https://ubuntu.com/blog/18-04-end-of-standard-support). Beginning June 2023, Ubuntu 18.04 LTS won't be an IoT Edge *tier 1* supported platform. Ubuntu 18.04 LTS IoT Edge packages are available until Nov 30th, 2023. IoT Edge system modules Edge Agent and Edge Hub aren't impacted. If you take no action, Ubuntu 18.04 LTS based IoT Edge devices continue to work but ongoing security patches and bug fixes in the host packages for Ubuntu 18.04 won't be available after Nov 30th, 2023. To continue to receive support and security updates, we recommend that you update your host OS to a *tier 1* supported platform. For more information, see the [Update your IoT Edge devices on Ubuntu 18.04 LTS announcement](https://azure.microsoft.com/updates/update-ubuntu-1804/). + #### Windows containers We no longer support Windows containers. [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is the recommended way to run IoT Edge on Windows devices. The systems listed in the following table are considered compatible with Azure I | [Debian 11](https://www.debian.org/releases/bullseye/) |  | |  | | [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) |  |  |  | | [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) |  | |  |-| [RHEL 7](https://access.redhat.com/documentation/red_hat_enterprise_linux/7) |  |  |  | +| [RHEL 7](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7) |  |  |  | | [Ubuntu 18.04 <sup>2</sup>](https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes) | |  | | | [Ubuntu 20.04 <sup>2</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | |  | | | [Ubuntu 22.04 <sup>2</sup>](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | |  | | |
key-vault | Vs Key Vault Add Connected Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/vs-key-vault-add-connected-service.md | Title: Add Key Vault support to your ASP.NET project using Visual Studio - Azure description: Use this tutorial to help you learn how to add Key Vault support to an ASP.NET or ASP.NET Core web application. -+ Previously updated : 11/14/2022 Last updated : 4/28/2023 # Add Key Vault to your web application by using Visual Studio Connected Services Affects the project file .NET references and `packages.config` (NuGet references If you followed this tutorial, your Key Vault permissions are set up to run with your own Azure subscription, but that might not be desirable for a production scenario. You can create a managed identity to manage Key Vault access for your app. See [How to Authenticate to Key Vault](./authentication.md) and [Assign a Key Vault access policy](./assign-access-policy-portal.md). Learn more about Key Vault development by reading the [Key Vault Developer's Guide](developers-guide.md).++If your goal is to store configuration for an ASP.NET Core app in an Azure Key Vault, see [Azure Key Vault configuration provider in ASP.NET Core](/aspnet/core/security/key-vault-configuration). + |
machine-learning | How To Debug Pipeline Reuse Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-pipeline-reuse-issues.md | + + Title: Debug pipeline reuse issues in Azure Machine Learning ++description: Learn how reuse works in pipeline and how to debug reuse issues +++++ |