Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Architecture Icons | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/architecture-icons.md | Helping our customers design and architect new solutions is core to the Microsof | Month | Change description | |-|--|-| August 2023 | Added a downloadable package that contains the Microsoft Entra architecture icons, branding playbook (which contains guidelines about the Microsoft Security visual identity), and terms of use. | +| October 12, 2023 | Updated the downloadable package to include more Microsoft Entra product icons and updated branding playbook. | +| August 15, 2023 | Added a downloadable package that contains the Microsoft Entra architecture icons, branding playbook (which contains guidelines about the Microsoft Security visual identity), and terms of use. | ## Icon terms Microsoft permits the use of these icons in architectural diagrams, training materials, or documentation. You may copy, distribute, and display the icons only for the permitted use unless granted explicit permission by Microsoft. Microsoft reserves all other rights. > [!div class="button"]- > [I agree to the above terms. Download icons.](https://download.microsoft.com/download/a/4/2/a4289cad-4eaf-4580-87fd-ce999a601516/Microsoft-Entra-architecture-icons.zip?wt.mc_id=microsoftentraicons_downloadmicrosoftentraicons_content_cnl_csasci) + > [I agree to the above terms. Download icons.](https://download.microsoft.com/download/3/1/a/31a56038-856a-4489-88e4-ee5a1c4352be/Microsoft%20Entra%20architecture%20icons%20-%20Oct%202023.zip?wt.mc_id=microsoftentraicons_downloadmicrosoftentraicons_content_cnl_csasci) ## More icon sets from Microsoft |
active-directory | How To Mfa Additional Context | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md | description: Learn how to use additional context in MFA notifications Previously updated : 09/13/2023 Last updated : 10/11/2023 To enable application name or geographic location in the Microsoft Entra admin c ## Known issues -Additional context isn't supported for Network Policy Server (NPS) or Active Directory Federation Services (AD FS). +- Additional context isn't supported for Network Policy Server (NPS) or Active Directory Federation Services (AD FS). +- Users can modify the location reported by iOS and Android devices. As a result, Microsoft Authenticator is updating its security baseline for Location Based Access Control (LBAC) Conditional Access policies. Authenticator will deny authentications where the user may be using a different location than the actual GPS location of the mobile device where Authenticator is installed. ++ In the November 2023 release of Authenticator, users who modify the location of their device will see a denial message in Authenticator when doing an LBAC authentication. Beginning January 2024, any users who run older Authenticator versions will be blocked from LBAC authentication with a modified location: + + - Authenticator version 6.2309.6329 or earlier on Android + - Authenticator version 6.7.16 or earlier on iOS + + To find which users run older versions of Authenticator, use [Microsoft Graph APIs](/graph/api/resources/microsoftauthenticatorauthenticationmethod#properties). ## Next steps |
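The version cutoffs in the row above lend themselves to a simple triage check. A minimal sketch, assuming dotted Authenticator versions compare as integer tuples; the helper names are made up for illustration and are not part of any Microsoft API:

```python
def version_tuple(version: str) -> tuple:
    """Turn a dotted version string like '6.2309.6329' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

# Cutoffs from the article: these versions or earlier are blocked from LBAC
# authentication with a modified location beginning January 2024.
LBAC_CUTOFFS = {
    "android": version_tuple("6.2309.6329"),
    "ios": version_tuple("6.7.16"),
}

def blocked_for_modified_location(platform: str, version: str) -> bool:
    """True when this Authenticator build is at or below the blocking cutoff."""
    return version_tuple(version) <= LBAC_CUTOFFS[platform]

print(blocked_for_modified_location("android", "6.2309.6329"))  # True: at cutoff
print(blocked_for_modified_location("ios", "6.7.17"))           # False: newer build
```

A helper like this could be run over the version data returned by the Graph resource the article links to.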
active-directory | How To Configure Okta As An Identity Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-configure-okta-as-an-identity-provider.md | + + Title: Configure Okta as an identity provider +description: How to configure Okta as an identity provider in Microsoft Entra Permissions Management. +++++++ Last updated : 10/10/2023++++# Configure Okta as an identity provider (preview) ++This article describes how to integrate Okta as an identity provider (IdP) for an Amazon Web Services (AWS) account in Microsoft Entra Permissions Management. ++Permissions Required: +++| **Account** | **Permissions Required** |**Why?** | +|--|--|| +| Permissions Management | Permissions Management Administrator | Admin can create and edit the AWS authorization system onboarding configuration. | +| Okta | API Access Management Administrator | Admin can add the application in the Okta portal and add or edit the API scope. | +| AWS | AWS permissions explicitly | Admin should be able to run the CloudFormation stack to create 1. AWS Secret in Secrets Manager; 2. Managed policy to allow the role to read the AWS secret. | ++> [!NOTE] +> While configuring the Amazon Web Services (AWS) app in Okta, the suggested AWS role group syntax is (```aws#{account alias}#{role name}#{account #}```). +> Sample RegEx patterns for the group filter name are: +> - ```^aws\#\S+\#(?{{role}}[\w\-]+)\#(?{{accountid}}\d+)$``` +> - ```aws_(?{{accountid}}\d+)_(?{{role}}[a-zA-Z0-9+=,.@\-_]+)``` +> Permissions Management reads default suggested filters. Custom RegEx expressions for group syntax aren't supported. ++## How to configure Okta as an identity provider ++1. Log in to the Okta portal as an API Access Management Administrator. +2. Create a new **Okta API Services Application**. +3. In the Admin Console, go to Applications. +4. On the Create a new app integration page, select **API Services**. +5. 
Enter a name for your app integration and click **Save**. +6. Copy the **Client ID** for future use. +7. In the **Client Credentials** section of the General tab, click **Edit** to change the client authentication method. +8. Select **Public key/Private key** as the Client authentication method. +9. Leave the default **Save keys in Okta**, then click **Add key**. +10. Click **Add** and in the **Add a public key** dialog, either paste your own public key or click **Generate new key** to autogenerate a new 2048-bit RSA key. +11. Copy **Public Key Id** for future use. +12. Click **Generate new key** and the public and private keys appear in JWK format. +13. Click **PEM**. The private key appears in PEM format. + This is your only opportunity to save the private key. Click **Copy to clipboard** to copy the private key and store it somewhere safe. +14. Click **Done**. The new public key is now registered with the app and appears in a table in the **PUBLIC KEYS** section of the **General** tab. +15. From the Okta API scopes tab, grant these scopes: + - okta.users.read + - okta.groups.read + - okta.apps.read +16. Optional. Click the **Application rate limits** tab to adjust the rate-limit capacity percentage for this service application. By default, each new application sets this percentage at 50 percent. ++### Convert public key to a Base64 string ++1. See instructions for [using a personal access token (PAT)](https://go.microsoft.com/fwlink/?linkid=2249174). ++### Find your Okta URL (also called an Okta domain) ++This Okta URL/Okta domain is saved in the AWS secret. ++1. Sign in to your Okta organization with your administrator account. +2. Look for the Okta URL/Okta domain in the global header of the dashboard. +Once located, note the Okta URL in an app such as Notepad. You'll need this URL for your next steps. ++### Configure AWS stack details ++1. 
Fill in the following fields on the **CloudFormation Template Specify stack details** screen using the information from your Okta application: + - **Stack name** - A name of your choosing + - **Okta URL** - Your organization's Okta URL, for example: *https://companyname.okta.com* + - **Client Id** - From the **Client Credentials** section of your Okta application + - **Public Key Id** - Click **Add > Generate new key**. The public key is generated + - **Private Key (in PEM format)** - Base64 encoded string of the PEM format of the **Private key** + > [!NOTE] + > You must copy all text in the field before converting to a Base64 string, including the dashes before BEGIN PRIVATE KEY and after END PRIVATE KEY. +2. When the **CloudFormation Template Specify stack details** screen is complete, click **Next**. +3. On the **Configure stack options** screen, click **Next**. +4. Review the information you've entered, then click **Submit**. +5. Select the **Resources** tab, then copy the **Physical ID** (this ID is the Secret ARN) for future use. ++### Configure Okta in Microsoft Entra Permissions Management ++> [!NOTE] +> Integrating Okta as an identity provider is an optional step. You can return to these steps to configure an IdP at any time. ++1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches, select **Settings** (gear icon), and then select the **Data Collectors** subtab. +2. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**. + Complete the **Manage Authorization System** steps. + > [!NOTE] + > If a Data Collector already exists in your AWS account and you want to add Okta integration, follow these steps: + > 1. Select the Data Collector for which you want to add Okta integration. + > 1. Click on the ellipsis next to the **Authorization System Status**. + > 1. Select **Integrate Identity Provider**. ++3. On the **Integrate Identity Provider (IdP)** page, select the box for **Okta**. +4. 
Select **Launch CloudFormation Template**. The template opens in a new window. + > [!NOTE] + > Here you'll fill in information to create a secret Amazon Resource Name (ARN) that you'll enter on the **Integrate Identity Provider (IdP)** page. Microsoft does not read or store this ARN. +5. Return to the Permissions Management **Integrate Identity Provider (IdP)** page and paste the **Secret ARN** in the field provided. +6. Click **Next** to review and confirm the information you've entered. +7. Click **Verify Now & Save**. + The system returns the populated AWS CloudFormation Template. ++## Next steps ++- For information on how to view existing roles/policies, requests, and permissions, see [View roles/policies, requests, and permission in the Remediation dashboard](ui-remediation.md). |
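The suggested group-filter patterns in the Okta row above appear to use `{{role}}` and `{{accountid}}` as placeholders for named capture groups. A sketch of how the first suggested filter can match an AWS role group name, with the placeholders rendered as Python named groups; both that translation and the sample group name are assumptions for illustration:

```python
import re

# First suggested filter, with {{role}} / {{accountid}} written as Python named
# capture groups (this rendering is an assumption, not Permissions Management syntax).
group_filter = re.compile(r"^aws\#\S+\#(?P<role>[\w\-]+)\#(?P<accountid>\d+)$")

# Made-up group name following the aws#{account alias}#{role name}#{account #} syntax.
match = group_filter.match("aws#contoso#ReadOnlyRole#123456789012")
print(match.group("role"))       # ReadOnlyRole
print(match.group("accountid"))  # 123456789012
```

The named groups are what let the filter pull the role name and account ID out of a single Okta group name.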
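The CloudFormation note above says the **Private Key** field takes a Base64 encoding of the entire PEM text, BEGIN/END lines included. A minimal sketch of that conversion; the key material below is a placeholder, not a real key:

```python
import base64

# Placeholder PEM body -- in practice, paste the full private key copied from
# Okta, including the BEGIN/END lines and their surrounding dashes.
pem = """-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASC...
-----END PRIVATE KEY-----"""

encoded = base64.b64encode(pem.encode("utf-8")).decode("ascii")

# The encoding is lossless: decoding restores the exact PEM text.
assert base64.b64decode(encoded).decode("utf-8") == pem
```

Because the BEGIN line starts with dashes, a correctly encoded value always begins with `LS0tLS1C` — a quick sanity check before pasting it into the stack parameters.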
active-directory | Onboard Enable Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md | -> To complete this task, you must have *Microsoft Entra Permissions Management Administrator* permissions. You can't enable Permissions Management as a user from another tenant who has signed in via B2B or via Azure Lighthouse. +> To complete this task, you must have at least [*Billing Administrator*](https://go.microsoft.com/fwlink/?linkid=2248574) permissions. You can't enable Permissions Management as a user from another tenant who has signed in via B2B or via Azure Lighthouse. :::image type="content" source="media/onboard-enable-tenant/dashboard.png" alt-text="Screenshot of the Microsoft Entra Permissions Management dashboard." lightbox="media/onboard-enable-tenant/dashboard.png"::: To enable Permissions Management in your organization: ## How to enable Permissions Management on your Microsoft Entra tenant 1. In your browser:- 1. Browse to the [Microsoft Entra admin center](https://entra.microsoft.com) and sign in to [Microsoft Entra ID](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) as a [Global Administrator](https://aka.ms/globaladmin). + 1. Browse to the [Microsoft Entra admin center](https://entra.microsoft.com) and sign in to [Microsoft Entra ID](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) as at least a [Billing Administrator](https://go.microsoft.com/fwlink/?linkid=2248574). 1. If needed, activate the *Permissions Management Administrator* role in your Microsoft Entra tenant. 1. In the Azure portal, select **Microsoft Entra Permissions Management**, then select the link to purchase a license or begin a trial. |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md | Title: What's Permissions Management? -description: An introduction to Permissions Management. + Title: What's Microsoft Entra Permissions Management? +description: An introduction to Microsoft Entra Permissions Management. -# What's Microsoft Entra Permissions Management? --## Overview +# What's Microsoft Entra Permissions Management Microsoft Entra Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously mon Organizations have to consider permissions management as a central piece of their Zero Trust security to implement least privilege access across their entire infrastructure: - Organizations are increasingly adopting multicloud strategy and are struggling with the lack of visibility and the increasing complexity of managing access permissions.-- With the proliferation of identities and cloud services, the number of high-risk cloud permissions is exploding, expanding the attack surface for organizations.+- With the growth of identities and cloud services, the number of high-risk cloud permissions is exploding, expanding the attack surface for organizations. - IT security teams are under increased pressure to ensure access to their expanding cloud estate is secure and compliant. - The inconsistency of cloud providers' native access management models makes it even more complex for Security and Identity to manage permissions and enforce least privilege access policies across their entire environment. 
Organizations have to consider permissions management as a central piece of thei ## Key use cases -Permissions Management allows customers to address three key use cases: *discover*, *remediate*, and *monitor*. +Permissions Management allows customers to address three key use cases: *discover*, *remediate*, and *monitor*. -Permissions Management has been designed in such a way that we recommended you 'step-through' each of the below phases in order to gain insights into permissions across the organization. This is because you generally can't action what is yet to be discovered, likewise you can't continually evaluate what is yet to be remediated. +Permissions Management is designed in such a way that we recommend you 'step-through' each of the below phases in order to gain insights into permissions across the organization. This is because you generally can't take action on what has not been discovered, likewise you can't continually evaluate what has not been remediated. ### Discover Customers can detect anomalous activities with machine learning-powered (ML-powe - ML-powered anomaly detections. - Context-rich forensic reports around identities, actions, and resources to support rapid investigation and remediation. -Permissions Management deepens Zero Trust security strategies by augmenting the least privilege access principle, allowing customers to: +Permissions Management deepens Zero Trust security strategies by augmenting the least privilege access principle, allowing customers to: - Get comprehensive visibility: Discover which identity is doing what, where, and when. - Automate least privilege access: Use access analytics to ensure identities have the right permissions, at the right time. Once your organization has explored and implemented the discover, remediation and - Deepen your learning with [Introduction to Microsoft Entra Permissions Management](https://go.microsoft.com/fwlink/?linkid=2240016) learn module. 
- Sign up for a [45-day free trial](https://aka.ms/TryPermissionsManagement) of Permissions Management.-- For a list of frequently asked questions (FAQs) about Permissions Management, see [FAQs](faqs.md).+- For a list of frequently asked questions (FAQs) about Permissions Management, see [FAQs](faqs.md). |
active-directory | Product Permissions Analytics Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md | You can view the Permissions Analytics Report information directly in the Permis 2. Locate the **Permissions Analytics Report** in the list, then select it. 3. Select which Authorization System you want to generate the PDF download for (AWS, Azure, or GCP). >[!NOTE]- > You can download a PDF report for up to 10 authorization systems at one time. The authorization systems must be part of the same cloud environment (for example, 1-10 authorization systems that are all on Amazon Web Services (AWS)). + > (Preview) You can download a PDF report for up to 10 authorization systems at one time. The authorization systems must be part of the same cloud environment (for example, 1-10 authorization systems that are all on Amazon Web Services (AWS)). The following message displays: **Successfully started to generate PDF report**. |
active-directory | Concept Conditional Access Policy Common | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policy-common.md | Microsoft recommends these policies as the base for all organizations. We recomm - [Require multifactor authentication for admins](howto-conditional-access-policy-admin-mfa.md) - [Securing security info registration](howto-conditional-access-policy-registration.md) - [Block legacy authentication](howto-conditional-access-policy-block-legacy.md)+- [Require multifactor authentication for admins accessing Microsoft admin portals](how-to-policy-mfa-admin-portals.md) - [Require multifactor authentication for all users](howto-conditional-access-policy-all-users-mfa.md) - [Require multifactor authentication for Azure management](howto-conditional-access-policy-azure-management.md) - [Require compliant or Microsoft Entra hybrid joined device or multifactor authentication for all users](howto-conditional-access-policy-compliant-device.md) These policies help secure organizations with remote workers. # [Protect administrator](#tab/protect-administrator) -These policies are directed at highly privileged administrators in your environment, where compromise may cause the most damage. +These policies are directed at highly privileged administrators in your environment, where compromise might cause the most damage. - [Require multifactor authentication for admins](howto-conditional-access-policy-admin-mfa.md) - [Block legacy authentication](howto-conditional-access-policy-block-legacy.md) |
active-directory | Location Condition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md | Conditional Access policies are at their most basic an if-then statement combini ![Conceptual Conditional signal plus decision to get enforcement](./media/location-condition/conditional-access-signal-decision-enforcement.png) -As mentioned in the blog post [IPv6 is coming to Microsoft Entra ID](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/ipv6-coming-to-azure-ad/ba-p/2967451) we now support IPv6 in Microsoft Entra services. +As mentioned in the blog post [IPv6 is coming to Microsoft Entra ID](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/ipv6-coming-to-azure-ad/ba-p/2967451), we now support IPv6 in Microsoft Entra services. Organizations can use these locations for common tasks like: Multiple Conditional Access policies may prompt users for their GPS location bef > [!IMPORTANT] > Users may receive prompts every hour letting them know that Microsoft Entra ID is checking their location in the Authenticator app. The preview should only be used to protect very sensitive apps where this behavior is acceptable or where access needs to be restricted to a specific country/region. + #### Include unknown countries/regions Some IP addresses don't map to a specific country or region. To capture these IP locations, check the box **Include unknown countries/regions** when defining a geographic location. This option allows you to choose if these IP addresses should be included in the named location. Use this setting when the policy using the named location should apply to unknown locations. |
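Named locations, as the row above describes, are essentially sets of IP ranges (now including IPv6) that a sign-in address is tested against. A conceptual sketch of that containment check; the CIDR ranges are documentation-reserved examples, not real tenant data:

```python
import ipaddress

# Illustrative named location: a mix of IPv4 and IPv6 CIDR ranges.
named_location = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("2001:db8::/32"),
]

def in_named_location(ip: str) -> bool:
    """True when the address falls inside any range of the named location."""
    addr = ipaddress.ip_address(ip)
    # Only compare addresses and networks of the same IP version.
    return any(addr in net for net in named_location if net.version == addr.version)

print(in_named_location("203.0.113.7"))   # True
print(in_named_location("2001:db8::1"))   # True
print(in_named_location("198.51.100.1"))  # False
```

IPv6 support means policies keyed to named locations must list IPv6 ranges too, or clients arriving over IPv6 will fall outside every defined location.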
active-directory | How To Find Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-find-tenant.md | Azure subscriptions have a trust relationship with Microsoft Entra ID. Microsoft ## Find tenant ID through the Microsoft Entra admin center -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). - +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader). 1. Browse to **Identity** > **Overview** > **Properties**.- 1. Scroll down to the **Tenant ID** section and you can find your tenant ID in the box. +<!-- docutune:disable --> ## Find tenant ID through the Azure portal---1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Reader](../roles/permissions-reference.md#global-reader). -1. Browse to **Identity** > **Overview** > **Properties**. -+1. Sign in to the [Azure portal](https://portal.azure.com). +1. Browse to **Microsoft Entra ID** > **Properties**. 1. Scroll down to the **Tenant ID** section and you can find your tenant ID in the box. :::image type="content" source="media/how-to-find-tenant/portal-tenant-id.png" alt-text="Microsoft Entra ID - Properties - Tenant ID - Tenant ID field":::+<!-- docutune:enable --> ## Find tenant ID with PowerShell |
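Beyond the portal and PowerShell routes in the row above, a tenant's ID also appears in the `issuer` field of its OpenID Connect metadata document (served at `https://login.microsoftonline.com/<domain>/v2.0/.well-known/openid-configuration`). An offline sketch using a made-up sample response — no network call, and the GUID is illustrative:

```python
import json
import re

# Truncated, illustrative sample of the OpenID Connect metadata document.
sample_response = json.dumps({
    "issuer": "https://login.microsoftonline.com/aaaabbbb-0000-cccc-1111-dddd2222eeee/v2.0"
})

# A GUID is five hyphen-separated hex groups: 8-4-4-4-12.
GUID = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

tenant_id = GUID.search(json.loads(sample_response)["issuer"]).group(0)
print(tenant_id)  # aaaabbbb-0000-cccc-1111-dddd2222eeee
```

This route needs no sign-in, which makes it handy when you only know a tenant's domain name.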
active-directory | New Name | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md | -Microsoft has reamed Azure Active Directory (Azure AD) to Microsoft Entra ID for the following reasons: (1) to communicate the multicloud, multiplatform functionality of the products, (2) to alleviate confusion with Windows Server Active Directory, and (3) to unify the [Microsoft Entra](/entra) product family. +Microsoft has renamed Azure Active Directory (Azure AD) to Microsoft Entra ID for the following reasons: (1) to communicate the multicloud, multiplatform functionality of the products, (2) to alleviate confusion with Windows Server Active Directory, and (3) to unify the [Microsoft Entra](/entra) product family. ## No interruptions to usage or service All features and capabilities are still available in the product. Licensing, ter To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling. -Service plan display names will change on October 1, 2023. Microsoft Entra ID Free, Microsoft Entra ID P1, and Microsoft Entra ID P2 will be the new names of standalone offers, and all capabilities included in the current Azure AD plans remain the same. Microsoft Entra ID – previously known as Azure AD – continues to be included in Microsoft 365 licensing plans, including Microsoft 365 E3 and Microsoft 365 E5. Details on pricing and what's included are available on the [pricing and free trials page](https://aka.ms/PricingEntra). +Service plan display names changed on October 1, 2023. Microsoft Entra ID Free, Microsoft Entra ID P1, and Microsoft Entra ID P2 are the new names of standalone offers, and all capabilities included in the current Azure AD plans remain the same. 
Microsoft Entra ID – previously known as Azure AD – continues to be included in Microsoft 365 licensing plans, including Microsoft 365 E3 and Microsoft 365 E5. Details on pricing and what's included are available on the [pricing and free trials page](https://aka.ms/PricingEntra). :::image type="content" source="./media/new-name/azure-ad-new-name.png" alt-text="Diagram showing the new name for Azure AD and Azure AD External Identities." border="false" lightbox="./media/new-name/azure-ad-new-name-high-res.png"::: You can manage Microsoft Entra ID and all other Microsoft Entra solutions in the ### What are the display names for service plans and SKUs? -Licensing, pricing, and functionality aren't changing. Display names will be updated October 1, 2023 as follows. +Licensing, pricing, and functionality aren't changing. Display names were updated October 1, 2023 as follows. | **Old display name for service plan** | **New display name for service plan** | ||| No. Prices, terms and service level agreements (SLAs) remain the same. Pricing d ### Will Microsoft Entra ID be available as a free service with an Azure subscription? -Customers using Azure AD Free as part of their Azure, Microsoft 365, Dynamics 365, Teams, or Intune subscription continue to have access to the same capabilities. It will be called Microsoft Entra ID Free. Get the free version at <https://www.microsoft.com/security/business/microsoft-entra-pricing>. +Customers using Azure AD Free as part of their Azure, Microsoft 365, Dynamics 365, Teams, or Intune subscription continue to have access to the same capabilities. This is now called Microsoft Entra ID Free. Get the free version at <https://www.microsoft.com/security/business/microsoft-entra-pricing>. ### What's changing for Microsoft 365 or Azure AD for Office 365? 
Only official product names are capitalized, plus Conditional Access and My * ap | | Azure AD cloud-only identities<br/> Azure Active Directory cloud-only identities | Microsoft Entra cloud-only identities | | | Azure AD Connect<br/> Azure Active Directory Connect | Microsoft Entra Connect | | | Azure AD Connect Sync<br/> Azure Active Directory Connect Sync | Microsoft Entra Connect Sync |+| | Azure AD connector<br/> Azure Active Directory connector | Microsoft Entra connector | | | Azure AD domain<br/> Azure Active Directory domain | Microsoft Entra domain | | | Azure AD Domain Services<br/> Azure Active Directory Domain Services | Microsoft Entra Domain Services | | | Azure AD enterprise application<br/> Azure Active Directory enterprise application | Microsoft Entra enterprise application | Only official product names are capitalized, plus Conditional Access and My * ap | | Azure AD identity protection<br/> Azure Active Directory identity protection | Microsoft Entra ID Protection | | | Azure AD integrated authentication<br/> Azure Active Directory integrated authentication | Microsoft Entra integrated authentication | | | Azure AD join<br/> Azure AD joined<br/> Azure Active Directory join<br/> Azure Active Directory joined | Microsoft Entra join<br/> Microsoft Entra joined |+| | Azure AD license<br/> Azure Active Directory license | Microsoft Entra ID license or license for Microsoft Entra ID | | | Azure AD login<br/> Azure Active Directory login | Microsoft Entra login | | | Azure AD managed identities<br/> Azure Active Directory managed identities | Microsoft Entra managed identities | | | Azure AD multifactor authentication (MFA)<br/> Azure Active Directory multifactor authentication (MFA) | Microsoft Entra multifactor authentication (MFA)<br/> (Second use: MFA) | Only official product names are capitalized, plus Conditional Access and My * ap | | Azure AD password authentication<br/> Azure Active Directory password authentication | Microsoft Entra password 
authentication | | | Azure AD password hash synchronization (PHS)<br/> Azure Active Directory password hash synchronization (PHS) | Microsoft Entra password hash synchronization | | | Azure AD password protection<br/> Azure Active Directory password protection | Microsoft Entra password protection |+| | Azure AD Premium<br/> Azure Active Directory Premium | Microsoft Entra ID P1 or P2 | | | Azure AD principal ID<br/> Azure Active Directory principal ID | Microsoft Entra principal ID | | | Azure AD Privileged Identity Management (PIM)<br/> Azure Active Directory Privileged Identity Management (PIM) | Microsoft Entra Privileged Identity Management (PIM) | | | Azure AD registered<br/> Azure Active Directory registered | Microsoft Entra registered | Only official product names are capitalized, plus Conditional Access and My * ap | Date | Change description | ||--|+| October 12, 2023 | <br/>•Updated statement about availability of license plans. <br/>• Added three other terms in the glossary: "Azure AD connector", "Azure AD license", and "Azure AD Premium" | | September 15, 2023 | Added a link to the new article, [How to: Rename Azure AD](how-to-rename-azure-ad.md), updated the description for Azure AD B2C, and added more info about why the name Azure AD is changing. | | August 29, 2023 | <br/>• In the [glossary](#glossary-of-updated-terminology), corrected the entry for "Azure AD activity logs" to separate "Azure AD audit log", which is a distinct type of activity log. <br/>• Added Azure AD Sync and DirSync to the [What names aren't changing](#what-names-arent-changing) section. | | August 18, 2023 | <br/>• Updated the article to include a new section [Glossary of updated terminology](#glossary-of-updated-terminology), which includes the old and new terminology.<br/>• Updated info and added link to usage of the Microsoft Entra ID icon, and updates to verbiage in some sections. | |
active-directory | Security Defaults | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-defaults.md | These basic controls include: ## Enabling security defaults -If your tenant was created on or after October 22, 2019, security defaults may be enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants at creation. +If your tenant was created on or after October 22, 2019, security defaults might be enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants at creation. To help protect organizations, we're always working to improve the security of Microsoft account services. As part of this protection, customers are periodically notified for the automatic enablement of the security defaults if they: After you enable security defaults in your tenant, any user accessing the follow - Azure PowerShell - Azure CLI -This policy applies to all users who are accessing Azure Resource Manager services, whether they're an administrator or a user. This applies to ARM APIs such as accessing your subscription, VMs, storage accounts etc. This does not include Microsoft Entra ID or Microsoft Graph. +This policy applies to all users who are accessing Azure Resource Manager services, whether they're an administrator or a user. This applies to Azure Resource Manager APIs such as accessing your subscription, VMs, storage accounts etc. This doesn't include Microsoft Entra ID or Microsoft Graph. > [!NOTE] > Pre-2017 Exchange Online tenants have modern authentication disabled by default. In order to avoid the possibility of a login loop while authenticating through these tenants, you must [enable modern authentication](/exchange/clients-and-mobile-in-exchange-online/enable-or-disable-modern-authentication-in-exchange-online). 
It's critical to inform users about upcoming changes, registration requirements, ### Authentication methods -Security defaults users are required to register for and use multifactor authentication using the [Microsoft Authenticator app using notifications](../authentication/concept-authentication-authenticator-app.md). Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option. Users can also use any third party application using [OATH TOTP](../authentication/concept-authentication-oath-tokens.md) to generate codes. +Security defaults users are required to register for and use multifactor authentication using the [Microsoft Authenticator app using notifications](../authentication/concept-authentication-authenticator-app.md). Users might use verification codes from the Microsoft Authenticator app but can only register using the notification option. Users can also use any third party application using [OATH TOTP](../authentication/concept-authentication-oath-tokens.md) to generate codes. > [!WARNING] > Do not disable methods for your organization if you are using security defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa). To disable security defaults in your directory: 1. Set **Security defaults** to **Disabled (not recommended)**. 1. Select **Save**. +### Move from security defaults to Conditional Access ++While security defaults are a good baseline to start your security posture from, they don't allow for the customization that many organizations require. Conditional Access policies provide a full range of customization that more complex organizations require. 
++|| Security defaults | Conditional Access | +| | | | +| **Required licenses**| None | At least Microsoft Entra ID P1 | +| **Customization**| No customization (on or off) | Fully customizable | +| **Enabled by**| Microsoft or administrator | Administrator | +| **Complexity**| Simple to use | Fully customizable based on your requirements | ++Recommended steps when moving from security defaults ++Organizations that would like to test out the features of Conditional Access can [sign up for a free trial](get-started-premium.md) to get started. ++After administrators disable security defaults, organizations should immediately enable Conditional Access policies to protect their organization. These policies should include at least those policies in the [secure foundations category of Conditional Access templates](../conditional-access/concept-conditional-access-policy-common.md?tabs=secure-foundation#template-categories). Organizations with Microsoft Entra ID P2 licenses that include Microsoft Entra ID Protection can expand on this list to include [user and sign-in risk-based policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) to further strengthen their posture. ++We recommend that you exclude at least one account from your Conditional Access policies. These excluded **emergency access** or **break-glass** accounts help prevent tenant-wide account lockout. In the unlikely scenario that all administrators are locked out of your tenant, your emergency-access administrative account can be used to sign in to the tenant to take steps to recover access. For more information, see the article [Manage emergency access accounts](../roles/security-emergency-access.md). + ## Next steps - [Blog: Introducing security defaults](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/introducing-security-defaults/ba-p/1061414) |
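The authentication methods section above notes that any third-party application can generate codes using OATH TOTP. TOTP (RFC 6238) is a small HMAC-SHA1 construction over a 30-second time counter; a minimal sketch for illustration (real authenticator secrets are distributed Base32-encoded, which is omitted here):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """OATH TOTP (RFC 6238): HOTP over the current 30-second time step."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 reference secret `b"12345678901234567890"` at Unix time 59, this yields `287082` (six digits) and `94287082` (eight digits), matching the published test vectors.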
active-directory | Cic Intelligent Compensation Control Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cic-intelligent-compensation-control-tutorial.md | + + Title: Microsoft Entra SSO integration with CIC - Controle Inteligente de Compensação +description: Learn how to configure single sign-on between Microsoft Entra ID and CIC - Controle Inteligente de Compensação. ++++++++ Last updated : 10/10/2023+++++# Microsoft Entra SSO integration with CIC - Controle Inteligente de Compensação ++In this tutorial, you'll learn how to integrate CIC - Controle Inteligente de Compensação with Microsoft Entra ID. When you integrate CIC - Controle Inteligente de Compensação with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to CIC - Controle Inteligente de Compensação. +* Enable your users to be automatically signed-in to CIC - Controle Inteligente de Compensação with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with CIC - Controle Inteligente de Compensação, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* CIC - Controle Inteligente de Compensação single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* CIC - Controle Inteligente de Compensação supports **SP** initiated SSO. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Add CIC - Controle Inteligente de Compensação from the gallery ++To configure the integration of CIC - Controle Inteligente de Compensação into Microsoft Entra ID, you need to add CIC - Controle Inteligente de Compensação from the gallery to your list of managed SaaS apps. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **CIC - Controle Inteligente de Compensação** in the search box. +1. Select **CIC - Controle Inteligente de Compensação** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for CIC - Controle Inteligente de Compensação ++Configure and test Microsoft Entra SSO with CIC - Controle Inteligente de Compensação using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in CIC - Controle Inteligente de Compensação. ++To configure and test Microsoft Entra SSO with CIC - Controle Inteligente de Compensação, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. 
**[Configure CIC - Controle Inteligente de Compensação SSO](#configure-ciccontrole-inteligente-de-compensação-sso)** - to configure the single sign-on settings on application side. + 1. **[Create CIC - Controle Inteligente de Compensação test user](#create-ciccontrole-inteligente-de-compensação-test-user)** - to have a counterpart of B.Simon in CIC - Controle Inteligente de Compensação that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **CIC - Controle Inteligente de Compensação** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier (Entity ID)** text box, type the value: + `cic-prod` ++ b. In the **Reply URL** text box, type the URL: + `https://prodgtw.perdcomp.com.br/auth/login/saml/callback` ++ c. In the **Sign on URL** text box, type the URL: + `https://perdcomp.com.br/` ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate") ++1. 
On the **Set up CIC - Controle Inteligente de Compensação** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to CIC - Controle Inteligente de Compensação. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **CIC - Controle Inteligente de Compensação**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. 
If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure CIC - Controle Inteligente de Compensação SSO ++To configure single sign-on on **CIC - Controle Inteligente de Compensação** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Microsoft Entra admin center to [CIC - Controle Inteligente de Compensação support team](mailto:cicsso@perdcomp.com.br). They configure this setting so that the SAML SSO connection is set properly on both sides. ++### Create CIC - Controle Inteligente de Compensação test user ++In this section, you create a user called B.Simon in CIC - Controle Inteligente de Compensação. Work with [CIC - Controle Inteligente de Compensação support team](mailto:cicsso@perdcomp.com.br) to add the users in the CIC - Controle Inteligente de Compensação platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click on **Test this application** in the Microsoft Entra admin center. This redirects to the CIC - Controle Inteligente de Compensação Sign-on URL, where you can initiate the login flow. + +* Go to the CIC - Controle Inteligente de Compensação Sign-on URL directly and initiate the login flow from there. + +* You can use Microsoft My Apps. When you click the CIC - Controle Inteligente de Compensação tile in My Apps, this redirects to the CIC - Controle Inteligente de Compensação Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure CIC - Controle Inteligente de Compensação, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. 
[Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
active-directory | Insider Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insider-tutorial.md | + + Title: Microsoft Entra SSO integration with Insider +description: Learn how to configure single sign-on between Microsoft Entra ID and Insider. ++++++++ Last updated : 10/10/2023+++++# Microsoft Entra SSO integration with Insider ++In this tutorial, you'll learn how to integrate Insider with Microsoft Entra ID. When you integrate Insider with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Insider. +* Enable your users to be automatically signed-in to Insider with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Insider, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Insider single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Insider supports **SP and IDP** initiated SSO. +* Insider supports **Just In Time** user provisioning. ++## Add Insider from the gallery ++To configure the integration of Insider into Microsoft Entra ID, you need to add Insider from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Insider** in the search box. +1. Select **Insider** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. 
++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Insider ++Configure and test Microsoft Entra SSO with Insider using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Insider. ++To configure and test Microsoft Entra SSO with Insider, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Insider SSO](#configure-insider-sso)** - to configure the single sign-on settings on application side. + 1. **[Create Insider test user](#create-insider-test-user)** - to have a counterpart of B.Simon in Insider that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Insider** > **Single sign-on**. +1. 
On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** text box, type a URL using the following pattern: + `https://inone.useinsider.com/sso/<Workplace_ID>/metadata` ++ b. In the **Reply URL** text box, type a URL using the following pattern: + `https://inone.useinsider.com/sso/<Workplace_ID>/acs` ++1. Perform the following step, if you wish to configure the application in **SP** initiated mode: ++ In the **Sign-on URL** text box, type a URL using the following pattern: + `https://inone.useinsider.com/sso/<Workplace_ID>/login` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Insider support team](mailto:bytemasters@useinsider.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate") ++1. On the **Set up Insider** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Insider. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Insider**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Insider SSO ++To configure single sign-on on **Insider** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Microsoft Entra admin center to [Insider support team](mailto:bytemasters@useinsider.com). 
They configure this setting so that the SAML SSO connection is set properly on both sides. ++### Create Insider test user ++In this section, a user called Britta Simon is created in Insider. Insider supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Insider, a new one is created after authentication. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +#### SP initiated: + +* Click on **Test this application** in the Microsoft Entra admin center. This redirects to the Insider Sign-on URL, where you can initiate the login flow. + +* Go to the Insider Sign-on URL directly and initiate the login flow from there. + +#### IDP initiated: + +* Click on **Test this application** in the Microsoft Entra admin center and you should be automatically signed in to the Insider instance for which you set up SSO. + +You can also use Microsoft My Apps to test the application in any mode. When you click the Insider tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Insider instance for which you set up SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure Insider, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
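The Insider identifier, reply, and sign-on URLs in the Basic SAML Configuration above all hang off the same base keyed by `<Workplace_ID>`, so they can be derived together. A small illustrative sketch (the workplace ID used for testing is made up; use the real value obtained from the Insider support team):

```python
def insider_sso_urls(workplace_id: str) -> dict:
    # Builds the Identifier, Reply URL, and Sign-on URL from the
    # patterns shown in the Basic SAML Configuration section.
    base = f"https://inone.useinsider.com/sso/{workplace_id}"
    return {
        "identifier": f"{base}/metadata",
        "reply_url": f"{base}/acs",
        "sign_on_url": f"{base}/login",
    }
```

Deriving all three from one input keeps the values consistent, which matters because a mismatch between the Identifier and Reply URL is a common cause of SAML configuration failures.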
active-directory | Kofax Totalagility Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kofax-totalagility-tutorial.md | + + Title: Microsoft Entra SSO integration with Kofax TotalAgility +description: Learn how to configure single sign-on between Microsoft Entra ID and Kofax TotalAgility. ++++++++ Last updated : 10/05/2023+++++# Microsoft Entra SSO integration with Kofax TotalAgility ++In this tutorial, you'll learn how to integrate Kofax TotalAgility with Microsoft Entra ID. When you integrate Kofax TotalAgility with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Kofax TotalAgility. +* Enable your users to be automatically signed-in to Kofax TotalAgility with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Kofax TotalAgility, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Kofax TotalAgility single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Kofax TotalAgility supports both **SP and IDP** initiated SSO. +* Kofax TotalAgility supports **Just In Time** user provisioning. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Add Kofax TotalAgility from the gallery ++To configure the integration of Kofax TotalAgility into Microsoft Entra ID, you need to add Kofax TotalAgility from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. 
Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Kofax TotalAgility** in the search box. +1. Select **Kofax TotalAgility** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Kofax TotalAgility ++Configure and test Microsoft Entra SSO with Kofax TotalAgility using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Kofax TotalAgility. ++To configure and test Microsoft Entra SSO with Kofax TotalAgility, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Kofax TotalAgility SSO](#configure-kofax-totalagility-sso)** - to configure the single sign-on settings on application side. + 1. **[Create Kofax TotalAgility test user](#create-kofax-totalagility-test-user)** - to have a counterpart of B.Simon in Kofax TotalAgility that is linked to the Microsoft Entra ID representation of user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. 
++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Kofax TotalAgility** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** text box, type the URL: + `https://cloudops.dmoeukta.kofaxcloud.com` ++ b. In the **Reply URL** text box, type a URL using the following pattern: + `https://cloudops.dmoeukta.kofaxcloud.com/FederatedLogin.aspx?Id=<ID>&Protocol=Workspace&Origin=https://cloudops.dmoeukta.kofaxcloud.com/forms/custom/logon.html` ++1. Perform the following step, if you wish to configure the application in **SP** initiated mode: ++ In the **Sign-on URL** text box, type the URL: + `https://cloudops.dmoeukta.kofaxcloud.com/forms/custom/logon.html` ++ > [!NOTE] + > The Reply URL is not real. Update this value with the actual Reply URL. Contact [Kofax TotalAgility support team](mailto:cloud-help@kofax.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. Kofax TotalAgility application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. 
++ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image") ++1. In addition to the above, the Kofax TotalAgility application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are prepopulated, but you can review them according to your requirements. + + | Name | Source Attribute| + | | | + | displayname | user.displayname | ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate") ++1. On the **Set up Kofax TotalAgility** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Kofax TotalAgility. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Kofax TotalAgility**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Kofax TotalAgility SSO ++To configure single sign-on on **Kofax TotalAgility** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Microsoft Entra admin center to [Kofax TotalAgility support team](mailto:cloud-help@kofax.com). They configure this setting so that the SAML SSO connection is set properly on both sides. ++### Create Kofax TotalAgility test user ++In this section, a user called Britta Simon is created in Kofax TotalAgility. Kofax TotalAgility supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Kofax TotalAgility, a new one is created after authentication. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +#### SP initiated: + +* Click on **Test this application** in the Microsoft Entra admin center. This redirects to the Kofax TotalAgility Sign-on URL, where you can initiate the login flow. + +* Go to the Kofax TotalAgility Sign-on URL directly and initiate the login flow from there. + +#### IDP initiated: + +* Click on **Test this application** in the Microsoft Entra admin center and you should be automatically signed in to the Kofax TotalAgility instance for which you set up SSO. + +You can also use Microsoft My Apps to test the application in any mode. When you click the Kofax TotalAgility tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Kofax TotalAgility instance for which you set up SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure Kofax TotalAgility, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
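Because the Kofax TotalAgility configuration above adds a custom `displayname` claim (sourced from `user.displayname`), it can be useful to confirm that a captured SAML assertion actually carries it. A hedged sketch using only the standard library's XML parser; the embedded assertion is a fabricated minimal example, not real Kofax or Microsoft Entra output:

```python
import xml.etree.ElementTree as ET

SAML_ASSERTION_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def attribute_names(assertion_xml: str) -> set:
    # Collect the Name of every <Attribute> element in the assertion.
    root = ET.fromstring(assertion_xml)
    return {
        attr.get("Name")
        for attr in root.iter(f"{{{SAML_ASSERTION_NS}}}Attribute")
    }

# Fabricated minimal assertion, for illustration only.
sample = f"""
<Assertion xmlns="{SAML_ASSERTION_NS}">
  <AttributeStatement>
    <Attribute Name="displayname">
      <AttributeValue>B.Simon</AttributeValue>
    </Attribute>
  </AttributeStatement>
</Assertion>
"""
```

Checking a decoded SAML response this way before contacting the support team can quickly rule out a missing attribute mapping as the cause of a failed sign-in.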
active-directory | Mdcomune Business Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mdcomune-business-tutorial.md | + + Title: Microsoft Entra SSO integration with MDComune Business +description: Learn how to configure single sign-on between Microsoft Entra ID and MDComune Business. ++++++++ Last updated : 10/10/2023+++++# Microsoft Entra SSO integration with MDComune Business ++In this tutorial, you'll learn how to integrate MDComune Business with Microsoft Entra ID. When you integrate MDComune Business with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to MDComune Business. +* Enable your users to be automatically signed-in to MDComune Business with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with MDComune Business, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* MDComune Business single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* MDComune Business supports **IDP** initiated SSO. +* MDComune Business supports **Just In Time** user provisioning. ++## Add MDComune Business from the gallery ++To configure the integration of MDComune Business into Microsoft Entra ID, you need to add MDComune Business from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **MDComune Business** in the search box. +1. 
Select **MDComune Business** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for MDComune Business ++Configure and test Microsoft Entra SSO with MDComune Business using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in MDComune Business. ++To configure and test Microsoft Entra SSO with MDComune Business, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure MDComune Business SSO](#configure-mdcomune-business-sso)** - to configure the single sign-on settings on the application side. + 1. **[Create MDComune Business test user](#create-mdcomune-business-test-user)** - to have a counterpart of B.Simon in MDComune Business that is linked to the Microsoft Entra ID representation of the user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **MDComune Business** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** text box, type one of the following values/patterns: ++ | **Identifier** | + || + | `MDComuneBusiness`| + | `<MDComuneBusiness_ENTITY_ID>`| ++ > [!NOTE] + > `<MDComuneBusiness_ENTITY_ID>` is not real. Update this value with the actual Identifier. ++ b. In the **Reply URL** text box, type the URL: + `https://www.mdcomune.com.br/Madis/Account/SamlLogon` ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate") ++1. On the **Set up MDComune Business** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. 
Select **New user** > **Create new user** at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to MDComune Business. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **MDComune Business**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure MDComune Business SSO ++To configure single sign-on on the **MDComune Business** side, you need to send the downloaded **Certificate (Raw)** and the appropriate copied URLs from the Microsoft Entra admin center to the [MDComune Business support team](mailto:madis@madis.com.br). The support team uses these values to configure the SAML SSO connection properly on both sides. ++### Create MDComune Business test user ++In this section, a user called Britta Simon is created in MDComune Business. 
MDComune Business supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in MDComune Business, a new one is created after authentication. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click on **Test this application** in the Microsoft Entra admin center, and you should be automatically signed in to the MDComune Business instance for which you set up SSO. + +* You can use Microsoft My Apps. When you click the MDComune Business tile in My Apps, you should be automatically signed in to the MDComune Business instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure MDComune Business, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
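Just-in-time provisioning, described above, means the application creates its own user record the first time a valid SAML assertion arrives for an unknown user, which is why no admin action is needed. A minimal sketch of the idea; the attribute names and in-memory store below are hypothetical, not MDComune Business's actual implementation:

```python
# Hypothetical application user store, keyed by the SAML NameID.
users = {}

def jit_provision(name_id: str, attributes: dict) -> dict:
    """Return the existing user, or create one from the SAML assertion (JIT)."""
    if name_id not in users:
        users[name_id] = {
            "name_id": name_id,
            # Fall back to the NameID if no display-name attribute was asserted.
            "display_name": attributes.get("displayname", name_id),
            "provisioned_via": "saml-jit",
        }
    return users[name_id]
```

Because the create happens inside the authentication itself, the second and later logins simply look the user up.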
active-directory | Pressreader Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pressreader-tutorial.md | + + Title: Microsoft Entra SSO integration with PressReader +description: Learn how to configure single sign-on between Microsoft Entra ID and PressReader. ++++++++ Last updated : 10/03/2023+++++# Microsoft Entra SSO integration with PressReader ++In this tutorial, you'll learn how to integrate PressReader with Microsoft Entra ID. When you integrate PressReader with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to PressReader. +* Enable your users to be automatically signed-in to PressReader with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with PressReader, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* PressReader single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* PressReader supports **SP** initiated SSO. +* PressReader supports **Just In Time** user provisioning. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Add PressReader from the gallery ++To configure the integration of PressReader into Microsoft Entra ID, you need to add PressReader from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **PressReader** in the search box. +1. 
Select **PressReader** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for PressReader ++Configure and test Microsoft Entra SSO with PressReader using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in PressReader. ++To configure and test Microsoft Entra SSO with PressReader, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure PressReader SSO](#configure-pressreader-sso)** - to configure the single sign-on settings on the application side. + 1. **[Create PressReader test user](#create-pressreader-test-user)** - to have a counterpart of B.Simon in PressReader that is linked to the Microsoft Entra ID representation of the user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **PressReader** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier (Entity ID)** text box, type the URL: + `https://www.pressreader.com/` ++ b. In the **Reply URL** text box, type the URL: + `https://www.pressreader.com/externalauth/processsamlauthorization/` ++ c. In the **Sign on URL** text box, type a URL using the following pattern: + `https://www.pressreader.com/<INSTANCE>` ++ > [!NOTE] + > The Sign on URL is not real. Update this value with the actual Sign on URL. Contact [PressReader support team](mailto:libraries@pressreader.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate") ++### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. 
Select **New user** > **Create new user** at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to PressReader. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **PressReader**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure PressReader SSO ++To configure single sign-on on the **PressReader** side, you need to send the **App Federation Metadata Url** to the [PressReader support team](mailto:libraries@pressreader.com). The support team uses it to configure the SAML SSO connection properly on both sides. ++### Create PressReader test user ++In this section, a user called Britta Simon is created in PressReader. PressReader supports just-in-time user provisioning, which is enabled by default. 
There is no action item for you in this section. If a user doesn't already exist in PressReader, a new one is created after authentication. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click on **Test this application** in the Microsoft Entra admin center. This redirects to the PressReader Sign-on URL, where you can initiate the login flow. + +* Go to the PressReader Sign-on URL directly and initiate the login flow from there. + +* You can use Microsoft My Apps. When you click the PressReader tile in My Apps, you're redirected to the PressReader Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure PressReader, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
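The **App Federation Metadata Url** sent to the support team resolves to an XML document the service provider can fetch for the tenant's entity ID, signing certificate, and SSO endpoint. A sketch of pulling the two URLs out of a trimmed, illustrative sample; the GUIDs below are placeholders, not a real tenant:

```python
import xml.etree.ElementTree as ET

SAML_MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

# Trimmed, illustrative sample of what a federation metadata URL returns.
SAMPLE_METADATA = f"""<EntityDescriptor xmlns="{SAML_MD_NS}"
    entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

def parse_federation_metadata(xml_text: str) -> dict:
    """Extract the IdP entity ID and SSO endpoint from SAML 2.0 metadata."""
    root = ET.fromstring(xml_text)
    sso = root.find(f".//{{{SAML_MD_NS}}}SingleSignOnService")
    return {"entity_id": root.get("entityID"), "sso_url": sso.get("Location")}
```

Polling the metadata URL instead of exchanging files by mail is what lets the application pick up certificate rollovers automatically.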
active-directory | Smart Map Pro Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smart-map-pro-tutorial.md | + + Title: Microsoft Entra SSO integration with Smart Map Pro +description: Learn how to configure single sign-on between Microsoft Entra ID and Smart Map Pro. ++++++++ Last updated : 10/05/2023+++++# Microsoft Entra SSO integration with Smart Map Pro ++In this tutorial, you'll learn how to integrate Smart Map Pro with Microsoft Entra ID. When you integrate Smart Map Pro with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Smart Map Pro. +* Enable your users to be automatically signed-in to Smart Map Pro with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Smart Map Pro, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Smart Map Pro single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Smart Map Pro supports **IDP** initiated SSO. ++## Add Smart Map Pro from the gallery ++To configure the integration of Smart Map Pro into Microsoft Entra ID, you need to add Smart Map Pro from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Smart Map Pro** in the search box. +1. Select **Smart Map Pro** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. 
++Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Smart Map Pro ++Configure and test Microsoft Entra SSO with Smart Map Pro using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Smart Map Pro. ++To configure and test Microsoft Entra SSO with Smart Map Pro, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Smart Map Pro SSO](#configure-smart-map-pro-sso)** - to configure the single sign-on settings on the application side. + 1. **[Create Smart Map Pro test user](#create-smart-map-pro-test-user)** - to have a counterpart of B.Simon in Smart Map Pro that is linked to the Microsoft Entra ID representation of the user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. 
Browse to **Identity** > **Applications** > **Enterprise applications** > **Smart Map Pro** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** text box, type a URL using the following pattern: + `https://<SUBDOMAIN>.smartmap-pro.com/saml/smartmap/metadata` ++ b. In the **Reply URL** text box, type a URL using the following pattern: + `https://<SUBDOMAIN>.smartmap-pro.com/saml/smartmap/acs` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Smart Map Pro support team](mailto:smartpr@ww-system.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate") ++1. On the **Set up Smart Map Pro** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") ++### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. 
Select **New user** > **Create new user** at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Smart Map Pro. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Smart Map Pro**. +1. In the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Smart Map Pro SSO ++To configure single sign-on on the **Smart Map Pro** side, you need to send the downloaded **Certificate (Raw)** and the appropriate copied URLs from the Microsoft Entra admin center to the [Smart Map Pro support team](mailto:smartpr@ww-system.com). The support team uses these values to configure the SAML SSO connection properly on both sides. ++### Create Smart Map Pro test user ++In this section, you create a user called B.Simon in Smart Map Pro. 
Work with the [Smart Map Pro support team](mailto:smartpr@ww-system.com) to add the users in the Smart Map Pro platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click on **Test this application** in the Microsoft Entra admin center, and you should be automatically signed in to the Smart Map Pro instance for which you set up SSO. + +* You can use Microsoft My Apps. When you click the Smart Map Pro tile in My Apps, you should be automatically signed in to the Smart Map Pro instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure Smart Map Pro, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
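Each tutorial above creates the B.Simon test user by clicking through the admin center, but the same step can be scripted: Microsoft Graph exposes a `POST https://graph.microsoft.com/v1.0/users` call whose body mirrors the **User** properties filled in the portal. A sketch of building that request body (actually sending it would additionally require an access token with `User.ReadWrite.All`; the UPN and password here are examples):

```python
# Microsoft Graph endpoint for creating a user (v1.0).
GRAPH_USERS_ENDPOINT = "https://graph.microsoft.com/v1.0/users"

def build_create_user_body(display_name: str, upn: str, password: str) -> dict:
    """Build the Microsoft Graph request body for creating a test user."""
    return {
        "accountEnabled": True,
        "displayName": display_name,
        # The mail nickname is conventionally the local part of the UPN.
        "mailNickname": upn.split("@")[0],
        "userPrincipalName": upn,
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,
            "password": password,
        },
    }
```

The assignment step (granting the user access to the app) can likewise be automated through the service principal's app role assignments.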
active-directory | Supply Chain Catalyst Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/supply-chain-catalyst-tutorial.md | + + Title: Microsoft Entra SSO integration with Supply Chain Catalyst +description: Learn how to configure single sign-on between Microsoft Entra ID and Supply Chain Catalyst. ++++++++ Last updated : 10/05/2023+++++# Microsoft Entra SSO integration with Supply Chain Catalyst ++In this tutorial, you'll learn how to integrate Supply Chain Catalyst with Microsoft Entra ID. When you integrate Supply Chain Catalyst with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Supply Chain Catalyst. +* Enable your users to be automatically signed-in to Supply Chain Catalyst with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Supply Chain Catalyst, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Supply Chain Catalyst single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Supply Chain Catalyst supports both **SP and IDP** initiated SSO. +* Supply Chain Catalyst supports **Just In Time** user provisioning. ++## Add Supply Chain Catalyst from the gallery ++To configure the integration of Supply Chain Catalyst into Microsoft Entra ID, you need to add Supply Chain Catalyst from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. 
In the **Add from the gallery** section, type **Supply Chain Catalyst** in the search box. +1. Select **Supply Chain Catalyst** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Supply Chain Catalyst ++Configure and test Microsoft Entra SSO with Supply Chain Catalyst using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Supply Chain Catalyst. ++To configure and test Microsoft Entra SSO with Supply Chain Catalyst, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Supply Chain Catalyst SSO](#configure-supply-chain-catalyst-sso)** - to configure the single sign-on settings on the application side. + 1. **[Create Supply Chain Catalyst test user](#create-supply-chain-catalyst-test-user)** - to have a counterpart of B.Simon in Supply Chain Catalyst that is linked to the Microsoft Entra ID representation of the user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. 
++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Supply Chain Catalyst** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** text box, type a URL using the following pattern: + `https://authenticate.bvdep.com/<CUSTOMER_ID>` ++ b. In the **Reply URL** text box, type a URL using the following pattern: + `https://authenticate.bvdep.com/<CUSTOMER_ID>/Shibboleth.sso/SAML2/POST` ++ c. In the **Relay State** text box, type a URL using the following pattern: + `https://authenticate.bvdep.com/<CUSTOMER_ID>` ++1. Perform the following step, if you wish to configure the application in **SP** initiated mode: ++ In the **Sign on URL** text box, type a URL using the following pattern: + `https://login.bvdinfo.com/supplychaincatalyst/sso/<CUSTOMER_ID>` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL, Relay State and Sign on URL. Contact [Supply Chain Catalyst support team](mailto:help@bvdinfo.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. 
On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate") ++### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user** at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Supply Chain Catalyst. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Supply Chain Catalyst**. +1. On the app's overview page, select **Users and groups**. +1. Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. 
If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Supply Chain Catalyst SSO ++To configure single sign-on on the **Supply Chain Catalyst** side, you need to send the **App Federation Metadata Url** to the [Supply Chain Catalyst support team](mailto:help@bvdinfo.com). The support team uses this value to configure the SAML SSO connection properly on both sides. ++### Create Supply Chain Catalyst test user ++In this section, a user called Britta Simon is created in Supply Chain Catalyst. Supply Chain Catalyst supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Supply Chain Catalyst, a new one is created after authentication. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +#### SP initiated: + +* Click on **Test this application** in the Microsoft Entra admin center. This redirects to the Supply Chain Catalyst Sign-on URL where you can initiate the login flow. + +* Go to the Supply Chain Catalyst Sign-on URL directly and initiate the login flow from there. + +#### IDP initiated: + +* Click on **Test this application** in the Microsoft Entra admin center and you should be automatically signed in to the Supply Chain Catalyst instance for which you set up the SSO. + +You can also use Microsoft My Apps to test the application in any mode. When you click the Supply Chain Catalyst tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Supply Chain Catalyst instance for which you set up the SSO. 
For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Next steps ++Once you configure Supply Chain Catalyst, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
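As a sanity check before sending the **App Federation Metadata Url** to the support team, the metadata it serves can be parsed to confirm a signing certificate is present. This is a sketch using only the Python standard library; the sample metadata and certificate value below are made up for illustration, not real values:

```python
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
DS = "http://www.w3.org/2000/09/xmldsig#"

def signing_certificates(metadata_xml: str) -> list:
    """Collect base64 X509 signing certificates from SAML federation metadata."""
    root = ET.fromstring(metadata_xml)
    certs = []
    for kd in root.iter(f"{{{MD}}}KeyDescriptor"):
        # KeyDescriptor without a "use" attribute may serve both signing and encryption.
        if kd.get("use") in (None, "signing"):
            for cert in kd.iter(f"{{{DS}}}X509Certificate"):
                certs.append(cert.text.strip())
    return certs

# Abbreviated, made-up metadata for illustration; a real document comes from the
# App Federation Metadata Url copied earlier.
sample = f"""
<EntityDescriptor xmlns="{MD}" entityID="https://sts.windows.net/example/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="{DS}"><X509Data><X509Certificate>MIIC...EXAMPLE</X509Certificate></X509Data></KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>
"""
```

An empty result from `signing_certificates` would suggest the URL doesn't point at valid identity-provider metadata.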
active-directory | Visual Paradigm Online Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visual-paradigm-online-tutorial.md | + + Title: Microsoft Entra SSO integration with Visual Paradigm Online +description: Learn how to configure single sign-on between Microsoft Entra ID and Visual Paradigm Online. ++++++++ Last updated : 10/10/2023+++++# Microsoft Entra SSO integration with Visual Paradigm Online ++In this tutorial, you'll learn how to integrate Visual Paradigm Online with Microsoft Entra ID. When you integrate Visual Paradigm Online with Microsoft Entra ID, you can: ++* Control in Microsoft Entra ID who has access to Visual Paradigm Online. +* Enable your users to be automatically signed-in to Visual Paradigm Online with their Microsoft Entra accounts. +* Manage your accounts in one central location. ++## Prerequisites ++To integrate Microsoft Entra ID with Visual Paradigm Online, you need: ++* A Microsoft Entra subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Visual Paradigm Online single sign-on (SSO) enabled subscription. ++## Scenario description ++In this tutorial, you configure and test Microsoft Entra SSO in a test environment. ++* Visual Paradigm Online supports **SP** initiated SSO. ++## Add Visual Paradigm Online from the gallery ++To configure the integration of Visual Paradigm Online into Microsoft Entra ID, you need to add Visual Paradigm Online from the gallery to your list of managed SaaS apps. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **New application**. +1. In the **Add from the gallery** section, type **Visual Paradigm Online** in the search box. +1. 
Select **Visual Paradigm Online** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) ++## Configure and test Microsoft Entra SSO for Visual Paradigm Online ++Configure and test Microsoft Entra SSO with Visual Paradigm Online using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between a Microsoft Entra user and the related user in Visual Paradigm Online. ++To configure and test Microsoft Entra SSO with Visual Paradigm Online, perform the following steps: ++1. **[Configure Microsoft Entra SSO](#configure-microsoft-entra-sso)** - to enable your users to use this feature. + 1. **[Create a Microsoft Entra ID test user](#create-a-microsoft-entra-id-test-user)** - to test Microsoft Entra single sign-on with B.Simon. + 1. **[Assign the Microsoft Entra ID test user](#assign-the-microsoft-entra-id-test-user)** - to enable B.Simon to use Microsoft Entra single sign-on. +1. **[Configure Visual Paradigm Online SSO](#configure-visual-paradigm-online-sso)** - to configure the single sign-on settings on the application side. + 1. **[Create Visual Paradigm Online test user](#create-visual-paradigm-online-test-user)** - to have a counterpart of B.Simon in Visual Paradigm Online that is linked to the Microsoft Entra ID representation of the user. +1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ++## Configure Microsoft Entra SSO ++Follow these steps to enable Microsoft Entra SSO in the Microsoft Entra admin center. ++1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Visual Paradigm Online** > **Single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: + `https://online.visual-paradigm.com/w/<Workspace_ID>/saml2` ++ b. In the **Reply URL** text box, type a URL using the following pattern: + `https://online.visual-paradigm.com/w/<Workspace_ID>/saml2/service/` ++ c. In the **Sign on URL** text box, type the URL: + `https://online.visual-paradigm.com/login.jsp` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Visual Paradigm Online support team](mailto:support@visual-paradigm.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Microsoft Entra admin center. ++1. Your Visual Paradigm Online application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example of this. The default value of **Unique User Identifier** is **user.userprincipalname**, but Visual Paradigm Online expects this to be mapped to the user's email address. For that, you can use the **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration. 
++ ![Screenshot shows custom attribute mapping.](common/default-attributes.png "Attribute") ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate") ++1. On the **Set up Visual Paradigm Online** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration URLs.](common/copy-configuration-urls.png "Metadata") + +### Create a Microsoft Entra ID test user ++In this section, you'll create a test user in the Microsoft Entra admin center called B.Simon. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [User Administrator](../roles/permissions-reference.md#user-administrator). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user**, at the top of the screen. +1. In the **User** properties, follow these steps: + 1. In the **Display name** field, enter `B.Simon`. + 1. In the **User principal name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`. + 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box. + 1. Select **Review + create**. +1. Select **Create**. ++### Assign the Microsoft Entra ID test user ++In this section, you'll enable B.Simon to use Microsoft Entra single sign-on by granting access to Visual Paradigm Online. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Visual Paradigm Online**. +1. In the app's overview page, select **Users and groups**. +1. 
Select **Add user/group**, then select **Users and groups** in the **Add Assignment** dialog. + 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. + 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected. + 1. In the **Add Assignment** dialog, click the **Assign** button. ++## Configure Visual Paradigm Online SSO ++To configure single sign-on on the **Visual Paradigm Online** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Microsoft Entra admin center to the [Visual Paradigm Online support team](mailto:support@visual-paradigm.com). The support team uses these values to configure the SAML SSO connection properly on both sides. ++### Create Visual Paradigm Online test user ++In this section, you create a user called B.Simon in Visual Paradigm Online. Work with the [Visual Paradigm Online support team](mailto:support@visual-paradigm.com) to add the users in the Visual Paradigm Online platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Microsoft Entra single sign-on configuration with the following options. + +* Click on **Test this application** in the Microsoft Entra admin center. This redirects to the Visual Paradigm Online Sign-on URL where you can initiate the login flow. + +* Go to the Visual Paradigm Online Sign-on URL directly and initiate the login flow from there. + +* You can use Microsoft My Apps. When you click the Visual Paradigm Online tile in My Apps, you're redirected to the Visual Paradigm Online Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md). 
++## Next steps ++Once you configure Visual Paradigm Online, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app). |
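The **Unique User Identifier** mapping described earlier (preferring **user.mail** over the default **user.userprincipalname**) can be mirrored in a small helper when checking which value would end up as the SAML NameID. This helper is hypothetical — it is not part of any Microsoft SDK — and the user records shown are made up:

```python
def unique_user_identifier(user: dict) -> str:
    """Hypothetical helper mirroring the claim mapping: prefer user.mail,
    fall back to user.userPrincipalName when no mail attribute is set."""
    return user.get("mail") or user["userPrincipalName"]

# A guest-style UPN would be a poor NameID; the mail attribute is preferred.
nameid = unique_user_identifier({
    "userPrincipalName": "b.simon_contoso.com#EXT#@contoso.onmicrosoft.com",
    "mail": "B.Simon@contoso.com",
})
```

When a user has no `mail` attribute, the fallback UPN is what the application would receive — worth verifying before going live.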
active-directory | Nist Authenticator Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-types.md | The authentication process begins when a claimant asserts its control of one of |Look-up secret <br> (something you have)| None| |Single-factor out-of-band <br>(something you have)| Microsoft Authenticator App (Push Notification) <br> Phone (SMS): Not recommended | Multi-factor Out-of-band <br> (something you have + something you know/are) | Microsoft Authenticator App (Passwordless) |-|Single-factor one-time password (OTP) <br> (something you have)| Microsoft Authenticator App (OTP) <br> Single-factor Hardware/Software OTP<sup data-htmlnode="">1</sup>| +|Single-factor one-time password (OTP) <br> (something you have)| Microsoft Authenticator App (OTP) <br> Single-factor Hardware/Software OTP<sup>1</sup>| |Multi-factor OTP <br> (something you have + something you know/are)| Treated as single-factor OTP| -|Single-factor crypto software <br> (something you have)|Single-factor software certificate <br> Microsoft Entra joined <sup data-htmlnode="">2</sup> with software TPM <br> Microsoft Entra hybrid joined <sup data-htmlnode="">2</sup> with software TPM <br> Compliant mobile device | -|Single-factor crypto hardware <br> (something you have) | Microsoft Entra joined <sup data-htmlnode="">2</sup> with hardware TPM <br> Microsoft Entra hybrid joined <sup data-htmlnode="">2</sup> with hardware TPM| +|Single-factor crypto software <br> (something you have)|Single-factor software certificate <br> Microsoft Entra joined <sup>2</sup> with software TPM <br> Microsoft Entra hybrid joined <sup>2</sup> with software TPM <br> Compliant mobile device | +|Single-factor crypto hardware <br> (something you have) | Microsoft Entra joined <sup>2</sup> with hardware TPM <br> Microsoft Entra hybrid joined <sup>2</sup> with hardware TPM| |Multi-factor crypto software <br> (something you have + something you know/are) | 
Multi-factor Software Certificate (PIN Protected) <br> Windows Hello for Business with software TPM | |Multi-factor crypto hardware <br> (something you have + something you know/are) |Hardware protected certificate (smartcard/security key/TPM) <br> Windows Hello for Business with hardware TPM <br> FIDO 2 security key| -<sup data-htmlnode="">1</sup> 30-second or 60-second OATH-TOTP SHA-1 token +<sup>1</sup> 30-second or 60-second OATH-TOTP SHA-1 token -<sup data-htmlnode="">2</sup> For more information on device join states, see [Microsoft Entra device identity](../devices/index.yml) +<sup>2</sup> For more information on device join states, see [Microsoft Entra device identity](../devices/index.yml) ## Public Switched Telephone Network (PSTN) SMS/Voice are not recommended |
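Footnote 1 above refers to 30-second or 60-second OATH-TOTP SHA-1 tokens. For reference, the code generation these tokens implement is standardized in RFC 6238; a minimal sketch follows, using the RFC's published test secret (not a real key):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp_sha1(secret_b32, period=30, digits=6, at=None):
    """Generate an RFC 6238 TOTP code with HMAC-SHA-1 (the OATH-TOTP default)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test secret "12345678901234567890", base32-encoded; not a real key.
code = totp_sha1("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59)  # "287082" per the RFC test vector
```

A 60-second token is the same algorithm with `period=60`; the verifier and token only have to agree on the period and shared secret.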
active-directory | Partner Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-gallery.md | Our IDV partner network extends Microsoft Entra Verified ID's capabilities to he | IDV partner | Description | Integration walkthroughs | |:-|:--|:--|-|![Screenshot of au10tix logo.](media/partner-gallery/au10tix.png) | [AU10TIX](https://www.au10tix.com/solutions/microsoft-azure-active-directory-verifiable-credentials-program) improves Verifiability While Protecting Privacy For Businesses, Employees, Contractors, Vendors, And Customers. | [Configure Verified ID by AU10TIX as your Identity Verification Partner](https://aka.ms/au10tixvc). | +|![Screenshot of au10tix logo.](media/partner-gallery/au10tix.png) | [AU10TIX](https://www.au10tix.com/solutions/verifiable-credentials/) improves Verifiability While Protecting Privacy For Businesses, Employees, Contractors, Vendors, And Customers. | [Configure Verified ID by AU10TIX as your Identity Verification Partner](https://aka.ms/au10tixvc). | | ![Screenshot of a LexisNexis logo.](media/partner-gallery/lexisnexis.png) | [LexisNexis](https://solutions.risk.lexisnexis.com/did-microsoft) risk solutions Verifiable credentials enables faster onboarding for employees, students, citizens, or others to access services. | [Configure Verified ID by LexisNexis Risk Solutions as your Identity Verification Partner](https://aka.ms/lexisnexisvc). | | ![Screenshot of a Vu logo.](medi) | | ![Screenshot of a Onfido logo.](media/partner-gallery/onfido.jpeg) | Start issuing and accepting verifiable credentials in minutes. With verifiable credentials and Onfido you can verify a person's identity while respecting privacy. 
Digitally validate information on a person's ID or their biometrics.| * | | ![Screenshot of a Jumio logo.](media/partner-gallery/jumio.jpeg) | [Jumio](https://www.jumio.com/microsoft-verifiable-credentials/) is helping to support a new form of digital identity by Microsoft based on verifiable credentials and decentralized identifiers standards to let consumers verify once and use everywhere.| * |-| ![Screenshot of a Idemia logo.](media/partner-gallery/idemia.png) | [Idemia](https://na.idemia.com/identity/verifiable-credentials/) Integration with Verified ID enables "Verify once, use everywhere" functionality.| * | +| ![Screenshot of a Idemia logo.](medi) | | ![Screenshot of a Acuant logo.](media/partner-gallery/acuant.png) | [Acuant](https://www.acuant.com/microsoft-acuant-verifiable-credentials-my-digital-id/) - My Digital ID - Create Your Digital Identity Once, Use It Everywhere.| * | | ![Screenshot of a Clear logo.](media/partner-gallery/clear.jpeg) | [Clear](https://ir.clearme.com/news-events/press-releases/detail/25/clear-collaborates-with-microsoft-to-create-more-secure) Collaborates with Microsoft to Create More Secure Digital Experience Through Verification Credential.| * | Our IDV partner network extends Microsoft Entra Verified ID's capabilities to he ## Next steps Select a partner in the tables mentioned to learn how to integrate their solution with your application.+++ |
advisor | Advisor Reference Reliability Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md | We have identified that you're using Premium SSD Unmanaged Disks in Storage acco Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](https://aka.ms/premium_blob_quota). +### Use Azure Disks with Zone Redundant Storage for higher resiliency and availability ++Azure Disks with ZRS provide synchronous replication of data across three Availability Zones in a region, making the disk tolerant to zonal failures without disruptions to applications. Migrate disks from LRS to ZRS for higher resiliency and availability. ++Learn more about [Changing Disk type of an Azure managed disk](https://aka.ms/migratedisksfromLRStoZRS). + ### Use Managed Disks to improve data reliability Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units aren't resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure. |
ai-services | Image Retrieval | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/image-retrieval.md | curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage? To vectorize a local image, you'd put the binary image data in the HTTP request body. -The API call returns an **vector** JSON object, which defines the image's coordinates in the high-dimensional vector space. +The API call returns a **vector** JSON object, which defines the image's coordinates in the high-dimensional vector space. ```json { curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeText?a }" ``` -The API call returns an **vector** JSON object, which defines the text string's coordinates in the high-dimensional vector space. +The API call returns a **vector** JSON object, which defines the text string's coordinates in the high-dimensional vector space. ```json { |
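The `vectorizeImage` and `vectorizeText` calls above each return an embedding vector; to rank images against a text query, the two vectors are typically compared with cosine similarity. A minimal sketch, independent of the service, using toy two-dimensional vectors (real embeddings are much higher-dimensional):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors returned by the vectorize APIs."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy vectors for illustration: orthogonal vectors score 0, parallel vectors score 1.
score = cosine_similarity([1.0, 2.0], [2.0, 4.0])
```

Higher scores mean the image and text are closer in the shared vector space, so sorting candidate images by this score yields the retrieval ranking.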
ai-services | Model Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/model-lifecycle.md | Language service features utilize AI models. We update the language service with ## Prebuilt features -Our standard (not customized) language service features are built on AI models that we call pre-trained models. +Our standard (not customized) language service features are built on AI models that we call pre-trained or prebuilt models. We regularly update the language service with new model versions to improve model accuracy, support, and quality. By default, all API requests will use the latest Generally Available (GA) model. #### Choose the model-version used on your data -We strongly recommend using the `latest` model version to utilize the latest and highest quality models. As our models improve, it's possible that some of your model results may change. Model versions may be deprecated, so don't recommend including specified versions in your implementation. +We recommend using the `latest` model version to utilize the latest and highest quality models. As our models improve, it's possible that some of your model results may change. Model versions may be deprecated, so we no longer accept specified GA model versions in your implementation. Preview models used for preview features do not maintain a minimum retirement period and may be deprecated at any time. By default, API and SDK requests will use the latest Generally Available model. 
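The model version is pinned in the request payload. The sketch below follows the shape of the Language analyze-text REST API's request body; treat the field names as illustrative if your API version differs, and note the request is only constructed here, never sent:

```python
def analyze_text_request(text, kind="SentimentAnalysis", model_version="latest"):
    """Sketch of an analyze-text request body that sets the model-version parameter."""
    return {
        "kind": kind,
        "parameters": {"modelVersion": model_version},
        "analysisInput": {
            "documents": [{"id": "1", "language": "en", "text": text}],
        },
    }

body = analyze_text_request("The rooms were beautiful.")
```

Leaving `model_version` at `"latest"` keeps the request on the newest GA model, which is the behavior this section recommends.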
Use the table below to find which model versions are supported by each feature: -| Feature | Supported versions | -|--|--| -| Sentiment Analysis and opinion mining | `2021-10-01`, `2022-06-01`,`2022-10-01`,`2022-11-01*` | -| Language Detection | `2021-11-20`, `2022-10-01*` | -| Entity Linking | `2021-06-01*` | -| Named Entity Recognition (NER) | `2021-06-01*`, `2022-10-01-preview`, `2023-02-01-preview`, `2023-04-15-preview**`| -| Personally Identifiable Information (PII) detection | `2021-01-15*`, `2023-01-01-preview`, `2023-04-15-preview**` | -| PII detection for conversations (Preview) | `2022-05-15-preview`, `2023-04-15-preview**` | -| Question answering | `2021-10-01*` | -| Text Analytics for health | `2021-05-15`, `2022-03-01*`, `2022-08-15-preview`, `2023-01-01-preview**`| -| Key phrase extraction | `2021-06-01`, `2022-07-01`,`2022-10-01*` | -| Document summarization - extractive only (preview) | `2022-08-31-preview**` | +| Feature | Supported generally available (GA) version | Supported preview versions | +|--||| +| Sentiment Analysis and opinion mining | `latest*` | | +| Language Detection | `latest*` | | +| Entity Linking | `latest*` | | +| Named Entity Recognition (NER) | `latest*` | `2023-04-15-preview**` | +| Personally Identifiable Information (PII) detection | `latest*` | `2023-04-15-preview**` | +| PII detection for conversations (Preview) | `latest*` | `2023-04-15-preview**` | +| Question answering | `latest*` | | +| Text Analytics for health | `latest*` | `2022-08-15-preview`, `2023-01-01-preview**`| +| Key phrase extraction | `latest*` | | +| Document summarization - extractive only (preview) | |`2022-08-31-preview**` | -\* Latest Generally Available (GA) model version +\* Latest Generally Available (GA) model version \*\* Latest preview version Use the table below to find which model versions are supported by each feature: | Feature | Supported Training Config Versions | Training Config Expiration | Deployment Expiration | ||--|||-| 
Conversational language understanding | `2022-05-01` | October 28, 2022 | October 28, 2023 | +| Conversational language understanding | `2022-05-01` | October 28, 2022 (expired) | October 28, 2023 | | Conversational language understanding | `2022-09-01` (latest)** | February 28, 2024 | February 27, 2025 | | Orchestration workflow | `2022-09-01` (latest)** | April 30, 2024 | April 30, 2025 | | Custom named entity recognition | `2022-05-01` (latest)** | April 30, 2024 | April 30, 2025 | |
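The expiration dates in the table above lend themselves to a simple programmatic check. A sketch with the two conversational language understanding rows hard-coded from the table; the helper itself is hypothetical, not part of any SDK:

```python
from datetime import date

# Expiration dates copied from the training configuration table above.
EXPIRY = {
    ("conversational-language-understanding", "2022-05-01"): {
        "training": date(2022, 10, 28), "deployment": date(2023, 10, 28)},
    ("conversational-language-understanding", "2022-09-01"): {
        "training": date(2024, 2, 28), "deployment": date(2025, 2, 27)},
}

def is_expired(feature, version, stage, today):
    """True once the given training-config stage has passed its expiration date."""
    return today > EXPIRY[(feature, version)][stage]
```

For example, the `2022-05-01` training config was already expired for training at the start of 2023, while `2022-09-01` was not.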
ai-services | Plan Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/plan-manage-costs.md | With Pay-As-You-Go pricing, you are billed according to the Azure AI services of | Service | Instance(s) | Billing information | ||-||-| **Vision** | | | -| [Azure AI Vision](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) | Free, Standard (S1) | Billed by the number of transactions. Price per transaction varies per feature (Read, OCR, Spatial Analysis). For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/). | +| [Anomaly Detector](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/) | Free, Standard | Billed by the number of transactions. | +| [Content Moderator](https://azure.microsoft.com/pricing/details/cognitive-services/content-moderator/) | Free, Standard | Billed by the number of transactions. | | [Custom Vision](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) | Free, Standard | <li>Predictions are billed by the number of transactions.</li><li>Training is billed by compute hour(s).</li><li>Image storage is billed by number of images (up to 6 MB per image).</li>| | [Face](https://azure.microsoft.com/pricing/details/cognitive-services/face-api/) | Free, Standard | Billed by the number of transactions. |-| **Speech** | | | -| [Speech service](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) | Free, Standard | Billing varies by feature (speech-to-text, text to speech, speech translation, speaker recognition). Primarily, billing is by transaction count or character count. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). 
| -| **Language** | | | +| [Language](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/) | Free, Standard | Billed by number of text records. | | [Language Understanding (LUIS)](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/) | Free Authoring, Free Prediction, Standard | Billed by number of transactions. Price per transaction varies by feature (speech requests, text requests). For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/). |-| [QnA Maker](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/) | Free, Standard | Subscription fee billed monthly. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/). | -| [Language service](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/) | Free, Standard | Billed by number of text records. | -| [Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/) | Free, Pay-as-you-go (S1), Volume discount (S2, S3, S4, C2, C3, C4, D3) | Pricing varies by meter and feature. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/). <li>Text translation is billed by number of characters translated.</li><li>Document translation is billed by characters translated.</li><li>Custom translation is billed by characters of source and target training data.</li> | -| **Decision** | | | -| [Anomaly Detector](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/) | Free, Standard | Billed by the number of transactions. | -| [Content Moderator](https://azure.microsoft.com/pricing/details/cognitive-services/content-moderator/) | Free, Standard | Billed by the number of transactions. 
| | [Personalizer](https://azure.microsoft.com/pricing/details/cognitive-services/personalizer/) | Free, Standard (S0) | Billed by transactions per month. There are storage and transaction quotas. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/personalizer/). | +| [QnA Maker](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/) | Free, Standard | Subscription fee billed monthly. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/). | +| [Speech](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) | Free, Standard | Billing varies by feature (speech-to-text, text to speech, speech translation, speaker recognition). Primarily, billing is by transaction count or character count. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | +| [Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/) | Free, Pay-as-you-go (S1), Volume discount (S2, S3, S4, C2, C3, C4, D3) | Pricing varies by meter and feature. For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/). <li>Text translation is billed by number of characters translated.</li><li>Document translation is billed by characters translated.</li><li>Custom translation is billed by characters of source and target training data.</li> | +| [Vision](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) | Free, Standard (S1) | Billed by the number of transactions. Price per transaction varies per feature (Read, OCR, Spatial Analysis). For full details, see [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/). | ## Commitment tier With commitment tier pricing, you are billed according to the plan you choose. 
S > [!NOTE] > If you use the resource above the quota provided by the commitment plan, you will be charged for the additional usage as per the overage amount mentioned in the Azure portal when you purchase a commitment plan. For more information, see [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/). --- ### Costs that typically accrue with Azure AI services Typically, after you deploy an Azure resource, costs are determined by your pricing tier and the API calls you make to your endpoint. If the service you're using has a commitment tier, going over the allotted calls in your tier may incur an overage charge. You can pay for Azure AI services charges with your Azure Prepayment (previously ## Monitor costs -<!-- Note to Azure service writer: Modify the following as needed for your service. Replace example screenshots with ones taken for your service. If you need assistance capturing screenshots, ask banders for help. --> - As you use Azure resources with Azure AI services, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as use of an Azure AI services resource starts, costs are incurred and you can see the costs in [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). When you use cost analysis, you view Azure AI services costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded. 
When you use cost analysis, you view Azure AI services costs in graphs and table To view Azure AI services costs in cost analysis: 1. Sign in to the Azure portal.-2. Open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. Select **Scope** to switch to a different scope in cost analysis. -3. By default, cost for services are shown in the first donut chart. Select the area in the chart labeled Azure AI services. +1. Open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. Select **Scope** to switch to a different scope in cost analysis. +1. By default, cost for services are shown in the first donut chart. Select the area in the chart labeled Azure AI services. Actual monthly costs are shown when you initially open cost analysis. Here's an example showing all monthly usage costs. |
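The commitment-tier overage charge described above reduces to simple arithmetic: a fixed monthly plan price plus a per-unit rate for usage beyond the included quota. A minimal sketch in integer cents; the plan price, quota, and per-call rate are hypothetical placeholders, not published Azure prices:

```python
def monthly_charge_cents(plan_cents: int, included_calls: int,
                         overage_cents_per_call: int, calls_made: int) -> int:
    """Commitment plan bill: fixed plan price plus per-call overage.

    Uses integer cents to avoid floating-point rounding in a billing sketch.
    """
    overage_calls = max(0, calls_made - included_calls)
    return plan_cents + overage_calls * overage_cents_per_call

# Hypothetical plan: $65.00/month for 1,000 calls, 2 cents per extra call.
print(monthly_charge_cents(6500, 1000, 2, 1250))  # 7000 -> $70.00
```

Staying at or under the included quota yields exactly the plan price, which matches the note above: overage is billed only for usage beyond the commitment.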
ai-services | Translator How To Install Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-how-to-install-container.md | In this article, you learned concepts and workflows for downloading, installing, ## Next steps > [!div class="nextstepaction"]-> [Learn more about Azure AI containers](../../containers/index.yml?context=%2fazure%2fcognitive-services%2ftranslator%2fcontext%2fcontext) +> [Learn more about Azure AI containers](../../cognitive-services-container-support.md?context=%2fazure%2fcognitive-services%2ftranslator%2fcontext%2fcontext) |
ai-services | Translator Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/translator-overview.md | Title: What is the Microsoft Azure AI Translator? + Title: What is Azure AI Translator? description: Integrate Translator into your applications, websites, tools, and other solutions to provide multi-language user experiences.- Previously updated : 07/18/2023 Last updated : 10/12/2023 |
ai-services | What Are Ai Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/what-are-ai-services.md | Select a service from the table below and learn how it can help you meet your de | Service | Description | | | |-| ![Anomaly Detector icon](media/service-icons/anomaly-detector.svg) [Anomaly Detector](./Anomaly-Detector/index.yml)(retired) | Identify potential problems early on | +| ![Anomaly Detector icon](media/service-icons/anomaly-detector.svg) [Anomaly Detector](./Anomaly-Detector/index.yml) (retired) | Identify potential problems early on | | ![Azure Cognitive Search icon](media/service-icons/cognitive-search.svg) [Azure Cognitive Search](../search/index.yml) | Bring AI-powered cloud search to your mobile and web apps | | ![Azure OpenAI Service icon](media/service-icons/azure.svg) [Azure OpenAI](./openai/index.yml) | Perform a wide variety of natural language tasks | | ![Bot service icon](media/service-icons/bot-services.svg) [Bot Service](/composer/) | Create bots and connect them across channels | Select a service from the table below and learn how it can help you meet your de | ![Immersive Reader icon](media/service-icons/immersive-reader.svg) [Immersive Reader](./immersive-reader/index.yml) | Help users read and comprehend text | | ![Language icon](media/service-icons/language.svg) [Language](./language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities | | ![Language Understanding icon](media/service-icons/luis.svg) [Language understanding](./luis/index.yml) (retired) | Understand natural language in your apps |-| ![Metrics Advisor icon](media/service-icons/metrics-advisor.svg) [Metrics Advisor](./metrics-advisor/index.yml)(retired) | An AI service that detects unwanted contents | -| ![Personalizer icon](media/service-icons/personalizer.svg) [Personalizer](./personalizer/index.yml)(retired) | Create rich, personalized experiences for each user | +| ![Metrics Advisor 
icon](media/service-icons/metrics-advisor.svg) [Metrics Advisor](./metrics-advisor/index.yml) (retired) | An AI service that detects unwanted contents | +| ![Personalizer icon](media/service-icons/personalizer.svg) [Personalizer](./personalizer/index.yml) (retired) | Create rich, personalized experiences for each user | | ![QnA Maker icon](media/service-icons/luis.svg) [QnA maker](./qnamaker/index.yml) (retired) | Distill information into easy-to-navigate questions and answers | | ![Speech icon](media/service-icons/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation and speaker recognition | | ![Translator icon](media/service-icons/translator.svg) [Translator](./translator/index.yml) | Translate more than 100 languages and dialects | |
aks | Configure Kubenet Dual Stack | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md | AKS configures the required supporting services for dual-stack networking. This * Outbound rules for both IPv4 and IPv6 traffic. * Load balancer setup for IPv4 and IPv6 services. +> [!NOTE] +> When using Dualstack with an [outbound type][outbound-type] of user-defined routing, you can choose to have a default route for IPv6 depending on if you need your IPv6 traffic to reach the internet or not. If you don't have a default route for IPv6, a warning will surface when creating a cluster but will not prevent cluster creation. + ## Deploying a dual-stack cluster The following attributes are provided to support dual-stack clusters: Once the cluster has been created, you can deploy your workloads. This article w [kubernetes-dual-stack]: https://kubernetes.io/docs/concepts/services-networking/dual-stack/ <!-- LINKS - Internal -->+[outbound-type]: ./egress-outboundtype.md [deploy-arm-template]: ../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md [deploy-bicep-template]: ../azure-resource-manager/bicep/deploy-cli.md [kubenet]: ./configure-kubenet.md |
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | For the past release history, see [Kubernetes history](https://github.com/kubern | K8s version | Upstream release | AKS preview | AKS GA | End of life | Platform support | |--|-|--||-|--| | 1.24 | Apr 2022 | May 2022 | Jul 2022 | Jul 2023 | Until 1.28 GA |-| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023 | Until 1.29 GA | +| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Jan 2, 2024 | Until 1.29 GA | | 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA | | 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA | | 1.28 | Aug 2023 | Sep 2023 | Oct 2023 || Until 1.32 GA| |
api-management | Mitigate Owasp Api Threats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md | More information about this threat: [API10:2019 Insufficient logging and monito * Monitor API traffic with [Azure Monitor](api-management-howto-use-azure-monitor.md). -* Log to [Application Insights](api-management-howto-app-insights.md) for debugging purposes. Correlate [transactions in Application Insights](../azure-monitor/app/transaction-diagnostics.md) between API Management and the backend API to [trace them end-to-end](../azure-monitor/app/correlation.md). +* Log to [Application Insights](api-management-howto-app-insights.md) for debugging purposes. Correlate [transactions in Application Insights](../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) between API Management and the backend API to [trace them end-to-end](../azure-monitor/app/correlation.md). * If needed, forward custom events to [Event Hubs](api-management-howto-log-event-hubs.md). |
app-service | Configure Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md | description: Learn how to configure a custom container in Azure App Service. Thi Previously updated : 09/14/2023 Last updated : 10/12/2023 zone_pivot_groups: app-service-containers-windows-linux az resource update --resource-group <group-name> --name <app-name> --resource-ty ## I don't see the updated container -If you change your Docker container settings to point to a new container, it may take a few minutes before the app serves HTTP requests from the new container. While the new container is being pulled and started, App Service continues to serve requests from the old container. Only when the new container is started and ready to receive requests does App Service start sending requests to it. +If you change your Docker container settings to point to a new container, it might take a few minutes before the app serves HTTP requests from the new container. While the new container is being pulled and started, App Service continues to serve requests from the old container. Only when the new container is started and ready to receive requests does App Service start sending requests to it. ## How container images are stored The first time you run a custom Docker image in App Service, App Service does a `docker pull` and pulls all image layers. These layers are stored on disk, like if you were using Docker on-premises. Each time the app restarts, App Service does a `docker pull`, but only pulls layers that have changed. If there have been no changes, App Service uses existing layers on the local disk. -If the app changes compute instances for any reason, such as scaling up and down the pricing tiers, App Service must pull down all layers again. The same is true if you scale out to add additional instances. There are also rare cases where the app instances may change without a scale operation. 
+If the app changes compute instances for any reason, such as scaling up and down the pricing tiers, App Service must pull down all layers again. The same is true if you scale out to add additional instances. There are also rare cases where the app instances might change without a scale operation. ## Configure port number App Service currently allows your container to expose only one port for HTTP req ## Configure environment variables -Your custom container may use environment variables that need to be supplied externally. You can pass them in via the [Cloud Shell](https://shell.azure.com). In Bash: +Your custom container might use environment variables that need to be supplied externally. You can pass them in via the [Cloud Shell](https://shell.azure.com). In Bash: ```azurecli-interactive az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings DB_HOST="myownserver.mysql.database.azure.com" The front ends are located inside Azure data centers. If you use TLS/SSL with yo During the container start, automatically generated keys are injected into the container as the machine keys for ASP.NET cryptographic routines. You can [find these keys in your container](#connect-to-the-container) by looking for the following environment variables: `MACHINEKEY_Decryption`, `MACHINEKEY_DecryptionKey`, `MACHINEKEY_ValidationKey`, `MACHINEKEY_Validation`. -The new keys at each restart may reset ASP.NET forms authentication and view state, if your app depends on them. To prevent the automatic regeneration of keys, [set them manually as App Service app settings](#configure-environment-variables). +The new keys at each restart might reset ASP.NET forms authentication and view state, if your app depends on them. To prevent the automatic regeneration of keys, [set them manually as App Service app settings](#configure-environment-variables). 
## Connect to the container -You can connect to your Windows container directly for diagnostic tasks by navigating to `https://<app-name>.scm.azurewebsites.net/DebugConsole`. Here's how it works: +You can connect to your Windows container directly for diagnostic tasks by navigating to `https://<app-name>.scm.azurewebsites.net/` and choosing the SSH option. A direct SSH session with your container is established in which you can run commands inside your container -- The debug console lets you execute interactive commands, such as starting PowerShell sessions, inspecting registry keys, and navigate the entire container file system. - It functions separately from the graphical browser above it, which only shows the files in your [shared storage](#use-persistent-shared-storage).-- In a scaled-out app, the debug console is connected to one of the container instances. You can select a different instance from the **Instance** dropdown in the top menu.-- Any change you make to the container from within the console does *not* persist when your app is restarted (except for changes in the shared storage), because it's not part of the Docker image. To persist your changes, such as registry settings and software installation, make them part of the Dockerfile.+- In a scaled-out app, the SSH session is connected to one of the container instances. You can select a different instance from the **Instance** dropdown in the top Kudu menu. +- Any change you make to the container from within the SSH session does *not* persist when your app is restarted (except for changes in the shared storage), because it's not part of the Docker image. To persist your changes, such as registry settings and software installation, make them part of the Dockerfile. ## Access diagnostic logs Navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and click th In the console terminal, you can't access the `C:\home\LogFiles` folder by default because persistent shared storage is not enabled. 
To enable this behavior in the console terminal, [enable persistent shared storage](#use-persistent-shared-storage). -If you try to download the Docker log that is currently in use using an FTP client, you may get an error because of a file lock. +If you try to download the Docker log that is currently in use by using an FTP client, you might get an error because of a file lock. ### With the Kudu API -Navigate directly to `https://<app-name>.scm.azurewebsites.net/api/logs/docker` to see metadata for the Docker logs. You may see more than one log file listed, and the `href` property lets you download the log file directly. +Navigate directly to `https://<app-name>.scm.azurewebsites.net/api/logs/docker` to see metadata for the Docker logs. You might see more than one log file listed, and the `href` property lets you download the log file directly. To download all the logs together in one ZIP file, access `https://<app-name>.scm.azurewebsites.net/api/logs/docker/zip`. The value is defined in MB and must be less than or equal to the total physical memory. ## Customize the number of compute cores -By default, a Windows container runs with all available cores for your chosen pricing tier. You may want to reduce the number of cores that your staging slot uses, for example. To reduce the number of cores used by a container, set the `WEBSITE_CPU_CORES_LIMIT` app setting to the preferred number of cores. You can set it via the [Cloud Shell](https://shell.azure.com). In Bash: +By default, a Windows container runs with all available cores for your chosen pricing tier. You might want to reduce the number of cores that your staging slot uses, for example. To reduce the number of cores used by a container, set the `WEBSITE_CPU_CORES_LIMIT` app setting to the preferred number of cores. You can set it via the [Cloud Shell](https://shell.azure.com). 
In Bash: ```azurecli-interactive az webapp config appsettings set --resource-group <group-name> --name <app-name> --slot staging --settings WEBSITE_CPU_CORES_LIMIT=1 Get-ComputerInfo | ft CsNumberOfLogicalProcessors # Total number of enabled logi Get-ComputerInfo | ft CsNumberOfProcessors # Number of physical processors. ``` -The processors may be multicore or hyperthreading processors. Information on how many cores are available for each pricing tier can be found in [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/), in the **Premium v3 service plan** section. +The processors might be multicore or hyperthreading processors. Information on how many cores are available for each pricing tier can be found in [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/), in the **Premium v3 service plan** section. ## Customize health ping behavior |
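The container-swap behavior described earlier ("I don't see the updated container") amounts to waiting until the new container answers while App Service keeps serving from the old one. A hedged sketch of that polling loop; `probe` is any hypothetical callable, for example an HTTP check of a health endpoint that echoes the running image tag:

```python
import time

def wait_for_new_container(probe, expected, timeout=300, interval=5,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll `probe()` until it reports `expected` or the timeout elapses.

    While App Service pulls and starts the new container, `probe()` still
    sees the old container; only once the new one is ready does it flip.
    `clock` and `sleep` are injectable so the loop can be tested offline.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if probe() == expected:
            return True
        sleep(interval)
    return False
```

In practice you would point `probe` at your app's own version endpoint; the function names and endpoint are illustrative, not part of any App Service API.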
app-service | Configure Linux Open Ssh Session | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-linux-open-ssh-session.md | Title: SSH access for Linux containers -description: You can open an SSH session to a Linux container in Azure App Service. Custom Linux containers are supported with some modifications to your custom image. -keywords: azure app service, web app, linux, oss + Title: SSH access for Linux and Windows containers +description: You can open an SSH session to a Linux or a Windows container in Azure App Service. Custom Linux containers are supported with some modifications to your custom image. Custom Windows containers require no modifications to your custom image. +keywords: azure app service, web app, linux, windows, oss ms.assetid: 66f9988f-8ffa-414a-9137-3a9b15a5573c Previously updated : 11/18/2022 Last updated : 10/13/2023 +zone_pivot_groups: app-service-containers-windows-linux -# Open an SSH session to a Linux container in Azure App Service +# Open an SSH session to a container in Azure App Service -[Secure Shell (SSH)](https://wikipedia.org/wiki/Secure_Shell) is commonly used to execute administrative commands remotely from a command-line terminal. App Service on Linux provides SSH support into the app container. +[Secure Shell (SSH)](https://wikipedia.org/wiki/Secure_Shell) can be used to execute administrative commands remotely in a container. App Service provides SSH support directly into an app hosted in a container. +++## Open SSH session in browser +++ ![Linux App Service SSH](./media/configure-linux-open-ssh-session/app-service-linux-ssh.png) Load average: 0.07 0.04 0.08 4/765 45738 45738 1 root Z 0 0% 0 0% [init] </pre> + ## Next steps -You can post questions and concerns on the [Azure forum](/answers/topics/azure-webapps.html). +You can post questions and concerns on the [Azure forum](/answers/tags/436/azure-app-service). For more information on Web App for Containers, see: |
application-gateway | How To Url Rewrite Gateway Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-gateway-api.md | spec: - path: type: PathPrefix value: /shop- - filters: - - type: URLRewrite - urlRewrite: - path: - type: ReplacePrefixMatch - replacePrefixMatch: /ecommerce + filters: + - type: URLRewrite + urlRewrite: + path: + type: ReplacePrefixMatch + replacePrefixMatch: /ecommerce backendRefs: - name: backend-v1 port: 8080 |
automation | Automation Solution Vm Management Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-config.md | In an environment that includes two or more components on multiple VMs supportin Start/Stop VMs during off-hours can help manage the cost of running Azure Resource Manager and classic VMs in your subscription by evaluating machines that aren't used during non-peak periods, such as after hours, and automatically shutting them down if processor utilization is less than a specified percentage. -By default, the feature is pre-configured to evaluate the percentage CPU metric to see if average utilization is 5 percent or less. This scenario is controlled by the following variables and can be modified if the default values don't meet your requirements: --* `External_AutoStop_MetricName` -* `External_AutoStop_Threshold` -* `External_AutoStop_TimeAggregationOperator` -* `External_AutoStop_TimeWindow` -* `External_AutoStop_Frequency` -* `External_AutoStop_Severity` +By default, the feature is pre-configured to evaluate the percentage CPU metric to see if average utilization is 5 percent or less. This scenario is controlled by the following variables or parameters and can be modified if the default values don't meet your requirements: ++|Parameter | Description| +|-|-| +|External_AutoStop_MetricName | This parameter specifies the name of the metric that will be used to trigger the auto-stop action. It could be a metric related to the VM's performance or resource usage.| +|External_AutoStop_Threshold | This parameter sets the threshold value for the specified metric. When the metric value falls below this threshold, the auto-stop action will be triggered.| +|External_AutoStop_TimeAggregationOperator | This parameter determines how the metric values will be aggregated over time. 
It could be an operator like "Average", "Minimum", or "Maximum".| +|External_AutoStop_TimeWindow | This parameter defines the time window over which the metric values will be evaluated. It specifies the duration for which the metric values will be monitored before triggering the auto-stop action.| +|External_AutoStop_Frequency | This parameter sets the frequency at which the metric values will be checked. It determines how often the auto-stop action will be evaluated based on the specified metric.| +|External_AutoStop_Severity | This parameter specifies the severity level of the auto-stop action. It could be a value like "Low", "Medium", or "High" to indicate the importance or urgency of the action.| You can enable and target the action against a subscription and resource group, or target a specific list of VMs. |
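The evaluation those parameters describe can be sketched as aggregating CPU samples over the time window and comparing the result against the threshold. This is an illustrative model of the logic, not the runbook's actual code; the operator names mirror the table above:

```python
# Aggregation operators named by External_AutoStop_TimeAggregationOperator.
AGGREGATORS = {
    "Average": lambda xs: sum(xs) / len(xs),
    "Minimum": min,
    "Maximum": max,
}

def should_auto_stop(cpu_samples, threshold=5.0, operator="Average"):
    """Stop a VM when the aggregated 'Percentage CPU' samples collected
    over the time window fall at or below the threshold (default 5%)."""
    if not cpu_samples:
        return False  # no data in the window: take no action
    return AGGREGATORS[operator](cpu_samples) <= threshold

print(should_auto_stop([2.0, 4.0, 6.0]))  # average 4.0 <= 5.0 -> True
```

Switching the operator to `"Maximum"` with the same samples keeps the VM running, since the peak (6.0%) exceeds the threshold.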
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | This article highlights capabilities, features, and enhancements recently releas ## October 10, 2023 -### Image tag --`v1.24.0_2023-10-10` +**Image tag**: `v1.24.0_2023-10-10` For complete release version information, review [Version log](version-log.md#october-10-2023). ## September 12, 2023 -### Image tag --`v1.23.0_2023-09-12` +**Image tag**: `v1.23.0_2023-09-12` For complete release version information, review [Version log](version-log.md#september-12-2023). For complete release version information, review [Version log](version-log.md#se ## August 8, 2023 -### Image tag --`v1.22.0_2023-08-08` +**Image tag**: `v1.22.0_2023-08-08` For complete release version information, review [Version log](version-log.md#august-8-2023). For complete release version information, review [Version log](version-log.md#au ## July 11, 2023 -### Image tag --`v1.21.0_2023-07-11` +**Image tag**: `v1.21.0_2023-07-11` For complete release version information, review [Version log](version-log.md#july-11-2023). For complete release version information, review [Version log](version-log.md#ju ## June 13, 2023 -### Image tag --`v1.20.0_2023-06-13` +**Image tag**: `v1.20.0_2023-06-13` For complete release version information, review [Version log](version-log.md#june-13-2023). For complete release version information, review [Version log](version-log.md#ju ## May 9, 2023 -### Image tag --`v1.19.0_2023-05-09` +**Image tag**: `v1.19.0_2023-05-09` For complete release version information, review [Version log](version-log.md#may-9-2023). New for this release: ## April 12, 2023 -### Image tag --`v1.18.0_2023-04-11` +**Image tag**: `v1.18.0_2023-04-11` For complete release version information, see [Version log](version-log.md#april-11-2023). 
New for this release: ## March 14, 2023 -### Image tag --`v1.17.0_2023-03-14` +**Image tag**: `v1.17.0_2023-03-14` For complete release version information, see [Version log](version-log.md#march-14-2023). New for this release: ## February 14, 2023 -### Image tag --`v1.16.0_2023-02-14` +**Image tag**: `v1.16.0_2023-02-14` For complete release version information, see [Version log](version-log.md#february-14-2023). New for this release: ## January 13, 2023 -### Image tag --`v1.15.0_2023-01-10` +**Image tag**: `v1.15.0_2023-01-10` For complete release version information, see [Version log](version-log.md#january-13-2023). New for this release: ## December 13, 2022 -### Image tag --`v1.14.0_2022-12-13` +**Image tag**: `v1.14.0_2022-12-13` For complete release version information, see [Version log](version-log.md#december-13-2022). New for this release: ## November 8, 2022 -### Image tag --`v1.13.0_2022-11-08` +**Image tag**: `v1.13.0_2022-11-08` For complete release version information, see [Version log](version-log.md#november-8-2022). New for this release: ## October 11, 2022 -### Image tag --`v1.12.0_2022-10-11` +**Image tag**: `v1.12.0_2022-10-11` For complete release version information, see [Version log](version-log.md#october-11-2022). The following properties in the Arc SQL Managed Instance status will be deprecat ## September 13, 2022 -### Image tag --`v1.11.0_2022-09-13` +**Image tag**: `v1.11.0_2022-09-13` For complete release version information, see [Version log](version-log.md#september-13-2022). New for this release: This release is published August 9, 2022. -### Image tag --`v1.10.0_2022-08-09` +**Image tag**: `v1.10.0_2022-08-09` For complete release version information, see [Version log](version-log.md#august-9-2022). 
For complete release version information, see [Version log](version-log.md#augus This release is published July 12, 2022 -### Image tag --`v1.9.0_2022-07-12` +**Image tag**: `v1.9.0_2022-07-12` For complete release version information, see [Version log](version-log.md#july-12-2022). For complete release version information, see [Version log](version-log.md#july- This release is published June 14, 2022. -### Image tag --`v1.8.0_2022-06-14` +**Image tag**: `v1.8.0_2022-06-14` For complete release version information, see [Version log](version-log.md#june-14-2022). For complete release version information, see [Version log](version-log.md#june- This release is published May 24, 2022. -### Image tag --`v1.7.0_2022-05-24` +**Image tag**: `v1.7.0_2022-05-24` For complete release version information, see [Version log](version-log.md#may-24-2022). Preview expected costs for Azure Arc-enabled SQL Managed Instance Business Criti This release is published May 4, 2022. -### Image tag --`v1.6.0_2022-05-02` +**Image tag**: `v1.6.0_2022-05-02` For complete release version information, see [Version log](version-log.md#may-4-2022). Added upgrade experience for Data Controller in direct and indirect connectivity This release is published April 6, 2022. -### Image tag --`v1.5.0_2022-04-05` +**Image tag**: `v1.5.0_2022-04-05` For complete release version information, see [Version log](version-log.md#april-6-2022). During direct connected mode data controller creation, you can now specify the l This release is published March 8, 2022. -### Image tag --`v1.4.1_2022-03-08` +**Image tag**: `v1.4.1_2022-03-08` For complete release version information, see [Version log](version-log.md#march-8-2022). For complete release version information, see [Version log](version-log.md#march This release is published February 25, 2022. -### Image tag --`v1.4.0_2022-02-25` +**Image tag**: `v1.4.0_2022-02-25` For complete release version information, see [Version log](version-log.md#february-25-2022). 
The following improvements are available in [Azure Data Studio](/sql/azure-data- This release is published January 27, 2022. -### Image tag --`v1.3.0_2022-01-27` +**Image tag**: `v1.3.0_2022-01-27` For complete release version information, see [Version log](version-log.md#january-27-2022). |
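Every release above names an image tag of the form `vX.Y.Z_YYYY-MM-DD`. A small parsing sketch for working with such tags programmatically; the helper name is ours, not part of any Azure tooling:

```python
import re
from datetime import date

# Matches tags like "v1.24.0_2023-10-10": semantic version, then build date.
TAG_RE = re.compile(r"^v(\d+)\.(\d+)\.(\d+)_(\d{4})-(\d{2})-(\d{2})$")

def parse_image_tag(tag):
    """Split a release image tag into its version tuple and build date."""
    m = TAG_RE.match(tag)
    if m is None:
        raise ValueError(f"unrecognized image tag: {tag}")
    major, minor, patch, y, mo, d = map(int, m.groups())
    return (major, minor, patch), date(y, mo, d)
```

Sorting tags by the returned `(version, date)` pair reproduces the release order listed above.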
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md | The following providers and their corresponding Kubernetes distributions have su | Provider name | Distribution name | Version | | | -- | - | | RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6, [4.13.4](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html) |-| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) |TKGm 2.3; upstream K8s v1.26.5+vmware.2<br>TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br>TKGm 2.1.0; upstream K8s v1.24.9+vmware.1 <br>TKGm 1.6.0; upstream K8s v1.23.8+vmware.2| +| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) |TKGs 2.2; upstream K8s 1.25.7+vmware.3<br>TKGm 2.3; upstream K8s v1.26.5+vmware.2<br>TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br>TKGm 2.1.0; upstream K8s v1.24.9+vmware.1| | Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes)|[1.24](https://ubuntu.com/kubernetes/docs/1.24/components), [1.28](https://ubuntu.com/kubernetes/docs/1.28/components) | | SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 | | Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 | The conformance tests run as part of the Azure Arc-enabled Kubernetes validation + |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md | Arc resource bridge supports the following Azure regions: * West US 2 * West US 3 * Central US- * South Central US * West Europe * North Europe * UK South * Sweden Central- * Canada Central * Australia East * Southeast Asia The following private cloud environments and their versions are officially suppo * Azure Stack HCI * SCVMM -### Required Azure permissions -* To onboard Arc resource bridge, you must have the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group. -* To read, modify, and delete Arc resource bridge, you must have the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group. +### Supported versions ++Generally, the latest released version and the previous three versions (n-3) of Arc resource bridge are supported. For example, if the current version is 1.0.10, then the typical n-3 supported versions are: ++- Current version: 1.0.10 +- n-1 version: 1.0.9 +- n-2 version: 1.0.8 +- n-3 version: 1.0.7 -### Networking +There may be instances where supported versions are not sequential. For example, version 1.0.11 is released and later found to contain a bug. A hot fix is released in version 1.0.12 and version 1.0.11 is removed. In this scenario, n-3 supported versions become 1.0.12, 1.0.10, 1.0.9, 1.0.8. -Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol. You may need to allow specific URLs to [ensure outbound connectivity is not blocked](troubleshoot-resource-bridge.md#not-able-to-connect-to-url) by your firewall or proxy server. For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md). 
+Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month. Delays may occur that could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub. To learn more about upgrade options, visit [Upgrade Arc resource bridge](upgrade.md). ++If a resource bridge is not upgraded to one of the supported versions (n-3), then it will fall outside the support window and be unsupported. If this happens, it may not always be possible to upgrade an unsupported resource bridge to a newer version, as component services used by Arc resource bridge may no longer be compatible. In addition, the unsupported resource bridge may not be able to provide reliable monitoring and health metrics. ++If an Arc resource bridge is unable to be upgraded to a supported version, you must delete it and deploy a new resource bridge. Depending on which private cloud product you're using, there may be other steps required to reconnect the resource bridge to existing resources. For details, check the partner product's Arc resource bridge recovery documentation. ## Next steps Arc resource bridge communicates outbound securely to Azure Arc over TCP port 44 * Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge. + |
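The n-3 support window described above can be modeled as a simple membership check. The sketch below is a hypothetical helper (not part of any Azure tooling); the supported-version list would come from the Arc resource bridge release notes, and the installed version is an example value:

```shell
# Hypothetical helper: report whether an installed Arc resource bridge
# version falls inside the published support window (latest release plus
# the three prior releases).
is_supported() {
  installed="$1"; shift
  for v in "$@"; do          # "$@" holds the currently supported versions
    [ "$installed" = "$v" ] && return 0
  done
  return 1
}

# Example window from the article: current 1.0.10 down to 1.0.7
if is_supported "1.0.8" 1.0.10 1.0.9 1.0.8 1.0.7; then
  echo "supported"
else
  echo "unsupported: plan an upgrade or redeploy"
fi
```

Because supported versions aren't always sequential (as in the 1.0.11 hotfix example above), checking against the explicit published list is safer than arithmetic on version numbers.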
azure-arc | Agent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md | Installing the Connected Machine agent for Window applies the following system-w | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. | > [!TIP]- > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you may need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function. + > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you might need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function. * Agent installation creates the following local security group. Installing the Connected Machine agent for Linux applies the following system-wi The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions: * The Guest Configuration agent can use up to 5% of the CPU to evaluate policies.-* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions may apply more restrictive CPU limits once installed. The following exceptions apply: +* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. 
The following exceptions apply: | Extension type | Operating system | CPU limit | | -- | -- | -- | The Azure Connected Machine agent is designed to manage agent and system resourc | AzureMonitorWindowsAgent | Windows | 100% | | AzureSecurityLinuxAgent | Linux | 30% | | LinuxOsUpdateExtension | Linux | 60% |- | MDE.Linux | Linux | 30% | + | MDE.Linux | Linux | 60% | | MicrosoftDnsAgent | Windows | 100% | | MicrosoftMonitoringAgent | Windows | 60% | | OmsAgentForLinux | Linux | 60% | |
azure-arc | Agent Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md | The Azure Connected Machine agent receives improvements on an ongoing basis. Thi - Known issues - Bug fixes +## Version 1.31 - June 2023 ++Download for [Windows](https://download.microsoft.com/download/2/6/e/26e2b001-1364-41ed-90b0-1340a44ba409/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) ++### Known issue ++The first release of agent version 1.31 had a known issue affecting customers using proxy servers. The issue displays as `AZCM0026: Network Error` and a message about "no IP addresses found" when connecting a server to Azure Arc using a proxy server. A newer version of agent 1.31 was released on June 14, 2023 that addresses this issue. ++To check if you're running the latest version of the Azure connected machine agent, navigate to the server in the Azure portal or run `azcmagent show` from a terminal on the server itself and look for the "Agent version." The table below shows the version numbers for the first and patched releases of agent 1.31. 
++| Package type | Version number with proxy issue | Version number of patched agent | +| | - | - | +| Windows | 1.31.02347.1069 | 1.31.02356.1083 | +| RPM-based Linux | 1.31.02347.957 | 1.31.02356.970 | +| DEB-based Linux | 1.31.02347.939 | 1.31.02356.952 | ++### New features ++- Added support for Amazon Linux 2023 +- [azcmagent show](azcmagent-show.md) no longer requires administrator privileges +- You can now filter the output of [azcmagent show](azcmagent-show.md) by specifying the properties you wish to output ++### Fixed ++- Added an error message when a pending reboot on the machine affects extension operations +- The scheduled task that checks for agent updates no longer outputs a file +- Improved formatting for clock skew calculations +- Improved reliability when upgrading extensions by explicitly asking extensions to stop before trying to upgrade. +- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Update Manager extension for Linux, Microsoft Defender Endpoint for Linux, and Azure Security Agent for Linux to prevent timeouts during installation +- [azcmagent disconnect](azcmagent-disconnect.md) now closes any active SSH or Windows Admin Center connections +- Improved output of the [azcmagent check](azcmagent-check.md) command +- Better handling of spaces in the `--location` parameter of [azcmagent connect](azcmagent-connect.md) + ## Version 1.30 - May 2023 Download for [Windows](https://download.microsoft.com/download/7/7/9/779eae73-a12b-4170-8c5e-abec71bc14cf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) Download for [Windows](https://download.microsoft.com/download/2/7/0/27063536-94 ### New features -- The agent now compares the time on the local system and Azure service when checking network connectivity and creating the resource in Azure. If the clocks are offset by more than 120 seconds (2 minutes), a nonblocking error is shown. 
You may encounter TLS connection errors if the time of your computer doesn't match the time in Azure.+- The agent now compares the time on the local system and Azure service when checking network connectivity and creating the resource in Azure. If the clocks are offset by more than 120 seconds (2 minutes), a nonblocking error is shown. You might encounter TLS connection errors if the time of your computer doesn't match the time in Azure. - `azcmagent show` now supports an `--os` flag to print extra OS information to the console ### Fixed Download for [Windows](https://download.microsoft.com/download/f/9/d/f9d60cc9-7c - Only the most recent log file for each component is collected by default. To collect all log files, use the new `--full` flag. - Journal logs for the agent services are now collected on Linux operating systems - Logs from extensions are now collected-- Agent telemetry is no longer sent to `dc.services.visualstudio.com`. You may be able to remove this URL from any firewall or proxy server rules if no other applications in your environment require it.+- Agent telemetry is no longer sent to `dc.services.visualstudio.com`. You might be able to remove this URL from any firewall or proxy server rules if no other applications in your environment require it. - Failed extension installs can now be retried without removing the old extension as long as the extension settings are different - Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Azure Update Manager extension on Linux to reduce downtime during update operations Download for [Windows](https://download.microsoft.com/download/f/b/1/fb143ada-1b ### Known issues -- Some systems may incorrectly report their cloud provider as Azure Stack HCI.+- Some systems might incorrectly report their cloud provider as Azure Stack HCI. 
### New features Download for [Windows](https://download.microsoft.com/download/8/9/f/89f80a2b-32 ### Known issues - Agents configured to use private endpoints incorrectly download extensions from a public endpoint. [Upgrade the agent](manage-agent.md#upgrade-the-agent) to version 1.20 or later to restore correct functionality.-- Some systems may incorrectly report their cloud provider as Azure Stack HCI.+- Some systems might incorrectly report their cloud provider as Azure Stack HCI. ### New features Download for [Windows](https://download.microsoft.com/download/8/9/f/89f80a2b-32 - Resolved an issue that could cause the extension manager to hang during extension installation, update, and removal operations. - Improved support for TLS 1.3 ## Version 1.18 - May 2022 Download for [Windows](https://download.microsoft.com/download/2/5/6/25685d0f-2895-4b80-9b1d-5ba53a46097f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) Download for [Windows](https://download.microsoft.com/download/8/a/9/8a963958-c4 ### Known issues -- Extensions may get stuck in transient states (creating, deleting, updating) on Windows machines running the 1.13 agent in certain conditions. Microsoft recommends upgrading to agent version 1.14 as soon as possible to resolve this issue.+- Extensions might get stuck in transient states (creating, deleting, updating) on Windows machines running the 1.13 agent in certain conditions. Microsoft recommends upgrading to agent version 1.14 as soon as possible to resolve this issue. ### Fixed
- Support for configuring proxy server settings [using agent-specific settings](manage-agent.md#update-or-remove-proxy-settings) instead of environment variables.-- Extension operations execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](network-requirements.md)). The extension manager falls back to the existing behavior of checking every 5 minutes when the notification service is inaccessible.+- Extension operations execute faster using a new notification pipeline. You might need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](network-requirements.md)). The extension manager falls back to the existing behavior of checking every 5 minutes when the notification service is inaccessible. - Detection of the AWS account ID, instance ID, and region information for servers running in Amazon Web Services. ## Version 1.12 - October 2021 Download for [Windows](https://download.microsoft.com/download/6/1/c/61c69f31-8e - Onboarding continues instead of aborting if OS information isn't available - Improved reliability when installing the Log Analytics agent for Linux extension on Red Hat and CentOS systems ## Version 1.6 - May 2021 Download for [Windows](https://download.microsoft.com/download/d/3/d/d3df034a-d231-4ca6-9199-dbaa139b1eaf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) |
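The version 1.31 known issue above asks you to confirm the installed agent is at least the patched build. One hypothetical way to compare dotted build numbers (the values are examples from the table above; on a real server the installed version comes from `azcmagent show`) is version-aware sorting with `sort -V`:

```shell
# Hypothetical check: compare an installed agent version against the
# patched 1.31 Windows build using version-aware sorting.
installed="1.31.02347.1069"   # example value as reported by `azcmagent show`
patched="1.31.02356.1083"

# sort -V orders dotted version strings numerically; head -n1 is the lowest.
lowest=$(printf '%s\n%s\n' "$installed" "$patched" | sort -V | head -n1)
if [ "$lowest" = "$patched" ]; then
  echo "patched or newer"
else
  echo "affected build: update the agent"
fi
```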
azure-arc | Agent Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md | The Azure Connected Machine agent receives improvements on an ongoing basis. To This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Connected Machine agent](agent-release-notes-archive.md). +## Version 1.35 - October 2023 ++Download for [Windows](https://download.microsoft.com/download/e/7/0/e70b1753-646e-4aea-bac4-40187b5128b0/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) ++### New features ++- The Linux installation script now downloads supporting assets with either wget or curl, depending on which tool is available on the system +- [azcmagent connect](azcmagent-connect.md) and [azcmagent disconnect](azcmagent-disconnect.md) now accept the `--user-tenant-id` parameter to enable Lighthouse users to use a credential from their tenant and onboard a server to a different tenant. +- You can configure the extension manager to run, without allowing any extensions to be installed, by configuring the allowlist to `Allow/None`. This supports Windows Server 2012 ESU scenarios where the extension manager is required for billing purposes but doesn't need to allow any extensions to be installed. Learn more about [local security controls](security-overview.md#local-agent-security-controls). 
++### Fixed ++- Improved reliability when installing Microsoft Defender for Endpoint on Linux by increasing [available system resources](agent-overview.md#agent-resource-governance) and extending the timeout +- Better error handling when a user specifies an invalid location name to [azcmagent connect](azcmagent-connect.md) +- Fixed a bug where clearing the `incomingconnections.enabled` [configuration setting](azcmagent-config.md) would show `<nil>` as the previous value +- Security fix for the extension allowlist and blocklist feature to address an issue where an invalid extension name could impact enforcement of the lists. + ## Version 1.34 - September 2023 Download for [Windows](https://download.microsoft.com/download/b/3/2/b3220316-13db-4f1f-babf-b1aab33b364f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) Download for [Windows](https://download.microsoft.com/download/7/e/5/7e51205f-a0 - Fixed an issue that could result in high CPU usage if the agent was unable to send telemetry to Azure. - Improved local logging when there are network communication errors -## Version 1.31 - June 2023 --Download for [Windows](https://download.microsoft.com/download/2/6/e/26e2b001-1364-41ed-90b0-1340a44ba409/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --### Known issue --The first release of agent version 1.31 had a known issue affecting customers using proxy servers. The issue displays as `AZCM0026: Network Error` and a message about "no IP addresses found" when connecting a server to Azure Arc using a proxy server. A newer version of agent 1.31 was released on June 14, 2023 that addresses this issue. --To check if you're running the latest version of the Azure connected machine agent, navigate to the server in the Azure portal or run `azcmagent show` from a terminal on the server itself and look for the "Agent version." 
The table below shows the version numbers for the first and patched releases of agent 1.31. --| Package type | Version number with proxy issue | Version number of patched agent | -| | - | - | -| Windows | 1.31.02347.1069 | 1.31.02356.1083 | -| RPM-based Linux | 1.31.02347.957 | 1.31.02356.970 | -| DEB-based Linux | 1.31.02347.939 | 1.31.02356.952 | --### New features --- Added support for Amazon Linux 2023-- [azcmagent show](azcmagent-show.md) no longer requires administrator privileges-- You can now filter the output of [azcmagent show](azcmagent-show.md) by specifying the properties you wish to output--### Fixed --- Added an error message when a pending reboot on the machine affects extension operations-- The scheduled task that checks for agent updates no longer outputs a file-- Improved formatting for clock skew calculations-- Improved reliability when upgrading extensions by explicitly asking extensions to stop before trying to upgrade.-- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Update Manager extension for Linux, Microsoft Defender Endpoint for Linux, and Azure Security Agent for Linux to prevent timeouts during installation-- [azcmagent disconnect](azcmagent-disconnect.md) now closes any active SSH or Windows Admin Center connections-- Improved output of the [azcmagent check](azcmagent-check.md) command-- Better handling of spaces in the `--location` parameter of [azcmagent connect](azcmagent-connect.md)- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods. |
azure-arc | Azcmagent Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-connect.md | Title: azcmagent connect CLI reference description: Syntax for the azcmagent connect command line tool Previously updated : 04/20/2023 Last updated : 10/05/2023 # azcmagent connect There are 4 ways to provide authentication credentials to the Azure connected ma ### Interactive browser login (Windows-only) -This option is the default on Windows operating systems with a desktop experience. It login page opens in your default web browser. This option may be required if your organization has configured conditional access policies that require you to log in from trusted machines. +This option is the default on Windows operating systems with a desktop experience. The login page opens in your default web browser. This option might be required if your organization has configured conditional access policies that require you to log in from trusted machines. No flag is required to use the interactive browser login. The tenant ID for the subscription where you want to create the Azure Arc-enable Generate a Microsoft Entra device login code that can be entered in a web browser on another computer to authenticate the agent with Azure. For more information, see [authentication options](#authentication-options). +`--user-tenant-id` ++The tenant ID for the account used to connect the server to Azure. This field is required when the tenant of the onboarding account isn't the same as the desired tenant for the Azure Arc-enabled server resource. + [!INCLUDE [common-flags](includes/azcmagent-common-flags.md)] |
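The new `--user-tenant-id` flag described above is passed alongside the usual onboarding parameters. A hypothetical invocation might look like the following (all GUIDs and names are placeholders; only the documented `azcmagent connect` flags are used):

```bash
# Illustrative only: onboard with a Lighthouse account from one tenant while
# creating the Arc-enabled server resource in a different tenant.
azcmagent connect \
  --resource-group "myResourceGroup" \
  --tenant-id "<resource-tenant-guid>" \
  --user-tenant-id "<onboarding-account-tenant-guid>" \
  --location "eastus" \
  --subscription-id "<subscription-guid>"
```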
azure-arc | Azcmagent Disconnect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/azcmagent-disconnect.md | There are 4 ways to provide authentication credentials to the Azure connected ma ### Interactive browser login (Windows-only) -This option is the default on Windows operating systems with a desktop experience. The login page opens in your default web browser. This option may be required if your organization has configured conditional access policies that require you to log in from trusted machines. +This option is the default on Windows operating systems with a desktop experience. The login page opens in your default web browser. This option might be required if your organization has configured conditional access policies that require you to log in from trusted machines. No flag is required to use the interactive browser login. Specifies the service principal secret. Must be used with the `--service-princip Generate a Microsoft Entra device login code that can be entered in a web browser on another computer to authenticate the agent with Azure. For more information, see [authentication options](#authentication-options). +`--user-tenant-id` ++The tenant ID for the account used to connect the server to Azure. This field is required when the tenant of the onboarding account isn't the same as the desired tenant for the Azure Arc-enabled server resource. + [!INCLUDE [common-flags](includes/azcmagent-common-flags.md)] |
azure-arc | License Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md | An additional scenario (scenario 1, below) is a candidate for VM/Virtual core li > Customers that choose virtual core licensing will always be charged at the Standard edition rate, even if the actual operating system used is Datacenter edition. Additionally, virtual core licensing is not available for physical servers. > +### License limits ++Each WS2012 ESU license can cover up to and including 10,000 cores. If you need ESUs for more than 10,000 cores, split the total number of cores across multiple licenses. + ### SA/SPLA conformance In all cases, you're required to attest to conformance with SA or SPLA. There is no exception for these requirements. Software Assurance or an equivalent Server Subscription is required for you to purchase Extended Security Updates on-premises and in hosted environments. You will be able to purchase Extended Security Updates from Enterprise Agreement (EA), Enterprise Subscription Agreement (EAS), a Server & Cloud Enrollment (SCE), and Enrollment for Education Solutions (EES). On Azure, you do not need Software Assurance to get free Extended Security Updates, but Software Assurance or Server Subscription is required to take advantage of the Azure Hybrid Benefit. |
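The 10,000-core cap above implies ceiling division when sizing licenses for a large estate. A hypothetical sizing sketch (the core count is an example, not a recommendation):

```shell
# Hypothetical sizing helper: number of WS2012 ESU licenses needed when each
# license covers at most 10,000 cores.
total_cores=23500
max_per_license=10000

# Ceiling division via integer arithmetic: ceil(a/b) = (a + b - 1) / b
licenses=$(( (total_cores + max_per_license - 1) / max_per_license ))
echo "$total_cores cores -> $licenses licenses"
```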
azure-arc | Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-overview.md | Last updated 05/24/2022 This article describes the security configuration and considerations you should evaluate before deploying Azure Arc-enabled servers in your enterprise. -> [!IMPORTANT] -> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available. - ## Identity and access control -Each Azure Arc-enabled server has a managed identity as part of a resource group inside an Azure subscription. That identity represents the server running on-premises or other cloud environment. Access to this resource is controlled by standard [Azure role-based access control](../../role-based-access-control/overview.md). From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.md) page in the Azure portal, you can verify who has access to your Azure Arc-enabled server. +[Azure role-based access control](../../role-based-access-control/overview.md) is used to control which accounts can see and manage your Azure Arc-enabled server. From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.md) page in the Azure portal, you can verify who has access to your Azure Arc-enabled server. :::image type="content" source="./media/security-overview/access-control-page.png" alt-text="Azure Arc-enabled server access control" border="false" lightbox="./media/security-overview/access-control-page.png"::: The guest configuration and extension services run as Local System on Windows, a ## Local agent security controls -Starting with agent version 1.16, you can optionally limit the extensions that can be installed on your server and disable Guest Configuration. 
These controls can be useful when connecting servers to Azure that need to be monitored or secured by Azure, but should not allow arbitrary management capabilities like running scripts with Custom Script Extension or configuring settings on the server with Guest Configuration. +Starting with agent version 1.16, you can optionally limit the extensions that can be installed on your server and disable Guest Configuration. These controls can be useful when connecting servers to Azure for a single purpose, such as collecting event logs, without allowing other management capabilities to be used on the server. -These security controls can only be configured by running a command on the server itself and cannot be modified from Azure. This approach preserves the server admin's intent when enabling remote management scenarios with Azure Arc, but also means that changing the setting is more difficult if you later decide to change them. This feature is intended for particularly sensitive servers (for example, Active Directory Domain Controllers, servers that handle payment data, and servers subject to strict change control measures). In most other cases, it is not necessary to modify these settings. +These security controls can only be configured by running a command on the server itself and cannot be modified from Azure. This approach preserves the server admin's intent when enabling remote management scenarios with Azure Arc, but also means that changing the setting is more difficult if you later decide to change them. This feature is intended for sensitive servers (for example, Active Directory Domain Controllers, servers that handle payment data, and servers subject to strict change control measures). In most other cases, it's not necessary to modify these settings. 
### Extension allowlists and blocklists -To limit which [extensions](manage-vm-extensions.md) can be installed on your server, you can configure lists of the extensions you wish to allow and block on the server. The extension manager will evaluate all requests to install, update, or upgrade extensions against the allowlist and blocklist to determine if the extension can be installed on the server. Delete requests are always allowed. +To limit which [extensions](manage-vm-extensions.md) can be installed on your server, you can configure lists of the extensions you wish to allow and block on the server. The extension manager evaluates all requests to install, update, or upgrade extensions against the allowlist and blocklist to determine if the extension can be installed on the server. Delete requests are always allowed. -The most secure option is to explicitly allow the extensions you expect to be installed. Any extension not in the allowlist is automatically blocked. To configure the Azure Connected Machine agent to allow only the Log Analytics Agent for Linux and the Dependency Agent for Linux, run the following command on each server: +The most secure option is to explicitly allow the extensions you expect to be installed. Any extension not in the allowlist is automatically blocked. To configure the Azure Connected Machine agent to allow only the Azure Monitor Agent for Linux, run the following command on each server: ```bash-azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monitoring/OMSAgentForLinux,Microsoft.Azure.Monitoring.DependencyAgent/DependencyAgentLinux" +azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorLinuxAgent" ``` -You can block one or more extensions by adding them to the blocklist. If an extension is present in both the allowlist and blocklist, it will be blocked. 
To block the Custom Script extension for Linux, run the following command: +You can block one or more extensions by adding them to the blocklist. If an extension is present in both the allowlist and blocklist, it's blocked. To block the Custom Script extension for Linux, run the following command: ```bash azcmagent config set extensions.blocklist "Microsoft.Azure.Extensions/CustomScript" ``` -Extensions are specified by their publisher and type, separated by a forward slash. See the list of the [most common extensions](manage-vm-extensions.md) in the docs or list the VM extensions already installed on your server in the [portal](manage-vm-extensions-portal.md#list-extensions-installed), [Azure PowerShell](manage-vm-extensions-powershell.md#list-extensions-installed), or [Azure CLI](manage-vm-extensions-cli.md#list-extensions-installed). +Specify extensions with their publisher and type, separated by a forward slash `/`. See the list of the [most common extensions](manage-vm-extensions.md) in the docs or list the VM extensions already installed on your server in the [portal](manage-vm-extensions-portal.md#list-extensions-installed), [Azure PowerShell](manage-vm-extensions-powershell.md#list-extensions-installed), or [Azure CLI](manage-vm-extensions-cli.md#list-extensions-installed). -The table below describes the behavior when performing an extension operation against an agent that has the allowlist or blocklist configured. +The table describes the behavior when performing an extension operation against an agent that has the allowlist or blocklist configured. 
| Operation | In the allowlist | In the blocklist | In both the allowlist and blocklist | Not in any list, but an allowlist is configured | |--|--|--|--|--| The table below describes the behavior when performing an extension operation ag | Delete extension | Allowed | Allowed | Allowed | Allowed | > [!IMPORTANT]-> If an extension is already installed on your server before you configure an allowlist or blocklist, it will not automatically be removed. It is your responsibility to delete the extension from Azure to fully remove it from the machine. Delete requests are always accepted to accommodate this scenario. Once deleted, the allowlist and blocklist will determine whether or not to allow future install attempts. +> If an extension is already installed on your server before you configure an allowlist or blocklist, it won't automatically be removed. It's your responsibility to delete the extension from Azure to fully remove it from the machine. Delete requests are always accepted to accommodate this scenario. Once deleted, the allowlist and blocklist determine whether or not to allow future install attempts. ++Starting with agent version 1.35, there is a special allowlist value `Allow/None`, which instructs the extension manager to run, but not allow any extensions to be installed. This is the recommended configuration when using Azure Arc to deliver Windows Server 2012 Extended Security Updates (ESU) without intending to use any other extensions. ++```bash +azcmagent config set extensions.allowlist "Allow/None" +``` ### Enable or disable Guest Configuration Azure Policy's Guest Configuration feature enables you to audit and configure se azcmagent config set guestconfiguration.enabled false ``` -When Guest Configuration is disabled, any Guest Configuration policies assigned to the machine in Azure will report as non-compliant.
Consider [creating an exemption](../../governance/policy/concepts/exemption-structure.md) for these machines or [changing the scope](../../governance/policy/concepts/assignment-structure.md#excluded-scopes) of your policy assignments if you don't want to see these machines reported as non-compliant. +When Guest Configuration is disabled, any Guest Configuration policies assigned to the machine in Azure show as noncompliant. Consider [creating an exemption](../../governance/policy/concepts/exemption-structure.md) for these machines or [changing the scope](../../governance/policy/concepts/assignment-structure.md#excluded-scopes) of your policy assignments if you don't want to see these machines reported as noncompliant. ### Enable or disable the extension manager The extension manager is responsible for installing, updating, and removing [VM azcmagent config set extensions.enabled false ``` -Disabling the extension manager will not remove any extensions already installed on your server. Extensions that are hosted in their own Windows or Linux services, such as the Log Analytics Agent, may continue to run even if the extension manager is disabled. Other extensions that are hosted by the extension manager itself, like the Azure Monitor Agent, will not run if the extension manger is disabled. You should [remove any extensions](manage-vm-extensions-portal.md#remove-extensions) before disabling the extension manager to ensure no extensions continue to run on the server. +Disabling the extension manager won't remove any extensions already installed on your server. Extensions that are hosted in their own Windows or Linux services, such as the Log Analytics Agent, might continue to run even if the extension manager is disabled. Other extensions that are hosted by the extension manager itself, like the Azure Monitor Agent, don't run if the extension manager is disabled.
You should [remove any extensions](manage-vm-extensions-portal.md#remove-extensions) before disabling the extension manager to ensure no extensions continue to run on the server. ### Locked down machine best practices -When configuring the Azure Connected Machine agent with a reduced set of capabilities, it is important to consider the mechanisms that someone could use to remove those restrictions and implement appropriate controls. Anybody capable of running commands as an administrator or root user on the server can change the Azure Connected Machine agent configuration. Extensions and guest configuration policies execute in privileged contexts on your server, and as such may be able to change the agent configuration. If you apply these security controls to lock down the agent, Microsoft recommends the following best practices to ensure only local server admins can update the agent configuration: +When configuring the Azure Connected Machine agent with a reduced set of capabilities, it's important to consider the mechanisms that someone could use to remove those restrictions and implement appropriate controls. Anybody capable of running commands as an administrator or root user on the server can change the Azure Connected Machine agent configuration. Extensions and guest configuration policies execute in privileged contexts on your server, and as such might be able to change the agent configuration. If you apply local agent security controls to lock down the agent, Microsoft recommends the following best practices to ensure only local server admins can update the agent configuration: * Use allowlists for extensions instead of blocklists whenever possible. * Don't include the Custom Script Extension in the extension allowlist to prevent execution of arbitrary scripts that could change the agent configuration. 
When configuring the Azure Connected Machine agent with a reduced set of capabil ### Example configuration for monitoring and security scenarios -It's common to use Azure Arc to monitor your servers with Azure Monitor and Microsoft Sentinel and secure them with Microsoft Defender for Cloud. The following configuration samples can help you configure the Azure Arc Connected Machine agent to only allow these scenarios. +It's common to use Azure Arc to monitor your servers with Azure Monitor and Microsoft Sentinel and secure them with Microsoft Defender for Cloud. This section contains examples for how to lock down the agent to only support monitoring and security scenarios. #### Azure Monitor Agent only sudo azcmagent config set guestconfiguration.enabled false #### Monitoring and security -Microsoft Defender for Cloud enables additional extensions on your server to identify vulnerable software on your server and enable Microsoft Defender for Endpoint (if configured). Microsoft Defender for Cloud also uses Guest Configuration for its regulatory compliance feature. Since a custom Guest Configuration assignment could be used to undo the agent limitations, you should carefully evaluate whether or not you need the regulatory compliance feature and, as a result, Guest Configuration to be enabled on the machine. +Microsoft Defender for Cloud deploys extensions on your server to identify vulnerable software and enable Microsoft Defender for Endpoint (if configured). Microsoft Defender for Cloud also uses Guest Configuration for its regulatory compliance feature. Since a custom Guest Configuration assignment could be used to undo the agent limitations, you should carefully evaluate whether or not you need the regulatory compliance feature and, as a result, Guest Configuration to be enabled on the machine. 
On your Windows servers, run the following commands in an elevated command console: azcmagent config set config.mode full By default, the Microsoft Entra system assigned identity used by Arc can only be used to update the status of the Azure Arc-enabled server in Azure. For example, the *last seen* heartbeat status. You can optionally assign other roles to the identity if an application on your server uses the system assigned identity to access other Azure services. To learn more about configuring a system-assigned managed identity to access Azure resources, see [Authenticate against Azure resources with Azure Arc-enabled servers](managed-identity-authentication.md). -While the Hybrid Instance Metadata Service can be accessed by any application running on the machine, only authorized applications can request a Microsoft Entra token for the system assigned identity. On the first attempt to access the token URI, the service will generate a randomly generated cryptographic blob in a location on the file system that only trusted callers can read. The caller must then read the file (proving it has appropriate permission) and retry the request with the file contents in the authorization header to successfully retrieve a Microsoft Entra token. +While the Hybrid Instance Metadata Service can be accessed by any application running on the machine, only authorized applications can request a Microsoft Entra token for the system assigned identity. On the first attempt to access the token URI, the service generates a random cryptographic blob in a location on the file system that only trusted callers can read. The caller must then read the file (proving it has appropriate permission) and retry the request with the file contents in the authorization header to successfully retrieve a Microsoft Entra token. * On Windows, the caller must be a member of the local **Administrators** group or the **Hybrid Agent Extension Applications** group to read the blob. |
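The challenge-token handshake described above (a 401 response points the caller at a secret file that only trusted callers can read; the caller echoes the file contents back in the authorization header) can be sketched as follows. The `Basic realm=<path>` challenge format and file naming here are illustrative assumptions, not the agent's exact wire format.

```python
import os
import tempfile

def parse_challenge(www_authenticate: str) -> str:
    """Extract the challenge-file path from a 'Basic realm=<path>' header value."""
    scheme, _, realm = www_authenticate.partition(" ")
    if scheme != "Basic" or not realm.startswith("realm="):
        raise ValueError("unexpected challenge format")
    return realm[len("realm="):]

def build_authorization_header(challenge_path: str) -> str:
    """Read the secret blob (only trusted callers have permission) and echo it back."""
    with open(challenge_path, "r") as f:
        secret = f.read().strip()
    return f"Basic {secret}"

# Simulate the flow with a stand-in challenge file instead of a live 401 response.
with tempfile.NamedTemporaryFile("w", suffix=".key", delete=False) as f:
    f.write("0123456789abcdef")
    path = f.name

header = build_authorization_header(parse_challenge(f"Basic realm={path}"))
print(header)  # Basic 0123456789abcdef
os.unlink(path)
```

Because reading the file is the proof of permission, an unprivileged process that can reach the local metadata endpoint still cannot complete the handshake.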
azure-government | Azure Secure Isolation Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md | Title: Azure guidance for secure isolation description: Customer guidance for Azure secure isolation across compute, networking, and storage.--++ recommendations: false Microsoft provides detailed customer guidance on **[Windows](../virtual-machines Azure Compute offers virtual machine sizes that are [isolated to a specific hardware type](../virtual-machines/isolation.md) and dedicated to a single customer. These VM instances allow your workloads to be deployed on dedicated physical servers. Using Isolated VMs essentially guarantees that your VM will be the only one running on that specific server node. You can also choose to further subdivide the resources on these Isolated VMs by using [Azure support for nested Virtual Machines](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization). ## Networking isolation-The logical isolation of tenant infrastructure in a public multi-tenant cloud is [fundamental to maintaining security](https://azure.microsoft.com/resources/azure-network-security/). The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet can't communicate directly with VMs in a different VNet even if both VNets are created by the same customer. [Networking isolation](../security/fundamentals/isolation-choices.md#networking-isolation) ensures that communication between your VMs remains private within a VNet. 
You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements. +The logical isolation of tenant infrastructure in a public multi-tenant cloud is [fundamental to maintaining security](https://azure.microsoft.com/solutions/network-security/). The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet) helps ensure that your private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet can't communicate directly with VMs in a different VNet even if both VNets are created by the same customer. [Networking isolation](../security/fundamentals/isolation-choices.md#networking-isolation) ensures that communication between your VMs remains private within a VNet. You can connect your VNets via [VNet peering](../virtual-network/virtual-network-peering-overview.md) or [VPN gateways](../vpn-gateway/vpn-gateway-about-vpngateways.md), depending on your connectivity options, including bandwidth, latency, and encryption requirements. This section describes how Azure provides isolation of network traffic among tenants and enforces that isolation with cryptographic certainty. 
The cumulative effect of network isolation restrictions is that each cloud servi > > *Extra resources:* > - **[Azure network security overview](../security/fundamentals/network-overview.md)**-> - **[Azure network security white paper](https://azure.microsoft.com/resources/azure-network-security/)** +> - **[Azure network security](https://azure.microsoft.com/solutions/network-security/)** ## Storage isolation Microsoft Azure separates your VM-based compute resources from storage as part of its [fundamental design](../security/fundamentals/isolation-choices.md#storage-isolation). The separation allows compute and storage to scale independently, making it easier to provide multi-tenancy and isolation. Therefore, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically. However, you can also choose to manage encryption with your own keys by specifyi Storage service encryption is enabled by default for all new and existing storage accounts and it [can't be disabled](../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption). As shown in Figure 17, the encryption process uses the following keys to help ensure cryptographic certainty of data isolation at rest: -- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is used for bulk encryption, and it's unique per storage account in Azure Storage. It's generated by the Azure Storage service as part of the storage account creation and is used derive a unique key for each block of data. The Storage Service always encrypts the DEK using either the Stamp Key or a Key Encryption Key if the customer has configured customer-managed key encryption.+- *Data Encryption Key (DEK)* is a symmetric AES-256 key that is used for bulk encryption, and it's unique per storage account in Azure Storage. It's generated by the Azure Storage service as part of the storage account creation and is used to derive a unique key for each block of data. 
The Storage Service always encrypts the DEK using either the Stamp Key or a Key Encryption Key if the customer has configured customer-managed key encryption. - *Key Encryption Key (KEK)* is an asymmetric RSA (2048 or greater) key managed by the customer and is used to encrypt the Data Encryption Key (DEK) using Azure Key Vault or Managed HSM. It's never exposed directly to the Azure Storage service or other services. - *Stamp Key (SK)* is a symmetric AES-256 key managed by Azure Storage. This key is used to protect the DEK when not using a customer-managed key. |
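The key hierarchy described above (a per-account DEK from which a unique key is derived for each block, with the DEK itself wrapped by a KEK or Stamp Key) can be illustrated with a small sketch. This is a toy model only: HMAC-SHA256 stands in for both the derivation and the wrapping steps, whereas Azure Storage actually uses AES-256 and does not publish its internal KDF.

```python
import hashlib
import hmac
import os

def derive_block_key(dek: bytes, block_id: int) -> bytes:
    """Derive a unique per-block key from the account's DEK (illustrative KDF)."""
    return hmac.new(dek, block_id.to_bytes(8, "big"), hashlib.sha256).digest()

def wrap_dek(kek: bytes, dek: bytes) -> bytes:
    """Stand-in for encrypting the DEK under a KEK or Stamp Key (XOR keystream toy)."""
    stream = hmac.new(kek, b"wrap", hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(dek, stream))

dek = os.urandom(32)   # symmetric key, unique per storage account
kek = os.urandom(32)   # customer-managed key in Key Vault, or the Stamp Key

k0, k1 = derive_block_key(dek, 0), derive_block_key(dek, 1)
assert k0 != k1                       # every block gets its own key
wrapped = wrap_dek(kek, dek)
assert wrap_dek(kek, wrapped) == dek  # unwrapping with the same KEK recovers the DEK
```

The point of the hierarchy is isolation: compromising one block key reveals nothing about other blocks, and the DEK never leaves storage unwrapped.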
azure-government | Compare Azure Government Global Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md | Title: Compare Azure Government and global Azure description: Describe feature differences between Azure Government and global Azure. --++ recommendations: false Last updated 06/08/2023 |
azure-government | Documentation Government Concept Naming Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-concept-naming-resources.md | Title: Considerations for naming Azure resources description: Guidance on naming Azure resources to prevent accidental spillage of sensitive data. --++ recommendations: false Last updated 01/28/2022 |
azure-government | Documentation Government Developer Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-developer-guide.md | description: Provides guidance on developing applications for Azure Government --++ recommendations: false Last updated 03/07/2022 |
azure-government | Documentation Government Get Started Connect To Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-get-started-connect-to-storage.md | description: Guidance for getting started with Storage on Azure Government -+ Last updated 10/01/2021 |
azure-government | Documentation Government Impact Level 5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md | description: Guidance for configuring Azure Government services for DoD Impact L --++ recommendations: false Last updated 04/02/2023 |
azure-government | Documentation Government Overview Dod | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-dod.md | Title: Azure Government DoD Overview | Microsoft Docs description: Features and guidance for using Azure Government DoD regions--++ |
azure-government | Documentation Government Overview Itar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-itar.md | Title: Azure support for export controls description: Customer guidance for Azure export control support --++ recommendations: false Last updated 05/31/2023 |
azure-government | Documentation Government Overview Jps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-jps.md | Title: Azure support for public safety and justice description: Guidance on using Azure cloud services for public safety and justice workloads. --++ recommendations: false Last updated 02/06/2023 |
azure-government | Documentation Government Overview Nerc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-nerc.md | Title: NERC CIP standards and cloud computing description: This article discusses implications of NERC CIP standards on cloud computing. It explores compliance assurances that cloud service providers can furnish to registered entities subject to compliance with NERC CIP standards.--++ recommendations: false |
azure-government | Documentation Government Overview Wwps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md | Title: Azure for secure worldwide public sector cloud adoption description: Customer guidance for Azure public sector cloud adoption--++ recommendations: false |
azure-government | Documentation Government Plan Compliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-plan-compliance.md | description: Provides an overview of the available compliance assurances for Azu --++ recommendations: false Last updated 04/02/2023 |
azure-government | Documentation Government Plan Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-plan-security.md | Title: Azure Government Security description: Customer guidance and best practices for securing Azure workloads.--++ recommendations: false |
azure-government | Documentation Government Stig Linux Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-linux-vm.md | Title: Deploy STIG-compliant Linux Virtual Machines (Preview) description: This quickstart shows you how to deploy a STIG-compliant Linux VM (Preview) from the Azure portal or Azure Government portal.--++ |
azure-government | Documentation Government Stig Windows Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-windows-vm.md | Title: Deploy STIG-compliant Windows Virtual Machines (Preview) description: This quickstart shows you how to deploy a STIG-compliant Windows VM (Preview) from the Azure portal or Azure Government portal.--++ |
azure-government | Documentation Government Welcome | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-welcome.md | |
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | Before you begin migrating from the Log Analytics agent to Azure Monitor Agent, Azure Monitor Agent is generally available for data collection. Most services that used Log Analytics agent for data collection have migrated to Azure Monitor Agent. -The following features and services now have and Azure Monitor Agent version (some are still in Public Preview). This means you can already choose to use Azure Monitor Agent to collect data when you enable the feature or service. +The following features and services now have an Azure Monitor Agent version (some are still in Public Preview). This means you can already choose to use Azure Monitor Agent to collect data when you enable the feature or service. | Service or feature | Migration recommendation | Other extensions installed | More information | | : | : | : | : | |
azure-monitor | Proactive Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-diagnostics.md | Alternatively, you can change the configuration by using Azure Resource Manager These diagnostic tools help you inspect the telemetry from your app: * [Metric explorer](../essentials/metrics-charts.md)-* [Search explorer](../app/diagnostic-search.md) +* [Search explorer](../app/search-and-transaction-diagnostics.md?tabs=transaction-search) * [Analytics: Powerful query language](../logs/log-analytics-tutorial.md) Smart detection is automatic, but if you want to set up more alerts, see: |
azure-monitor | Proactive Failure Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md | Notice that if you delete an Application Insights resource, the associated Failu An alert indicates that an abnormal rise in the failed request rate was detected. It's likely that there is some problem with your app or its environment. -To investigate further, click on 'View full details in Application Insights' the links in this page will take you straight to a [search page](../app/diagnostic-search.md) filtered to the relevant requests, exception, dependency, or traces. +To investigate further, click on 'View full details in Application Insights'; the links on this page will take you straight to a [search page](../app/search-and-transaction-diagnostics.md?tabs=transaction-search) filtered to the relevant requests, exception, dependency, or traces. You can also open the [Azure portal](https://portal.azure.com), navigate to the Application Insights resource for your app, and open the Failures page. Smart Detection of Failure Anomalies complements other similar but distinct feat These diagnostic tools help you inspect the data from your app: * [Metric explorer](../essentials/metrics-charts.md)-* [Search explorer](../app/diagnostic-search.md) +* [Search explorer](../app/search-and-transaction-diagnostics.md?tabs=transaction-search) * [Analytics - powerful query language](../logs/log-query-overview.md) Smart detections are automatic. But maybe you'd like to set up some more alerts? |
azure-monitor | Api Custom Events Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md | In Node.js projects, you can use `new applicationInsights.TelemetryClient(instru ## TrackEvent -In Application Insights, a *custom event* is a data point that you can display in [Metrics Explorer](../essentials/metrics-charts.md) as an aggregated count and in [Diagnostic Search](./diagnostic-search.md) as individual occurrences. (It isn't related to MVC or other framework "events.") +In Application Insights, a *custom event* is a data point that you can display in [Metrics Explorer](../essentials/metrics-charts.md) as an aggregated count and in [Diagnostic Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) as individual occurrences. (It isn't related to MVC or other framework "events.") Insert `TrackEvent` calls in your code to count various events. For example, you might want to track how often users choose a particular feature. Or you might want to know how often they achieve certain goals or make specific types of mistakes. The recommended way to send request telemetry is where the request acts as an <a ## Operation context -You can correlate telemetry items together by associating them with operation context. The standard request-tracking module does this for exceptions and other events that are sent while an HTTP request is being processed. In [Search](./diagnostic-search.md) and [Analytics](../logs/log-query-overview.md), you can easily find any events associated with the request by using its operation ID. +You can correlate telemetry items together by associating them with operation context. The standard request-tracking module does this for exceptions and other events that are sent while an HTTP request is being processed. 
In [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) and [Analytics](../logs/log-query-overview.md), you can easily find any events associated with the request by using its operation ID. For more information on correlation, see [Telemetry correlation in Application Insights](distributed-tracing-telemetry-correlation.md). requests Send exceptions to Application Insights: * To [count them](../essentials/metrics-charts.md), as an indication of the frequency of a problem.-* To [examine individual occurrences](./diagnostic-search.md). +* To [examine individual occurrences](./search-and-transaction-diagnostics.md?tabs=transaction-search). The reports include the stack traces. exceptions ## TrackTrace -Use `TrackTrace` to help diagnose problems by sending a "breadcrumb trail" to Application Insights. You can send chunks of diagnostic data and inspect them in [Diagnostic Search](./diagnostic-search.md). +Use `TrackTrace` to help diagnose problems by sending a "breadcrumb trail" to Application Insights. You can send chunks of diagnostic data and inspect them in [Diagnostic Search](./search-and-transaction-diagnostics.md?tabs=transaction-search). In .NET [Log adapters](./asp-net-trace-logs.md), use this API to send third-party logs to the portal. properties.put("Database", db.ID); telemetry.trackTrace("Slow Database response", SeverityLevel.Warning, properties); ``` -In [Search](./diagnostic-search.md), you can then easily filter out all the messages of a particular severity level that relate to a particular database. +In [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search), you can then easily filter out all the messages of a particular severity level that relate to a particular database. ### Traces in Log Analytics appInsights.setAuthenticatedUserContext(validatedId, accountId); In [Metrics Explorer](../essentials/metrics-charts.md), you can create a chart that counts **Users, Authenticated**, and **User accounts**. 
-You can also [search](./diagnostic-search.md) for client data points with specific user names and accounts. +You can also [search](./search-and-transaction-diagnostics.md?tabs=transaction-search) for client data points with specific user names and accounts. > [!NOTE] > The [EnableAuthenticationTrackingJavaScript property in the ApplicationInsightsServiceOptions class](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs) in the .NET Core SDK simplifies the JavaScript configuration needed to inject the user name as the Auth ID for each trace sent by the Application Insights JavaScript SDK. Azure alerts are only on metrics. Create a custom metric that crosses a value th ## <a name="next"></a>Next steps -* [Search events and logs](./diagnostic-search.md) +* [Search events and logs](./search-and-transaction-diagnostics.md?tabs=transaction-search) |
azure-monitor | Api Filtering Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md | What's the difference between telemetry processors and telemetry initializers? * [JavaScript SDK](https://github.com/Microsoft/ApplicationInsights-JS) ## <a name="next"></a>Next steps-* [Search events and logs](./diagnostic-search.md) +* [Search events and logs](./search-and-transaction-diagnostics.md?tabs=transaction-search) * [sampling](./sampling.md) |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Application Insights provides many experiences to enhance the performance, relia - [Application dashboard](overview-dashboard.md): An at-a-glance assessment of your application's health and performance. - [Application map](app-map.md): A visual overview of application architecture and components' interactions. - [Live metrics](live-stream.md): A real-time analytics dashboard for insight into application activity and performance.-- [Transaction search](diagnostic-search.md): Trace and diagnose transactions to identify issues and optimize performance.+- [Transaction search](search-and-transaction-diagnostics.md?tabs=transaction-search): Trace and diagnose transactions to identify issues and optimize performance. - [Availability view](availability-overview.md): Proactively monitor and test the availability and responsiveness of application endpoints. - Performance view: Review application performance metrics and potential bottlenecks. - Failures view: Identify and analyze failures in your application to minimize downtime. Review dedicated [troubleshooting articles](/troubleshoot/azure/azure-monitor/we - [Application dashboard](overview-dashboard.md) - [Application Map](app-map.md) - [Live metrics](live-stream.md)-- [Transaction search](diagnostic-search.md)+- [Transaction search](search-and-transaction-diagnostics.md?tabs=transaction-search) - [Availability overview](availability-overview.md) - [Users, sessions, and events](usage-segmentation.md) |
azure-monitor | App Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md | To provide feedback, use the feedback option. ## Next steps * To learn more about how correlation works in Application Insights, see [Telemetry correlation](distributed-tracing-telemetry-correlation.md).-* The [end-to-end transaction diagnostic experience](transaction-diagnostics.md) correlates server-side telemetry from across all your Application Insights-monitored components into a single view. +* The [end-to-end transaction diagnostic experience](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) correlates server-side telemetry from across all your Application Insights-monitored components into a single view. * For advanced correlation scenarios in ASP.NET Core and ASP.NET, see [Track custom operations](custom-operations-tracking.md). |
azure-monitor | Application Insights Asp Net Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md | See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap View your telemetry: - [Explore metrics](../essentials/metrics-charts.md) to monitor performance and usage.-- [Search events and logs](./diagnostic-search.md) to diagnose problems.+- [Search events and logs](./search-and-transaction-diagnostics.md?tabs=transaction-search) to diagnose problems. - [Use Log Analytics](../logs/log-query-overview.md) for more advanced queries. - [Create dashboards](./overview-dashboard.md). |
azure-monitor | Asp Net Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md | In the preceding cases, the proper way of validating that the instrumentation en ## Where to find dependency data * [Application Map](app-map.md) visualizes dependencies between your app and neighboring components.-* [Transaction Diagnostics](transaction-diagnostics.md) shows unified, correlated server data. +* [Transaction Diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) shows unified, correlated server data. * [Browsers tab](javascript.md) shows AJAX calls from your users' browsers. * Select from slow or failed requests to check their dependency calls. * [Analytics](#logs-analytics) can be used to query dependency data. Like every Application Insights SDK, the dependency collection module is also op ## Dependency auto-collection -Below is the currently supported list of dependency calls that are automatically detected as dependencies without requiring any additional modification to your application's code. These dependencies are visualized in the Application Insights [Application map](./app-map.md) and [Transaction diagnostics](./transaction-diagnostics.md) views. If your dependency isn't on the list below, you can still track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency). +Below is the currently supported list of dependency calls that are automatically detected as dependencies without requiring any additional modification to your application's code. These dependencies are visualized in the Application Insights [Application map](./app-map.md) and [Transaction diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) views. If your dependency isn't on the list below, you can still track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency). ### .NET |
azure-monitor | Asp Net Exceptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md | To get diagnostic data specific to your app, you can insert code to send your ow Using the <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient?displayProperty=fullName>, you have several APIs available: -* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./diagnostic-search.md). +* <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackEvent%2A?displayProperty=nameWithType> is typically used for monitoring usage patterns, but the data it sends also appears under **Custom Events** in diagnostic search. Events are named and can carry string properties and numeric metrics on which you can [filter your diagnostic searches](./search-and-transaction-diagnostics.md?tabs=transaction-search). * <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackTrace%2A?displayProperty=nameWithType> lets you send longer data such as POST information. * <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient.TrackException%2A?displayProperty=nameWithType> sends exception details, such as stack traces to Application Insights. -To see these events, on the left menu, open [Search](./diagnostic-search.md). Select the dropdown menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**. +To see these events, on the left menu, open [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search). Select the dropdown menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**. 
:::image type="content" source="./media/asp-net-exceptions/customevents.png" lightbox="./media/asp-net-exceptions/customevents.png" alt-text="Screenshot that shows the Search screen."::: Catch ex as Exception End Try ``` -The properties and measurements parameters are optional, but they're useful for [filtering and adding](./diagnostic-search.md) extra information. For example, if you have an app that can run several games, you could find all the exception reports related to a particular game. You can add as many items as you want to each dictionary. +The properties and measurements parameters are optional, but they're useful for [filtering and adding](./search-and-transaction-diagnostics.md?tabs=transaction-search) extra information. For example, if you have an app that can run several games, you could find all the exception reports related to a particular game. You can add as many items as you want to each dictionary. ## Browser exceptions |
azure-monitor | Asp Net Trace Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md | Perhaps your application sends voluminous amounts of data and you're using the A ## <a name="add"></a>Next steps * [Diagnose failures and exceptions in ASP.NET](asp-net-exceptions.md)-* [Learn more about Transaction Search](diagnostic-search.md) +* [Learn more about Transaction Search](search-and-transaction-diagnostics.md?tabs=transaction-search) * [Set up availability and responsiveness tests](availability-overview.md) <!--Link references--> [availability]: ./availability-overview.md-[diagnostic]: ./diagnostic-search.md +[diagnostic]: ./search-and-transaction-diagnostics.md?tabs=transaction-search [exceptions]: asp-net-exceptions.md [start]: ./app-insights-overview.md |
azure-monitor | Availability Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md | For advanced scenarios where the business logic must be adjusted to access the U * [Standard tests](availability-standard-tests.md) * [Availability alerts](availability-alerts.md) * [Application Map](./app-map.md)-* [Transaction diagnostics](./transaction-diagnostics.md) +* [Transaction diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) |
azure-monitor | Availability Standard Tests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md | From an availability test result, you can see the transaction details across all * Log an issue or work item in Git or Azure Boards to track the problem. The bug will contain a link to this event. * Open the web test result in Visual Studio. -To learn more about the end-to-end transaction diagnostics experience, see the [transaction diagnostics documentation](./transaction-diagnostics.md). +To learn more about the end-to-end transaction diagnostics experience, see the [transaction diagnostics documentation](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics). Select the exception row to see the details of the server-side exception that caused the synthetic availability test to fail. You can also get the [debug snapshot](./snapshot-debugger.md) for richer code-level diagnostics. |
azure-monitor | Codeless Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md | The following table shows the current state of autoinstrumentation availability. Links are provided to more information for each supported scenario. -|Environment/Resource provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python | -|-|||-|-|--| -|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) ¹ | :x: | -|Azure App Service on Windows - Publish as Docker | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) ² | :x: | :x: | -|Azure App Service on Linux - Publish as Code | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) ¹ | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: | -|Azure App Service on Linux - Publish as Docker | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: | -|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) ¹ | -|Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) | -|Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: | -|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | -|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) ² | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | -|On-premises VMs Windows | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) ³ | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | -|Standalone agent - any environment | 
:x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|Environment/Resource provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python | +|-|-|-|--|-|--| +|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: | +|Azure App Service on Windows - Publish as Docker | [ :white_check_mark: :link: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/public-preview-application-insights-auto-instrumentation-for/ba-p/3947971) <sup>[2](#Preview)</sup> | :x: | +|Azure App Service on Linux - Publish as Code | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: | +|Azure App Service on Linux - Publish as Docker | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: | +|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: 
](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | +|Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) | +|Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: | +|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|On-premises VMs Windows | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](application-insights-asp-net-agent.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | +|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](opentelemetry-enable.md?tabs=java) | :x: | :x: | **Footnotes**-- ¹: Application Insights is on by default and enabled automatically.-- ²: This feature is in public preview. See [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).-- ³: An agent must be deployed and configured.+- <a name="OnBD">1</a>: Application Insights is on by default and enabled automatically. +- <a name="Preview">2</a>: This feature is in public preview. See [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). 
+- <a name="Agent">3</a>: An agent must be deployed and configured. > [!NOTE] > Autoinstrumentation was known as "codeless attach" before October 2021. |
azure-monitor | Configuration With Applicationinsights Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md | Configure a [snapshot collection for ASP.NET applications](snapshot-debugger-vm. [api]: ./api-custom-events-metrics.md [client]: ./javascript.md-[diagnostic]: ./diagnostic-search.md +[diagnostic]: ./search-and-transaction-diagnostics.md?tabs=transaction-search [exceptions]: ./asp-net-exceptions.md [netlogs]: ./asp-net-trace-logs.md [new]: ./create-workspace-resource.md |
azure-monitor | Custom Operations Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md | When you instrument message deletion, make sure you set the operation (correlati ### Dependency types -Application Insights uses dependency type to customize UI experiences. For queues, it recognizes the following types of `DependencyTelemetry` that improve [Transaction diagnostics experience](./transaction-diagnostics.md): +Application Insights uses dependency type to customize UI experiences. For queues, it recognizes the following types of `DependencyTelemetry` that improve [Transaction diagnostics experience](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics): - `Azure queue` for Azure Storage queues - `Azure Event Hubs` for Azure Event Hubs Each Application Insights operation (request or dependency) involves `Activity`. ## Next steps - Learn the basics of [telemetry correlation](distributed-tracing-telemetry-correlation.md) in Application Insights.-- Check out how correlated data powers [transaction diagnostics experience](./transaction-diagnostics.md) and [Application Map](./app-map.md).+- Check out how correlated data powers [transaction diagnostics experience](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) and [Application Map](./app-map.md). - See the [data model](./data-model-complete.md) for Application Insights types and data model. - Report custom [events and metrics](./api-custom-events-metrics.md) to Application Insights. - Check out standard [configuration](configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet) for context properties collection. |
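The entry above is about making sure related telemetry shares one operation (correlation) ID, which is what lets the transaction diagnostics view stitch requests and dependencies into a single timeline. A rough Python sketch of that grouping step — the flattened item shape and field names are illustrative assumptions, not the SDK's data model:

```python
from collections import defaultdict

# Hypothetical flattened telemetry items; field names are illustrative.
telemetry = [
    {"type": "request",    "name": "POST /order",      "operation_id": "op-1", "ts": 0.0},
    {"type": "dependency", "name": "Azure queue",      "operation_id": "op-1", "ts": 0.4},
    {"type": "request",    "name": "GET /health",      "operation_id": "op-2", "ts": 0.1},
    {"type": "dependency", "name": "Azure Event Hubs", "operation_id": "op-1", "ts": 0.9},
]

def group_by_operation(items):
    """Group telemetry by operation ID and order each group by timestamp,
    mirroring how correlated items are assembled into one transaction."""
    ops = defaultdict(list)
    for item in items:
        ops[item["operation_id"]].append(item)
    for group in ops.values():
        group.sort(key=lambda i: i["ts"])
    return dict(ops)

transactions = group_by_operation(telemetry)
```

If a message-deletion call were tracked without setting `operation_id`, it would land in its own group here — which is exactly why the entry stresses setting the correlation ID when instrumenting queue operations.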
azure-monitor | Diagnostic Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md | - Title: Use Search in Azure Application Insights | Microsoft Docs -description: Search and filter raw telemetry sent by your web app. - Previously updated : 09/12/2023-----# Use Search in Application Insights --Transaction search is a feature of [Application Insights](./app-insights-overview.md) that you use to find and explore individual telemetry items, such as page views, exceptions, or web requests. You can also view log traces and events that you've coded. --For more complex queries over your data, use [Log Analytics](../logs/log-analytics-tutorial.md). --## Where do you see Search? --You can find **Search** in the Azure portal or Visual Studio. --### In the Azure portal --You can open transaction search from the Application Insights **Overview** tab of your application. You can also select **Search** under **Investigate** on the left menu. ---Go to the **Event types** dropdown menu to see a list of telemetry items such as server requests, page views, and custom events that you've coded. At the top of the **Results** list is a summary chart showing counts of events over time. --Back out of the dropdown menu or select **Refresh** to get new events. --### In Visual Studio --In Visual Studio, there's also an **Application Insights Search** window. It's most useful for displaying telemetry events generated by the application that you're debugging. But it can also show the events collected from your published app at the Azure portal. --Open the **Application Insights Search** window in Visual Studio: ---The **Application Insights Search** window has features similar to the web portal: ---The **Track Operation** tab is available when you open a request or a page view. An "operation" is a sequence of events that's associated with a single request or page view. 
For example, dependency calls, exceptions, trace logs, and custom events might be part of a single operation. The **Track Operation** tab shows graphically the timing and duration of these events in relation to the request or page view. --## Inspect individual items --Select any telemetry item to see key fields and related items. ---The end-to-end transaction details view opens. --## Filter event types --Open the **Event types** dropdown menu and choose the event types you want to see. If you want to restore the filters later, select **Reset**. --The event types are: --* **Trace**: [Diagnostic logs](./asp-net-trace-logs.md) including TrackTrace, log4Net, NLog, and System.Diagnostic.Trace calls. -* **Request**: HTTP requests received by your server application including pages, scripts, images, style files, and data. These events are used to create the request and response overview charts. -* **Page View**: [Telemetry sent by the web client](./javascript.md) used to create page view reports. -* **Custom Event**: If you inserted calls to `TrackEvent()` to [monitor usage](./api-custom-events-metrics.md), you can search them here. -* **Exception**: Uncaught [exceptions in the server](./asp-net-exceptions.md), and the exceptions that you log by using `TrackException()`. -* **Dependency**: [Calls from your server application](./asp-net-dependencies.md) to other services such as REST APIs or databases, and AJAX calls from your [client code](./javascript.md). -* **Availability**: Results of [availability tests](availability-overview.md) --## Filter on property values --You can filter events on the values of their properties. The available properties depend on the event types you selected. Select **Filter** :::image type="content" source="./media/diagnostic-search/filter-icon.png" lightbox="./media/diagnostic-search/filter-icon.png" alt-text="Filter icon"::: to start. --Choosing no values of a particular property has the same effect as choosing all values. 
It switches off filtering on that property. --Notice that the counts to the right of the filter values show how many occurrences there are in the current filtered set. --## Find events with the same property --To find all the items with the same property value, either enter it in the **Search** box or select the checkbox when you look through properties on the **Filter** tab. ---## Search the data --> [!NOTE] -> To write more complex queries, open [Logs (Analytics)](../logs/log-analytics-tutorial.md) at the top of the **Search** pane. -> --You can search for terms in any of the property values. This capability is useful if you've written [custom events](./api-custom-events-metrics.md) with property values. --You might want to set a time range because searches over a shorter range are faster. ---Search for complete words, not substrings. Use quotation marks to enclose special characters. --| String | *Not* found | Found | -| | | | -| HomeController.About |`home`<br/>`controller`<br/>`out` | `homecontroller`<br/>`about`<br/>`"homecontroller.about"`| -|United States|`Uni`<br/>`ted`|`united`<br/>`states`<br/>`united AND states`<br/>`"united states"` --You can use the following search expressions: --| Sample query | Effect | -| | | -| `apple` |Find all events in the time range whose fields include the word "apple". | -| `apple AND banana` <br/>`apple banana` |Find events that contain both words. Use capital "AND", not "and". <br/>Short form. | -| `apple OR banana` |Find events that contain either word. Use "OR", not "or". | -| `apple NOT banana` |Find events that contain one word but not the other. | --## Sampling --If your app generates a large amount of telemetry, and you're using the ASP.NET SDK version 2.0.0-beta3 or later, the adaptive sampling module automatically reduces the volume that's sent to the portal by sending only a representative fraction of events. 
Events that are related to the same request are selected or deselected as a group so that you can navigate between related events. --Learn about [sampling](./sampling.md). --## Create work item --You can create a bug in GitHub or Azure DevOps with the details from any telemetry item. --Go to the end-to-end transaction detail view by selecting any telemetry item. Then select **Create work item**. ---The first time you do this step, you're asked to configure a link to your Azure DevOps organization and project. You can also configure the link on the **Work Items** tab. --## Send more telemetry to Application Insights --In addition to the out-of-the-box telemetry sent by Application Insights SDK, you can: --* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./opentelemetry-add-modify.md?tabs=java#logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events. --* [Write code](./api-custom-events-metrics.md) to send custom events, page views, and exceptions. --Learn how to [send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md). --## <a name="questions"></a>Frequently asked questions --Find answers to common questions. --### <a name="limits"></a>How much data is retained? --See the [Limits summary](../service-limits.md#application-insights). --### How can I see POST data in my server requests? --We don't log the POST data automatically, but you can use [TrackTrace or log calls](./asp-net-trace-logs.md). Put the POST data in the message parameter. You can't filter on the message in the same way you can filter on properties, but the size limit is longer. --### Why does my Azure Function search return no results? --The URL query strings are not logged by Azure Functions. 
--## <a name="add"></a>Next steps --* [Write complex queries in Analytics](../logs/log-analytics-tutorial.md) -* [Send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md) -* [Availability overview](availability-overview.md) |
azure-monitor | Distributed Tracing Telemetry Correlation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md | -Azure Monitor provides two experiences for consuming distributed trace data: the [transaction diagnostics](./transaction-diagnostics.md) view for a single transaction/request and the [application map](./app-map.md) view to show how systems interact. +Azure Monitor provides two experiences for consuming distributed trace data: the [transaction diagnostics](./search-and-transaction-diagnostics.md?tabs=transaction-diagnostics) view for a single transaction/request and the [application map](./app-map.md) view to show how systems interact. [Application Insights](app-insights-overview.md#application-insights-overview) can monitor each component separately and detect which component is responsible for failures or performance degradation by using distributed telemetry correlation. This article explains the data model, context-propagation techniques, protocols, and implementation of correlation tactics on different languages and platforms used by Application Insights. |
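One of the context-propagation protocols the entry above refers to is the W3C Trace Context `traceparent` header, which carries the trace ID and parent span ID between components. A minimal sketch of parsing it (the header format is from the W3C spec; this validator is simplified and skips spec details such as rejecting all-zero IDs):

```python
import re

# traceparent: version-traceid-parentid-flags, all lowercase hex.
TRACEPARENT = re.compile(
    r"^(?P<version>[0-9a-f]{2})-(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<parent_id>[0-9a-f]{16})-(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header):
    """Split a W3C traceparent header into its four fields, or return
    None if the header is malformed."""
    m = TRACEPARENT.match(header)
    return m.groupdict() if m else None

parts = parse_traceparent(
    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
)
```

The shared `trace_id` is what ties telemetry from different components into one distributed trace; each hop replaces only the `parent_id` with its own span ID before forwarding the header.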
azure-monitor | Live Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md | If you open the Live Metrics pane, the SDKs switch to a higher frequency mode an ## Next steps * [Monitor usage with Application Insights](./usage-overview.md)-* [Use Diagnostic Search](./diagnostic-search.md) +* [Use Diagnostic Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) * [Profiler](./profiler.md) * [Snapshot Debugger](./snapshot-debugger.md) |
azure-monitor | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md | Because the SDK batches data for submission, there might be a delay before items * Continue to use the application. Take more actions to generate more telemetry. * Select **Refresh** in the portal resource view. Charts periodically refresh on their own, but manually refreshing forces them to refresh immediately. * Verify that [required outgoing ports](./ip-addresses.md) are open.-* Use [Search](./diagnostic-search.md) to look for specific events. +* Use [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search) to look for specific events. * Check the [FAQ][FAQ]. ## Basic usage |
azure-monitor | Release And Work Item Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/release-and-work-item-insights.md | To delete, go to in your Application Insights resource under *Configure* select ## See also * [Azure Pipelines documentation](/azure/devops/pipelines)-* [Create work items](./diagnostic-search.md#create-work-item) +* [Create work items](./search-and-transaction-diagnostics.md?tabs=transaction-search#create-work-item) * [Automation with PowerShell](./powershell.md) * [Availability test](availability-overview.md) |
azure-monitor | Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md | Ingestion sampling doesn't operate while adaptive or fixed-rate sampling is in o **Use fixed-rate sampling if:** -* You want synchronized sampling between client and server so that, when you're investigating events in [Search](./diagnostic-search.md), you can navigate between related events on the client and server, such as page views and HTTP requests. +* You want synchronized sampling between client and server so that, when you're investigating events in [Search](./search-and-transaction-diagnostics.md?tabs=transaction-search), you can navigate between related events on the client and server, such as page views and HTTP requests. * You're confident of the appropriate sampling percentage for your app. It should be high enough to get accurate metrics, but below the rate that exceeds your pricing quota and the throttling limits. **Use adaptive sampling:** |
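The synchronized client/server sampling the entry above describes requires both sides to reach the same keep-or-drop decision for a given operation without coordinating. One way that can work is a deterministic hash of the operation ID — sketched below in Python as an illustration of the idea, not the SDK's actual hashing algorithm:

```python
import hashlib

def should_sample(operation_id: str, percentage: float) -> bool:
    """Deterministic sampling decision: hash the operation ID to a
    number in [0, 100) and keep the item if it falls under the
    sampling percentage. Any component hashing the same ID reaches
    the same decision, so related events survive or drop together."""
    digest = hashlib.sha256(operation_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64 * 100
    return bucket < percentage

# "Client" and "server" agree on every operation without talking.
op = "a1b2c3-example-operation"
client_decision = should_sample(op, 25)
server_decision = should_sample(op, 25)
```

Because the decision depends only on the ID and the rate, page views sampled in on the client line up with the HTTP requests sampled in on the server, which is what makes Search navigation between the two sides work.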
azure-monitor | Search And Transaction Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/search-and-transaction-diagnostics.md | + + Title: Transaction Search and Diagnostics +description: This article explains Application Insights end-to-end transaction diagnostics and how to search and filter raw telemetry sent by your web app. + Last updated : 10/13/2023+++++# Transaction Search and Diagnostics ++Azure Monitor Application Insights offers Transaction Search for pinpointing specific telemetry items and Transaction Diagnostics for comprehensive end-to-end transaction analysis. ++**Transaction Search**: This experience enables users to locate and examine individual telemetry items such as page views, exceptions, and web requests. Additionally, it offers the capability to view log traces and events that have been coded into the application. It's designed for the specific identification of performance issues or errors within the application. ++**Transaction Diagnostics**: Quickly identify issues in components through comprehensive insight into end-to-end transaction details, including dependencies and exceptions. Access this feature via the Search interface by choosing an item from the search results. ++## [Transaction Search](#tab/transaction-search) ++Transaction search is a feature of [Application Insights](./app-insights-overview.md) that you use to find and explore individual telemetry items, such as page views, exceptions, or web requests. You can also view log traces and events that you've coded. ++For more complex queries over your data, use [Log Analytics](../logs/log-analytics-tutorial.md). ++## Where do you see Search? ++You can find **Search** in the Azure portal or Visual Studio. ++### In the Azure portal ++You can open transaction search from the Application Insights **Overview** tab of your application. You can also select **Search** under **Investigate** on the left menu. 
+++Go to the **Event types** dropdown menu to see a list of telemetry items such as server requests, page views, and custom events that you've coded. At the top of the **Results** list is a summary chart showing counts of events over time. ++Back out of the dropdown menu or select **Refresh** to get new events. ++### In Visual Studio ++In Visual Studio, there's also an **Application Insights Search** window. It's most useful for displaying telemetry events generated by the application that you're debugging. But it can also show the events collected from your published app at the Azure portal. ++Open the **Application Insights Search** window in Visual Studio: +++The **Application Insights Search** window has features similar to the web portal: +++The **Track Operation** tab is available when you open a request or a page view. An "operation" is a sequence of events that's associated with a single request or page view. For example, dependency calls, exceptions, trace logs, and custom events might be part of a single operation. The **Track Operation** tab shows graphically the timing and duration of these events in relation to the request or page view. ++## Inspect individual items ++Select any telemetry item to see key fields and related items. +++The end-to-end transaction details view opens. ++## Filter event types ++Open the **Event types** dropdown menu and choose the event types you want to see. If you want to restore the filters later, select **Reset**. ++The event types are: ++* **Trace**: [Diagnostic logs](./asp-net-trace-logs.md) including TrackTrace, log4Net, NLog, and System.Diagnostic.Trace calls. +* **Request**: HTTP requests received by your server application including pages, scripts, images, style files, and data. These events are used to create the request and response overview charts. +* **Page View**: [Telemetry sent by the web client](./javascript.md) used to create page view reports. 
+* **Custom Event**: If you inserted calls to `TrackEvent()` to [monitor usage](./api-custom-events-metrics.md), you can search them here. +* **Exception**: Uncaught [exceptions in the server](./asp-net-exceptions.md), and the exceptions that you log by using `TrackException()`. +* **Dependency**: [Calls from your server application](./asp-net-dependencies.md) to other services such as REST APIs or databases, and AJAX calls from your [client code](./javascript.md). +* **Availability**: Results of [availability tests](availability-overview.md) ++## Filter on property values ++You can filter events on the values of their properties. The available properties depend on the event types you selected. Select **Filter** :::image type="content" source="./media/search-and-transaction-diagnostics/filter-icon.png" lightbox="./media/search-and-transaction-diagnostics/filter-icon.png" alt-text="Filter icon"::: to start. ++Choosing no values of a particular property has the same effect as choosing all values. It switches off filtering on that property. ++Notice that the counts to the right of the filter values show how many occurrences there are in the current filtered set. ++## Find events with the same property ++To find all the items with the same property value, either enter it in the **Search** box or select the checkbox when you look through properties on the **Filter** tab. +++## Search the data ++> [!NOTE] +> To write more complex queries, open [Logs (Analytics)](../logs/log-analytics-tutorial.md) at the top of the **Search** pane. +> ++You can search for terms in any of the property values. This capability is useful if you've written [custom events](./api-custom-events-metrics.md) with property values. ++You might want to set a time range because searches over a shorter range are faster. +++Search for complete words, not substrings. Use quotation marks to enclose special characters. 
++| String | *Not* found | Found | +| | | | +| HomeController.About |`home`<br/>`controller`<br/>`out` | `homecontroller`<br/>`about`<br/>`"homecontroller.about"`| +|United States|`Uni`<br/>`ted`|`united`<br/>`states`<br/>`united AND states`<br/>`"united states"` ++You can use the following search expressions: ++| Sample query | Effect | +| | | +| `apple` |Find all events in the time range whose fields include the word "apple". | +| `apple AND banana` <br/>`apple banana` |Find events that contain both words. Use capital "AND", not "and". <br/>Short form. | +| `apple OR banana` |Find events that contain either word. Use "OR", not "or". | +| `apple NOT banana` |Find events that contain one word but not the other. | ++## Sampling ++If your app generates a large amount of telemetry, and you're using the ASP.NET SDK version 2.0.0-beta3 or later, the adaptive sampling module automatically reduces the volume that's sent to the portal by sending only a representative fraction of events. Events that are related to the same request are selected or deselected as a group so that you can navigate between related events. ++Learn about [sampling](./sampling.md). ++## Create work item ++You can create a bug in GitHub or Azure DevOps with the details from any telemetry item. ++Go to the end-to-end transaction detail view by selecting any telemetry item. Then select **Create work item**. +++The first time you do this step, you're asked to configure a link to your Azure DevOps organization and project. You can also configure the link on the **Work Items** tab. ++## Send more telemetry to Application Insights ++In addition to the out-of-the-box telemetry sent by Application Insights SDK, you can: ++* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./opentelemetry-add-modify.md?tabs=java#logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events. 
++* [Write code](./api-custom-events-metrics.md) to send custom events, page views, and exceptions. ++Learn how to [send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md). ++## <a name="questions"></a>Frequently asked questions ++Find answers to common questions. ++### <a name="limits"></a>How much data is retained? ++See the [Limits summary](../service-limits.md#application-insights). ++### How can I see POST data in my server requests? ++We don't log the POST data automatically, but you can use [TrackTrace or log calls](./asp-net-trace-logs.md). Put the POST data in the message parameter. You can't filter on the message in the same way you can filter on properties, but the size limit is longer. ++### Why does my Azure Function search return no results? ++The URL query strings are not logged by Azure Functions. ++## [Transaction Diagnostics](#tab/transaction-diagnostics) ++The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn't matter if you have multiple resources. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure. ++## What is a component? ++Components are independently deployable parts of your distributed or microservice application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components. ++* Components are different from "observed" external dependencies, such as SQL and event hubs, which your team or organization might not have access to (code or telemetry). +* Components run on any number of server, role, or container instances. +* Components can be separate Application Insights instrumentation keys, even if subscriptions are different. 
Components also can be different roles that report to a single Application Insights instrumentation key. The new experience shows details across all components, regardless of how they were set up. ++> [!NOTE] +> Are you missing the related item links? All the related telemetry is on the left side in the [top](#cross-component-transaction-chart) and [bottom](#all-telemetry-with-this-operation-id) sections. ++## Transaction diagnostics experience ++This view has four key parts: a results list, a cross-component transaction chart, a time-sequence list of all telemetry related to this operation, and the details pane for any selected telemetry item on the left. +++## Cross-component transaction chart ++This chart provides a timeline with horizontal bars during requests and dependencies across components. Any exceptions that are collected are also marked on the timeline. ++- The top row on this chart represents the entry point. It's the incoming request to the first component called in this transaction. The duration is the total time taken for the transaction to complete. +- Any calls to external dependencies are simple noncollapsible rows, with icons that represent the dependency type. +- Calls to other components are collapsible rows. Each row corresponds to a specific operation invoked at the component. +- By default, the request, dependency, or exception that you selected appears on the right side. Select any row to see its [details](#details-of-the-selected-telemetry). ++> [!NOTE] +> Calls to other components have two rows. One row represents the outbound call (dependency) from the caller component. The other row corresponds to the inbound request at the called component. The leading icon and distinct styling of the duration bars help differentiate between them. ++## All telemetry with this Operation ID ++This section shows a flat list view in a time sequence of all the telemetry related to this transaction. 
It also shows the custom events and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component or call. You can select any telemetry item in this list to see corresponding [details on the right](#details-of-the-selected-telemetry). +++## Details of the selected telemetry ++This collapsible pane shows the detail of any selected item from the transaction chart or the list. **Show all** lists all the standard attributes that are collected. Any custom attributes are listed separately under the standard set. Select the ellipsis button (...) under the **Call Stack** trace window to get an option to copy the trace. **Open profiler traces** and **Open debug snapshot** show code-level diagnostics in corresponding detail panes. +++## Search results ++This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details of the preceding three sections. We try to find samples that are most likely to have the details available from all components, even if sampling is in effect in any of them. These samples are shown as suggestions. +++## Profiler and Snapshot Debugger ++[Application Insights Profiler](./profiler.md) or [Snapshot Debugger](snapshot-debugger.md) help with code-level diagnostics of performance and failure issues. With this experience, you can see Profiler traces or snapshots from any component with a single selection. ++If you can't get Profiler working, contact serviceprofilerhelp\@microsoft.com. ++If you can't get Snapshot Debugger working, contact snapshothelp\@microsoft.com. +++## Frequently asked questions ++This section provides answers to common questions. ++### Why do I see a single component on the chart and the other components only show as external dependencies without any details? ++Potential reasons: ++* Are the other components instrumented with Application Insights? 
+* Are they using the latest stable Application Insights SDK? +* If these components are separate Application Insights resources, do you have required [access](resources-roles-access-control.md)? +If you do have access and the components are instrumented with the latest Application Insights SDKs, let us know via the feedback channel in the upper-right corner. ++### I see duplicate rows for the dependencies. Is this behavior expected? ++Currently, we're showing the outbound dependency call separate from the inbound request. Typically, the two calls look identical with only the duration value being different because of the network round trip. The leading icon and distinct styling of the duration bars help differentiate between them. Is this presentation of the data confusing? Give us your feedback! ++### What about clock skews across different component instances? ++Timelines are adjusted for clock skews in the transaction chart. You can see the exact timestamps in the details pane or by using Log Analytics. ++### Why is the new experience missing most of the related items queries? ++This behavior is by design. All the related items, across all components, are already available on the left side in the top and bottom sections. The new experience has two related items that the left side doesn't cover: all telemetry from five minutes before and after this event and the user timeline. ++### Is there a way to see fewer events per transaction when I use the Application Insights JavaScript SDK? ++The transaction diagnostics experience shows all telemetry in a [single operation](distributed-tracing-telemetry-correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-complete.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. 
In a single-page application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated. As a result, many events might be correlated to the same operation. ++In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your SPA. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so that a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation ID, call `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event also resets the Operation ID. ++### Why do transaction detail durations not add up to the top-request duration? ++Time not explained in the Gantt chart is time that isn't covered by a tracked dependency. This issue can occur because external calls weren't instrumented, either automatically or manually. It can also occur because the time taken was in process rather than because of an external call. ++If all calls were instrumented, in process is the likely root cause for the time spent. A useful tool for diagnosing the process is the [Application Insights profiler](./profiler.md). ++### What if I see the message ***Error retrieving data*** while navigating Application Insights in the Azure portal? ++This error indicates that the browser was unable to call into a required API or the API returned a failure response. To troubleshoot the behavior, open a browser [InPrivate window](https://support.microsoft.com/microsoft-edge/browse-inprivate-in-microsoft-edge-cd2c9a48-0bc4-b98e-5e46-ac40c84e27e2) and [disable any browser extensions](https://support.microsoft.com/microsoft-edge/add-turn-off-or-remove-extensions-in-microsoft-edge-9c0ec68c-2fbc-2f2c-9ff0-bdc76f46b026) that are running, then identify if you can still reproduce the portal behavior. 
If the portal error still occurs, try testing with other browsers or other machines, and investigate DNS or other network-related issues from the client machine where the API calls are failing. If the portal error persists and requires further investigation, [collect a browser network trace](../../azure-portal/capture-browser-trace.md#capture-a-browser-trace-for-troubleshooting) while you reproduce the unexpected portal behavior, and open a support case from the Azure portal. |
++++## See also ++* [Write complex queries in Analytics](../logs/log-analytics-tutorial.md) +* [Send logs and custom telemetry to Application Insights](./asp-net-trace-logs.md) +* [Availability overview](availability-overview.md) |
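The SPA guidance above can be sketched as a configuration fragment. This is illustrative only: the connection string is a placeholder, and `enableAutoRouteTracking` is the option named in the article.

```javascript
// Illustrative configuration for the Application Insights JavaScript SDK in an SPA.
// The connection string below is a placeholder, not a real value.
const config = {
  connectionString: '<your-connection-string>',
  // Emit a page view (and start a new operation) on every URL route change.
  enableAutoRouteTracking: true
};

// To start a fresh Operation ID manually, the article quotes:
// appInsights.properties.context.telemetryTrace.traceID =
//   Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId();
```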
azure-monitor | Separate Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md | You need the instrumentation keys of all the resources to which your app will se ## Filter on the build number When you publish a new version of your app, you'll want to be able to separate the telemetry from different builds. -You can set the **Application Version** property so that you can filter [search](../../azure-monitor/app/diagnostic-search.md) and [metric explorer](../../azure-monitor/essentials/metrics-charts.md) results. +You can set the **Application Version** property so that you can filter [search](../../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-search) and [metric explorer](../../azure-monitor/essentials/metrics-charts.md) results. There are several different methods of setting the **Application Version** property. To track the application version, make sure `buildinfo.config` is generated by y </PropertyGroup> ``` -When the Application Insights web module has the build information, it automatically adds **Application Version** as a property to every item of telemetry. For this reason, you can filter by version when you perform [diagnostic searches](../../azure-monitor/app/diagnostic-search.md) or when you [explore metrics](../../azure-monitor/essentials/metrics-charts.md). +When the Application Insights web module has the build information, it automatically adds **Application Version** as a property to every item of telemetry. For this reason, you can filter by version when you perform [diagnostic searches](../../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-search) or when you [explore metrics](../../azure-monitor/essentials/metrics-charts.md). The build version number is generated only by the Microsoft Build Engine, not by the developer build from Visual Studio. |
azure-monitor | Transaction Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md | - Title: Application Insights transaction diagnostics | Microsoft Docs -description: This article explains Application Insights end-to-end transaction diagnostics. - Previously updated : 11/15/2022----# Unified cross-component transaction diagnostics --The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn't matter if you have multiple resources. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure. --## What is a component? --Components are independently deployable parts of your distributed or microservice application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components. --* Components are different from "observed" external dependencies, such as SQL and event hubs, which your team or organization might not have access to (code or telemetry). -* Components run on any number of server, role, or container instances. -* Components can be separate Application Insights instrumentation keys, even if subscriptions are different. Components also can be different roles that report to a single Application Insights instrumentation key. The new experience shows details across all components, regardless of how they were set up. --> [!NOTE] -> Are you missing the related item links? All the related telemetry is on the left side in the [top](#cross-component-transaction-chart) and [bottom](#all-telemetry-with-this-operation-id) sections. 
--## Transaction diagnostics experience --This view has four key parts: a results list, a cross-component transaction chart, a time-sequence list of all telemetry related to this operation, and the details pane for any selected telemetry item on the left. ---## Cross-component transaction chart --This chart provides a timeline with horizontal bars during requests and dependencies across components. Any exceptions that are collected are also marked on the timeline. --1. The top row on this chart represents the entry point. It's the incoming request to the first component called in this transaction. The duration is the total time taken for the transaction to complete. -1. Any calls to external dependencies are simple noncollapsible rows, with icons that represent the dependency type. -1. Calls to other components are collapsible rows. Each row corresponds to a specific operation invoked at the component. -1. By default, the request, dependency, or exception that you selected appears on the right side. Select any row to see its [details](#details-of-the-selected-telemetry). --> [!NOTE] -> Calls to other components have two rows. One row represents the outbound call (dependency) from the caller component. The other row corresponds to the inbound request at the called component. The leading icon and distinct styling of the duration bars help differentiate between them. --## All telemetry with this Operation ID --This section shows a flat list view in a time sequence of all the telemetry related to this transaction. It also shows the custom events and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component or call. You can select any telemetry item in this list to see corresponding [details on the right](#details-of-the-selected-telemetry). ---## Details of the selected telemetry --This collapsible pane shows the detail of any selected item from the transaction chart or the list. 
**Show all** lists all the standard attributes that are collected. Any custom attributes are listed separately under the standard set. Select the ellipsis button (...) under the **Call Stack** trace window to get an option to copy the trace. **Open profiler traces** and **Open debug snapshot** show code-level diagnostics in corresponding detail panes. ---## Search results --This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details of the preceding three sections. We try to find samples that are most likely to have the details available from all components, even if sampling is in effect in any of them. These samples are shown as suggestions. ---## Profiler and Snapshot Debugger --[Application Insights Profiler](./profiler.md) or [Snapshot Debugger](snapshot-debugger.md) help with code-level diagnostics of performance and failure issues. With this experience, you can see Profiler traces or snapshots from any component with a single selection. --If you can't get Profiler working, contact serviceprofilerhelp\@microsoft.com. --If you can't get Snapshot Debugger working, contact snapshothelp\@microsoft.com. ---## Frequently asked questions --This section provides answers to common questions. --### Why do I see a single component on the chart and the other components only show as external dependencies without any details? --Potential reasons: --* Are the other components instrumented with Application Insights? -* Are they using the latest stable Application Insights SDK? -* If these components are separate Application Insights resources, do you have required [access](resources-roles-access-control.md)? -If you do have access and the components are instrumented with the latest Application Insights SDKs, let us know via the feedback channel in the upper-right corner. --### I see duplicate rows for the dependencies. Is this behavior expected? 
--Currently, we're showing the outbound dependency call separate from the inbound request. Typically, the two calls look identical with only the duration value being different because of the network round trip. The leading icon and distinct styling of the duration bars help differentiate between them. Is this presentation of the data confusing? Give us your feedback! --### What about clock skews across different component instances? --Timelines are adjusted for clock skews in the transaction chart. You can see the exact timestamps in the details pane or by using Log Analytics. --### Why is the new experience missing most of the related items queries? --This behavior is by design. All the related items, across all components, are already available on the left side in the top and bottom sections. The new experience has two related items that the left side doesn't cover: all telemetry from five minutes before and after this event and the user timeline. --### Is there a way to see fewer events per transaction when I use the Application Insights JavaScript SDK? --The transaction diagnostics experience shows all telemetry in a [single operation](distributed-tracing-telemetry-correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-complete.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a single-page application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated. As a result, many events might be correlated to the same operation. --In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your SPA. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so that a page view is generated every time the URL route is updated (logical page view occurs). 
If you want to manually refresh the Operation ID, call `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event also resets the Operation ID. --### Why do transaction detail durations not add up to the top-request duration? --Time not explained in the Gantt chart is time that isn't covered by a tracked dependency. This issue can occur because external calls weren't instrumented, either automatically or manually. It can also occur because the time taken was in process rather than because of an external call. --If all calls were instrumented, in process is the likely root cause for the time spent. A useful tool for diagnosing the process is the [Application Insights profiler](./profiler.md). --### What if I see the message ***Error retrieving data*** while navigating Application Insights in the Azure portal? --This error indicates that the browser was unable to call into a required API or the API returned a failure response. To troubleshoot the behavior, open a browser [InPrivate window](https://support.microsoft.com/microsoft-edge/browse-inprivate-in-microsoft-edge-cd2c9a48-0bc4-b98e-5e46-ac40c84e27e2) and [disable any browser extensions](https://support.microsoft.com/microsoft-edge/add-turn-off-or-remove-extensions-in-microsoft-edge-9c0ec68c-2fbc-2f2c-9ff0-bdc76f46b026) that are running, then identify if you can still reproduce the portal behavior. If the portal error still occurs, try testing with other browsers, or other machines, investigate DNS or other network related issues from the client machine where the API calls are failing. If the portal error persists and requires further investigations, then [collect a browser network trace](../../azure-portal/capture-browser-trace.md#capture-a-browser-trace-for-troubleshooting) while you reproduce the unexpected portal behavior and open a support case from the Azure portal. |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | General|[Azure Monitor cost and usage](usage-estimated-costs.md)|Added section d Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|A caution has been added about using community libraries with additional information on how to request we include them in our distro.| Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|Support and feedback options are now available across all of our OpenTelemetry pages.| Application-Insights|[How many Application Insights resources should I deploy?](app/separate-resources.md)|We added an important warning about additional network costs when monitoring across regions.|-Application-Insights|[Use Search in Application Insights](app/diagnostic-search.md)|We clarified that URL query strings are not logged by Azure Functions and that URL query strings won't show up in searches.| +Application-Insights|[Use Search in Application Insights](app/search-and-transaction-diagnostics.md?tabs=transaction-search)|We clarified that URL query strings are not logged by Azure Functions and that URL query strings won't show up in searches.| Application-Insights|[Migrating from OpenCensus Python SDK and Azure Monitor OpenCensus exporter for Python to Azure Monitor OpenTelemetry Python Distro](app/opentelemetry-python-opencensus-migrate.md)|Migrate from OpenCensus to OpenTelemetry with this step-by-step guidance.| Application-Insights|[Application Insights overview](app/app-insights-overview.md)|We've added an illustration to convey how Azure Monitor Application Insights works at a high level.| Containers|[Troubleshoot collection of Prometheus metrics in Azure Monitor](containers/prometheus-metrics-troubleshoot.md)|Added the *Troubleshoot using PowerShell script* section.| Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to 
|[Java Profiler for Azure Monitor Application Insights](./app/java-standalone-profiler.md)|Announced the new Java Profiler at Ignite. Read all about it.| |[Release notes for Azure Web App extension for Application Insights](./app/web-app-extension-release-notes.md)|Added release notes for 2.8.44 and 2.8.43.| |[Resource Manager template samples for creating Application Insights resources](./app/resource-manager-app-resource.md)|Fixed inaccurate tagging of workspace-based resources as still in preview.|-|[Unified cross-component transaction diagnostics](./app/transaction-diagnostics.md)|Added a FAQ section to help troubleshoot Azure portal errors like "error retrieving data."| +|[Unified cross-component transaction diagnostics](./app/search-and-transaction-diagnostics.md?tabs=transaction-diagnostics)|Added a FAQ section to help troubleshoot Azure portal errors like "error retrieving data."| |[Upgrading from Application Insights Java 2.x SDK](./app/java-standalone-upgrade-from-2x.md)|Added more upgrade guidance. Java 2.x is deprecated.| |[Using Azure Monitor Application Insights with Spring Boot](./app/java-spring-boot.md)|Updated configuration options.| |
azure-resource-manager | Bicep Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md | Title: Bicep CLI commands and overview description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 09/08/2023 Last updated : 10/13/2023 # Bicep CLI commands |
azure-resource-manager | Bicep Config Linter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md | The following example shows the rules that are available for configuration. "max-variables": { "level": "warning" },+ "nested-deployment-template-scoping": { + "level": "error" + }, "no-conflicting-metadata" : { "level": "warning" }, |
azure-resource-manager | Linter Rule Nested Deployment Template Scoping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-nested-deployment-template-scoping.md | + + Title: Linter rule - nested deployment template scoping +description: Linter rule - nested deployment template scoping ++ Last updated : 10/12/2023+++# Linter rule - nested deployment template scoping ++This linter rule triggers a diagnostic when a `Microsoft.Resources/deployments` resource uses inner-scoped expression evaluation and contains any references to symbols defined in the parent template. ++## Linter rule code ++Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings: ++`nested-deployment-template-scoping` ++## Solution ++The following example fails this test because `fizz` is defined in the parent template's namespace. ++```bicep +var fizz = 'buzz' ++resource nested 'Microsoft.Resources/deployments@2020-10-01' = { + name: 'name' + properties: { + mode: 'Incremental' + expressionEvaluationOptions: { + scope: 'inner' + } + template: { + '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#' + contentVersion: '1.0.0.0' + resources: [ + { + apiVersion: '2022-09-01' + type: 'Microsoft.Resources/tags' + name: 'default' + properties: { + tags: { + tag1: fizz // <-- Error! `fizz` is defined in the parent template's namespace + } + } + } + ] + } + } +} +``` ++## Next steps ++For more information about the linter, see [Use Bicep linter](./linter.md). |
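One way to make the example above pass the rule, sketched here as an assumption about intent, is to hand the value into the nested template through its own parameters so the inner scope no longer references the parent's `fizz` symbol directly:

```bicep
var fizz = 'buzz'

resource nested 'Microsoft.Resources/deployments@2020-10-01' = {
  name: 'name'
  properties: {
    mode: 'Incremental'
    expressionEvaluationOptions: {
      scope: 'inner'
    }
    // Pass the parent value in explicitly instead of capturing it.
    parameters: {
      fizz: {
        value: fizz
      }
    }
    template: {
      '$schema': 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
      contentVersion: '1.0.0.0'
      parameters: {
        fizz: {
          type: 'string'
        }
      }
      resources: [
        {
          apiVersion: '2022-09-01'
          type: 'Microsoft.Resources/tags'
          name: 'default'
          properties: {
            tags: {
              tag1: '[parameters(\'fizz\')]' // Resolved in the inner scope.
            }
          }
        }
      ]
    }
  }
}
```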
azure-resource-manager | Linter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md | Title: Use Bicep linter description: Learn how to use Bicep linter. Previously updated : 10/05/2023 Last updated : 10/13/2023 # Use Bicep linter The default set of linter rules is minimal and taken from [arm-ttk test cases](. - [max-params](./linter-rule-max-parameters.md) - [max-resources](./linter-rule-max-resources.md) - [max-variables](./linter-rule-max-variables.md)+- [nested-deployment-template-scoping](./linter-rule-nested-deployment-template-scoping.md) - [no-conflicting-metadata](./linter-rule-no-conflicting-metadata.md) - [no-deployments-resources](./linter-rule-no-deployments-resources.md) - [no-hardcoded-env-urls](./linter-rule-no-hardcoded-environment-urls.md) scriptDownloadUrl: 'https://mytools.blob.core.windows.net/...' It's good practice to add a comment explaining why the rule doesn't apply to this line. +If you want to suppress a linter rule, you can change the level of the rule to `off` in [bicepconfig.json](./bicep-config-linter.md). In the following example, the `no-deployments-resources` rule is suppressed: ++```json +{ + "analyzers": { + "core": { + "rules": { + "no-deployments-resources": { + "level": "off" + } + } + } + } +} +``` + ## Next steps - For more information about customizing the linter rules, see [Add custom settings in the Bicep config file](bicep-config-linter.md). |
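The inline-suppression practice mentioned above uses the `#disable-next-line` directive; a sketch based on the article's `scriptDownloadUrl` fragment (the explanatory comment is illustrative):

```bicep
#disable-next-line no-hardcoded-env-urls // Exception: this tooling endpoint is intentional.
scriptDownloadUrl: 'https://mytools.blob.core.windows.net/...'
```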
azure-resource-manager | Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md | Title: Bicep modules description: Describes how to define a module in a Bicep file, and how to use module scopes. Previously updated : 11/04/2022 Last updated : 10/13/2023 # Bicep modules You can also use an ARM JSON template as a module: ::: code language="bicep" source="~/azure-docs-bicep-samples/syntax-samples/modules/local-file-definition-json.bicep" ::: -Use the symbolic name to reference the module in another part of the Bicep file. For example, you can use the symbolic name to get the output from a module. The symbolic name may contain a-z, A-Z, 0-9, and underscore (`_`). The name can't start with a number. A module can't have the same name as a parameter, variable, or resource. +Use the symbolic name to reference the module in another part of the Bicep file. For example, you can use the symbolic name to get the output from a module. The symbolic name might contain a-z, A-Z, 0-9, and underscore (`_`). The name can't start with a number. A module can't have the same name as a parameter, variable, or resource. The path can be either a local file or a file in a registry. The local file can be either a Bicep file or an ARM JSON template. For more information, see [Path to module](#path-to-module). The **name** property is required. It becomes the name of the nested deployment resource in the generated template. -If a module with a static name is deployed concurrently to the same scope, there's the potential for one deployment to interfere with the output from the other deployment. For example, if two Bicep files use the same module with the same static name (`examplemodule`) and targeted to the same resource group, one deployment may show the wrong output. If you're concerned about concurrent deployments to the same scope, give your module a unique name. 
+If a module with a static name is deployed concurrently to the same scope, there's the potential for one deployment to interfere with the output from the other deployment. For example, if two Bicep files use the same module with the same static name (`examplemodule`) and are targeted to the same resource group, one deployment might show the wrong output. If you're concerned about concurrent deployments to the same scope, give your module a unique name. The following example concatenates the deployment name to the module name. If you provide a unique name for the deployment, the module name is also unique. For example, to deploy a file that is up one level in the directory from your ma #### Public module registry -The public module registry is hosted in a Microsoft container registry (MCR). The source code and the modules are stored in [GitHub](https://github.com/azure/bicep-registry-modules). The [README file](https://github.com/azure/bicep-registry-modules#readme) in the GitHub repo lists the available modules and their latest versions: +The public module registry is hosted in a Microsoft container registry (MCR). The source code and the modules are stored in [GitHub](https://github.com/azure/bicep-registry-modules). To view the available modules and their versions, see [Bicep registry Module Index](https://aka.ms/br-module-index). :::image type="content" source="./media/modules/bicep-public-module-registry-modules.png" alt-text="The screenshot of public module registry.":::  -Select the versions to see the available versions. You can also select **Code** to see the module source code, and open the Readme files. +Select the versions to see the available versions. You can also select **Source code** to see the module source code, and open the Readme files. There are only a few published modules currently. More modules are coming. 
If you'd like to contribute to the registry, see the [contribution guide](https://github.com/Azure/bicep-registry-modules/blob/main/CONTRIBUTING.md). |
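The deployment-name concatenation described above might look like this minimal sketch (the module file name and parameter are hypothetical):

```bicep
// Hypothetical local module reference; concatenating deployment().name keeps the
// nested deployment name unique when the same module deploys concurrently.
module stgModule 'storage.bicep' = {
  name: 'storageDeploy-${deployment().name}'
  params: {
    storagePrefix: 'examplestg'
  }
}
```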
azure-resource-manager | Template Specs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md | Title: Create & deploy template specs in Bicep description: Describes how to create template specs in Bicep and share them with other users in your organization. Previously updated : 05/26/2023 Last updated : 10/13/2023 # Azure Resource Manager template specs in Bicep az deployment group create \ +Currently, you can't deploy a template spec with a [.bicepparam file](./parameter-files.md). + ## Versioning When you create a template spec, you provide a version name for it. As you iterate on the template code, you can either update an existing version (for hotfixes) or publish a new version. The version is a text string. You can choose to follow any versioning system, including semantic versioning. Users of the template spec can provide the version name they want to use when deploying it. Both the template and its versions can have tags. The tags are applied or inheri After creating a template spec, you can link to that template spec in a Bicep module. The template spec is deployed when you deploy the Bicep file containing that module. For more information, see [File in template spec](./modules.md#path-to-module). +To create aliases for template specs intended for module linking, see [Aliases for modules](./bicep-config-modules.md#aliases-for-modules). + ## Next steps To learn more about template specs, and for hands-on guidance, see [Publish libraries of reusable infrastructure code by using template specs](/training/modules/arm-template-specs). |
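Linking a template spec from a Bicep module, as described above, follows the `ts:` path format. A minimal sketch, where the subscription ID, resource group, spec name, and version are all placeholders:

```bicep
// Placeholder values throughout; replace with your template spec's details.
module storageSpec 'ts:00000000-0000-0000-0000-000000000000/templateSpecRG/storageSpec:1.0' = {
  name: 'storageSpecDeploy'
}
```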
azure-signalr | Concept Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md | Title: Connection string in Azure SignalR Service -description: An overview of connection string in Azure SignalR Service, how to generate it and how to configure it in app server + Title: Connection strings in Azure SignalR Service +description: This article gives an overview of connection strings in Azure SignalR Service, how to generate one, and how to configure one in an app server. Last updated 03/29/2023 -# Connection string in Azure SignalR Service +# Connection strings in Azure SignalR Service -A connection string contains information about how to connect to Azure Signal Service (ASRS). In this article, you learn the basics of connection string and how to configure it in your application. +A connection string contains information about how to connect to Azure SignalR Service. In this article, you learn the basics of connection strings and how to configure one in your application. -## What is a connection string +## What a connection string is When an application needs to connect to Azure SignalR Service, it needs the following information: -- The HTTP endpoint of the SignalR service instance.-- The way to authenticate with the service endpoint.+- The HTTP endpoint of the Azure SignalR Service instance +- The way to authenticate with the service endpoint A connection string contains such information. ## What a connection string looks like -A connection string consists of a series of key/value pairs separated by semicolons(;). An equal sign(=) to connect each key and its value. Keys aren't case sensitive. +A connection string consists of a series of key/value pairs separated by semicolons (;). The string uses an equal sign (=) to connect each key and its value. Keys aren't case sensitive. 
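The key/value format described above can be illustrated with a small parser. This is a hedged sketch only, not the Azure SignalR SDK's actual parsing code, and the sample values are placeholders:

```python
# Illustrative sketch only (not the Azure SignalR SDK's parser): split a
# connection string into key/value pairs. Pairs are separated by
# semicolons, each key is joined to its value at the first equal sign,
# and keys are case insensitive (normalized to lowercase here).
def parse_connection_string(connection_string: str) -> dict:
    pairs = {}
    for segment in connection_string.split(";"):
        segment = segment.strip()
        if not segment:
            continue  # tolerate a trailing semicolon
        key, _, value = segment.partition("=")
        pairs[key.lower()] = value
    return pairs

props = parse_connection_string(
    "Endpoint=https://foo.service.signalr.net;AccessKey=abc123==;Version=1.0;"
)
```

Note that only the first equal sign in a pair separates the key from the value, so a Base64 access key that ends in `=` padding survives intact.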
-For example, a typical connection string may look like this: +A typical connection string might look like this example: -> Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0; +> `Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;` The connection string contains: - `Endpoint=https://<resource_name>.service.signalr.net`: The endpoint URL of the resource.-- `AccessKey=<access_key>`: The key to authenticate with the service. When an access key is specified in the connection string, the SignalR Service SDK uses it to generate a token that is validated by the service.+- `AccessKey=<access_key>`: The key to authenticate with the service. When you specify an access key in the connection string, the Azure SignalR Service SDK uses it to generate a token that the service validates. - `Version`: The version of the connection string. The default value is `1.0`. The following table lists all the valid names for key/value pairs in the connection string. | Key | Description | Required | Default value | Example value | | -- | - | -- | | |-| Endpoint | The URL of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` | -| Port | The port that your ASRS instance is listening on. on. | N | 80/443, depends on the endpoint URI schema | 8080 | -| Version | The version of given connection. string. | N | 1.0 | 1.0 | -| ClientEndpoint | The URI of your reverse proxy, such as the App Gateway or API. Management | N | null | `https://foo.bar` | -| AuthType | The auth type. By default the service uses the AccessKey authorize requests. **Case insensitive** | N | null | Azure, azure.msi, azure.app | +| `Endpoint` | The URL of your Azure SignalR Service instance. | Yes | Not applicable | `https://foo.service.signalr.net` | +| `Port` | The port that your Azure SignalR Service instance is listening on. 
| No | `80` or `443`, depending on the endpoint URI schema | `8080` | +| `Version` | The version of a connection string. | No | `1.0` | `1.0` | +| `ClientEndpoint` | The URI of your reverse proxy, such as Azure Application Gateway or Azure API Management. | No | `null` | `https://foo.bar` | +| `AuthType` | The authentication type. By default, the service uses `AccessKey` to authorize requests. It's not case sensitive. | No | `null` | `Azure`, `azure.msi`, `azure.app` | ### Use AccessKey -The local auth method is used when `AuthType` is set to null. +The service uses the local authentication method when `AuthType` is set to `null`. | Key | Description | Required | Default value | Example value | | | - | -- | - | - |-| AccessKey | The key string in base64 format for building access token. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ | +| `AccessKey` | The key string, in Base64 format, for building an access token. | Yes | `null` | `ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/` | ### Use Microsoft Entra ID -The Microsoft Entra auth method is used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`. +The service uses the Microsoft Entra authentication method when `AuthType` is set to `azure`, `azure.app`, or `azure.msi`. | Key | Description | Required | Default value | Example value | | -- | | -- | - | |-| ClientId | A GUID of an Azure application or an Azure identity. | N | null | `00000000-0000-0000-0000-000000000000` | -| TenantId | A GUID of an organization in Microsoft Entra ID. | N | null | `00000000-0000-0000-0000-000000000000` | -| ClientSecret | The password of an Azure application instance. | N | null | `***********************.****************` | -| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N | null | `/usr/local/cert/app.cert` | +| `ClientId` | A GUID of an Azure application or an Azure identity. 
| No | `null` | `00000000-0000-0000-0000-000000000000` | +| `TenantId` | A GUID of an organization in Microsoft Entra ID. | No | `null` | `00000000-0000-0000-0000-000000000000` | +| `ClientSecret` | The password of an Azure application instance. | No | `null` | `***********************.****************` | +| `ClientCertPath` | The absolute path of a client certificate file to an Azure application instance. | No | `null` | `/usr/local/cert/app.cert` | -A different `TokenCredential` is used to generate Microsoft Entra tokens depending on the parameters you have given. +The service uses a different `TokenCredential` value to generate Microsoft Entra tokens, depending on the parameters that you give: - `type=azure` - [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) is used. + - The service uses [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential): - ```text - Endpoint=xxx;AuthType=azure - ``` + ```text + Endpoint=xxx;AuthType=azure + ``` - `type=azure.msi` - 1. A user-assigned managed identity is used if `clientId` has been given in connection string. + - The service uses a user-assigned managed identity ([ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential)) if the connection string uses `clientId`: ```text Endpoint=xxx;AuthType=azure.msi;ClientId=<client_id> ``` - - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) is used. -- 1. A system-assigned managed identity is used. + - The service uses a system-assigned managed identity ([ManagedIdentityCredential()](/dotnet/api/azure.identity.managedidentitycredential)): ```text Endpoint=xxx;AuthType=azure.msi; ``` - - [ManagedIdentityCredential()](/dotnet/api/azure.identity.managedidentitycredential) is used. 
 - - `type=azure.app` - `clientId` and `tenantId` are required to use [Microsoft Entra application with service principal](../active-directory/develop/howto-create-service-principal-portal.md). + Both `clientId` and `tenantId` are required to use [a Microsoft Entra application with a service principal](../active-directory/develop/howto-create-service-principal-portal.md). - 1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) is used if `clientSecret` is given. + - The service uses [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) if the connection string uses `clientSecret`: ```text Endpoint=xxx;AuthType=azure.app;ClientId=<client_id>;TenantId=<tenant_id>;clientSecret=<client_secret> ``` - 1. [ClientCertificateCredential(clientId, tenantId, clientCertPath)](/dotnet/api/azure.identity.clientcertificatecredential) is used if `clientCertPath` is given. + - The service uses [ClientCertificateCredential(clientId, tenantId, clientCertPath)](/dotnet/api/azure.identity.clientcertificatecredential) if the connection string uses `clientCertPath`: ```text Endpoint=xxx;AuthType=azure.app;ClientId=<client_id>;TenantId=<tenant_id>;clientCertPath=</path/to/cert> A different `TokenCredential` is used to generate Microsoft Entra tokens dependi ## How to get connection strings -### From Azure portal --Open your SignalR service resource in Azure portal and go to `Keys` tab. +You can use the Azure portal or the Azure CLI to get connection strings. -You see two connection strings (primary and secondary) in the following format: ### Azure portal -> Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0; +Open your Azure SignalR Service resource in the Azure portal. 
The **Keys** tab shows two connection strings (primary and secondary) in the following format: -### From Azure CLI +> `Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;` -You can also use Azure CLI to get the connection string: +### Azure CLI ```bash az signalr key list -g <resource_group> -n <resource_name> az signalr key list -g <resource_group> -n <resource_name> ## Connect with a Microsoft Entra application -You can use a [Microsoft Entra application](../active-directory/develop/app-objects-and-service-principals.md) to connect to your SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed. +You can use a [Microsoft Entra application](../active-directory/develop/app-objects-and-service-principals.md) to connect to your Azure SignalR Service instance. If the application has the right permission to access Azure SignalR Service, it doesn't need an access key. -To use Microsoft Entra authentication, you need to remove `AccessKey` from connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Microsoft Entra application, including client ID, client secret and tenant ID. The connection string looks as follows: +To use Microsoft Entra authentication, you need to remove `AccessKey` from the connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Microsoft Entra application, including client ID, client secret, and tenant ID. The connection string looks like this example: ```text Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.app;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0; ``` -For more information about how to authenticate using Microsoft Entra application, see [Authorize from Azure Applications](signalr-howto-authorize-application.md). 
+For more information about how to authenticate by using a Microsoft Entra application, see [Authorize requests to SignalR resources with Microsoft Entra applications](signalr-howto-authorize-application.md). -## Authenticate with Managed identity +## Authenticate with a managed identity -You can also use a system assigned or user assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service. +You can use a system-assigned or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with Azure SignalR Service. -To use a system assigned identity, add `AuthType=azure.msi` to the connection string: +To use a system-assigned identity, add `AuthType=azure.msi` to the connection string: ```text Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.msi;Version=1.0; ``` -The SignalR service SDK automatically uses the identity of your app server. +The Azure SignalR Service SDK automatically uses the identity of your app server. -To use a user assigned identity, include the client ID of the managed identity in the connection string: +To use a user-assigned identity, include the client ID of the managed identity in the connection string: ```text Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.msi;ClientId=<client_id>;Version=1.0; ``` -For more information about how to configure managed identity, see [Authorize from Managed Identity](signalr-howto-authorize-managed-identity.md). +For more information about how to configure managed identities, see [Authorize requests to SignalR resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md). > [!NOTE]-> It's highly recommended to use managed identity to authenticate with SignalR service as it's a more secure way compared to using access keys. 
If you don't use access keys authentication, consider completely disabling it (go to Azure portal -> Keys -> Access Key -> Disable). If you still use access keys, it's highly recommended to rotate them regularly. For more information, see [Rotate access keys for Azure SignalR Service](signalr-howto-key-rotation.md). +> We highly recommend that you use managed identities to authenticate with Azure SignalR Service, because they're more secure than access keys. If you don't use access keys for authentication, consider completely disabling them in the Azure portal (select **Keys** > **Access Key** > **Disable**). +> +> If you decide to use access keys, we recommend that you rotate them regularly. For more information, see [Rotate access keys for Azure SignalR Service](signalr-howto-key-rotation.md). ### Use the connection string generator -It may be cumbersome and error-prone to build connection strings manually. To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Microsoft Entra identities like `clientId`, `tenantId`, etc. To use the tool open your SignalR instance in Azure portal, select **Connection strings** from the left side menu. +Building connection strings manually can be cumbersome and error prone. To avoid mistakes, Azure SignalR Service provides a connection string generator to help you generate a connection string that includes Microsoft Entra identities like `clientId` and `tenantId`. To use the tool, open your Azure SignalR Service instance in Azure portal and select **Connection strings** from the left menu. -In this page you can choose different authentication types (access key, managed identity or Microsoft Entra application) and input information like client endpoint, client ID, client secret, etc. Then connection string is automatically generated. You can copy and use it in your application. 
+On this page, you can choose among authentication types (access key, managed identity, or Microsoft Entra application) and enter information like client endpoint, client ID, and client secret. Then the connection string is automatically generated. You can copy it and use it in your application. > [!NOTE]-> Information you enter won't be saved after you leave the page. You will need to copy and save your connection string to use in your application. +> Information that you enter isn't saved after you leave the page. You need to copy and save your connection string to use it in your application. -For more information about how access tokens are generated and validated, see [Authenticate via Microsoft Entra token](signalr-reference-data-plane-rest-api.md#authenticate-via-microsoft-entra-token) in [Azure SignalR service data plane REST API reference](signalr-reference-data-plane-rest-api.md) . +For more information about how access tokens are generated and validated, see the [Authenticate via Microsoft Entra token](signalr-reference-data-plane-rest-api.md#authentication-via-microsoft-entra-token) section in the Azure SignalR Service data plane REST API reference. -## Client and server endpoints +## Provide client and server endpoints -A connection string contains the HTTP endpoint for app server to connect to SignalR service. The server returns the HTTP endpoint to the clients in a negotiate response, so the client can connect to the service. +A connection string contains the HTTP endpoint for the app server to connect to Azure SignalR Service. The server returns the HTTP endpoint to the clients in a negotiation response, so the client can connect to the service. -In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security. 
+In some applications, there might be an extra component in front of Azure SignalR Service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security. -In such case, the client needs to connect to an endpoint different than SignalR service. Instead of manually replacing the endpoint at the client side, you can add `ClientEndpoint` to connection string: +In such cases, the client needs to connect to an endpoint that's different from Azure SignalR Service. Instead of manually replacing the endpoint at the client side, you can add `ClientEndpoint` to the connection string: ```text Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ClientEndpoint=https://<url_to_app_gateway>;Version=1.0; ``` -The app server returns a response to the client's negotiate request containing the correct endpoint URL for the client to connect to. For more information about client connections, see [Azure SignalR Service internals](signalr-concept-internals.md#client-connections). +The app server returns a response to the client's negotiation request. The response contains the correct endpoint URL for the client to connect to. For more information about client connections, see [Azure SignalR Service internals](signalr-concept-internals.md#client-connections). -Similarly, the server wants to make [server connections](signalr-concept-internals.md#azure-signalr-service-internals) or call [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to the service, the SignalR service may also be behind another service like [Azure Application Gateway](../application-gateway/overview.md). 
In that case, you can use `ServerEndpoint` to specify the actual endpoint for server connections and REST APIs: +Similarly, if the server tries to make [server connections](signalr-concept-internals.md#azure-signalr-service-internals) or call [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to the service, Azure SignalR Service might also be behind another service like [Azure Application Gateway](../application-gateway/overview.md). In that case, you can use `ServerEndpoint` to specify the actual endpoint for server connections and REST APIs: ```text Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ServerEndpoint=https://<url_to_app_gateway>;Version=1.0; ``` -## Configure connection string in your application +## Configure a connection string in your application There are two ways to configure a connection string in your application. -You can set the connection string when calling `AddAzureSignalR()` API: +You can set the connection string when calling the `AddAzureSignalR()` API: ```cs services.AddSignalR().AddAzureSignalR("<connection_string>"); ``` -Or you can call `AddAzureSignalR()` without any arguments. The service SDK returns the connection string from a config named `Azure:SignalR:ConnectionString` in your [configuration provider](/dotnet/core/extensions/configuration-providers). +Or you can call `AddAzureSignalR()` without any arguments. The service SDK returns the connection string from a configuration named `Azure:SignalR:ConnectionString` in your [configuration provider](/dotnet/core/extensions/configuration-providers). -In a local development environment, the configuration is stored in a file (_appsettings.json_ or _secrets.json_) or environment variables. You can use one of the following ways to configure connection string: +In a local development environment, the configuration is stored in a file (_appsettings.json_ or _secrets.json_) or in environment variables. 
You can use one of the following ways to configure connection string: -- Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)-- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. The colons need to be replaced with double underscore in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).+- Use a .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`). +- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. The colons need to be replaced with a double underscore in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider). -In a production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up configuration provider for those services. +In a production environment, you can use other Azure services to manage configurations and secrets, like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up a configuration provider for those services. > [!NOTE]-> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code You should read the connection string from a secret store like key vault and pass it to `AddAzureSignalR()`. +> Even when you're directly setting a connection string by using code, we don't recommend that you hard-code the connection string in source code. 
Instead, read the connection string from a secret store like Key Vault and pass it to `AddAzureSignalR()`. ### Configure multiple connection strings -Azure SignalR Service also allows the server to connect to multiple service endpoints at the same time, so it can handle more connections that are beyond a service instance's limit. Also, when one service instance is down the other service instances can be used as backup. For more information about how to use multiple instances, see [Scale SignalR Service with multiple instances](signalr-howto-scale-multi-instances.md). +Azure SignalR Service allows the server to connect to multiple service endpoints at the same time, so it can handle more connections than a single service instance's limit allows. When one service instance is down, you can use the other service instances as backup. For more information about how to use multiple instances, see [Scale SignalR Service with multiple instances](signalr-howto-scale-multi-instances.md). -There are also two ways to configure multiple instances: +There are two ways to configure multiple instances: - Through code: There are also two ways to configure multiple instances: - Through configuration: - You can use any supported configuration provider (secret manager, environment variables, key vault, etc.) to store connection strings. Take secret manager as an example: + You can use any supported configuration provider (for example, secret manager, environment variables, or key vault) to store connection strings. 
Here's an example that uses a secret manager: ```bash dotnet user-secrets set Azure:SignalR:ConnectionString:name_a <connection_string_1> There are also two ways to configure multiple instances: dotnet user-secrets set Azure:SignalR:ConnectionString:name_c:secondary <connection_string_3> ``` - You can assign a name and type to each endpoint by using a different config name in the following format: + You can assign a name and type to each endpoint by using a different configuration name in the following format: ```text Azure:SignalR:ConnectionString:<name>:<type> |
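The configuration-key formats above map to environment-variable names by replacing each colon with a double underscore, as the environment-variable configuration provider expects. A minimal sketch of that mapping (illustrative only; the key names are the ones shown above):

```python
# Sketch of how a .NET configuration key maps to its environment-variable
# form: the environment-variable configuration provider treats a double
# underscore as the section separator, so each colon becomes "__".
def to_env_var_name(config_key: str) -> str:
    return config_key.replace(":", "__")

# A named secondary endpoint, as in the multiple-connection-string format:
name = to_env_var_name("Azure:SignalR:ConnectionString:name_a:secondary")
# "Azure__SignalR__ConnectionString__name_a__secondary"
```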
azure-signalr | Howto Disable Local Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-disable-local-auth.md | -There are two ways to authenticate to Azure SignalR Service resources: Microsoft Entra ID and Access Key. Microsoft Entra ID offers superior security and ease of use compared to the access key method. -With Microsoft Entra ID, you do not need to store tokens in your code, reducing the risk of potential security vulnerabilities. -We highly recommend using Microsoft Entra ID for your Azure SignalR Service resources whenever possible. +There are two ways to authenticate to Azure SignalR Service resources: Microsoft Entra ID and access key. Microsoft Entra ID offers superior security and ease of use compared to the access key method. ++With Microsoft Entra ID, you don't need to store tokens in your code, reducing the risk of potential security vulnerabilities. We highly recommend using Microsoft Entra ID for your Azure SignalR Service resources whenever possible. > [!IMPORTANT]-> Disabling local authentication can have following consequences. +> Disabling local authentication can have the following consequences: >-> - The current set of access keys will be permanently deleted. -> - Tokens signed with the current set of access keys will become unavailable. +> - The current set of access keys is permanently deleted. +> - Tokens signed with the current set of access keys become unavailable. -## Use Azure portal +## Use the Azure portal -In this section, you will learn how to use the Azure portal to disable local authentication. +In this section, you learn how to use the Azure portal to disable local authentication. -1. Navigate to your SignalR Service resource in the [Azure portal](https://portal.azure.com). +1. In the [Azure portal](https://portal.azure.com), go to your Azure SignalR Service resource. -2. in the **Settings** section of the menu sidebar, select **Keys** tab. +2. 
In the **Settings** section of the menu sidebar, select **Keys**. -3. Select **Disabled** for local authentication. +3. For **Access Key**, select **Disable**. -4. Click **Save** button. +4. Select the **Save** button. -![Screenshot of disabling local auth.](./media/howto-disable-local-auth/disable-local-auth.png) +![Screenshot of selections for disabling local authentication in the Azure portal.](./media/howto-disable-local-auth/disable-local-auth.png) -## Use Azure Resource Manager template +## Use an Azure Resource Manager template -You can disable local authentication by setting `disableLocalAuth` property to true as shown in the following Azure Resource Manager template. +You can disable local authentication by setting the `disableLocalAuth` property to `true`, as shown in the following Azure Resource Manager template: ```json { You can disable local authentication by setting `disableLocalAuth` property to t } ``` -## Use Azure Policy +## Use an Azure policy -You can assign the [Azure SignalR Service should have local authentication methods disabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff70eecba-335d-4bbc-81d5-5b17b03d498f) Azure policy to an Azure subscription or a resource group to enforce disabling of local authentication for all SignalR resources in the subscription or the resource group. +To enforce disabling of local authentication for all Azure SignalR Service resources in an Azure subscription or a resource group, you can assign the following Azure policy: [Azure SignalR Service should have local authentication methods disabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff70eecba-335d-4bbc-81d5-5b17b03d498f). 
-![Screenshot of disabling local auth policy.](./media/howto-disable-local-auth/disable-local-auth-policy.png) +![Screenshot that shows disabling local authentication by using a policy.](./media/howto-disable-local-auth/disable-local-auth-policy.png) ## Next steps -See the following docs to learn about authentication methods. +See the following articles to learn about authentication methods: -- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md)-- [Authenticate with Azure applications](./signalr-howto-authorize-application.md)-- [Authenticate with managed identities](./signalr-howto-authorize-managed-identity.md)+- [Authorize access with Microsoft Entra ID for Azure SignalR Service](signalr-concept-authorize-azure-active-directory.md) +- [Authorize requests to Azure SignalR Service resources with Microsoft Entra applications](./signalr-howto-authorize-application.md) +- [Authorize requests to Azure SignalR Service resources with Microsoft Entra managed identities](./signalr-howto-authorize-managed-identity.md) |
azure-signalr | Howto Use Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-use-managed-identity.md | +- Obtain access tokens. +- Access secrets in Azure Key Vault. -The service supports only one managed identity; you can create either a system-assigned or user-assigned identity. A system-assigned identity is dedicated to your SignalR instance and is deleted when you delete the instance. A user-assigned identity is managed independently of your SignalR resource. +The service supports only one managed identity. You can create either a system-assigned or a user-assigned identity. A system-assigned identity is dedicated to your Azure SignalR Service instance and is deleted when you delete the instance. A user-assigned identity is managed independently of your Azure SignalR Service resource. This article shows you how to create a managed identity for Azure SignalR Service and how to use it in serverless scenarios. This article shows you how to create a managed identity for Azure SignalR Servic To use a managed identity, you must have the following items: - An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- An Azure SignalR resource.-- Upstream resources that you want to access. For example, an Azure Key Vault resource.-- An Azure Function app.+- An Azure SignalR Service resource. +- Upstream resources that you want to access, such as an Azure Key Vault resource. +- An Azure Functions app (function app). ## Add a managed identity to Azure SignalR Service You can add a managed identity to Azure SignalR Service in the Azure portal or t ### Add a system-assigned identity -To add a system-managed identity to your SignalR instance: +To add a system-assigned managed identity to your Azure SignalR Service instance: -1. Browse to your SignalR instance in the Azure portal. +1. 
In the Azure portal, browse to your Azure SignalR Service instance. 1. Select **Identity**. 1. On the **System assigned** tab, switch **Status** to **On**.-1. Select **Save**. -- :::image type="content" source="media/signalr-howto-use-managed-identity/system-identity-portal.png" alt-text="Screenshot showing Add a system-assigned identity in the portal."::: + :::image type="content" source="media/signalr-howto-use-managed-identity/system-identity-portal.png" alt-text="Screenshot that shows selections for adding a system-assigned identity in the portal."::: +1. Select **Save**. 1. Select **Yes** to confirm the change. ### Add a user-assigned identity -To add a user-assigned identity to your SignalR instance, you need to create the identity then add it to your service. +To add a user-assigned identity to your Azure SignalR Service instance, you need to create the identity and then add it to the service. 1. Create a user-assigned managed identity resource according to [these instructions](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity).-1. Browse to your SignalR instance in the Azure portal. +1. In the Azure portal, browse to your Azure SignalR Service instance. 1. Select **Identity**. 1. On the **User assigned** tab, select **Add**.-1. Select the identity from the **User assigned managed identities** drop down menu. +1. On the **User assigned managed identities** dropdown menu, select the identity. ++ :::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Screenshot that shows selections for adding a user-assigned identity in the portal."::: 1. Select **Add**.- :::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Screenshot showing Add a user-assigned identity in the portal."::: ## Use a managed identity in serverless scenarios Azure SignalR Service is a fully managed service. 
It uses a managed identity to ### Enable managed identity authentication in upstream settings -Once you've added a [system-assigned identity](#add-a-system-assigned-identity) or [user-assigned identity](#add-a-user-assigned-identity) to your SignalR instance, you can enable managed identity authentication in the upstream endpoint settings. +After you add a [system-assigned identity](#add-a-system-assigned-identity) or [user-assigned identity](#add-a-user-assigned-identity) to your Azure SignalR Service instance, you can enable managed identity authentication in the upstream endpoint settings: -1. Browse to your SignalR instance. +1. In the Azure portal, browse to your Azure SignalR Service instance. 1. Select **Settings** from the menu. 1. Select the **Serverless** service mode.-1. Enter the upstream endpoint URL pattern in the **Add an upstream URL pattern** text box. See [URL template settings](concept-upstream.md#url-template-settings) -1. Select Add one Upstream Setting and select any asterisk go to **Upstream Settings**. - :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="Screenshot of Azure SignalR service Settings."::: +1. In the **Add an upstream URL pattern** text box, enter the upstream endpoint URL pattern. See [URL template settings](concept-upstream.md#url-template-settings). +1. Select **Add one Upstream Setting**, and then select any asterisk. ++ :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="Screenshot that shows Azure SignalR Service settings for adding an upstream URL pattern."::: -1. Configure your upstream endpoint settings. +1. In **Upstream Settings**, configure your upstream endpoint settings. 
- :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="Screenshot of Azure SignalR service Upstream settings."::: + :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="Screenshot of upstream settings for Azure SignalR Service."::: -1. In the managed identity authentication settings, for **Resource**, you can specify the target resource. The resource will become an `aud` claim in the obtained access token, which can be used as a part of validation in your upstream endpoints. The resource can be one of the following formats: +1. In the managed identity authentication settings, for **Resource**, you can specify the target resource. The resource will become an `aud` claim in the obtained access token, which can be used as a part of validation in your upstream endpoints. The resource can be in one of the following formats: - - Empty - - Application (client) ID of the service principal - - Application ID URI of the service principal - - Resource ID of an Azure service (For a list of Azure services that support managed identities, see [Azure services that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).) + - Empty. + - Application (client) ID of the service principal. + - Application ID URI of the service principal. + - Resource ID of an Azure service. For more information, see [Azure services that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication). > [!NOTE]- > If you manually validate an access token your service, you can choose any one of the resource formats. Make sure that the **Resource** value in **Auth** settings and the validation are consistent. 
When you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource format that the service provider requests. + > If you manually validate an access token for your service, you can choose any one of the resource formats. Make sure that the **Resource** value in **Auth** settings and the validation are consistent. When you use Azure role-based access control (RBAC) for a data plane, you must use the resource format that the service provider requests. ### Validate access tokens The token in the `Authorization` header is a [Microsoft identity platform access token](../active-directory/develop/access-tokens.md). -To validate access tokens, your app should also validate the audience and the signing tokens. These tokens need to be validated against the values in the OpenID discovery document. For example, see the [tenant-independent version of the document](https://login.microsoftonline.com/common/.well-known/openid-configuration). +To validate access tokens, your app should also validate the audience and the signing tokens. These tokens need to be validated against the values in the OpenID discovery document. For an example, see the [tenant-independent version of the document](https://login.microsoftonline.com/common/.well-known/openid-configuration). -The Microsoft Entra middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice. +The Microsoft Entra middleware has built-in capabilities for validating access tokens. You can browse through the [Microsoft identity platform code samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice. -Libraries and code samples that show how to handle token validation are available. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. 
There's at least one option for almost every platform and language. For more information about Microsoft Entra authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md). +Libraries and code samples that show how to handle token validation are available. Several open-source partner libraries are also available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language. For more information about Microsoft Entra authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md). -#### Authentication in Function App +#### Authentication in a function app -You can easily set access validation for a Function App without code changes using the Azure portal. +You can easily set access validation for a function app without code changes by using the Azure portal: -1. Go to the Function App in the Azure portal. +1. In the Azure portal, go to the function app. 1. Select **Authentication** from the menu. 1. Select **Add identity provider**.-1. In the **Basics** tab, select **Microsoft** from the **Identity provider** dropdown. -1. Select **Log in with Microsoft Entra ID** in **Action to take when request is not authenticated**. -1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration. For more information on enabling Microsoft Entra provider, see [Configure your App Service or Azure Functions app to login with Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md) - :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Function Microsoft Entra ID"::: -1. 
Navigate to SignalR Service and follow the [steps](howto-use-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned identity or user-assigned identity. -1. go to **Upstream settings** in SignalR Service and choose **Use Managed Identity** and **Select from existing Applications**. Select the application you created previously. +1. On the **Basics** tab, in the **Identity provider** dropdown list, select **Microsoft**. +1. In **Action to take when request is not authenticated**, select **Log in with Microsoft Entra ID**. +1. The option to create a new registration is selected by default. You can change the name of the registration. For more information on enabling a Microsoft Entra provider, see [Configure your App Service or Azure Functions app to use a Microsoft Entra ID sign-in](../app-service/configure-authentication-provider-aad.md). ++ :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Screenshot that shows basic information for adding an identity provider."::: +1. Go to Azure SignalR Service and follow the [steps](howto-use-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned identity or user-assigned identity. +1. In Azure SignalR Service, go to **Upstream settings**, and then select **Use Managed Identity** and **Select from existing Applications**. Select the application that you created previously. -After you configure these settings, the Function App will reject requests without an access token in the header. +After you configure these settings, the function app will reject requests without an access token in the header. > [!IMPORTANT]-> To pass the authentication, the _Issuer Url_ must match the _iss_ claim in token. Currently, we only support v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)). +> To pass the authentication, the issuer URL must match the `iss` claim in the token. Currently, we support only v1.0 endpoints. 
See [Access tokens in the Microsoft identity platform](../active-directory/develop/access-tokens.md). -To verify the _Issuer Url_ format in your Function app: +To verify the issuer URL format in your function app: -1. Go to the Function app in the portal. +1. In the portal, go to the function app. 1. Select **Authentication**. 1. Select **Identity provider**. 1. Select **Edit**. 1. Select **Issuer Url**.-1. Verify that the _Issuer Url_ has the format `https://sts.windows.net/<tenant-id>/`. +1. Verify that the issuer URL has the format `https://sts.windows.net/<tenant-id>/`. -## Use a managed identity for Key Vault reference +## Use a managed identity for a Key Vault reference -SignalR Service can access Key Vault to get secrets using the managed identity. +Azure SignalR Service can access Key Vault to get secrets by using the managed identity. -1. Add a [system-assigned identity](#add-a-system-assigned-identity) or [user-assigned identity](#add-a-user-assigned-identity) to your SignalR instance. -1. Grant secret read permission for the managed identity in the Access policies in the Key Vault. See [Assign a Key Vault access policy using the Azure portal](../key-vault/general/assign-access-policy-portal.md) +1. Add a [system-assigned identity](#add-a-system-assigned-identity) or [user-assigned identity](#add-a-user-assigned-identity) to your Azure SignalR Service instance. +1. Grant secret read permission for the managed identity in the access policies in Key Vault. See [Assign a Key Vault access policy by using the Azure portal](../key-vault/general/assign-access-policy-portal.md). -Currently, this feature can be used to [Reference secret in Upstream URL Pattern](./concept-upstream.md#key-vault-secret-reference-in-url-template-settings) +Currently, you can use this feature to [reference a secret in the upstream URL pattern](./concept-upstream.md#key-vault-secret-reference-in-url-template-settings). ## Next steps |
azure-signalr | Signalr Concept Authorize Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authorize-azure-active-directory.md | Title: Authorize access with Microsoft Entra ID for Azure SignalR Service -description: This article provides information on authorizing access to Azure SignalR Service resources using Microsoft Entra ID. +description: This article provides information on authorizing access to Azure SignalR Service resources by using Microsoft Entra ID. Last updated 09/06/2021-Azure SignalR Service supports Microsoft Entra ID for authorizing requests to SignalR resources. With Microsoft Entra ID, you can utilize role-based access control (RBAC) to grant permissions to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. The security principal is authenticated by Microsoft Entra ID, which returns an OAuth 2.0 token. The token is then used to authorize a request against the SignalR resource. +Azure SignalR Service supports Microsoft Entra ID for authorizing requests to its resources. With Microsoft Entra ID, you can use role-based access control (RBAC) to grant permissions to a *security principal*. A security principal is a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities. -Authorizing requests against SignalR with Microsoft Entra ID provides superior security and ease of use compared to Access Key authorization. It is highly recommended to use Microsoft Entra ID for authorizing whenever possible, as it ensures access with the minimum required privileges. +Microsoft Entra ID authenticates the security principal and returns an OAuth 2.0 token. The token is then used to authorize a request against the Azure SignalR Service resource. 
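The two-step flow described above (authenticate the security principal, then pass the OAuth 2.0 token with the request) can be sketched in C#. This is a minimal illustration, not the article's own sample: it assumes the `Azure.Identity` and `Azure.Core` NuGet packages, the `https://signalr.azure.com/.default` data-plane scope, and a placeholder REST endpoint URL.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using Azure.Core;
using Azure.Identity;

// 1. Microsoft Entra ID authenticates the security principal
//    and returns an OAuth 2.0 access token.
var credential = new DefaultAzureCredential();
AccessToken token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { "https://signalr.azure.com/.default" }));

// 2. The token is passed as a bearer token to authorize a request
//    against the Azure SignalR Service resource (URL is illustrative).
using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", token.Token);
var response = await http.GetAsync(
    "https://<resource-name>.service.signalr.net/api/v1/health");
```

In practice the app-server SDK shown later in these articles performs this flow for you; the sketch only makes the token handoff visible.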
-<a id="security-principal"></a> -_[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities._ +Authorizing requests against Azure SignalR Service by using Microsoft Entra ID provides superior security and ease of use compared to access key authorization. We highly recommend that you use Microsoft Entra ID for authorizing whenever possible, because it ensures access with the minimum required privileges. > [!IMPORTANT]-> Disabling local authentication can have following influences. +> Disabling local authentication can have the following consequences: >-> - The current set of access keys will be permanently deleted. -> - Tokens signed with access keys will no longer be available. +> - The current set of access keys is permanently deleted. +> - Tokens signed with the current set of access keys become unavailable. ## Overview of Microsoft Entra ID -When a security principal attempts to access a SignalR resource, the request must be authorized. Get access to a resource requires 2 steps when using Microsoft Entra ID. +When a security principal tries to access an Azure SignalR Service resource, the request must be authorized. Using Microsoft Entra ID to get access to a resource requires two steps: -1. The security principal has to be authenticated by Microsoft Entra ID, which will then return an OAuth 2.0 token. -1. The token is passed as part of a request to the SignalR resource for authorizing the request. +1. Microsoft Entra ID authenticates the security principal and then returns an OAuth 2.0 token. +1. The token is passed as part of a request to the Azure SignalR Service resource for authorizing the request. ### Client-side authentication with Microsoft Entra ID -When using Access Key, the key is shared between your app server (or Function App) and the SignalR resource. The SignalR service authenticates the client connection request with the shared key. 
+When you use an access key, the key is shared between your app server (or function app) and the Azure SignalR Service resource. Azure SignalR Service authenticates the client connection request by using the shared key. -When using Microsoft Entra ID, there is no shared key. Instead, SignalR uses a **temporary access key** for signing tokens used in client connections. The workflow contains four steps. +When you use Microsoft Entra ID, there is no shared key. Instead, Azure SignalR Service uses a *temporary access key* for signing tokens used in client connections. The workflow contains four steps: 1. The security principal requires an OAuth 2.0 token from Microsoft Entra ID to authenticate itself.-2. The security principal calls SignalR Auth API to get a **temporary access key**. -3. The security principal signs a client token with the **temporary access key** for client connections during negotiation. -4. The client uses the client token to connect to Azure SignalR resources. +2. The security principal calls the SignalR authentication API to get a temporary access key. +3. The security principal signs a client token with the temporary access key for client connections during negotiation. +4. The client uses the client token to connect to Azure SignalR Service resources. -The **temporary access key** expires in 90 minutes. It's recommend getting a new one and rotate the old one once an hour. +The temporary access key expires in 90 minutes. We recommend that you get a new one and rotate out the old one once an hour. -The workflow is built in the [Azure SignalR SDK for app server](https://github.com/Azure/azure-signalr). +The workflow is built in the [Azure SignalR Service SDK for app servers](https://github.com/Azure/azure-signalr). ## Assign Azure roles for access rights -Microsoft Entra ID authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). 
Azure SignalR defines a set of Azure built-in roles that encompass common sets of permissions used to access SignalR resources. You can also define custom roles for access to SignalR resources. +Microsoft Entra ID authorizes access rights to secured resources through [Azure RBAC](../role-based-access-control/overview.md). Azure SignalR Service defines a set of Azure built-in roles that encompass common sets of permissions for accessing Azure SignalR Service resources. You can also define custom roles for access to Azure SignalR Service resources. ### Resource scope -You may have to determine the scope of access that the security principal should have before you assign any Azure RBAC role to a security principal. It is recommended to only grant the narrowest possible scope. Azure RBAC roles defined at a broader scope are inherited by the resources beneath them. +You might have to determine the scope of access that the security principal should have before you assign any Azure RBAC role to a security principal. We recommend that you grant only the narrowest possible scope. Azure RBAC roles defined at a broader scope are inherited by the resources beneath them. -You can scope access to Azure SignalR resources at the following levels, beginning with the narrowest scope: +You can scope access to Azure SignalR Service resources at the following levels, beginning with the narrowest scope. | Scope | Description | | | |-| **An individual resource.** | Applies to only the target resource. | -| **A resource group.** | Applies to all of the resources in a resource group. | -| **A subscription.** | Applies to all of the resources in a subscription. | -| **A management group.** | Applies to all of the resources in the subscriptions included in a management group. | +| Individual resource | Applies to only the target resource. | +| Resource group | Applies to all of the resources in a resource group. | +| Subscription | Applies to all of the resources in a subscription. 
| +| Management group | Applies to all of the resources in the subscriptions included in a management group. | -## Azure built-in roles for SignalR resources +## Azure built-in roles for Azure SignalR Service resources | Role | Description | Use case | | - | | -- |-| [SignalR App Server](../role-based-access-control/built-in-roles.md#signalr-app-server) | Access to Websocket connection creation API and Auth APIs. | Most commonly for an App Server. | -| [SignalR Service Owner](../role-based-access-control/built-in-roles.md#signalr-service-owner) | Full access to all data-plane APIs, including REST APIs, WebSocket connection creation API and Auth APIs. | Use for **Serverless mode** for Authorization with Microsoft Entra ID since it requires both REST APIs permissions and Auth API permissions. | -| [SignalR REST API Owner](../role-based-access-control/built-in-roles.md#signalr-rest-api-owner) | Full access to data-plane REST APIs. | Often used to write a tool that manages connections and groups but does **NOT** make connections or call Auth APIs. | -| [SignalR REST API Reader](../role-based-access-control/built-in-roles.md#signalr-rest-api-reader) | Read-only access to data-plane REST APIs. | Commonly used to write a monitoring tool that calls **ONLY** SignalR data-plane **READONLY** REST APIs. | +| [SignalR App Server](../role-based-access-control/built-in-roles.md#signalr-app-server) | Access to the WebSocket connection creation API and authentication APIs. | Most commonly used for an app server. | +| [SignalR Service Owner](../role-based-access-control/built-in-roles.md#signalr-service-owner) | Full access to all data-plane APIs, including REST APIs, the WebSocket connection creation API, and authentication APIs. | Use for *serverless mode* for authorization with Microsoft Entra ID, because it requires both REST API permissions and authentication API permissions. 
| +| [SignalR REST API Owner](../role-based-access-control/built-in-roles.md#signalr-rest-api-owner) | Full access to data-plane REST APIs. | Often used to write a tool that manages connections and groups, but does *not* make connections or call authentication APIs. | +| [SignalR REST API Reader](../role-based-access-control/built-in-roles.md#signalr-rest-api-reader) | Read-only access to data-plane REST APIs. | Commonly used to write a monitoring tool that calls *only* Azure SignalR Service data-plane read-only REST APIs. | ## Next steps -To learn how to create an Azure application and use Microsoft Entra authorization, see: +- To learn how to create an Azure application and use Microsoft Entra authorization, see [Authorize requests to Azure SignalR Service resources with Microsoft Entra applications](signalr-howto-authorize-application.md). -- [Authorize request to SignalR resources with Microsoft Entra applications](signalr-howto-authorize-application.md)+- To learn how to configure a managed identity and use Microsoft Entra authorization, see [Authorize requests to Azure SignalR Service resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md). -To learn how to configure a managed identity and use Microsoft Entra authorization, see: +- To learn more about roles and role assignments, see [What is Azure role-based access control (Azure RBAC)?](../role-based-access-control/overview.md). -- [Authorize request to SignalR resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md)+- To learn how to create custom roles, see [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role). 
-To learn more about roles and role assignments, see: --- [What is Azure role-based access control (Azure RBAC)?](../role-based-access-control/overview.md).--To learn how to create custom roles, see: --- [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role)--To learn how to use only Microsoft Entra authentication, see: --- [Disable local authentication](./howto-disable-local-auth.md)+- To learn how to use only Microsoft Entra authentication, see [Disable local authentication](./howto-disable-local-auth.md). |
azure-signalr | Signalr Howto Authorize Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-application.md | Title: Authorize requests to SignalR resources with Microsoft Entra applications -description: This article provides information about authorizing request to SignalR resources with Microsoft Entra applications + Title: Authorize requests to Azure SignalR Service resources with Microsoft Entra applications +description: This article provides information about authorizing requests to Azure SignalR Service resources by using Microsoft Entra applications. Last updated 02/03/2023 ms.devlang: csharp -# Authorize requests to SignalR resources with Microsoft Entra applications +# Authorize requests to Azure SignalR Service resources with Microsoft Entra applications Azure SignalR Service supports Microsoft Entra ID for authorizing requests from [Microsoft Entra applications](../active-directory/develop/app-objects-and-service-principals.md). -This article shows how to configure your SignalR resource and codes to authorize requests to a SignalR resource from a Microsoft Entra application. +This article shows how to configure your Azure SignalR Service resource and code to authorize requests to the resource from a Microsoft Entra application. ## Register an application -The first step is to register a Microsoft Entra application. +The first step is to register a Microsoft Entra application: -1. On the [Azure portal](https://portal.azure.com/), search for and select **Microsoft Entra ID** -2. Under **Manage** section, select **App registrations**. -3. Select **New registration**. - ![Screenshot of registering an application.](./media/signalr-howto-authorize-application/register-an-application.png) -4. Enter a display **Name** for your application. -5. Select **Register** to confirm the register. +1. In the [Azure portal](https://portal.azure.com/), search for and select **Microsoft Entra ID**. +2.
Under **Manage**, select **App registrations**. +3. Select **New registration**. The **Register an application** pane opens. -Once you have your application registered, you can find the **Application (client) ID** and **Directory (tenant) ID** under its Overview page. These GUIDs can be useful in the following steps. + ![Screenshot of the pane for registering an application.](./media/signalr-howto-authorize-application/register-an-application.png) +5. For **Name**, enter a display name for your application. +6. Select **Register** to confirm the registration. -![Screenshot of an application.](./media/signalr-howto-authorize-application/application-overview.png) +After you register your application, you can find the **Application (client) ID** and **Directory (tenant) ID** values on the application's overview page. These GUIDs can be useful in the following steps. -To learn more about registering an application, see +![Screenshot of overview information for a registered application.](./media/signalr-howto-authorize-application/application-overview.png) -- [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).+To learn more about registering an application, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). ## Add credentials You can add both certificates and client secrets (a string) as credentials to yo ### Client secret -The application requires a client secret to prove its identity when requesting a token. To create a client secret, follow these steps. +The application requires a client secret to prove its identity when it's requesting a token. To create a client secret, follow these steps: -1. Under **Manage** section, select **Certificates & secrets** +1. Under **Manage**, select **Certificates & secrets**. 1. 
On the **Client secrets** tab, select **New client secret**.- ![Screenshot of creating a client secret.](./media/signalr-howto-authorize-application/new-client-secret.png) -1. Enter a **description** for the client secret, and choose a **expire time**. -1. Copy the value of the **client secret** and then paste it to a secure location. ++ ![Screenshot of selections for creating a client secret.](./media/signalr-howto-authorize-application/new-client-secret.png) +1. Enter a description for the client secret, and choose an expiration time. +1. Copy the value of the client secret and then paste it in a secure location. > [!NOTE]- > The secret will display only once. + > The secret appears only once. ### Certificate -You can also upload a certification instead of creating a client secret. --![Screenshot of uploading a certification.](./media/signalr-howto-authorize-application/upload-certificate.png) +You can upload a certificate instead of creating a client secret. -To learn more about adding credentials, see +![Screenshot of selections for uploading a certificate.](./media/signalr-howto-authorize-application/upload-certificate.png) -- [Add credentials](../active-directory/develop/quickstart-register-app.md#add-credentials)+To learn more about adding credentials, see [Add credentials](../active-directory/develop/quickstart-register-app.md#add-credentials). -## Add role assignments on Azure portal +## Add role assignments in the Azure portal -The following steps describe how to assign a `SignalR App Server` role to a service principal (application) over a SignalR resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). +The following steps describe how to assign a SignalR App Server role to a service principal (application) over an Azure SignalR Service resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). 
> [!NOTE]-> A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md) +> A role can be assigned to any scope, including management group, subscription, resource group, or single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md). -1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource. +1. In the [Azure portal](https://portal.azure.com/), go to your Azure SignalR Service resource. 1. Select **Access control (IAM)**. -1. Select **Add > Add role assignment**. +1. Select **Add** > **Add role assignment**. - :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open."::: + :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows the page for access control and selections for adding a role assignment."::: 1. On the **Role** tab, select **SignalR App Server**. -1. On the **Members** tab, select **User, group, or service principal**, and then select **Select members**. +1. On the **Members** tab, select **User, group, or service principal**, and then choose **Select members**. -1. Search for and select the application that to which you'd like to assign the role. +1. Search for and select the application to which you want to assign the role. 1. On the **Review + assign** tab, select **Review + assign** to assign the role. > [!IMPORTANT]-> Azure role assignments may take up to 30 minutes to propagate. +> Azure role assignments might take up to 30 minutes to propagate. 
-To learn more about how to assign and manage Azure role assignments, see these articles: +To learn more about how to assign and manage Azure roles, see these articles: - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)-- [Assign Azure roles using Azure CLI](../role-based-access-control/role-assignments-cli.md)+- [Assign Azure roles using the Azure CLI](../role-based-access-control/role-assignments-cli.md) - [Assign Azure roles using Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md) ## Configure your app The best practice is to configure identity and credentials in your environment v | Variable | Description | | - | | | `AZURE_TENANT_ID` | The Microsoft Entra tenant ID. |-| `AZURE_CLIENT_ID` | The client(application) ID of an App Registration in the tenant. | -| `AZURE_CLIENT_SECRET` | A client secret that was generated for the App Registration. | -| `AZURE_CLIENT_CERTIFICATE_PATH` | A path to a certificate and private key pair in PEM or PFX format, which can authenticate the App Registration. | -| `AZURE_USERNAME` | The username, also known as upn, of a Microsoft Entra user account. | -| `AZURE_PASSWORD` | The password of the Microsoft Entra user account. Password isn't supported for accounts with MFA enabled. | +| `AZURE_CLIENT_ID` | The client (application) ID of an app registration in the tenant. | +| `AZURE_CLIENT_SECRET` | A client secret that was generated for the app registration. | +| `AZURE_CLIENT_CERTIFICATE_PATH` | A path to a certificate and private key pair in PEM or PFX format, which can authenticate the app registration. | +| `AZURE_USERNAME` | The username, also known as User Principal Name (UPN), of a Microsoft Entra user account. 
| +| `AZURE_PASSWORD` | The password of the Microsoft Entra user account. A password isn't supported for accounts with multifactor authentication enabled. | -You can use either [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) or [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) to configure your SignalR endpoints. +You can use either [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) or [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) to configure your Azure SignalR Service endpoints. Here's the code for `DefaultAzureCredential`: ```C# services.AddSignalR().AddAzureSignalR(option => services.AddSignalR().AddAzureSignalR(option => }); ``` -Or using `EnvironmentalCredential` directly. +Here's the code for `EnvironmentCredential`: ```C# services.AddSignalR().AddAzureSignalR(option => services.AddSignalR().AddAzureSignalR(option => }); ``` -To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential Class](/dotnet/api/overview/azure/identity-readme#defaultazurecredential). +To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential class](/dotnet/api/overview/azure/identity-readme#defaultazurecredential). -#### Use different credentials while using multiple endpoints +#### Use endpoint-specific credentials -For some reason, you may want to use different credentials for different endpoints. +In your organization, you might want to use different credentials for different endpoints. -In this scenario, you can use [ClientSecretCredential](/dotnet/api/azure.identity.clientsecretcredential) or [ClientCertificateCredential](/dotnet/api/azure.identity.clientcertificatecredential). 
+In this scenario, you can use [ClientSecretCredential](/dotnet/api/azure.identity.clientsecretcredential) or [ClientCertificateCredential](/dotnet/api/azure.identity.clientcertificatecredential): ```csharp services.AddSignalR().AddAzureSignalR(option => services.AddSignalR().AddAzureSignalR(option => }); ``` -### Azure Functions SignalR bindings +### Azure SignalR Service bindings in Azure Functions -Azure Functions SignalR bindings use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) on portal or [`local.settings.json`](../azure-functions/functions-develop-local.md#local-settings-file) at local to configure Microsoft Entra application identities to access your SignalR resources. +Azure SignalR Service bindings in Azure Functions use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) in the portal or [local.settings.json](../azure-functions/functions-develop-local.md#local-settings-file) locally to configure Microsoft Entra application identities to access your Azure SignalR Service resources. -Firstly, you need to specify the service URI of the SignalR Service, whose key is `serviceUri` starting with a **connection name prefix** (defaults to `AzureSignalRConnectionString`) and a separator (`__` on Azure portal and `:` in the local.settings.json file). The connection name can be customized with the binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). Continue reading to find the sample. +First, you need to specify the service URI of Azure SignalR Service. The key of the service URI is `serviceUri`. It starts with a connection name prefix (which defaults to `AzureSignalRConnectionString`) and a separator. The separator is an underscore (`__`) in the Azure portal and a colon (`:`) in the *local.settings.json* file. 
You can customize the connection name by using the binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). Continue reading to find the sample. -Then you choose to configure your Microsoft Entra application identity in [pre-defined environment variables](#configure-identity-in-pre-defined-environment-variables) or [in SignalR specified variables](#configure-identity-in-signalr-specified-variables). +Then, you choose whether to configure your Microsoft Entra application identity in [predefined environment variables](#configure-an-identity-in-predefined-environment-variables) or in [SignalR-specified variables](#configure-an-identity-in-signalr-specified-variables). -#### Configure identity in pre-defined environment variables +#### Configure an identity in predefined environment variables -See [Environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables) for the list of pre-defined environment variables. When you have multiple services, we recommend that you use the same application identity, so that you don't need to configure the identity for each service. These environment variables might also be used by other services according to the settings of other services. +See [Environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables) for the list of predefined environment variables. When you have multiple services, we recommend that you use the same application identity, so that you don't need to configure the identity for each service. Other services might also use these environment variables, based on the settings of those services. -For example, to use client secret credentials, configure as follows in the `local.settings.json` file. 
+For example, to use client secret credentials, configure the identity as follows in the *local.settings.json* file: ```json { For example, to use client secret credentials, configure as follows in the `loca } ``` -On Azure portal, add settings as follows: +In the Azure portal, add settings as follows: ```bash <CONNECTION_NAME_PREFIX>__serviceUri=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net AZURE_TENANT_ID = ... AZURE_CLIENT_SECRET = ... ``` -#### Configure identity in SignalR specified variables --The SignalR specified variables share the same key prefix with `serviceUri` key. Here's the list of variables you might use: +#### Configure an identity in SignalR-specified variables -- clientId-- clientSecret-- tenantId+SignalR-specified variables share the same key prefix with the `serviceUri` key. Here's the list of variables that you might use: -Here are the samples to use client secret credentials: +- `clientId` +- `clientSecret` +- `tenantId` -In the `local.settings.json` file: +Here are the samples to use client secret credentials in the *local.settings.json* file: ```json { In the `local.settings.json` file: } ``` -On Azure portal, add settings as follows: +In the Azure portal, add settings as follows: ```bash <CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net On Azure portal, add settings as follows: See the following related articles: -- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md)-- [Authorize request to SignalR resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md)+- [Authorize access with Microsoft Entra ID for Azure SignalR Service](signalr-concept-authorize-azure-active-directory.md) +- [Authorize requests to Azure SignalR Service resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md) - [Disable local authentication](./howto-disable-local-auth.md) |
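Putting the pieces together, a complete *local.settings.json* sketch that uses the SignalR-specified variables with the default connection name prefix (`AzureSignalRConnectionString`) and the local colon separator might look like the following. This is an illustrative sketch only; the resource name, tenant ID, client ID, and client secret are placeholders:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureSignalRConnectionString:serviceUri": "https://<SIGNALR_RESOURCE_NAME>.service.signalr.net",
    "AzureSignalRConnectionString:tenantId": "<TENANT_ID>",
    "AzureSignalRConnectionString:clientId": "<CLIENT_ID>",
    "AzureSignalRConnectionString:clientSecret": "<CLIENT_SECRET>"
  }
}
```

Because these keys share the connection name prefix, they don't affect other services that read the predefined `AZURE_*` environment variables.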
azure-signalr | Signalr Howto Authorize Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md | Title: Authorize requests to SignalR resources with Microsoft Entra managed identities -description: This article provides information about authorizing request to SignalR resources with Microsoft Entra managed identities + Title: Authorize requests to Azure SignalR Service resources with Microsoft Entra managed identities +description: This article provides information about authorizing requests to Azure SignalR Service resources by using Microsoft Entra managed identities. Last updated 03/28/2023 ms.devlang: csharp -# Authorize requests to SignalR resources with Microsoft Entra managed identities +# Authorize requests to Azure SignalR Service resources with Microsoft Entra managed identities -Azure SignalR Service supports Microsoft Entra ID for authorizing requests from [Microsoft Entra managed identities -](../active-directory/managed-identities-azure-resources/overview.md). +Azure SignalR Service supports Microsoft Entra ID for authorizing requests from [Microsoft Entra managed identities](../active-directory/managed-identities-azure-resources/overview.md). -This article shows how to configure your SignalR resource and code to authorize requests to a SignalR resource from a managed identity. +This article shows how to configure your Azure SignalR Service resource and code to authorize requests to the resource from a managed identity. ## Configure managed identities The first step is to configure managed identities. -This example shows you how to configure `System-assigned managed identity` on a `Virtual Machine` using the Azure portal. +This example shows you how to configure a system-assigned managed identity on a virtual machine (VM) by using the Azure portal: -1. Open [Azure portal](https://portal.azure.com/), Search for and select a Virtual Machine. -1. 
Under **Settings** section, select **Identity**. -1. On the **System assigned** tab, toggle the **Status** to **On**. - ![Screenshot of an application.](./media/signalr-howto-authorize-managed-identity/identity-virtual-machine.png) +1. In the [Azure portal](https://portal.azure.com/), search for and select a VM. +1. Under **Settings**, select **Identity**. +1. On the **System assigned** tab, switch **Status** to **On**. ++ ![Screenshot of selections for turning on system-assigned managed identities for a virtual machine.](./media/signalr-howto-authorize-managed-identity/identity-virtual-machine.png) 1. Select the **Save** button to confirm the change. -To learn how to create user-assigned managed identities, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) +To learn how to create user-assigned managed identities, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). 
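As an alternative to the portal steps above, you can enable a system-assigned managed identity on an existing VM with the Azure CLI. This is an illustrative sketch; the resource group and VM names are placeholders:

```bash
# Sketch: turn on the system-assigned managed identity for a VM by using the Azure CLI.
# The angle-bracket values are placeholders.
az vm identity assign --resource-group <RESOURCE_GROUP> --name <VM_NAME>
```

The command returns the identity's principal ID, which you can use later when you assign roles to the identity.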
To learn more about configuring managed identities, see one of these articles: - [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) - [Configure managed identities for Azure resources on an Azure VM using PowerShell](../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)-- [Configure managed identities for Azure resources on an Azure VM using Azure CLI](../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)+- [Configure managed identities for Azure resources on an Azure VM using the Azure CLI](../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md) - [Configure managed identities for Azure resources on an Azure VM using templates](../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md) - [Configure a VM with managed identities for Azure resources using an Azure SDK](../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md) -### For App service and Azure Functions --See [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md). +To learn how to configure managed identities for Azure App Service and Azure Functions, see [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md). -## Add role assignments on Azure portal +## Add role assignments in the Azure portal -The following steps describe how to assign a `SignalR App Server` role to a system-assigned identity over a SignalR resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). +The following steps describe how to assign a SignalR App Server role to a system-assigned identity over an Azure SignalR Service resource. 
For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). > [!NOTE]-> A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md) +> A role can be assigned to any scope, including management group, subscription, resource group, or single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md). -1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource. +1. In the [Azure portal](https://portal.azure.com/), go to your Azure SignalR Service resource. 1. Select **Access control (IAM)**. -1. Select **Add > Add role assignment**. +1. Select **Add** > **Add role assignment**. - :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open."::: + :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows the page for access control and selections for adding a role assignment."::: 1. On the **Role** tab, select **SignalR App Server**. -1. On the **Members** tab, select **Managed identity**, and then select **Select members**. +1. On the **Members** tab, select **Managed identity**, and then choose **Select members**. 1. Select your Azure subscription. -1. Select **System-assigned managed identity**, search for a virtual machine to which you'd like to assign the role, and then select it. +1. Select **System-assigned managed identity**, search for a virtual machine to which you want to assign the role, and then select it. 1. On the **Review + assign** tab, select **Review + assign** to assign the role. 
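The portal steps above can also be scripted. The following Azure CLI sketch assigns the SignalR App Server role to a VM's system-assigned managed identity; the object ID, subscription, and resource names are placeholders that you replace with your own values:

```bash
# Sketch: assign the SignalR App Server role to a VM's system-assigned managed identity.
# All angle-bracket values are placeholders.
az role assignment create \
  --role "SignalR App Server" \
  --assignee-object-id "<VM_IDENTITY_PRINCIPAL_ID>" \
  --assignee-principal-type ServicePrincipal \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.SignalRService/signalR/<SIGNALR_RESOURCE_NAME>"
```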
> [!IMPORTANT]-> Azure role assignments may take up to 30 minutes to propagate. +> Azure role assignments might take up to 30 minutes to propagate. -To learn more about how to assign and manage Azure role assignments, see these articles: +To learn more about how to assign and manage Azure roles, see these articles: - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)-- [Assign Azure roles using Azure CLI](../role-based-access-control/role-assignments-cli.md)+- [Assign Azure roles using the Azure CLI](../role-based-access-control/role-assignments-cli.md) - [Assign Azure roles using Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md) ## Configure your app -### App Server +### App server -#### Using system-assigned identity +#### Use a system-assigned identity -You can use either [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential) or [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential) to configure your SignalR endpoints. However, the best practice is to use `ManagedIdentityCredential` directly. +You can use either [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential) or [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential) to configure your Azure SignalR Service endpoints. The best practice is to use `ManagedIdentityCredential` directly. -The system-assigned managed identity is used by default, but **make sure that you don't configure any environment variables** that the [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) preserved if you were using `DefaultAzureCredential`. 
Otherwise it falls back to use `EnvironmentCredential` to make the request and it results to a `Unauthorized` response in most cases. +The system-assigned managed identity is used by default, but *make sure that you don't configure any environment variables* that [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) preserved if you use `DefaultAzureCredential`. Otherwise, Azure SignalR Service falls back to use `EnvironmentCredential` to make the request, which usually results in an `Unauthorized` response. ```C# services.AddSignalR().AddAzureSignalR(option => services.AddSignalR().AddAzureSignalR(option => }); ``` -#### Using user-assigned identity +#### Use a user-assigned identity Provide `ClientId` while creating the `ManagedIdentityCredential` object. > [!IMPORTANT]-> Use **Client Id**, not the Object (principal) ID even if they are both GUID! +> Use the client ID, not the object (principal) ID, even if they're both GUIDs. ```C# services.AddSignalR().AddAzureSignalR(option => services.AddSignalR().AddAzureSignalR(option => }; ``` -### Azure Functions SignalR bindings +### Azure SignalR Service bindings in Azure Functions -Azure Functions SignalR bindings use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) on portal or [`local.settings.json`](../azure-functions/functions-develop-local.md#local-settings-file) at local to configure managed identity to access your SignalR resources. +Azure SignalR Service bindings in Azure Functions use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) in the portal or [local.settings.json](../azure-functions/functions-develop-local.md#local-settings-file) locally to configure a managed identity to access your Azure SignalR Service resources. -You might need a group of key-value pairs to configure an identity. 
The keys of all the key-value pairs must start with a **connection name prefix** (defaults to `AzureSignalRConnectionString`) and a separator (`__` on portal and `:` at local). The prefix can be customized with binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). +You might need a group of key/value pairs to configure an identity. The keys of all the key/value pairs must start with a *connection name prefix* (which defaults to `AzureSignalRConnectionString`) and a separator. The separator is an underscore (`__`) in the portal and a colon (`:`) locally. You can customize the prefix by using the binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). -#### Using system-assigned identity +#### Use a system-assigned identity -If you only configure the service URI, then the `DefaultAzureCredential` is used. This class is useful when you want to share the same configuration on Azure and local development environments. To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential). +If you configure only the service URI, you use the `DefaultAzureCredential` class. This class is useful when you want to share the same configuration on Azure and local development environments. To learn how it works, see [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential). -In the Azure portal, use the following example to configure a `DefaultAzureCredential`. If you don't configure any [environment variables listed here](/dotnet/api/overview/azure/identity-readme#environment-variables), then the system-assigned identity is used to authenticate. +In the Azure portal, use the following example to configure `DefaultAzureCredential`. 
If you don't configure any of [these environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables), the system-assigned identity is used for authentication. ```bash <CONNECTION_NAME_PREFIX>__serviceUri=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net ``` -Here's a config sample of `DefaultAzureCredential` in the `local.settings.json` file. At the local scope there's no managed identity, and the authentication via Visual Studio, Azure CLI, and Azure PowerShell accounts are attempted in order. +Here's a configuration sample of `DefaultAzureCredential` in the *local.settings.json* file. At the local scope, there's no managed identity. Authentication via Visual Studio, the Azure CLI, and Azure PowerShell accounts is attempted in order. ```json { Here's a config sample of `DefaultAzureCredential` in the `local.settings.json` } ``` -If you want to use system-assigned identity independently and without the influence of [other environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables), you should set the `credential` key with the connection name prefix to `managedidentity`. Here's an application settings sample: +If you want to use a system-assigned identity independently and without the influence of [other environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables), set the `credential` key with the connection name prefix to `managedidentity`. Here's a sample for application settings: ```bash <CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net <CONNECTION_NAME_PREFIX>__credential = managedidentity ``` -#### Using user-assigned identity +#### Use a user-assigned identity -If you want to use user-assigned identity, you need to assign `clientId`in addition to the `serviceUri` and `credential` keys with the connection name prefix. 
Here's the application settings sample: +If you want to use a user-assigned identity, you need to assign `clientId` in addition to `serviceUri` and `credential` keys with the connection name prefix. Here's a sample for application settings: ```bash <CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net If you want to use user-assigned identity, you need to assign `clientId`in addit See the following related articles: -- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md)-- [Authorize request to SignalR resources with Microsoft Entra applications](signalr-howto-authorize-application.md)+- [Authorize access with Microsoft Entra ID for Azure SignalR Service](signalr-concept-authorize-azure-active-directory.md) +- [Authorize requests to Azure SignalR Service resources with Microsoft Entra applications](signalr-howto-authorize-application.md) - [Disable local authentication](./howto-disable-local-auth.md) |
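For completeness, the full set of application settings for a user-assigned identity (the `serviceUri`, `credential`, and `clientId` keys described above, joined to the connection name prefix with the portal's `__` separator) might look like the following sketch. The client ID value is an illustrative placeholder:

```bash
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
<CONNECTION_NAME_PREFIX>__credential = managedidentity
<CONNECTION_NAME_PREFIX>__clientId = <CLIENT_ID_OF_USER_ASSIGNED_IDENTITY>
```

Setting `credential` to `managedidentity` ensures that the managed identity is used even when other environment variables that `EnvironmentCredential` reads are present.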
azure-signalr | Signalr Howto Reverse Proxy Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-reverse-proxy-overview.md | There are several general practices to follow when using a reverse proxy in fron * Make sure to rewrite the incoming HTTP [HOST header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Host) with the Azure SignalR service URL, e.g. `https://demo.service.signalr.net`. Azure SignalR is a multi-tenant service, and it relies on the `HOST` header to resolve to the correct endpoint. For example, when [configuring Application Gateway](./signalr-howto-work-with-app-gateway.md#create-an-application-gateway-instance) for Azure SignalR, select **Yes** for the option *Override with new host name*. -* When your client goes through your reverse proxy to Azure SignalR, set `ClientEndpoint` as your reverse proxy URL. When your client *negotiate*s with your hub server, the hub server will return the URL defined in `ClientEndpoint` for your client to connect. [Check here for more details.](./concept-connection-string.md#client-and-server-endpoints) +* When your client goes through your reverse proxy to Azure SignalR, set `ClientEndpoint` as your reverse proxy URL. When your client *negotiate*s with your hub server, the hub server will return the URL defined in `ClientEndpoint` for your client to connect. [Check here for more details.](./concept-connection-string.md#provide-client-and-server-endpoints) There are two ways to configure `ClientEndpoint`: * Add a `ClientEndpoint` section to your ConnectionString: `Endpoint=...;AccessKey=...;ClientEndpoint=<reverse-proxy-URL>` |
azure-signalr | Signalr Howto Troubleshoot Live Trace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-troubleshoot-live-trace.md | Title: How to use live trace tool for Azure SignalR service -description: Learn how to use live trace tool for Azure SignalR service + Title: Use the live trace tool for Azure SignalR Service +description: Learn how to use the live trace tool for Azure SignalR Service. -# How to use live trace tool for Azure SignalR service +# Use the live trace tool for Azure SignalR Service -Live trace tool is a single web application for capturing and displaying live traces in Azure SignalR Service. The live traces can be collected in real time without any dependency on other services. You can enable and disable the live trace feature with a single select. You can also choose any log category that you're interested. +The live trace tool is a single web application for capturing and displaying live traces in Azure SignalR Service. The live traces can be collected in real time without any dependency on other services. ++You can enable and disable the live trace feature with a single selection. You can also choose any log category that you're interested in. > [!NOTE]-> Note that the live traces will be counted as outbound messages. -> Using Microsoft Entra ID to access the live trace tool is not supported. You have to enable **Access Key** in **Keys** settings. +> Live traces are counted as outbound messages. +> +> Using Microsoft Entra ID to access the live trace tool is not supported. -## Launch the live trace tool +## Open the live trace tool -> [!NOTE] -> When enable access key, you'll use access token to authenticate live trace tool. -> Otherwise, you'll use Microsoft Entra ID to authenticate live trace tool. -> You can check whether you enable access key or not in your SignalR Service's Keys page in Azure portal. 
+When you enable an access key, you use an access token to authenticate the live trace tool. Otherwise, you use Microsoft Entra ID to authenticate the tool. ++You can check whether you enabled an access key by going to the **Keys** page for Azure SignalR Service in the Azure portal. -### Steps for access key enabled +### Steps if you enabled an access key -1. Go to the Azure portal and your SignalR Service page. -1. From the menu on the left, under **Monitoring** select **Live trace settings**. +1. Go to the Azure portal and your Azure SignalR Service page. +1. On the left menu, under **Monitoring**, select **Live trace settings**. 1. Select **Enable Live Trace**.-1. Select **Save** button. It will take a moment for the changes to take effect. -1. When updating is complete, select **Open Live Trace Tool**. +1. Select the **Save** button, and then wait for the changes to take effect. +1. Select **Open Live Trace Tool**. - :::image type="content" source="media/signalr-howto-troubleshoot-live-trace/signalr-enable-live-trace.png" alt-text="Screenshot of launching the live trace tool."::: -### Steps for access key disabled +### Steps if you didn't enable an access key #### Assign live trace tool API permission to yourself-1. Go to the Azure portal and your SignalR Service page. ++1. Go to the Azure portal and your Azure SignalR Service page. 1. Select **Access control (IAM)**.-1. In the new page, Click **+Add**, then click **Role assignment**. -1. In the new page, focus on **Job function roles** tab, Select **SignalR Service Owner** role, and then click **Next**. -1. In **Members** page, click **+Select members**. -1. In the new panel, search and select members, and then click **Select**. -1. Click **Review + assign**, and wait for the completion notification. --#### Visit live trace tool -1. Go to the Azure portal and your SignalR Service page. -1. From the menu on the left, under **Monitoring** select **Live trace settings**. +1. 
On the new page, select **+Add**, and then select **Role assignment**. +1. On the new page, select the **Job function roles** tab, select the **SignalR Service Owner** role, and then select **Next**. +1. On the **Members** page, click **+Select members**. +1. On the new panel, search for and select members, and then click **Select**. +1. Select **Review + assign**, and wait for the completion notification. ++#### Open the tool ++1. Go to the Azure portal and your Azure SignalR Service page. +1. On the left menu, under **Monitoring**, select **Live trace settings**. 1. Select **Enable Live Trace**.-1. Select **Save** button. It will take a moment for the changes to take effect. -1. When updating is complete, select **Open Live Trace Tool**. +1. Select the **Save** button, and then wait for the changes to take effect. +1. Select **Open Live Trace Tool**. - :::image type="content" source="media/signalr-howto-troubleshoot-live-trace/signalr-enable-live-trace.png" alt-text="Screenshot of launching the live trace tool."::: #### Sign in with your Microsoft account -1. The live trace tool will pop up a Microsoft sign in window. If no window is pop up, check and allow pop up windows in your browser. -1. Wait for **Ready** showing in the status bar. +1. When the Microsoft sign-in window opens in the live trace tool, enter your credentials. If no sign-in window appears, be sure to allow pop-up windows in your browser. +1. Wait for **Ready** to appear on the status bar. ## Capture live traces -The live trace tool provides functionality to help you capture the live traces for troubleshooting. +In the live trace tool, you can: -* **Capture**: Begin to capture the real time live traces from SignalR Service instance with live trace tool. -* **Clear**: Clear the captured real time live traces. -* **Export**: Export live traces to a file. The current supported file format is CSV file. 
-* **Log filter**: The live trace tool allows you to filter the captured real time live traces with one specific key word. Separators (for example, space, comma, semicolon, and so on), if present, will be treated as part of the key word. +* Begin to capture real-time live traces from the Azure SignalR Service instance. +* Clear the captured real-time live traces. +* Export live traces to a file. The currently supported file format is CSV. +* Filter the captured real-time live traces with one specific keyword. Separators (for example, space, comma, or semicolon), if present, are treated as part of the keyword. -The real time live traces captured by live trace tool contain detailed information for troubleshooting. +The real-time live traces that the tool captures contain detailed information for troubleshooting. | Name | Description |-| | | -| Time | Log event time | -| Log Level | Log event level (Trace/Debug/Informational/Warning/Error) | -| Event Name | Operation name of the event | -| Message | Detailed message of log event | -| Exception | The run-time exception of Azure Web PubSub service | -| Hub | User-defined Hub Name | -| Connection ID | Identity of the connection | -| Connection Type | Type of the connection. Allowed values are `Server` (connections between server and service) and `Client` (connections between client and service)| -| User ID | Identity of the user | -| IP | The IP address of client | -| Server Sticky | Routing mode of client. Allowed values are `Disabled`, `Preferred` and `Required`. For more information, see [ServerStickyMode](https://github.com/Azure/azure-signalr/blob/master/docs/run-asp-net-core.md#serverstickymode) | -| Transport | The transport that the client can use to send HTTP requests. Allowed values are `WebSockets`, `ServerSentEvents` and `LongPolling`. 
For more information, see [HttpTransportType](/dotnet/api/microsoft.aspnetcore.http.connections.httptransporttype) | -| Message Tracing ID | The unique identifier for a message | -| Route Template | The route template of the API | -| Http Method | The Http method (POST/GET/PUT/DELETE) | -| URL | The uniform resource locator | -| Trace ID | The unique identifier to represent a request | -| Status Code | the Http response code | -| Duration | The duration between the request is received and processed | -| Headers | The additional information passed by the client and the server with an HTTP request or response | -| Invocation ID | The unique identifier to represent an invocation (only available for ASP.NET SignalR) | -| Message Type | The type of the message (BroadcastDataMessage\|JoinGroupMessage\|LeaveGroupMessage\|...) | --## Next Steps --In this guide, you learned about how to use live trace tool. Next, learn how to handle the common issues: -* Troubleshooting guides: How to troubleshoot typical issues based on live traces, see [troubleshooting guide](./signalr-howto-troubleshoot-guide.md). -* Troubleshooting methods: For self-diagnosis to find the root cause directly or narrow down the issue, see [troubleshooting methods introduction](./signalr-howto-troubleshoot-method.md). +| | | +| **Time** | Log event time. | +| **Log Level** | Log event level: `Trace`, `Debug`, `Informational`, `Warning`, or `Error`. | +| **Event Name** | Operation name of the log event. | +| **Message** | Detailed message of the log event. | +| **Exception** | Runtime exception of the Azure Web PubSub service. | +| **Hub** | User-defined hub name. | +| **Connection ID** | Identity of the connection. | +| **Connection Type** | Type of the connection. Allowed values are `Server` (connections between server and service) and `Client` (connections between client and service).| +| **User ID** | Identity of the user. | +| **IP** | IP address of the client. 
| +| **Server Sticky** | Routing mode of the client. Allowed values are `Disabled`, `Preferred`, and `Required`. For more information, see [ServerStickyMode](https://github.com/Azure/azure-signalr/blob/master/docs/run-asp-net-core.md#serverstickymode). | +| **Transport** | Transport that the client can use to send HTTP requests. Allowed values are `WebSockets`, `ServerSentEvents`, and `LongPolling`. For more information, see [HttpTransportType](/dotnet/api/microsoft.aspnetcore.http.connections.httptransporttype). | +| **Message Tracing ID** | Unique identifier for a message. | +| **Route Template** | Route template of the API. | +| **Http Method** | HTTP method: `POST`, `GET`, `PUT`, or `DELETE`. | +| **URL** | Uniform resource locator. | +| **Trace ID** | Unique identifier to represent a request. | +| **Status Code** | HTTP response code. | +| **Duration** | Duration between receiving and processing the request. | +| **Headers** | Additional information that the client and the server pass with an HTTP request or response. | +| **Invocation ID** | Unique identifier to represent an invocation (available only for ASP.NET SignalR). | +| **Message Type** | Type of the message. Examples include `BroadcastDataMessage`, `JoinGroupMessage`, and `LeaveGroupMessage`. | ++## Next steps ++Learn how to handle common problems with the live trace tool: ++* To troubleshoot typical problems based on live traces, see the [troubleshooting guide](./signalr-howto-troubleshoot-guide.md). +* For self-diagnosis to find the root cause directly or narrow down the problem, see the [introduction to troubleshooting methods](./signalr-howto-troubleshoot-method.md). |
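The **Export** option produces a CSV whose columns mirror the field names in the table above. As a quick post-processing illustration (not part of the live trace tool itself), the following Node.js sketch counts exported entries by **Log Level**; the sample rows are hypothetical, and the sketch assumes fields contain no embedded commas:

```javascript
// Hypothetical post-processing of a live-trace CSV export.
// Assumes the header row uses the field names from the table above
// and that no field contains an embedded comma.
function summarizeByLogLevel(csv) {
  const [headerLine, ...rows] = csv.trim().split('\n');
  const levelIndex = headerLine.split(',').indexOf('Log Level');
  const counts = {};
  for (const row of rows) {
    const level = row.split(',')[levelIndex];
    counts[level] = (counts[level] || 0) + 1;
  }
  return counts;
}

// Sample rows for illustration only.
const sample = [
  'Time,Log Level,Event Name,Message',
  '2023-10-01T00:00:00Z,Informational,ConnectionStarted,Client connected',
  '2023-10-01T00:00:01Z,Error,ConnectionAborted,Transport error',
  '2023-10-01T00:00:02Z,Error,ConnectionAborted,Transport error',
].join('\n');

console.log(summarizeByLogLevel(sample)); // { Informational: 1, Error: 2 }
```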
azure-signalr | Signalr Reference Data Plane Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-reference-data-plane-rest-api.md | Title: Azure SignalR service data plane REST API reference -description: Describes the REST APIs Azure SignalR service supports to manage the connections and send messages to them. + Title: Azure SignalR Service data-plane REST API reference +description: Learn about the REST APIs that Azure SignalR Service supports to manage connections and send messages to connections. -# Azure SignalR service data plane REST API reference +# Azure SignalR Service data-plane REST API reference -In addition to the classic client-server pattern, Azure SignalR Service provides a set of REST APIs so that you can easily integrate real-time functionality into your serverless architecture. +In addition to the classic client/server pattern, Azure SignalR Service provides a set of REST APIs to help you integrate real-time functionality into your serverless architecture. > [!NOTE]-> Azure SignalR Service only supports using REST API to manage clients connected using ASP.NET Core SignalR. Clients connected using ASP.NET SignalR use a different data protocol that is not currently supported. +> Azure SignalR Service supports using REST APIs only to manage clients connected through ASP.NET Core SignalR. Clients connected through ASP.NET SignalR use a different data protocol that's not currently supported. <a name="serverless"></a> -## Typical serverless Architecture with Azure Functions +## Typical serverless architecture with Azure Functions -The following diagram shows a typical serverless architecture using Azure SignalR Service with Azure Functions. +The following diagram shows a typical serverless architecture that uses Azure SignalR Service with Azure Functions. 
-- The `negotiate` function returns a negotiation response and redirects all clients to SignalR Service.-- The `broadcast` function calls SignalR Service's REST API. The SignalR Service broadcasts the message to all connected clients.+- The `negotiate` function returns a negotiation response and redirects all clients to Azure SignalR Service. +- The `broadcast` function calls the REST API for Azure SignalR Service. Azure SignalR Service broadcasts the message to all connected clients. -In a serverless architecture, clients still have persistent connections to the SignalR Service. -Since there's no application server to handle traffic, clients are in `LISTEN` mode, which means they can only receive messages but can't send messages. -SignalR Service disconnects any client that sends messages because it's an invalid operation. +In a serverless architecture, clients have persistent connections to Azure SignalR Service. Because there's no application server to handle traffic, clients are in `LISTEN` mode. In that mode, clients can receive messages but can't send messages. Azure SignalR Service disconnects any client that sends messages because it's an invalid operation. -You can find a complete sample of using SignalR Service with Azure Functions at [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/RealtimeSignIn). +You can find a complete sample of using Azure SignalR Service with Azure Functions in [this GitHub repository](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/RealtimeSignIn). -## API +### Implement the negotiation endpoint -The following table shows all supported versions of REST API. You can also find the swagger file for each version of REST API. +You should implement a `negotiate` function that returns a redirect negotiation response so that clients can connect to the service. 
-| API Version | Status | Port | Doc | Spec | +A typical negotiation response has this format: ++```json +{ + "url": "https://<service_name>.service.signalr.net/client/?hub=<hub_name>", + "accessToken": "<a typical JWT token>" +} +``` ++The `accessToken` value is generated through the same algorithm described in the [authentication section](#authentication-via-accesskey-in-azure-signalr-service). The only difference is that the `aud` claim should be the same as `url`. ++You should host your negotiation API in `https://<hub_url>/negotiate` so that you can still use a SignalR client to connect to the hub URL. Read more about redirecting clients to Azure SignalR Service in [Client connections](./signalr-concept-internals.md#client-connections). ++## REST API versions ++The following table shows all supported REST API versions. It also provides the Swagger file for each API version. ++| API version | Status | Port | Documentation | Specification | | - | -- | -- | | |-| `20220601` | Latest | Standard | [Doc](./swagger/signalr-data-plane-rest-v20220601.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/V20220601.json) | -| `1.0` | Stable | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json) | -| `1.0-preview` | Obsolete | Standard | [Doc](./swagger/signalr-data-plane-rest-v1-preview.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json) | +| `20220601` | Latest | Standard | [Article](./swagger/signalr-data-plane-rest-v20220601.md) | [Swagger file](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/V20220601.json) | +| `1.0` | Stable | Standard | [Article](./swagger/signalr-data-plane-rest-v1.md) | [Swagger file](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json) | +| `1.0-preview` | Obsolete | Standard | [Article](./swagger/signalr-data-plane-rest-v1-preview.md) | [Swagger 
file](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json) | -The available APIs are listed as following. +The following table lists the available APIs. | API | Path | | -- | |-| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` | -| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` | -| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` | -| [Check if the connection with the given connectionId exists.](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` | -| [Close the client connection.](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` | -| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` | -| [Check if there are any client connections inside the given group.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` | -| [Check if there are any client connections connected for the given user.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` | -| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT 
/api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` | -| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` | -| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` | -| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` | -| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` | -| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` | +| [Broadcast a message to all clients connected to target hub](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` | +| [Broadcast a message to all clients belong to the target user](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` | +| [Send message to the specific connection](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` | +| [Check if the connection with the given connectionId exists](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` | +| [Close the client connection](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` | +| [Broadcast a message to all clients within the target 
group](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` | +| [Check if there are any client connections inside the given group](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` | +| [Check if there are any client connections connected for the given user](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` | +| [Add a connection to the target group](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` | +| [Remove a connection from the target group](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` | +| [Check whether a user exists in the target group](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` | +| [Add a user to the target group](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` | +| [Remove a user from the target group](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` | +| [Remove a user from all groups](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` | -## Using REST API +## Using the REST API -### Authenticate via Azure SignalR Service AccessKey +### Authentication via AccessKey in Azure SignalR Service -In each HTTP request, an authorization header with a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is required to 
authenticate with SignalR Service. +In each HTTP request, an authorization header with a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is required to authenticate with Azure SignalR Service. -#### Signing Algorithm and Signature +#### Signing algorithm and signature `HS256`, namely HMAC-SHA256, is used as the signing algorithm. -Use the `AccessKey` in Azure SignalR Service instance's connection string to sign the generated JWT token. +Use the `AccessKey` value in the Azure SignalR Service instance's connection string to sign the generated JWT. #### Claims -The following claims are required to be included in the JWT token. +The following claims must be included in the JWT. -| Claim Type | Is Required | Description | +| Claim type | Is required | Description | | - | -- | -- |-| `aud` | true | Needs to be the same as your HTTP request URL, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`. | -| `exp` | true | Epoch time when this token expires. | +| `aud` | `true` | Needs to be the same as your HTTP request URL, not including the trailing slash and query parameters. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`. | +| `exp` | `true` | Epoch time when this token expires. | -### Authenticate via Microsoft Entra token +### Authentication via Microsoft Entra token -Similar to authenticating using `AccessKey`, when authenticating using Microsoft Entra token, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request. --The difference is, in this scenario, the JWT Token is generated by Microsoft Entra ID. 
For more information, see [Learn how to generate Microsoft Entra tokens](../active-directory/develop/reference-v2-libraries.md) --You could also use **Role Based Access Control (RBAC)** to authorize the request from your client/server to SignalR Service. For more information, see [Authorize access with Microsoft Entra ID for Azure SignalR Service](./signalr-concept-authorize-azure-active-directory.md) --### Implement Negotiate Endpoint --As shown in the [architecture section](#serverless), you should implement a `negotiate` function that returns a redirect negotiation response so that clients can connect to the service. -A typical negotiation response looks as follows: --```json -{ - "url": "https://<service_name>.service.signalr.net/client/?hub=<hub_name>", - "accessToken": "<a typical JWT token>" -} -``` +Similar to authenticating via `AccessKey`, a [JWT](https://en.wikipedia.org/wiki/JSON_Web_Token) is required to authenticate an HTTP request by using a Microsoft Entra token. -The `accessToken` is generated using the same algorithm described in the [authentication section](#authenticate-via-azure-signalr-service-accesskey). The only difference is the `aud` claim should be the same as `url`. +The difference is that in this scenario, Microsoft Entra ID generates the JWT. For more information, see [Learn how to generate Microsoft Entra tokens](../active-directory/develop/reference-v2-libraries.md). -You should host your negotiate API in `https://<hub_url>/negotiate` so you can still use SignalR client to connect to the hub url. Read more about redirecting client to Azure SignalR Service at [here](./signalr-concept-internals.md#client-connections). +You can also use *role-based access control (RBAC)* to authorize the request from your client or server to Azure SignalR Service. For more information, see [Authorize access with Microsoft Entra ID for Azure SignalR Service](./signalr-concept-authorize-azure-active-directory.md). 
### User-related REST API -In order to the call user-related REST API, each of your clients should identify themselves to SignalR Service. Otherwise SignalR Service can't find target connections from a given user ID. +To call the user-related REST API, each of your clients should identify themselves to Azure SignalR Service. Otherwise, Azure SignalR Service can't find target connections from the user ID. -Client identification can be achieved by including a `nameid` claim in each client's JWT token when they're connecting to SignalR Service. -Then SignalR Service uses the value of `nameid` claim as the user ID of each client connection. +You can achieve client identification by including a `nameid` claim in each client's JWT when it's connecting to Azure SignalR Service. Azure SignalR Service then uses the value of the `nameid` claim as the user ID for each client connection. ### Sample -You can find a complete console app to demonstrate how to manually build a REST API HTTP request in SignalR Service [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Serverless). +In [this GitHub repository](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Serverless), you can find a complete console app to demonstrate how to manually build a REST API HTTP request in Azure SignalR Service. -You can also use [Microsoft.Azure.SignalR.Management](https://www.nuget.org/packages/Microsoft.Azure.SignalR.Management) to publish messages to SignalR Service using the similar interfaces of `IHubContext`. Samples can be found [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Management). For more information, see [How to use Management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md). 
+You can also use [Microsoft.Azure.SignalR.Management](https://www.nuget.org/packages/Microsoft.Azure.SignalR.Management) to publish messages to Azure SignalR Service by using interfaces similar to those of `IHubContext`. You can find samples in [this GitHub repository](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Management). For more information, see [Azure SignalR Service Management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md). -## Limitation +## Limitations -Currently, we have the following limitation for REST API requests: +Currently, REST API requests have the following limitations: - Header size is a maximum of 16 KB. - Body size is a maximum of 1 MB. -If you want to send messages larger than 1 MB, use the Management SDK with `persistent` mode. +If you want to send messages larger than 1 MB, use the Service Management SDK with `persistent` mode. |
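Putting the pieces together, a broadcast is a `POST` to `/api/v1/hubs/{hub}` with the JWT in an authorization header. The following Node.js sketch builds such a request and guards against the documented 1 MB body limit; the endpoint, hub name, token, and `newMessage` target are illustrative placeholders, and the `{ target, arguments }` body shape assumes the v1 message format:

```javascript
// Sketch of a broadcast request against the v1 data-plane REST API.
// The endpoint, hub name, and token are placeholders; token generation
// is described in the authentication section above.
function buildBroadcastRequest(endpoint, hub, token, message) {
  // Assumed v1 message shape: a target method name plus arguments.
  const body = JSON.stringify({ target: 'newMessage', arguments: [message] });
  // The REST API limits the body to 1 MB; use the Management SDK in
  // persistent mode for larger payloads.
  if (Buffer.byteLength(body) > 1024 * 1024) {
    throw new Error('Body exceeds the 1 MB REST API limit.');
  }
  return {
    url: `${endpoint}/api/v1/hubs/${hub}`,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${token}`,
      },
      body,
    },
  };
}

// Example usage with placeholder values; pass `options` to fetch(url, options).
const { url, options } = buildBroadcastRequest(
  'https://example.service.signalr.net',
  'myhub',
  '<a JWT signed as described above>',
  'Hello, world!'
);
```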
azure-signalr | Signalr Tutorial Authenticate Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-authenticate-azure-functions.md | Title: "Tutorial: Authentication with Azure Functions - Azure SignalR" -description: In this tutorial, you learn how to authenticate Azure SignalR Service clients for Azure Functions binding + Title: "Tutorial: Authentication with Azure Functions - Azure SignalR Service" +description: In this tutorial, you learn how to authenticate Azure SignalR Service clients for Azure Functions binding. -A step by step tutorial to build a chat room with authentication and private messaging using Azure Functions, App Service Authentication, and SignalR Service. +In this step-by-step tutorial, you build a chat room with authentication and private messaging by using these technologies: -## Introduction +- [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=serverlesschatlab-tutorial-antchu): Back-end API for authenticating users and sending chat messages. +- [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/?WT.mc_id=serverlesschatlab-tutorial-antchu): Service for broadcasting new messages to connected chat clients. +- [Azure Storage](https://azure.microsoft.com/services/storage/?WT.mc_id=serverlesschatlab-tutorial-antchu): Storage service that Azure Functions requires. +- [Azure App Service](https://azure.microsoft.com/products/app-service/): Service that provides user authentication. 
-### Technologies used +## Prerequisites -- [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Backend API for authenticating users and sending chat messages-- [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Broadcast new messages to connected chat clients-- [Azure Storage](https://azure.microsoft.com/services/storage/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Required by Azure Functions--### Prerequisites --- An Azure account with an active subscription.- - If you don't have one, you can [create one for free](https://azure.microsoft.com/free/). -- [Node.js](https://nodejs.org/en/download/) (Version 18.x)-- [Azure Functions Core Tools](../azure-functions/functions-run-local.md?#install-the-azure-functions-core-tools) (Version 4)+- An Azure account with an active subscription. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/). +- [Node.js](https://nodejs.org/en/download/) (version 18.x). +- [Azure Functions Core Tools](../azure-functions/functions-run-local.md?#install-the-azure-functions-core-tools) (version 4). [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ## Create essential resources on Azure -### Create an Azure SignalR service resource +### Create an Azure SignalR Service resource -Your application will access a SignalR Service instance. Use the following steps to create a SignalR Service instance using the Azure portal. --1. Select on the **Create a resource** (**+**) button for creating a new Azure resource. +Your application will access an Azure SignalR Service instance. Use the following steps to create an Azure SignalR Service instance by using the Azure portal: +1. In the [Azure portal](https://portal.azure.com/), select the **Create a resource** (**+**) button. 1. Search for **SignalR Service** and select it. 1. Select **Create**.- 1. 
Enter the following information. | Name | Value | | | - |- | **Resource group** | Create a new resource group with a unique name | - | **Resource name** | A unique name for the SignalR Service instance | - | **Region** | Select a region close to you | - | **Pricing Tier** | Free | - | **Service mode** | Serverless | + | **Resource group** | Create a new resource group with a unique name. | + | **Resource name** | Enter a unique name for the Azure SignalR Service instance. | + | **Region** | Select a region close to you. | + | **Pricing Tier** | Select **Free**. | + | **Service mode** | Select **Serverless**. | 1. Select **Review + Create**. 1. Select **Create**. [Having issues? Let us know.](https://aka.ms/asrs/qsauth) -### Create an Azure Function App and an Azure Storage account --1. From the home page in the Azure portal, select on the **Create a resource** (**+**). +### Create an Azure function app and an Azure storage account +1. From the home page in the Azure portal, select **Create a resource** (**+**). 1. Search for **Function App** and select it. 1. Select **Create**.- 1. Enter the following information. | Name | Value | | | -- |- | **Resource group** | Use the same resource group with your SignalR Service instance | - | **Function App name** | A unique name for the Function app instance | - | **Runtime stack** | Node.js | - | **Region** | Select a region close to you | --1. By default, a new Azure Storage account will also be created in the same resource group together with your function app. If you want to use another storage account in the function app, switch to **Hosting** tab to choose an account. + | **Resource group** | Use the same resource group with your Azure SignalR Service instance. | + | **Function App name** | Enter a unique name for the function app. | + | **Runtime stack** | Select **Node.js**. | + | **Region** | Select a region close to you. | -1. Select **Review + Create**, then select **Create**. +1. 
By default, a new Azure storage account is created in the same resource group together with your function app. If you want to use another storage account in the function app, switch to the **Hosting** tab to choose an account. +1. Select **Review + Create**, and then select **Create**. ## Create an Azure Functions project locally ### Initialize a function app 1. From a command line, create a root folder for your project and change to the folder.+1. Run the following command in your terminal to create a new JavaScript Functions project: -1. Execute the following command in your terminal to create a new JavaScript Functions project. --```bash -func init --worker-runtime node --language javascript --name my-app -``` + ```bash + func init --worker-runtime node --language javascript --name my-app + ``` -By default, the generated project includes a _host.json_ file containing the extension bundles which include the SignalR extension. For more information about extension bundles, see [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md#extension-bundles). +By default, the generated project includes a _host.json_ file that contains the extension bundles that include the SignalR extension. For more information about extension bundles, see [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md#extension-bundles). ### Configure application settings -When running and debugging the Azure Functions runtime locally, application settings are read by the function app from _local.settings.json_. Update this file with the connection strings of the SignalR Service instance and the storage account that you created earlier. +When you run and debug the Azure Functions runtime locally, the function app reads application settings from _local.settings.json_. Update this file with the connection strings of the Azure SignalR Service instance and the storage account that you created earlier. -1. 
Replace the content of _local.settings.json_ with the following code: +Replace the content of _local.settings.json_ with the following code: - ```json - { - "IsEncrypted": false, - "Values": { - "FUNCTIONS_WORKER_RUNTIME": "node", - "AzureWebJobsStorage": "<your-storage-account-connection-string>", - "AzureSignalRConnectionString": "<your-Azure-SignalR-connection-string>" - } - } - ``` +```json +{ + "IsEncrypted": false, + "Values": { + "FUNCTIONS_WORKER_RUNTIME": "node", + "AzureWebJobsStorage": "<your-storage-account-connection-string>", + "AzureSignalRConnectionString": "<your-Azure-SignalR-connection-string>" + } +} +``` - - Enter the Azure SignalR Service connection string into the `AzureSignalRConnectionString` setting. +In the preceding code: - Navigate to your SignalR Service in the Azure portal. In the **Settings** section, locate the **Keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string. +- Enter the Azure SignalR Service connection string into the `AzureSignalRConnectionString` setting. - - Enter the storage account connection string into the `AzureWebJobsStorage` setting. + To get the string, go to your Azure SignalR Service instance in the Azure portal. In the **Settings** section, locate the **Keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string. - Navigate to your storage account in the Azure portal. In the **Security + networking** section, locate the **Access keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string. +- Enter the storage account connection string into the `AzureWebJobsStorage` setting. ++ To get the string, go to your storage account in the Azure portal. 
In the **Security + networking** section, locate the **Access keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string. [Having issues? Let us know.](https://aka.ms/asrs/qsauth) -### Create a function to authenticate users to SignalR Service +### Create a function to authenticate users to Azure SignalR Service -When the chat app first opens in the browser, it requires valid connection credentials to connect to Azure SignalR Service. You'll create an HTTP triggered function named `negotiate` in your function app to return this connection information. +When the chat app first opens in the browser, it requires valid connection credentials to connect to Azure SignalR Service. Create an HTTP trigger function named `negotiate` in your function app to return this connection information. > [!NOTE]-> This function must be named `negotiate` as the SignalR client requires an endpoint that ends in `/negotiate`. +> This function must be named `negotiate` because the SignalR client requires an endpoint that ends in `/negotiate`. -1. From the root project folder, create the `negotiate` function from a built-in template with the following command. +1. From the root project folder, create the `negotiate` function from a built-in template by using the following command: ```bash func new --template "SignalR negotiate HTTP trigger" --name negotiate When the chat app first opens in the browser, it requires valid connection crede 1. Open _negotiate/function.json_ to view the function binding configuration. - The function contains an HTTP trigger binding to receive requests from SignalR clients and a SignalR input binding to generate valid credentials for a client to connect to an Azure SignalR Service hub named `default`. + The function contains an HTTP trigger binding to receive requests from SignalR clients. 
The function also contains a SignalR input binding to generate valid credentials for a client to connect to an Azure SignalR Service hub named `default`. ```json { When the chat app first opens in the browser, it requires valid connection crede } ``` - There's no `userId` property in the `signalRConnectionInfo` binding for local development, but you'll add it later to set the user name of a SignalR connection when you deploy the function app to Azure. + There's no `userId` property in the `signalRConnectionInfo` binding for local development. You'll add it later to set the username of a SignalR connection when you deploy the function app to Azure. 1. Close the _negotiate/function.json_ file. -1. Open _negotiate/index.js_ to view the body of the function. +1. Open _negotiate/index.js_ to view the body of the function: ```javascript module.exports = async function (context, req, connectionInfo) { When the chat app first opens in the browser, it requires valid connection crede }; ``` - This function takes the SignalR connection information from the input binding and returns it to the client in the HTTP response body. The SignalR client uses this information to connect to the SignalR Service instance. + This function takes the SignalR connection information from the input binding and returns it to the client in the HTTP response body. The SignalR client uses this information to connect to the Azure SignalR Service instance. [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ### Create a function to send chat messages -The web app also requires an HTTP API to send chat messages. You'll create an HTTP triggered function named `sendMessage` that sends messages to all connected clients using SignalR Service. +The web app also requires an HTTP API to send chat messages. Create an HTTP trigger function that sends messages to all connected clients that use Azure SignalR Service: -1. 
From the root project folder, create an HTTP trigger function named `sendMessage` from the template with the command: +1. From the root project folder, create an HTTP trigger function named `sendMessage` from the template by using the following command: ```bash func new --name sendMessage --template "Http trigger" The web app also requires an HTTP API to send chat messages. You'll create an HT } ``` - Two changes are made to the original file: + The preceding code makes two changes to the original file: - - Changes the route to `messages` and restricts the HTTP trigger to the `POST` HTTP method. - - Adds a SignalR Service output binding that sends a message returned by the function to all clients connected to a SignalR Service hub named `default`. + - It changes the route to `messages` and restricts the HTTP trigger to the `POST` HTTP method. + - It adds an Azure SignalR Service output binding that sends a message returned by the function to all clients connected to an Azure SignalR Service hub named `default`. 1. Replace the content of _sendMessage/index.js_ with the following code: The web app also requires an HTTP API to send chat messages. You'll create an HT }; ``` - This function takes the body from the HTTP request and sends it to clients connected to SignalR Service, invoking a function named `newMessage` on each client. + This function takes the body from the HTTP request and sends it to clients connected to Azure SignalR Service. It invokes a function named `newMessage` on each client. The function can read the sender's identity and can accept a `recipient` value in the message body to allow you to send a message privately to a single user. You'll use these functionalities later in the tutorial. The web app also requires an HTTP API to send chat messages. You'll create an HT [Having issues? 
Let us know.](https://aka.ms/asrs/qsauth) -### Host the chat client web user interface --The chat application's UI is a simple single-page application (SPA) created with the Vue JavaScript framework using [ASP.NET Core SignalR JavaScript client](/aspnet/core/signalr/javascript-client). For simplicity, the function app hosts the web page. In a production environment, you can use [Static Web Apps](https://azure.microsoft.com/products/app-service/static) to host the web page. +### Host the chat client's web user interface -1. Create a new folder named _content_ in the root directory of your function project. -1. In the _content_ folder, create a new file named _https://docsupdatetracker.net/index.html_. +The chat application's UI is a simple single-page application (SPA) created with the Vue JavaScript framework by using the [ASP.NET Core SignalR JavaScript client](/aspnet/core/signalr/javascript-client). For simplicity, the function app hosts the webpage. In a production environment, you can use [Static Web Apps](https://azure.microsoft.com/products/app-service/static) to host the webpage. +1. Create a folder named _content_ in the root directory of your function project. +1. In the _content_ folder, create a file named _https://docsupdatetracker.net/index.html_. 1. Copy and paste the content of [https://docsupdatetracker.net/index.html](https://github.com/aspnet/AzureSignalR-samples/blob/da0aca70f490f3d8f4c220d0c88466b6048ebf65/samples/ServerlessChatWithAuth/content/https://docsupdatetracker.net/index.html) to your file. Save the file.--1. From the root project folder, create an HTTP trigger function named `index` from the template with the command: +1. From the root project folder, create an HTTP trigger function named `index` from the template by using this command: ```bash func new --name index --template "Http trigger" ``` -1. Modify the content of `index/index.js` to the following: +1. 
Modify the content of `index/index.js` to the following code: ```js const fs = require("fs"); The chat application's UI is a simple single-page application (SPA) created with }; ``` - The function reads the static web page and returns it to the user. + The function reads the static webpage and returns it to the user. -1. Open _index/function.json_, change the `authLevel` of the bindings to `anonymous`. Now the whole file looks like this: +1. Open _index/function.json_, and change the `authLevel` value of the bindings to `anonymous`. Now the whole file looks like this example: ```json { The chat application's UI is a simple single-page application (SPA) created with } ``` -1. Now you can test your app locally. Start the function app with the command: +1. Test your app locally. Start the function app by using this command: ```bash func start ``` -1. Open **http://localhost:7071/api/index** in your web browser. You should be able to see a web page as follows: +1. Open `http://localhost:7071/api/index` in your web browser. A chat webpage should appear. - :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/local-chat-client-ui.png" alt-text="Screenshot of local chat client web user interface."::: + :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/local-chat-client-ui.png" alt-text="Screenshot of a web user interface for a local chat client."::: -1. Enter a message in the chat box and press enter. +1. Enter a message in the chat box. - The message is displayed on the web page. Because the user name of the SignalR client isn't set, we send all messages as "anonymous". + After you select the Enter key, the message appears on the webpage. Because the username of the SignalR client isn't set, you're sending all messages anonymously. [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ## Deploy to Azure and enable authentication -You have been running the function app and chat application locally. 
You'll now deploy them to Azure and enable authentication and private messaging in the application. +You've been running the function app and chat app locally. Now, deploy them to Azure and enable authentication and private messaging. -### Configure function app for authentication +### Configure the function app for authentication -So far, the chat app works anonymously. In Azure, you'll use [App Service Authentication](../app-service/overview-authentication-authorization.md) to authenticate the user. The user ID or username of the authenticated user is passed to the `SignalRConnectionInfo` binding to generate connection information authenticated as the user. +So far, the chat app works anonymously. In Azure, you'll use [App Service authentication](../app-service/overview-authentication-authorization.md) to authenticate the user. The user ID or username of the authenticated user is passed to the `SignalRConnectionInfo` binding to generate connection information authenticated as the user. 1. Open _negotiate/function.json_.--1. Insert a `userId` property to the `SignalRConnectionInfo` binding with value `{headers.x-ms-client-principal-name}`. This value is a [binding expression](../azure-functions/functions-triggers-bindings.md) that sets the user name of the SignalR client to the name of the authenticated user. The binding should now look like this. +1. Insert a `userId` property into the `SignalRConnectionInfo` binding with the value `{headers.x-ms-client-principal-name}`. This value is a [binding expression](../azure-functions/functions-triggers-bindings.md) that sets the username of the SignalR client to the name of the authenticated user. The binding should now look like this example: ```json { So far, the chat app works anonymously. In Azure, you'll use [App Service Authen 1. Save the file. 
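The `{headers.x-ms-client-principal-name}` binding expression is resolved by the Functions host at runtime against the incoming request. As a rough mental model only (this sketch is not the host's actual implementation, and `resolveBindingExpression` is a hypothetical helper), the lookup behaves like this:

```javascript
// Toy illustration only -- not the actual Azure Functions host code.
// Shows how a binding expression such as {headers.x-ms-client-principal-name}
// could be resolved against the incoming HTTP request at runtime.
function resolveBindingExpression(expression, request) {
  const match = /^\{(.+)\}$/.exec(expression);
  if (!match) return expression; // plain literal, nothing to resolve
  // Walk the dotted path into the request object: "headers" -> header name.
  return match[1]
    .split(".")
    .reduce((value, key) => (value == null ? undefined : value[key]), request);
}

// App Service authentication injects this header for authenticated requests.
const request = { headers: { "x-ms-client-principal-name": "user@contoso.com" } };
console.log(resolveBindingExpression("{headers.x-ms-client-principal-name}", request));
// -> user@contoso.com
```

Because the header is only present after App Service authentication is enabled, the `userId` stays empty for anonymous local runs, which matches the earlier local-development behavior.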
-### Deploy function app to Azure +### Deploy the function app to Azure -Deploy the function app to Azure with the following command: +Deploy the function app to Azure by using the following command: ```bash func azure functionapp publish <your-function-app-name> --publish-local-settings func azure functionapp publish <your-function-app-name> --publish-local-settings The `--publish-local-settings` option publishes your local settings from the _local.settings.json_ file to Azure, so you don't need to configure them in Azure again. -### Enable App Service Authentication +### Enable App Service authentication -Azure Functions supports authentication with Microsoft Entra ID, Facebook, Twitter, Microsoft account, and Google. You will use **Microsoft** as the identity provider for this tutorial. +Azure Functions supports authentication with Microsoft Entra ID, Facebook, Twitter, Microsoft account, and Google. You'll use Microsoft as the identity provider for this tutorial. -1. Go to the resource page of your function app on Azure portal. -1. Select **Settings** -> **Authentication**. +1. In the Azure portal, go to the resource page of your function app. +1. Select **Settings** > **Authentication**. 1. Select **Add identity provider**.- :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/function-app-authentication.png" alt-text="Screenshot of the Function App Authentication page."::: --1. Select **Microsoft** from the **Identity provider** list. - :::image type="content" source="media/signalr-tutorial-authenticate-azure-functions/function-app-select-identity-provider.png" alt-text="Screenshot of 'Add an identity provider' page."::: - Azure Functions supports authentication with Microsoft Entra ID, Facebook, Twitter, Microsoft account, and Google. 
For more information about the supported identity providers, see the following articles: + :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/function-app-authentication.png" alt-text="Screenshot of the function app Authentication page and the button for adding an identity provider."::: - - [Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md) - - [Facebook](../app-service/configure-authentication-provider-facebook.md) - - [Twitter](../app-service/configure-authentication-provider-twitter.md) - - [Microsoft account](../app-service/configure-authentication-provider-microsoft.md) - - [Google](../app-service/configure-authentication-provider-google.md) +1. In the **Identity provider** list, select **Microsoft**. Then select **Add**. -1. Select **Add** to complete the settings. An app registration will be created, which associates your identity provider with your function app. + :::image type="content" source="media/signalr-tutorial-authenticate-azure-functions/function-app-select-identity-provider.png" alt-text="Screenshot of the page for adding an identity provider."::: -### Try the application +The completed settings create an app registration that associates your identity provider with your function app. -1. Open **https://\<YOUR-FUNCTION-APP-NAME\>.azurewebsites.net/api/index** +For more information about the supported identity providers, see the following articles: -1. Select **Login** to authenticate with your chosen authentication provider. +- [Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md) +- [Facebook](../app-service/configure-authentication-provider-facebook.md) +- [Twitter](../app-service/configure-authentication-provider-twitter.md) +- [Microsoft account](../app-service/configure-authentication-provider-microsoft.md) +- [Google](../app-service/configure-authentication-provider-google.md) -1. Send public messages by entering them into the main chat box. 
+### Try the application -1. Send private messages by clicking on a username in the chat history. Only the selected recipient will receive these messages. +1. Open `https://<YOUR-FUNCTION-APP-NAME>.azurewebsites.net/api/index`. +1. Select **Login** to authenticate with your chosen authentication provider. +1. Send public messages by entering them in the main chat box. +1. Send private messages by selecting a username in the chat history. Only the selected recipient receives these messages. -Congratulations! You've deployed a real-time, serverless chat app! +Congratulations! You deployed a real-time, serverless chat app. [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ## Clean up resources -To clean up the resources created in this tutorial, delete the resource group using the Azure portal. +To clean up the resources that you created in this tutorial, delete the resource group by using the Azure portal. > [!CAUTION]-> Deleting the resource group deletes all resources contained within it. If the resource group contains resources outside the scope of this tutorial, they will also be deleted. +> Deleting the resource group deletes all the resources that it contains. If the resource group contains resources outside the scope of this tutorial, they're also deleted. [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ## Next steps -In this tutorial, you learned how to use Azure Functions with Azure SignalR Service. Read more about building real-time serverless applications with SignalR Service bindings for Azure Functions. +In this tutorial, you learned how to use Azure Functions with Azure SignalR Service. Read more about building real-time serverless applications with Azure SignalR Service bindings for Azure Functions: > [!div class="nextstepaction"] > [Real-time apps with Azure SignalR Service and Azure Functions](signalr-concept-azure-functions.md) |
azure-vmware | Concepts Private Clouds Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md | Each Azure VMware Solution architectural component has the following function: [!INCLUDE [disk-capabilities-of-the-host](includes/disk-capabilities-of-the-host.md)] -## Azure Region Availability Zone (AZ) to SKU mapping +## Azure Region Availability Zone (AZ) to SKU mapping table -When planning your Azure VMware Solution design, use the following table to understand what SKUs are available in each physical Availability Zone of an [Azure region](https://azure.microsoft.com/explore/global-infrastructure/geographies/#geographies). This is important for placing your private clouds in close proximity to your Azure native workloads, including integrated services such as Azure NetApp Files and Pure Cloud Block Storage (CBS). The Multi-AZ capability for Azure VMware Solution Stretched Clusters is also tagged in the table below. +When planning your Azure VMware Solution design, use the following table to understand what SKUs are available in each physical Availability Zone of an [Azure region](https://azure.microsoft.com/explore/global-infrastructure/geographies/#geographies). -Customer quota for Azure VMware Solution is assigned by Azure region, and you are not able to specify the Availability Zone during private cloud provisioning. An auto selection algorithm is used to balance deployments across the Azure region. If you have a particular Availability Zone you want to deploy to, open an SR with Microsoft requesting a "special placement policy" for your Subscription, Azure region, Availability Zone, and SKU type. This policy will remain in place until you request it be removed or changed. +>[!IMPORTANT] +> This mapping is important for placing your private clouds in close proximity to your Azure native workloads, including integrated services such as Azure NetApp Files and Pure Cloud Block Storage (CBS). 
++The Multi-AZ capability for Azure VMware Solution Stretched Clusters is also tagged in the following table. Customer quota for Azure VMware Solution is assigned by Azure region, and you are not able to specify the Availability Zone during private cloud provisioning. An auto selection algorithm is used to balance deployments across the Azure region. If you have a particular Availability Zone you want to deploy to, open a [Service Request](https://rc.portal.azure.com/#create/Microsoft.Support) with Microsoft requesting a "special placement policy" for your subscription, Azure region, Availability Zone, and SKU type. This policy remains in place until you request it be removed or changed. | Azure region | Availability Zone | SKU | Multi-AZ SDDC | | : | :: | :: | :: | Now that you've covered Azure VMware Solution private cloud concepts, you might <!-- LINKS - external--> [vCSA versions]: https://kb.vmware.com/s/article/2143838+ [ESXi versions]: https://kb.vmware.com/s/article/2143832+ [vSAN versions]: https://kb.vmware.com/s/article/2150753++ |
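Microsoft doesn't publish the auto selection algorithm itself. As a toy mental model only (this is NOT the actual Azure placement logic), balancing deployments across a region can be pictured as a least-loaded zone choice:

```javascript
// Toy illustration only -- NOT the actual Azure placement algorithm.
// Pick the availability zone that currently hosts the fewest private clouds.
function pickZone(deploymentsByZone) {
  return Object.entries(deploymentsByZone).sort(
    ([, countA], [, countB]) => countA - countB
  )[0][0];
}

console.log(pickZone({ AZ01: 5, AZ02: 2, AZ03: 4 })); // -> AZ02
```

A special placement policy, as described above, effectively overrides this balancing for your subscription and pins deployments to the zone you requested.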
backup | About Azure Vm Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-azure-vm-restore.md | This article describes how the [Azure Backup service](./backup-overview.md) rest | [Restore to create a new virtual machine](./backup-azure-arm-restore-vms.md) | Restores the entire VM to OLR (if the source VM still exists) or ALR | <ul><li> If the source VM is lost or corrupt, then you can restore entire VM <li> You can create a copy of the VM <li> You can perform a restore drill for audit or compliance <li> If license for Marketplace Azure VM has expired, [create VM restore](./backup-azure-arm-restore-vms.md#create-a-vm) option can't be used.</ul> | | [Restore disks of the VM](./backup-azure-arm-restore-vms.md#restore-disks) | Restore disks attached to the VM | All disks: This option creates the template and restores the disk. You can edit this template with special configurations (for example, availability sets) to meet your requirements and then use both the template and restore the disk to recreate the VM. | | [Restore specific files within the VM](./backup-azure-restore-files-from-vm.md) | Choose restore point, browse, select files, and restore them to the same (or compatible) OS as the backed-up VM. | If you know which specific files to restore, then use this option instead of restoring the entire VM. 
|-| [Restore an encrypted VM](./backup-azure-vms-encryption.md) | From the portal, restore the disks and then use PowerShell to create the VM | <ul><li> [Encrypted VM with Microsoft Entra ID](../virtual-machines/windows/disk-encryption-windows-aad.md) <li> [Encrypted VM without Microsoft Entra ID](../virtual-machines/windows/disk-encryption-windows.md) <li> [Encrypted VM *with Microsoft Entra ID* migrated to *without Microsoft Entra ID*](../virtual-machines/windows/disk-encryption-faq.yml#can-i-migrate-vms-that-were-encrypted-with-an-azure-ad-app-to-encryption-without-an-azure-ad-app-)</ul> | +| [Restore an encrypted VM](./backup-azure-vms-encryption.md) | From the portal, restore the disks and then use PowerShell to create the VM | <ul><li> [Encrypted VM with Microsoft Entra ID](../virtual-machines/windows/disk-encryption-windows-aad.md) <li> [Encrypted VM without Microsoft Entra ID](../virtual-machines/windows/disk-encryption-windows.md) <li> [Encrypted VM *with Microsoft Entra ID* migrated to *without Microsoft Entra ID*](../virtual-machines/windows/disk-encryption-faq.yml#can-i-migrate-vms-that-were-encrypted-with-a-microsoft-entra-app-to-encryption-without-a-microsoft-entra-app-)</ul> | | [Cross Region Restore](./backup-azure-arm-restore-vms.md#cross-region-restore) | Create a new VM or restore disks to a secondary region (Azure paired region) | <ul><li> **Full outage**: With the cross region restore feature, there's no wait time to recover data in the secondary region. You can initiate restores in the secondary region even before Azure declares an outage. <li> **Partial outage**: Downtime can occur in specific storage clusters where Azure Backup stores your backed-up data or even in-network, connecting Azure Backup and storage clusters associated with your backed-up data. With Cross Region Restore, you can perform a restore in the secondary region using a replica of backed up data in the secondary region. 
<li> **No outage**: You can conduct business continuity and disaster recovery (BCDR) drills for audit or compliance purposes with the secondary region data. This allows you to perform a restore of backed up data in the secondary region even if there isn't a full or partial outage in the primary region for business continuity and disaster recovery drills.</ul> | ## Next steps |
batch | Batch Docker Container Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md | Title: Container workloads on Azure Batch description: Learn how to run and scale apps from container images on Azure Batch. Create a pool of compute nodes that support running container tasks. Previously updated : 07/14/2023 Last updated : 10/12/2023 ms.devlang: csharp, python Use one of the following supported Windows or Linux images to create a pool of V ### Windows support -Batch supports Windows server images that have container support designations. Typically, these image SKU names are suffixed with `-with-containers` or `-with-containers-smalldisk`. Additionally, [the API to list all supported images in Batch](batch-linux-nodes.md#list-of-virtual-machine-images) denotes a `DockerCompatible` capability if the image supports Docker containers. +Batch supports Windows Server images that have container support designations. Typically, these image SKU names are suffixed with `win_2016_mcr_20_10` or `win_2022_mcr_20_10` under the Mirantis publisher and are offered as `windows_2016_with_mirantis_container_runtime` or `windows_2022_with_mirantis_container_runtime`. Additionally, [the API to list all supported images in Batch](batch-linux-nodes.md#list-of-virtual-machine-images) denotes a `DockerCompatible` capability if the image supports Docker containers. You can also create custom images from VMs running Docker on Windows. +> [!NOTE] +> The image SKUs `-with-containers` and `-with-containers-smalldisk` are retired. See the [announcement](https://techcommunity.microsoft.com/t5/containers/updates-to-the-windows-container-runtime-support/ba-p/2788799) for details and alternative container runtime options for the Kubernetes environment.
+ ### Linux support For Linux container workloads, Batch currently supports the following Linux images published by Microsoft Azure Batch in the Azure Marketplace without the need for a custom image. |
batch | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md | Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 09/13/2023 Last updated : 10/12/2023 lrwxrwxrwx 1 root root 12 Oct 31 15:16 lun0 -> ../../../sdc There's no need to translate the reference back to the `sd[X]` mapping in your preparation script, instead refer to the device directly. In this example, this device would be `/dev/disk/azure/scsi1/lun0`. You could provide this ID directly to `fdisk`, `mkfs`, and any other-tooling required for your workflow. +tooling required for your workflow. Alternatively, you can use `lsblk` with `blkid` to map the UUID for the disk. -For more information about Azure data disks in Linux, see this [article](../virtual-machine-scale-sets/tutorial-use-disks-cli.md). +For more information about Azure data disks in Linux, including alternate methods of locating data disks and `/etc/fstab` options, +see this [article](../virtual-machines/linux/add-disk.md). Ensure that there are no dependencies or races as described by the Tip +note before promoting your method into production use. #### Preparing data disks in Windows Batch pools Number Friendly Name Serial Number HealthStatus Opera Where disk number 2 is the uninitialized data disk attached to this compute node. These disks can then be initialized, partitioned, and formatted as required for your workflow. -For more information about Azure data disks in Windows, see this [article](../virtual-machine-scale-sets/tutorial-use-disks-powershell.md). +For more information about Azure data disks in Windows, including sample PowerShell scripts, see this +[article](../virtual-machines/windows/attach-disk-ps.md). Ensure any sample scripts are validated for idempotency before +promotion into production use. ### Collect Batch agent logs |
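Putting the Linux data disk guidance above together, a pool start task could prepare the LUN 0 disk roughly as follows. This is a hedged sketch that assumes a single, empty data disk and an ext4 workload (the mount point `/mnt/batchdata` is an arbitrary choice); validate it for idempotency and races, per the guidance above, before production use:

```shell
#!/usr/bin/env bash
# Sketch only: prepare the data disk at LUN 0 on a Batch Linux node.
# Assumes one empty data disk; adapt the filesystem and mount point as needed.
set -euo pipefail

DEV=/dev/disk/azure/scsi1/lun0

# Partition and format only if the disk carries no signature yet.
if ! blkid "$DEV" >/dev/null 2>&1; then
  parted -s "$DEV" mklabel gpt mkpart primary ext4 0% 100%
  sleep 2                      # allow udev to create the partition symlink
  mkfs.ext4 "${DEV}-part1"
fi

# Mount by UUID so the mount stays stable across device reordering.
UUID=$(blkid -s UUID -o value "${DEV}-part1")
mkdir -p /mnt/batchdata
grep -q "$UUID" /etc/fstab || \
  echo "UUID=$UUID /mnt/batchdata ext4 defaults,nofail 0 2" >> /etc/fstab
mount -a
```

Note that `${DEV}-part1` relies on the Azure udev rules that also create the `lun0` symlink; the `nofail` option keeps a missing disk from blocking boot.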
communication-services | Add Azure Managed Domains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/add-azure-managed-domains.md | Azure Communication Services Email automatically configures the required email a You can optionally configure your MailFrom address to be something other than the default DoNotReply, and also add more than one sender username to your domain. To understand how to configure your sender address, see how to [add multiple sender addresses](add-multiple-senders.md). +> [!NOTE] +> Azure Managed Domains help developers start building applications quickly. Once your application is ready for deployment, you can seamlessly switch to your custom domain. Note that if you continue to rely on an Azure Managed Domain, the MailFrom address displayed in the recipient's mailbox differs from what you see in the portal. The address is dynamically generated and depends on the data location. For example, if the data location is set to the US, the received email address takes the form 'donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.US1.azurecomm.net'. + **Your email domain is now ready to send emails.** ## Next steps |
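The dynamically generated MailFrom address described in the note follows a predictable shape. A small illustration (the `receivedMailFrom` helper is hypothetical, and the GUID and region suffix below are placeholders, not real values):

```javascript
// Illustrative only: compose the MailFrom address a recipient sees when an
// Azure Managed Domain is used. The GUID and suffix below are placeholders.
function receivedMailFrom(resourceGuid, dataLocationSuffix) {
  return `donotreply@${resourceGuid}.${dataLocationSuffix}.azurecomm.net`;
}

console.log(receivedMailFrom("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "US1"));
// -> donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.US1.azurecomm.net
```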
container-apps | Authentication Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-azure-active-directory.md | This article shows you how to configure authentication for Azure Container Apps The Container Apps Authentication feature can automatically create an app registration with the Microsoft identity platform. You can also use a registration that you or a directory admin creates separately. -- [Create a new app registration automatically](#aad-express)-- [Use an existing registration created separately](#aad-advanced)+- [Create a new app registration automatically](#entra-id-express) +- [Use an existing registration created separately](#entra-id-advanced) -## <a name="aad-express"> </a> Option 1: Create a new app registration automatically +## <a name="entra-id-express"> </a> Option 1: Create a new app registration automatically This option is designed to make enabling authentication simple and requires just a few steps. This option is designed to make enabling authentication simple and requires just You're now ready to use the Microsoft identity platform for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration. -## <a name="aad-advanced"> </a>Option 2: Use an existing registration created separately +## <a name="entra-id-advanced"> </a>Option 2: Use an existing registration created separately You can also manually register your application for the Microsoft identity platform, customizing the registration and configuring Container Apps Authentication with the registration details. This approach is useful if you want to use an app registration from a Microsoft Entra tenant other than the one in which your application is defined. 
-### <a name="aad-register"> </a>Create an app registration in Microsoft Entra ID for your container app +### <a name="entra-id-register"> </a>Create an app registration in Microsoft Entra ID for your container app First, you'll create your app registration. As you do so, collect the following information that you'll need later when you configure the authentication in the container app: To register the app, perform the following steps: 1. (Optional) To create a client secret, select **Certificates & secrets** > **Client secrets** > **New client secret**. Enter a description and expiration and select **Add**. Copy the client secret value shown on the page. It won't be shown again. 1. (Optional) To add multiple **Reply URLs**, select **Authentication**. -### <a name="aad-secrets"> </a>Enable Microsoft Entra ID in your container app +### <a name="entra-id-secrets"> </a>Enable Microsoft Entra ID in your container app 1. Sign in to the [Azure portal] and navigate to your app. 1. Select **Authentication** in the menu on the left. Select **Add identity provider**. You can register native clients to request access to your container app's APIs on behalf of a user.
You've now configured a native client application that can request access your container app on behalf of a user. |
container-apps | Dapr Functions Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-functions-extension.md | + + Title: Deploy the Dapr extension for Azure Functions in Azure Container Apps ++description: Learn how to use and deploy the Azure Functions with Dapr extension in your Dapr-enabled container apps. +++++ Last updated : 10/13/2023++# Customer Intent: I'm a developer who wants to use the Dapr extension for Azure Functions in my Dapr-enabled container app +++# Deploy the Dapr extension for Azure Functions in Azure Container Apps ++The [Dapr extension for Azure Functions](../azure-functions/functions-bindings-dapr.md) allows you to easily interact with the Dapr APIs from an Azure Function using triggers and bindings. In this guide, you learn how to: ++- Create an Azure Redis Cache for use as a Dapr statestore +- Deploy an Azure Container Apps environment to host container apps +- Deploy a Dapr-enabled function on Azure Container Apps: + - One function that invokes the other service + - One function that creates an Order and saves it to storage via Dapr statestore +- Verify the interaction between the two apps ++> [!NOTE] +> The Dapr extension for Azure Functions is currently in preview. ++## Prerequisites ++- [An Azure account with an active subscription.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- [Install Azure CLI](/cli/azure/install-azure-cli) ++## Set up the environment ++1. In the terminal, log into your Azure subscription. ++ ```azurecli + az login + ``` ++1. Set up your Azure login with the active subscription you'd like to use. ++ ```azurecli + az account set --subscription {subscription-id-or-name} + ``` ++1. Clone the [Dapr extension for Azure Functions repo](https://github.com/Azure/azure-functions-dapr-extension). 
++ + ```azurecli + git clone https://github.com/Azure/azure-functions-dapr-extension.git + ``` ++## Create resource group ++> [!NOTE] +> Azure Container Apps support for Functions is currently in preview and available in the following regions: +> - Australia East +> - Central US +> - East US +> - East US 2 +> - North Europe +> - South Central US +> - UK South +> - West Europe +> - West US 3 ++Create a resource group for your container app, specifying one of the available regions. ++ ```azurecli + az group create --name {resourceGroupName} --location {region} + ``` ++## Deploy the Azure Function templates ++1. From the root directory, change into the folder holding the template. ++ ```azurecli + cd quickstarts/dotnet-isolated/deploy/aca + ``` ++1. Create a deployment group and specify the template you'd like to deploy. ++ ```azurecli + az deployment group create --resource-group {resourceGroupName} --template-file deploy-quickstart.bicep + ``` ++1. When prompted by the CLI, enter a resource name prefix. The name you choose must be a combination of numbers and lowercase letters, between 3 and 24 characters in length. ++ ``` + Please provide string value for 'resourceNamePrefix' (? for help): {your-resource-name-prefix} + ``` ++ The template deploys the following resources and might take a while: ++ - A Container App Environment + - A Function App + - An Azure Blob Storage Account and a default storage container + - Application Insights + - Log Analytics workspace + - Dapr Component (Azure Cache for Redis) for State Management + - The following .NET Dapr-enabled Functions: + - `OrderService` + - `CreateNewOrder` + - `RetrieveOrder` ++1. In the Azure portal, navigate to your resource group and select **Deployments** to track the deployment status. 
++ + :::image type="content" source="media/dapr-binding-functions/deployment-status.png" alt-text="Screenshot showing the deployment group deployment status in the Azure portal."::: ++## Verify the result ++Once the template has deployed successfully, run the following command to initiate an `OrderService` function that triggers the `CreateNewOrder` process. A new order is created and stored in the Redis statestore. ++In the command: +- Replace `{quickstart-functionapp-url}` with your actual function app URL. For example: `https://daprext-funcapp.wittyglacier-20884174.eastus.azurecontainerapps.io`. +- Replace `{quickstart-functionapp-name}` with your function app name. ++# [PowerShell](#tab/powershell) ++```powershell +Invoke-RestMethod -Uri '{quickstart-functionapp-url}/api/invoke/{quickstart-functionapp-name}/CreateNewOrder' -Method POST -Headers @{"Content-Type" = "application/json"} -Body '{ + "data": { + "value": { + "orderId": "Order22" + } + } +}' +``` ++# [Curl](#tab/curl) ++```sh +curl --location '{quickstart-functionapp-url}/api/invoke/{quickstart-functionapp-name}/CreateNewOrder' \ +--header 'Content-Type: application/json' \ +--data '{ + "data": { + "value": { + "orderId": "Order22" + } + } +}' +``` ++++## View logs ++Data logged via a function app is stored in the `ContainerAppConsoleLogs_CL` custom table in the Log Analytics workspace. Wait a few minutes for the analytics to arrive for the first time before you query the logged data. ++You can view logs through the Azure portal or from the command line. ++### Via the Azure portal ++1. Navigate to your container app environment. ++1. In the left side menu, under **Monitoring**, select **Logs**. ++1. Run a query like the following to verify your function app is receiving the invoked message from Dapr. 
++ + ``` + ContainerAppConsoleLogs_CL + | where RevisionName_s == $revision_name + | where Log_s contains "Order22" + | project Log_s + ``` ++++### Via the Azure CLI ++Run the following command to view the saved state. ++# [PowerShell](#tab/powershell) ++```powershell +Invoke-RestMethod -Uri '{quickstart-functionapp-url}/api/retrieveorder' -Method GET +``` ++# [Curl](#tab/curl) ++```sh +curl --location '{quickstart-functionapp-url}/api/retrieveorder' +``` ++++## Clean up resources ++Once you're finished with this tutorial, run the following command to delete your resource group, along with all the resources you created. ++```azurecli +az group delete --resource-group {resourceGroupName} +``` ++## Related links ++- [Learn more about the Dapr extension for Azure Functions](../azure-functions/functions-bindings-dapr.md) +- [Learn more about connecting Dapr components to your container app](./dapr-component-connection.md) |
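For reference, the invoke URL and JSON order payload used in the verification step can be assembled programmatically. A minimal Python sketch, where the host name and app name are hypothetical placeholders rather than real endpoints:

```python
import json

def build_create_order_request(base_url: str, app_name: str, order_id: str):
    """Assemble the invoke URL and JSON body for the CreateNewOrder function."""
    url = f"{base_url}/api/invoke/{app_name}/CreateNewOrder"
    body = json.dumps({"data": {"value": {"orderId": order_id}}})
    return url, body

# Hypothetical values for illustration only.
url, body = build_create_order_request(
    "https://daprext-funcapp.example.azurecontainerapps.io",
    "daprext-funcapp",
    "Order22",
)
```

Passing the resulting `url` and `body` to any HTTP client is equivalent to the `Invoke-RestMethod` and `curl` commands shown earlier.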
cosmos-db | How To Migrate Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-migrate-native-tools.md | Migrate a collection from the source MongoDB instance to the target Azure Cosmos ### [mongoexport/mongoimport](#tab/export-import) -1. To export the data from the source MongoDB instance, open a terminal and use the ``--host``, ``--username``, and ``--password`` arguments to connect to and export JSON records. -- ```bash - mongoexport \ - --host <hostname><:port> \ - --username <username> \ - --password <password> \ - --db <database-name> \ - --collection <collection-name> \ - --out <filename>.json - ``` --1. Optionally, export a subset of the MongoDB data by adding a ``--query`` argument. This argument ensures that the tool only exports documents that match the filter. -- ```bash - mongoexport \ - --host <hostname><:port> \ - --username <username> \ - --password <password> \ - --db <database-name> \ - --collection <collection-name> \ - --query '{ "quantity": { "$gte": 15 } }' \ - --out <filename>.json - ``` -+1. To export the data from the source MongoDB instance, open a terminal and use any of the three methods listed below. + + 1. Specify the ``--host``, ``--username``, and ``--password`` arguments to connect to and export JSON records. + + ```bash + mongoexport \ + --host <hostname><:port> \ + --username <username> \ + --password <password> \ + --db <database-name> \ + --collection <collection-name> \ + --out <filename>.json + ``` + + 2. Export a subset of the MongoDB data by adding a ``--query`` argument. This argument ensures that the tool only exports documents that match the filter. + + ```bash + mongoexport \ + --host <hostname><:port> \ + --username <username> \ + --password <password> \ + --db <database-name> \ + --collection <collection-name> \ + --query '{ "quantity": { "$gte": 15 } }' \ + --out <filename>.json + ``` + 3. Export data from Azure Cosmos DB for MongoDB vCore. 
+ + ```bash + mongoexport \ + --uri <target-connection-string> \ + --db <database-name> \ + --collection <collection-name> \ + --query '{ "quantity": { "$gte": 15 } }' \ + --out <filename>.json + ``` 1. Import the previously exported file into the target Azure Cosmos DB for MongoDB vCore account. ```bash mongoimport \ --file <filename>.json \ --type json \- --writeConcern="{w:0}" \ --db <database-name> \ --collection <collection-name> \ --ssl \- <target-connection-string> + --uri <target-connection-string> ``` 1. Monitor the terminal output from *mongoimport*. The output prints lines of text to the terminal with updates on the import operation's status. ### [mongodump/mongorestore](#tab/dump-restore) -1. To create a data dump of all data in your MongoDB instance, open a terminal and use the ``--host``, ``--username``, and ``--password`` arguments to dump the data as native BSON. -- ```bash - mongodump \ - --host <hostname><:port> \ - --username <username> \ - --password <password> \ - --out <dump-directory> - ``` --1. Optionally, you can specify the ``--db`` and ``--collection`` arguments to narrow the scope of the data you wish to dump: -- ```bash - mongodump \ - --host <hostname><:port> \ - --username <username> \ - --password <password> \ - --db <database-name> \ - --out <dump-directory> - ``` -- ```bash - mongodump \ - --host <hostname><:port> \ - --username <username> \ - --password <password> \ - --db <database-name> \ - --collection <collection-name> \ - --out <dump-directory> - ``` -+1. To create a data dump of all data in your MongoDB instance, open a terminal and use any of the three methods listed below. + 1. Specify the ``--host``, ``--username``, and ``--password`` arguments to dump the data as native BSON. + + ```bash + mongodump \ + --host <hostname><:port> \ + --username <username> \ + --password <password> \ + --out <dump-directory> + ``` + + 1. 
Specify the ``--db`` and ``--collection`` arguments to narrow the scope of the data you wish to dump: + + ```bash + mongodump \ + --host <hostname><:port> \ + --username <username> \ + --password <password> \ + --db <database-name> \ + --out <dump-directory> + ``` + + ```bash + mongodump \ + --host <hostname><:port> \ + --username <username> \ + --password <password> \ + --db <database-name> \ + --collection <collection-name> \ + --out <dump-directory> + ``` + 1. Create a data dump of all data in your Azure Cosmos DB for MongoDB vCore. + + ```bash + mongodump \ + --uri <target-connection-string> \ + --out <dump-directory> + ``` 1. Observe that the tool created a directory with the native BSON data dumped. The files and folders are organized into a resource hierarchy based on the database and collection names. Each database is a folder and each collection is a `.bson` file. 1. Restore the contents of any specific collection into an Azure Cosmos DB for MongoDB vCore account by specifying the collection's specific BSON file. The filename is constructed using this syntax: `<dump-directory>/<database-name>/<collection-name>.bson`. ```bash mongorestore \ - --writeConcern="{w:0}" \ - --db <database-name> \ - --collection <collection-name> \ - --ssl \ - <dump-directory>/<database-name>/<collection-name>.bson + --db <database-name> \ + --collection <collection-name> \ + --ssl \ + --uri <target-connection-string> \ + <dump-directory>/<database-name>/<collection-name>.bson ``` 1. Monitor the terminal output from *mongorestore*. The output prints lines of text to the terminal with updates on the restore operation's status. |
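The `<dump-directory>/<database-name>/<collection-name>.bson` naming convention used when restoring a specific collection can be sketched in a few lines. This Python sketch only illustrates the path layout of a mongodump output directory; the names are hypothetical:

```python
from pathlib import PurePosixPath

def bson_dump_path(dump_directory: str, database: str, collection: str) -> str:
    """Build the path to a collection's BSON file inside a mongodump output directory."""
    return str(PurePosixPath(dump_directory) / database / f"{collection}.bson")

# Hypothetical database and collection names for illustration.
path = bson_dump_path("dump", "storedb", "orders")
```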
cost-management-billing | Add Change Subscription Administrator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/add-change-subscription-administrator.md | tags: billing Previously updated : 04/10/2023 Last updated : 10/13/2023 To identify accounts for which you're a billing administrator, visit the [Cost M If you're not sure who the account administrator is for a subscription, visit the [Subscriptions page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Then select the subscription you want to check, and then look under **Settings**. Select **Properties** and the account administrator of the subscription is shown in the **Account Admin** box. -If you don't see **Account Admin**, you might have a Microsoft Customer Agreement or Enterprise Agreement account. Instead, [check your access to a Microsoft Customer Agreement](understand-mca-roles.md#check-access-to-a-microsoft-customer-agreement) or see [Manage Azure Enterprise Agreement roles](understand-ea-roles.md). +If you don't see **Account Admin**, you might have a Microsoft Customer Agreement. Instead, [check your access to a Microsoft Customer Agreement](understand-mca-roles.md#check-access-to-a-microsoft-customer-agreement). ## Assign a subscription administrator |
dev-box | How To Configure Dev Box Hibernation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-dev-box-hibernation.md | There are two steps in enabling hibernation; you must enable hibernation on your - Hibernation doesn't support hypervisor-protected code integrity (HVCI)/Memory Integrity features. Dev box disables this feature automatically. - Auto-stop schedules still shut down the dev boxes. If you want to hibernate your dev box, you can do it through the developer portal or using the CLI.+ + > [!NOTE] + > The functionality to schedule dev boxes to hibernate automatically is available in preview. You can sign up for the preview here: [Microsoft Dev Box - Auto-Hibernation Schedules Preview](https://aka.ms/DevBoxHibernationSchedulesPrivatePreviewSignUp). ### Settings not compatible with hibernation You can enable hibernation as you create a dev box definition, providing that th All new dev boxes created in dev box pools that use a dev box definition with hibernation enabled can hibernate in addition to shutting down. If a pool has dev boxes that were created before hibernation was enabled, they continue to only support shutdown. -Dev Box validates your image for hibernate support. Your dev box definition may fail validation if hibernation couldn't be successfully enabled using your image. +Dev Box validates your image for hibernate support. Your dev box definition might fail validation if hibernation couldn't be successfully enabled using your image. You can enable hibernation on a dev box definition by using the Azure portal or the CLI. |
dns | Dns Private Records | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-records.md | + + Title: Private DNS records overview - Azure Private DNS +description: Overview of support for DNS records in Azure Private DNS. ++++ Last updated : 10/12/2023++++# Overview of private DNS records ++This article provides information about support for DNS records in Azure Private DNS zones. For an overview of private DNS zones, see: [What is an Azure Private DNS zone?](private-dns-privatednszone.md) ++## DNS records ++### Record names ++Records are specified by using relative names. A *fully qualified* domain name (FQDN) includes the zone name, whereas a *relative* name does not. For example, the relative record name `www` in the zone `contoso.com` gives the fully qualified record name `www.contoso.com`. ++An *apex* record is a DNS record at the root (or *apex*) of a DNS zone. For example, in the DNS zone `contoso.com`, an apex record also has the fully qualified name `contoso.com` (this is sometimes called a *naked* domain). By convention, the relative name '\@' is used to represent apex records. ++### Record types ++Each DNS record has a name and a type. Records are organized into various types according to the data they contain. The most common type is an 'A' record, which maps a name to an IPv4 address. Another common type is an 'MX' record, which maps a name to a mail server. ++Azure Private DNS supports the following common DNS record types: A, AAAA, CNAME, MX, NS, PTR, SOA, SRV, and TXT. ++### Record sets ++Sometimes you need to create more than one DNS record with a given name and type. For example, suppose the 'www.contoso.com' web site is hosted on two different IP addresses. The website requires two different A records, one for each IP address. Here is an example of a record set: ++```dns +www.contoso.com. 3600 IN A 10.10.1.5 +www.contoso.com. 3600 IN A 10.10.1.10 +``` ++Azure DNS manages all DNS records using *record sets*. 
A record set (also known as a *resource* record set) is the collection of DNS records in a zone that have the same name and are of the same type. Most record sets contain a single record. However, examples like the one shown here, in which a record set contains more than one record, are not uncommon. ++For example, suppose you have already created an A record 'www' in the zone 'contoso.com', pointing to the IP address '10.10.1.5' (the first record shown previously). To create the second record you would add that record to the existing record set, rather than create an additional record set. ++The SOA and CNAME record types are exceptions. The DNS standards don't permit multiple records with the same name for these types, therefore these record sets can only contain a single record. ++### Time-to-live ++The time to live, or TTL, specifies how long each record is cached by clients before being queried. In the previous example, the TTL is 3600 seconds or 1 hour. ++In Azure DNS, the TTL gets specified for the record set, not for each record, so the same value is used for all records within that record set. You can specify any TTL value between 1 and 2,147,483,647 seconds. ++### Wildcard records ++Azure DNS supports [wildcard records](https://en.wikipedia.org/wiki/Wildcard_DNS_record). Wildcard records get returned in response to any query with a matching name, unless there's a closer match from a nonwildcard record set. Azure DNS supports wildcard record sets for all record types except NS and SOA. ++To create a wildcard record set, use the record set name '\*'. You can also use a name with '\*' as its left-most label, for example, '\*.foo'. ++### CNAME records ++CNAME record sets can't coexist with other record sets with the same name. For example, you can't create a CNAME record set with the relative name `www` and an A record with the relative name `www` at the same time. 
++Since the zone apex (name = '\@') always contains the NS and SOA record sets during the creation of the zone, you can't create a CNAME record set at the zone apex. ++These constraints arise from the DNS standards and aren't limitations of Azure DNS. ++### SOA records ++A SOA record set gets created automatically at the apex of each zone (name = '\@'), and gets deleted automatically when the zone gets deleted. SOA records can't be created or deleted separately. ++You can modify all properties of the SOA record except for the `host` property. This property gets preconfigured to refer to the primary name server name provided by Azure DNS. ++The zone serial number in the SOA record isn't updated automatically when changes are made to the records in the zone. It can be updated manually by editing the SOA record, if necessary. ++### SRV records ++[SRV records](https://en.wikipedia.org/wiki/SRV_record) are used by various services to specify server locations. When specifying an SRV record in Azure DNS: ++* The *service* and *protocol* must be specified as part of the record set name, prefixed with underscores, such as '\_sip.\_tcp.name'. For a record at the zone apex, there's no need to specify '\@' in the record name, simply use the service and protocol, such as '\_sip.\_tcp'. +* The *priority*, *weight*, *port*, and *target* are specified as parameters of each record in the record set. ++### TXT records ++TXT records are used to map domain names to arbitrary text strings. They're used in multiple applications. ++The DNS standards permit a single TXT record to contain multiple strings, each of which can be up to 255 characters in length. Where multiple strings are used, they're concatenated by clients and treated as a single string. ++When calling the Azure DNS REST API, you need to specify each TXT string separately. When you use the Azure portal, PowerShell, or CLI interfaces, you should specify a single string per record. 
This string is automatically divided into 255-character segments if necessary. ++The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure private DNS supports a total string length of up to 1024 characters in each TXT record set (across all records combined). ++## Tags and metadata ++### Tags ++Tags are a list of name-value pairs and are used by Azure Resource Manager to label resources. Azure Resource Manager uses tags to enable filtered views of your Azure bill and also enables you to set a policy for certain tags. For more information about tags, see [Using tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md). ++Azure DNS supports using Azure Resource Manager tags on DNS zone resources. It doesn't support tags on DNS record sets, although as an alternative, metadata is supported on DNS record sets as explained below. ++### Metadata ++As an alternative to record set tags, Azure DNS supports annotating record sets using *metadata*. Similar to tags, metadata enables you to associate name-value pairs with each record set. This feature can be useful, for example to record the purpose of each record set. Unlike tags, metadata can't be used to provide a filtered view of your Azure bill and can't be specified in an Azure Resource Manager policy. ++## Etags ++Suppose two people or two processes try to modify a DNS record at the same time. Which one wins? And does the winner know that they have overwritten changes created by someone else? ++Azure DNS uses Etags to handle concurrent changes to the same resource safely. Etags are separate from [Azure Resource Manager 'Tags'](#tags). Each DNS resource (zone or record set) has an Etag associated with it. Whenever a resource is retrieved, its Etag is also retrieved. 
When updating a resource, you can choose to pass back the Etag so Azure DNS can verify the Etag on the server matches. Since each update to a resource results in the Etag being regenerated, an Etag mismatch indicates a concurrent change has occurred. Etags can also be used when creating a new resource to ensure the resource doesn't already exist. ++By default, Azure DNS PowerShell uses Etags to block concurrent changes to zones and record sets. The optional *-Overwrite* switch can be used to suppress Etag checks, in which case any concurrent changes that have occurred are overwritten. ++At the level of the Azure DNS REST API, Etags are specified using HTTP headers. Their behavior is given in the following table: ++| Header | Behavior | +| | | +| None |PUT always succeeds (no Etag checks) | +| If-match \<etag> |PUT only succeeds if resource exists and Etag matches | +| If-match * |PUT only succeeds if resource exists | +| If-none-match * |PUT only succeeds if resource doesn't exist | +++## Limits ++The following default limits apply when using Azure Private DNS: +++## Next steps ++- [What is a private Azure DNS zone](private-dns-privatednszone.md) +- [Quickstart: Create an Azure private DNS zone using the Azure portal](private-dns-getstarted-portal.md) |
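The conditional-header table above can be expressed as a small precondition check. The following Python sketch illustrates the HTTP Etag semantics only; it is not Azure DNS client code:

```python
def put_allowed(resource_etag, if_match=None, if_none_match=None):
    """Decide whether a PUT may proceed, per the header table above.
    resource_etag is None when the resource doesn't exist."""
    if if_none_match == "*":        # create-only: succeeds if resource doesn't exist
        return resource_etag is None
    if if_match == "*":             # succeeds only if the resource exists
        return resource_etag is not None
    if if_match is not None:        # succeeds only if the Etag matches
        return resource_etag == if_match
    return True                     # no header: PUT always succeeds
```

An Etag mismatch under `If-match` is how a concurrent change is detected and rejected.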
dns | Dns Private Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md | A DNS forwarding rule includes one or more target DNS servers that are used for - A target IP address - A target Port and Protocol (UDP or TCP) -## Restrictions: +## Restrictions -> [!NOTE] -> See [What are the usage limits for Azure DNS?](dns-faq.yml#what-are-the-usage-limits-for-azure-dns-) for a list of usage limits for the DNS private resolver. +The following limits currently apply to Azure DNS Private Resolver: + ### Virtual network restrictions |
dns | Dns Zones Records | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-zones-records.md | Title: DNS Zones and Records overview - Azure DNS -description: Overview of support for hosting DNS zones and records in Microsoft Azure DNS. + Title: DNS Zones and Records Overview - Azure Public DNS +description: Overview of support for hosting DNS zones and records in Microsoft Azure Public DNS. ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897 Previously updated : 09/06/2023 Last updated : 10/09/2023 This article explains the key concepts of domains, DNS zones, DNS records, and r The Domain Name System is a hierarchy of domains. The hierarchy starts from the `root` domain, whose name is simply '**.**'. Below this come top-level domains, such as `com`, `net`, `org`, `uk` or `jp`. Below the top-level domains are second-level domains, such as `org.uk` or `co.jp`. The domains in the DNS hierarchy are globally distributed, hosted by DNS name servers around the world. -A domain name registrar is an organization that allows you to purchase a domain name, such as `contoso.com`. Purchasing a domain name gives you the right to control the DNS hierarchy under that name, for example allowing you to direct the name `www.contoso.com` to your company web site. The registrar may host the domain in its own name servers on your behalf or allow you to specify alternative name servers. +A domain name registrar is an organization that allows you to purchase a domain name, such as `contoso.com`. Purchasing a domain name gives you the right to control the DNS hierarchy under that name, for example allowing you to direct the name `www.contoso.com` to your company web site. The registrar might host the domain on its own name servers on your behalf or allow you to specify alternative name servers. Azure DNS provides a globally distributed and high-availability name server infrastructure that you can use to host your domain. 
By hosting your domains in Azure DNS, you can manage your DNS records with the same credentials, APIs, tools, billing, and support as your other Azure services. The zone serial number in the SOA record isn't updated automatically when change TXT records are used to map domain names to arbitrary text strings. They're used in multiple applications, in particular related to email configuration, such as the [Sender Policy Framework (SPF)](https://en.wikipedia.org/wiki/Sender_Policy_Framework) and [DomainKeys Identified Mail (DKIM)](https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail). -The DNS standards permit a single TXT record to contain multiple strings, each of which may be up to 255 characters in length. Where multiple strings are used, they're concatenated by clients and treated as a single string. +The DNS standards permit a single TXT record to contain multiple strings, each of which can be up to 255 characters in length. Where multiple strings are used, they're concatenated by clients and treated as a single string. When calling the Azure DNS REST API, you need to specify each TXT string separately. When you use the Azure portal, PowerShell, or CLI interfaces, you should specify a single string per record. This string is automatically divided into 255-character segments if necessary. At the level of the Azure DNS REST API, Etags are specified using HTTP headers. The following default limits apply when using Azure DNS: ## Next steps |
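The 255-character TXT string handling described above can be illustrated with a short Python sketch. This is an approximation of the splitting behavior, not the portal's actual implementation:

```python
def split_txt_value(value: str, max_len: int = 255):
    """Split a single TXT value into DNS character strings of at most max_len characters."""
    return [value[i:i + max_len] for i in range(0, len(value), max_len)]

segments = split_txt_value("a" * 600)  # a 600-character TXT value
```

Clients concatenate the segments back into a single string, so a 600-character value round-trips through two 255-character strings and one 90-character string.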
dns | Private Dns Privatednszone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-privatednszone.md | Title: What is an Azure DNS private zone -description: Overview of a private DNS zone + Title: What is an Azure Private DNS zone? +description: Overview of Private DNS zones Previously updated : 02/27/2023 Last updated : 10/12/2023 -# What is a private Azure DNS zone +# What is an Azure Private DNS zone? Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today. The records contained in a private DNS zone aren't resolvable from the Internet. You can link a private DNS zone to one or more virtual networks by creating [virtual network links](./private-dns-virtual-network-links.md). You can also enable the [autoregistration](./private-dns-autoregistration.md) feature to automatically manage the life cycle of the DNS records for the virtual machines that get deployed in a virtual network. +## Private DNS zone resolution ++Private DNS zones linked to a VNet are queried first when using the default DNS settings of a VNet. Azure provided DNS servers are queried next. However, if a [custom DNS server](../virtual-network/manage-virtual-network.md#change-dns-servers) is defined in a VNet, then private DNS zones linked to that VNet are not automatically queried, because the custom settings override the name resolution order. ++To enable custom DNS to resolve the private zone, you can use an [Azure DNS Private Resolver](dns-private-resolver-overview.md) in a VNet linked to the private zone as described in [centralized DNS architecture](private-resolver-architecture.md#centralized-dns-architecture). If the custom DNS is a virtual machine, configure a conditional forwarder to Azure DNS (168.63.129.16) for the private zone. 
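The resolution order described above can be summarized in a small sketch. This is an illustrative simplification (custom DNS settings bypass automatic private zone resolution), not a complete model of Azure name resolution:

```python
def vnet_resolver_chain(has_custom_dns: bool, linked_private_zones):
    """Return the name resolution order for a VM using the VNet's DNS settings."""
    if has_custom_dns:
        # Custom DNS overrides the default order; linked private zones
        # aren't queried automatically.
        return ["custom DNS server"]
    return [f"private zone: {z}" for z in linked_private_zones] + ["Azure-provided DNS"]
```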
+ ## Limits -To understand how many private DNS zones you can create in a subscription and how many record sets are supported in a private DNS zone, see [Azure DNS limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-dns-limits) ## Restrictions To understand how many private DNS zones you can create in a subscription and ho ## Next steps +* Review and understand [Private DNS records](dns-private-records.md). * Learn how to create a private zone in Azure DNS by using [Azure PowerShell](./private-dns-getstarted-powershell.md) or [Azure CLI](./private-dns-getstarted-cli.md). * Read about some common [private zone scenarios](./private-dns-scenarios.md) that can be realized with private zones in Azure DNS. * For common questions and answers about private zones in Azure DNS, see [Private DNS FAQ](./dns-faq-private.yml). |
energy-data-services | How To Manage Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-audit-logs.md | For example, when you "Add a new member" to the ```users.datalake.admins``` ## Enable audit logs To enable audit logs in diagnostic logging, select your Azure Data Manager for Energy instance in the Azure portal.++> [!NOTE] +> Currently, you can enable audit logs for OSDU Core Services, Seismic DMS, Petrel Data Services, and Wellbore DMS. + * Select the **Activity log** screen, and then select **Diagnostic settings**. * Select **+ Add diagnostic setting**. * Enter the Diagnostic settings name. * Select **Audit Events** as the Category. -[![Screenshot of audit events option in diagnostic settings](media/how-to-manage-audit-logs/how-to-manage-audit-logs-1-audit-event-diagnostic-logs.png)](media/how-to-manage-audit-logs/how-to-manage-audit-logs-1-audit-event-diagnostic-logs.png#lightbox) +[![Screenshot of audit events option in diagnostic settings.](media/how-to-manage-audit-logs/how-to-manage-audit-logs-1-audit-event-diagnostic-logs-categories.png)](media/how-to-manage-audit-logs/how-to-manage-audit-logs-1-audit-event-diagnostic-logs-categories.png#lightbox) * Select appropriate Destination details for accessing the diagnostic logs. The audit logs for the Azure Data Manager for Energy service return the following fields: | PuID | String | ObjectId of the user in Microsoft Entra ID| | ResultType | String | Defines success or failure of the operation | | Operation Description | String | Provides specific details of the response. These details can include tracing information, such as the symptoms, of the result that are used for further analysis. |-| RequestId | String | This is the unique ID associated to the request, which triggered the operation on data plane. | +| RequestId | String | RequestId is the unique ID associated with the request that triggered the operation on the data plane. 
| | Message | String | Provides the message associated with the success or failure of the operation.| | ResourceID | String | The Azure Data Manager for Energy resource ID of the customer under which the audit log belongs. | |
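Once a diagnostic setting routes audit events to a destination such as a Log Analytics workspace or a storage account, records with the schema above can be filtered programmatically. A minimal Python sketch; the field names follow the table, but the sample records are fabricated for illustration only:

```python
# Fabricated sample audit records using field names from the table above.
records = [
    {"OperationName": "Add member", "ResultType": "Success",
     "PuID": "00000000-0000-0000-0000-000000000001"},
    {"OperationName": "Add member", "ResultType": "Failure",
     "PuID": "00000000-0000-0000-0000-000000000002"},
]

# Surface failed operations for further analysis.
failures = [r for r in records if r["ResultType"] == "Failure"]
```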
hdinsight-aks | Prerequisites Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/prerequisites-subscription.md | Title: Subscription prerequisites for Azure HDInsight on AKS. description: Prerequisite steps to complete on your subscription before working with Azure HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 10/13/2023 # Subscription prerequisites If you're using an Azure subscription for the first time with HDInsight on AKS, the following ## Tenant registration -If you're trying to onboard a new tenant to HDInsight on AKS, you need to provide consent to first party App of HDInsight on AKS to Access API. This app will try to provision the application used to authenticate cluster users and groups. +If you're trying to onboard a new tenant to HDInsight on AKS, you need to provide consent to the HDInsight on AKS first-party app to access the API. This app tries to provision the application used to authenticate cluster users and groups. > [!NOTE]-> Resource owner would be able to run the command to provision the first party service principal on the given tenant. +> The tenant admin can run the command to provision the first-party service principal on the given tenant. **Commands**: |
iot-develop | Quickstart Devkit Stm B U585i Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-u585i-iot-hub.md | |
machine-learning | How To Create Data Assets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md | When you create your data asset, you need to set the data asset type. Azure Mach |**Folder**<br> Reference a folder | `uri_folder` | Read a folder of parquet/CSV files into Pandas/Spark.<br><br>Read unstructured data (images, text, audio, etc.) located in a folder. | |**Table**<br> Reference a data table | `mltable` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables.<br><br>Read unstructured data (images, text, audio, etc.) that is spread across **multiple** storage locations. | +> [!NOTE] +> Don't use embedded newlines in CSV files unless you register the data as an MLTable. Embedded newlines in CSV files might cause misaligned field values when you read the data. MLTable provides the [`support_multi_line`](https://learn.microsoft.com/azure/machine-learning/reference-yaml-mltable?view=azureml-api-2#read-transformations) parameter in the `read_delimited` transformation to interpret quoted line breaks as one record. ++ When you consume the data asset in an Azure Machine Learning job, you can either *mount* or *download* the asset to the compute node(s). For more information, see [Modes](how-to-read-write-data-v2.md#modes). Also, you must specify a `path` parameter that points to the data asset location. Supported paths include: |
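The embedded-newline pitfall called out in the note above can be reproduced locally with only the Python standard library. A CSV-aware parser (which is what MLTable's `support_multi_line` enables for `read_delimited`) keeps a quoted line break inside one record, while naive line splitting misaligns the rows:

```python
import csv
import io

# A CSV payload whose second field contains a quoted embedded newline.
data = 'id,comment\n1,"line one\nline two"\n2,plain\n'

# Naive line splitting treats the embedded newline as a record boundary,
# producing three misaligned fragments instead of two records.
naive_rows = [line.split(",") for line in data.strip().split("\n")[1:]]

# A CSV-aware reader keeps the quoted line break inside a single record.
proper_rows = list(csv.reader(io.StringIO(data)))[1:]

print(len(naive_rows))   # 3 fragments
print(len(proper_rows))  # 2 records
print(proper_rows[0])    # ['1', 'line one\nline two']
```

This is why a plain `uri_file`/`uri_folder` read of such a CSV can yield misaligned field values, while registering the data as an MLTable with multi-line support does not.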
machine-learning | How To Create Image Labeling Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md | Title: Set up an image labeling project -description: Learn how to create a project and use the data labeling tool to label images in the project. Enable machine learning-assisted labeling or human-in-the-loop labeling to help with the task. +description: Learn how to create a project to label images in the project. Enable machine learning-assisted labeling to help with the task. Previously updated : 02/08/2023 Last updated : 08/16/2023 monikerRange: 'azureml-api-1 || azureml-api-2'+#customer intent: As a project manager, I want to set up a project to label images in the project. I want to enable machine learning-assisted labeling to help with the task. # Set up an image labeling project and export labels Learn how to create and run data labeling projects to label images in Azure Machine Learning. Use machine learning (ML)-assisted data labeling or human-in-the-loop labeling to help with the task. -Set up labels for classification, object detection (bounding box), or instance segmentation (polygon). +Set up labels for classification, object detection (bounding box), instance segmentation (polygon), or semantic segmentation (Preview). You can also use the data labeling tool in Azure Machine Learning to [create a text labeling project](how-to-create-text-labeling-projects.md). + ## Image labeling capabilities Azure Machine Learning data labeling is a tool you can use to create, manage, and monitor data labeling projects. Use it to: You use these items to set up image labeling in Azure Machine Learning: * To apply *one or more* labels to an image from a set of labels, select **Image Classification Multi-label**. For example, a photo of a dog might be labeled with both *dog* and *daytime*. 
* To assign a label to each object within an image and add bounding boxes, select **Object Identification (Bounding Box)**. * To assign a label to each object within an image and draw a polygon around each object, select **Instance Segmentation (Polygon)**.+ * To draw masks on an image and assign a label class at the pixel level, select **Semantic Segmentation (Preview)**. :::image type="content" source="media/how-to-create-labeling-projects/labeling-creation-wizard.png" alt-text="Screenshot that shows creating a labeling project to manage labeling."::: For bounding boxes, important questions include: ## Use ML-assisted data labeling -To accelerate labeling tasks, on the **ML assisted labeling** page, you can trigger automatic machine learning models. Medical images (files that have a *.dcm* extension) aren't included in assisted labeling. +To accelerate labeling tasks, on the **ML assisted labeling** page, you can trigger automatic machine learning models. Medical images (files that have a *.dcm* extension) aren't included in assisted labeling. If the project type is **Semantic Segmentation (Preview)**, ML-assisted labeling isn't available. At the start of your labeling project, the items are shuffled into a random order to reduce potential bias. However, the trained model reflects any biases that are present in the dataset. For example, if 80 percent of your items are of a single class, then approximately 80 percent of the data used to train the model lands in that class. If your project was created from [Vision Studio](../ai-services/computer-vision/ To export the labels, on the **Project details** page of your labeling project, select the **Export** button. You can export the label data for Machine Learning experimentation at any time. -You can export an image label as: +If your project type is Semantic segmentation (Preview), an [Azure MLTable data asset](./how-to-mltable.md) is created. 
++For all other project types, you can export an image label as: :::moniker range="azureml-api-1" * A CSV file. Azure Machine Learning creates the CSV file in a folder inside *Labeling/export/csv*. |
machine-learning | How To Identity Based Service Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md | This authentication mode allows you to: > This functionality has the following limitations > * Feature is supported for experiments submitted via the [Azure Machine Learning CLI and Python SDK V2](concept-v2.md), but not via ML Studio. > * User identity and compute managed identity cannot be used for authentication within same job.-> * For pipeline jobs, the user identity must be configured at job top level, not for individual pipeline steps. +> * For pipeline jobs, we recommend setting the user identity at the individual step level, on the steps that execute on compute, rather than at the root pipeline level. (Identity is supported at both the root pipeline and step levels, and the step-level setting takes precedence if both are set. However, for pipelines that contain pipeline components, identity set at the root pipeline or pipeline component level doesn't work; it must be set on the individual steps that execute. For simplicity, set identity at the individual step level.) The following steps outline how to set up data access with user identity for training jobs on compute clusters from CLI. |
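The step-level recommendation can be sketched as a CLI v2 pipeline fragment. This is an illustrative, hypothetical snippet — the component path and compute name are placeholders, and the exact schema should be checked against the pipeline job YAML reference — showing identity declared on the individual step rather than at the pipeline root:

```yml
# Hypothetical pipeline fragment; component path and compute name are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
jobs:
  train_step:
    type: command
    component: ./train-component.yml
    compute: azureml:cpu-cluster
    # Identity declared on the step that executes on compute,
    # not at the root pipeline level.
    identity:
      type: user_identity
```

Declaring `identity` per step keeps the behavior consistent whether or not the pipeline contains pipeline components.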
machine-learning | How To Label Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-label-data.md | To delete *all* polygons in the current image, select the **Delete all regions** After you create the polygons for an image, select **Submit** to save your work, or your work in progress won't be saved. +## Tag images and draw masks for semantic segmentation ++If your project is of type "Semantic segmentation (Preview)," use the paintbrush to paint a mask over the area you wish to tag. ++1. Select a tag for the area you will paint over. +1. Select the **paintbrush** tool ![Screenshot of the Paintbrush tool.](./media/how-to-label-data/paintbrush-tool.png). +1. Select the **size** tool ![Screenshot of the Size tool.](./media/how-to-label-data/width-tool.png) to pick a size for your paintbrush. +1. Paint over the area you wish to tag. The color corresponding to your tag will be applied to the area you paint over. + + :::image type="content" source="media/how-to-label-data/paintbrush.gif" alt-text="Screenshot of paint area for cat and dog faces for semantic segmentation."::: ++To delete parts of the area, select the **Eraser** tool. ++To change the tag for an area, select the new tag and re-paint the area. ++You can also use the [Polygon tool](#tag-images-and-specify-polygons-for-image-segmentation) to specify a region. ++After you create the areas for an image, select **Submit** to save your work, or your work in progress won't be saved. If you used the Polygon tool, all polygons will be converted to a mask when you submit. + ## Label text When you tag text, use the toolbar to: |
machine-learning | Tutorial Enable Materialization Backfill Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-materialization-backfill-data.md | This list summarizes the required setup steps: ### Configure the Azure Machine Learning Spark notebook -You can create a new notebook and execute the instructions in this tutorial step by step. You can also open the existing notebook named *2. Enable materialization and backfill feature data.ipynb* from the *featurestore_sample/notebooks* directory, and then run it. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation. +You can create a new notebook and execute the instructions in this tutorial step by step. You can also open the existing notebook named *Enable materialization and backfill feature data.ipynb* from the *featurestore_sample/notebooks* directory, and then run it. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation. 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**. -1. Configure the session: +2. Configure the session: 1. On the toolbar, select **Configure session**.- 1. On the **Python packages** tab, select **Upload Conda file**. - 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment). - 1. Increase the session time-out (idle time) to avoid frequent prerequisite reruns. + 2. On the **Python packages** tab, select **Upload Conda file**. + 3. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment). + 4. Increase the session time-out (idle time) to avoid frequent prerequisite reruns. 
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/Enable materialization and backfill feature data.ipynb?name=start-spark-session)] You can create a new notebook and execute the instructions in this tutorial step 1. Install the Azure Machine Learning extension. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)] 1. Authenticate. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=auth-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=auth-cli)] 1. Set the default subscription. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)] The materialization store uses these values. You can optionally override the def # [Azure CLI](#tab/cli) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. 
Enable materialization and backfill feature data.ipynb?name=create-new-storage)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=create-new-storage)] - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=create-new-storage-container)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=create-new-storage-container)] - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-container-arm-id-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=set-container-arm-id-cli)] The materialization store uses these values. You can optionally override the def # [Azure CLI](#tab/cli) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=use-existing-storage)] The second option is to reuse an existing managed identity. Run this code sample in the SDK to retrieve the UAI properties. -[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. 
Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)] The next CLI commands assign the first two roles to the UAI. In this example, th # [Azure CLI](#tab/cli) -[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)] -[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)] The following steps grant the Storage Blob Data Reader role access to your user Inspect file `xxxx`. This command attaches the offline store and the UAI, to update the feature store. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=dump_featurestore_yaml)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=dump_featurestore_yaml)] - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. 
Enable materialization and backfill feature data.ipynb?name=enable-offline-store)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=enable-offline-store)] The following steps grant the Storage Blob Data Reader role access to your user # [Azure CLI](#tab/cli) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)] |
machine-learning | Tutorial Enable Recurrent Materialization Run Batch Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md | Before you proceed with the following procedures, be sure to complete the first, 1. Configure the Azure Machine Learning Spark notebook. - To run this tutorial, you can create a new notebook and execute the instructions step by step. You can also open and run the existing notebook named *4. Enable recurrent materialization and run batch inference*. You can find that notebook, and all the notebooks in this series, in the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation. + To run this tutorial, you can create a new notebook and execute the instructions step by step. You can also open and run the existing notebook named *3. Enable recurrent materialization and run batch inference*. You can find that notebook, and all the notebooks in this series, in the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation. 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**. Before you proceed with the following procedures, be sure to complete the first, 1. Install the Azure Machine Learning extension. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)] 1. Authenticate. 
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)] 1. Set the default subscription. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)] |
machine-learning | Tutorial Experiment Train Models Using Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-experiment-train-models-using-features.md | Before you proceed with the following procedures, be sure to complete the first 1. Configure the Azure Machine Learning Spark notebook. - You can create a new notebook and execute the instructions in this tutorial step by step. You can also open and run the existing notebook named *3. Experiment and train models using features.ipynb* from the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation. + You can create a new notebook and execute the instructions in this tutorial step by step. You can also open and run the existing notebook named *2. Experiment and train models using features.ipynb* from the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation. 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**. Before you proceed with the following procedures, be sure to complete the first 1. Install the Azure Machine Learning extension. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=install-ml-ext-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Experiment and train models using features.ipynb?name=install-ml-ext-cli)] 1. Authenticate. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. 
Experiment and train models using features.ipynb?name=auth-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Experiment and train models using features.ipynb?name=auth-cli)] 1. Set the default subscription. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=set-default-subs-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Experiment and train models using features.ipynb?name=set-default-subs-cli)] |
managed-instance-apache-cassandra | Create Cluster Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md | The service allows update to Cassandra YAML configuration on a datacenter via th > - seed_provider > - initial_token > - autobootstrap- > - client_ecncryption_options + > - client_encryption_options > - server_encryption_options > - transparent_data_encryption_options > - audit_logging_options The service allows update to Cassandra YAML configuration on a datacenter via th > - data_file_directories > - commitlog_directory > - cdc_raw_directory- > - saved_caches_directory + > - saved_caches_directory + > - endpoint_snitch + > - partitioner + > - rpc_address + > - rpc_interface ## De-allocate cluster |
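Because the service rejects YAML fragments that touch the restricted properties listed above, a local pre-submission check can save a failed update. A minimal sketch — the restricted set below is a partial copy of the list in this article, not an exhaustive or authoritative one:

```python
# Partial set of properties this article lists as not overridable.
RESTRICTED = {
    "seed_provider", "initial_token", "client_encryption_options",
    "server_encryption_options", "transparent_data_encryption_options",
    "audit_logging_options", "endpoint_snitch", "partitioner",
    "rpc_address", "rpc_interface",
}

def restricted_keys(fragment: dict) -> set:
    """Return top-level keys in a YAML fragment that the service would reject."""
    return set(fragment) & RESTRICTED

# A fragment mixing one allowed and one restricted setting.
print(restricted_keys({"partitioner": "Murmur3Partitioner",
                       "compaction_throughput_mb_per_sec": 64}))
# → {'partitioner'}
```

Running such a check before submitting the update surfaces disallowed keys early, instead of discovering them from a rejected datacenter update.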
managed-instance-apache-cassandra | Manage Resources Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/manage-resources-cli.md | az managed-cassandra datacenter update \ --node-count 13 ``` +### <a id="get-yaml"></a>Get Cassandra configuration + +Get the current YAML configuration of a node by using the [az managed-cassandra cluster invoke-command](/cli/azure/managed-cassandra/cluster#az-managed-cassandra-invoke-command) command: + +```azurecli-interactive +resourceGroupName='MyResourceGroup' +clusterName='cassandra-hybrid-cluster' +commandName='get-cassandra-yaml' + +az managed-cassandra cluster invoke-command \ + --resource-group $resourceGroupName \ + --cluster-name $clusterName \ + --host <ip address> \ + --command-name 'get-cassandra-yaml' +``` ++> [!NOTE] +> The output can be made more readable using the following commands: +> +> ```azurecli-interactive +> $output = az managed-cassandra cluster invoke-command \ +> --resource-group $resourceGroupName \ +> --cluster-name $clusterName \ +> --host <ip address> \ +> --command-name 'get-cassandra-yaml' \ +> | ConvertFrom-Json +> $output.commandOutput +> ``` + ### <a id="update-yaml"></a>Update Cassandra configuration Change Cassandra configuration on a datacenter by using the [az managed-cassandra datacenter update](/cli/azure/managed-cassandra/datacenter#az-managed-cassandra-datacenter-update) command. You will need to base64 encode the YAML fragment by using an [online tool](https://www.base64encode.org/). |
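The update step above requires base64-encoding the YAML fragment; instead of an online tool, any local base64 routine works. A minimal Python sketch, where the fragment content is illustrative only:

```python
import base64

# Illustrative Cassandra YAML fragment; the setting is an example only.
yaml_fragment = "compaction_throughput_mb_per_sec: 64\n"

# Base64-encode the fragment to pass to the datacenter update command.
encoded = base64.b64encode(yaml_fragment.encode("utf-8")).decode("ascii")
print(encoded)

# Round trip: decoding recovers the original fragment.
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == yaml_fragment
```

The same result can be obtained with `base64` on most shells; the point is only that the fragment must be encoded before it is supplied to the update command.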
mysql | How To Data In Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-in-replication.md | The results should appear similar to the following. Make sure to note the binary az mysql flexible-server gtid reset --resource-group <resource group> --server-name <replica server name> --gtid-set <gtid set from the source server> --subscription <subscription id> ``` - For more details refer [GTID Reset](/cli/azure/mysql/flexible-server/gtid). - +For more details, refer to [GTID Reset](/cli/azure/mysql/flexible-server/gtid). ++> [!NOTE] +> GTID reset can't be performed on a server that has geo-redundant backup enabled. Disable geo-redundancy to perform a GTID reset on the server; you can enable geo-redundancy again after the reset. A GTID reset invalidates all available backups, so once geo-redundancy is re-enabled, it may take a day before geo-restore can be performed on the server. + ## Link source and replica servers to start Data-in replication To skip a replication error and allow replication to continue, use the following - Learn more about [Data-in replication](concepts-data-in-replication.md) for Azure Database for MySQL - Flexible Server. + |
operator-nexus | Howto Run Instance Readiness Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-run-instance-readiness-testing.md | -Instance Readiness Testing (IRT) is a framework built to orchestrate real-world workloads for testing of the Azure Operator Nexus Platform. +Instance Readiness Testing (IRT) is a framework built to validate that the Azure Operator Nexus platform is deployed successfully. +IRT creates Nexus resources such as virtual machines, NAKS clusters, and networks in order to perform various networking, storage, and compute tests. -## Environment requirements +## Get Instance Readiness Testing Framework +For more detailed information, including the latest documentation and release artifacts, visit the [nexus-samples](https://github.com/microsoft/nexus-samples/) GitHub repository. If access is required, see "Requesting Access to Nexus-samples GitHub repository." -- A Linux environment (Ubuntu suggested) capable of calling Azure APIs-- Knowledge of networks to use for the test- * Networks to use for the test are specified in a "networks-blueprint.yml" file, see [Input Configuration](#input-configuration). -- `curl` to download the IRT package-- The User Access Admin & Contributor roles for the execution subscription-- The ability to create security groups in your Active Directory tenant--## Input configuration --Build your input file. The IRT tarball provides `irt-input.example.yml` as an example; follow the [instructions](#download-irt) to download the tarball. These values **will not work for your instances**; they need to be manually changed, and the file should also be renamed to `irt-input.yml`. The example input file is provided as a stub to aid in configuring new input files. Overridable values and their usage are outlined in the example. 
The **[One Time Setup](#one-time-setup) assists in setting input values by writing key/value pairs to the config file as they execute.** --The network information is provided in either a `networks-blueprint.yml` file, similar to the `networks-blueprint.example.yml` that is provided, or appended to the `irt-input.yml` file. The schema for IRT is defined in the `networks-blueprint.example.yml`. The networks are created as part of the test; provide network details that aren't already in use. Currently IRT has the following network requirements: --* Three (3) L3 Networks - * Two of them with MTU 1500 - * One of them with MTU 9000 and shouldn't have a fabric_asn attribute -* One (1) Trunked Network -* All VLANs should be greater than 500 --## One Time Setup --### Download IRT -IRT is distributed via tarball; download it, extract it, and navigate to the `irt` directory. -1. From your Linux environment, download nexus-irt.tar.gz from aka.ms/nexus-irt: `curl -Lo nexus-irt.tar.gz aka.ms/nexus-irt` -1. Extract the tarball to the local file system: `mkdir -p irt && tar xf nexus-irt.tar.gz --directory ./irt` -1. Switch to the new directory: `cd irt` ---### Install dependencies -There are multiple dependencies expected to be available during execution. Review this list: --* `jq` version 1.6 or greater -* `yq` version 4.33 or greater -* `azcopy` version 10 or greater -* `az` Azure CLI, stay up to date. Minimum expected version: 2.11.0 (supports self-upgrade) -* `elinks` - for viewing html files on the command line -* `tree` - for viewing directory structures -* `moreutils` - for viewing progress from the Azure Container Instance (ACI) container --The `setup.sh` script is provided to aid with installing the listed dependencies. It installs any dependencies that aren't available in PATH. It doesn't upgrade any dependencies that don't meet the minimum required versions. 
--

> [!NOTE] -> `setup.sh` assumes a nonroot user and attempts to use `sudo` --### All in one setup --`all-in-one-setup.sh` is provided to create all of the Azure resources required to run IRT. This process includes creating a managed identity, a service principal, a security group, isolation domains, and a storage account to archive the test results. These resources can be created during the all-in-one script, or they can be created step by step per the instructions in this document. Each script, run individually or via the all-in-one script, writes updates to your `irt-input.yml` file with the key/value pairs needed to utilize the resources you created. Review the `irt-input.example.yml` file for the required inputs needed for one or more of the scripts, regardless of the methodology you pursue. All of the scripts are idempotent, and also allow you to use existing resources if desired. -### Step-by-Step setup --> [!NOTE] -> Only use this section if you're NOT using `all-in-one.sh` --If your workflow is incompatible with `all-in-one.sh`, each resource needed for IRT can be created manually with each supplemental script. Like `all-in-one.sh`, running these scripts writes key/value pairs to your `irt-input.yml` for you to use during your run. These four scripts make up the `all-in-one.sh`. --IRT runs commands against your resources, and needs permission to do so. IRT requires a managed identity and a service principal to execute. It also requires that the service principal is a member of the Microsoft Entra Security Group that is also provided as input. --#### Create managed identity -<details> -<summary>Expand to see how to create managed identity.</summary> --A managed identity with the following role assignments is needed to execute tests. The supplemental script `create-managed-identity.sh` creates a managed identity with these role assignments. 
-* `Contributor` - For creating and manipulating resources -* `Storage Blob Data Contributor` - For reading from and writing to the storage blob container -* `Log Analytics Reader` - For reading metadata about the LAW -* `Kubernetes Connected Cluster Role` - For read/write operations on connected cluster --Executing `create-managed-identity.sh` requires the input yaml to have the following properties; all of them can be overridden by the corresponding environment variables: -```yml -MANAGED_IDENTITY: - RESOURCE_GROUP: "<resource-group>" # env: MANAGED_IDENTITY_RESOURCE_GROUP - NAME: "<name>" # env: MANAGED_IDENTITY_NAME - SUBSCRIPTION: "<subscription>" # env: MANAGED_IDENTITY_SUBSCRIPTION - LOCATION: "<location>" # env: MANAGED_IDENTITY_LOCATION -``` -* `MANAGED_IDENTITY.RESOURCE_GROUP` - The resource group the managed identity is created in. -* `MANAGED_IDENTITY.NAME` - The name of the managed identity to be created. -* `MANAGED_IDENTITY.SUBSCRIPTION` - The subscription where the resource group should reside. -* `MANAGED_IDENTITY.LOCATION` - The location to create the resource group. --```bash -# Example execution of the script -./create-managed-identity.sh irt-input.yml -``` --> [!NOTE] -> if `MANAGED_IDENTITY_ID` is set in the input yaml or as an environment variable, the script won't create anything. --**RESULT:** This script prints a value for `MANAGED_IDENTITY_ID` and sets it to the input.yml. -See [Input Configuration](#input-configuration). --```yml -MANAGED_IDENTITY_ID: <generated_id> -``` -</details> --#### Create service principal and security group -<details> -<summary>Expand to see how to create service principal and security group.</summary> --A service principal with the following role assignments is needed. The supplemental script `create-service-principal.sh` creates a service principal with these role assignments, or adds role assignments to an existing service principal. 
--* `Contributor` - For creating and manipulating resources -* `Storage Blob Data Contributor` - For reading from and writing to the storage blob container -* `Azure ARC Kubernetes Admin` - For ARC enrolling the NKS cluster --Additionally, the script creates the necessary security group, and adds the service principal to the security group. If the security group exists, it adds the service principal to the existing security group. --Executing `create-service-principal.sh` requires the input yaml to have the following properties, all of which can be overridden by the corresponding environment variables: -```yml -SERVICE_PRINCIPAL: - NAME: "<name>" # env: SERVICE_PRINCIPAL_NAME - AAD_GROUP_NAME: "<aad-group-name>" # env: SERVICE_PRINCIPAL_AAD_GROUP_NAME - SUBSCRIPTION: "<subscription>" # env: SERVICE_PRINCIPAL_SUBSCRIPTION -``` -* `SERVICE_PRINCIPAL.NAME` - The name of the service principal, created with the `az ad sp create-for-rbac` command. -* `SERVICE_PRINCIPAL.AAD_GROUP_NAME` - The name of the security group. -* `SERVICE_PRINCIPAL.SUBSCRIPTION` - The subscription of the service principal. --```bash -# Example execution of the script -./create-service-principal.sh irt-input.yml -``` --> [!NOTE] -> If all of `SP_ID`, `SP_PASSWORD`, `SP_TENANT_ID`, and `AAD_GROUP_ID` are set in the yaml or as environment variables, the script skips creating them. --**RESULT:** This script prints values for `AAD_GROUP_ID`, `SP_ID`, `SP_PASSWORD`, and `SP_TENANT_ID` and sets the values back to the input yaml. -See [Input Configuration](#input-configuration). --```yml -SP_ID: "<generated-sp-id>" -SP_PASSWORD: "<generated-sp-password>" # If the SP already exists, its password is not retrievable; please fill it in. -SP_TENANT_ID: "<generated-sp-tenant-id>" -AAD_GROUP_ID: "<generated-aad-group-id>" -``` -</details> --#### Create l3 isolation domains -<details> -<summary>Expand to see how to create l3 isolation.</summary> --The testing framework doesn't create, destroy, or manipulate isolation domains.
Therefore, existing isolation domains can be used. Each isolation domain requires at least one external network. The supplemental script, `create-l3-isolation-domains.sh`, creates the isolation domains and their associated external networks. Internal networks are created, manipulated, and destroyed through the course of testing. --Executing `create-l3-isolation-domains.sh` requires one **parameter**, a path to a file containing the network requirements. You can choose either the standalone network-blueprint.yml or the input.yml based on your workflow; both should contain the information needed. --```bash -# Example of the script being invoked using networks-blueprint.yml: -./create-l3-isolation-domains.sh networks-blueprint.yml -``` --```bash -# Example of the script being invoked using irt-input.yml: -# the network-blueprint should exist under NETWORK_BLUEPRINT node. -./create-l3-isolation-domains.sh irt-input.yml -``` -</details> --#### Create archive storage -<details> -<summary>Expand to see how to create archive storage.</summary> --IRT creates an html test report after running a test scenario. These reports can optionally be uploaded to a blob storage container. The supplementary script `create-archive-storage.sh` is provided to create a storage container, storage account, and resource group if they don't already exist. --Executing `create-archive-storage.sh` requires the input yaml to have the following properties, all of which can be overridden by the corresponding environment variables: --```yml -ARCHIVE_STORAGE: - RESOURCE_GROUP: "<resource-group>" # env: ARCHIVE_STORAGE_RESOURCE_GROUP - ACCOUNT_NAME: "<storage-account-name>" # env: ARCHIVE_STORAGE_ACCOUNT_NAME - CONTAINER_NAME: "<storage-container-name>" # env: ARCHIVE_STORAGE_CONTAINER_NAME - SUBSCRIPTION: "<subscription>" # env: ARCHIVE_STORAGE_SUBSCRIPTION - LOCATION: "<location>" # env: ARCHIVE_STORAGE_LOCATION -``` -* `ARCHIVE_STORAGE_RESOURCE_GROUP` - The resource group the storage account is created in.
-* `ARCHIVE_STORAGE_ACCOUNT_NAME` - The name of the Azure storage account to be created. -* `ARCHIVE_STORAGE_CONTAINER_NAME` - The name of the blob storage container to be created. -* `SUBSCRIPTION` - The subscription where the resource group is created. -* `LOCATION` - The location where the resource group is created. --> [!NOTE] -> If `PUBLISH_RESULTS_TO` is set in the input yaml or as an environment variable, the script skips creating a new one. --```bash -# Example execution of the script -./create-archive-storage.sh irt-input.yml -``` --**RESULT:** This script prints a value for `PUBLISH_RESULTS_TO` and sets the value in the input.yml. See [Input Configuration](#input-configuration). -```yml -PUBLISH_RESULTS_TO: <generated_id> -``` -</details> --## Execution --* Execute `irt.sh`. This example assumes irt-input.yml is in the same location as irt.sh. If your file is located in a different directory, provide the full file path. --```bash -./irt.sh irt-input.yml -``` --## Results --1. A file named `summary-<cluster_name>-<timestamp>.html` is downloaded at the end of the run and contains the testing results. It can be viewed: - 1. From any browser - 1. Using elinks or lynx to view from the command line; for example: - 1. `elinks summary-<cluster_name>-<timestamp>.html` - 1. If the `PUBLISH_RESULTS_TO` parameter was provided, the results are uploaded to the blob container you specified. They can be previewed by navigating to the link presented to you at the end of the IRT run. +### Request Access to Nexus-samples GitHub repository +For access to the nexus-samples GitHub repository: +1. Link your GitHub account to the Microsoft GitHub Org https://repos.opensource.microsoft.com/link +2. Join the Microsoft Org https://repos.opensource.microsoft.com/orgs/Microsoft/join +3. Send an email request to be added to the nexus-samples GitHub repo to afoncamalgamatesall@microsoft.com |
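The IRT setup scripts above all share one behavior: they write key/value pairs back into `irt-input.yml` idempotently. A minimal sketch of that pattern, assuming GNU `sed`; `set_yaml_value` and the demo file are hypothetical names, not part of the actual IRT scripts:

```shell
#!/usr/bin/env bash
# Hypothetical sketch (not the actual IRT code): write a top-level key/value
# pair into a yml file idempotently -- update the key if present, append it
# otherwise -- mirroring the behavior the setup scripts describe.
set -euo pipefail

set_yaml_value() {
  local file="$1" key="$2" value="$3"
  if grep -q "^${key}:" "$file"; then
    # Key already present: rewrite its line in place.
    sed -i "s|^${key}:.*|${key}: ${value}|" "$file"
  else
    # Key missing: append it to the file.
    echo "${key}: ${value}" >> "$file"
  fi
}

demo_file=$(mktemp)
echo "SUBSCRIPTION: mysub" > "$demo_file"
# Running twice with different values leaves exactly one line for the key.
set_yaml_value "$demo_file" "MANAGED_IDENTITY_ID" "/subscriptions/xxx/mi-old"
set_yaml_value "$demo_file" "MANAGED_IDENTITY_ID" "/subscriptions/xxx/mi-new"
cat "$demo_file"
```

Because the update is idempotent, rerunning a setup script (or the all in one script after a partial run) converges on the same `irt-input.yml` rather than accumulating duplicate keys.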
operator-nexus | Howto Use Vm Console Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-vm-console-service.md | Title: "Azure Operator Nexus: VM Console Service" description: Learn how to use the VM Console service.--++ Previously updated : 08/04/2023 Last updated : 10/11/2023 To help set up the environment for access to Virtual Machines, define these envi > It should be noted that the first set of variables in the section below are for the **Cluster Manager** not the Cluster. ```bash- # Cluster Manager environment variables - export CLUSTER_MANAGER_NAME="contorso-cluster-manager-1234" - export CLUSTER_MANAGER_RG="contorso-cluster-manager-1234-rg" - export CLUSTER_MANAGER_EXTENDED_LOC="/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.ExtendedLocation/customLocations/clusterManagerExtendedLocationName" + # CM_HOSTED_RESOURCES_RESOURCE_GROUP: Cluster Manager resource group name + export CM_HOSTED_RESOURCES_RESOURCE_GROUP="my-contoso-console-rg" + # CM_EXTENDED_LOCATION: Cluster Manager Extended Location, can be retrieved but you will need access rights to execute certain Azure CLI commands + export CM_EXTENDED_LOCATION="/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.ExtendedLocation/customLocations/clusterManagerExtendedLocationName" - # Your Console resource environment variables + # VIRTUAL_MACHINE_NAME: Virtual Machine name you want to access through VM Console service export VIRTUAL_MACHINE_NAME="my-undercloud-vm"- export VM_RESOURCE_GROUP="my-vm-rg" + # CONSOLE_PUBLIC_KEY: Public Key matching the Private Key to be used when establishing an `ssh` session, e.g., `ssh -i $HOME/.ssh/id-rsa` export CONSOLE_PUBLIC_KEY="xxxx-xxxx-xxxxxx-xxxx"+ # CONSOLE_EXPIRATION_TIME: Expiration date and time (RFC3339 format) for any `ssh` session with a virtual machine.
export CONSOLE_EXPIRATION_TIME="2023-06-01T01:27:03.008Z" - # your environment variables + # PRIVATE_ENDPOINT_RG: Resource group name that the Private Endpoint will be created in export PRIVATE_ENDPOINT_RG="my-work-env-rg"+ # PRIVATE_ENDPOINT_NAME: Private Endpoint's name you choose export PRIVATE_ENDPOINT_NAME="my-work-env-ple"+ # PRIVATE_ENDPOINT_CONNECTION_NAME: PLE/PLS connection name you choose + export PRIVATE_ENDPOINT_CONNECTION_NAME="my-contoso-ple-pls-connection" + # PRIVATE_ENDPOINT_REGION: Location where the Private Endpoint will be created + export PRIVATE_ENDPOINT_REGION="eastus" + # PRIVATE_ENDPOINT_VNET: Virtual Network to be used by the Private Endpoint export PRIVATE_ENDPOINT_VNET="my-work-env-ple-vnet"+ # PRIVATE_ENDPOINT_SUBNET: Subnetwork to be used by the Private Endpoint export PRIVATE_ENDPOINT_SUBNET="my-work-env-ple-subnet"- export PRIVATE_ENDPOINT_CONNECTION_NAME="my-contorse-ple-pls-connection" ``` -## Establishing Private Network Connectivity --In order to establish a secure session with a Virtual Machine, you need to establish private network connectivity between your network and the Cluster Manager's private network. +## Creating Console Resource -This private network relies on the Azure Private Link Endpoint (PLE) and the Azure Private Link Service (PLS). +The Console resource provides information about the VM such as the VM name, public SSH key, expiration date for the SSH session, etc. -The Cluster Manager automatically creates a PLS so that you can establish a private network connection between your network and the Cluster Manager's private network. +This section provides a step-by-step guide to help you create a Console resource using Azure CLI commands. -This section provides a step-by-step guide to help you to establish a private network connectivity. +1.
In order to create a ***Console*** resource in the Cluster Manager, you will need to collect some information, e.g., the resource group (CM_HOSTED_RESOURCES_RESOURCE_GROUP) and custom location (CM_EXTENDED_LOCATION). You have to provide the resource group, but you can retrieve the custom location if you have access rights to execute the commands listed below. -1. You need to retrieve the resource identifier for the PLS associated to the VM Console service running in the Cluster Manager. + ```bash + export cluster_manager_resource_id=$(az resource list -g ${CM_HOSTED_RESOURCES_RESOURCE_GROUP} --query "[?type=='Microsoft.NetworkCloud/clusterManagers'].id" --output tsv) + export CM_EXTENDED_LOCATION=$(az resource show --ids $cluster_manager_resource_id --query "properties.managerExtendedLocation.name" | tr -d '"') + ``` - ```bash - # retrieve the infrastructure resource group of the AKS cluster - export pls_resource_group=$(az aks show --name ${CLUSTER_MANAGER_NAME} -g ${CLUSTER_MANAGER_RG} --query "nodeResourceGroup" -o tsv) -- # retrieve the Private Link Service resource id - export pls_resourceid=$(az network private-link-service show \ - --name console-pls \ - --resource-group ${pls_resource_group} \ - --query id \ - --output tsv) - ``` -1. Create the PLE for establishing a private and secure connection between your network and the Cluster Manager's private network. You need the PLS resource ID obtained in [Creating Console Resource](#creating-console-resource).
+ ```bash + az networkcloud virtualmachine console create \ + --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \ + --resource-group "${CM_HOSTED_RESOURCES_RESOURCE_GROUP}" \ + --extended-location name="${CM_EXTENDED_LOCATION}" type="CustomLocation" \ + --enabled True \ + --key-data "${CONSOLE_PUBLIC_KEY}" \ + [--expiration "${CONSOLE_EXPIRATION_TIME}"] + ``` - ```bash - az network private-endpoint create \ - --connection-name "${PRIVATE_ENDPOINT_CONNECTION_NAME}" \ - --name "${PRIVATE_ENDPOINT_NAME}" \ - --private-connection-resource-id "${pls_resourceid}" \ - --resource-group "${PRIVATE_ENDPOINT_RG}" \ - --vnet-name "${PRIVATE_ENDPOINT_VNET}" \ - --subnet "${PRIVATE_ENDPOINT_SUBNET}" \ - --manual-request false - ``` + If you omit the `--expiration` parameter, the expiration will be defaulted to one day after the creation of the Console resource. Also note that the `expiration` date & time format **must** comply with RFC3339 otherwise the creation of the Console resource fails. -1. Retrieve the private IP address allocated to the PLE, which you need when establishing a session. + > [!NOTE] + > For a complete synopsis for this command, invoke `az networkcloud console create --help`. - ```bash - export ple_interface_id=$(az network private-endpoint list --resource-group ${PRIVATE_ENDPOINT_RG} --query "[0].networkInterfaces[0].id" -o tsv) +1. Upon successful creation of the Console resource, retrieve the **Private Link Service** identifier that is required to create Private Link Endpoint (PLE) - export sshmux_ple_ip=$(az network nic show --ids $ple_interface_id --query 'ipConfigurations[0].privateIPAddress' -o tsv) + ```bash + export pls_resourceid=$(az networkcloud virtualmachine console show \ + --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \ + --resource-group "${CM_HOSTED_RESOURCES_RESOURCE_GROUP}" \ + --query "privateLinkServiceId") + ``` - echo "sshmux_ple_ip: ${sshmux_ple_ip}" - ``` +1. Also, retrieve the **VM Access ID**. 
You must use this unique identifier as `user` of the `ssh` session. -## Creating Console Resource + ```bash + virtual_machine_access_id=$(az networkcloud virtualmachine console show \ + --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \ + --resource-group "${CM_HOSTED_RESOURCES_RESOURCE_GROUP}" \ + --query "virtualMachineAccessId") + ``` -The Console resource provides the information about the VM such as VM name, public SSH key, expiration date for the session, etc. +> [!NOTE] +> For a complete synopsis for this command, invoke `az networkcloud virtualmachine console show --help`. -This section provides step-by-step guide to help you to create a Console resource using Azure CLI commands. +## Establishing Private Network Connectivity +In order to establish a secure session with a Virtual Machine, you need to establish private network connectivity between your network and the Cluster Manager's private network. -1. Before you can establish a session with a VM, create the Console resource for that VM. +This private network relies on the Azure Private Link Endpoint (PLE) and the Azure Private Link Service (PLS). - ```bash - az networkcloud virtualmachine console create \ - --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \ - --resource-group "${VM_RESOURCE_GROUP}" \ - --extended-location name="${CLUSTER_MANAGER_EXTENDED_LOC}" type="CustomLocation" \ - --enabled True \ - --ssh-public-key key-data "${CONSOLE_PUBLIC_KEY}" \ - [--expiration "${CONSOLE_EXPIRATION_TIME}"] - ``` +The Cluster Manager automatically creates a PLS so that you can establish a private network connection between your network and the Cluster Manager's private network. - If you omit the `--expiration` parameter, the Cluster Manager will automatically set the expiration to one day after the creation of the Console resource. Also note that the `expiration` date & time format **must** comply with RFC3339 otherwise the creation of the Console resource fails. 
+This section provides a step-by-step guide to help you establish private network connectivity. - > [!NOTE] - > For a complete synopsis for this command, invoke `az networkcloud console create --help`. -1. Upon successful creation of the Console resource, retrieve the VM Access ID. You must use this unique identifier as `user` of the SSH session. +1. Create the PLE for establishing a private and secure connection between your network and the Cluster Manager's private network. You need the PLS resource ID obtained in [Creating Console Resource](#creating-console-resource). ```bash- virtual_machine_access_id=$(az networkcloud virtualmachine console show \ - --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \ - --resource-group "${VM_RESOURCE_GROUP}" \ - --query "virtualMachineAccessId") + az network private-endpoint create \ + --connection-name "${PRIVATE_ENDPOINT_CONNECTION_NAME}" \ + --name "${PRIVATE_ENDPOINT_NAME}" \ + --private-connection-resource-id "${pls_resourceid}" \ + --resource-group "${PRIVATE_ENDPOINT_RG}" \ + --vnet-name "${PRIVATE_ENDPOINT_VNET}" \ + --subnet "${PRIVATE_ENDPOINT_SUBNET}" \ + --manual-request false ``` > [!NOTE]-> For a complete synopsis for this command, invoke `az networkcloud virtualmachine console show --help`. +> You will need only one Private Endpoint per Cluster Manager. -## Establishing a session with a Virtual Machine +1. Retrieve the private IP address allocated to the PLE, which you need when establishing the `ssh` session. ++ ```bash + export ple_interface_id=$(az network private-endpoint list --resource-group ${PRIVATE_ENDPOINT_RG} --query "[0].networkInterfaces[0].id" -o tsv) ++ export sshmux_ple_ip=$(az network nic show --ids $ple_interface_id --query 'ipConfigurations[0].privateIPAddress' -o tsv) ++ echo "sshmux_ple_ip: ${sshmux_ple_ip}" + ``` ++## Establishing an SSH session with a Virtual Machine At this point, you have the `virtual_machine_access_id` and the `sshmux_ple_ip`.
This input is the info needed for establishing a session with the VM. The VM Console service was designed to allow **only** one session per Virtual Ma You can disable the session to a given VM by updating the expiration date/time and/or updating the public SSH key used when creating the session with a VM. ```bash- az networkcloud virtualmachine console update \ - --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \ - --resource-group "${VM_RESOURCE_GROUP}" \ - [--enabled True | False] \ - [--key-data "${CONSOLE_PUBLIC_KEY}"] \ - [--expiration "${CONSOLE_EXPIRATION_TIME}"] +az networkcloud virtualmachine console update \ + --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \ + --resource-group "${VM_RESOURCE_GROUP}" \ + [--enabled True | False] \ + [--key-data "${CONSOLE_PUBLIC_KEY}"] \ + [--expiration "${CONSOLE_EXPIRATION_TIME}"] ``` If you want to disable access to a VM, you need to update the Console resource with the parameter `enabled False`. This update closes any existing session and restricts any subsequent sessions. To clean up your VM Console environment setup, you need to delete the Console re 1. Deleting your Console resource ```bash- az networkcloud virtualmachine console delete \ - --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \ - --resource-group "${VM_RESOURCE_GROUP}" + az networkcloud virtualmachine console delete \ + --virtual-machine-name "${VIRTUAL_MACHINE_NAME}" \ + --resource-group "${VM_RESOURCE_GROUP}" ``` 1. Deleting the Private Link Endpoint - ```bash - az network private-endpoint delete \ - --name ${PRIVATE_ENDPOINT_NAME}-ple \ - --resource-group ${PRIVATE_ENDPOINT_NAME}-rg - ``` + ```bash + az network private-endpoint delete \ + --name ${PRIVATE_ENDPOINT_NAME}-ple \ + --resource-group ${PRIVATE_ENDPOINT_NAME}-rg + ``` |
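The VM Console steps above end with two values in hand, `virtual_machine_access_id` and `sshmux_ple_ip`. A hedged sketch of composing the resulting `ssh` invocation from them; the placeholder values, key path, and exact option set are assumptions, and your environment may require different options:

```shell
#!/usr/bin/env bash
# Sketch only: placeholder values stand in for the IDs retrieved with the
# Azure CLI queries in the article; the real values come from your deployment.
virtual_machine_access_id="0a1b2c3d-vm-access-id"   # from `az networkcloud virtualmachine console show`
sshmux_ple_ip="10.0.1.5"                            # private IP of the PLE NIC
private_key="${HOME}/.ssh/id_rsa"                   # must match CONSOLE_PUBLIC_KEY

# The VM Access ID is used as the ssh user; the PLE private IP is the host.
ssh_command="ssh -i ${private_key} ${virtual_machine_access_id}@${sshmux_ple_ip}"
echo "${ssh_command}"
```

The command is composed and printed rather than executed here, since a live session requires the PLE connectivity set up earlier.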
operator-nexus | Quickstarts Tenant Workload Deployment Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment-ps.md | New-AzNetworkCloudVirtualMachine -Name $VM_NAME ` After a few minutes, the command completes and returns information about the virtual machine. You've created the virtual machine. You're now ready to use it. -> [!NOTE] -> If each server has two CPU chipsets and each CPU chip has 28 cores, then with hyperthreading enabled (default), the CPU chip supports 56 vCPUs. With 8 vCPUs in each chip reserved for infrastructure (OS and agents), the remaining 48 are available for tenant workloads. - ## Review deployed resources [!INCLUDE [quickstart-review-deployment-poweshell](./includes/virtual-machine/quickstart-review-deployment-ps.md)] |
orbital | Register Spacecraft | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md | Title: Register Spacecraft on Azure Orbital Earth Observation service + Title: Register Spacecraft on Azure Orbital Ground Station description: Learn how to register a spacecraft. -To contact a satellite, it must be registered and authorized as a spacecraft resource with Azure Orbital Ground Station using required identifying information. +To contact a satellite, it must be registered and authorized as a spacecraft resource with Azure Orbital Ground Station. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Contributor permissions at the subscription level.-- A [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request.-- Private spacecraft must have a relevant spacecraft license.+- [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request. +- Private spacecraft: an active spacecraft license and relevant ground station licenses. +- An active contract with the partner network(s) you wish to integrate with Azure Orbital Ground Station. ## Sign in to Azure Sign in to the [Azure portal](https://aka.ms/orbital/portal). ## Create spacecraft resource -1. In the Azure Portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results. +1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results. 2. In the **Spacecraft** page, select Create. 3. In **Create spacecraft resource**, enter or select this information in the Basics tab: Sign in to the [Azure portal](https://aka.ms/orbital/portal). 
## Request authorization of the new spacecraft resource > [!NOTE]- > Authorization of private spacecraft requires the customer to have acquired a spacecraft license for their spacecraft. Microsoft can provide technical information required to complete the federal regulator and ITU processes as needed. This process augments the ground station licenses in our network and ensures customer satellites are added and authorized. Third party ground station networks are not included. - > - > Public spacecraft do not require licensing for authorization. The Azure Orbital Ground Station service supports several public satellites including Aqua, Suomi NPP, JPSS-1/NOAA-20, and Terra. -+ > Private spacecraft: prior to submitting an authorization request, you must have an active spacecraft license for your satellite and work with Microsoft to add your satellite to our ground station licenses. Microsoft can provide technical information required to complete the federal regulator and ITU processes as needed. + > Public spacecraft: licensing is not required for authorization. The Azure Orbital Ground Station service supports several public satellites including Aqua, Suomi NPP, JPSS-1/NOAA-20, and Terra. 1. Navigate to the newly created spacecraft resource's overview page. 2. Select **New support request** in the Support + troubleshooting section of the left-hand blade. Sign in to the [Azure portal](https://aka.ms/orbital/portal). > [!NOTE] > A [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required for a spacecraft authorization request. -### Private spacecraft --#### Step 1: Provide more detail -After the authorization request is generated, our regulatory team will investigate the request and determine if more detail is required. If so, a customer support representative will reach out to you with a regulatory intake form.
You will need to input information regarding relevant filings, call signs, orbital parameters, link details, antenna details, point of contacts, etc. --Fill out all relevant fields in this form. When you are finished entering information, email this form back to the customer support representative. --#### Step 2: Await feedback from our regulatory team -Based on the details provided in the steps above, our regulatory team will make an assessment on time and cost to onboard your spacecraft to all requested ground stations. This step will take a few weeks to execute. --Once the determination is made, we will confirm the cost with you and ask you to authorize before proceeding. --#### Step 3: Azure Orbital Ground Station requests relevant ground station licensing --Upon authorization, you will be billed the fees associated with licensing your spacecraft at each relevant ground station. Our regulatory team will seek the relevant licenses to enable your spacecraft to communicate with the desired ground stations. Refer to the following table for an estimated timeline for execution: --| **Station** | **Qunicy** | **Chile** | **Sweden** | **South Africa** | **Singapore** | -| -- | - | | - | - | - | -| Onboarding Timeframe | 3-6 months | 3-6 months | 3-6 months | <1 month | 3-6 months | - ## Confirm spacecraft is authorized 1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results. |
private-link | Private Endpoint Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md | The following information lists the known limitations to the use of private endp | No more than 50 members in an Application Security Group. | Fifty is the number of IP Configurations that can be tied to each respective ASG that's coupled to the NSG on the private endpoint subnet. Connection failures may occur with more than 50 members. | | Destination port ranges supported up to a factor of 250 K. | Destination port ranges are supported as a multiplication of SourceAddressPrefixes, DestinationAddressPrefixes, and DestinationPortRanges. </br></br> Example inbound rule: </br> One source * one destination * 4K portRanges = 4K Valid </br> 10 sources * 10 destinations * 10 portRanges = 1 K Valid </br> 50 sources * 50 destinations * 50 portRanges = 125 K Valid </br> 50 sources * 50 destinations * 100 portRanges = 250 K Valid </br> 100 sources * 100 destinations * 100 portRanges = 1M Invalid, NSG has too many sources/destinations/ports. | | Source port filtering is interpreted as * | Source port filtering isn't actively used as a valid scenario of traffic filtering for traffic destined to a private endpoint. |-| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast </br> All Government regions </br> All China regions | ### NSG more considerations |
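The 250 K limit in the table above is a straight product of sources, destinations, and port ranges. A quick sketch of the arithmetic; `check_nsg_rule` is an illustrative helper, not an Azure tool:

```shell
#!/usr/bin/env bash
# Check a prospective NSG rule against the sources x destinations x portRanges
# product limit described in the private endpoint limitations table (250 K max).
check_nsg_rule() {
  local sources="$1" destinations="$2" port_ranges="$3"
  local product=$(( sources * destinations * port_ranges ))
  if (( product <= 250000 )); then
    echo "valid: ${product}"
  else
    echo "invalid: ${product} exceeds 250000"
  fi
}

# Examples mirroring the table rows:
check_nsg_rule 50 50 100    # valid: 250000
check_nsg_rule 100 100 100  # invalid: 1000000 exceeds 250000
```

This matches the table: 50 x 50 x 100 sits exactly at the limit, while 100 x 100 x 100 overshoots it fourfold.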
private-link | Tutorial Inspect Traffic Azure Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-inspect-traffic-azure-firewall.md | In this section, you connect the virtual networks with virtual network peering. ||| | **This virtual network** | | | Peering link name | Enter **vnet-firewall-to-vnet-private-endpoint**. |- | Traffic to remote virtual network | Select **Allow (default)**. | - | Traffic forwarded from remote virtual network | Select **Allow (default)**. | - | Virtual network gateway or Route Server | Select **None (default)**. | + | Allow 'vnet-1' to access 'vnet-private-endpoint' | Leave the default of selected. | + | Allow 'vnet-1' to receive forwarded traffic from 'vnet-private-endpoint' | Select the checkbox. | + | Allow gateway in 'vnet-1' to forward traffic to 'vnet-private-endpoint' | Leave the default of cleared. | + | Enable 'vnet-1' to use 'vnet-private-endpoint' remote gateway | Leave the default of cleared. | | **Remote virtual network** | | | Peering link name | Enter **vnet-private-endpoint-to-vnet-firewall**. | | Virtual network deployment model | Select **Resource manager**. | | Subscription | Select your subscription. | | Virtual network | Select **vnet-private-endpoint**. |- | Traffic to remote virtual network | Select **Allow (default)**. | - | Traffic forwarded from remote virtual network | Select **Allow (default)**. | - | Virtual network gateway or Route Server | Select **None (default)**. | + | Allow 'vnet-private-endpoint' to access 'vnet-1' | Leave the default of selected. | + | Allow 'vnet-private-endpoint' to receive forwarded traffic from 'vnet-1' | Select the checkbox. | + | Allow gateway in 'vnet-private-endpoint' to forward traffic to 'vnet-1' | Leave the default of cleared. | + | Enable 'vnet-private-endpoint' to use 'vnet-1's' remote gateway | Leave the default of cleared. | 1. Select **Add**. |
route-server | Hub Routing Preference Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/hub-routing-preference-cli.md | Use [az network routeserver show](/cli/azure/network/routeserver#az-network-rout az network routeserver show --resource-group 'myResourceGroup' --name 'myRouteServer' ``` -In the output, you can see the current routing preference setting in front of **"HubRoutingPreference":**: +In the output, you can see the current routing preference setting in front of **"hubRoutingPreference"**: ```output { |
sap | Advanced State Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/advanced-state-management.md | description: Updates the Terraform state file using a shell script # advanced_state_management.sh ## Synopsis-Updates the Terraform state file. +Allows for Terraform state file management. ## Syntax Updates the Terraform state file. advanced_state_management.sh [--parameterfile] <String> [--type] <String> +[--operation] <String> [--terraform_keyfile] <String> [--subscription] <String> [--storage_account_name] <String> advanced_state_management.sh [--parameterfile] <String> ``` ## Description-You can use this script to add missing or modified resources to the Terraform state file. This script is useful if resources have been modified or created without using Terraform. +You can use this script to: ++- add missing or modified resources to the Terraform state file. +- remove resources from the Terraform state file. +- list the resources in the Terraform state file. ++This script is useful if resources have been modified or created without using Terraform. ## Examples azure_resource_id="/subscriptions/<subscriptionId>/resourceGroups/DEV-WEEU-SAP01 $DEPLOYMENT_REPO_PATH/deploy/scripts/advanced_state_management.sh \ --parameterfile "${parameter_file_name}" \ --type "${deployment_type}" \+ --operation import \ --subscription "${subscriptionID}" \ --storage_account_name "${storage_accountname}" \ --terraform_keyfile "${key_file}" \ --tf_resource_name "${tf_resource_name}" \- --azure_resource_id "${azure_resource_id}" + --azure_resource_id "${azure_resource_id}" + ``` ++### Example 2 ++Removing a storage account from the state file ++```bash ++parameter_file_name="DEV-WEEU-SAP01-X00.tfvars" +deployment_type="sap_system" +subscriptionID="<subscriptionId>" ++filepart=$(echo "${parameter_file_name}" | cut -d. 
-f1) +key_file=${filepart}.terraform.tfstate ++#This is the name of the storage account containing the terraform state files +storage_accountname="<storageaccountname>" ++#Terraform resource name of the storage account to remove +tf_resource_name="module.common_infrastructure.azurerm_storage_account.sapmnt[0]" + +$DEPLOYMENT_REPO_PATH/deploy/scripts/advanced_state_management.sh \ + --parameterfile "${parameter_file_name}" \ + --type "${deployment_type}" \ + --operation remove \ + --subscription "${subscriptionID}" \ + --storage_account_name "${storage_accountname}" \ + --terraform_keyfile "${key_file}" \ + --tf_resource_name "${tf_resource_name}" ``` + ## Parameters ### `--parameterfile` Accepted values: sap_deployer, sap_landscape, sap_library, sap_system Required: True ``` +### `--operation` +Sets the operation to perform. Valid values are `import`, `list`, and `remove`. ++```yaml +Type: String +Aliases: `-t` +Accepted values: import, list, remove ++Required: True +``` ++ ### `--terraform_keyfile` Sets the Terraform state file's name. |
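The `--operation` values of `advanced_state_management.sh` map naturally onto standard Terraform state commands. A hypothetical sketch of that mapping; it is not the script's actual internals, and the commands are echoed rather than executed so the sketch is safe to run anywhere:

```shell
#!/usr/bin/env bash
# Hypothetical mapping of --operation values to terraform CLI commands.
# This illustrates the concept only; the real script adds remote-state
# configuration, validation, and error handling around these calls.
run_state_operation() {
  local operation="$1" tf_resource_name="$2" azure_resource_id="${3:-}"
  case "${operation}" in
    import) echo terraform import "${tf_resource_name}" "${azure_resource_id}" ;;
    list)   echo terraform state list ;;
    remove) echo terraform state rm "${tf_resource_name}" ;;
    *)      echo "unknown operation: ${operation}" >&2; return 1 ;;
  esac
}

# Mirrors Example 2 above: removing the sapmnt storage account from state.
run_state_operation remove "module.common_infrastructure.azurerm_storage_account.sapmnt[0]"
```

`terraform state rm` only forgets the resource in the state file; the Azure resource itself is untouched, which is why the script is useful when resources were changed outside Terraform.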
sap | Cal S4h | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md | The online library is continuously updated with Appliances for demo, proof of co | [**SAP S/4HANA 2022 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/983008db-db92-4d4d-ac79-7e2afa95a2e0)| July 16 2023 |This appliance contains SAP S/4HANA 2022 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=983008db-db92-4d4d-ac79-7e2afa95a2e0&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | [**SAP S/4HANA 2022 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3722f683-42af-4059-90db-4e6a52dc9f54) | April 20 2023 |This appliance contains SAP S/4HANA 2022 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. 
| [Create Appliance](https://cal.sap.com/registration?sguid=3722f683-42af-4059-90db-4e6a52dc9f54&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | | [**SAP S/4HANA 2021 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/a954cc12-da16-4caa-897e-cf84bc74cf15)| April 26 2022 |This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. |[Create Appliance](https://cal.sap.com/registration?sguid=a954cc12-da16-4caa-897e-cf84bc74cf15&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |-| [**SAP BW/4HANA 2021 SP04 Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/1b0ac659-a5b4-4d3b-b1ae-f1a1cb89c6db)| March 23 2023 | This solution offers you an insight of SAP BW/4HANA2021 SP04. SAP BW/4HANA is the next generation Data Warehouse optimized for SAP HANA. Beside the basic BW/4HANA options the solution offers a bunch of SAP HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. | [Create Appliance](https://cal.sap.com/registration?sguid=1b0ac659-a5b4-4d3b-b1ae-f1a1cb89c6db&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | +| [**SAP NetWeaver AS ABAP 7.51 SP02 on ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/56fea1da-3460-4398-bc75-c612a4bc345e)| January 03 2018 |The ABAP AS on ASE 16.0 provides a great platform for trying out the ABAP language and toolset. 
It is extensively pre-configured with Fiori launchpad, SAP Cloud Connector, SAP Java Virtual Machine, pre-configured backend/frontend connections, roles, and sample applications. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations/persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Create Appliance](https://cal.sap.com/registration?sguid=56fea1da-3460-4398-bc75-c612a4bc345e&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | [**SAP ABAP Platform 1909, Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/5a830213-f0cb-423e-ab5f-f7736e57f5a1)| May 10 2023 | The SAP ABAP Platform on SAP HANA gives you access to your own copy of SAP ABAP Platform 1909 Developer Edition on SAP HANA. Note that this solution is preconfigured with many additional elements, including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and preconfigured frontend/backend connections, etc. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations/persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. 
| [Create Appliance](https://cal.sap.com/registration?sguid=5a830213-f0cb-423e-ab5f-f7736e57f5a1&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |-| [**SAP NetWeaver 7.5 SP15 on SAP ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) | January 3 2018 | SAP NetWeaver 7.5 SP15 on SAP ASE | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | +| [**SAP Solution Manager 7.2 SP17 & Focused Solutions SP12 (Baseline)**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/811a4b92-3ea1-4108-9661-c38e775ca488)| September 24 2023 |This template contains a partly configured SAP Solution Manager 7.2 SP17 (incl. Focused Build and Focused Insights 2.0 SP12). Only the Mandatory Configuration and Focused Build configuration are performed. The system is clean and does not contain pre-defined demo scenarios. | [Create Appliance](https://cal.sap.com/registration?sguid=811a4b92-3ea1-4108-9661-c38e775ca488&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | + + |
search | Search Api Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md | Preview features that transition to general availability are removed from this l |Feature | Category | Description | Availability | |||-||+| [**Exhaustive K-Nearest Neighbors (KNN)**](vector-search-ranking.md#eknn) | Vector search | Exhaustive K-Nearest Neighbors (KNN) is a new scoring algorithm for similarity search in vector space. It performs an exhaustive search for the nearest neighbors, useful for situations where high recall is more important than query performance. | Available in the 2023-10-01-Preview REST API. | +| [**Prefilters in vector search**](vector-search-how-to-query.md) | Vector search | Evaluates filter criteria before query execution, reducing the amount of content that needs to be searched. | Available in the 2023-10-01-Preview REST API. | +| [**2023-10-01-Preview Search REST API**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) | Vector search | New preview version of the Search REST APIs that changes the definition for [vector fields](vector-search-how-to-create-index.md) and [vector queries](vector-search-how-to-query.md). This API version introduces breaking changes from **2023-07-01-Preview**, otherwise it's inclusive of all previous preview features. If you're using earlier previews, switch to **2023-10-01-Preview** with no loss of functionality, assuming you make updates to vector code. | Public preview, [Search REST API 2023-10-01-Preview](/rest/api/searchservice/index). Announced in October 2023. | | [**Vector search**](vector-search-overview.md) | Vector search | Adds vector fields to a search index for similarity search scenarios over vector representations of text, image, and multilingual content. | Public preview using the [Search REST API 2023-07-01-Preview](/rest/api/searchservice/index-preview) and Azure portal. 
| | [**Search REST API 2023-07-01-Preview**](/rest/api/searchservice/index-preview) | Vector search | Modifies [Create or Update Index](/rest/api/searchservice/preview-api/create-or-update-index) to include a new data type for vector search fields. It also adds query parameters for queries composed of vector data (embeddings) | Public preview, [Search REST API 2023-07-01-Preview](/rest/api/searchservice/index-preview). Announced in June 2023. | | [**Azure Files indexer**](search-file-storage-integration.md) | Indexer data source | Adds REST API support for creating indexers for [Azure Files](https://azure.microsoft.com/services/storage/files/) | Public preview, [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview). Announced in November 2021. | |
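The prefilter row above describes behavior that is controlled per query request. As a minimal sketch, assuming the `vectorFilterMode`/`vectorQueries` request shape introduced in the 2023-10-01-Preview (the embedding values and field name here are placeholders, not real model output):

```python
# Sketch of a 2023-10-01-Preview query body that applies a filter before
# vector execution. "PreFilter" evaluates the OData filter first and runs
# the vector search over the reduced set; "PostFilter" does the reverse.
def vector_query_with_filter(vector, filter_expr, mode="PreFilter", k=7):
    return {
        "filter": filter_expr,
        "vectorFilterMode": mode,
        "vectorQueries": [{
            "vector": vector,               # placeholder query embedding
            "k": k,                         # number of nearest neighbors
            "fields": "DescriptionVector",  # hypothetical vector field name
            "kind": "vector",
        }],
    }

body = vector_query_with_filter([0.01, -0.02], "Tags/any(tag: tag eq 'free wifi')")
```

Prefiltering trades a little query latency for recall: every document that survives the filter is a candidate for similarity scoring, instead of filtering an already-truncated top-k result.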
search | Search Get Started Vector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md | Last updated 10/10/2023 > [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme). -Get started with vector search in Azure Cognitive Search using the **2023-07-01-Preview** REST APIs that create, load, and query a search index. +Get started with vector search in Azure Cognitive Search using the **2023-10-01-Preview** REST APIs that create, load, and query a search index. Search indexes now support vector fields in the fields collection. When querying the search index, you can build vector-only queries, or create hybrid queries that target vector fields *and* textual fields configured for filters, sorts, facets, and semantic ranking. Search indexes now support vector fields in the fields collection. When querying For the optional [semantic search](semantic-search-overview.md) shown in the last example, your search service must be Basic tier or higher, with [semantic search enabled](semantic-how-to-enable-disable.md). -+ [Sample Postman collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart-vectors), with requests targeting the **2023-07-01-preview** API version of Azure Cognitive Search. ++ [Sample Postman collection](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart-vectors), with requests targeting the **2023-10-01-preview** API version of Azure Cognitive Search. + Optional. The Postman collection includes a **Generate Embedding** request that can generate vectors from text. To send this request, you need [Azure OpenAI](https://aka.ms/oai/access) with a deployment of **text-embedding-ada-002**. 
For this request only, provide your Azure OpenAI endpoint, Azure OpenAI key, model deployment name, and API version in the collection variables. If you're unfamiliar with Postman, see [this quickstart](search-get-started-rest 1. [Fork or clone the azure-search-postman-samples repository](https://github.com/Azure-Samples/azure-search-postman-samples). -1. Start Postman and import the collection `AzureSearchQuickstartVectors.postman_collection.json`. +1. Start Postman and import the `AzureSearchQuickstartVectors 2023-10-01-Preview.postman_collection.json` collection. 1. Right-click the collection name and select **Edit** to set the collection's variables to valid values for Azure Cognitive Search and Azure OpenAI. If you're unfamiliar with Postman, see [this quickstart](search-get-started-rest |-|| | index-name | *index names are lower-case, no spaces, and can't start or end with dashes* | | search-service-name | *from Azure portal, get just the name of the service, not the full URL* |- | search-api-version | 2023-07-01-Preview | + | search-api-version | 2023-10-01-Preview | | search-api-key | *provide an admin key* | | openai-api-key | *optional. Set this value if you want to generate embeddings. Find this value in Azure portal.* | | openai-service-name | *optional. Set this value if you want to generate embeddings. Find this value in Azure portal.* | You're now ready to send the requests to your search service. For each request, ## Create an index -Use the [Create or Update Index](/rest/api/searchservice/preview-api/create-or-update-index) REST API for this request. +Use the [Create or Update Index](/rest/api/searchservice/2023-10-01-preview/indexes/create-or-update) REST API for this request. The index schema is organized around hotels content. Sample data consists of the names, descriptions, and locations of seven fictitious hotels. This schema includes fields for vector and traditional keyword search, with configurations for vector and semantic search. 
api-key: {{admin-api-key}} "facetable": false }, {- "name": "HotelNameVector", + "name": "HotelNameVector", "type": "Collection(Edm.Single)",- "searchable": true, + "searchable": true, "retrievable": true, "dimensions": 1536,- "vectorSearchConfiguration": "my-vector-config" + "vectorSearchProfile": "my-vector-profile" }, { "name": "Description", api-key: {{admin-api-key}} "facetable": false }, {- "name": "DescriptionVector", + "name": "DescriptionVector", "type": "Collection(Edm.Single)",- "searchable": true, + "searchable": true, "retrievable": true, "dimensions": 1536,- "vectorSearchConfiguration": "my-vector-config" + "vectorSearchProfile": "my-vector-profile" }, { "name": "Category", "type": "Edm.String", api-key: {{admin-api-key}} } ], "vectorSearch": {- "algorithmConfigurations": [ + "algorithms": [ {- "name": "my-vector-config", + "name": "my-hnsw-vector-config-1", "kind": "hnsw", "hnswParameters": { api-key: {{admin-api-key}} "efSearch": 500, "metric": "cosine" }+ }, + { + "name": "my-hnsw-vector-config-2", + "kind": "hnsw", + "hnswParameters": + { + "m": 4, + "metric": "euclidean" + } + }, + { + "name": "my-eknn-vector-config", + "kind": "exhaustiveKnn", + "exhaustiveKnnParameters": + { + "metric": "cosine" + } }- ] + ], + "profiles": [ + { + "name": "my-vector-profile", + "algorithm": "my-hnsw-vector-config-1" + } + ] }, "semantic": { "configurations": [ You should get a status HTTP 201 success. + The `"fields"` collection includes a required key field, text and vector fields (such as `"Description"`, `"DescriptionVector"`) for keyword and vector search. Colocating vector and non-vector fields in the same index enables hybrid queries. For instance, you can combine filters, keyword search with semantic ranking, and vectors into a single query operation. -+ Vector fields must be `"type": "Collection(Edm.Single)"` with `"dimensions"` and `"vectorSearchConfiguration"` properties. 
See [this article](/rest/api/searchservice/preview-api/create-or-update-index) for property descriptions. ++ Vector fields must be `"type": "Collection(Edm.Single)"` with `"dimensions"` and `"vectorSearchProfile"` properties. See [Create or Update Index](/rest/api/searchservice/2023-10-01-preview/indexes/create-or-update) for property descriptions. -+ The `"vectorSearch"` section is an array of algorithm configurations used by vector fields. Currently, only HNSW is supported. HNSW is a graph-based Approximate Nearest Neighbors (ANN) algorithm optimized for high-recall, low-latency applications. ++ The `"vectorSearch"` section is an array of Approximate Nearest Neighbors (ANN) algorithm configurations and profiles. Supported algorithms include HNSW and eKNN. See [Relevance scoring in vector search](vector-search-ranking.md) for details. + [Optional]: The `"semantic"` configuration enables reranking of search results. You can rerank results in queries of type `"semantic"` for string fields that are specified in the configuration. See [Semantic Search overview](semantic-search-overview.md) to learn more. ## Upload documents -Use the [Add, Update, or Delete Documents](/rest/api/searchservice/preview-api/add-update-delete-documents) REST API for this request. +Use the [Index Documents](/rest/api/searchservice/2023-10-01-preview/documents/) REST API for this request. For readability, the following excerpt shows just the fields used in queries, minus the vector values associated with `DescriptionVector`. Each vector field contains 1536 embeddings, so those values are omitted for readability. api-key: {{admin-api-key}} ## Run queries -Use the [Search Documents](/rest/api/searchservice/preview-api/search-documents) REST API for query request. Public preview has specific requirements for using POST on the queries. Also, the API version must be 2023-07-01-Preview. 
+Use the [Search Documents](/rest/api/searchservice/2023-10-01-preview/documents/search-post) REST API for query requests. Public preview has specific requirements for using POST on the queries. Also, the API version must be 2023-10-01-Preview if you want vector filters and profiles. There are several queries to demonstrate various patterns. The queries in this section are based on two strings: + search string: *"historic hotel walk to restaurants and shopping"* + vector query string (vectorized into a mathematical representation): *"classic lodging near running trails, eateries, retail"* -The vector query string is semantically similar to the search string, but has terms that don't exist in the search index. If you do a keyword search for "classic lodging near running trails, eateries, retail", results are zero. We use this example to show you can get relevant results even if there are no matching terms. +The vector query string is semantically similar to the search string, but has terms that don't exist in the search index. If you do a keyword search for "classic lodging near running trails, eateries, retail", results are zero. We use this example to show how you can get relevant results even if there are no matching terms. ### Single vector search In this vector query, which is shortened for brevity, the `"value"` contains the vectorized query text. The vector query string is *"classic lodging near running trails, eateries, retail"* - vectorized into 1536 embeddings for this query. ```http-POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}} +POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-10-01-Preview Content-Type: application/json api-key: {{admin-api-key}} { api-key: {{admin-api-key}} "value": [0.01944167, 0.0040178085 . . . 
010858015, -0.017496133],+ "k": 7, "fields": "DescriptionVector",- "k": 7 + "kind": "vector", + "exhaustive": true } ] } The response for the vector equivalent of "classic lodging near running trails, You can add filters, but the filters are applied to the non-vector content in your index. In this example, the filter applies to the `"Tags"` field, filtering out any hotels that don't provide free WIFI. +This example sets `vectorFilterMode` to pre-query filtering, which is the default, so you don't need to set it. It's listed here for awareness because it's new in 2023-10-01-Preview. + ```http-POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}} +POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-10-01-Preview Content-Type: application/json api-key: {{admin-api-key}} { "count": true, "select": "HotelName, Tags, Description", "filter": "Tags/any(tag: tag eq 'free wifi')",- "vectors": [ + "vectorFilterMode": "PreFilter", + "vectorQueries": [ {- "value": [ VECTOR OMITTED ], + "vector": [ VECTOR OMITTED ], + "k": 7, "fields": "DescriptionVector",- "k": 7 + "kind": "vector", + "exhaustive": true }, ] } Hybrid search consists of keyword queries and vector queries in a single search + vector query string (vectorized into a mathematical representation): *"classic lodging near running trails, eateries, retail"* ```http-POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}} +POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-10-01-Preview Content-Type: application/json api-key: {{admin-api-key}} { api-key: {{admin-api-key}} "search": "historic hotel walk to restaurants and shopping", "select": "HotelName, Description", "top": 7,- "vectors": [ + "vectorQueries": [ {- "value": [ VECTOR OMITTED], + "vector": [ VECTOR OMITTED], "k": 7,- 
"fields": "DescriptionVector" + "fields": "DescriptionVector", + "kind": "vector", + "exhaustive": true } ] } In the vector-only query, Sublime Cliff Hotel drops to position four. But Histor ### Semantic hybrid search with filter -Here's the last query in the collection: a hybrid query, with semantic ranking, filtered to show just those hotels within a 500-kilometer radius of Washington D.C. +Here's the last query in the collection: a hybrid query, with semantic ranking, filtered to show just those hotels within a 500-kilometer radius of Washington D.C. `vectorFilterMode` can be set to null, which is equivalent to the default (`preFilter` for newer indexes, `postFilter` for older ones). ```http-POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}} +POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-10-01-Preview Content-Type: application/json api-key: {{admin-api-key}} { api-key: {{admin-api-key}} "search": "historic hotel walk to restaurants and shopping", "select": "HotelId, HotelName, Category, Description,Address/City, Address/StateProvince", "filter": "geo.distance(Location, geography'POINT(-77.03241 38.90166)') le 500",+ "vectorFilterMode": null, "facets": [ "Address/StateProvince"], "top": 7, "queryType": "semantic", api-key: {{admin-api-key}} "answers": "extractive|count-3", "captions": "extractive|highlight-true", "semanticConfiguration": "my-semantic-config",- "vectors": [ + "vectorQueries": [ {- "value": [ VECTOR OMITTED ], + "vector": [ VECTOR OMITTED ], "k": 7,- "fields": "DescriptionVector" + "fields": "DescriptionVector", + "kind": "vector", + "exhaustive": true } ] } |
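The hybrid examples above can be sketched end to end. This is a hedged illustration only: the request body follows the shapes shown in the excerpts above, while the helper names, placeholder embedding, and service/index values are assumptions introduced here:

```python
import json
import urllib.request

def build_hybrid_query(search_text, vector, k=7):
    """Combine a keyword query and a vector query in one request body,
    following the 2023-10-01-Preview hybrid examples above."""
    return {
        "search": search_text,
        "select": "HotelName, Description",
        "top": k,
        "vectorQueries": [{
            "vector": vector,  # placeholder; real queries pass 1536 embeddings
            "k": k,
            "fields": "DescriptionVector",
            "kind": "vector",
        }],
    }

def send_query(service, index, api_key, body):
    # POST to the documents/search endpoint, as in the HTTP examples above.
    # service, index, and api_key are placeholders you must supply.
    url = (f"https://{service}.search.windows.net/indexes/{index}"
           f"/docs/search?api-version=2023-10-01-Preview")
    req = urllib.request.Request(
        url, data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_hybrid_query("historic hotel walk to restaurants and shopping",
                          [0.01, -0.02])
```

Because both the `search` text and the `vectorQueries` entry are present, the service scores documents against each and fuses the results, which is why the hybrid ranking can differ from the vector-only ranking.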
search | Search Synapseml Cognitive Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synapseml-cognitive-services.md | display(translated_df) Paste the following code in the sixth cell and then run it. No modifications are required. This code loads [AzureSearchWriter](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/AI%20Services/Overview/#azure-cognitive-search-sample). It consumes a tabular dataset and infers a search index schema that defines one field for each column. The translations structure is an array, so it's articulated in the index as a complex collection with subfields for each language translation. The generated index will have a document key and use the default values for fields created using the [Create Index REST API](/rest/api/searchservice/create-index).-This code loads [AzureSearchWriter](). It consumes a tabular dataset and infers a search index schema that defines one field for each column. The translations structure is an array, so it's articulated in the index as a complex collection with subfields for each language translation. The generated index will have a document key and use the default values for fields created using the [Create Index REST API](/rest/api/searchservice/create-index). ```python from synapse.ml.cognitive import * |
search | Vector Search How To Create Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md | -> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme). +> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST APIs, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme). In Azure Cognitive Search, vector data is indexed as *vector fields* in a [search index](search-what-is-an-index.md), using a *vector configuration* to specify the embedding space definition. Follow these steps to index vector data: > [!div class="checklist"]-> + Add one or more vector fields to the index schema. > + Add one or more vector configurations. +> + Add one or more vector fields to the index schema. > + Load the index with vector data [as a separate step](#load-vector-data-for-indexing), after the index schema is defined. Code samples in the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr) repository demonstrate end-to-end workflows that include schema definition, vectorization, indexing, and queries. ## Prerequisites -+ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For services created prior to January 2019, there is a small subset which won't support vector search. If an index containing vector fields fails to be created or updated, this is an indicator. In this situation, a new service must be created. ++ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. 
For services created prior to January 2019, there's a small subset which won't support vector search. A failure to create or update an index that contains vector fields is an indicator; in this situation, a new service must be created. + Pre-existing vector embeddings in your source documents. Cognitive Search doesn't generate vectors. We recommend [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) but you can use any model for vectorization. For more information, see [Create and use embeddings for search queries and documents](vector-search-how-to-generate-embeddings.md). Your search index should include fields and content for all of the query scenarios. A short example of a documents payload that includes vector and non-vector fields is in the [load vector data](#load-vector-data-for-indexing) section of this article. -## Add a vector field to the fields collection --The schema must include a `vectorConfiguration` section, a field for the document key, vector fields, and any other fields that you need for hybrid search scenarios. --+ `vectorConfiguration` specifies the algorithm and parameters used during indexing to create "nearest neighbor" information among the vector nodes. Currently, only Hierarchical Navigable Small World (HNSW) is supported. +## Add a vector search configuration -+ Vector fields are of type `Collection(Edm.Single)` and single-precision floating-point values. A field of this type also has a `dimensions` property and a `vectorConfiguration` property +The schema must include a field for the document key, a vector configuration object, vector fields, and any other fields that you need for hybrid search scenarios. -During indexing, HNSW determines how closely the vectors match and stores the neighborhood information as a proximity graph in the index. You can have multiple configurations within an index if you want different HNSW parameter combinations. 
As long as the vector fields contain embeddings from the same model, having a different vector configuration per field has no effect on queries. +A vector configuration object specifies the algorithm and parameters used during indexing to create "nearest neighbor" information among the vector nodes. -You can use the Azure portal, REST APIs, or the beta packages of the Azure SDKs to index vectors. +You can define multiple [algorithm configurations](vector-search-ranking.md). In the fields definition, you'll choose one for each vector field. During indexing, the nearest neighbor algorithm determines how closely the vectors match and stores the neighborhood information as a proximity graph in the index. You can have multiple configurations within an index if you want different parameter combinations. As long as the vector fields contain embeddings from the same model, having a different vector configuration per field has no effect on queries. -### [**Azure portal**](#tab/portal-add-field) +In the **2023-10-01-Preview**, you can specify either approximate or exhaustive nearest neighbor algorithms: -Use the index designer in the Azure portal to add vector field definitions. If the index doesn't have a vector configuration, you're prompted to create one when you add your first vector field to the index. ++ Hierarchical Navigable Small World (HNSW)++ Exhaustive KNN -Although you can add a field to an index, there's no portal (Import data wizard) support for loading it with vector data. Instead, use the REST APIs or an SDK for data import. +If you choose HNSW on a field, you can opt in for exhaustive KNN at query time. But the other direction won't work: if you choose exhaustive, you can't later request HNSW search because the extra data structures that enable approximate search don't exist. -1. [Sign in to Azure portal](https://portal.azure.com) and open your search service page in a browser. 
+You can use the Azure portal, REST APIs, or the beta packages of the Azure SDKs to index vectors. To evaluate the newest vector search behaviors, use the **2023-10-01-Preview** REST API version. -1. In the left-side navigation pane, select **Search management** > **Indexes**. +### [**2023-10-01-Preview**](#tab/config-2023-10-01-Preview) -1. Select **+ Add Index** and give the index a name. +REST API version [**2023-10-01-Preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) introduces breaking changes to vector configuration and vector field definitions. This version adds: -1. Select **Add Field**: ++ `vectorProfiles`++ `exhaustiveKnn` nearest neighbors algorithm for indexing vector content - :::image type="content" source="media/vector-search-how-to-create-index/portal-add-field.png" alt-text="Screenshot of the Add Field pane." border="true"::: +1. Use the [Create or Update Index Preview REST API](/rest/api/searchservice/2023-10-01-preview/indexes/create-or-update) to create the index. - **Key points**: +1. Add a `vectorSearch` section in the index that specifies the similarity algorithms used to create the embedding space. Valid algorithms are `"hnsw"` and `exhaustiveKnn`. You can specify variants of each algorithm if you want different parameter combinations. - + Name the field (no spaces). - + Choose type `Collection(Edm.Single)`. - + Select "Retrievable" if you want the query to return the vector data in search results. If you have other fields with human readable content that you can return as a proxy for the match, you should set "Retrievable" to false to save space. - + "Searchable" is mandatory for a vector field and can't be changed. - + "Dimensions" is the length of the vector returned by the model. Set this value to specify `1536` for **text-embeddding-ada-002**, where the input text that you provide is numerically described using 1536 dimensions. + For "metric", valid values are `cosine`, `euclidean`, and `dotProduct`. 
The `cosine` metric is specified because it's the similarity metric that the Azure OpenAI models use to create embeddings. -1. Select or create a vector configuration used for similarity search. If the index doesn't have a vector configuration, you must select **Create**. + ```json + "vectorSearch": { + "algorithms": [ + { + "name": "my-hnsw-config-1", + "kind": "hnsw", + "hnswParameters": { + "m": 4, + "efConstruction": 400, + "efSearch": 500, + "metric": "cosine" + } + }, + { + "name": "my-hnsw-config-2", + "kind": "hnsw", + "hnswParameters": { + "m": 8, + "efConstruction": 800, + "efSearch": 800, + "metric": "cosine" + } + }, + { + "name": "my-eknn-config", + "kind": "exhaustiveKnn", + "exhaustiveKnnParameters": { + "metric": "cosine" + } + } - :::image type="content" source="media/vector-search-how-to-create-index/portal-add-vector-configuration.png" alt-text="Screenshot of the vector configuration properties." border="true"::: + ], + "profiles": [ + { + "name": "my-default-vector-profile", + "algorithm": "my-hnsw-config-2" + } + ] + } + ``` **Key points**: - + Name the configuration. The name must be unique within the index. - + "hnsw" is the Approximate Nearest Neighbors (ANN) algorithm used to create the proximity graph during indexing. Currently, only Hierarchical Navigable Small World (HNSW) is supported. - + "Bi-directional link count" default is 4. The range is 4 to 10. Lower values should return less noise in the results. - + "efConstruction" default is 400. The range is 100 to 1,000. It's the number of nearest neighbors used during indexing. - + "efSearch default is 500. The range is 100 to 1,000. It's the number of nearest neighbors used during search. - + "Similarity metric" should be "cosine" if you're using Azure OpenAI, otherwise use the similarity metric associated with the embedding model your're using. Supported values are `cosine`, `dotProduct`, `euclidean`. 
-- If you're familiar with HNSW parameters, you might be wondering about how to set the `"k"` number of nearest neighbors to return in the result. In Cognitive Search, that value is set on the [query request](vector-search-how-to-query.md). --1. Select **Save** to save the vector configuration and the field definition. + + Name of the configuration. The name must be unique within the index. + + Profiles are new in this preview. They add a layer of abstraction for accommodating richer definitions. A profile is defined in `vectorSearch`, and then as a property on each vector field. + + `"hnsw"` and `"exhaustiveKnn"` are the Approximate Nearest Neighbors (ANN) algorithms used to organize vector content during indexing. + + `"m"` (bi-directional link count) default is 4. The range is 4 to 10. Lower values should return less noise in the results. + + `"efConstruction"` default is 400. The range is 100 to 1,000. It's the number of nearest neighbors used during indexing. + + `"efSearch"` default is 500. The range is 100 to 1,000. It's the number of nearest neighbors used during search. + + `"metric"` should be "cosine" if you're using Azure OpenAI, otherwise use the similarity metric associated with the embedding model you're using. Supported values are `cosine`, `dotProduct`, `euclidean`. -### [**REST API**](#tab/rest-add-field) +### [**2023-07-01-Preview**](#tab/rest-add-config) -Use the **2023-07-01-Preview** REST API for vector scenarios. If you're updating an existing index to include vector fields, make sure the `allowIndexDowntime` query parameter is set to `true`. +REST API version [**2023-07-01-Preview**](/rest/api/searchservice/index-preview) enables vector scenarios. This version adds: -In the following REST API example, "title" and "content" contain textual content used in full text search and semantic search, while "titleVector" and "contentVector" contain vector data. 
++ `vectorConfigurations`++ `hnsw` nearest neighbor algorithm for indexing vector content 1. Use the [Create or Update Index Preview REST API](/rest/api/searchservice/preview-api/create-or-update-index) to create the index. -1. Add a `vectorSearch` section in the index that specifies the similarity algorithm used to create the embedding space. Currently, only `"hnsw"` is supported. For "metric", valid values are `cosine`, `euclidean`, and `dotProduct`. The `cosine` metric is specified because it's the similarity metric that the Azure OpenAI models use to create embeddings. +1. Add a `vectorSearch` section in the index that specifies the similarity algorithm used to create the embedding space. In this API version, only `"hnsw"` is supported. For "metric", valid values are `cosine`, `euclidean`, and `dotProduct`. The `cosine` metric is specified because it's the similarity metric that the Azure OpenAI models use to create embeddings. ```json "vectorSearch": { In the following REST API example, "title" and "content" contain textual content **Key points**: - + Name the configuration. The name must be unique within the index. - + "hnsw" is the Approximate Nearest Neighbors (ANN) algorithm used to create the proximity graph during indexing. Currently, only Hierarchical Navigable Small World (HNSW) is supported. + + Name of the configuration. The name must be unique within the index. + + "hnsw" is the Approximate Nearest Neighbors (ANN) algorithm used to create the proximity graph during indexing. Only Hierarchical Navigable Small World (HNSW) is supported in this API version. + "m" (bi-directional link count) default is 4. The range is 4 to 10. Lower values should return less noise in the results. + "efConstruction" default is 400. The range is 100 to 1,000. It's the number of nearest neighbors used during indexing. + "efSearch" default is 500. The range is 100 to 1,000. 
It's the number of nearest neighbors used during search.- + "metric" should be "cosine" if you're using Azure OpenAI, otherwise use the similarity metric associated with the embedding model your're using. + + "metric" should be "cosine" if you're using Azure OpenAI, otherwise use the similarity metric associated with the embedding model you're using. Supported values are `cosine`, `dotProduct`, `euclidean`. ++### [**.NET**](#tab/dotnet-add-config) +++ Use the [**Azure.Search.Documents 11.5.0-beta.4**](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0-beta.4) package for vector scenarios. +++ See the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) GitHub repository for .NET code samples.++### [**Python**](#tab/python-add-config) +++ Use the [**azure-search-documents 11.4.0b8**](https://pypi.org/project/azure-search-documents/11.4.0b8/) package for vector scenarios. +++ See the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) GitHub repository for Python code samples.++### [**JavaScript**](#tab/js-add-config) +++ Use the [**@azure/search-documents 12.0.0-beta.2**](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2) package for vector scenarios. +++ See the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript) GitHub repository for JavaScript code samples.++++## Add a vector field to the fields collection ++The fields collection must include a field for the document key, vector fields, and any other fields that you need for hybrid search scenarios. ++Vector fields are of type `Collection(Edm.Single)` and contain single-precision floating-point values. A field of this type also has a `dimensions` property and specifies a vector configuration. 
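As a sketch of these rules (the helper below is hypothetical, not an SDK API), a vector field definition can be generated with the required attribute settings baked in:

```python
def vector_field(name, dimensions, profile, retrievable=False):
    # Vector fields must be searchable, and must not be filterable,
    # facetable, or sortable; the service rejects those attributes.
    return {
        "name": name,
        "type": "Collection(Edm.Single)",
        "searchable": True,
        "retrievable": retrievable,
        "filterable": False,
        "facetable": False,
        "sortable": False,
        "dimensions": dimensions,
        "vectorSearchProfile": profile,
    }

# One generated embedding per document field; 1536 dimensions matches
# text-embedding-ada-002 output.
field = vector_field("contentVector", 1536, "my-default-vector-profile", retrievable=True)
```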
++You can use the Azure portal, REST APIs, or the beta packages of the Azure SDKs to index vectors. To evaluate the newest vector search behaviors, use the **2023-10-01-Preview** REST API version. ++### [**2023-10-01-Preview**](#tab/rest-2023-10-01-Preview) ++In the following REST API example, "title" and "content" contain textual content used in full text search and semantic search, while "titleVector" and "contentVector" contain vector data. ++> [!TIP] +> Updating an existing index to include vector fields? Make sure the `allowIndexDowntime` query parameter is set to `true` ++1. Use the [Create or Update Index Preview REST API](/rest/api/searchservice/2023-10-01-preview/indexes/create-or-update) to create the index. ++1. Add vector fields to the fields collection. You can store one generated embedding per document field. For each vector field: ++ + Assign the `Collection(Edm.Single)` data type. + + Provide the name of the vector search profile. + + Provide the number of dimensions generated by the embedding model. + + Set attributes: + + "searchable" must be "true". + + "retrievable" set to "true" allows you to display the raw vectors (for example, as a verification step), but doing so increases storage. Set to "false" if you don't need to return raw vectors. You don't need to return vectors for a query, but if you're passing a vector result to a downstream app then set "retrievable" to "true". + + "filterable", "facetable", "sortable" attributes must be "false". Don't set them to "true" because those behaviors don't apply within the context of vector fields and the request will fail. ++1. Add filterable fields to the collection, such as "title" with "filterable" set to true, if you want to invoke prefiltering or postfiltering on the [vector query](vector-search-how-to-query.md). ++1. Add other fields that define the substance and structure of the textual content you're indexing. At a minimum, you need a document key. 
++ You should also add fields that are useful in the query or in its response. The following example shows vector fields for title and content ("titleVector", "contentVector"). It also provides fields for the equivalent textual content ("title", "content"), which are useful for sorting, filtering, and reading in a search result. ++ The following example shows the fields collection: ++ ```http + PUT https://my-search-service.search.windows.net/indexes/my-index?api-version=2023-10-01-Preview&allowIndexDowntime=true + Content-Type: application/json + api-key: {{admin-api-key}} + { + "name": "{{index-name}}", + "fields": [ + { + "name": "id", + "type": "Edm.String", + "key": true, + "filterable": true + }, + { + "name": "title", + "type": "Edm.String", + "searchable": true, + "filterable": true, + "sortable": true, + "retrievable": true + }, + { + "name": "titleVector", + "type": "Collection(Edm.Single)", + "searchable": true, + "retrievable": true, + "dimensions": 1536, + "vectorSearchProfile": "my-default-vector-profile" + }, + { + "name": "content", + "type": "Edm.String", + "searchable": true, + "retrievable": true + }, + { + "name": "contentVector", + "type": "Collection(Edm.Single)", + "searchable": true, + "retrievable": true, + "dimensions": 1536, + "vectorSearchProfile": "my-default-vector-profile" + } + ] + } + ``` ++### [**2023-07-01-Preview**](#tab/rest-add-field) ++REST API version [**2023-07-01-Preview**](/rest/api/searchservice/index-preview) enables vector scenarios. ++In the following REST API example, "title" and "content" contain textual content used in full text search and semantic search, while "titleVector" and "contentVector" contain vector data. ++> [!TIP] +> Updating an existing index to include vector fields? Make sure the `allowIndexDowntime` query parameter is set to `true`. ++1. 
Use the [Create or Update Index Preview REST API](/rest/api/searchservice/preview-api/create-or-update-index) to create the index. 1. Add vector fields to the fields collection. You can store one generated embedding per document field. For each vector field: + Assign the `Collection(Edm.Single)` data type.- + For `Collection(Edm.Single)`, the "filterable", "facetable", "sortable" attributes are "false" by default. Don't set them to "true" because those behaviors don't apply within the context of vector fields and the request will fail. + Provide the name of the vector search algorithm configuration. + Provide the number of dimensions generated by the embedding model.- + "searchable" must be "true". - + "retrievable" set to "true" allows you to display the raw vectors (for example, as a verification step), but doing so increases storage. Set to "false" if you don't need to return raw vectors. You don't need to return vectors for a query, but if you're passing a vector result to a downstream app then set "retrievable" to "true". + + Set attributes: + + "searchable" must be "true". + + "retrievable" set to "true" allows you to display the raw vectors (for example, as a verification step), but doing so increases storage. Set to "false" if you don't need to return raw vectors. You don't need to return vectors for a query, but if you're passing a vector result to a downstream app then set "retrievable" to "true". + + "filterable", "facetable", "sortable" attributes must be "false". Don't set them to "true" because those behaviors don't apply within the context of vector fields and the request will fail. 1. Add other fields that define the substance and structure of the textual content you're indexing. At a minimum, you need a document key. In the following REST API example, "title" and "content" contain textual content } ``` +### [**Azure portal**](#tab/portal-add-field) ++Azure portal supports **2023-07-01-Preview** behaviors. 
++Use the index designer in the Azure portal to add vector field definitions. If the index doesn't have a vector configuration, you're prompted to create one when you add your first vector field to the index. ++Although you can add a field to an index, there's no portal (Import data wizard) support for loading it with vector data. Instead, use the REST APIs or an SDK for data import. ++1. [Sign in to Azure portal](https://portal.azure.com) and open your search service page in a browser. ++1. In the left-side navigation pane, select **Search management** > **Indexes**. ++1. Select **+ Add Index** and give the index a name. ++1. Select **Add Field**: ++ :::image type="content" source="media/vector-search-how-to-create-index/portal-add-field.png" alt-text="Screenshot of the Add Field pane." border="true"::: ++ **Key points**: ++ + Name the field (no spaces). + + Choose type `Collection(Edm.Single)`. + + Select "Retrievable" if you want the query to return the vector data in search results. If you have other fields with human-readable content that you can return as a proxy for the match, you should set "Retrievable" to false to save space. + + "Searchable" is mandatory for a vector field and can't be changed. + + "Dimensions" is the length of the vector returned by the model. Set this value to `1536` for **text-embedding-ada-002**, which numerically describes the input text that you provide using 1536 dimensions. ++1. Select or create a vector configuration used for similarity search. If the index doesn't have a vector configuration, you must select **Create**. ++ :::image type="content" source="media/vector-search-how-to-create-index/portal-add-vector-configuration.png" alt-text="Screenshot of the vector configuration properties." border="true"::: ++ **Key points**: ++ + Name the configuration. The name must be unique within the index. + + "hnsw" is the Approximate Nearest Neighbors (ANN) algorithm used to create the proximity graph during indexing. 
Currently, only Hierarchical Navigable Small World (HNSW) is supported. + + "Bi-directional link count" default is 4. The range is 4 to 10. Lower values should return less noise in the results. + + "efConstruction" default is 400. The range is 100 to 1,000. It's the number of nearest neighbors used during indexing. + + "efSearch" default is 500. The range is 100 to 1,000. It's the number of nearest neighbors used during search. + + "Similarity metric" should be "cosine" if you're using Azure OpenAI, otherwise use the similarity metric associated with the embedding model you're using. Supported values are `cosine`, `dotProduct`, `euclidean`. ++ If you're familiar with HNSW parameters, you might wonder how to set the `"k"` number of nearest neighbors to return in the result. In Cognitive Search, that value is set on the [query request](vector-search-how-to-query.md). ++1. Select **Save** to save the vector configuration and the field definition. + ### [**.NET**](#tab/dotnet-add-field) + Use the [**Azure.Search.Documents 11.5.0-beta.4**](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0-beta.4) package for vector scenarios. You can use either [push or pull methodologies](search-what-is-data-import.md) f ### [**Push APIs**](#tab/push) -Use the [Add, Update, or Delete Documents Preview REST API](/rest/api/searchservice/preview-api/add-update-delete-documents) to push documents containing vector data. +Use the [Add, Update, or Delete Documents (2023-07-01-Preview)](/rest/api/searchservice/preview-api/add-update-delete-documents) or [Index Documents (2023-10-01-Preview)](/rest/api/searchservice/2023-10-01-preview/documents/) REST APIs to push documents containing vector data. ```http POST https://my-search-service.search.windows.net/indexes/my-index/docs/index?api-version=2023-07-01-Preview |
search | Vector Search How To Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md | -> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme). +> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST APIs, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme). In Azure Cognitive Search, if you added vector fields to a search index, this article explains how to: > [!div class="checklist"]-> + [Query vector fields](#vector-query-request). +> + [Query vector fields](#vector-query-request) > + [Filter a vector query](#vector-query-with-filter)-> + [Query multiple vector fields at once](#multiple-vector-fields). -> + [Run multiple vector queries in parallel](#multiple-vector-queries). +> + [Query multiple vector fields at once](#multiple-vector-fields) Code samples in the [cognitive-search-vector-pr](https://github.com/Azure/cognitive-search-vector-pr) repository demonstrate end-to-end workflows that include schema definition, vectorization, indexing, and queries. ## Prerequisites -+ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For services created prior to January 2019, there is a small subset which won't support vector search. If an index containing vector fields fails to be created or updated, this is an indicator. In this situation, a new service must be created. ++ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. 
For services created prior to January 2019, there's a small subset that won't support vector search. If an index containing vector fields fails to be created or updated, that's an indicator that the service doesn't support vector search. In this situation, you must create a new service. + A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-create-index.md). -+ Use REST API version **2023-07-01-Preview**, the [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr/tree/main), or Search Explorer in the Azure portal. ++ Use REST API version **2023-10-01-Preview** if you want pre-filters. Otherwise, you can use **2023-07-01-Preview**, the [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr/tree/main), or Search Explorer in the Azure portal. ## Limitations The actual response for this POST call to the deployment model includes 1536 emb You can use the Azure portal, REST APIs, or the beta packages of the Azure SDKs to query vectors. -### [**Azure portal**](#tab/portal-vector-query) --Be sure to the **JSON view** and formulate the query in JSON. The search bar in **Query view** is for full text search and will treat any vector input as plain text. +### [**2023-10-01-Preview**](#tab/query-2023-10-01-Preview) -1. Sign in to Azure portal and find your search service. --1. Under **Search management** and **Indexes**, select the index. +REST API version [**2023-10-01-Preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) introduces breaking changes to the vector query definition in [Search Documents](/rest/api/searchservice/2023-10-01-preview/documents/search-post). This version adds: - :::image type="content" source="media/vector-search-how-to-query/select-index.png" alt-text="Screenshot of the indexes menu." border="true"::: --1. On Search Explorer, under **View**, select **JSON view**. 
-- :::image type="content" source="media/vector-search-how-to-query/select-json-view.png" alt-text="Screenshot of the index list." border="true"::: ++ `vectorQueries` for specifying a vector to search for, vector fields to search in, and the k-number of nearest neighbors to return.++ `kind` is a parameter of `vectorQueries` and it can only be set to `vector` in this preview.++ `exhaustive` can be set to true or false, and invokes exhaustive KNN at query time. -1. By default, the search API is **2023-07-01-Preview**. This is the correct API version for vector search. +In the following example, the vector is a representation of this query string: `"what Azure services support full text search"`. The query targets the "contentVector" field. The actual vector has 1536 embeddings, so it's trimmed in this example for readability. -1. Paste in a JSON vector query, and then select **Search**. You can use the REST example as a template for your JSON query. +```http +POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-10-01-Preview +Content-Type: application/json +api-key: {{admin-api-key}} +{ + "count": true, + "select": "title, content, category", + "vectorQueries": [ + { + "kind": "vector", + "vector": [ + -0.009154141, + 0.018708462, + . . . + -0.02178128, + -0.00086512347 + ], + "exhaustive": true, + "fields": "contentVector", + "k": 5 + } + ] +} +``` - :::image type="content" source="media/vector-search-how-to-query/paste-vector-query.png" alt-text="Screenshot of the JSON query." border="true"::: +### [**2023-07-01-Preview**](#tab/query-vector-query) -### [**REST API**](#tab/rest-vector-query) +REST API version [**2023-07-01-Preview**](/rest/api/searchservice/index-preview) introduces vector query support to [Search Documents](/rest/api/searchservice/preview-api/search-documents). 
This version adds: -In this single vector query, which is shortened for brevity, the "value" contains the vectorized text of the query input, "fields" determines which vector fields are searched, and "k" specifies the number of nearest neighbors to return. ++ `vectors` for specifying a vector to search for, vector fields to search in, and the k-number of nearest neighbors to return. In the following example, the vector is a representation of this query string: `"what Azure services support full text search"`. The query targets the "contentVector" field. The actual vector has 1536 embeddings. It's trimmed in this example for readability. ```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-07-01-Preview Content-Type: application/json api-key: {{admin-api-key}} { Here's a modified example so that you can see the basic structure of a response + It shows a **`@search.score`** that's determined by the HNSW algorithm and a `cosine` similarity metric. + Fields include text and vector values. The content vector field consists of 1536 dimensions for each match, so it's truncated for brevity (normally, you might exclude vector fields from results). The text fields used in the response (`"select": "title, category"`) aren't used during query execution. The match is made on vector data alone. However, a response can include any "retrievable" field in an index. As such, the inclusion of text fields is helpful because their values are easily recognized by users. +### [**Azure portal**](#tab/portal-vector-query) ++Azure portal supports **2023-07-01-Preview** behaviors. ++Be sure to use the **JSON view** and formulate the query in JSON. The search bar in **Query view** is for full text search and will treat any vector input as plain text. ++1. Sign in to Azure portal and find your search service. ++1. 
Under **Search management** and **Indexes**, select the index. ++ :::image type="content" source="media/vector-search-how-to-query/select-index.png" alt-text="Screenshot of the indexes menu." border="true"::: ++1. On Search Explorer, under **View**, select **JSON view**. ++ :::image type="content" source="media/vector-search-how-to-query/select-json-view.png" alt-text="Screenshot of the index list." border="true"::: ++1. By default, the search API is **2023-07-01-Preview**. This is the correct API version for vector search. ++1. Paste in a JSON vector query, and then select **Search**. You can use the REST example as a template for your JSON query. ++ :::image type="content" source="media/vector-search-how-to-query/paste-vector-query.png" alt-text="Screenshot of the JSON query." border="true"::: + ### [**.NET**](#tab/dotnet-vector-query) + Use the [**Azure.Search.Documents 11.5.0-beta.4**](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0-beta.4) package for vector scenarios. A query request can include a vector query and a [filter expression](search-filt In contrast with full text search, a filter in a pure vector query is effectively processed as a post-query operation. The set of `"k"` nearest neighbors is retrieved, and then combined with the set of filtered results. As such, the value of `"k"` predetermines the surface over which the filter is applied. For `"k": 10`, the filter is applied to 10 most similar documents. For `"k": 100`, the filter iterates over 100 documents (assuming the index contains 100 documents that are sufficiently similar to the query). -Here's an example of filter expressions combined with a vector query: +> [!TIP] +> If you don't have source fields with text or numeric values, check for document metadata, such as LastModified or CreatedBy properties, that might be useful in a metadata filter. 
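The effect of `k` on filtering can be simulated with toy data. In this illustrative Python sketch (the scores are invented; this isn't the service's implementation), postfiltering can return fewer than `k` documents because the filter applies only to the `k` nearest matches, while prefiltering draws `k` results from the already-filtered set:

```python
def top_k(docs, k):
    # docs: list of (id, score, category) tuples; higher score = more similar.
    return sorted(docs, key=lambda d: d[1], reverse=True)[:k]

def post_filter(docs, k, category):
    # Retrieve the k nearest first, then filter: may yield fewer than k results.
    return [d for d in top_k(docs, k) if d[2] == category]

def pre_filter(docs, k, category):
    # Filter first, then take the k nearest within the filtered set.
    return top_k([d for d in docs if d[2] == category], k)

docs = [
    ("a", 0.95, "Databases"),
    ("b", 0.90, "Compute"),
    ("c", 0.85, "Compute"),
    ("d", 0.80, "Databases"),
    ("e", 0.75, "Databases"),
]

# With k=3, postfiltering on "Databases" keeps only matches among the top 3.
post = post_filter(docs, 3, "Databases")
pre = pre_filter(docs, 3, "Databases")
```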
++### [**2023-10-01-Preview**](#tab/filter-2023-10-01-Preview) ++REST API version [**2023-10-01-Preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) introduces filter options. This version adds: +++ `vectorFilterMode` for prefiltering (default) or postfiltering during query execution.++ `filter` provides the criteria, which is applied to a filterable text field ("category" in this example).++In the following example, the vector is a representation of this query string: `"what Azure services support full text search"`. The query targets the "contentVector" field. The actual vector has 1536 embeddings, so it's trimmed in this example for readability. ++The filter criteria are applied before the search engine executes the vector query. ++```http +POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-10-01-Preview +Content-Type: application/json +api-key: {{admin-api-key}} +{ + "count": true, + "select": "title, content, category", + "filter": "category eq 'Databases'", + "vectorFilterMode": "preFilter", + "vectorQueries": [ + { + "kind": "vector", + "vector": [ + -0.009154141, + 0.018708462, + . . . + -0.02178128, + -0.00086512347 + ], + "exhaustive": true, + "fields": "contentVector", + "k": 5 + } + ] +} +``` ++### [**2023-07-01-Preview**](#tab/filter-2023-07-01-Preview) ++REST API version [**2023-07-01-Preview**](/rest/api/searchservice/index-preview) supports post-filtering over query results. ++In the following example, the vector is a representation of this query string: `"what Azure services support full text search"`. The query targets the "contentVector" field. The actual vector has 1536 embeddings, so it's trimmed in this example for readability. ++The filter criteria are applied after the search engine executes the vector query. 
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-07-01-Preview api-key: {{admin-api-key}} } ``` -> [!TIP] -> If you don't have source fields with text or numeric values, check for document metadata, such as LastModified or CreatedBy properties, that might be useful in a filter. + ## Multiple vector fields |
search | Vector Search Index Size | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-index-size.md | The storage size of one vector is determined by its dimensionality. Multiply the For `Edm.Single`, the size of the data type is 4 bytes. -### Overhead from the selected algorithm --Each approximate-nearest-neighbor algorithm creates other data structures in memory to enable efficient searching. These consume extra space within memory. --**For HNSW algorithm, this overhead is between 5% to 20%.** --Overhead is lower for higher dimensions because the raw size of the vectors is larger, but the extra data structures remain a fixed size since they store information on connectivity within the graph. As a result, the contribution of the extra data structures makes up a smaller portion of the overall size. --Overhead is higher for larger values of the HNSW parameter `m`, which sets the number of bi-directional links created for every new vector during index construction. (The reason is because _m_ contributes roughly _m times 8 to 10_ bytes per document.) --Based on internal tests, a model with _m=4_ and _dims=96_ has an overhead of ~17%, and a model with _m=4_ and _dims=768_ has an overhead of ~5%. +### Memory overhead from the selected algorithm + +Every approximate nearest neighbor (ANN) algorithm generates additional data structures in memory to enable efficient searching. These structures consume extra space within memory. + +**For the HNSW algorithm, the memory overhead ranges between 1% and 20%.** + +The memory overhead is lower for higher dimensions because the raw size of the vectors increases, while the extra data structures remain a fixed size since they store information on the connectivity within the graph. Consequently, the contribution of the extra data structures constitutes a smaller portion of the overall size. 
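The sizing guidance above can be turned into a rough estimate. This Python sketch (illustrative only) combines raw vector storage, at 4 bytes per `Edm.Single` dimension, with an assumed algorithm overhead percentage:

```python
def estimate_vector_index_bytes(num_vectors, dimensions, overhead_pct):
    # Raw storage: 4 bytes per Edm.Single component, per vector.
    raw = num_vectors * dimensions * 4
    # Apply the algorithm overhead (for HNSW, roughly 1% to 20% depending on
    # dimensionality and the m parameter; the value here is an assumption).
    return raw * (100 + overhead_pct) // 100

# 1 million 1,536-dimension embeddings with a 1% HNSW overhead:
size = estimate_vector_index_bytes(1_000_000, 1536, 1)
print(f"{size / 2**30:.2f} GiB")
```

Integer arithmetic keeps the estimate exact; for capacity planning you'd also account for deleted-document overhead, which this sketch ignores.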
+ +The memory overhead is higher for larger values of the HNSW parameter `m`, which determines the number of bi-directional links created for every new vector during index construction. This is because `m` contributes approximately 8 to 10 bytes per document multiplied by `m`. + +The following table summarizes the overhead percentages observed in internal tests: + +| Dimensions | HNSW parameter (m) | Overhead percentage | +|-|--|-| +| 96 | 4 | 20% | +| 200 | 4 | 8% | +| 768 | 4 | 2% | +| 1536 | 4 | 1% | ++These results demonstrate the relationship between dimensions, HNSW parameter `m`, and memory overhead for the HNSW algorithm. ### Overhead from deleting or updating documents within the index |
search | Vector Search Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md | Last updated 09/27/2023 # Vector search in Azure Cognitive Search > [!IMPORTANT]-> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, [**2023-07-01-Preview REST APIs**](/rest/api/searchservice/index-preview), and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme). +> Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, [**2023-10-01-Preview REST APIs**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview), [**2023-07-01-Preview REST APIs**](/rest/api/searchservice/index-preview), and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme). Vector search is an approach in information retrieval that uses numeric representations of content for search scenarios. Because the content is numeric rather than plain text, the search engine matches on vectors that are the most similar to the query, with no requirement for matching on exact terms. Scenarios for vector search include: + [**Hybrid search**](hybrid-search-overview.md). Vector search is implemented at the field level, which means you can build queries that include both vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic search (preview)](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing. -+ **Filtered vector search**. A query request can include a vector query and a [filter expression](search-filters.md). 
Filters apply to text and numeric fields, and are useful for including or excluding search documents based on filter criteria. Although a vector field isn't filterable itself, you can set up a filterable text or numeric field. The search engine processes the filter after the vector query executes, trimming search results from query response. ++ **Filtered vector search**. A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to text and numeric fields, and are useful for metadata filters and for including or excluding search documents based on filter criteria. Although a vector field isn't filterable itself, you can set up a filterable text or numeric field. The search engine can process the filter before or after the vector query executes. + **Vector database**. Use Cognitive Search as a vector store to serve as long-term memory or an external knowledge base for Large Language Models (LLMs), or other applications. For example, you can use Azure Cognitive Search as a [*vector index* in an Azure Machine Learning prompt flow](/azure/machine-learning/concept-vector-stores) for Retrieval Augmented Generation (RAG) applications. Popular vector similarity metrics include the following, which are all supported Approximate Nearest Neighbor search (ANN) is a class of algorithms for finding matches in vector space. This class of algorithms employs different data structures or data partitioning methods to significantly reduce the search space to accelerate query processing. The specific approach depends on the algorithm. While this approach sacrifices some accuracy, these algorithms offer scalable and faster retrieval of approximate nearest neighbors, which makes them ideal for balancing accuracy and efficiency in modern information retrieval applications. You can adjust the parameters of your algorithm to fine-tune the recall, latency, memory, and disk footprint requirements of your search application. 
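For intuition about the `cosine` metric mentioned earlier, here's a minimal Python implementation of cosine similarity (the service computes this internally; this sketch is for illustration only):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity measures the angle between two vectors:
    # 1.0 = same direction, 0.0 = orthogonal, -1.0 = opposite.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```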
-Azure Cognitive Search uses Hierarchical Navigable Small Worlds (HNSW), which is a leading ANN algorithm optimized for high-recall, low-latency applications where data distribution is unknown or can change frequently. +Azure Cognitive Search uses Hierarchical Navigable Small Worlds (HNSW), which is a leading ANN algorithm optimized for high-recall, low-latency applications where data distribution is unknown or can change frequently. REST API 2023-10-01-Preview adds support for Exhaustive K-Nearest Neighbors (eKNN), which calculates the distance between the query vector and all data points. > [!NOTE] > Finding the true set of [_k_ nearest neighbors](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) requires comparing the input vector exhaustively against all vectors in the dataset. While each vector similarity calculation is relatively fast, performing these exhaustive comparisons across large datasets is computationally expensive and slow due to the sheer number of comparisons. For example, if a dataset contains 10 million 1,000-dimensional vectors, computing the distance between the query vector and all vectors in the dataset would require scanning 37 GB of data (assuming single-precision floating point vectors) and a high number of similarity calculations. |
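The arithmetic behind that note can be checked directly (assuming 4-byte single-precision components):

```python
num_vectors = 10_000_000
dimensions = 1_000
bytes_per_component = 4  # single-precision float

total_bytes = num_vectors * dimensions * bytes_per_component
print(f"{total_bytes / 2**30:.1f} GiB")  # about 37.3 GiB
```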
search | Vector Search Ranking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-ranking.md | This article is for developers who need a deeper understanding of relevance scor ## Scoring algorithms used in vector search -Hierarchical Navigable Small World (HNSW) is an algorithm used for efficient [approximate nearest neighbor (ANN)](vector-search-overview.md#approximate-nearest-neighbors) search in high-dimensional spaces. It organizes data points into a hierarchical graph structure that enables fast neighbor queries by navigating through the graph while maintaining a balance between search accuracy and computational efficiency. +Azure Cognitive Search provides the following scoring algorithms for vector search: -HNSW has several configuration parameters that can be tuned to achieve the throughput, latency, and recall objectives for your search application. You can create multiple configurations if you need optimizations for specific scenarios, but only one configuration can be specified on each vector field. +| Algorithm | Usage | Range | +|--|-|-| +|`exhaustiveKnn` | Calculates the distances between all pairs of data points. | Metric dependent, usually 0 < 1.00 | +| `hnsw` | Creates proximity graphs for organizing and querying vector content. | Metric dependent, usually 0 < 1.00. | ++Vector search algorithms are specified in a search index, and then specified on the field definition (also in the index): +++ [Create a vector index](vector-search-how-to-create-index.md)++Because many algorithm configuration parameters are used to initialize the vector index during index creation, they're immutable parameters and can't be changed once the index is built. However, there's a subset of parameters that can be modified in a [query request](vector-search-how-to-query.md). ++Each algorithm has different memory requirements, which affect [vector index size](vector-search-index-size.md), predicated on memory usage. 
When evaluating algorithms, remember: +++ `hnsw`, which accesses proximity graphs stored in memory, adds overhead to vector index size.++ `exhaustiveKnn` doesn't load the entire vector index into memory. As such, it has no vector index size overhead, meaning it doesn't contribute to index size. -Vector search algorithms are specified in the json path `vectorSearch.algorithmConfigurations` in a search index, and then specified on the field definition (also in the index): +<a name="eknn"></a> -- [Create a vector index](vector-search-how-to-create-index.md)+### Exhaustive K-Nearest Neighbors (KNN) -Because many algorithm configuration parameters are used to initialize the vector index during index creation, they're immutable parameters and can't be changed once the index is built. There's a subset of query-time parameters that may be modified. +Exhaustive KNN support is available through [2023-10-01-Preview REST API](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) and it enables users to search the entire vector space for matches that are most similar to the query. This algorithm is intended for scenarios where high recall is of utmost importance, and users are willing to accept the trade-offs in search performance. ++Exhaustive KNN performs a brute-force search by calculating the distances between all pairs of data points. It guarantees finding the exact `k` nearest neighbors for a query point. Because it's computationally intensive, use Exhaustive KNN for small to medium datasets, or when precision requirements outweigh query performance considerations. ++### Hierarchical Navigable Small World (HNSW) ++HNSW is an algorithm used for efficient [approximate nearest neighbor (ANN)](vector-search-overview.md#approximate-nearest-neighbors) search in high-dimensional spaces. 
It organizes data points into a hierarchical graph structure that enables fast neighbor queries by navigating through the graph while maintaining a balance between search accuracy and computational efficiency. ++HNSW has several configuration parameters that can be tuned to achieve the throughput, latency, and recall objectives for your search application. You can create multiple configurations if you need optimizations for specific scenarios, but only one configuration can be specified on each vector field. ## How HNSW ranking works The goal of indexing a new vector into an HNSW graph is to add it to the graph s - These connections use the configured similarity `metric` to determine distance. Some connections are "long-distance" connections that connect across different hierarchical levels, creating shortcuts in the graph that enhance search efficiency. -1. Graph pruning and optimization: This may be performed after indexing all vectors to improve navigability and efficiency of the HNSW graph. +1. Graph pruning and optimization: This can happen after indexing all vectors, and it improves navigability and efficiency of the HNSW graph. ### Retrieving vectors with the HNSW algorithm The following table identifies the scoring property returned on each match, algo | Search method | Parameter | Scoring algorithm | Range | ||--|-|-|-| vector search | `@search.score` | HNSW algorithm, using the similarity metric specified in the HNSW configuration. | 0.333 - 1.00 (Cosine) | +| vector search | `@search.score` | HNSW or KNN algorithm, using the similarity metric specified in the algorithm configuration. | 0.333 - 1.00 (Cosine) | ## Number of ranked results in a vector query response |
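As a sketch of how the two algorithms described above might be declared in a 2023-10-01-Preview index definition: the configuration names are arbitrary, and the `hnswParameters` property names and values shown here (`m`, `efConstruction`, `efSearch`) are illustrative assumptions rather than recommended settings:

```json
"vectorSearch": {
  "algorithms": [
    {
      "name": "my-hnsw-config",
      "kind": "hnsw",
      "hnswParameters": {
        "m": 4,
        "efConstruction": 400,
        "efSearch": 500,
        "metric": "cosine"
      }
    },
    {
      "name": "my-eknn-config",
      "kind": "exhaustiveKnn",
      "exhaustiveKnnParameters": {
        "metric": "cosine"
      }
    }
  ]
}
```

Each vector field in the index then references exactly one of these named configurations, which is why the graph-construction parameters become immutable once the index is built.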
search | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md | Learn about the latest updates to Azure Cognitive Search functionality, docs, an > [!NOTE] > Looking for preview features? Previews are announced here, but we also maintain a [preview features list](search-api-preview.md) so you can find them in one place. +## October 2023 ++| Item | Type | Description | +|--||--| +| [**Exhaustive K-Nearest Neighbors (KNN)**](vector-search-ranking.md#eknn) | Feature | Exhaustive K-Nearest Neighbors (KNN) is a new scoring algorithm for similarity search in vector space. It performs an exhaustive search for the nearest neighbors, useful for situations where high recall is more important than query performance. Available in the 2023-10-01-Preview REST API only. | +| [**Prefilters in vector search**](vector-search-how-to-query.md) | Feature | Evaluates filter criteria before query execution, reducing the amount of content that needs to be searched. Available in the 2023-10-01-Preview REST API only, through a new `vectorFilterMode` property on the query that can be set to `preFilter` (default) or `postFilter`, depending on your requirements. | +| [**2023-10-01-Preview Search REST API**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) | API | New preview version of the Search REST APIs that changes the definition for [vector fields](vector-search-how-to-create-index.md) and [vector queries](vector-search-how-to-query.md). This API version introduces breaking changes from **2023-07-01-Preview**, otherwise it's inclusive of all previous preview features. We recommend [creating new indexes](vector-search-how-to-create-index.md) for **2023-10-01-Preview**. You might encounter an HTTP 400 on some features on a migrated index, even if you migrated correctly.| + ## August 2023 | Item | Type | Description | |
spring-apps | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md | The application code used in this tutorial is a simple app. When you've complete ::: zone-end ++This article provides the following options for deploying to Azure Spring Apps: ++- The Azure portal is the easiest and fastest way to create resources and deploy applications with a single click. This option is suitable for Spring developers who want to quickly deploy applications to Azure cloud services. +- The Azure CLI is a powerful command line tool to manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services. +- IntelliJ is a powerful Java IDE to easily manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services and IntelliJ IDEA. +- Visual Studio Code is a lightweight but powerful source code editor, which can easily manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services and Visual Studio Code. ++ ## 1. Prerequisites ::: zone pivot="sc-consumption-plan,sc-standard" ### [Azure portal](#tab/Azure-portal) +- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. ++### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin) + - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. The application code used in this tutorial is a simple app. When you've complete ::: zone pivot="sc-enterprise" -## [Azure CLI](#tab/Azure-CLI) +### [Azure portal](#tab/Azure-portal-ent) ++- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. 
++### [Azure CLI](#tab/Azure-CLI) - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). The application code used in this tutorial is a simple app. When you've complete - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. -## [IntelliJ](#tab/IntelliJ) +### [IntelliJ](#tab/IntelliJ) - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). The application code used in this tutorial is a simple app. When you've complete - [IntelliJ IDEA](https://www.jetbrains.com/idea/). - [Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit). -## [Visual Studio Code](#tab/visual-studio-code) +### [Visual Studio Code](#tab/visual-studio-code) - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). The application code used in this tutorial is a simple app. When you've complete ## 5. 
Validate the app -After deployment, you can access the app at `https://<your-Azure-Spring-Apps-instance-name>-demo.azuremicroservices.io`. When you open the app, you get the response `Hello World`. +This section describes how to validate your application. ++### [Azure portal](#tab/Azure-portal) ++After the deployment finishes, find the application URL from the deployment outputs. Use the following steps to validate: +++1. Access the application URL. When you open the app, you get the response `Hello World`. ++1. Check the details for each resource deployment, which are useful for investigating any deployment issues. ++### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin) ++After the deployment finishes, access the application with the output application URL. Use the following steps to check the app's logs to investigate any deployment issue: ++1. Access the output application URL. When you open the app, you get the response `Hello World`. ++1. From the navigation pane of the Azure Spring Apps instance **Overview** page, select **Logs** to check the app's logs. ++ :::image type="content" source="media/quickstart/logs.png" alt-text="Screenshot of the Azure portal that shows the Azure Spring Apps Logs page." lightbox="media/quickstart/logs.png"::: ++### [Azure Developer CLI](#tab/Azure-Developer-CLI) ++After the deployment finishes, access the application with the output endpoint. When you open the app, you get the response `Hello World`. ++++++### [Azure portal](#tab/Azure-portal) ++After the deployment finishes, use the following steps to find the application URL from the deployment outputs: +++1. Access the application URL. When you open the app, you get the response `Hello World`. -From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs. +1. Check the details for each resource deployment, which are useful for investigating any deployment issues. 
++### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin) ++After the deployment finishes, use the following steps to check the app's logs to investigate any deployment issue: ++1. Access the application with the output application URL. When you open the app, you get the response `Hello World`. ++1. From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs. ++ :::image type="content" source="media/quickstart/logs.png" alt-text="Screenshot of the Azure portal that shows the Azure Spring Apps Logs page." lightbox="media/quickstart/logs.png"::: ++### [Azure Developer CLI](#tab/Azure-Developer-CLI) +After the deployment finishes, access the application with the output endpoint. When you open the app, you get the response `Hello World`. ++ ::: zone-end ::: zone pivot="sc-enterprise" -## [Azure CLI](#tab/Azure-CLI) +### [Azure portal](#tab/Azure-portal-ent) ++After the deployment finishes, use the following steps to find the application URL from the deployment outputs: +++1. Access the application URL. When you open the app, you get the response `Hello World`. ++1. Check the details for each resource deployment, which are useful for investigating any deployment issues. ++### [Azure CLI](#tab/Azure-CLI) -Use the following command to check the app's log to investigate any deployment issue: +After the deployment finishes, use the following steps to check the app's logs to investigate any deployment issue: -```azurecli -az spring app logs \ - --service ${SERVICE_NAME} \ - --name ${APP_NAME} -``` +1. Access the application with the output application URL. When you open the app, you get the response `Hello World`. -## [IntelliJ](#tab/IntelliJ) +1. 
Use the following command to check the app's log to investigate any deployment issue: ++ ```azurecli + az spring app logs \ + --service ${SERVICE_NAME} \ + --name ${APP_NAME} + ``` ++### [IntelliJ](#tab/IntelliJ) Use the following steps to stream your application logs: -1. Open the **Azure Explorer** window, expand the node **Azure**, expand the service node **Azure Spring Apps**, expand the Azure Spring Apps instance you created, and then select the *demo* instance of the app you created. -2. Right-click and select **Start Streaming Logs**, then select **OK** to see real-time application logs. +1. Access the application with the output application URL. When you open the app, you get the response `Hello World`. ++1. Open the **Azure Explorer** window, expand the node **Azure**, expand the service node **Azure Spring Apps**, expand the Azure Spring Apps instance you created, and then select the **demo** instance of the app you created. ++1. Right-click and select **Start Streaming Logs**, then select **OK** to see real-time application logs. :::image type="content" source="media/quickstart/app-stream-log.png" alt-text="Screenshot of IntelliJ that shows the Azure Streaming Log." lightbox="media/quickstart/app-stream-log.png"::: -## [Visual Studio Code](#tab/visual-studio-code) +### [Visual Studio Code](#tab/visual-studio-code) ++Use the following steps to stream your application logs: ++1. Access the application with the output application URL. When you open the app, you get the response `Hello World`. -To stream your application logs, follow the steps in the [Stream your application logs](https://code.visualstudio.com/docs/java/java-spring-apps#_stream-your-application-logs) section of [Java on Azure Spring Apps](https://code.visualstudio.com/docs/java/java-spring-apps). +1. 
Follow the steps in the [Stream your application logs](https://code.visualstudio.com/docs/java/java-spring-apps#_stream-your-application-logs) section of [Java on Azure Spring Apps](https://code.visualstudio.com/docs/java/java-spring-apps). To stream your application logs, follow the steps in the [Stream your applicatio > [Use managed identities for applications in Azure Spring Apps](./how-to-use-managed-identities.md) > [!div class="nextstepaction"]-> [Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md) +> [Quickstart: Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md) ::: zone pivot="sc-standard, sc-consumption-plan" To stream your application logs, follow the steps in the [Stream your applicatio For more information, see the following articles: - [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).-- [Spring on Azure](/azure/developer/java/spring/)-- [Spring Cloud Azure](/azure/developer/java/spring-framework/)+- [Azure for Spring developers](/azure/developer/java/spring/) +- [Spring Cloud Azure documentation](/azure/developer/java/spring-framework/) |
spring-apps | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/whats-new.md | The following updates are now available in the Enterprise plan: - **API Portal supports single sign-on with multiple replicas**: This update removes the restriction that prevents you from getting better reliability by configuring multiple replicas of your API Portal instance when single sign-on is enabled. For more information, see the [Configure single sign-on (SSO)](how-to-use-enterprise-api-portal.md#configure-single-sign-on-sso) section of [Use API portal for VMware Tanzu](how-to-use-enterprise-api-portal.md). -- **Accelerator supports Git repositories in Azure DevOps**: Application Accelerator maintains ready-made, enterprise-conformant code and configurations in Git repositories. Now, Application Accelerator enables loading accelerators directly from Git repositories hosted in Azure DevOps. For more information, see the [Manage your own accelerators](how-to-use-accelerator.md#manage-your-own-accelerators) section of [Use VMware Tanzu Application Accelerator with the Azure Spring Apps Enterprise plan](how-to-use-accelerator.md).+- **App Accelerator supports Git repositories in Azure DevOps**: Application Accelerator maintains ready-made, enterprise-conformant code and configurations in Git repositories. Now, Application Accelerator enables loading accelerators directly from Git repositories hosted in Azure DevOps. For more information, see the [Manage your own accelerators](how-to-use-accelerator.md#manage-your-own-accelerators) section of [Use VMware Tanzu Application Accelerator with the Azure Spring Apps Enterprise plan](how-to-use-accelerator.md). -- **Accelerator supports fragments and sub paths**: Application Accelerator supports fragments, enabling the efficient reuse of sections within an accelerator. This functionality saves you effort when you add new accelerators. 
For more information, see the [Reference a fragment in your own accelerators](how-to-use-accelerator.md#reference-a-fragment-in-your-own-accelerators) section of [Use VMware Tanzu Application Accelerator with the Azure Spring Apps Enterprise plan](how-to-use-accelerator.md).+- **App Accelerator supports fragments and sub paths**: Application Accelerator supports fragments, enabling the efficient reuse of sections within an accelerator. This functionality saves you effort when you add new accelerators. For more information, see the [Reference a fragment in your own accelerators](how-to-use-accelerator.md#reference-a-fragment-in-your-own-accelerators) section of [Use VMware Tanzu Application Accelerator with the Azure Spring Apps Enterprise plan](how-to-use-accelerator.md). -- **Native image support**: Native images generally have smaller memory footprints and quicker startup times when compared to their JVM counterparts. With this feature, you can deploy Spring Boot native image applications using the `java-native-image` buildpack. For more information, see the [Deploy Java Native Image applications](how-to-enterprise-deploy-polyglot-apps.md#deploy-java-native-image-applications-preview) section of [How to deploy polyglot apps in the Azure Spring Apps Enterprise plan](how-to-enterprise-deploy-polyglot-apps.md).+- **Java native image support (preview)**: Native images generally have smaller memory footprints and quicker startup times when compared to their JVM counterparts. With this feature, you can deploy Spring Boot native image applications using the `java-native-image` buildpack. For more information, see the [Deploy Java Native Image applications](how-to-enterprise-deploy-polyglot-apps.md#deploy-java-native-image-applications-preview) section of [How to deploy polyglot apps in the Azure Spring Apps Enterprise plan](how-to-enterprise-deploy-polyglot-apps.md). -- **Support for the PHP Buildpack**: You can use the PHP buildpack with PHP runtimes. 
For more information, see the [Deploy PHP applications](how-to-enterprise-deploy-polyglot-apps.md#deploy-php-applications) section of [How to deploy polyglot apps in the Azure Spring Apps Enterprise plan](how-to-enterprise-deploy-polyglot-apps.md).+- **Support for the PHP Buildpack**: You can deploy PHP apps directly from source code and receive continuous maintenance (CVE fixes) for the automatically built images. For more information, see the [Deploy PHP applications](how-to-enterprise-deploy-polyglot-apps.md#deploy-php-applications) section of [How to deploy polyglot apps in the Azure Spring Apps Enterprise plan](how-to-enterprise-deploy-polyglot-apps.md). - **New Relic APM support for .NET apps**: New Relic is a software analytics tool suite to measure and monitor performance bottlenecks, throughput, service health, and more. This update enables you to bind your .NET application with New Relic Application Performance Monitoring (APM). For more information, see the [Supported APM types](how-to-enterprise-configure-apm-integration-and-ca-certificates.md#supported-apm-types) section of [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-integration-and-ca-certificates.md). |
static-web-apps | Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/monitor.md | The following table highlights a few locations in the portal you can use to insp | | | | | [Failures](../azure-monitor/app/asp-net-exceptions.md) | _Investigate > Failures_ | Review failed requests. | | [Server requests](../azure-monitor/app/tutorial-performance.md) | _Investigate > Performance_ | Review individual API requests. |-| [Logs](../azure-monitor/app/diagnostic-search.md) | _Monitoring > Logs_ | Interact with an editor to query transaction logs. | +| [Logs](../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-search) | _Monitoring > Logs_ | Interact with an editor to query transaction logs. | | [Metrics](../azure-monitor/essentials/app-insights-metrics.md) | _Monitoring > Metrics_ | Interact with a designer to create custom charts using various metrics. | ### Traces |
storage | Storage Ref Azcopy Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-sync.md | Replicates the source location to the destination location. This article provide The last modified times are used for comparison. The file is skipped if the last modified time in the destination is more recent. Alternatively, you can use the `--compare-hash` flag to transfer only files which differ in their MD5 hash. The supported pairs are: - Local <-> Azure Blob / Azure File (either SAS or OAuth authentication can be used)-- Azure Blob <-> Azure Blob (Source must include a SAS or is publicly accessible; either SAS or OAuth authentication can be used for destination)+- Azure Blob <-> Azure Blob (either SAS or OAuth authentication can be used) - Azure File <-> Azure File (Source must include a SAS or is publicly accessible; SAS authentication should be used for destination) - Azure Blob <-> Azure File |
storage | Storage Use Azcopy Blobs Synchronize | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-synchronize.md | azcopy sync 'https://mystorageaccount.blob.core.windows.net/mycontainer' 'C:\myD ## Update a container with changes in another container -The first container that appears in this command is the source. The second one is the destination. Make sure to append a SAS token to each source URL. +The first container that appears in this command is the source. The second one is the destination. -If you provide authorization credentials by using Microsoft Entra ID, you can omit the SAS token only from the destination URL. Make sure that you've set up the proper roles in your destination account. See [Option 1: Use Microsoft Entra ID](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json#option-1-use-azure-active-directory). +If you provide authorization credentials by using Microsoft Entra ID, make sure that you've set up the proper roles in your source and destination account. See [Option 1: Use Microsoft Entra ID](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json#option-1-use-azure-active-directory). > [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). azcopy sync 'https://mysourceaccount.blob.core.windows.net/mycontainer?sv=2018-0 ## Update a directory with changes to a directory in another container -The first directory that appears in this command is the source. The second one is the destination. Make sure to append a SAS token to each source URL. +The first directory that appears in this command is the source. The second one is the destination. 
-If you provide authorization credentials by using Microsoft Entra ID, you can omit the SAS token only from the destination URL. Make sure that you've set up the proper roles in your destination account. See [Option 1: Use Microsoft Entra ID](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json#option-1-use-azure-active-directory). +If you provide authorization credentials by using Microsoft Entra ID, make sure that you've set up the proper roles in your source and destination account. See [Option 1: Use Microsoft Entra ID](storage-use-azcopy-v10.md?toc=/azure/storage/blobs/toc.json#option-1-use-azure-active-directory). > [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). |
storage | Files Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md | description: Learn about new features and enhancements in Azure Files and Azure Previously updated : 10/12/2023 Last updated : 10/13/2023 Azure Files is updated regularly to offer new features and enhancements. This ar Expanded character support will allow users to create SMB file shares with file and directory names on par with the NTFS file system for all valid Unicode characters. It also enables tools like AzCopy and Storage Mover to migrate all the files into Azure Files using the REST protocol. Expanded character support is now available in all Azure regions. For more information, [read the announcement](https://azure.microsoft.com/updates/azurefilessupportforunicodecharacters/). -Azure File Sync also supports most of the special case valid Unicode characters and control characters except for the trailing dot (.) --For more information on unsupported characters in Azure File Sync, refer to the [documentation](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors#handling-unsupported-characters). - ### 2023 quarter 3 (July, August, September) #### Azure Active Directory support for Azure Files REST API with OAuth authentication is generally available |
stream-analytics | App Insights Export Sql Stream Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/app-insights-export-sql-stream-analytics.md | Continuous export always outputs data to an Azure Storage account, so you need t ![Screenshot of add continuous export then choose event types.](./media/app-insights-export-sql-stream-analytics/085-types.png) -1. Let some data accumulate. Sit back and let people use your application for a while. Telemetry will come in and you'll see statistical charts in [metric explorer](../azure-monitor/essentials/metrics-charts.md) and individual events in [diagnostic search](../azure-monitor/app/diagnostic-search.md). +1. Let some data accumulate. Sit back and let people use your application for a while. Telemetry will come in and you'll see statistical charts in [metric explorer](../azure-monitor/essentials/metrics-charts.md) and individual events in [diagnostic search](../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-search). The data will also export to your storage. 2. Inspect the exported data, either in the portal - choose **Browse**, select your storage account, and then **Containers** - or in Visual Studio. In Visual Studio, choose **View / Cloud Explorer**, and open Azure / Storage. (If you don't have this menu option, you need to install the Azure SDK: Open the New Project dialog and open Visual C# / Cloud / Get Microsoft Azure SDK for .NET.) FROM [dbo].[PageViewsTable] <!--Link references--> -[diagnostic]: ../azure-monitor/app/diagnostic-search.md +[diagnostic]: ../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-search [export]: /previous-versions/azure/azure-monitor/app/export-telemetry [metrics]: ../azure-monitor/essentials/metrics-charts.md [start]: ../azure-monitor/app/app-insights-overview.md |
stream-analytics | App Insights Export Stream Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/app-insights-export-stream-analytics.md | Continuous export always outputs data to an Azure Storage account, so you need t ![Screenshot of add continuous export and choose event types.](./media/app-insights-export-stream-analytics/080.png) -1. Let some data accumulate. Sit back and let people use your application for a while. Telemetry will come in and you'll see statistical charts in [metric explorer](../azure-monitor/essentials/metrics-charts.md) and individual events in [diagnostic search](../azure-monitor/app/diagnostic-search.md). +1. Let some data accumulate. Sit back and let people use your application for a while. Telemetry will come in and you'll see statistical charts in [metric explorer](../azure-monitor/essentials/metrics-charts.md) and individual events in [diagnostic search](../azure-monitor/app/search-and-transaction-diagnostics.md?tabs=transaction-search). The data will also export to your storage. 2. Inspect the exported data. In Visual Studio, choose **View / Cloud Explorer**, and open Azure / Storage. (If you don't have this menu option, you need to install the Azure SDK: Open the New Project dialog and open Visual C# / Cloud / Get Microsoft Azure SDK for .NET.) |
synapse-analytics | Apache Spark R Language | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-r-language.md | To establish a ```sparklyr``` connection, you can use the following connection m ```r spark_version <- "<enter Spark version>" config <- spark_config()-sc <- spark_connect(master = "yarn", version = spark_version, spark_home = "/opt/spark", config = config) +sc <- spark_connect(master = "yarn", version = spark_version, spark_home = "/opt/spark", config = config, method='synapse') ``` ## Next steps -- [Create R Visualizations](./apache-spark-data-visualization.md#r-libraries-preview)+- [Create R Visualizations](./apache-spark-data-visualization.md#r-libraries-preview) |
update-center | Quickstart On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/quickstart-on-demand.md | After the assessment is finished, a confirmation message appears in the upper-ri ## Configure settings -For the assessed machines that are reporting updates, you can configure [periodic assessment](assessment-options.md#periodic-assessment), [hot patching](updates-maintenance-schedules.md#hot-patching),and [patch orchestration](manage-multiple-machines.md#summary-of-machine-status) either immediately or schedule the updates by defining the maintenance window. +For the assessed machines that are reporting updates, you can configure [periodic assessment](assessment-options.md#periodic-assessment), [hotpatching](updates-maintenance-schedules.md#hotpatching), and [patch orchestration](manage-multiple-machines.md#summary-of-machine-status) either immediately or schedule the updates by defining the maintenance window. To configure the settings on your machines: |
update-center | Update Manager Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/update-manager-faq.md | This FAQ is a list of commonly asked questions about Azure Update Manager. If y ### What are the benefits of using Azure Update Manager? -Azure Update Manager provides a SaaS solution to manage and govern software updates to Windows and Linux machines across Azure, on-premises, and multi-cloud environments. +Azure Update Manager provides a SaaS solution to manage and govern software updates to Windows and Linux machines across Azure, on-premises, and multicloud environments. Following are the benefits of using Azure Update -- Oversee update compliance for your entire fleet of machines in Azure (Azure VMs), on premises, and multi-cloud environments (Arc-enabled Servers).+- Oversee update compliance for your entire fleet of machines in Azure (Azure VMs), on premises, and multicloud environments (Arc-enabled Servers). - View and deploy pending updates to secure your machines [instantly](updates-maintenance-schedules.md#update-nowone-time-update). - Manage [extended security updates (ESUs)](https://learn.microsoft.com/azure/azure-arc/servers/prepare-extended-security-updates) for your Azure Arc-enabled Windows Server 2012/2012 R2 machines. Get consistent experience for deployment of ESUs and other updates.-- Define recurring time windows during which your machines receive updates and may undergo reboots using [scheduled patching](scheduled-patching.md). Enforce machines grouped together based on standard Azure constructs (Subscriptions, Location, Resource Group, Tags etc.) to have common patch schedules using [dynamic scoping](dynamic-scope-overview.md). 
Sync patch schedules for Windows machines in relation to patch Tuesday, the unofficial term for month.-- Enable incremental rollout of updates to Azure VMs in off-peak hours using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) and reduce reboots by enabling [hot patching](updates-maintenance-schedules.md#hot-patching).+- Define recurring time windows during which your machines receive updates and might undergo reboots using [scheduled patching](scheduled-patching.md). Enforce machines grouped together based on standard Azure constructs (Subscriptions, Location, Resource Group, Tags etc.) to have common patch schedules using [dynamic scoping](dynamic-scope-overview.md). Sync patch schedules for Windows machines in relation to Patch Tuesday, the unofficial term for Microsoft's monthly release of security updates. +- Enable incremental rollout of updates to Azure VMs in off-peak hours using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) and reduce reboots by enabling [hotpatching](updates-maintenance-schedules.md#hotpatching).
-The Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA) will be [retired in August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). Azure Automation Update management solution relies on this agent and may encounter issues once the agent is retired. It doesn't work with Azure Monitoring Agent (AMA) either. +The Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA) will be [retired in August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). Azure Automation Update management solution relies on this agent and might encounter issues once the agent is retired. It doesn't work with Azure Monitoring Agent (AMA) either. Therefore, if you're using Azure Automation Update management solution, you're encouraged to move to Azure Update Manager for their software update needs. All capabilities of Azure Automation Update Management Solution will be available on Azure Update Manager before the retirement date. Follow the [guidance](guidance-migration-automation-update-management-azure-update-manager.md) to move update management for your machines to Azure Update Manager. |
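The benefits above mention syncing patch schedules to Patch Tuesday, which falls on the second Tuesday of each month. A small illustrative helper to compute that date (not part of any Update Manager API; shown only to make the scheduling concept concrete):

```python
import calendar
from datetime import date

def patch_tuesday(year: int, month: int) -> date:
    """Return Patch Tuesday (the second Tuesday) for the given month."""
    c = calendar.Calendar()
    tuesdays = [d for d in c.itermonthdates(year, month)
                if d.weekday() == calendar.TUESDAY and d.month == month]
    return tuesdays[1]  # index 1 = second Tuesday

print(patch_tuesday(2023, 10))  # 2023-10-10
```

A schedule that should track Patch Tuesday would then anchor its recurring window relative to this date rather than to a fixed day of the month.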
update-center | Updates Maintenance Schedules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md | -# Update options in Azure Update Manager +# Update options and orchestration in Azure Update Manager **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -This article provides an overview of the various update and maintenance options available by Azure Update Manager. -Update Manager provides you with the flexibility to take an immediate action or schedule an update within a defined maintenance window. It also supports new patching methods, such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) and [hot patching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext). +This article provides an overview of the various update options and orchestration in Azure Update Manager. -## Update now/One-time update +## Update Options -Update Manager allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one-time updates](deploy-updates.md#install-updates-on-a-single-vm). +### Automatic OS image upgrade -## Scheduled patching +When you enable [automatic OS image upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) on your [Azure Virtual Machine Scale Set](../virtual-machine-scale-sets/overview.md), Azure Update Manager can safely and automatically upgrade the OS disk for all instances in the scale set. -You can create a schedule on a daily, weekly, or hourly cadence according to your requirement. You can specify the machines that must be updated as part of the schedule and the updates that must be installed. The schedule then automatically installs the updates according to the specifications.
+Automatic OS upgrade has the following characteristics: +- After you configure it, the latest OS image published by the image publishers is automatically applied to the scale set without any user intervention. +- It upgrades batches of instances in a rolling manner every time a new image is published by the publisher. +- Integrates with application health probes and the [Application Health extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md). +- Works for all VM sizes, for both Windows and Linux images, including custom images through the [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md). +- Flexibility to opt out of automatic upgrades at any time. (OS upgrades can be initiated manually as well.) +- The OS disk of a VM is replaced with a new OS disk created with the latest image version. Configured extensions and custom data scripts are run while persisted data disks are retained. +- Supports extension sequencing. +- You can enable it on a scale set of any size. -Update Manager uses a maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control). +> [!NOTE] > We recommend that you check the following: > - Requirements before you enable automatic OS image upgrades > - Supported OS images > - Requirements to support custom images. [Learn more](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) -Start by using [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules. -> [!NOTE] > The patch orchestration property for Azure machines should be set to **Customer Managed Schedules** because it's a prerequisite for scheduled patching. For more information, see the [list of prerequisites](../update-center/scheduled-patching.md#prerequisites-for-scheduled-patching).
+### Automatic VM guest patching ++When you enable [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) on your Azure VMs, Azure Update Manager can safely and automatically patch virtual machines to maintain security compliance. ++Automatic VM guest patching has the following characteristics: +- Patches classified as *Critical* or *Security* are automatically downloaded and applied on the VM. +- Patches are applied during off-peak hours for IaaS VMs in the VM's time zone. +- Patches are applied during all hours for Azure Virtual Machine Scale Sets [VMSS Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration). +- Patch orchestration is managed by Azure and patches are applied following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-updates). +- Virtual machine health, as determined through platform health signals, is monitored to detect patching failures. +- You can monitor application health through the [Application Health Extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md). +- It works for all VM sizes. -## Automatic VM guest patching in Azure +#### Enable VM property -This mode of patching lets the Azure platform automatically download and install all the security and critical updates on your machines every month and apply them on your machines following the availability-first principles. For more information, see [Automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). +To enable the VM property, follow these steps: ++1. On the Azure Update Manager home page, go to **Update Settings**. +1. Select Patch Orchestration as **Azure Managed-Safe Deployment**. ++> [!NOTE] +> We recommend the following: +> - Obtain an understanding of how automatic VM guest patching works.
+> - Check the requirements before you enable Automatic VM guest patching. +> - Check for supported OS images. [Learn more](../virtual-machines/automatic-vm-guest-patching.md) -On the **Azure Update Manager** home page, go to **Update Settings** and select **Patch orchestration** as the **Azure Managed - Safe Deployment** value to enable this VM property. -## Windows automatic updates -This mode of patching allows the operating system to automatically install updates as soon as they're available. It uses the VM property that's enabled by setting the patch orchestration to OS orchestrated/automatic by the OS. +## Hotpatching -## Hot patching +[Hotpatching](https://learn.microsoft.com/windows-server/get-started/hotpatch?context=%2Fazure%2Fvirtual-machines%2Fcontext%2Fcontext) allows you to install OS security updates on supported *Windows Server Datacenter: Azure Edition* virtual machines that don't require a reboot after installation. It works by patching the in-memory code of running processes without the need to restart the process. -Hot patching allows you to install updates on supported Windows Server Azure Edition VMs without requiring a reboot after installation. It reduces the number of reboots required on your mission-critical application workloads running on Windows Server. For more information, see [Hot patch for new virtual machines](../automanage/automanage-hotpatch.md). +Following are the features of Hotpatching: -The hot patching property is available as a setting in Update Manager. You can enable it by using the Update settings flow. For detailed instructions, see [Manage update configuration settings](manage-update-settings.md#configure-settings-on-a-single-vm). +- Fewer binaries mean update install faster and consume less disk and CPU resources. +- Lower workload impact with fewer reboots. +- Better protection, as the hotpatch update packages are scoped to Windows security updates that install faster without rebooting. 
+- Reduces the time exposed to security risks and change windows, and eases patch orchestration with Azure Update Manager. :::image type="content" source="media/updates-maintenance/hot-patch-inline.png" alt-text="Screenshot that shows the Hotpatch option." lightbox="media/updates-maintenance/hot-patch-expanded.png"::: +The hotpatching property is available as a setting in Azure Update Manager that you can enable by using the Update settings flow. For more information, see [Hotpatch for virtual machines and supported platforms](https://learn.microsoft.com/windows-server/get-started/hotpatch). ++## Automatic extension upgrade ++[Automatic Extension Upgrade](../virtual-machines/automatic-extension-upgrade.md) is available for Azure VMs and [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md). When Automatic Extension Upgrade is enabled on a VM or scale set, the extension is upgraded automatically whenever the extension publisher releases a new version for that extension. ++Automatic Extension Upgrade has the following features: ++- It's supported for Azure VMs and Azure Virtual Machine Scale Sets. +- Upgrades are applied on an [availability-first-deployment-model](../virtual-machines/automatic-extension-upgrade.md#availability-first-updates). +- For a Virtual Machine Scale Set, no more than 20% of the scale set virtual machines will be upgraded in a single batch. The minimum batch size is one virtual machine. +- Works for all VM sizes and for both Windows and Linux extensions. +- Can be enabled on Virtual Machine Scale Sets of any size. +- Each supported extension is enrolled individually, and you can choose the extensions to upgrade automatically. +- Supported in all public cloud regions. 
For more information, see [supported extensions and Automatic Extension upgrade](../virtual-machines/automatic-extension-upgrade.md#availability-first-updates). + + ### Windows automatic updates +This mode of patching allows the operating system to automatically install updates on Windows VMs as soon as they're available. It uses the VM property that is enabled by setting the patch orchestration to OS orchestrated/Automatic by OS. ++> [!NOTE] > - Windows automatic updates is not an Azure Update Manager setting but a Windows-level setting. > - Azure Update Manager doesn't support [In-place upgrade for VMs running Windows Server in Azure](../virtual-machines/windows-in-place-upgrade.md). ++## Update or Patch orchestration ++Azure Update Manager provides the flexibility to either install updates immediately or schedule updates within a defined maintenance window. These settings allow you to orchestrate patching for your virtual machine. ++### Update Now/One-time update ++Azure Update Manager allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one-time updates](deploy-updates.md#install-updates-on-a-single-vm). +++### Scheduled patching ++You can create a schedule on a daily, weekly, or hourly cadence as per your requirement, specify the machines that must be updated as part of the schedule, and the updates that you must install. The schedule will then automatically install the updates as per the specifications. ++Azure Update Manager uses a maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control](../virtual-machines/maintenance-configurations.md). ++Use [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules. 
++> [!NOTE] +> The patch orchestration property for Azure machines should be set to **Customer Managed Schedules** as it is a prerequisite for scheduled patching. For more information, see the [list of prerequisites](scheduled-patching.md#prerequisites-for-scheduled-patching). ++> [!IMPORTANT] +> - For a seamless scheduled patching experience, we recommend that you update the patch orchestration to **Customer Managed Schedules** for all Azure VMs. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md). +> - For Arc-enabled servers, the updates and maintenance options such as Automatic VM Guest patching in Azure, Windows automatic updates, and hotpatching aren't supported. + + ## Next steps * To view update assessment and deployment logs generated by Update Manager, see [Query logs](query-logs.md).-* To troubleshoot issues, see [Troubleshoot Update Manager](troubleshoot.md). +* To troubleshoot Azure Update Manager issues, see [Troubleshoot issues](troubleshoot.md). |
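The Automatic Extension Upgrade section above says no more than 20% of a scale set is upgraded per batch, with a minimum batch size of one. That batching rule can be sketched as follows (an illustrative model only; the platform additionally applies availability-first ordering and health checks between batches):

```python
import math

def upgrade_batches(instances, batch_fraction=0.2):
    """Split scale-set instances into rolling upgrade batches.

    Sketch of the documented rule: at most 20% of instances per batch,
    minimum batch size of one. Real orchestration also reorders by
    availability zone/domain and gates each batch on health signals.
    """
    if not instances:
        return []
    batch_size = max(1, math.floor(len(instances) * batch_fraction))
    return [instances[i:i + batch_size]
            for i in range(0, len(instances), batch_size)]

print(upgrade_batches(["vm0", "vm1", "vm2"]))  # [['vm0'], ['vm1'], ['vm2']]
```

With ten instances the batch size becomes two, yielding five rolling batches; with three instances the 20% floor collapses to the minimum batch size of one.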
update-center | Whats Upcoming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-upcoming.md | Last updated 09/27/2023 The article [What's new in Azure Update Manager](whats-new.md) contains updates of feature releases. This article lists all the upcoming features for Azure Update Manager. +## Alerting +Enable alerts to address events as captured in updates data. ## Prescript and postscript |
virtual-machines | Bpsv2 Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/bpsv2-arm.md | Last updated 06/09/2023 The Bpsv2-series virtual machines are based on the Arm architecture, featuring the Ampere® Altra® Arm-based processor operating at 3.0 GHz, delivering outstanding price-performance for general-purpose workloads. These virtual machines offer a range of VM sizes, from 0.5 GiB to up to 4 GiB of memory per vCPU, to meet the needs of applications that do not need the full performance of the CPU continuously, such as development and test servers, low-traffic web servers, small databases, microservices, servers for proof-of-concepts, build servers, and code repositories. These workloads typically have burstable performance requirements. The Bpsv2-series VMs provide you with the ability to purchase a VM size with baseline performance that can build up credits when it is using less than its baseline performance. When the VM has accumulated credits, the VM can burst above the baseline using up to 100% of the vCPU when your application requires higher CPU performance. ## Bpsv2-series-Bpsv2 VMs offer up to 16 vCPU and 64 GiB of RAM and are optimized for scale-out and most enterprise workloads. Bpsv2-series virtual machines support Standard SSD, Standard HDD, Premium SSd disk types with no local-SSD support (i.e. no local or temp disk) and you can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). +Bpsv2 VMs offer up to 16 vCPU and 64 GiB of RAM and are optimized for scale-out and most enterprise workloads. Bpsv2-series virtual machines support Standard SSD, Standard HDD, Premium SSD disk types with no local-SSD support (i.e. no local or temp disk) and you can also attach Ultra Disk storage based on its regional availability.
Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). [Premium Storage](premium-storage-performance.md): Supported<br> |
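As described above, Bpsv2 burstable VMs bank CPU credits while running below their baseline and spend them when bursting above it. A toy model of that accounting (illustrative only; actual credit accrual rates, caps, and intervals vary by VM size and are not part of any Azure API):

```python
def remaining_credits(baseline, usage_per_interval, max_credits):
    """Toy model of burstable-VM CPU credit banking.

    Each interval the VM banks (baseline - usage) vCPU-intervals of
    credit when below baseline, and spends the difference when bursting.
    Credits never go negative and are capped at max_credits.
    """
    credits = 0.0
    for usage in usage_per_interval:
        credits = min(max(credits + (baseline - usage), 0.0), max_credits)
    return credits

# Ten quiet intervals at 10% usage against a 20% baseline bank ~1.0
# credit; one subsequent full-speed burst interval spends 0.8 of it.
print(remaining_credits(0.2, [0.1] * 10 + [1.0], 100.0))
```

The takeaway mirrors the prose: sustained usage above baseline is only possible while the banked balance lasts, which is why these sizes suit intermittently busy workloads.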
virtual-machines | Dplsv5 Dpldsv5 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dplsv5-dpldsv5-series.md | -**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets +**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Client VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets The Dplsv5-series and Dpldsv5-series virtual machines are based on the Arm architecture, delivering outstanding price-performance for general-purpose workloads. These virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer a range of vCPU sizes, up to 2 GiB of memory per vCPU, and temporary storage options able to meet the requirements of most non-memory-intensive and scale-out workloads such as microservices, small databases, caches, gaming servers, and more. |
virtual-machines | Dpsv5 Dpdsv5 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dpsv5-dpdsv5-series.md | -**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets +**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Client VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets The Dpsv5-series and Dpdsv5-series virtual machines are based on the Arm architecture, delivering outstanding price-performance for general-purpose workloads. These virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer a range of vCPU sizes, up to 4 GiB of memory per vCPU, and temporary storage options able to meet the requirements of scale-out and most enterprise workloads such as web and application servers, small to medium databases, caches, and more. |
virtual-machines | Epsv5 Epdsv5 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/epsv5-epdsv5-series.md | -**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets +**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Client VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets The Epsv5-series and Epdsv5-series virtual machines are based on the Arm architecture, delivering outstanding price-performance for memory-intensive workloads. These virtual machines feature the Ampere® Altra® Arm-based processor operating at 3.0 GHz, which provides an entire physical core for each virtual machine vCPU. These virtual machines offer a range of vCPU sizes, up to 8 GiB of memory per vCPU, and are best suited for memory-intensive scale-out and enterprise workloads, such as relational database servers, large databases, data analytics engines, in-memory caches, and more. |
virtual-machines | Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/regions.md | Examples of region pairs include: You can see the full [list of regional pairs here](../availability-zones/cross-region-replication-azure.md#azure-paired-regions). ## Feature availability-Some services or VM features are only available in certain regions, such as specific VM sizes or storage types. There are also some global Azure services that do not require you to select a particular region, such as [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md), [Traffic Manager](../traffic-manager/traffic-manager-overview.md), or [Azure DNS](../dns/dns-overview.md). To assist you in designing your application environment, you can check the [availability of Azure services across each region](https://azure.microsoft.com/regions/#services). You can also [programmatically query the supported VM sizes and restrictions in each region](../azure-resource-manager/templates/error-sku-not-available.md). +Some services or VM features are only available in certain regions, such as specific VM sizes or storage types. There are also some global Azure services that do not require you to select a particular region, such as [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md), [Traffic Manager](../traffic-manager/traffic-manager-overview.md), or [Azure DNS](../dns/dns-overview.md). To assist you in designing your application environment, you can check the [availability of Azure services across each region](https://azure.microsoft.com/regions/#services). You can also [programmatically query the supported VM sizes and restrictions in each region](../azure-resource-manager/templates/error-sku-not-available.md). ## Storage availability Understanding Azure regions and geographies becomes important when you consider the available storage replication options. 
Depending on the storage type, you have different replication options. |
virtual-machines | Copy Managed Disks To Same Or Different Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-managed-disks-to-same-or-different-subscription.md | -This article contains two scripts. The first script copies a managed disk that's using platform-managed keys to same or different subscription but in the same region. The second script copies a managed disk that's using customer-managed keys to the same or a different subscription in the same region. Either copy only works when the subscriptions are part of the same Azure AD tenant. +This article contains two scripts. The first script copies a managed disk that's using platform-managed keys to same or different subscription but in the same region. The second script copies a managed disk that's using customer-managed keys to the same or a different subscription in the same region. Either copy only works when the subscriptions are part of the same Microsoft Entra tenant. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] |
virtual-machines | Security Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-policy.md | To learn more about server-side encryption, refer to either the articles for [Wi For enhanced [Windows VM](windows/disk-encryption-overview.md) and [Linux VM](linux/disk-encryption-overview.md) security and compliance, virtual disks in Azure can be encrypted. Virtual disks on Windows VMs are encrypted at rest using BitLocker. Virtual disks on Linux VMs are encrypted at rest using dm-crypt. -There is no charge for encrypting virtual disks in Azure. Cryptographic keys are stored in Azure Key Vault using software-protection, or you can import or generate your keys in Hardware Security Modules (HSMs) certified to FIPS 140-2 level 2 standards. These cryptographic keys are used to encrypt and decrypt virtual disks attached to your VM. You retain control of these cryptographic keys and can audit their use. An Azure Active Directory service principal provides a secure mechanism for issuing these cryptographic keys as VMs are powered on and off. +There is no charge for encrypting virtual disks in Azure. Cryptographic keys are stored in Azure Key Vault using software-protection, or you can import or generate your keys in Hardware Security Modules (HSMs) certified to FIPS 140-2 level 2 standards. These cryptographic keys are used to encrypt and decrypt virtual disks attached to your VM. You retain control of these cryptographic keys and can audit their use. A Microsoft Entra service principal provides a secure mechanism for issuing these cryptographic keys as VMs are powered on and off. ## Key Vault and SSH Keys When you connect to VMs, you should use public-key cryptography to provide a mor A common challenge when building cloud applications is how to manage the credentials in your code for authenticating to cloud services. Keeping the credentials secure is an important task. 
Ideally, the credentials never appear on developer workstations and aren't checked into source control. Azure Key Vault provides a way to securely store credentials, secrets, and other keys, but your code has to authenticate to Key Vault to retrieve them. -The managed identities for Azure resources feature in Azure Active Directory (Azure AD) solves this problem. The feature provides Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without any credentials in your code. Your code that's running on a VM can request a token from two endpoints that are accessible only from within the VM. For more detailed information about this service, review the [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) overview page. +The managed identities for Azure resources feature in Microsoft Entra solves this problem. The feature provides Azure services with an automatically managed identity in Microsoft Entra ID. You can use the identity to authenticate to any service that supports Microsoft Entra authentication, including Key Vault, without any credentials in your code. Your code that's running on a VM can request a token from two endpoints that are accessible only from within the VM. For more detailed information about this service, review the [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) overview page. ## Policies |
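Code on a VM obtains a managed-identity token from the instance metadata endpoint, which is reachable only from inside the VM, by sending a GET with a `Metadata: true` header. A sketch that only assembles the request without sending it (the endpoint, header, and `2018-02-01` api-version follow the managed identities documentation; treat the exact version as something to verify against current docs):

```python
from urllib.parse import urlencode

IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def imds_token_request(resource, api_version="2018-02-01"):
    """Build the URL and headers for a managed-identity token request.

    Nothing is sent here; the endpoint only responds from inside an
    Azure VM. Shown purely to illustrate the request shape.
    """
    query = urlencode({"api-version": api_version, "resource": resource})
    return f"{IMDS_TOKEN_URL}?{query}", {"Metadata": "true"}

url, headers = imds_token_request("https://vault.azure.net")
print(url)
```

Issuing that GET from within the VM returns a JSON body whose `access_token` can then be presented to Key Vault, with no credential ever stored in the code.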
virtual-machines | Security Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-recommendations.md | For general information about Microsoft Defender for Cloud, see [What is Microso | Recommendation | Comments | Defender for Cloud | |-|-|--|-| Centralize VM authentication. | You can centralize the authentication of your Windows and Linux VMs by using [Azure Active Directory authentication](../active-directory/develop/authentication-vs-authorization.md). | - | +| Centralize VM authentication. | You can centralize the authentication of your Windows and Linux VMs by using [Microsoft Entra authentication](../active-directory/develop/authentication-vs-authorization.md). | - | ## Monitoring |
virtual-machines | Share Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery.md | If you share gallery resources to someone outside of your Azure tenant, they wil 1. On the page for your gallery, in the menu on the left, select **Access control (IAM)**. 1. Under **Add a role assignment**, select **Add**. The **Add a role assignment** pane will open. 1. Under **Role**, select **Reader**.-1. Under **assign access to**, leave the default of **Azure AD user, group, or service principal**. +1. Under **assign access to**, leave the default of **Microsoft Entra user, group, or service principal**. 1. Under **Select**, type in the email address of the person that you would like to invite. 1. If the user is outside of your organization, you'll see the message **This user will be sent an email that enables them to collaborate with Microsoft.** Select the user with the email address and then click **Save**. New-AzRoleAssignment ` - Create an [image definition and an image version](image-version.md). - Create a VM from a [generalized](vm-generalized-image-version.md) or [specialized](vm-specialized-image-version.md) image in a gallery.-- |
virtual-machines | Shared Image Galleries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/shared-image-galleries.md | You can create Azure Compute Gallery resource using templates. There are several * [Can I move the Azure Compute Gallery resource to a different subscription after it has been created?](#can-i-move-the-azure-compute-gallery-resource-to-a-different-subscription-after-it-has-been-created) * [Can I replicate my image versions across clouds such as Microsoft Azure operated by 21Vianet, Azure Germany, or Azure Government Cloud?](#can-i-replicate-my-image-versions-across-clouds-such-as-azure-operated-by-21vianet-or-azure-germany-or-azure-government-cloud) * [Can I replicate my image versions across subscriptions?](#can-i-replicate-my-image-versions-across-subscriptions)-* [Can I share image versions across Azure AD tenants?](#can-i-share-image-versions-across-azure-ad-tenants) +* [Can I share image versions across Microsoft Entra tenants?](#can-i-share-image-versions-across-azure-ad-tenants) * [How long does it take to replicate image versions across the target regions?](#how-long-does-it-take-to-replicate-image-versions-across-the-target-regions) * [What is the difference between source region and target region?](#what-is-the-difference-between-source-region-and-target-region) * [How do I specify the source region while creating the image version?](#how-do-i-specify-the-source-region-while-creating-the-image-version) No, you can't replicate image versions across clouds. No, you may replicate the image versions across regions in a subscription and use it in other subscriptions through RBAC. -### Can I share image versions across Azure AD tenants? +<a name='can-i-share-image-versions-across-azure-ad-tenants'></a> ++### Can I share image versions across Microsoft Entra tenants? Yes, you can use RBAC to share to individuals across tenants. 
But, to share at scale, see "Share gallery images across Azure tenants" using [PowerShell](share-gallery.md?tabs=powershell) or [CLI](share-gallery.md?tabs=cli). |
virtual-machines | Disk Encryption Key Vault Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault-aad.md | Title: Create and configure a key vault for Azure Disk Encryption with Azure AD (previous release) -description: In this article, learn how to create and configure a key vault for Azure Disk Encryption with Azure AD. + Title: Create and configure a key vault for Azure Disk Encryption with Microsoft Entra ID (previous release) +description: In this article, learn how to create and configure a key vault for Azure Disk Encryption with Microsoft Entra ID. -# Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release) +# Creating and configuring a key vault for Azure Disk Encryption with Microsoft Entra ID (previous release) **Applies to:** :heavy_check_mark: Windows VMs -**The new release of Azure Disk Encryption eliminates the requirement for providing an Azure AD application parameter to enable VM disk encryption. With the new release, you are no longer required to provide Azure AD credentials during the enable encryption step. All new VMs must be encrypted without the Azure AD application parameters using the new release. To view instructions to enable VM disk encryption using the new release, see [Azure Disk Encryption](disk-encryption-overview.md). VMs that were already encrypted with Azure AD application parameters are still supported and should continue to be maintained with the AAD syntax.** +**The new release of Azure Disk Encryption eliminates the requirement for providing a Microsoft Entra application parameter to enable VM disk encryption. With the new release, you are no longer required to provide Microsoft Entra credentials during the enable encryption step. All new VMs must be encrypted without the Microsoft Entra application parameters using the new release. 
To view instructions to enable VM disk encryption using the new release, see [Azure Disk Encryption](disk-encryption-overview.md). VMs that were already encrypted with Microsoft Entra application parameters are still supported and should continue to be maintained with the Microsoft Entra syntax.** Azure Disk Encryption uses Azure Key Vault to control and manage disk encryption keys and secrets. For more information about key vaults, see [Get started with Azure Key Vault](../../key-vault/general/overview.md) and [Secure your key vault](../../key-vault/general/security-features.md). -Creating and configuring a key vault for use with Azure Disk Encryption with Azure AD (previous release) involves three steps: +Creating and configuring a key vault for use with Azure Disk Encryption with Microsoft Entra ID (previous release) involves three steps: 1. Create a key vault.-2. Set up an Azure AD application and service principal. -3. Set the key vault access policy for the Azure AD app. +2. Set up a Microsoft Entra application and service principal. +3. Set the key vault access policy for the Microsoft Entra app. 4. Set key vault advanced access policies. You may also, if you wish, generate or import a key encryption key (KEK). You can create a key vault by using the [Resource Manager template](https://gith 2. Select the subscription, resource group, resource group location, Key Vault name, Object ID, legal terms, and agreement, and then select **Purchase**. -## Set up an Azure AD app and service principal -When you need encryption to be enabled on a running VM in Azure, Azure Disk Encryption generates and writes the encryption keys to your key vault. Managing encryption keys in your key vault requires Azure AD authentication. Create an Azure AD application for this purpose. 
For authentication purposes, you can use either client secret-based authentication or [client certificate-based Azure AD authentication](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md). +<a name='set-up-an-azure-ad-app-and-service-principal'></a> +## Set up a Microsoft Entra app and service principal +When you need encryption to be enabled on a running VM in Azure, Azure Disk Encryption generates and writes the encryption keys to your key vault. Managing encryption keys in your key vault requires Microsoft Entra authentication. Create a Microsoft Entra application for this purpose. For authentication purposes, you can use either client secret-based authentication or [client certificate-based Microsoft Entra authentication](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md). -### Set up an Azure AD app and service principal with Azure PowerShell ++<a name='set-up-an-azure-ad-app-and-service-principal-with-azure-powershell'></a> ++### Set up a Microsoft Entra app and service principal with Azure PowerShell To execute the following commands, get and use the [Azure AD PowerShell module](/powershell/azure/active-directory/install-adv2). -1. Use the [New-AzADApplication](/powershell/module/az.resources/new-azadapplication) PowerShell cmdlet to create an Azure AD application. MyApplicationHomePage and the MyApplicationUri can be any values you wish. +1. Use the [New-AzADApplication](/powershell/module/az.resources/new-azadapplication) PowerShell cmdlet to create a Microsoft Entra application. MyApplicationHomePage and the MyApplicationUri can be any values you wish. ```azurepowershell $aadClientSecret = "My AAD client secret" To execute the following commands, get and use the [Azure AD PowerShell module]( $servicePrincipal = New-AzADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId -Role Contributor ``` -3.
The $azureAdApplication.ApplicationId is the Azure AD ClientID and the $aadClientSecret is the client secret that you'll use later to enable Azure Disk Encryption. Safeguard the Azure AD client secret appropriately. Running `$azureAdApplication.ApplicationId` will show you the ApplicationID. +3. The $azureAdApplication.ApplicationId is the Microsoft Entra ClientID and the $aadClientSecret is the client secret that you'll use later to enable Azure Disk Encryption. Safeguard the Microsoft Entra client secret appropriately. Running `$azureAdApplication.ApplicationId` will show you the ApplicationID. + +<a name='set-up-an-azure-ad-app-and-service-principal-with-azure-cli'></a> -### Set up an Azure AD app and service principal with Azure CLI +### Set up a Microsoft Entra app and service principal with Azure CLI You can manage your service principals with Azure CLI using the [az ad sp](/cli/azure/ad/sp) commands. For more information, see [Create an Azure service principal](/cli/azure/create-an-azure-service-principal-azure-cli). You can manage your service principals with Azure CLI using the [az ad sp](/cli/ ```azurecli-interactive az ad sp create-for-rbac --name "ServicePrincipalName" --password "My-AAD-client-secret" --role Contributor --scopes /subscriptions/<subscription_id> ```-3. The appId returned is the Azure AD ClientID used in other commands. It's also the SPN you'll use for az keyvault set-policy. The password is the client secret that you should use later to enable Azure Disk Encryption. Safeguard the Azure AD client secret appropriately. +3. The appId returned is the Microsoft Entra ClientID used in other commands. It's also the SPN you'll use for az keyvault set-policy. The password is the client secret that you should use later to enable Azure Disk Encryption. Safeguard the Microsoft Entra client secret appropriately. 
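The `az ad sp create-for-rbac` command above prints a JSON object; the mapping from its fields to the encryption parameters can be sketched like this (the field names `appId`, `password`, and `tenant` match the CLI's documented output; the values below are placeholders, not real credentials):

```python
import json

# Placeholder example of the JSON shape printed by `az ad sp create-for-rbac`.
sp_output = json.loads("""
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "ServicePrincipalName",
  "password": "placeholder-client-secret",
  "tenant": "11111111-1111-1111-1111-111111111111"
}
""")

# Map the CLI output onto the names the disk-encryption cmdlets expect.
aad_client_id = sp_output["appId"]         # AadClientId / --spn for set-policy
aad_client_secret = sp_output["password"]  # AadClientSecret; store securely
```

Treat `password` as a secret from the moment it is printed: capture it into a secure store rather than a shell history or script file.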
-### Set up an Azure AD app and service principal through the Azure portal -Use the steps from the [Use portal to create an Azure Active Directory application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md) article to create an Azure AD application. Each of these steps will take you directly to the article section to complete. +<a name='set-up-an-azure-ad-app-and-service-principal-through-the-azure-portal'></a> ++### Set up a Microsoft Entra app and service principal through the Azure portal +Use the steps from the [Use portal to create a Microsoft Entra application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md) article to create a Microsoft Entra application. Each of these steps will take you directly to the article section to complete. 1. [Verify required permissions](../../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app)-2. [Create an Azure Active Directory application](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) +2. [Create a Microsoft Entra application](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) - You can use any name and sign-on URL you would like when creating the application. 3. [Get the application ID and the authentication key](../../active-directory/develop/howto-create-service-principal-portal.md#sign-in-to-the-application). - The authentication key is the client secret and is used as the AadClientSecret for Set-AzVMDiskEncryptionExtension.- - The authentication key is used by the application as a credential to sign in to Azure AD. In the Azure portal, this secret is called keys, but has no relation to key vaults. Secure this secret appropriately. 
+ - The authentication key is used by the application as a credential to sign in to Microsoft Entra ID. In the Azure portal, this secret is called keys, but has no relation to key vaults. Secure this secret appropriately. - The application ID will be used later as the AadClientId for Set-AzVMDiskEncryptionExtension and as the ServicePrincipalName for Set-AzKeyVaultAccessPolicy. -## Set the key vault access policy for the Azure AD app -To write encryption secrets to a specified Key Vault, Azure Disk Encryption needs the Client ID and the Client Secret of the Azure Active Directory application that has permissions to write secrets to the Key Vault. +<a name='set-the-key-vault-access-policy-for-the-azure-ad-app'></a> ++## Set the key vault access policy for the Microsoft Entra app +To write encryption secrets to a specified Key Vault, Azure Disk Encryption needs the Client ID and the Client Secret of the Microsoft Entra application that has permissions to write secrets to the Key Vault. > [!NOTE]-> Azure Disk Encryption requires you to configure the following access policies to your Azure AD client application: _WrapKey_ and _Set_ permissions. +> Azure Disk Encryption requires you to configure the following access policies to your Microsoft Entra client application: _WrapKey_ and _Set_ permissions. -### Set the key vault access policy for the Azure AD app with Azure PowerShell -Your Azure AD application needs rights to access the keys or secrets in the vault. Use the [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) cmdlet to grant permissions to the application, using the client ID (which was generated when the application was registered) as the _-ServicePrincipalName_ parameter value. To learn more, see the blog post [Azure Key Vault - Step by Step](/archive/blogs/kv/azure-key-vault-step-by-step).
+<a name='set-the-key-vault-access-policy-for-the-azure-ad-app-with-azure-powershell'></a> ++### Set the key vault access policy for the Microsoft Entra app with Azure PowerShell +Your Microsoft Entra application needs rights to access the keys or secrets in the vault. Use the [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) cmdlet to grant permissions to the application, using the client ID (which was generated when the application was registered) as the _-ServicePrincipalName_ parameter value. To learn more, see the blog post [Azure Key Vault - Step by Step](/archive/blogs/kv/azure-key-vault-step-by-step). 1. Set the key vault access policy for the AD application with PowerShell. Your Azure AD application needs rights to access the keys or secrets in the vaul Set-AzKeyVaultAccessPolicy -VaultName $keyVaultName -ServicePrincipalName $aadClientID -PermissionsToKeys 'WrapKey' -PermissionsToSecrets 'Set' -ResourceGroupName $KVRGname ``` -### Set the key vault access policy for the Azure AD app with Azure CLI +<a name='set-the-key-vault-access-policy-for-the-azure-ad-app-with-azure-cli'></a> ++### Set the key vault access policy for the Microsoft Entra app with Azure CLI Use [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy) to set the access policy. For more information, see [Manage Key Vault using CLI 2.0](../../key-vault/general/manage-with-cli2.md#authorizing-an-application-to-use-a-key-or-secret).
Give the service principal you created via the Azure CLI access to get secrets and wrap keys with the following command: Give the service principal you created via the Azure CLI access to get secrets a az keyvault set-policy --name "MySecureVault" --spn "<spn created with CLI/the Azure AD ClientID>" --key-permissions wrapKey --secret-permissions set ``` -### Set the key vault access policy for the Azure AD app with the portal +<a name='set-the-key-vault-access-policy-for-the-azure-ad-app-with-the-portal'></a> ++### Set the key vault access policy for the Microsoft Entra app with the portal 1. Open the resource group with your key vault. 2. Select your key vault, go to **Access Policies**, then select **Add new**.-3. Under **Select principal**, search for the Azure AD application you created and select it. +3. Under **Select principal**, search for the Microsoft Entra application you created and select it. 4. For **Key permissions**, check **Wrap Key** under **Cryptographic Operations**. 5. For **Secret permissions**, check **Set** under **Secret Management Operations**. 6. Select **OK** to save the access policy. Before using the PowerShell script, you should be familiar with the Azure Disk E $servicePrincipal = New-AzADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId -Role Contributor; $aadClientID = $azureAdApplication.ApplicationId; - #Step 3: Enable the vault for disk encryption and set the access policy for the Microsoft Entra application.
Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ResourceGroupName $KVRGname -EnabledForDiskEncryption; Set-AzKeyVaultAccessPolicy -VaultName $keyVaultName -ServicePrincipalName $aadClientID -PermissionsToKeys 'WrapKey' -PermissionsToSecrets 'Set' -ResourceGroupName $KVRGname; If you would like to use certificate authentication, you can upload one to your $DiskEncryptionKeyVaultUrl = $KeyVault.VaultUri $KeyVaultResourceId = $KeyVault.ResourceId - # Create the Azure AD application and associate the certificate with it. + # Create the Microsoft Entra application and associate the certificate with it. # Fill in "C:\certificates\mycert.pfx", "Password", "<My Application Display Name>", "<https://MyApplicationHomePage>", and "<https://MyApplicationUri>" with your values. # MyApplicationHomePage and the MyApplicationUri can be any values you wish If you would like to use certificate authentication, you can upload one to your $VM = Add-AzVMSecret -VM $VM -SourceVaultId $SourceVaultId -CertificateStore "My" -CertificateUrl $CertUrl Update-AzVM -VM $VM -ResourceGroupName $VMRGName - #Enable encryption on the VM using Azure AD client ID and the client certificate thumbprint + #Enable encryption on the VM using Microsoft Entra client ID and the client certificate thumbprint Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $VMName -AadClientID $AADClientID -AadClientCertThumbprint $AADClientCertThumbprint -DiskEncryptionKeyVaultUrl $DiskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId ``` If you would like to use certificate authentication and wrap the encryption key ## Next steps -[Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)](disk-encryption-windows-aad.md) +[Enable Azure Disk Encryption with Microsoft Entra ID on Windows VMs (previous release)](disk-encryption-windows-aad.md) |
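The PowerShell, CLI, and portal paths above all grant the same minimum rights called out in the article's note: _WrapKey_ on keys and _Set_ on secrets. A small Python sketch (our own illustrative helper, not an Azure SDK API) makes that invariant checkable:

```python
# Minimum permissions Azure Disk Encryption needs on the key vault,
# per the note in the article: WrapKey (keys) and Set (secrets).
REQUIRED_KEY_PERMS = {"wrapkey"}
REQUIRED_SECRET_PERMS = {"set"}

def policy_is_sufficient(key_perms, secret_perms):
    """Return True if a candidate access policy covers the ADE minimum.

    Comparison is case-insensitive because the portal shows 'Wrap Key'
    while the CLI accepts 'wrapKey'.
    """
    have_keys = {p.replace(" ", "").lower() for p in key_perms}
    have_secrets = {p.replace(" ", "").lower() for p in secret_perms}
    return REQUIRED_KEY_PERMS <= have_keys and REQUIRED_SECRET_PERMS <= have_secrets

policy_is_sufficient(["Wrap Key"], ["Set"])   # sufficient
policy_is_sufficient(["Get", "List"], ["Set"])  # insufficient: WrapKey missing
```

Granting only these two permissions keeps the service principal's blast radius small: it can wrap keys and write secrets, but cannot read or delete existing vault contents.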
virtual-machines | Disk Encryption Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault.md | -> - If you have previously used Azure Disk Encryption with Azure AD to encrypt a VM, you must continue use this option to encrypt your VM. See [Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release)](disk-encryption-key-vault-aad.md) for details. +> - If you have previously used Azure Disk Encryption with Microsoft Entra ID to encrypt a VM, you must continue to use this option to encrypt your VM. See [Creating and configuring a key vault for Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-key-vault-aad.md) for details. Creating and configuring a key vault for use with Azure Disk Encryption involves three steps: |
virtual-machines | Disk Encryption Overview Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-overview-aad.md | -**The new release of Azure Disk Encryption eliminates the requirement for providing an Azure AD application parameter to enable VM disk encryption. With the new release, you are no longer required to provide Azure AD credentials during the enable encryption step. All new VMs must be encrypted without the Azure AD application parameters using the new release. To view instructions to enable VM disk encryption using the new release, see [Azure Disk Encryption for Windows VMs](disk-encryption-overview.md). VMs that were already encrypted with Azure AD application parameters are still supported and should continue to be maintained with the AAD syntax.** +**The new release of Azure Disk Encryption eliminates the requirement for providing a Microsoft Entra application parameter to enable VM disk encryption. With the new release, you are no longer required to provide Microsoft Entra credentials during the enable encryption step. All new VMs must be encrypted without the Microsoft Entra application parameters using the new release. To view instructions to enable VM disk encryption using the new release, see [Azure Disk Encryption for Windows VMs](disk-encryption-overview.md). VMs that were already encrypted with Microsoft Entra application parameters are still supported and should continue to be maintained with the Microsoft Entra syntax.** -This article supplements [Azure Disk Encryption for Windows VMs](disk-encryption-overview.md) with additional requirements and prerequisites for Azure Disk Encryption with Azure AD (previous release). The [Supported VMs and operating systems](disk-encryption-overview.md#supported-vms-and-operating-systems) section remains the same. 
+This article supplements [Azure Disk Encryption for Windows VMs](disk-encryption-overview.md) with additional requirements and prerequisites for Azure Disk Encryption with Microsoft Entra ID (previous release). The [Supported VMs and operating systems](disk-encryption-overview.md#supported-vms-and-operating-systems) section remains the same. ## Networking and Group Policy -**To enable the Azure Disk Encryption feature using the older AAD parameter syntax, the IaaS VMs must meet the following network endpoint configuration requirements:** - - To get a token to connect to your key vault, the IaaS VM must be able to connect to an Azure Active Directory endpoint, \[login.microsoftonline.com\]. +**To enable the Azure Disk Encryption feature using the older Microsoft Entra parameter syntax, the IaaS VMs must meet the following network endpoint configuration requirements:** + - To get a token to connect to your key vault, the IaaS VM must be able to connect to a Microsoft Entra endpoint, \[login.microsoftonline.com\]. - To write the encryption keys to your key vault, the IaaS VM must be able to connect to the key vault endpoint. - The IaaS VM must be able to connect to an Azure storage endpoint that hosts the Azure extension repository and an Azure storage account that hosts the VHD files. - If your security policy limits access from Azure VMs to the Internet, you can resolve the preceding URI and configure a specific rule to allow outbound connectivity to the IPs. For more information, see [Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md). This article supplements [Azure Disk Encryption for Windows VMs](disk-encryption Azure Disk Encryption requires an Azure Key Vault to control and manage disk encryption keys and secrets. Your key vault and VMs must reside in the same Azure region and subscription. 
-For details, see [Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release)](disk-encryption-key-vault-aad.md). +For details, see [Creating and configuring a key vault for Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-key-vault-aad.md). ## Next steps -- [Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release)](disk-encryption-key-vault-aad.md)-- [Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)](disk-encryption-windows-aad.md)+- [Creating and configuring a key vault for Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-key-vault-aad.md) +- [Enable Azure Disk Encryption with Microsoft Entra ID on Windows VMs (previous release)](disk-encryption-windows-aad.md) - [Azure Disk Encryption prerequisites CLI script](https://github.com/ejarvi/ade-cli-getting-started) - [Azure Disk Encryption prerequisites PowerShell script](https://github.com/Azure/azure-powershell/tree/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts) |
virtual-machines | Disk Encryption Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-overview.md | If you use [Microsoft Defender for Cloud](../../security-center/index.yml), you' ![Microsoft Defender for Cloud disk encryption alert](../media/disk-encryption/security-center-disk-encryption-fig1.png) > [!WARNING]-> - If you have previously used Azure Disk Encryption with Azure AD to encrypt a VM, you must continue use this option to encrypt your VM. See [Azure Disk Encryption with Azure AD (previous release)](disk-encryption-overview-aad.md) for details. +> - If you have previously used Azure Disk Encryption with Microsoft Entra ID to encrypt a VM, you must continue to use this option to encrypt your VM. See [Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-overview-aad.md) for details. > - Certain recommendations might increase data, network, or compute resource usage, resulting in additional license or subscription costs. You must have a valid active Azure subscription to create resources in Azure in the supported regions. > - Do not use BitLocker to manually decrypt a VM or disk that was encrypted through Azure Disk Encryption. Azure Disk Encryption is not available on [Basic, A-series VMs](https://azure.mi ## Networking requirements To enable Azure Disk Encryption, the VMs must meet the following network endpoint configuration requirements:- - To get a token to connect to your key vault, the Windows VM must be able to connect to an Azure Active Directory endpoint, \[login.microsoftonline.com\]. + - To get a token to connect to your key vault, the Windows VM must be able to connect to a Microsoft Entra endpoint, \[login.microsoftonline.com\]. - To write the encryption keys to your key vault, the Windows VM must be able to connect to the key vault endpoint. 
- The Windows VM must be able to connect to an Azure storage endpoint that hosts the Azure extension repository and an Azure storage account that hosts the VHD files. - If your security policy limits access from Azure VMs to the Internet, you can resolve the preceding URI and configure a specific rule to allow outbound connectivity to the IPs. For more information, see [Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md). |
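The endpoint requirements above amount to an outbound allowlist for the VM. A minimal sketch of that checklist, assuming a placeholder vault hostname (`myvault.vault.azure.net`; yours will differ, and storage endpoints vary by account and region so they are omitted):

```python
# Outbound hosts the VM needs for Azure Disk Encryption, from the
# requirements listed above. The vault entry is a placeholder example.
required_endpoints = [
    "login.microsoftonline.com",   # Microsoft Entra token endpoint
    "myvault.vault.azure.net",     # example key vault endpoint (yours differs)
]

def blocked_endpoints(allowlist):
    """Return required endpoints missing from a firewall allowlist
    (illustrative helper, not an Azure tool)."""
    allowed = {h.lower() for h in allowlist}
    return [h for h in required_endpoints if h.lower() not in allowed]

blocked_endpoints(["login.microsoftonline.com"])  # vault endpoint still blocked
```

If this returns a non-empty list for your firewall configuration, the enable-encryption step will fail at either token acquisition or key write-back.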
virtual-machines | Disk Encryption Sample Scripts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-sample-scripts.md | The following table shows which parameters can be used in the PowerShell script: |$keyVaultName|Name of the KeyVault in which encryption keys are to be placed. A new vault with this name will be created if one doesn't exist.| True| |$location|Location of the KeyVault. Make sure the KeyVault and VMs to be encrypted are in the same location. Get a location list with `Get-AzLocation`.|True| |$subscriptionId|Identifier of the Azure subscription to be used. You can get your Subscription ID with `Get-AzSubscription`.|True|-|$aadAppName|Name of the Azure AD application that will be used to write secrets to KeyVault. A new application with this name will be created if one doesn't exist. If this app already exists, pass aadClientSecret parameter to the script.|False| -|$aadClientSecret|Client secret of the Azure AD application that was created earlier.|False| +|$aadAppName|Name of the Microsoft Entra application that will be used to write secrets to KeyVault. A new application with this name will be created if one doesn't exist. If this app already exists, pass aadClientSecret parameter to the script.|False| +|$aadClientSecret|Client secret of the Microsoft Entra application that was created earlier.|False| |$keyEncryptionKeyName|Name of optional key encryption key in KeyVault. 
A new key with this name will be created if one doesn't exist.|False| ## Resource Manager templates -### Encrypt or decrypt VMs without an Azure AD app +<a name='encrypt-or-decrypt-vms-without-an-azure-ad-app'></a> ++### Encrypt or decrypt VMs without a Microsoft Entra app - [Enable disk encryption on an existing or running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-windows-vm-without-aad) - [Disable encryption on a running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/decrypt-running-windows-vm-without-aad) -### Encrypt or decrypt VMs with an Azure AD app (previous release) +<a name='encrypt-or-decrypt-vms-with-an-azure-ad-app-previous-release'></a> ++### Encrypt or decrypt VMs with a Microsoft Entra app (previous release) - [Enable disk encryption on an existing or running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-windows-vm) - [Disable encryption on a running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/decrypt-running-windows-vm) |
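The parameter table above distinguishes required from optional script inputs. A small pre-flight check in that spirit (our own illustrative helper covering only the rows shown, not part of the Microsoft script):

```python
# Required/optional flags taken from the parameter table above.
PARAMS = {
    "keyVaultName": True,
    "location": True,
    "subscriptionId": True,
    "aadAppName": False,
    "aadClientSecret": False,
    "keyEncryptionKeyName": False,
}

def missing_required(supplied):
    """List required parameters absent from the supplied values."""
    return [name for name, required in PARAMS.items()
            if required and name not in supplied]

missing_required({"keyVaultName": "kv", "location": "westus"})
# subscriptionId is still missing
```

Running a check like this before invoking the script surfaces missing required values immediately, instead of partway through resource creation.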
virtual-machines | Disk Encryption Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-troubleshooting.md | Any network security group settings that are applied must still allow the endpoi ### Azure Key Vault behind a firewall -When encryption is being enabled with [Azure AD credentials](disk-encryption-windows-aad.md#), the target VM must allow connectivity to both Azure Active Directory endpoints and Key Vault endpoints. Current Azure Active Directory authentication endpoints are maintained in sections 56 and 59 of the [Microsoft 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges) documentation. Key Vault instructions are provided in the documentation on how to [Access Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md). +When encryption is being enabled with [Microsoft Entra credentials](disk-encryption-windows-aad.md#), the target VM must allow connectivity to both Microsoft Entra endpoints and Key Vault endpoints. Current Microsoft Entra authentication endpoints are maintained in sections 56 and 59 of the [Microsoft 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges) documentation. Key Vault instructions are provided in the documentation on how to [Access Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md). ### Azure Instance Metadata Service The VM must be able to access the [Azure Instance Metadata service](../windows/instance-metadata-service.md) endpoint (`169.254.169.254`) and the [virtual public IP address](../../virtual-network/what-is-ip-address-168-63-129-16.md) (`168.63.129.16`) used for communication with Azure platform resources. Proxy configurations that alter local HTTP traffic to these addresses (for example, adding an X-Forwarded-For header) are not supported. |
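Because proxies that alter traffic to `169.254.169.254` or `168.63.129.16` are unsupported, both addresses should bypass any proxy entirely. A hedged sketch of checking a `NO_PROXY`-style list (real proxy-bypass semantics can be richer, for example CIDR ranges, which this ignores):

```python
# Platform addresses from the section above that must bypass any proxy.
PLATFORM_IPS = ["169.254.169.254", "168.63.129.16"]

def missing_proxy_bypasses(no_proxy: str):
    """Return platform IPs absent from a comma-separated NO_PROXY list.

    Exact-match only, for illustration; CIDR or suffix matching that a
    real proxy might support is not handled here.
    """
    entries = {e.strip() for e in no_proxy.split(",") if e.strip()}
    return [ip for ip in PLATFORM_IPS if ip not in entries]

missing_proxy_bypasses("169.254.169.254")  # wire server IP still proxied
```

A non-empty result indicates traffic to a platform address may still be routed through (and modified by) the proxy, which matches the unsupported configuration described above.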
virtual-machines | Disk Encryption Windows Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-windows-aad.md | Title: Azure Disk Encryption with Azure AD for Windows VMs (previous release) + Title: Azure Disk Encryption with Microsoft Entra ID for Windows VMs (previous release) description: This article provides instructions on enabling Microsoft Azure Disk Encryption for Windows IaaS VMs. Last updated 03/15/2019 -# Azure Disk Encryption with Azure AD for Windows VMs (previous release) +# Azure Disk Encryption with Microsoft Entra ID for Windows VMs (previous release) **Applies to:** :heavy_check_mark: Windows VMs -**The new release of Azure Disk Encryption eliminates the requirement for providing an Azure AD application parameter to enable VM disk encryption. With the new release, you are no longer required to provide Azure AD credentials during the enable encryption step. All new VMs must be encrypted without the Azure AD application parameters using the new release. To view instructions to enable VM disk encryption using the new release, see [Azure Disk Encryption for Windows VMs](disk-encryption-windows.md). VMs that were already encrypted with Azure AD application parameters are still supported and should continue to be maintained with the AAD syntax.** +**The new release of Azure Disk Encryption eliminates the requirement for providing a Microsoft Entra application parameter to enable VM disk encryption. With the new release, you are no longer required to provide Microsoft Entra credentials during the enable encryption step. All new VMs must be encrypted without the Microsoft Entra application parameters using the new release. To view instructions to enable VM disk encryption using the new release, see [Azure Disk Encryption for Windows VMs](disk-encryption-windows.md). 
VMs that were already encrypted with Microsoft Entra application parameters are still supported and should continue to be maintained with the Microsoft Entra syntax.** You can enable many disk-encryption scenarios, and the steps may vary according to the scenario. The following sections cover the scenarios in greater detail for Windows IaaS VMs. Before you can use disk encryption, the [Azure Disk Encryption prerequisites](disk-encryption-overview-aad.md) need to be completed. You can enable disk encryption on new IaaS Windows VM from the Marketplace in Az - Select the VM, then click on **Disks** under the **Settings** heading to verify encryption status in the portal. In the chart under **Encryption**, you'll see if it's enabled. ![Azure portal - Disk Encryption Enabled](../media/disk-encryption/disk-encryption-fig2.png) -The following table lists the Resource Manager template parameters for new VMs from the Marketplace scenario using Azure AD client ID: +The following table lists the Resource Manager template parameters for new VMs from the Marketplace scenario using Microsoft Entra client ID: | Parameter | Description | | | | The following table lists the Resource Manager template parameters for new VMs f | vmSize | Size of the VM. Currently, only Standard A, D, and G series are supported. | | virtualNetworkName | Name of the VNet that the VM NIC should belong to. | | subnetName | Name of the subnet in the VNet that the VM NIC should belong to. |-| AADClientID | Client ID of the Azure AD application that has permissions to write secrets to your key vault. | -| AADClientSecret | Client secret of the Azure AD application that has permissions to write secrets to your key vault. | +| AADClientID | Client ID of the Microsoft Entra application that has permissions to write secrets to your key vault. | +| AADClientSecret | Client secret of the Microsoft Entra application that has permissions to write secrets to your key vault. 
| | keyVaultURL | URL of the key vault that the BitLocker key should be uploaded to. You can get it by using the cmdlet `(Get-AzKeyVault -VaultName "MyKeyVault" -ResourceGroupName "MyKeyVaultResourceGroupName").VaultURI` or the Azure CLI `az keyvault show --name "MySecureVault" --query properties.vaultUri` | | keyEncryptionKeyURL | URL of the key encryption key that's used to encrypt the generated BitLocker key (optional). </br> </br>KeyEncryptionKeyURL is an optional parameter. You can bring your own KEK to further safeguard the data encryption key (Passphrase secret) in your key vault. | | keyVaultResourceGroup | Resource group of the key vault. | In this scenario, you can enable encryption by using a template, PowerShell cmdl Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvmdiskencryptionextension) cmdlet to enable encryption on a running IaaS virtual machine in Azure. For information about enabling encryption with Azure Disk Encryption by using PowerShell cmdlets, see the blog posts [Explore Azure Disk Encryption with Azure PowerShell - Part 1](/archive/blogs/azuresecurity/explore-azure-disk-encryption-with-azure-powershell) and [Explore Azure Disk Encryption with Azure PowerShell - Part 2](/archive/blogs/azuresecurity/explore-azure-disk-encryption-with-azure-powershell-part-2). -- **Encrypt a running VM using a client secret:** The script below initializes your variables and runs the Set-AzVMDiskEncryptionExtension cmdlet. The resource group, VM, key vault, AAD app, and client secret should have already been created as prerequisites. Replace MyKeyVaultResourceGroup, MyVirtualMachineResourceGroup, MySecureVM, MySecureVault, My-AAD-client-ID, and My-AAD-client-secret with your values.+- **Encrypt a running VM using a client secret:** The script below initializes your variables and runs the Set-AzVMDiskEncryptionExtension cmdlet. 
The resource group, VM, key vault, Microsoft Entra app, and client secret should have already been created as prerequisites. Replace MyKeyVaultResourceGroup, MyVirtualMachineResourceGroup, MySecureVM, MySecureVault, My-AAD-client-ID, and My-AAD-client-secret with your values. ```azurepowershell $KVRGname = 'MyKeyVaultResourceGroup'; $VMRGName = 'MyVirtualMachineResourceGroup'; You can enable disk encryption on existing or running IaaS Windows VMs in Azure 2. Select the subscription, resource group, resource group location, parameters, legal terms, and agreement. Click **Purchase** to enable encryption on the existing or running IaaS VM. -The following table lists the Resource Manager template parameters for existing or running VMs that use an Azure AD client ID: +The following table lists the Resource Manager template parameters for existing or running VMs that use a Microsoft Entra client ID: | Parameter | Description | | | |-| AADClientID | Client ID of the Azure AD application that has permissions to write secrets to the key vault. | -| AADClientSecret | Client secret of the Azure AD application that has permissions to write secrets to the key vault. | +| AADClientID | Client ID of the Microsoft Entra application that has permissions to write secrets to the key vault. | +| AADClientSecret | Client secret of the Microsoft Entra application that has permissions to write secrets to the key vault. | | keyVaultName | Name of the key vault that the BitLocker key should be uploaded to. You can get it by using the cmdlet `(Get-AzKeyVault -ResourceGroupName <MyKeyVaultResourceGroupName>). Vaultname` or the Azure CLI command `az keyvault list --resource-group "MySecureGroup"`| | keyEncryptionKeyURL | URL of the key encryption key that's used to encrypt the generated BitLocker key. This parameter is optional if you select **nokek** in the UseExistingKek drop-down list. 
If you select **kek** in the UseExistingKek drop-down list, you must enter the _keyEncryptionKeyURL_ value. | | volumeType | Type of volume that the encryption operation is performed on. Valid values are _OS_, _Data_, and _All_. | You can [add a new disk to a Windows VM using PowerShell](attach-disk-ps.md), or ### Enable encryption on a newly added disk with Azure PowerShell When using PowerShell to encrypt a new disk for Windows VMs, a new sequence version should be specified. The sequence version has to be unique. The script below generates a GUID for the sequence version. In some cases, a newly added data disk might be encrypted automatically by the Azure Disk Encryption extension. Auto encryption usually occurs when the VM reboots after the new disk comes online. This is typically caused because "All" was specified for the volume type when disk encryption previously ran on the VM. If auto encryption occurs on a newly added data disk, we recommend running the Set-AzVmDiskEncryptionExtension cmdlet again with a new sequence version. If your new data disk is auto encrypted and you do not wish it to be encrypted, decrypt all drives first, then re-encrypt with a new sequence version specifying OS for the volume type. -- **Encrypt a running VM using a client secret:** The script below initializes your variables and runs the Set-AzVMDiskEncryptionExtension cmdlet. 
The resource group, VM, key vault, Microsoft Entra app, and client secret should have already been created as prerequisites. Replace MyKeyVaultResourceGroup, MyVirtualMachineResourceGroup, MySecureVM, MySecureVault, My-AAD-client-ID, and My-AAD-client-secret with your values. This example uses "All" for the -VolumeType parameter, which includes both OS and Data volumes. If you only want to encrypt the OS volume, use "OS" for the -VolumeType parameter. ```azurepowershell $sequenceVersion = [Guid]::NewGuid(); https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id] ``` -## Enable encryption using Azure AD client certificate-based authentication. -You can use client certificate authentication with or without KEK. Before using the PowerShell scripts, you should already have the certificate uploaded to the key vault and deployed to the VM. If you're using KEK too, the KEK should already exist. For more information, see the [Certificate-based authentication for Azure AD](disk-encryption-key-vault-aad.md#certificate-based-authentication-optional) section of the prerequisites article. +<a name='enable-encryption-using-azure-ad-client-certificate-based-authentication'></a> ++## Enable encryption using Microsoft Entra client certificate-based authentication. +You can use client certificate authentication with or without KEK. Before using the PowerShell scripts, you should already have the certificate uploaded to the key vault and deployed to the VM. If you're using KEK too, the KEK should already exist. For more information, see the [Certificate-based authentication for Microsoft Entra ID](disk-encryption-key-vault-aad.md#certificate-based-authentication-optional) section of the prerequisites article. ### Enable encryption using certificate-based authentication with Azure PowerShell |
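The certificate-based authentication section above describes the flow but its script is cut off here. A minimal sketch of enabling encryption with a client certificate thumbprint, assuming placeholder names and that the certificate is already uploaded to the key vault and deployed to the VM:

```azurepowershell
# Placeholders throughout; the key vault, VM, and Microsoft Entra app
# certificate must already exist, with the certificate deployed to the VM.
$KVRGname = 'MyKeyVaultResourceGroup'
$VMRGName = 'MyVirtualMachineResourceGroup'
$vmName = 'MySecureVM'
$KeyVaultName = 'MySecureVault'
$certThumbprint = '<your-cert-thumbprint>'

$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname

# Enable encryption using the certificate thumbprint instead of a client secret.
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $vmName `
    -AadClientCertThumbprint $certThumbprint `
    -DiskEncryptionKeyVaultUrl $KeyVault.VaultUri `
    -DiskEncryptionKeyVaultId $KeyVault.ResourceId
```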
virtual-machines | Disk Encryption Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-windows.md | You can only apply disk encryption to virtual machines of [supported VM sizes an - [Encryption key storage requirements](disk-encryption-overview.md#encryption-key-storage-requirements) >[!IMPORTANT]-> - If you have previously used Azure Disk Encryption with Azure AD to encrypt a VM, you must continue to use this option to encrypt your VM. See [Azure Disk Encryption with Azure AD (previous release)](disk-encryption-overview-aad.md) for details. +> - If you have previously used Azure Disk Encryption with Microsoft Entra ID to encrypt a VM, you must continue to use this option to encrypt your VM. See [Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-overview-aad.md) for details. > > - You should [take a snapshot](snapshot-copy-managed-disk.md) and/or create a backup before disks are encrypted. Backups ensure that a recovery option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup before encryption occurs. Once a backup is made, you can use the [Set-AzVMDiskEncryptionExtension cmdlet](/powershell/module/az.compute/set-azvmdiskencryptionextension) to encrypt managed disks by specifying the -skipVmBackup parameter. For more information about how to back up and restore encrypted VMs, see [Back up and restore encrypted Azure VM](../../backup/backup-azure-vms-encryption.md). > In these scenarios, the NVMe disks need to be initialized after the VM starts. 
In addition to the scenarios listed in the [Unsupported Scenarios](#unsupported-scenarios) section, encryption of NVMe disks is not supported for: -- VMs encrypted with Azure Disk Encryption with AAD (previous release)+- VMs encrypted with Azure Disk Encryption with Microsoft Entra ID (previous release) - NVMe disks with storage spaces - Azure Site Recovery of SKUs with NVMe disks (see [Support matrix for Azure VM disaster recovery between Azure regions: Replicated machines - storage](../../site-recovery/azure-to-azure-support-matrix.md#replicated-machinesstorage)). |
virtual-machines | Disks Enable Customer Managed Keys Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-enable-customer-managed-keys-powershell.md | Update-AzDiskEncryptionSet -Name $diskEncryptionSetName -ResourceGroupName $Reso [!INCLUDE [virtual-machines-disks-encryption-status-powershell](../../../includes/virtual-machines-disks-encryption-status-powershell.md)] > [!IMPORTANT]-> Customer-managed keys rely on managed identities for Azure resources, a feature of Azure Active Directory (Azure AD). When you configure customer-managed keys, a managed identity is automatically assigned to your resources under the covers. If you subsequently move the subscription, resource group, or managed disk from one Azure AD directory to another, the managed identity associated with the managed disks is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see [Transferring a subscription between Azure AD directories](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories). +> Customer-managed keys rely on managed identities for Azure resources, a feature of Microsoft Entra ID. When you configure customer-managed keys, a managed identity is automatically assigned to your resources under the covers. If you subsequently move the subscription, resource group, or managed disk from one Microsoft Entra directory to another, the managed identity associated with the managed disks is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see [Transferring a subscription between Microsoft Entra directories](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories). ## Next steps |
virtual-machines | Disks Enable Double Encryption At Rest Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-enable-double-encryption-at-rest-powershell.md | Install the latest [Azure PowerShell version](/powershell/azure/install-azure-po 1. Grant the DiskEncryptionSet resource access to the key vault. > [!NOTE]- > It may take a few minutes for Azure to create the identity of your DiskEncryptionSet in your Azure Active Directory. If you get an error like "Cannot find the Active Directory object" when running the following command, wait a few minutes and try again. + > It may take a few minutes for Azure to create the identity of your DiskEncryptionSet in your Microsoft Entra tenant. If you get an error like "Cannot find the Active Directory object" when running the following command, wait a few minutes and try again. ```powershell $des=Get-AzDiskEncryptionSet -name $diskEncryptionSetName -ResourceGroupName $ResourceGroupName |
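The `Get-AzDiskEncryptionSet` snippet quoted above is truncated before the step it feeds into. A sketch of the usual completion — granting the disk encryption set's managed identity key permissions on the vault (variable names assumed from the snippet; applies to vaults using the access-policy permission model):

```azurepowershell
# Retrieve the disk encryption set, then grant its managed identity
# wrap/unwrap/get permissions on the key vault.
$des = Get-AzDiskEncryptionSet -Name $diskEncryptionSetName -ResourceGroupName $ResourceGroupName

Set-AzKeyVaultAccessPolicy -VaultName $keyVaultName `
    -ObjectId $des.Identity.PrincipalId `
    -PermissionsToKeys wrapkey, unwrapkey, get
```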
virtual-machines | Disks Upload Vhd To Managed Disk Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md | This article explains how to either upload a VHD from your local machine to an A If you're providing a backup solution for IaaS VMs in Azure, you should use direct upload to restore customer backups to managed disks. When uploading a VHD from a source external to Azure, speeds depend on your local bandwidth. When uploading or copying from an Azure VM, your bandwidth would be the same as standard HDDs. -## Secure uploads with Azure AD +<a name='secure-uploads-with-azure-ad'></a> -If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is available as a GA offering in all regions. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD, and confirms that the user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level to ensure that an Azure AD identity has the necessary permissions for uploading before allowing a disk or a disk snapshot to be uploaded. If you have any questions on securing uploads with Azure AD, reach out to this email: azuredisks@microsoft.com +## Secure uploads with Microsoft Entra ID ++If you're using [Microsoft Entra ID](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is available as a GA offering in all regions. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Microsoft Entra ID, and confirms that the user has the required permissions. 
At a higher level, a system administrator could set a policy at the Azure account or subscription level to ensure that a Microsoft Entra identity has the necessary permissions for uploading before allowing a disk or a disk snapshot to be uploaded. If you have any questions on securing uploads with Microsoft Entra ID, reach out to this email: azuredisks@microsoft.com ### Prerequisites [!INCLUDE [disks-azure-ad-upload-download-prereqs](../../../includes/disks-azure-ad-upload-download-prereqs.md)] If you're using [Azure Active Directory (Azure AD)](../../active-directory/funda ### Assign RBAC role -To access managed disks secured with Azure AD, the requesting user must have either the [Data Operator for Managed Disks](../../role-based-access-control/built-in-roles.md#data-operator-for-managed-disks) role, or a [custom role](../../role-based-access-control/custom-roles-powershell.md) with the following permissions: +To access managed disks secured with Microsoft Entra ID, the requesting user must have either the [Data Operator for Managed Disks](../../role-based-access-control/built-in-roles.md#data-operator-for-managed-disks) role, or a [custom role](../../role-based-access-control/custom-roles-powershell.md) with the following permissions: - **Microsoft.Compute/disks/download/action** - **Microsoft.Compute/disks/upload/action** For guidance on how to copy a managed disk from one region to another, see [Copy ### (Optional) Grant access to the disk -If Azure AD is used to enforce upload restrictions on a subscription or at the account level, [Add-AzVHD](/powershell/module/az.compute/add-azvhd) only succeeds if attempted by a user that has the [appropriate RBAC role or necessary permissions](#assign-rbac-role). You'll need to [assign RBAC permissions](../../role-based-access-control/role-assignments-powershell.md) to grant access to the disk and generate a writeable SAS. 
+If Microsoft Entra ID is used to enforce upload restrictions on a subscription or at the account level, [Add-AzVHD](/powershell/module/az.compute/add-azvhd) only succeeds if attempted by a user that has the [appropriate RBAC role or necessary permissions](#assign-rbac-role). You'll need to [assign RBAC permissions](../../role-based-access-control/role-assignments-powershell.md) to grant access to the disk and generate a writeable SAS. ```azurepowershell New-AzRoleAssignment -SignInName <emailOrUserprincipalname> ` New-AzRoleAssignment -SignInName <emailOrUserprincipalname> ` The following example uploads a VHD from your local machine to a new Azure managed disk using [Add-AzVHD](/powershell/module/az.compute/add-azvhd). Replace `<your-filepath-here>`, `<your-resource-group-name>`,`<desired-region>`, and `<desired-managed-disk-name>` with your parameters: > [!NOTE]-> If you're using Azure AD to enforce upload restrictions, add `DataAccessAuthMode 'AzureActiveDirectory'` to the end of your `Add-AzVhd` command. +> If you're using Microsoft Entra ID to enforce upload restrictions, add `DataAccessAuthMode 'AzureActiveDirectory'` to the end of your `Add-AzVhd` command. ```azurepowershell # Required parameters Replace `<yourdiskname>`, `<yourresourcegroupname>`, and `<yourregion>` then run > [!IMPORTANT] > If you're creating an OS disk, add `-HyperVGeneration '<yourGeneration>'` to `New-AzDiskConfig`. > -> If you're using Azure AD to secure your uploads, add `-dataAccessAuthMode 'AzureActiveDirectory'` to `New-AzDiskConfig`. +> If you're using Microsoft Entra ID to secure your uploads, add `-dataAccessAuthMode 'AzureActiveDirectory'` to `New-AzDiskConfig`. > When uploading to an Ultra Disk or Premium SSD v2 you need to select the correct sector size of the target disk. If you're using a VHDX file with a 4k logical sector size, the target disk must be set to 4k. If you're using a VHD file with a 512 logical sector size, the target disk must be set to 512. 
> > VHDX files with logical sector size of 512k aren't supported. Now that you've successfully uploaded a VHD to a managed disk, you can attach yo To learn how to attach a data disk to a VM, see our article on the subject: [Attach a data disk to a Windows VM with PowerShell](attach-disk-ps.md). To use the disk as the OS disk, see [Create a Windows VM from a specialized disk](create-vm-specialized.md#create-the-new-vm). -If you have additional questions, see the section on [uploading a managed disk](../faq-for-disks.yml#uploading-to-a-managed-disk) in the FAQ. +If you have additional questions, see the section on [uploading a managed disk](../faq-for-disks.yml#uploading-to-a-managed-disk) in the FAQ. |
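The manual upload flow this article describes — create an empty managed disk with `-CreateOption 'Upload'`, generate a writeable SAS, then copy the VHD into it — can be sketched as follows. Names, region, and SKU are placeholders:

```azurepowershell
# Placeholders throughout. UploadSizeInBytes must match the VHD file's exact size.
$vhdPath = 'C:\temp\mydisk.vhd'
$vhdSizeBytes = (Get-Item $vhdPath).Length

$diskConfig = New-AzDiskConfig -SkuName 'Standard_LRS' -OsType 'Windows' `
    -UploadSizeInBytes $vhdSizeBytes -Location 'westus2' -CreateOption 'Upload' `
    -DataAccessAuthMode 'AzureActiveDirectory'  # include only if securing uploads with Microsoft Entra ID

New-AzDisk -ResourceGroupName 'MyResourceGroup' -DiskName 'MyUploadedDisk' -Disk $diskConfig

# Generate a writeable SAS, then copy the VHD into the disk with AzCopy.
$sas = Grant-AzDiskAccess -ResourceGroupName 'MyResourceGroup' -DiskName 'MyUploadedDisk' `
    -DurationInSecond 86400 -Access 'Write'
azcopy copy $vhdPath $sas.AccessSAS --blob-type PageBlob
```

After the copy completes, revoke access with `Revoke-AzDiskAccess` before attaching the disk.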
virtual-machines | Download Vhd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/download-vhd.md | Your snapshot will be created shortly, and can then be used to download or creat > This method is only recommended for VMs with a single OS disk. VMs with one or more data disks should be stopped before download or before creating a snapshot for the OS disk and each data disk. -## Secure downloads and uploads with Azure AD +<a name='secure-downloads-and-uploads-with-azure-ad'></a> ++## Secure downloads and uploads with Microsoft Entra ID [!INCLUDE [disks-azure-ad-upload-download-portal](../../../includes/disks-azure-ad-upload-download-portal.md)] az disk grant-access --duration-in-seconds 86400 --access-level Read --name your ## Download VHD > [!NOTE]-> If you're using Azure AD to secure managed disk downloads, the user downloading the VHD must have the appropriate [RBAC permissions](#assign-rbac-role). +> If you're using Microsoft Entra ID to secure managed disk downloads, the user downloading the VHD must have the appropriate [RBAC permissions](#assign-rbac-role). # [Portal](#tab/azure-portal) When the download finishes, revoke access to your disk using `Revoke-AzDiskAcces Replace `yourPathhere` and `sas-URI` with your values, then use the following script to download your VHD: > [!NOTE]-> If you're using Azure AD to secure your managed disk uploads and downloads, add `--auth-mode login` to `az storage blob download`. +> If you're using Microsoft Entra ID to secure your managed disk uploads and downloads, add `--auth-mode login` to `az storage blob download`. ```azurecli |
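The download flow above — grant read access to get a SAS, then download the blob — can be sketched end to end with the Azure CLI. Resource group, disk, and file names are placeholders:

```azurecli
# Generate a read SAS for the disk (valid 24 hours).
sas=$(az disk grant-access --resource-group myRG --name myDisk \
      --duration-in-seconds 86400 --access-level Read --query accessSas -o tsv)

# Download the VHD; add --auth-mode login if the disk is secured with Microsoft Entra ID.
az storage blob download --blob-url "$sas" --file ./myDisk.vhd

# Revoke the SAS when the download completes.
az disk revoke-access --resource-group myRG --name myDisk
```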
virtual-machines | Prepare For Upload Vhd Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/prepare-for-upload-vhd-image.md | Make sure the following settings are configured correctly for remote access: > [!IMPORTANT] > 168.63.129.16 is a special public IP address that is owned by Microsoft for Azure. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). -1. If the VM is part of a domain, check the following Azure AD policies to make sure the previous +1. If the VM is part of a domain, check the following Microsoft Entra policies to make sure the previous settings aren't reverted. | Goal | Policy | Value | Make sure the VM is healthy, secure, and RDP accessible: - `Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment` -1. Check the following Azure AD policies to make sure they're not blocking RDP access: +1. Check the following Microsoft Entra policies to make sure they're not blocking RDP access: - `Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment\Deny access to this computer from the network` Make sure the VM is healthy, secure, and RDP accessible: - `Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment\Deny log on through Remote Desktop Services` -1. Check the following Azure AD policy to make sure they're not removing any of the required access +1. Check the following Microsoft Entra policy to make sure they're not removing any of the required access accounts: - `Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment\Access this computer from the network` |
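The policy checks above can be complemented by verifying the RDP configuration directly in the registry before generalizing and uploading the VHD. A minimal sketch (the values shown are the defaults Azure expects):

```azurepowershell
# RDP must be enabled: fDenyTSConnections should be 0.
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name fDenyTSConnections

# RDP should listen on the default port: PortNumber should be 3389.
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name PortNumber
```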
virtual-machines | Tutorial Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/tutorial-disaster-recovery.md | If you don't have an Azure subscription, create a [free account](https://azure.m **Name** | **Public cloud** | **Government cloud** | **Details** | | | Storage | `*.blob.core.windows.net` | `*.blob.core.usgovcloudapi.net`| Write data from the VM to the cache storage account in the source region.- Azure AD | `login.microsoftonline.com` | `login.microsoftonline.us`| Authorize and authenticate to Site Recovery service URLs. + Microsoft Entra ID | `login.microsoftonline.com` | `login.microsoftonline.us`| Authorize and authenticate to Site Recovery service URLs. Replication | `*.hypervrecoverymanager.windowsazure.com` | `*.hypervrecoverymanager.windowsazure.com` |VM communication with the Site Recovery service. Service Bus | `*.servicebus.windows.net` | `*.servicebus.usgovcloudapi.net` | VM writes to Site Recovery monitoring and diagnostic data. If you don't have an Azure subscription, create a [free account](https://azure.m **Tag** | **Allow** | Storage tag | Allows data to be written from the VM to the cache storage account.- Azure AD tag | Allows access to all IP addresses that correspond to Azure AD. + Microsoft Entra ID tag | Allows access to all IP addresses that correspond to Microsoft Entra ID. EventsHub tag | Allows access to Site Recovery monitoring. AzureSiteRecovery tag | Allows access to the Site Recovery service in any region. GuestAndHybridManagement | Use if you want to automatically upgrade the Site Recovery Mobility agent that's running on VMs enabled for replication. If you want to enable disaster recovery on an existing VM instead of for a new V :::image type="content" source="./media/tutorial-disaster-recovery/existing-vm.png" alt-text="Open disaster recovery options for an existing VM."::: 3. 
In **Basics**, if the VM is deployed in an availability zone, you can select disaster recovery between availability zones.-4. In **Target region**, select the region to which you want to replicate the VM. The source and target regions must be in the same Azure Active Directory tenant. +4. In **Target region**, select the region to which you want to replicate the VM. The source and target regions must be in the same Microsoft Entra tenant. :::image type="content" source="./media/tutorial-disaster-recovery/basics.png" alt-text="Set the basic disaster recovery options for a VM."::: |
virtual-machines | Windows Desktop Multitenant Hosting Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/windows-desktop-multitenant-hosting-deployment.md | Location : westus LicenseType : ``` -## Additional Information about joining Azure Active Directory -Azure provisions all Windows VMs with a built-in administrator account, which can't be used to join Azure Active Directory. For example, *Settings > Account > Access Work or School > + Connect* won't work. You must create and log on as a second administrator account to join Azure AD manually. You can also configure Azure AD using a provisioning package; use the link in the *Next Steps* section to learn more. +<a name='additional-information-about-joining-azure-active-directory'></a> ++## Additional Information about joining Microsoft Entra ID +Azure provisions all Windows VMs with a built-in administrator account, which can't be used to join Microsoft Entra ID. For example, *Settings > Account > Access Work or School > + Connect* won't work. You must create and log on as a second administrator account to join Microsoft Entra ID manually. You can also configure Microsoft Entra ID using a provisioning package; use the link in the *Next Steps* section to learn more. ## Next Steps - Learn more about [Configuring VDA for Windows 11](/windows/deployment/vda-subscription-activation) |
virtual-machines | Configure Azure Oci Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-azure-oci-networking.md | Once you have completed the network configuration, you can verify your configura ## Automation -Microsoft has created Terraform scripts to enable automated deployment of the network interconnect. The Terraform scripts need to authenticate with Azure before they run, because they require adequate permissions on the Azure subscription. Authentication can be performed using an [Azure Active Directory service principal](../../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) or using the Azure CLI. For more information, see [CLI Authentication](https://www.terraform.io/cli/auth). +Microsoft has created Terraform scripts to enable automated deployment of the network interconnect. The Terraform scripts need to authenticate with Azure before they run, because they require adequate permissions on the Azure subscription. Authentication can be performed using a [Microsoft Entra service principal](../../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) or using the Azure CLI. For more information, see [CLI Authentication](https://www.terraform.io/cli/auth). For the Terraform scripts and related documentation to deploy the inter-connect, see [Azure-OCI Cloud Inter-Connect](https://aka.ms/azureociinterconnecttf). |
virtual-machines | Deploy Application Oracle Database Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/deploy-application-oracle-database-azure.md | As Oracle applications move on Azure IaaS, there are common design consideration The provided network settings for Oracle Applications on Azure cover various aspects of network and security considerations. Here's a breakdown of the recommended network settings: -- Single sign-on (SSO) with Azure AD and SAML: Use [Azure AD for single sign-on (SSO)](https://learn.microsoft.com/azure/active-directory/manage-apps/what-is-single-sign-on) using the Security Assertions Markup Language (SAML) protocol. This SSO allows users to authenticate once and access multiple services seamlessly.-- Azure AD Application Proxy: Consider using [Azure AD Application Proxy](https://learn.microsoft.com/azure/active-directory/app-proxy/application-proxy), especially for remote users. This proxy allows you to securely access on-premises applications from outside your network.+- Single sign-on (SSO) with Microsoft Entra ID and SAML: Use [Microsoft Entra ID for single sign-on (SSO)](https://learn.microsoft.com/azure/active-directory/manage-apps/what-is-single-sign-on) using the Security Assertions Markup Language (SAML) protocol. This SSO allows users to authenticate once and access multiple services seamlessly. +- Microsoft Entra application proxy: Consider using [Microsoft Entra application proxy](https://learn.microsoft.com/azure/active-directory/app-proxy/application-proxy), especially for remote users. This proxy allows you to securely access on-premises applications from outside your network. - Routing Internal Users through [ExpressRoute](https://learn.microsoft.com/azure/expressroute/expressroute-introduction): For internal users, route traffic through Azure ExpressRoute for a dedicated, private connection to Azure services, ensuring low-latency and secure communication. 
- Azure Firewall: If necessary, you can configure [Azure Firewall](https://learn.microsoft.com/azure/architecture/example-scenario/gateway/application-gateway-before-azure-firewall) in front of your application for added security. Azure Firewall helps protect your resources from unauthorized access and threats. - Application Gateway for External Users: When external users need to access your application, consider using [Azure Application Gateway](https://learn.microsoft.com/azure/application-gateway/overview). It supplies Web Application Firewall (WAF) capabilities for protecting your web applications and Layer 7 load balancing to distribute traffic. Database Tier - The primary and replicated to a secondary should stay within one [Reference architectures for Oracle Database](oracle-reference-architecture.md) [Migrate Oracle workload to Azure Virtual Machines](oracle-migration.md)- |
virtual-machines | Oracle Database Backup Strategies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-strategies.md | Blobfuse is ubiquitous across Azure regions and works with all storage account t Azure support for the NFS v3.0 protocol is available. [NFS support](../../../storage/blobs/network-file-system-protocol-support.md) enables Windows and Linux clients to mount an Azure Blob Storage container to an Azure VM. -To ensure network security, the storage account that you use for NFS mounting must be contained within a virtual network. Azure Active Directory (Azure AD) security and access control lists (ACLs) are not yet supported in accounts that have NFS 3.0 protocol support enabled on them. +To ensure network security, the storage account that you use for NFS mounting must be contained within a virtual network. Microsoft Entra security and access control lists (ACLs) are not yet supported in accounts that have NFS 3.0 protocol support enabled on them. ### Azure Files |
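As a sketch of what the NFS 3.0 mount looks like from a Linux VM inside the virtual network (the storage account and container names here are hypothetical):

```shell
# Hypothetical storage account and container names -- substitute your own.
ACCOUNT="oradbbackups"
CONTAINER="rmanbackups"

# NFS 3.0 exposes a blob container at <account>.blob.core.windows.net:/<account>/<container>.
MOUNT_SRC="${ACCOUNT}.blob.core.windows.net:/${ACCOUNT}/${CONTAINER}"

# On the VM (requires NFS 3.0 enabled on the account and VNet containment):
#   sudo mkdir -p /mnt/rmanbackups
#   sudo mount -o sec=sys,vers=3,nolock,proto=tcp "$MOUNT_SRC" /mnt/rmanbackups
echo "$MOUNT_SRC"
```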
virtual-machines | Oracle Oci Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-oci-applications.md | -Microsoft and Oracle have worked together to enable customers to deploy Oracle applications such as Oracle E-Business Suite, JD Edwards EnterpriseOne, and PeopleSoft in the cloud. With the introduction of the preview [private network interconnectivity](configure-azure-oci-networking.md) between Microsoft Azure and Oracle Cloud Infrastructure (OCI), Oracle applications can now be deployed on Azure with their back-end databases in Azure or OCI. Oracle applications can also be integrated with Azure Active Directory, allowing you to set up single sign-on so that users can sign into the Oracle application using their Azure Active Directory (Azure AD) credentials. +Microsoft and Oracle have worked together to enable customers to deploy Oracle applications such as Oracle E-Business Suite, JD Edwards EnterpriseOne, and PeopleSoft in the cloud. With the introduction of the preview [private network interconnectivity](configure-azure-oci-networking.md) between Microsoft Azure and Oracle Cloud Infrastructure (OCI), Oracle applications can now be deployed on Azure with their back-end databases in Azure or OCI. Oracle applications can also be integrated with Microsoft Entra ID, allowing you to set up single sign-on so that users can sign into the Oracle application using their Microsoft Entra credentials. OCI offers multiple Oracle database options for Oracle applications, including DBaaS, Exadata Cloud Service, Oracle RAC, and Infrastructure-as-a-Service (IaaS). Currently, Autonomous Database isn't a supported back-end for Oracle applications. Microsoft and Oracle recommend a high availability setup. High availability in A ### Identity tier -The identity tier contains the EBS Asserter VM. EBS Asserter allows you to synchronize identities from Oracle Identity Cloud Service (IDCS) and Azure AD. 
The EBS Asserter is needed because EBS doesn't support single sign-on protocols like SAML 2.0 or OpenID Connect. The EBS Asserter consumes the OpenID connect token (generated by IDCS), validates it, and then creates a session for the user in EBS. +The identity tier contains the EBS Asserter VM. EBS Asserter allows you to synchronize identities between Oracle Identity Cloud Service (IDCS) and Microsoft Entra ID. The EBS Asserter is needed because EBS doesn't support single sign-on protocols like SAML 2.0 or OpenID Connect. The EBS Asserter consumes the OpenID Connect token (generated by IDCS), validates it, and then creates a session for the user in EBS. -While this architecture shows IDCS integration, Azure AD unified access and single sign-on also can be enabled with Oracle Access Manager with Oracle Internet Directory or Oracle Unified Directory. For more information, see the whitepapers on [Deploying Oracle EBS with IDCS Integration](https://www.oracle.com/a/ocom/docs/deploy-ebusiness-suite-across-oci-azure-sso-idcs.pdf) or [Deploying Oracle EBS with OAM Integration](https://www.oracle.com/a/ocom/docs/deploy-ebusiness-suite-across-oci-azure-sso-oam.pdf). +While this architecture shows IDCS integration, Microsoft Entra ID unified access and single sign-on can also be enabled with Oracle Access Manager with Oracle Internet Directory or Oracle Unified Directory. For more information, see the whitepapers on [Deploying Oracle EBS with IDCS Integration](https://www.oracle.com/a/ocom/docs/deploy-ebusiness-suite-across-oci-azure-sso-idcs.pdf) or [Deploying Oracle EBS with OAM Integration](https://www.oracle.com/a/ocom/docs/deploy-ebusiness-suite-across-oci-azure-sso-oam.pdf). For high availability, it's recommended that you deploy redundant servers of the EBS Asserter across multiple availability zones with a load balancer in front of it. |
virtual-machines | Oracle Oci Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-oci-overview.md | The [WebLogic Server Azure Applications](oracle-weblogic.md) each create a netwo ## Identity -Identity is one of the core pillars of the partnership between Microsoft and Oracle. Significant work has been done to integrate [Oracle Identity Cloud Service](https://docs.oracle.com/en/cloud/paas/identity-cloud/index.html) (IDCS) with [Azure Active Directory](../../../active-directory/index.yml) (Azure AD). Azure AD is Microsoft's cloud-based identity and access management service. Your users can sign in and access various resources with help from Azure AD. Azure AD also allows you to manage your users and their permissions. +Identity is one of the core pillars of the partnership between Microsoft and Oracle. Significant work has been done to integrate [Oracle Identity Cloud Service](https://docs.oracle.com/en/cloud/paas/identity-cloud/index.html) (IDCS) with [Microsoft Entra ID](../../../active-directory/index.yml). Microsoft Entra ID is Microsoft's cloud-based identity and access management service. Your users can sign in and access various resources with help from Microsoft Entra ID. Microsoft Entra ID also allows you to manage your users and their permissions. -Currently, this integration allows you to manage in one central location. Azure AD synchronizes any changes in the directory with the corresponding Oracle directory and is used for single sign-on to cross-cloud Oracle solutions. +Currently, this integration allows you to manage identities in one central location. Microsoft Entra ID synchronizes any changes in the directory with the corresponding Oracle directory and is used for single sign-on to cross-cloud Oracle solutions. ## Next steps |
virtual-machines | Oracle Weblogic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-weblogic.md | You can also run WLS on the Azure Kubernetes Service. The solutions to do so are WLS is a leading Java application server running some of the most mission-critical enterprise Java applications across the globe. WLS forms the middleware foundation for the Oracle software suite. Oracle and Microsoft are committed to empowering WLS customers with choice and flexibility to run workloads on Azure as a leading cloud platform. -The Azure WLS solutions are aimed at making it as easy as possible to migrate your Java applications to Azure virtual machines. The solutions do so by generating deployed resources for most common cloud provisioning scenarios. The solutions automatically provision virtual network, storage, Java, WLS, and Linux resources. With minimal effort, WebLogic Server is installed. The solutions can set up security with a network security group, load balancing with Azure App Gateway or Oracle HTTP Server, authentication with Azure Active Directory, centralized logging using ELK and distributed caching with Oracle Coherence. You can also automatically connect to your existing database including Azure PostgreSQL, Azure SQL, and the Oracle Database on the Oracle Cloud or Azure. +The Azure WLS solutions are aimed at making it as easy as possible to migrate your Java applications to Azure virtual machines. The solutions do so by generating deployed resources for most common cloud provisioning scenarios. The solutions automatically provision virtual network, storage, Java, WLS, and Linux resources. With minimal effort, WebLogic Server is installed. The solutions can set up security with a network security group, load balancing with Azure App Gateway or Oracle HTTP Server, authentication with Microsoft Entra ID, centralized logging using ELK and distributed caching with Oracle Coherence. 
You can also automatically connect to your existing database, including Azure PostgreSQL, Azure SQL, and the Oracle Database on the Oracle Cloud or Azure. :::image type="content" source="media/oracle-weblogic/wls-on-azure.gif" alt-text="You can use the Azure portal to deploy WebLogic Server on Azure"::: |
virtual-network | Create Peering Different Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md | You need the **Resource ID** for **vnet-2** from the previous steps to set up th | - | -- | | **This virtual network** | | | Peering link name | Enter **vnet-1-to-vnet-2**. |- | Allow access to remote virtual network | Leave the default of selected. | - | Allow traffic to remote virtual network | Select the checkbox. | - | Allow traffic forwarded from the remote virtual network (allow gateway transit) | Leave the default of cleared. | + | Allow 'vnet-1' to access 'vnet-2' | Leave the default of selected. | + | Allow 'vnet-1' to receive forwarded traffic from 'vnet-2' | Select the checkbox. | + | Allow gateway in 'vnet-1' to forward traffic to 'vnet-2' | Leave the default of cleared. | + | Enable 'vnet-1' to use 'vnet-2' remote gateway | Leave the default of cleared. | | Use remote virtual network gateway or route server | Leave the default of cleared. | | **Remote virtual network** | | | Peering link name | Leave blank. | You need the **Resource IDs** for **vnet-1** from the previous steps to set up t | - | -- | | **This virtual network** | | | Peering link name | Enter **vnet-2-to-vnet-1**. |- | Allow access to remote virtual network | Leave the default of selected. | - | Allow traffic to remote virtual network | Select the checkbox. | - | Allow traffic forwarded from the remote virtual network (allow gateway transit) | Leave the default of cleared. | - | Use remote virtual network gateway or route server | Leave the default of cleared. | + | Allow 'vnet-2' to access 'vnet-1' | Leave the default of selected. | + | Allow 'vnet-2' to receive forwarded traffic from 'vnet-1' | Select the checkbox. | + | Allow gateway in 'vnet-2' to forward traffic to 'vnet-1' | Leave the default of cleared. | + | Enable 'vnet-2' to use 'vnet-1's' remote gateway | Leave the default of cleared. 
| | **Remote virtual network** | | | Peering link name | Leave blank. | | Virtual network deployment model | Select **Resource manager**. | |
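The portal settings above can also be scripted. As a sketch (the subscription ID and resource group names below are placeholders), the access and forwarded-traffic checkboxes map to the `--allow-vnet-access` and `--allow-forwarded-traffic` flags of `az network vnet peering create`:

```shell
# Placeholder subscription ID and resource group for the remote (vnet-2) side.
SUB_2="22222222-2222-2222-2222-222222222222"
REMOTE_VNET_ID="/subscriptions/${SUB_2}/resourceGroups/test-rg-2/providers/Microsoft.Network/virtualNetworks/vnet-2"

# Creates the vnet-1 side of the peering; run the mirror-image command
# against vnet-2 in the other subscription to complete the peering:
#   az network vnet peering create \
#     --name vnet-1-to-vnet-2 \
#     --resource-group test-rg-1 \
#     --vnet-name vnet-1 \
#     --remote-vnet "$REMOTE_VNET_ID" \
#     --allow-vnet-access \
#     --allow-forwarded-traffic
echo "$REMOTE_VNET_ID"
```

Because the peering crosses subscriptions, `--remote-vnet` must be the full resource ID rather than just the virtual network name.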