Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Configure Authentication Sample Python Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md | To create the web app registration, follow these steps: ## Step 3: Get the web app sample -[Download the zip file](https://github.com/Azure-Samples/ms-identity-python-webapp/archive/master.zip), or clone the sample web application from GitHub. +[Download the zip file](https://github.com/Azure-Samples/ms-identity-python-webapp/archive/main.zip), or clone the sample web application from GitHub. ```bash git clone https://github.com/Azure-Samples/ms-identity-python-webapp.git |
active-directory-b2c | Integrate With App Code Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/integrate-with-app-code-samples.md | Title: Azure Active Directory B2C integrate with app samples + Title: Azure Active Directory B2C integrates with app samples description: Code samples for integrating Azure AD B2C to mobile, desktop, web, and single-page applications. The following tables provide links to samples for applications including iOS, An | [dotnetcore-webapp-msal-api](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-2-B2C) | An ASP.NET Core web application that can sign in a user using Azure AD B2C, get an access token using MSAL.NET and call an API. | | [auth-code-flow-nodejs](https://github.com/Azure-Samples/active-directory-b2c-msal-node-sign-in-sign-out-webapp) | A Node.js app that shows how to enable authentication (sign in, sign out and profile edit) in a Node.js web application using Azure Active Directory B2C. The web app uses MSAL-node.| | [javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) | A small Node.js Web API for Azure AD B2C that shows how to protect your web api and accept B2C access tokens using passport.js. |-| [ms-identity-python-webapp](https://github.com/Azure-Samples/ms-identity-python-webapp/blob/master/README_B2C.md) | Demonstrate how to Integrate B2C of Microsoft identity platform with a Python web application. | +| [ms-identity-python-webapp](https://github.com/Azure-Samples/ms-identity-python-webapp/blob/main/README_B2C.md) | Demonstrate how to Integrate B2C of Microsoft identity platform with a Python web application. | ## Single page apps |
active-directory-b2c | Security Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/security-architecture.md | + + Title: Security architecture in Azure AD B2C ++description: End to end guidance on how to secure your Azure AD B2C solution. +++++++ Last updated : 05/09/2023+++++# How to secure your Azure Active Directory B2C identity solution ++This article provides the best practices in securing your Azure Active Directory B2C (Azure AD B2C) solution. To build your identity solution using Azure AD B2C involves many components that you should consider protecting and monitoring. ++Depending on your solution, you have one or more of the following components in scope: ++- [Azure AD B2C authentication endpoints](./protocols-overview.md) +- [Azure AD B2C user flows or custom policies](./user-flow-overview.md) + - Sign in + - Sign up +- Email One-time-password (OTP) +- Multifactor authentication (MFA) controls +- External REST APIs ++You must protect and monitor all these components to ensure your users can sign in to applications without disruption. Follow the guidance in this article to protect your solution from bot attacks, fraudulent account creation, international revenue share fraud (ISRF), and password spray. ++## How to secure your solution ++Your identity solution uses multiple components to provide a smooth sign in experience. The following table shows protection mechanisms we recommend for each component. ++|Component |Endpoint|Why|How to protect| +|-|-|-|-| +|Azure AD B2C authentication endpoints|`/authorize`, `/token`, `/.well-known/openid-configuration`, `/discovery/v2.0/keys`|Prevent resource exhaustion|[Web Application Firewall (WAF)](./partner-azure-web-application-firewall.md) and [Azure Front Door (AFD)](https://azure.microsoft.com/products/frontdoor/?ef_id=_k_53b0ace78faa14e3c3b1c8b385bf944d_k_&OCID=AIDcmm5edswduu_SEM__k_53b0ace78faa14e3c3b1c8b385bf944d_k_&msclkid=53b0ace78faa14e3c3b1c8b385bf944d)| +|Sign-in|NA|Malicious sign-in's may try to brute force accounts or use leaked credentials|[Identity Protection](/azure/active-directory/identity-protection/overview-identity-protection)| +|Sign-up|NA|Fraudulent sign-up's that may try to exhaust resources.|[Endpoint protection](https://www.microsoft.com/security/business/endpoint-security/microsoft-defender-business-b?ef_id=_k_22063a2ad7b719a498ec5e7edc5d6500_k_&OCID=AIDcmm7ol8ekjr_SEM__k_22063a2ad7b719a498ec5e7edc5d6500_k_&msclkid=22063a2ad7b719a498ec5e7edc5d6500)<br> Fraud prevention technologies, such as [Dynamics Fraud Protection](./partner-dynamics-365-fraud-protection.md)| +|Email OTP|NA|Fraudulent attempts to brute force or exhaust resources|[Endpoint protection](https://www.microsoft.com/security/business/endpoint-security/microsoft-defender-business-b?ef_id=_k_22063a2ad7b719a498ec5e7edc5d6500_k_&OCID=AIDcmm7ol8ekjr_SEM__k_22063a2ad7b719a498ec5e7edc5d6500_k_&msclkid=22063a2ad7b719a498ec5e7edc5d6500) and [Authenticator App](/azure/active-directory/authentication/concept-authentication-authenticator-app)| +|Multifactor authentication controls|NA|Unsolicited phone calls or SMS messages or resource exhaustion.|[Endpoint protection](https://www.microsoft.com/security/business/endpoint-security/microsoft-defender-business-b?ef_id=_k_22063a2ad7b719a498ec5e7edc5d6500_k_&OCID=AIDcmm7ol8ekjr_SEM__k_22063a2ad7b719a498ec5e7edc5d6500_k_&msclkid=22063a2ad7b719a498ec5e7edc5d6500) and [Authenticator App](/azure/active-directory/authentication/concept-authentication-authenticator-app)| 
+|External REST APIs|Your REST API endpoints|Malicious usage of user flows or custom policies can lead to resource exhaustion at your API endpoints.|[WAF](./partner-azure-web-application-firewall.md) and [AFD](https://azure.microsoft.com/products/frontdoor/?ef_id=_k_921daffd3bd81af80dd9cba9348858c4_k_&OCID=AIDcmm5edswduu_SEM__k_921daffd3bd81af80dd9cba9348858c4_k_&msclkid=921daffd3bd81af80dd9cba9348858c4)| ++### Protection mechanisms ++The following table provides an overview of the different protection mechanisms you can use to protect different components. ++|What |Why |How| +|-|-|-| +|Web Application Firewall (WAF)|WAF serves as the first layer of defense against malicious requests made to Azure AD B2C endpoints. It provides a centralized protection against common exploits and vulnerabilities such as DDoS, bots, OWASP Top 10, and so on. It's advised that you use WAF to ensure that malicious requests are stopped even before they reach Azure AD B2C endpoints. </br></br> To enable WAF, you must first [enable custom domains in Azure AD B2C using AFD](custom-domain.md?pivots=b2c-custom-policy).|<ul><li>[Configure Cloudflare WAF](./partner-cloudflare.md)</li></br><li>[Configure Akamai WAF](./partner-akamai.md)</li></ul>| +|Azure Front Door (AFD)| AFD is a global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. The key capabilities of AFD are:<ul><li>You can add or remove custom domains in a self-service fashion </li><li>Streamlined certificate management experience</li><li>You can bring your own certificate and get alert for certificate expiry with good rotation experience via [Azure Key Vault](https://azure.microsoft.com/products/key-vault/)</li><li>AFD-provisioned certificate for quicker provisioning and autorotation on expiry </li> </ul>|<ul><li> [Enable custom domains for Azure Active Directory B2C](./custom-domain.md)</li><ul>| +|Identity Verification & Proofing / Fraud Protection|Identity verification and proofing are critical for creating a trusted user experience and protecting against account takeover and fraudulent account creation. It also contributes to tenant hygiene by ensuring that user objects reflect the actual users, which align with business scenarios. </br></br>Azure AD B2C allows the integration of identity verification and proofing, and fraud protection from various software-vendor partners.| <ul><li> [Integrate with identity verification and proofing partners](./identity-verification-proofing.md)</li><li>[Configure Microsoft Dynamics 365 Fraud Protection](./partner-dynamics-365-fraud-protection.md) </li><li> [Configure with Arkose Labs platform](./partner-arkose-labs.md)</li><li> [Mitigate fraudulent MFA usage](phone-based-mfa.md#mitigate-fraudulent-sign-ups)</li></ul>| +|Identity Protection|Identity Protection provides ongoing risk detection. When a risk is detected during sign-in, you can configure Azure AD B2C conditional policy to allow the user to remediate the risk before proceeding with the sign-in. Administrators can also use identity protection reports to review risky users who are at risk and review detection details. The risk detections report includes information about each risk detection, such as its type and the location of the sign-in attempt, and more. 
Administrators can also confirm or deny that the user is compromised.|<ul><li>[Investigate risk with Identity Protection](./identity-protection-investigate-risk.md)</li><ul> | +|Conditional Access (CA)|When a user attempts to sign in, CA gathers various signals such as risks from identity protection, to make decisions and enforce organizational policies. CA can assist administrators to develop policies that are consistent with their organization's security posture. The policies can include the ability to completely block user access or provide access after the user has completed another authentication like MFA.|<ul><li>[Add Conditional Access policies to user flows](./conditional-access-user-flow.md)</li></ul>| +|Multifactor authentication (MFA)|MFA adds a second layer of security to the sign-up and sign-in process and is an essential component of improving the security posture of user authentication in Azure AD B2C. The Authenticator app - TOTP is the recommended MFA method in Azure AD B2C. | <ul><li>[Enable multifactor authentication](./multi-factor-authentication.md)</li></ul> | +|Security Information and Event management (SIEM)/ Security Orchestration, Automation and Response (SOAR) |You need a reliable monitoring and alerting system for analyzing usage patterns such as sign-ins and sign-ups, and detecting any anomalous behavior that may be indicative of a cyberattack. It's an important step that adds an extra layer of security. It also allows you to understand patterns and trends that can only be captured and built upon over time. Alerting assists in determining factors such as the rate of change in overall sign-ins, an increase in failed sign-ins, and failed sign-up journeys, phone-based frauds such as IRSF attacks, and so on. All of these can be indicators of an ongoing cyberattack that requires immediate attention. Azure AD B2C supports both high-level and fine-grained logging, as well as the generation of reports and alerts. It's advised that you implement monitoring and alerting in all production tenants. | <ul><li>[Monitor using Azure Monitor](./azure-monitor.md)</li><li>[Use reports & alerts](https://github.com/azure-ad-b2c/siem)</li><li>[Monitor for fraudulent MFA usage](./phone-based-mfa.md)</li><li>[Collect Azure AD B2C logs with Application Insights](troubleshoot-with-application-insights.md?pivots=b2c-user-flow)</li><li>[Configure security analytics for Azure AD B2C data with Microsoft Sentinel](./azure-sentinel.md)</li></ul>| + +[](./media/security-architecture/security-architecture-high-level.png#lightbox) ++## Protecting your REST APIs +Azure AD B2C allows you to connect to external systems by using the [API Connectors](./api-connectors-overview.md?pivots=b2c-custom-policy), or the [REST API technical profile](restful-technical-profile.md). You need to protect these interfaces. You can prevent malicious requests to your REST APIs by protecting the Azure AD B2C authentication endpoints. You can protect these endpoints with a WAF and AFD. + +## Scenario 1: How to secure your sign-in experience +After you create a sign-in experience, or user flow, you'll need to protect specific components of your flow from malicious activity. 
For example, if your sign-in flow involves the following, then the table shows the components you need to protect, and associated protection technique: ++- Local account email and password authentication +- Azure AD Multi-Factor Authentication using SMS or phone call ++|Component |Endpoint|How to protect| +|-|-|-| +|Azure AD B2C authentication endpoints|`/authorize`, `/token`, `/.well-known/openid-configuration`, `/discovery/v2.0/keys`|WAF and AFD| +|Sign in|NA|Identity Protection| +|Multifactor authentication controls|NA|Authenticator app| +|External REST API|Your API endpoint.|Authenticator app, WAF and AFD| + +[](./media/security-architecture/protect-sign-in.png#lightbox) + +## Scenario 2: How to secure your sign-up experience +After you create a sign-up experience, or user flow, you need to protect specific components of your flow from malicious activity. If your sign-up flow involves the following, then the table shows the components you need to protect, and associated protection technique: ++- Local account email and password sign-up +- Email verification using email OTP +- Azure AD Multi-Factor Authentication using SMS or phone call ++|Component |Endpoint|How to protect| +|-|-|-| +|Azure AD B2C authentication endpoints|`/authorize`, `/token`, `/.well-known/openid-configuration`, `/discovery/v2.0/keys`|WAF and AFD| +|Sign up|NA|Dynamics Fraud Protection| +|Email OTP|NA|WAF and AFD| +|Multifactor authentication controls|NA|Authenticator app| ++In this scenario, the use of the WAF and AFD protection mechanisms protects both the Azure AD B2C authentication endpoints and the Email OTP components. + +[](./media/security-architecture/protect-sign-up.png#lightbox) ++## Next steps ++- [Configure a Web application firewall](./partner-akamai.md) to protect Azure AD B2C authentication endpoints. +- [Configure Fraud prevention with Dynamics](./partner-dynamics-365-fraud-protection.md) to protect your authentication experiences. +- [Investigate risk with Identity Protection in Azure AD B2C](./identity-protection-investigate-risk.md) to discover, investigate, and remediate identity-based risks. +- [Securing phone-based multi-factor authentication (MFA)](./phone-based-mfa.md) to protect your phone-based multi-factor authentication. +- [Configure Identity Protection](./conditional-access-user-flow.md) to protect your sign-in experience. +- [Configure Monitoring and alerting](./azure-monitor.md) to be alerted to any threats. |
active-directory | Accidental Deletions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/accidental-deletions.md | The Azure AD provisioning service includes a feature to help avoid accidental de ::: zone-end ::: zone pivot="cross-tenant-synchronization"-> [!IMPORTANT] -> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - The Azure AD provisioning service includes a feature to help avoid accidental deletions. This feature ensures that users aren't disabled or deleted in the target tenant unexpectedly. ::: zone-end To enable accidental deletion prevention: ::: zone-end ::: zone pivot="cross-tenant-synchronization"-2. Select **Cross-tenant synchronization (Preview)** > **Configurations** and then select your configuration. +2. Select **Cross-tenant synchronization** > **Configurations** and then select your configuration. 3. Select **Provisioning**. ::: zone-end Let the provisioning job run (20 – 40 mins) and navigate back to the provision - Delete a user / put them into the recycle bin. - Block sign in for a user. - Unassign a user or group from the application (or configuration).-- Remove a user from a group that's providing them access to the application (or configuration).+- Remove a user from a group that provides them access to the application (or configuration). To learn more about deprovisioning scenarios, see [How Application Provisioning Works](how-provisioning-works.md#deprovisioning). |
active-directory | Define Conditional Rules For Provisioning User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md | zone_pivot_groups: app-provisioning-cross-tenant-synchronization # Scoping users or groups to be provisioned with scoping filters -> [!IMPORTANT] -> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - Learn how to use scoping filters in the Azure Active Directory (Azure AD) provisioning service to define attribute based rules. The rules are used to determine which users or groups are provisioned. ## Scoping filter use cases |
active-directory | Export Import Provisioning Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/export-import-provisioning-configuration.md | |
active-directory | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md | -> [!IMPORTANT] -> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - This article discusses known issues to be aware of when you work with app provisioning or cross-tenant synchronization. To provide feedback about the application provisioning service on UserVoice, see [Azure Active Directory (Azure AD) application provision UserVoice](https://aka.ms/appprovisioningfeaturerequest). We watch UserVoice closely so that we can improve the service. > [!NOTE] This article discusses known issues to be aware of when you work with app provis - Synchronizing photos across tenants - Synchronizing contacts and converting contacts to B2B users -### Provisioning users +### Microsoft Teams ++* Microsoft Teams does not support converting the [userType](../external-identities/user-properties.md) property on a B2B user from `member` to `guest` or `guest` to `member`. +* External / B2B users of type `member` cannot be added to a shared channel in Microsoft Teams. If your organization uses shared channels, please ensure that you update your synchronization configuration to create users as type `guest`. At that point, you will be able to add the native identity (the original account in the source tenant) to the shared channel. If a user is already created as type `member`, you can convert the user to type `guest` in this scenario and add the native identity to the shared channel. +* External / B2B users will need to switch tenants in Teams to receive messages. This experience does not change for users created by cross-tenant synchronization. ++ ### Provisioning users An external user from the source (home) tenant can't be provisioned into another tenant. Internal guest users from the source tenant can't be provisioned into another tenant. Only internal member users from the source tenant can be provisioned into the target tenant. For more information, see [Properties of an Azure Active Directory B2B collaboration user](../external-identities/user-properties.md). |
active-directory | Provision On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md | zone_pivot_groups: app-provisioning-cross-tenant-synchronization # On-demand provisioning in Azure Active Directory -> [!IMPORTANT] -> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - Use on-demand provisioning to provision a user or group in seconds. Among other things, you can use this capability to: * Troubleshoot configuration issues quickly. |
active-directory | Workday Retrieve Pronoun Information | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-retrieve-pronoun-information.md | Workday introduced the ability for workers to [display pronoun information](http >Links to certain Workday community notes and documents in this article require Workday community account credentials. Please check with your Workday administrator or partner to get the required access. ## Enabling pronoun data in Workday-This section describes steps required to enable pronoun data in Workday. We recommend engaging your Workday administrator to complete the steps listed below. -1. Ensure that pronoun display and sharing preferences are enabled as per Workday guidelines. Refer Workday documents: - - [Steps: Set Up Gender Pronouns to Display on a Worker Profile * Human Capital Management * Reader * Administrator Guide (workday.com)](https://doc.workday.com/r/gJQvxHUyQOZv_31Vknf~3w/7gZPvVfbRhLiPissprv6lQ) - - [Steps: Set Up Public Profile Preferences * Human Capital Management * Reader * Administrator Guide (workday.com)](https://doc.workday.com/r/gJQvxHUyQOZv_31Vknf~3w/FuENV1VTRTHWo_h93KIjJA) -+This section describes the steps required to enable pronoun data in Workday. We recommend engaging your Workday administrator to complete the steps listed. +1. Ensure that pronoun display and sharing preferences are enabled as per Workday guidelines. Refer to the Workday documents: + - [Steps: Set Up Gender Pronouns to Display on a Worker Profile * Human Capital Management * Reader * Administrator Guide (workday.com)](https://doc.workday.com/r/gJQvxHUyQOZv_31Vknf~3w/7gZPvVfbRhLiPissprv6lQ) + - [Steps: Set Up Public Profile Preferences * Human Capital Management * Reader * Administrator Guide (workday.com)](https://doc.workday.com/r/gJQvxHUyQOZv_31Vknf~3w/FuENV1VTRTHWo_h93KIjJA) 1. Use Workday **Maintain Pronouns** task to define preferred pronoun data (HE/HIM, SHE/HER, and THEY/THEM) in your Workday tenant. 1. Use Workday **Maintain Localization Settings task -> Personal Information** area to activate pronoun data for different countries. 1. Select the Workday Integration System Security Group used with your Azure AD integration. Update the [domain permissions for the security group](../saas-apps/workday-inbound-tutorial.md#configuring-domain-security-policy-permissions), so it has GET access for the Workday domain **Reports: Public Profile**. This section describes steps required to enable pronoun data in Workday. We reco >[!div class="mx-imgBorder"] > -1. Use Workday Studio or Postman to invoke [Get_Workers API version 38.1](https://community.workday.com/sites/default/files/file-hosting/productionapi/Human_Resources/v38.1/Get_Workers.html) for the test user using the Workday Azure AD integration system user. In the SOAP request header specify the option Include_Reference_Descriptors_In_Response. +1. Use Workday Studio or Postman to invoke [Get_Workers API version 38.1](https://community.workday.com/sites/default/files/file-hosting/productionapi/Human_Resources/v38.1/Get_Workers.html) for the test user using the Workday Azure AD integration system user. In the SOAP request header, specify the option Include_Reference_Descriptors_In_Response. ``` <bsvc:Workday_Common_Header> <bsvc:Include_Reference_Descriptors_In_Response>true</bsvc:Include_Reference_Descriptors_In_Response> </bsvc:Workday_Common_Header> ```-1. In the Get_Workers response, you will now see pronoun information. +1. 
In the Get_Workers response, view the pronoun information. >[!div class="mx-imgBorder"] > Once you confirm that pronoun data is available in the *Get_Workers* response, g ## Updating Azure AD provisioning app to retrieve pronouns -To retrieve pronouns from Workday, you'll need to update your Azure AD provisioning app to query Workday using v38.1 of the Workday Web Services. We recommend testing this configuration first in your test/sandbox environment before implementing the change in production. +To retrieve pronouns from Workday, update your Azure AD provisioning app to query Workday using v38.1 of the Workday Web Services. We recommend testing this configuration first in your test/sandbox environment before implementing the change in production. 1. Sign-in to Azure portal as administrator. 1. Open your *Workday to AD User provisioning* app OR *Workday to Azure AD User provisioning* app. -1. In the **Admin Credentials** section, update the **Tenant URL** to include the Workday Web Service version v38.1 as shown below. +1. In the **Admin Credentials** section, update the **Tenant URL** to include the Workday Web Service version v38.1 as shown. >[!div class="mx-imgBorder"] > To retrieve pronouns from Workday, you'll need to update your Azure AD provision 1. Save your changes. 1. You can now add a new attribute mapping to flow the Workday attribute **PreferredPronoun** to any attribute in AD/Azure AD.-1. If you want to incorporate pronoun information as part of display name, you can update the attribute mapping for displayName attribute to use the below expression. +1. If you want to incorporate pronoun information as part of display name, you can update the attribute mapping for displayName attribute to use the expression. `Switch([PreferredPronoun], Join("", [PreferredNameData], " (", [PreferredPronoun], ")"), "", [PreferredNameData])` -1. If worker *Aaron Hall* has set his pronoun information in Workday as `HE/HIM`, then the above expression will set the display name in Azure AD as: *Aaron Hall (HE/HIM)* +1. If worker *Aaron Hall* has set his pronoun information in Workday as `HE/HIM`, the above expression sets the display name in Azure AD as: *Aaron Hall (HE/HIM)* 1. Save your changes. 1. Test the configuration for one user with provisioning on demand. |
active-directory | Concept Authentication Strengths | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md | Title: Overview of Azure Active Directory authentication strength (preview) + Title: Overview of Azure Active Directory authentication strength description: Learn how admins can use Azure AD Conditional Access to distinguish which authentication methods can be used based on relevant security factors. Previously updated : 01/29/2023 Last updated : 05/08/2023 -# Conditional Access authentication strength (preview) +# Conditional Access authentication strength Authentication strength is a Conditional Access control that allows administrators to specify which combination of authentication methods can be used to access a resource. For example, they can make only phishing-resistant authentication methods available to access a sensitive resource. But to access a nonsensitive resource, they can allow less secure multifactor authentication (MFA) combinations, such as password + SMS. GET https://graph.microsoft.com/beta/identity/conditionalAccess/authenticationSt In addition to the three built-in authentication strengths, administrators can create up to 15 of their own custom authentication strengths to exactly suit their requirements. A custom authentication strength can contain any of the supported combinations in the preceding table. -1. In the Azure portal, browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths (Preview)**. +1. In the Azure portal, browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths**. 1. Select **New authentication strength**. 1. Provide a descriptive **Name** for your new authentication strength. 1. Optionally provide a **Description**. There are two policies that determine which authentication methods can be used t Users may register for authentications for which they are enabled, and in other cases, an administrator can configure a user's device with a method, such as certificate-based authentication. +### How an authentication strength policy is evaluated during sign-in + The authentication strength Conditional Access policy defines which methods can be used. Azure AD checks the policy during sign-in to determine the userΓÇÖs access to the resource. For example, an administrator configures a Conditional Access policy with a custom authentication strength that requires FIDO2 Security Key or Password + SMS. The user accesses a resource protected by this policy. During sign-in, all settings are checked to determine which methods are allowed, which methods are registered, and which methods are required by the Conditional Access policy. To be used, a method must be allowed, registered by the user (either before or as part of the access request), and satisfy the authentication strength. + +### How multiple Conditional Access authentication strength policies are evaluated ++In general, when there are multiple Conditional Access policies applicable for a single sign-in, all conditions from all policies must be met. In the same vein, when there are multiple Conditional Access policies which apply authentication strengths to the sign-in, the user must satisfy all of the authentication strength policies. For example, if two different authentication strength policies both require FIDO2, the user can use their FIDO2 security key and satisfy both policies. 
If the two authentication strength policies have different sets of methods, the user must use multiple methods to satisfy both policies. ++#### How multiple Conditional Access authentication strength policies are evaluated for registering security info ++For security info registration, the authentication strength evaluation is treated differently – authentication strengths that target the user action of **Registering security info** are preferred over other authentication strength policies that target **All cloud apps**. All other grant controls (such as **Require device to be marked as compliant**) from other Conditional Access policies in scope for the sign-in will apply as usual. ++For example, let's assume Contoso would like to require their users to always sign in with a phishing-resistant authentication method and from a compliant device. Contoso also wants to allow new employees to register these authentication methods using a Temporary Access Pass (TAP). TAP can't be used on any other resource. To achieve this goal, the admin can take the following steps: ++1. Create a custom authentication strength named **Bootstrap and recovery** that includes the Temporary Access Pass authentication combination; it can also include any of the phishing-resistant MFA methods. +1. Create a Conditional Access policy which targets **All cloud apps** and requires **Phishing-resistant MFA** authentication strength AND **Require compliant device** grant controls. +1. Create a Conditional Access policy that targets the **Register security information** user action and requires the **Bootstrap and recovery** authentication strength. ++As a result, users on a compliant device would be able to use a Temporary Access Pass to register FIDO2 security keys and then use the newly registered FIDO2 security key to authenticate to other resources (such as Outlook). ++>[!NOTE] +>If multiple conditional access policies target the **Register security information** user action, and they each apply an authentication strength, the user must satisfy all such authentication strengths to sign in. +++ ## User experience The following factors determine if the user gains access to the resource: An authentication strength Conditional Access policy works together with [MFA tr - **If MFA trust is enabled**, Azure AD checks the user's authentication session for a claim indicating that MFA has been fulfilled in the user's home tenant. See the preceding table for authentication methods that are acceptable for MFA when completed in an external user's home tenant. If the session contains a claim indicating that MFA policies have already been met in the user's home tenant, and the methods satisfy the authentication strength requirements, the user is allowed access. Otherwise, Azure AD presents the user with a challenge to complete MFA in the home tenant using an acceptable authentication method. - **If MFA trust is disabled**, Azure AD presents the user with a challenge to complete MFA in the resource tenant using an acceptable authentication method. (See the table above for authentication methods that are acceptable for MFA by an external user.) -## Known issues --- **Users who signed in by using certificate-based authentication aren't prompted to reauthenticate** - If a user first authenticated by using certificate-based authentication and the authentication strength requires another method, such as a FIDO2 security key, the user isn't prompted to use a FIDO2 security key and authentication fails. 
The user must restart their session to sign-in with a FIDO2 security key.--- **Using 'Require one of the selected controls' with 'require authentication strength' control** - After you select authentication strengths grant control and additional controls, all the selected controls must be satisfied in order to gain access to the resource. Using **Require one of the selected controls** isn't applicable, and will default to requiring all the controls in the policy.-- ## Limitations -- **Conditional Access policies are only evaluated after the initial authentication** - As a result, authentication strength will not restrict a user's initial authentication. Suppose you are using the built-in phishing-resistant MFA strength. A user can still type in their password, but they will be required to use a phishing-resistant method such as FIDO2 security key before they can continue.+- **Conditional Access policies are only evaluated after the initial authentication** - As a result, authentication strength doesn't restrict a user's initial authentication. Suppose you are using the built-in phishing-resistant MFA strength. A user can still type in their password, but they will be required to use a phishing-resistant method such as FIDO2 security key before they can continue. - **Require multifactor authentication and Require authentication strength can't be used together in the same Conditional Access policy** - These two Conditional Access grant controls can't be used together because the built-in authentication strength **Multifactor authentication** is equivalent to the **Require multifactor authentication** grant control. -- **Authentication methods that are currently not supported by authentication strength** - The Email one-time pass (Guest) authentication method is not included in the available combinations.+- **Authentication methods that aren't currently supported by authentication strength** - The **Email one-time pass (Guest)** authentication method isn't included in the available combinations. -- **Windows Hello for Business** ΓÇô If the user has used Windows Hello for Business as their primary authentication method it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. However, if the user has used another method as their primary authenticating method (for example, password) and the authentication strength requires them to use Windows Hello for Business they will not be prompted to use not register for Windows Hello for Business. +- **Windows Hello for Business** ΓÇô If the user signed in with Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. But if the user signed in with another method like password as their primary authenticating method, and the authentication strength requires Windows Hello for Business, they get prompted to sign in with Windows Hello for Business. ## FAQ |
active-directory | How To Certificate Based Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md | For more information, see [Understanding the certificate revocation process](./c ## Step 2: Enable CBA on the tenant >[!IMPORTANT]->A user is considered capable for MFA when the user is in scope for **Certificate-based authentication** in the Authentication methods policy. This policy requirement means a user can't use proof up as part of their authentication to register other available methods. For more information, see [Azure AD MFA](concept-mfa-howitworks.md). +>A user is considered capable for **MFA** when the user is in scope for **Certificate-based authentication** in the Authentication methods policy. This policy requirement means a user can't use proof up as part of their authentication to register other available methods. If users don't have access to certificates, they're locked out and unable to register other methods for MFA. The admin therefore needs to bring only users who have a valid certificate into the CBA scope. Don't target all users for CBA; instead, use groups of users who have valid certificates available. For more information, see [Azure AD MFA](concept-mfa-howitworks.md). To enable the certificate-based authentication in the Azure portal, complete the following steps: |
active-directory | Troubleshoot Authentication Strengths | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-authentication-strengths.md | Title: Troubleshoot Azure AD authentication strength (Preview) + Title: Troubleshoot Azure AD authentication strength description: Learn how to resolve errors when using Azure AD authentication strength. Previously updated : 01/29/2023 Last updated : 03/09/2023 -# Troubleshoot Azure AD authentication strength (Preview) +# Troubleshoot Azure AD authentication strength This topic covers errors you might see when you use Azure Active Directory (Azure AD) authentication strength and how to resolve them. This topic covers errors you might see when you use Azure Active Directory (Azur <!What could be a good example?> -Users can sign in only by using authentication methods that they registered and are enabled by the Authentication methods policy. For more information, see [How Conditional Access Authentication strengths policies are used in combination with Authentication methods policy](concept-authentication-strengths.md#how-authentication-strength-works-with-the-authentication-methods-policy). +For sign in, the authentication method needs to be: ++- Registered for the user +- Enabled by the Authentication methods policy ++For more information, see [How Conditional Access Authentication strength policies are used in combination with the Authentication methods policy](concept-authentication-strengths.md#how-authentication-strength-works-with-the-authentication-methods-policy). To verify if a method can be used: To verify if a method can be used: 1. As needed, check if the tenant is enabled for any method required for the authentication strength. Click **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings**. 1. Check which authentication methods are registered for the user in the Authentication methods policy. Click **Users and groups** > _username_ > **Authentication methods**. -If the user is registered for an enabled method that meets the authentication strength, they might need to use another method that isn't available after primary authentication, such as Windows Hello for Business or certificate-based authentication. For more information, see [How each authentication method works](concept-authentication-methods.md#how-each-authentication-method-works). The user will need to restart the session and choose **Sign-in options** and select a method required by the authentication strength. +If the user is registered for an enabled method that meets the authentication strength, they might need to use another method that isn't available after primary authentication, such as Windows Hello for Business or certificate-based authentication. For more information, see [How each authentication method works](concept-authentication-methods.md#how-each-authentication-method-works). The user needs to restart the session, choose **Sign-in options** , and select a method required by the authentication strength. :::image type="content" border="true" source="./media/troubleshoot-authentication-strengths/choose-another-method.png" alt-text="Screenshot of how to choose another sign-in method."::: If the user is registered for an enabled method that meets the authentication st If an authentication strength requires a method that a user canΓÇÖt use, the user is blocked from sign-in. 
To check which method is required by an authentication strength, and which method the user is registered and enabled to use, follow the steps in the [previous section](#a-user-is-asked-to-sign-in-with-another-method-but-they-dont-see-a-method-they-expect). ## How to check which authentication strength was enforced during sign-in-Use the **Sign-ins** log to find additional information about the sign-in: +Use the **Sign-ins** log to find more information about the sign-in: -- Under the **Authentication details** tab, the **Requirement** column shows the name of the authentication strengths policy.+- Under the **Authentication details** tab, the **Requirement** column shows the name of the authentication strength policy. :::image type="content" source="./media/troubleshoot-authentication-strengths/sign-in-logs-authentication-details.png" alt-text="Screenshot showing the authentication strength in the Sign-ins log."::: Use the **Sign-ins** log to find additional information about the sign-in: :::image type="content" source="./media/troubleshoot-authentication-strengths/sign-in-logs-control.png" alt-text="Screenshot showing the authentication strength under Conditional Access Policy details in the Sign-ins log."::: -## My users can't use their FIDO2 security key to sign in -An admin can restrict access to specific security keys. When a user tries to sign in by using a key they can't use, this **You can't get there from here** message appears. The user has to restart the session, and sign-in with a different FIDO2 security key. +## Users can't use their FIDO2 security key to sign in +An Authentication Policy Administrator can restrict access to specific security keys. When a user tries to sign in by using a key they can't use, this **You can't get there from here** message appears. The user has to restart the session, and sign-in with a different FIDO2 security key. :::image type="content" border="true" source="./media/troubleshoot-authentication-strengths/restricted-security-key.png" alt-text="Screenshot of a sign-in error when using a restricted FIDO2 security key."::: Some methods can't be registered during sign-in, or they need more setup beyond ## Next steps -- [Azure AD Authentication Strengths overview](concept-authentication-strengths.md)+- [Conditional Access authentication strength](concept-authentication-strengths.md) |
active-directory | Concept Conditional Access Grant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md | The control for blocking access considers any assignments and prevents access ba Administrators can choose to enforce one or more controls when granting access. These controls include the following options: - [Require multifactor authentication (Azure AD Multifactor Authentication)](../authentication/concept-mfa-howitworks.md)-- [Require authentication strength (Preview)](#require-authentication-strength-preview)+- [Require authentication strength](#require-authentication-strength) - [Require device to be marked as compliant (Microsoft Intune)](/intune/protect/device-compliance-get-started) - [Require hybrid Azure AD joined device](../devices/concept-azure-ad-join-hybrid.md) - [Require approved client app](app-based-conditional-access.md) Selecting this checkbox requires users to perform Azure Active Directory (Azure [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) satisfies the requirement for multifactor authentication in Conditional Access policies. -### Require authentication strength (preview) +### Require authentication strength -Administrators can choose to require [specific authentication strengths](../authentication/concept-authentication-strengths.md) in their Conditional Access policies. These authentication strengths are defined in the **Azure portal** > **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths (Preview)**. Administrators can choose to create their own or use the built-in versions. --> [!NOTE] -> Require authentication strength is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +Administrators can choose to require [specific authentication strengths](../authentication/concept-authentication-strengths.md) in their Conditional Access policies. These authentication strengths are defined in the **Azure portal** > **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths**. Administrators can choose to create their own or use the built-in versions. ### Require device to be marked as compliant |
active-directory | Howto Conditional Access Policy Authentication Strength External | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-authentication-strength-external.md | The authentication methods that external users can use to satisfy MFA requiremen Determine if one of the built-in authentication strengths will work for your scenario or if you'll need to create a custom authentication strength. 1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.-1. Browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths (Preview)**. +1. Browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths**. 1. Review the built-in authentication strengths to see if one of them meets your requirements. 1. If you want to enforce a different set of authentication methods, [create a custom authentication strength](https://aka.ms/b2b-auth-strengths). |
active-directory | Access Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md | The server possibly revokes refresh tokens due to a change in credentials, or du | Password changed by user | Revoked | Revoked | Stays alive | Stays alive | Stays alive | | User does SSPR | Revoked | Revoked | Stays alive | Stays alive | Stays alive | | Admin resets password | Revoked | Revoked | Stays alive | Stays alive | Stays alive |-| User revokes their refresh tokens by using [PowerShell](/powershell/module/azuread/revoke-azureadsignedinuserallrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked | -| Admin revokes all refresh tokens for a user by using [PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken) | Revoked | Revoked |Revoked | Revoked | Revoked | +| User or admin revokes the refresh tokens by using [PowerShell](/powershell/module/microsoft.graph.users.actions/invoke-mginvalidateuserrefreshtoken) | Revoked | Revoked | Revoked | Revoked | Revoked | | [Single sign-out](v2-protocols-oidc.md#single-sign-out) on web | Revoked | Stays alive | Revoked | Stays alive | Stays alive | #### Non-password-based |
active-directory | Config Authority | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/config-authority.md | When the authority URL is set to `"login.microsoftonline.com/common"`, the user To sign the user into a specific tenant, configure `MSALPublicClientApplication` with a specific authority. For example: -`https://login.microsoftonline.com/469fdeb4-d4fd-4fde-991e-308a78e4bea4` +`https://login.microsoftonline.com/dddd5555-eeee-6666-ffff-00001111aaaa` -The following shows how to sign a user into a specific tenant: +If you want to sign into the Contoso tenant, use: ++`https://login.microsoftonline.com/contoso.onmicrosoft.com` ++The following shows how to sign a user into the Contoso tenant: Objective-C ```objc- NSURL *authorityURL = [NSURL URLWithString:@"https://login.microsoftonline.com/469fdeb4-d4fd-4fde-991e-308a78e4bea4"]; + NSURL *authorityURL = [NSURL URLWithString:@"https://login.microsoftonline.com/contoso.onmicrosoft.com"]; MSALAADAuthority *tenantedAuthority = [[MSALAADAuthority alloc] initWithURL:authorityURL error:&authorityError]; if (!tenantedAuthority) Objective-C Swift ```swift do{- guard let authorityURL = URL(string: "https://login.microsoftonline.com/469fdeb4-d4fd-4fde-991e-308a78e4bea4") else { + guard let authorityURL = URL(string: "https://login.microsoftonline.com/contoso.onmicrosoft.com") else { //Handle error return } |
active-directory | Reference Error Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-error-codes.md | The `error` field has several possible values - review the protocol documentatio | AADSTS50006 | InvalidSignature - Signature verification failed because of an invalid signature. | | AADSTS50007 | PartnerEncryptionCertificateMissing - The partner encryption certificate was not found for this app. [Open a support ticket](../fundamentals/active-directory-troubleshooting-support-howto.md) with Microsoft to get this fixed. | | AADSTS50008 | InvalidSamlToken - SAML assertion is missing or misconfigured in the token. Contact your federation provider. |+| AADSTS5000819 | InvalidSamlTokenEmailMissingOrInvalid - SAML Assertion is invalid. Email address claim is missing or does not match domain from an external realm. | | AADSTS50010 | AudienceUriValidationFailed - Audience URI validation for the app failed since no token audiences were configured. | | AADSTS50011 | InvalidReplyTo - The reply address is missing, misconfigured, or doesn't match reply addresses configured for the app. As a resolution ensure to add this missing reply address to the Azure Active Directory application or have someone with the permissions to manage your application in Active Directory do this for you. To learn more, see the troubleshooting article for error [AADSTS50011](/troubleshoot/azure/active-directory/error-code-aadsts50011-reply-url-mismatch).| | AADSTS50012 | AuthenticationFailed - Authentication failed for one of the following reasons:<ul><li>The subject name of the signing certificate isn't authorized</li><li>A matching trusted authority policy was not found for the authorized subject name</li><li>The certificate chain isn't valid</li><li>The signing certificate isn't valid</li><li>Policy isn't configured on the tenant</li><li>Thumbprint of the signing certificate isn't authorized</li><li>Client assertion contains an invalid signature</li></ul> | The `error` field has several possible values - review the protocol documentatio | AADSTS75008 | RequestDeniedError - The request from the app was denied since the SAML request had an unexpected destination. | | AADSTS75011 | NoMatchedAuthnContextInOutputClaims - The authentication method by which the user authenticated with the service doesn't match requested authentication method. To learn more, see the troubleshooting article for error [AADSTS75011](/troubleshoot/azure/active-directory/error-code-aadsts75011-auth-method-mismatch). | | AADSTS75016 | Saml2AuthenticationRequestInvalidNameIDPolicy - SAML2 Authentication Request has invalid NameIdPolicy. |+| AADSTS76021 | ApplicationRequiresSignedRequests - The request sent by client is not signed while the application requires signed requests| | AADSTS76026 | RequestIssueTimeExpired - IssueTime in an SAML2 Authentication Request is expired. | | AADSTS80001 | OnPremiseStoreIsNotAvailable - The Authentication Agent is unable to connect to Active Directory. Make sure that agent servers are members of the same AD forest as the users whose passwords need to be validated and they are able to connect to Active Directory. | | AADSTS80002 | OnPremisePasswordValidatorRequestTimedout - Password validation request timed out. Make sure that Active Directory is available and responding to requests from the agents. | |
active-directory | Scenario Protected Web Api App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md | When an app is called on a controller action that holds an **[Authorize]** attri #### Microsoft.Identity.Web +# [ASP.NET Core](#tab/aspnetcore) + Microsoft recommends you use the [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web) NuGet package when developing a web API with ASP.NET Core. *Microsoft.Identity.Web* provides the glue between ASP.NET Core, the authentication middleware, and the [Microsoft Authentication Library (MSAL)](msal-overview.md) for .NET. It allows for a clearer, more robust developer experience and leverages the power of the Microsoft identity platform and Azure AD B2C. app.MapControllers(); app.Run(); ``` +# [ASP.NET](#tab/aspnet) ++Microsoft recommends you use the [Microsoft.Identity.Web.OWIN](https://www.nuget.org/packages/Microsoft.Identity.Web.OWIN) NuGet package when developing a web API with ASP.NET. ++*Microsoft.Identity.Web.OWIN* provides the glue between ASP.NET, the ASP.NET authentication middleware, and the [Microsoft Authentication Library (MSAL)](msal-overview.md) for .NET. It allows for a clearer, more robust developer experience and leverages the power of the Microsoft identity platform and Azure AD B2C. ++It uses the same configuration file as ASP.NET Core (appsettings.json), and you need to make sure that this file is copied with the output of your project (set the **Copy always** property in the file properties in Visual Studio, or in the .csproj). ++*Microsoft.Identity.Web.OWIN* adds an extension method to IAppBuilder named `AddMicrosoftIdentityWebApi`. This method takes as a parameter an instance of `OwinTokenAcquirerFactory` that you get by calling `OwinTokenAcquirerFactory.GetDefaultInstance<OwinTokenAcquirerFactory>()`, and that surfaces an instance of `IServiceCollection` to which you can add many services to call downstream APIs or configure the token cache. +++Here is some sample code for [Startup.Auth.cs](https://github.com/AzureAD/microsoft-identity-web/blob/master/tests/DevApps/aspnet-mvc/OwinWebApp/App_Start/Startup.Auth.cs). 
The full code is available from [tests/DevApps/aspnet-mvc/OwinWebApp](https://github.com/AzureAD/microsoft-identity-web/tree/master/tests/DevApps/aspnet-mvc/OwinWebApp) ++```CSharp +using Microsoft.Owin.Security; +using Microsoft.Owin.Security.Cookies; +using Owin; +using Microsoft.Identity.Web; +using Microsoft.Identity.Web.TokenCacheProviders.InMemory; +using Microsoft.Extensions.DependencyInjection; +using Microsoft.Identity.Client; +using Microsoft.Identity.Abstractions; +using Microsoft.Identity.Web.OWIN; +using System.Web.Services.Description; ++namespace OwinWebApp +{ + public partial class Startup + { + public void ConfigureAuth(IAppBuilder app) + { + app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType); + app.UseCookieAuthentication(new CookieAuthenticationOptions()); ++ OwinTokenAcquirerFactory factory = TokenAcquirerFactory.GetDefaultInstance<OwinTokenAcquirerFactory>(); ++ app.AddMicrosoftIdentityWebApp(factory); + factory.Services + .Configure<ConfidentialClientApplicationOptions>(options => { options.RedirectUri = "https://localhost:44386/"; }) + .AddMicrosoftGraph() + .AddDownstreamApi("DownstreamAPI1", factory.Configuration.GetSection("DownstreamAPI")) + .AddInMemoryTokenCaches(); + factory.Build(); + } + } +} +``` ++-- ++ ## Token validation In the preceding snippet, the JwtBearer middleware, like the OpenID Connect middleware in web apps, validates the token based on the value of `TokenValidationParameters`. The token is decrypted as needed, the claims are extracted, and the signature is verified. The middleware then validates the token by checking for this data: |
active-directory | Scenario Web Api Call Api Acquire Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-acquire-token.md | After you've built a client application object, use it to acquire a token that y ### [ASP.NET Core](#tab/aspnetcore)
-*Microsoft.Identity.Web* adds extension methods that provide convenience services for calling Microsoft Graph or a downstream web API. These methods are explained in detail in [A web API that calls web APIs: Call an API](scenario-web-api-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token.
+*Microsoft.Identity.Web* adds extension methods that provide convenience services for calling Microsoft Graph or a downstream web API. These methods are explained in detail in [A web app that calls web APIs: Call an API](scenario-web-app-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token.
-If, however, you do want to manually acquire a token, the following code shows an example of using *Microsoft.Identity.Web* to do so in an API controller. It calls a downstream API named *todolist*.
-To get a token to call the downstream API, you inject the `ITokenAcquisition` service by dependency injection in your controller's constructor (or your page constructor if you use Blazor), and you use it in your controller actions, getting a token for the user (`GetAccessTokenForUserAsync`) or for the application itself (`GetAccessTokenForAppAsync`) in the case of a daemon scenario.
+If, however, you do want to manually acquire a token, the following code shows an example of using *Microsoft.Identity.Web* to do so in an API controller. It calls Microsoft Graph using the REST API (instead of the Microsoft Graph SDK). Usually, you don't need the token itself; you need to build an Authorization header that you add to your request. To get an authorization header, you inject the `IAuthorizationHeaderProvider` service by dependency injection in your controller's constructor (or your page constructor if you use Blazor), and you use it in your controller actions. This interface has methods that produce a string containing the protocol (Bearer, Pop, ...) and a token. To get an authorization header to call an API on behalf of the user, use `CreateAuthorizationHeaderForUserAsync`. To get an authorization header to call a downstream API on behalf of the application itself, in a daemon scenario, use `CreateAuthorizationHeaderForAppAsync`.
++The controller methods are protected by an `[Authorize]` attribute that ensures only authenticated calls can use the web API.
 ```csharp
 [Authorize]
 public class MyApiController : Controller
     static readonly string[] scopesToAccessDownstreamApi = new string[] { "api://MyTodolistService/access_as_user" };
-    private readonly ITokenAcquisition _tokenAcquisition;
+    readonly IAuthorizationHeaderProvider authorizationHeaderProvider;
-    public MyApiController(ITokenAcquisition tokenAcquisition)
+    public MyApiController(IAuthorizationHeaderProvider authorizationHeaderProvider)
    {- _tokenAcquisition = tokenAcquisition;
+        this.authorizationHeaderProvider = authorizationHeaderProvider;
    }
+    [RequiredScopes(Scopes = scopesToAccessDownstreamApi)]
     public async Task<IActionResult> Index()
    {- HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);
+        // Get an authorization header.
+        IAuthorizationHeaderProvider authorizationHeaderProvider = this.GetAuthorizationHeaderProvider();
+        string[] scopes = new string[]{"user.read"};
+        string authorizationHeader = await authorizationHeaderProvider.CreateAuthorizationHeaderForUserAsync(scopes);
-        string accessToken = _tokenAcquisition.GetAccessTokenForUserAsync(scopesToAccessDownstreamApi);
-        return await callTodoListService(accessToken);
+        return await callTodoListService(authorizationHeader);
 }
}
```
For details about the `callTodoListService` method, see [A web API that calls web APIs: Call an API](scenario-web-api-call-api-call-api.md).
+### [ASP.NET](#tab/aspnet)
++
+The code for ASP.NET is similar to the code shown for ASP.NET Core:
++
+- A controller action, protected by an [Authorize] attribute, extracts the tenant ID and user ID of the `ClaimsPrincipal` member of the controller. (ASP.NET uses `HttpContext.User`.)
+*Microsoft.Identity.Web.OWIN* adds extension methods to the Controller that provide convenience services to call Microsoft Graph or a downstream web API, or to get an authorization header, or even a token. The methods used to call an API directly are explained in detail in [A web app that calls web APIs: Call an API](scenario-web-app-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token.
++
+If, however, you do want to manually acquire a token or build an authorization header, the following code shows how to use *Microsoft.Identity.Web* to do so in a controller. It calls an API (Microsoft Graph) using the REST API instead of the Microsoft Graph SDK.
++
+To get an authorization header, you get an `IAuthorizationHeaderProvider` service from the controller using an extension method `GetAuthorizationHeaderProvider`. To get an authorization header to call an API on behalf of the user, use `CreateAuthorizationHeaderForUserAsync`. To get an authorization header to call a downstream API on behalf of the application itself, in a daemon scenario, use `CreateAuthorizationHeaderForAppAsync`.
++
+The controller methods are protected by an `[Authorize]` attribute that ensures only authenticated calls can use the web API.
+++
+The following snippet shows the action of the `MyApiController`, which gets an authorization header to call Microsoft Graph as a REST API:
+++
+```csharp
+[Authorize]
+public class MyApiController : Controller
+{
+    [AuthorizeForScopes(Scopes = new[] { "user.read" })]
+    public async Task<IActionResult> Profile()
+    {
+        // Get an authorization header.
+        IAuthorizationHeaderProvider authorizationHeaderProvider = this.GetAuthorizationHeaderProvider();
+        string[] scopes = new string[]{"user.read"};
+        string authorizationHeader = await authorizationHeaderProvider.CreateAuthorizationHeaderForUserAsync(scopes);
++
+        // Use the access token to call a protected web API.
+        HttpClient client = new HttpClient();
+        client.DefaultRequestHeaders.Add("Authorization", authorizationHeader);
+        string json = await client.GetStringAsync(url);
+    }
+}
+```
++
+The following snippet shows the action of the `HomeController`, which gets an access token to call Microsoft Graph as a REST API:
++
+```csharp
+[Authorize]
+public class HomeController : Controller
+{
+    [AuthorizeForScopes(Scopes = new[] { "user.read" })]
+    public async Task<IActionResult> Profile()
+    {
+        // Acquire a token for the user.
+        ITokenAcquirer tokenAcquirer = TokenAcquirerFactory.GetDefaultInstance().GetTokenAcquirer();
+        string[] scopes = new string[]{"user.read"};
+        string token = await tokenAcquirer.GetTokenForUserAsync(scopes);
++
+        // Use the access token to call a protected web API.
+        HttpClient client = new HttpClient();
+        client.DefaultRequestHeaders.Add("Authorization", $"Bearer {token}");
+        string json = await client.GetStringAsync(url);
+    }
+}
+```
+
 ### [Java](#tab/java)

 Here's an example of code that's called in the actions of the API controllers. It calls the downstream API - Microsoft Graph. |
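For context on what the helpers in the preceding tabs do for a protected web API, the following is a rough sketch of the underlying on-behalf-of (OBO) exchange written directly against MSAL.NET. The method name and all parameters (`clientId`, `clientSecret`, `tenantId`, `incomingAccessToken`) are placeholders for illustration only; Microsoft.Identity.Web performs this exchange for you:

```csharp
using System.Threading.Tasks;
using Microsoft.Identity.Client;

public static class OnBehalfOfSketch
{
    // Exchanges the access token received by the web API for a token to a downstream API (here, user.read).
    public static async Task<string> GetDownstreamTokenAsync(
        string clientId, string clientSecret, string tenantId, string incomingAccessToken)
    {
        IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
            .Create(clientId)
            .WithClientSecret(clientSecret)
            .WithAuthority($"https://login.microsoftonline.com/{tenantId}")
            .Build();

        // The incoming token becomes a user assertion; MSAL returns a token for the downstream API.
        AuthenticationResult result = await app
            .AcquireTokenOnBehalfOf(new[] { "user.read" }, new UserAssertion(incomingAccessToken))
            .ExecuteAsync();

        return result.AccessToken;
    }
}
```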
active-directory | Scenario Web Api Call Api App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md | -# [ASP.NET Core](#tab/aspnetcore) - ## Microsoft.Identity.Web Microsoft recommends that you use the [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web) NuGet package when developing an ASP.NET Core protected API calling downstream web APIs. See [Protected web API: Code configuration | Microsoft.Identity.Web](scenario-protected-web-api-app-configuration.md#microsoftidentityweb) for a quick presentation of that library in the context of a web API. -## Client secrets or client certificates --Given that the web API now calls a downstream web API, a client secret or client certificate in *appsettings.json* can be used for authentication. A section can be added to specify: --- The URL of the downstream web API-- The scopes required for calling the API+# [ASP.NET Core](#tab/aspnetcore) -In the following example, the `GraphBeta` section specifies these settings. +## Client secrets or client certificates -```json -{ - "AzureAd": { - "Instance": "https://login.microsoftonline.com/", - "ClientId": "Enter_the_Application_(client)_ID_here", - "TenantId": "common", - "ClientSecret": "Enter_the_Application_Client_Secret_Value_here", - "ClientCertificates": [] - }, - "GraphBeta": { - "BaseUrl": "https://graph.microsoft.com/beta", - "Scopes": "user.read" - } -} -``` +## Program.cs -Instead of a client secret, a client certificate can be provided. The following code snippet demonstrates a certificate stored in Azure Key Vault. +```csharp +using Microsoft.Identity.Web; -```json -{ - "AzureAd": { - "Instance": "https://login.microsoftonline.com/", - "ClientId": "Enter_the_Application_(client)_ID_here", - "TenantId": "common", - - "ClientCertificates": [ - { - "SourceType": "KeyVault", - "KeyVaultUrl": "https://msidentitywebsamples.vault.azure.net", - "KeyVaultCertificateName": "MicrosoftIdentitySamplesCert" - } - ] - }, - "GraphBeta": { - "BaseUrl": "https://graph.microsoft.com/beta", - "Scopes": "user.read" - } -} +// ... +builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) + .AddMicrosoftIdentityWebApi(Configuration, Configuration.GetSection("AzureAd")) + .EnableTokenAcquisitionToCallDownstreamApi() + .AddInMemoryTokenCaches(); +// ... ``` -*Microsoft.Identity.Web* provides several ways to describe certificates, both by configuration or by code. For details, see [Microsoft.Identity.Web wiki - Using certificates](https://github.com/AzureAD/microsoft-identity-web/wiki/Using-certificates). -## Program.cs --A web API will need to acquire a token for the downstream API. Specify it by adding the `.EnableTokenAcquisitionToCallDownstreamApi()` line after `.AddMicrosoftIdentityWebApi(Configuration)`. This line exposes the `ITokenAcquisition` service that can be used in the controller/pages actions. +A web API needs to acquire a token for the downstream API. Specify it by adding the `.EnableTokenAcquisitionToCallDownstreamApi()` line after `.AddMicrosoftIdentityWebApi(Configuration)`. This line exposes the `ITokenAcquisition` service that can be used in the controller/pages actions. -However, an alternative method is to implement a token cache. For example, adding `.AddInMemoryTokenCaches()`, to *Program.cs* will allow the token to be cached in memory. +However, an alternative method is to implement a token cache. 
For example, adding `.AddInMemoryTokenCaches()`, to *Program.cs* allows the token to be cached in memory. ```csharp using Microsoft.Identity.Web; builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) ### Option 1: Call Microsoft Graph -To call Microsoft Graph, *Microsoft.Identity.Web* enables you to directly use the `GraphServiceClient` (exposed by the Microsoft Graph SDK) in the API actions. To expose Microsoft Graph: +To call Microsoft Graph, *Microsoft.Identity.Web* enables you to directly use the `GraphServiceClient` (exposed by the Microsoft Graph SDK) in the API actions. ++>[!NOTE] +> There is an ongoing issue for Microsoft Graph SDK v5+. For more information, refer to the [GitHub issue](https://github.com/AzureAD/microsoft-identity-web/issues/2097). ++To expose Microsoft Graph: 1. Add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) NuGet package to the project. 1. Add `.AddMicrosoftGraph()` after `.EnableTokenAcquisitionToCallDownstreamApi()` in *Program.cs*. `.AddMicrosoftGraph()` has several overrides. Using the override that takes a configuration section as a parameter, the code becomes: builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) ### Option 2: Call a downstream web API other than Microsoft Graph -To call a downstream API other than Microsoft Graph, *Microsoft.Identity.Web* provides `.AddDownstreamWebApi()`, which requests tokens for the downstream API on behalf of the user. +1. Add the [Microsoft.Identity.Web.DownstreamApi](https://www.nuget.org/packages/Microsoft.Identity.Web.DownstreamApi) NuGet package to the project. +1. Add `.AddDownstreamApi()` after `.EnableTokenAcquisitionToCallDownstreamApi()` in *Program.cs*. The code becomes: ```csharp using Microsoft.Identity.Web; using Microsoft.Identity.Web; builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApi(Configuration, "AzureAd") .EnableTokenAcquisitionToCallDownstreamApi()- .AddDownstreamWebApi("MyApi", Configuration.GetSection("MyApiScope")) + .AddDownstreamApi("MyApi", Configuration.GetSection("MyApiScope")) .AddInMemoryTokenCaches(); // ... ```-If the web app needs to call another API resource, repeat the `.AddDownstreamWebApi()` method with the relevant scope as shown in the following snippet: +If the web app needs to call another API resource, repeat the `.AddDownstreamApi()` method with the relevant scope as shown in the following snippet: ```csharp using Microsoft.Identity.Web; using Microsoft.Identity.Web; builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApi(Configuration, "AzureAd") .EnableTokenAcquisitionToCallDownstreamApi()- .AddDownstreamWebApi("MyApi", Configuration.GetSection("MyApiScope")) - .AddDownstreamWebApi("MyApi2", Configuration.GetSection("MyApi2Scope")) + .AddDownstreamApi("MyApi", Configuration.GetSection("MyApiScope")) + .AddDownstreamApi("MyApi2", Configuration.GetSection("MyApi2Scope")) .AddInMemoryTokenCaches(); // ... ``` -Note that `.EnableTokenAcquisitionToCallDownstreamApi` is called without any parameter, which means that the access token will be acquired just in time as the controller requests the token by specifying the scope. +Note that `.EnableTokenAcquisitionToCallDownstreamApi` is called without any parameter, which means that the access token is acquired just in time as the controller requests the token by specifying the scope. 
The scope can also be passed in when calling `.EnableTokenAcquisitionToCallDownstreamApi`, which would make the web app acquire the token during the initial user login itself. The token will then be pulled from the cache when controller requests it. The following image shows the possibilities of *Microsoft.Identity.Web* and the > [!NOTE] > To fully understand the code examples here, be familiar with [ASP.NET Core fundamentals](/aspnet/core/fundamentals), and in particular with [dependency injection](/aspnet/core/fundamentals/dependency-injection) and [options](/aspnet/core/fundamentals/configuration/options). +# [ASP.NET](#tab/aspnet) ++## Client secrets or client certificates +++## Modify *Startup.Auth.cs* ++Your web app needs to acquire a token for the downstream API, *Microsoft.Identity.Web* provides two mechanisms for calling a downstream API from a web API. The option you choose depends on whether you want to call Microsoft Graph or another API. ++### Option 1: Call Microsoft Graph ++If you want to call Microsoft Graph, *Microsoft.Identity.Web* enables you to directly use the `GraphServiceClient` (exposed by the Microsoft Graph SDK) in your API actions. To expose Microsoft Graph: ++1. Add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) NuGet package to your project. +1. Add `.AddMicrosoftGraph()` to the service collection in the *Startup.Auth.cs* file. `.AddMicrosoftGraph()` has several overrides. Using the override that takes a configuration section as a parameter, the code becomes: ++ ```csharp + using Microsoft.Extensions.DependencyInjection; + using Microsoft.Identity.Client; + using Microsoft.Identity.Web; + using Microsoft.Identity.Web.OWIN; + using Microsoft.Identity.Web.TokenCacheProviders.InMemory; + using Microsoft.IdentityModel.Validators; + using Microsoft.Owin.Security; + using Microsoft.Owin.Security.Cookies; + using Owin; ++ namespace WebApp + { + public partial class Startup + { + public void ConfigureAuth(IAppBuilder app) + { + app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType); ++ app.UseCookieAuthentication(new CookieAuthenticationOptions()); ++ // Get an TokenAcquirerFactory specialized for OWIN + OwinTokenAcquirerFactory owinTokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance<OwinTokenAcquirerFactory>(); ++ // Configure the web app. + app.AddMicrosoftIdentityWebApi(owinTokenAcquirerFactory); ++ // Add the services you need. + owinTokenAcquirerFactory.Services + .AddMicrosoftGraph() + .AddInMemoryTokenCaches(); + owinTokenAcquirerFactory.Build(); + } + } + } + ``` ++### Option 2: Call a downstream web API other than Microsoft Graph ++If you want to call an API other than Microsoft Graph, *Microsoft.Identity.Web* enables you to use the `IDownstreamApi` interface in your API actions. To use this interface: ++1. Add the [Microsoft.Identity.Web.DownstreamApi](https://www.nuget.org/packages/Microsoft.Identity.Web.DownstreamApi) NuGet package to your project. +1. Add `.AddDownstreamApi()` after `.EnableTokenAcquisitionToCallDownstreamApi()` in the *Startup.cs* file. `.AddDownstreamApi()` has two arguments: + - The name of a service (api): you use this name in your controller actions to reference the corresponding configuration + - a configuration section representing the parameters used to call the downstream web API. 
++Here's the code: ++ ```csharp + using Microsoft.Extensions.DependencyInjection; + using Microsoft.Identity.Client; + using Microsoft.Identity.Web; + using Microsoft.Identity.Web.OWIN; + using Microsoft.Identity.Web.TokenCacheProviders.InMemory; + using Microsoft.IdentityModel.Validators; + using Microsoft.Owin.Security; + using Microsoft.Owin.Security.Cookies; + using Owin; ++ namespace WebApp + { + public partial class Startup + { + public void ConfigureAuth(IAppBuilder app) + { + app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType); ++ app.UseCookieAuthentication(new CookieAuthenticationOptions()); ++ // Get an TokenAcquirerFactory specialized for OWIN. + OwinTokenAcquirerFactory owinTokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance<OwinTokenAcquirerFactory>(); ++ // Configure the web app. + app.AddMicrosoftIdentityWebApi(owinTokenAcquirerFactory); ++ // Add the services you need. + owinTokenAcquirerFactory.Services + .AddDownstreamApi("Graph", owinTokenAcquirerFactory.Configuration.GetSection("GraphBeta")) + .AddInMemoryTokenCaches(); + owinTokenAcquirerFactory.Build(); + } + } + } + ``` + # [Java](#tab/java) The On-behalf-of (OBO) flow is used to obtain a token to call the downstream web API. In this flow, your web API receives a bearer token with user delegated permissions from the client application and then exchanges this token for another access token to call the downstream web API. class MsalAuthHelper { The On-behalf-of (OBO) flow is used to obtain a token to call the downstream web API. In this flow, your web API receives a bearer token with user delegated permissions from the client application and then exchanges this token for another access token to call the downstream web API. -A Python web API will need to use some middleware to validate the bearer token received from the client. The web API can then obtain the access token for downstream API using MSAL Python library by calling the [`acquire_token_on_behalf_of`](https://msal-python.readthedocs.io/en/latest/?badge=latest#msal.ConfidentialClientApplication.acquire_token_on_behalf_of) method. For an example of using this API, see the [test code for the microsoft-authentication-library-for-python on GitHub](https://github.com/AzureAD/microsoft-authentication-library-for-python/blob/1.2.0/tests/test_e2e.py#L429-L472). Also see the discussion of [issue 53](https://github.com/AzureAD/microsoft-authentication-library-for-python/issues/53) in that same repository for an approach that bypasses the need for a middle-tier application. +A Python web API needs to use some middleware to validate the bearer token received from the client. The web API can then obtain the access token for downstream API using MSAL Python library by calling the [`acquire_token_on_behalf_of`](https://msal-python.readthedocs.io/en/latest/?badge=latest#msal.ConfidentialClientApplication.acquire_token_on_behalf_of) method. For an example of using this API, see the [test code for the microsoft-authentication-library-for-python on GitHub](https://github.com/AzureAD/microsoft-authentication-library-for-python/blob/1.2.0/tests/test_e2e.py#L429-L472). Also see the discussion of [issue 53](https://github.com/AzureAD/microsoft-authentication-library-for-python/issues/53) in that same repository for an approach that bypasses the need for a middle-tier application. 
You can also see an example of the OBO flow implementation in the [ms-identity-python-on-behalf-of](https://github.com/Azure-Samples/ms-identity-python-on-behalf-of) sample. |
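Once `.AddMicrosoftGraph()` is registered as shown in Option 1, the `GraphServiceClient` can be injected into a protected API controller. The following is a minimal sketch; the controller name, route, and the `access_as_user` scope are assumptions for illustration (it uses the same Graph SDK v4-style `Request()` calls as the samples above):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Graph;
using Microsoft.Identity.Web.Resource;

[Authorize]
[ApiController]
[Route("api/[controller]")]
public class ProfileController : ControllerBase
{
    private readonly GraphServiceClient _graphServiceClient;

    public ProfileController(GraphServiceClient graphServiceClient)
    {
        _graphServiceClient = graphServiceClient;
    }

    [HttpGet]
    [RequiredScope("access_as_user")]
    public async Task<string> Get()
    {
        // Microsoft.Identity.Web performs the on-behalf-of flow and attaches the token to the Graph call.
        var me = await _graphServiceClient.Me.Request().GetAsync();
        return me.DisplayName;
    }
}
```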
active-directory | Scenario Web Api Call Api Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-call-api.md | In this scenario, you've added `.AddDownstreamWebApi()` in *Startup.cs* as speci public async Task<ActionResult> Details(int id)
 {
-    var value = await _downstreamWebApi.CallWebApiForUserAsync(
+    var value = await _downstreamWebApi.CallApiForUserAsync(
         ServiceName,
         options =>
         {
 In this scenario, you've added `.AddDownstreamWebApi()` in *Startup.cs* as speci }
 ```
-The `CallWebApiForUserAsync` method also has strongly typed generic overrides that enable you to directly receive an object. For example, the following method received a `Todo` instance, which is a strongly typed representation of the JSON returned by the web API.
+The `CallApiForUserAsync` method also has strongly typed generic overrides that enable you to directly receive an object. For example, the following method receives a `Todo` instance, which is a strongly typed representation of the JSON returned by the web API.
 ```csharp
 // GET: TodoList/Details/5
 public async Task<ActionResult> Details(int id)
 {
-    var value = await _downstreamWebApi.CallWebApiForUserAsync<object, Todo>(
+    var value = await _downstreamWebApi.CallApiForUserAsync<object, Todo>(
         ServiceName,
         null,
         options =>
 The `CallWebApiForUserAsync` method also has strongly typed generic overrides th #### Option 3: Call a downstream web API without the helper class
-If you've decided to acquire a token manually by using the `ITokenAcquisition` service, you now need to use the token. In that case, the following code continues the example code shown in [A web API that calls web APIs: Acquire a token for the app](scenario-web-api-call-api-acquire-token.md). The code is called in the actions of the API controllers. It calls a downstream API named *todolist*.
+If you've decided to get an authorization header using the `IAuthorizationHeaderProvider` interface, the following code continues the example code shown in [A web API that calls web APIs: Acquire a token for the app](scenario-web-api-call-api-acquire-token.md). The code is called in the actions of the API controllers. It calls a downstream API named *todolist*.
 After you've acquired the token, use it as a bearer token to call the downstream API.
 If you've decided to acquire a token manually by using the `ITokenAcquisition` s private async Task CallTodoListService(string accessToken)
 {
     // After the authorization header has been built by Microsoft.Identity.Web, add it to the HTTP authorization header before making the call to access the todolist service.
-    _httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
+    string authorizationHeader = await authorizationHeaderProvider.CreateAuthorizationHeaderForUserAsync(scopes);
+    _httpClient.DefaultRequestHeaders.Add("Authorization", authorizationHeader);

     // Call the todolist service.
HttpResponseMessage response = await _httpClient.GetAsync(TodoListBaseAddress + "/api/todolist"); private async Task CallTodoListService(string accessToken) } ``` +# [ASP.NET](#tab/aspnet) +++When you use *Microsoft.Identity.Web*, you have three usage options for calling an API: ++- [Option 1: Call Microsoft Graph with the Microsoft Graph SDK](#option-1-call-microsoft-graph-with-the-sdk-from-owin-app) +- [Option 2: Call a downstream web API with the helper class](#option-2-call-a-downstream-web-api-with-the-helper-class-from-owin-app) +- [Option 3: Call a downstream web API without the helper class](#option-3-call-a-downstream-web-api-without-the-helper-class-from-owin-app) ++#### Option 1: Call Microsoft Graph with the SDK from OWIN app ++You want to call Microsoft Graph. In this scenario, you've added `AddMicrosoftGraph` in *Startup.cs* as specified in [Code configuration](scenario-web-app-call-api-app-configuration.md#option-1-call-microsoft-graph), and you can get the `GraphServiceClient` in your controller or page constructor for use in the actions by using the `GetGraphServiceClient()` extension method on the controller. The following example displays the photo of the signed-in user. ++```csharp +[Authorize] +public class HomeController : Controller +{ ++ public async Task GetIndex() + { + var graphServiceClient = this.GetGraphServiceClient(); + var user = await graphServiceClient.Me.Request().GetAsync(); + try + { + using (var photoStream = await graphServiceClient.Me.Photo.Content.Request().GetAsync()) + { + byte[] photoByte = ((MemoryStream)photoStream).ToArray(); + ViewData["photo"] = Convert.ToBase64String(photoByte); + } + ViewData["name"] = user.DisplayName; + } + catch (Exception) + { + ViewData["photo"] = null; + } + } +} +``` ++#### Option 2: Call a downstream web API with the helper class from OWIN app ++You want to call a web API other than Microsoft Graph. In that case, you've added `AddDownstreamApi` in *Startup.cs* as specified in [Code configuration](scenario-web-app-call-api-app-configuration.md#option-2-call-a-downstream-web-api-other-than-microsoft-graph), and you can get `IDownstreamApi` service in your controller by calling the `GetDownstreamApi` extension method on the controller: ++```csharp +[Authorize] +public class TodoListController : Controller +{ + public async Task<ActionResult> Details(int id) + { + var downstreamApi = this.GetDownstreamApi(); + var value = await downstreamApi.CallApiForUserAsync( + ServiceName, + options => + { + options.RelativePath = $"me"; + }); + return View(value); + } +} +``` ++The `CallApiForUserAsync` also has strongly typed generic overrides that enable you to directly receive an object. For example, the following method receives a `Todo` instance, which is a strongly typed representation of the JSON returned by the web API. ++```csharp + // GET: TodoList/Details/5 + public async Task<ActionResult> Details(int id) + { + var downstreamApi = this.GetDownstreamApi(); + var value = await downstreamApi.GetForUserAsync<object, Todo>( + ServiceName, + null, + options => + { + options.RelativePath = $"api/todolist/{id}"; + }); + return View(value); + } + ``` ++#### Option 3: Call a downstream web API without the helper class from OWIN app ++You've decided to acquire an authorization header using the `IAuthorizationHeaderProvider` service, and you now need to use it in your `HttpClient` or `HttpRequest`. 
In that case, the following code continues the example code shown in [A web API that calls web APIs: Acquire a token for the app](scenario-web-api-call-api-acquire-token.md). The code is called in the actions of the web API controllers.
+
+```csharp
+public async Task<IActionResult> Profile()
+{
+    // Acquire the authorization header.
+    string[] scopes = new string[]{"user.read"};
+    IAuthorizationHeaderProvider authorizationHeaderProvider = this.GetAuthorizationHeaderProvider();
+    string authorizationHeader = await authorizationHeaderProvider.CreateAuthorizationHeaderForUserAsync(scopes);
+
+    // Use the authorization header to call a protected web API.
+    HttpClient httpClient = new HttpClient();
+    httpClient.DefaultRequestHeaders.Add("Authorization", authorizationHeader);
+
+    var response = await httpClient.GetAsync($"{webOptions.GraphApiUrl}/beta/me");
+
+    if (response.StatusCode == HttpStatusCode.OK)
+    {
+        var content = await response.Content.ReadAsStringAsync();
+
+        dynamic me = JsonConvert.DeserializeObject(content);
+        ViewData["Me"] = me;
+    }
+
+    return View();
+}
+```
+
 # [Java](#tab/java)

 The following code continues the example code that's shown in [A web API that calls web APIs: Acquire a token for the app](scenario-web-api-call-api-acquire-token.md). The code is called in the actions of the API controllers. It calls the downstream API MS Graph. |
active-directory | Scenario Web App Call Api Acquire Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md | -You've built your client application object. Now, you'll use it to acquire a token to call a web API. In ASP.NET or ASP.NET Core, calling a web API is done in the controller: +You've built your client application object. Now, you use it to acquire a token to call a web API. In ASP.NET or ASP.NET Core, calling a web API is done in the controller: - Get a token for the web API by using the token cache. To get this token, you call the Microsoft Authentication Library (MSAL) `AcquireTokenSilent` method (or the equivalent in Microsoft.Identity.Web). - Call the protected API, passing the access token to it as a parameter. You've built your client application object. Now, you'll use it to acquire a tok *Microsoft.Identity.Web* adds extension methods that provide convenience services for calling Microsoft Graph or a downstream web API. These methods are explained in detail in [A web app that calls web APIs: Call an API](scenario-web-app-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token. -If, however, you do want to manually acquire a token, the following code shows an example of using *Microsoft.Identity.Web* to do so in a home controller. It calls Microsoft Graph using the REST API (instead of the Microsoft Graph SDK). To get a token to call the downstream API, you inject the `ITokenAcquisition` service by dependency injection in your controller's constructor (or your page constructor if you use Blazor), and you use it in your controller actions, getting a token for the user (`GetAccessTokenForUserAsync`) or for the application itself (`GetAccessTokenForAppAsync`) in a daemon scenario. +If, however, you do want to manually acquire a token, the following code shows an example of using *Microsoft.Identity.Web* to do so in a home controller. It calls Microsoft Graph using the REST API (instead of the Microsoft Graph SDK). Usually, you don't need the token itself; you need to build an Authorization header that you add to your request. To get an authorization header, you inject the `IAuthorizationHeaderProvider` service by dependency injection in your controller's constructor (or your page constructor if you use Blazor), and you use it in your controller actions. This interface has methods that produce a string containing the protocol (Bearer, Pop, ...) and a token. To get an authorization header to call an API on behalf of the user, use `CreateAuthorizationHeaderForUserAsync`. To get an authorization header to call a downstream API on behalf of the application itself, in a daemon scenario, use `CreateAuthorizationHeaderForAppAsync`. The controller methods are protected by an `[Authorize]` attribute that ensures only authenticated users can use the web app.
The controller methods are protected by an `[Authorize]` attribute that ensures [Authorize] public class HomeController : Controller {- readonly ITokenAcquisition tokenAcquisition; + readonly IAuthorizationHeaderProvider authorizationHeaderProvider; - public HomeController(ITokenAcquisition tokenAcquisition) + public HomeController(IAuthorizationHeaderProvider authorizationHeaderProvider) {- this.tokenAcquisition = tokenAcquisition; + this.authorizationHeaderProvider = authorizationHeaderProvider; } // Code for the controller actions (see code below) public class HomeController : Controller } ``` -The `ITokenAcquisition` service is injected by ASP.NET by using dependency injection. +ASP.NET Core makes `IAuthorizationHeaderProvider` available by dependency injection. Here's simplified code for the action of the `HomeController`, which gets a token to call Microsoft Graph: public async Task<IActionResult> Profile() { // Acquire the access token. string[] scopes = new string[]{"user.read"};- string accessToken = await tokenAcquisition.GetAccessTokenForUserAsync(scopes); + string accessToken = await authorizationHeaderProvider.CreateAuthorizationHeaderForUserAsync(scopes); // Use the access token to call a protected web API. HttpClient client = new HttpClient();- client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken); + client.DefaultRequestHeaders.Add("Authorization", authorizationHeader); string json = await client.GetStringAsync(url); } ``` These advanced steps are covered in chapter 3 of the [3-WebApp-multi-APIs](https The code for ASP.NET is similar to the code shown for ASP.NET Core: - A controller action, protected by an [Authorize] attribute, extracts the tenant ID and user ID of the `ClaimsPrincipal` member of the controller. (ASP.NET uses `HttpContext.User`.)-- From there, it builds an MSAL.NET `IConfidentialClientApplication` object.-- Finally, it calls the `AcquireTokenSilent` method of the confidential client application.-- If interaction is required, the web app needs to challenge the user (re-sign in) and ask for more claims.+*Microsoft.Identity.Web* adds extension methods to the Controller that provide convenience services to call Microsoft Graph or a downstream web API, or to get an authorization header, or even a token. The methods used to call an API directly are explained in detail in [A web app that calls web APIs: Call an API](scenario-web-app-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token. ->[!NOTE] ->The scope should be the fully qualified scope name. For example,`({api_uri}/scope)`. +If, however, you do want to manually acquire a token or build an authorization header, the following code shows how to use *Microsoft.Identity.Web* to do so in a controller. It calls an API (Microsoft Graph) using the REST API instead of the Microsoft Graph SDK. -The following code snippet is extracted from [HomeController.cs#L157-L192](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/257c8f96ec3ff875c351d1377b36403eed942a18/WebApp/Controllers/HomeController.cs#L157-L192) in the [ms-identity-aspnet-webapp-openidconnect](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) ASP.NET MVC code sample: +To get an authorization header, you get an `IAuthorizationHeaderProvider` service from the controller using an extension method `GetAuthorizationHeaderProvider`. 
To get an authorization header to call an API on behalf of the user, use `CreateAuthorizationHeaderForUserAsync`. To get an authorization header to call a downstream API on behalf of the application itself, in a daemon scenario, use `CreateAuthorizationHeaderForAppAsync`. -```C# -public async Task<ActionResult> ReadMail() -{ - IConfidentialClientApplication app = MsalAppBuilder.BuildConfidentialClientApplication(); - AuthenticationResult result = null; - var account = await app.GetAccountAsync(ClaimsPrincipal.Current.GetMsalAccountId()); - string[] scopes = { "Mail.Read" }; -- try - { - // try to get token silently - result = await app.AcquireTokenSilent(scopes, account).ExecuteAsync().ConfigureAwait(false); - } - catch (MsalUiRequiredException) - { - ViewBag.Relogin = "true"; - return View(); - } +The controller methods are protected by an `[Authorize]` attribute that ensures only authenticated users can use the web app. +++The following snippet shows the action of the `HomeController`, which gets an authorization header to call Microsoft Graph as a REST API: - // More code here - return View(); ++```csharp +[Authorize] +public class HomeController : Controller +{ + [AuthorizeForScopes(Scopes = new[] { "user.read" })] + public async Task<IActionResult> Profile() + { + // Get an authorization header. + IAuthorizationHeaderProvider authorizationHeaderProvider = this.GetAuthorizationHeaderProvider(); + string[] scopes = new string[]{"user.read"}; + string authorizationHeader = await authorizationHeaderProvider.CreateAuthorizationHeaderForUserAsync(scopes); ++ // Use the access token to call a protected web API. + HttpClient client = new HttpClient(); + client.DefaultRequestHeaders.Add("Authorization", authorizationHeader); + string json = await client.GetStringAsync(url); + } } ``` -For details see the code for [GetMsalAccountId](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/257c8f96ec3ff875c351d1377b36403eed942a18/WebApp/Utils/ClaimPrincipalExtension.cs#L38) in the code sample. +The following snippet shows the action of the `HomeController`, which gets an access token to call Microsoft Graph as a REST API: ++```csharp +[Authorize] +public class HomeController : Controller +{ + [AuthorizeForScopes(Scopes = new[] { "user.read" })] + public async Task<IActionResult> Profile() + { + // Get an authorization header. + ITokenAcquirer tokenAcquirer = TokenAcquirerFactory.GetDefaultInstance().GetTokenAcquirer(); + string[] scopes = new string[]{"user.read"}; + string token = await await tokenAcquirer.GetTokenForUserAsync(scopes); ++ // Use the access token to call a protected web API. + HttpClient client = new HttpClient(); + client.DefaultRequestHeaders.Add("Authorization", $"Bearer {token}"); + string json = await client.GetStringAsync(url); + } +} +``` # [Java](#tab/java) public ModelAndView getUserFromGraph(HttpServletRequest httpRequest, HttpServlet # [Python](#tab/python) -In the Python sample, the code that calls the API is in [app.py#L60-71](https://github.com/Azure-Samples/ms-identity-python-webapp/blob/0.5.0/app.py#L60-71). +In the Python sample, the code that calls the API is in `app.py`. The code attempts to get a token from the token cache. If it can't get a token, it redirects the user to the sign-in route. Otherwise, it can proceed to call the API. |
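The snippets above build an authorization header on behalf of the signed-in user. The article also mentions `CreateAuthorizationHeaderForAppAsync` for daemon-style calls; here's a minimal sketch of that variant. The class name, the `/.default` scope, and the Graph URL are illustrative assumptions:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Identity.Abstractions;

public class DaemonGraphCaller
{
    private readonly IAuthorizationHeaderProvider _authorizationHeaderProvider;
    private readonly HttpClient _httpClient = new HttpClient();

    public DaemonGraphCaller(IAuthorizationHeaderProvider authorizationHeaderProvider)
    {
        _authorizationHeaderProvider = authorizationHeaderProvider;
    }

    public async Task<string> GetUsersJsonAsync()
    {
        // App-only calls use the resource's /.default scope instead of delegated scopes.
        string authorizationHeader = await _authorizationHeaderProvider
            .CreateAuthorizationHeaderForAppAsync("https://graph.microsoft.com/.default");

        _httpClient.DefaultRequestHeaders.Remove("Authorization");
        _httpClient.DefaultRequestHeaders.Add("Authorization", authorizationHeader);
        return await _httpClient.GetStringAsync("https://graph.microsoft.com/v1.0/users");
    }
}
```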
active-directory | Scenario Web App Call Api App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md | Select the tab for the platform you're interested in: ## Client secrets or client certificates -Given that your web app now calls a downstream web API, provide a client secret or client certificate in the *appsettings.json* file. You can also add a section that specifies: --- The URL of the downstream web API-- The scopes required for calling the API--In the following example, the `GraphBeta` section specifies these settings. --```JSON -{ - "AzureAd": { - "Instance": "https://login.microsoftonline.com/", - "ClientId": "[Client_id-of-web-app-eg-2ec40e65-ba09-4853-bcde-bcb60029e596]", - "TenantId": "common", -- // To call an API - "ClientSecret": "[Copy the client secret added to the app from the Azure portal]", - "ClientCertificates": [ - ] - }, - "GraphBeta": { - "BaseUrl": "https://graph.microsoft.com/beta", - "Scopes": "user.read" - } -} -``` --Instead of a client secret, you can provide a client certificate. The following code snippet shows using a certificate stored in Azure Key Vault. --```JSON -{ - "AzureAd": { - "Instance": "https://login.microsoftonline.com/", - "ClientId": "[Client_id-of-web-app-eg-2ec40e65-ba09-4853-bcde-bcb60029e596]", - "TenantId": "common", -- // To call an API - "ClientCertificates": [ - { - "SourceType": "KeyVault", - "KeyVaultUrl": "https://msidentitywebsamples.vault.azure.net", - "KeyVaultCertificateName": "MicrosoftIdentitySamplesCert" - } - ] - }, - "GraphBeta": { - "BaseUrl": "https://graph.microsoft.com/beta", - "Scopes": "user.read" - } -} -``` --*Microsoft.Identity.Web* provides several ways to describe certificates, both by configuration or by code. For details, see [Microsoft.Identity.Web - Using certificates](https://github.com/AzureAD/microsoft-identity-web/wiki/Using-certificates) on GitHub. ## Startup.cs -Your web app needs to acquire a token for the downstream API. You specify it by adding the `.EnableTokenAcquisitionToCallDownstreamApi()` line after `.AddMicrosoftIdentityWebApp(Configuration)`. This line exposes the `ITokenAcquisition` service that you can use in your controller and page actions. However, as you'll see in the following two options, it can be done more simply. You'll also need to choose a token cache implementation, for example `.AddInMemoryTokenCaches()`, in *Startup.cs*: +Your web app needs to acquire a token for the downstream API. You specify it by adding the `.EnableTokenAcquisitionToCallDownstreamApi()` line after `.AddMicrosoftIdentityWebApp(Configuration)`. This line exposes the `IAuthorizationHeaderProvider` service that you can use in your controller and page actions. However, as you see in the following two options, it can be done more simply. You also need to choose a token cache implementation, for example `.AddInMemoryTokenCaches()`, in *Startup.cs*: ```csharp using Microsoft.Identity.Web; Your web app needs to acquire a token for the downstream API. You specify it by The scopes passed to `EnableTokenAcquisitionToCallDownstreamApi` are optional, and enable your web app to request the scopes and the user's consent to those scopes when they sign in. If you don't specify the scopes, *Microsoft.Identity.Web* enables an incremental consent experience. -If you don't want to acquire the token yourself, *Microsoft.Identity.Web* provides two mechanisms for calling a web API from a web app. 
The option you choose depends on whether you want to call Microsoft Graph or another API. +*Microsoft.Identity.Web* offers two mechanisms for calling a web API from a web app without you having to acquire a token. The option you choose depends on whether you want to call Microsoft Graph or another API. ### Option 1: Call Microsoft Graph If you want to call Microsoft Graph, *Microsoft.Identity.Web* enables you to dir ### Option 2: Call a downstream web API other than Microsoft Graph -To call a web API other than Microsoft Graph, *Microsoft.Identity.Web* provides `.AddDownstreamWebApi()`, which requests tokens and calls the downstream web API. +If you want to call an API other than Microsoft Graph, *Microsoft.Identity.Web* enables you to use the `IDownstreamApi` interface in your API actions. To use this interface: ++1. Add the [Microsoft.Identity.Web.DownstreamApi](https://www.nuget.org/packages/Microsoft.Identity.Web.DownstreamApi) NuGet package to your project. +1. Add `.AddDownstreamApi()` after `.EnableTokenAcquisitionToCallDownstreamApi()` in the *Startup.cs* file. `.AddDownstreamApi()` has two arguments, and is shown in the following snippet: + - The name of a service (API), which is used in your controller actions to reference the corresponding configuration + - a configuration section representing the parameters used to call the downstream web API. + ```csharp using Microsoft.Identity.Web; To call a web API other than Microsoft Graph, *Microsoft.Identity.Web* provides services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApp(Configuration, "AzureAd") .EnableTokenAcquisitionToCallDownstreamApi(new string[]{"user.read" })- .AddDownstreamWebApi("MyApi", Configuration.GetSection("GraphBeta")) + .AddDownstreamApi("MyApi", Configuration.GetSection("GraphBeta")) .AddInMemoryTokenCaches(); // ... } The following image shows the various possibilities of *Microsoft.Identity.Web* # [ASP.NET](#tab/aspnet) -Because user sign-in is delegated to the OpenID Connect (OIDC) middleware, you must interact with the OIDC process. How you interact depends on the framework you use. -For ASP.NET, you'll subscribe to middleware OIDC events: +## Client secrets or client certificates +++## Startup.Auth.cs ++Your web app needs to acquire a token for the downstream API, *Microsoft.Identity.Web* provides two mechanisms for calling a web API from a web app. The option you choose depends on whether you want to call Microsoft Graph or another API. ++### Option 1: Call Microsoft Graph ++If you want to call Microsoft Graph, *Microsoft.Identity.Web* enables you to directly use the `GraphServiceClient` (exposed by the Microsoft Graph SDK) in your API actions. To expose Microsoft Graph: ++1. Add the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) NuGet package to your project. +1. Add `.AddMicrosoftGraph()` to the service collection in the *Startup.Auth.cs* file. `.AddMicrosoftGraph()` has several overrides. 
Using the override that takes a configuration section as a parameter, the code becomes: ++ ```csharp + using Microsoft.Extensions.DependencyInjection; + using Microsoft.Identity.Client; + using Microsoft.Identity.Web; + using Microsoft.Identity.Web.OWIN; + using Microsoft.Identity.Web.TokenCacheProviders.InMemory; + using Microsoft.IdentityModel.Validators; + using Microsoft.Owin.Security; + using Microsoft.Owin.Security.Cookies; + using Owin; ++ namespace WebApp + { + public partial class Startup + { + public void ConfigureAuth(IAppBuilder app) + { + app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType); ++ app.UseCookieAuthentication(new CookieAuthenticationOptions()); ++ // Get an TokenAcquirerFactory specialized for OWIN + OwinTokenAcquirerFactory owinTokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance<OwinTokenAcquirerFactory>(); ++ // Configure the web app. + app.AddMicrosoftIdentityWebApp(owinTokenAcquirerFactory, + updateOptions: options => {}); ++ // Add the services you need. + owinTokenAcquirerFactory.Services + .Configure<ConfidentialClientApplicationOptions>(options => + { options.RedirectUri = "https://localhost:44326/"; }) + .AddMicrosoftGraph() + .AddInMemoryTokenCaches(); + owinTokenAcquirerFactory.Build(); + } + } + } + ``` ++### Option 2: Call a downstream web API other than Microsoft Graph ++If you want to call an API other than Microsoft Graph, *Microsoft.Identity.Web* enables you to use the `IDownstreamApi` interface in your API actions. To use this interface: ++1. Add the [Microsoft.Identity.Web.DownstreamApi](https://www.nuget.org/packages/Microsoft.Identity.Web.DownstreamApi) NuGet package to your project. +1. Add `.AddDownstreamApi()` after `.EnableTokenAcquisitionToCallDownstreamApi()` in the *Startup.cs* file. `.AddDownstreamApi()` has two arguments: + - The name of a service (api): you use this name in your controller actions to reference the corresponding configuration + - a configuration section representing the parameters used to call the downstream web API. ++Here's the code: ++ ```csharp + using Microsoft.Extensions.DependencyInjection; + using Microsoft.Identity.Client; + using Microsoft.Identity.Web; + using Microsoft.Identity.Web.OWIN; + using Microsoft.Identity.Web.TokenCacheProviders.InMemory; + using Microsoft.IdentityModel.Validators; + using Microsoft.Owin.Security; + using Microsoft.Owin.Security.Cookies; + using Owin; ++ namespace WebApp + { + public partial class Startup + { + public void ConfigureAuth(IAppBuilder app) + { + app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType); ++ app.UseCookieAuthentication(new CookieAuthenticationOptions()); ++ // Get an TokenAcquirerFactory specialized for OWIN. + OwinTokenAcquirerFactory owinTokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance<OwinTokenAcquirerFactory>(); ++ // Configure the web app. + app.AddMicrosoftIdentityWebApp(owinTokenAcquirerFactory, + updateOptions: options => {}); ++ // Add the services you need. + owinTokenAcquirerFactory.Services + .Configure<ConfidentialClientApplicationOptions>(options => + { options.RedirectUri = "https://localhost:44326/"; }) + .AddDownstreamApi("Graph", owinTokenAcquirerFactory.Configuration.GetSection("GraphBeta")) + .AddInMemoryTokenCaches(); + owinTokenAcquirerFactory.Build(); + } + } + } + ``` ++### Summary ++You can choose various token cache implementations. 
For details, see [Microsoft.Identity.Web - Token cache serialization](https://aka.ms/ms-id-web/token-cache-serialization) on GitHub. ++The following image shows the various possibilities of *Microsoft.Identity.Web* and their effect on the *Startup.cs* file: + -- You'll let ASP.NET Core request an authorization code by means of the Open ID Connect middleware. ASP.NET or ASP.NET Core will let the user sign in and consent.-- You'll subscribe the web app to receive the authorization code. This subscription is done by using a C# delegate.-- When the authorization code is received, the code uses MSAL libraries to redeem it. The resulting access tokens and refresh tokens are stored in the token cache. The cache can be used in other parts of the application, such as controllers, to acquire other tokens silently.+> [!NOTE] +> To fully understand the code examples here, be familiar with [ASP.NET Core fundamentals](/aspnet/core/fundamentals), and in particular with [dependency injection](/aspnet/core/fundamentals/dependency-injection) and [options](/aspnet/core/fundamentals/configuration/options). Code examples in this article and the following one are extracted from the [ASP.NET Web app sample](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect). You might want to refer to that sample for full implementation details. Microsoft.Identity.Web simplifies your code by setting the correct OpenID Connec # [ASP.NET](#tab/aspnet) -ASP.NET handles things similarly to ASP.NET Core, except that the configuration of OpenID Connect and the subscription to the `OnAuthorizationCodeReceived` event happen in the [App_Start\Startup.Auth.cs](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/a2da310539aa613b77da1f9e1c17585311ab22b7/WebApp/App_Start/Startup.Auth.cs) file. The concepts are also similar to those in ASP.NET Core, except that in ASP.NET you must specify the `RedirectUri` in [Web.config#L15](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/master/WebApp/Web.config#L15). This configuration is a bit less robust than the one in ASP.NET Core, because you need to change it when you deploy your application. --Here's the code for Startup.Auth.cs: --```csharp -public partial class Startup -{ - public void ConfigureAuth(IAppBuilder app) - { - app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType); -- app.UseCookieAuthentication(new CookieAuthenticationOptions()); -- // Custom middleware initialization. This is activated when the code obtained from a code_grant is present in the query string (&code=<code>). - app.UseOAuth2CodeRedeemer( - new OAuth2CodeRedeemerOptions - { - ClientId = AuthenticationConfig.ClientId, - ClientSecret = AuthenticationConfig.ClientSecret, - RedirectUri = AuthenticationConfig.RedirectUri - } - ); -- app.UseOpenIdConnectAuthentication( - new OpenIdConnectAuthenticationOptions - { - // The `Authority` represents the v2.0 endpoint - https://login.microsoftonline.com/common/v2.0. 
- Authority = AuthenticationConfig.Authority, - ClientId = AuthenticationConfig.ClientId, - RedirectUri = AuthenticationConfig.RedirectUri, - PostLogoutRedirectUri = AuthenticationConfig.RedirectUri, - Scope = AuthenticationConfig.BasicSignInScopes + " Mail.Read", // A basic set of permissions for user sign-in and profile access "openid profile offline_access" - TokenValidationParameters = new TokenValidationParameters - { - ValidateIssuer = false, - // In a real application, you would use IssuerValidator for additional checks, such as making sure the user's organization has signed up for your app. - // IssuerValidator = (issuer, token, tvp) => - // { - // //if(MyCustomTenantValidation(issuer)) - // return issuer; - // //else - // // throw new SecurityTokenInvalidIssuerException("Invalid issuer"); - // }, - //NameClaimType = "name", - }, - Notifications = new OpenIdConnectAuthenticationNotifications() - { - AuthorizationCodeReceived = OnAuthorizationCodeReceived, - AuthenticationFailed = OnAuthenticationFailed, - } - }); - } -- private async Task OnAuthorizationCodeReceived(AuthorizationCodeReceivedNotification context) - { - // Upon successful sign-in, get the access token and cache it by using MSAL. - IConfidentialClientApplication clientApp = MsalAppBuilder.BuildConfidentialClientApplication(new ClaimsPrincipal(context.AuthenticationTicket.Identity)); - AuthenticationResult result = await clientApp.AcquireTokenByAuthorizationCode(new[] { "Mail.Read" }, context.Code).ExecuteAsync(); - } -- private Task OnAuthenticationFailed(AuthenticationFailedNotification<OpenIdConnectMessage, OpenIdConnectAuthenticationOptions> notification) - { - notification.HandleResponse(); - notification.Response.Redirect("/Error?message=" + notification.Exception.Message); - return Task.FromResult(0); - } -} -``` +*Microsoft.Identity.Web.OWIN* simplifies your code by setting the correct OpenID Connect settings, subscribing to the code received event, and redeeming the code. No extra code is required to redeem the authorization code. See [Microsoft.Identity.Web source code](https://github.com/AzureAD/microsoft-identity-web/blob/9fdcf15c66819b31b1049955eed5d3e5391656f5/src/Microsoft.Identity.Web.OWIN/AppBuilderExtension.cs#L95) for details on how this works. # [Java](#tab/java) The Microsoft sign-in screen sends the authorization code to the `/getAToken` UR :::code language="python" source="~/ms-identity-python-webapp-tutorial/app.py" range="36-41"::: -See [app.py](https://github.com/Azure-Samples/ms-identity-python-webapp/blob/0.5.0/app.py#L36-41) for the full context of that code. +See `app.py` for the full context of that code. For details about the token-cache providers, see also Microsoft.Identity.Web's [ # [ASP.NET](#tab/aspnet) -The token-cache implementation for web apps or web APIs is different from the implementation for desktop applications, which is often [file based](msal-net-token-cache-serialization.md). +The ASP.NET tutorial uses dependency injection to let you decide the token cache implementation in the *Startup.Auth.cs* file for your application. *Microsoft.Identity.Web* comes with prebuilt token-cache serializers described in [Token cache serialization](msal-net-token-cache-serialization.md). An interesting possibility is to choose ASP.NET Core [distributed memory caches](/aspnet/core/performance/caching/distributed#distributed-memory-cache): -The web-app implementation can use the ASP.NET session or the server memory. 
For example, see how the cache implementation is hooked after the creation of the MSAL.NET application in [MsalAppBuilder.cs#L39-L51](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/79e3e1f084cd78f9170a8ca4077869f217735a1a/WebApp/Utils/MsalAppBuilder.cs#L57-L58): ---First, to use these implementations: -- add the Microsoft.Identity.Web NuGet package. These token cache serializers aren't brought in MSAL.NET directly to avoid unwanted dependencies. In addition to a higher level for ASP.NET Core, Microsoft.Identity.Web brings classes that are helpers for MSAL.NET, -- In your code, use the Microsoft.Identity.Web namespace:+```csharp +var services = owinTokenAcquirerFactory.Services; +// Use a distributed token cache by adding: +services.AddDistributedTokenCaches(); - ```csharp - #using Microsoft.Identity.Web - ``` -- Once you have built your confidential client application, add the token cache serialization of your choice.+// Then, choose your implementation. +// For instance, the distributed in-memory cache (not cleared when you stop the app): +services.AddDistributedMemoryCache(); -```csharp -public static class MsalAppBuilder +// Or a Redis cache: +services.AddStackExchangeRedisCache(options => {- private static IConfidentialClientApplication clientapp; + options.Configuration = "localhost"; + options.InstanceName = "SampleInstance"; +}); - public static IConfidentialClientApplication BuildConfidentialClientApplication() - { - if (clientapp == null) - { - clientapp = ConfidentialClientApplicationBuilder.Create(AuthenticationConfig.ClientId) - .WithClientSecret(AuthenticationConfig.ClientSecret) - .WithRedirectUri(AuthenticationConfig.RedirectUri) - .WithAuthority(new Uri(AuthenticationConfig.Authority)) - .Build(); -- // After the ConfidentialClientApplication is created, we overwrite its default UserTokenCache serialization with our implementation - clientapp.AddInMemoryTokenCache(); - } - return clientapp; - } +// Or even a SQL Server token cache: +services.AddDistributedSqlServerCache(options => +{ + options.ConnectionString = _config["DistCache_ConnectionString"]; + options.SchemaName = "dbo"; + options.TableName = "TestCache"; +}); ``` -Instead of `clientapp.AddInMemoryTokenCache()`, you can also use more advanced cache serialization implementations like Redis, SQL, Cosmos DB, or distributed memory. Here's an example for Redis: --```csharp - clientapp.AddDistributedTokenCache(services => - { - services.AddStackExchangeRedisCache(options => - { - options.Configuration = "localhost"; - options.InstanceName = "SampleInstance"; - }); - }); -``` +For details about the token-cache providers, see also the *Microsoft.Identity.Web* [Token cache serialization](https://aka.ms/ms-id-web/token-cache-serialization) article, and the [ASP.NET Core Web app tutorials | Token caches](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-2-TokenCache) phase of the web app's tutorial. For details see [Token cache serialization for MSAL.NET](./msal-net-token-cache-serialization.md). |
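When a distributed cache is used, the cache adapter itself can also be tuned. The following is a rough sketch only, assuming the `owinTokenAcquirerFactory` variable from the snippet above and that the default `MsalDistributedTokenCacheAdapterOptions` values are otherwise acceptable:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web.TokenCacheProviders.Distributed;

var services = owinTokenAcquirerFactory.Services;

// Use a distributed token cache backed by the in-memory IDistributedCache implementation.
services.AddDistributedTokenCaches();
services.AddDistributedMemoryCache();

// Optionally tune how MSAL entries are written to the distributed (L2) cache.
services.Configure<MsalDistributedTokenCacheAdapterOptions>(options =>
{
    options.DisableL1Cache = false;                     // keep the fast in-memory L1 cache in front of L2
    options.SlidingExpiration = TimeSpan.FromHours(1);  // inherited from DistributedCacheEntryOptions
});
```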
active-directory | Scenario Web App Call Api Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-call-api.md | public class IndexModel : PageModel } ``` +For a full sample, see [ASP.NET Core Web app that calls Microsoft Graph](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-1-Call-MSGraph/README.md) + #### Option 2: Call a downstream web API with the helper class -You want to call a web API other than Microsoft Graph. In that case, you've added `AddDownstreamWebApi` in *Startup.cs* as specified in [Code configuration](scenario-web-app-call-api-app-configuration.md#option-2-call-a-downstream-web-api-other-than-microsoft-graph), and you can directly inject an `IDownstreamWebApi` service in your controller or page constructor and use it in the actions: +You want to call a web API other than Microsoft Graph. In that case, you've added `AddDownstreamApi` in *Startup.cs* as specified in [Code configuration](scenario-web-app-call-api-app-configuration.md#option-2-call-a-downstream-web-api-other-than-microsoft-graph), and you can directly inject an `IDownstreamApi` service in your controller or page constructor and use it in the actions: ```csharp [Authorize] [AuthorizeForScopes(ScopeKeySection = "TodoList:Scopes")] public class TodoListController : Controller {- private IDownstreamWebApi _downstreamWebApi; + private IDownstreamApi _downstreamApi; private const string ServiceName = "TodoList"; - public TodoListController(IDownstreamWebApi downstreamWebApi) + public TodoListController(IDownstreamApi downstreamApi) {- _downstreamWebApi = downstreamWebApi; + _downstreamApi = downstreamApi; } public async Task<ActionResult> Details(int id) {- var value = await _downstreamWebApi.CallWebApiForUserAsync( + var value = await _downstreamApi.CallApiForUserAsync( ServiceName, options => { The `CallWebApiForUserAsync` also has strongly typed generic overrides that enab // GET: TodoList/Details/5 public async Task<ActionResult> Details(int id) {- var value = await _downstreamWebApi.CallWebApiForUserAsync<object, Todo>( + var value = await _downstreamApi.CallApiForUserAsync<object, Todo>( ServiceName, null, options => The `CallWebApiForUserAsync` also has strongly typed generic overrides that enab } ``` + For a full sample, see [ASP.NET Core Web app that calls an API](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-1-MyOrg) + #### Option 3: Call a downstream web API without the helper class -You've decided to acquire a token manually using the `ITokenAcquisition` service, and you now need to use the token. In that case, the following code continues the example code shown in [A web app that calls web APIs: Acquire a token for the app](scenario-web-app-call-api-acquire-token.md). The code is called in the actions of the web app controllers. +You've decided to acquire a token manually using the `IAuthorizationHeaderProvider` service, and you now need to use the token. In that case, the following code continues the example code shown in [A web app that calls web APIs: Acquire a token for the app](scenario-web-app-call-api-acquire-token.md). The code is called in the actions of the web app controllers. After you've acquired the token, use it as a bearer token to call the downstream API, in this case Microsoft Graph. public async Task<IActionResult> Profile() { // Acquire the access token. 
string[] scopes = new string[]{"user.read"};- string accessToken = await tokenAcquisition.GetAccessTokenForUserAsync(scopes); + string authorizationHeader = await IAuthorizationHeaderProvider.GetAuthorizationHeaderForUserAsync(scopes); // Use the access token to call a protected web API. HttpClient httpClient = new HttpClient();- httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken); + httpClient.DefaultRequestHeaders.Add("Authorization", authorizationHeader); var response = await httpClient.GetAsync($"{webOptions.GraphApiUrl}/beta/me"); public async Task<IActionResult> Profile() > > Most Azure web APIs provide an SDK that simplifies calling the API as is the case for Microsoft Graph. ++# [ASP.NET](#tab/aspnet) +++When you use *Microsoft.Identity.Web*, you have three usage options for calling an API: ++- [Option 1: Call Microsoft Graph with the Microsoft Graph SDK](#option-1-call-microsoft-graph-with-the-sdk-from-owin-app) +- [Option 2: Call a downstream web API with the helper class](#option-2-call-a-downstream-web-api-with-the-helper-class-from-owin-app) +- [Option 3: Call a downstream web API without the helper class](#option-3-call-a-downstream-web-api-without-the-helper-class-from-owin-app) ++#### Option 1: Call Microsoft Graph with the SDK from OWIN app ++You want to call Microsoft Graph. In this scenario, you've added `AddMicrosoftGraph` in *Startup.cs* as specified in [Code configuration](scenario-web-app-call-api-app-configuration.md#option-1-call-microsoft-graph), and you can get the `GraphServiceClient` in your controller or page constructor for use in the actions by using the `GetGraphServiceClient()` extension method on the controller. The following example displays the photo of the signed-in user. ++```csharp +[Authorize] +[AuthorizeForScopes(Scopes = new[] { "user.read" })] +public class HomeController : Controller +{ ++ public async Task GetIndex() + { + var graphServiceClient = this.GetGraphServiceClient(); + var user = await graphServiceClient.Me.Request().GetAsync(); + try + { + using (var photoStream = await graphServiceClient.Me.Photo.Content.Request().GetAsync()) + { + byte[] photoByte = ((MemoryStream)photoStream).ToArray(); + ViewData["photo"] = Convert.ToBase64String(photoByte); + } + ViewData["name"] = user.DisplayName; + } + catch (Exception) + { + ViewData["photo"] = null; + } + } +} +``` ++For a full sample, see [ASP.NET OWIN Web app that calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) ++#### Option 2: Call a downstream web API with the helper class from OWIN app ++You want to call a web API other than Microsoft Graph. In that case, you've added `AddDownstreamApi` in *Startup.cs* as specified in [Code configuration](scenario-web-app-call-api-app-configuration.md#option-2-call-a-downstream-web-api-other-than-microsoft-graph), and you can get `IDownstreamApi` service in your controller by calling the `GetDownstreamApi` extension method on the controller: ++```csharp +[Authorize] +public class TodoListController : Controller +{ + public async Task<ActionResult> Details(int id) + { + var downstreamApi = this.GetDownstreamApi(); + var value = await downstreamApi.CallApiForUserAsync( + ServiceName, + options => + { + options.RelativePath = $"me"; + }); + return View(value); + } +} +``` ++The `CallApiForUserAsync` also has strongly typed generic overrides that enable you to directly receive an object. 
For example, the following method receives a `Todo` instance, which is a strongly typed representation of the JSON returned by the web API. ++```csharp + // GET: TodoList/Details/5 + public async Task<ActionResult> Details(int id) + { + var downstreamApi = this.GetDownstreamApi(); + var value = await downstreamApi.CallApiForUserAsync<object, Todo>( + ServiceName, + null, + options => + { + options.HttpMethod = HttpMethod.Get; + options.RelativePath = $"api/todolist/{id}"; + }); + return View(value); + } + ``` ++#### Option 3: Call a downstream web API without the helper class from OWIN app ++You've decided to acquire an authorization header using the `IAuthorizationHeaderProvider` service, and you now need to use it in your HttpClient or HttpRequest. In that case, the following code continues the example code shown in [A web app that calls web APIs: Acquire a token for the app](scenario-web-app-call-api-acquire-token.md). The code is called in the actions of the web app controllers. ++```csharp +public async Task<IActionResult> Profile() +{ + // Acquire the access token. + string[] scopes = new string[]{"user.read"}; + var IAuthorizationHeaderProvider = this.GetAuthorizationHeaderProvider(); + string authorizationHeader = await IAuthorizationHeaderProvider.GetAuthorizationHeaderForUserAsync(scopes); ++ // Use the access token to call a protected web API. + HttpClient httpClient = new HttpClient(); + httpClient.DefaultRequestHeaders.Add("Authorization", authorizationHeader); ++ var response = await httpClient.GetAsync($"{webOptions.GraphApiUrl}/beta/me"); ++ if (response.StatusCode == HttpStatusCode.OK) + { + var content = await response.Content.ReadAsStringAsync(); ++ dynamic me = JsonConvert.DeserializeObject(content); + ViewData["Me"] = me; + } ++ return View(); +} +``` + # [Java](#tab/java) ```java |
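Both helper-class options above assume that the downstream web API was registered at startup. As a rough, illustrative sketch only (the `"TodoList"` service name and configuration section are placeholders, and the exact chain in the sample's *Startup.cs* may differ), the registration inside `ConfigureServices` of an ASP.NET Core app looks something like this:

```csharp
// Illustrative registration of a named downstream API for IDownstreamApi
// (assumes the usual Microsoft.Identity.Web using directives).
services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
    .EnableTokenAcquisitionToCallDownstreamApi()
    // "TodoList" is the service name later passed to CallApiForUserAsync;
    // the "TodoList" configuration section supplies BaseUrl and Scopes.
    .AddDownstreamApi("TodoList", Configuration.GetSection("TodoList"))
    .AddInMemoryTokenCaches();
```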
active-directory | Scenario Web App Call Api Production | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-production.md | Learn more by trying out the full, progressive tutorial for ASP.NET Core web app - Handles incremental consent. - Calls your own web API. -[ASP.NET Core web app tutorial](https://github.com/Azure-Samples/ms-identity-aspnetcore-webapp-tutorial#scope-of-this-tutorial) +> [!div class="nextstepaction"] +> [ASP.NET Core web app tutorial](https://github.com/Azure-Samples/ms-identity-aspnetcore-webapp-tutorial#scope-of-this-tutorial) <! Removing this diagram as it's already shown from the next step linked tutorial |
active-directory | Scenario Web App Call Api Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-sign-in.md | |
active-directory | Scenario Web App Sign User App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md | In the same way, the sign-out URI would be set to `https://localhost:44321/signo # [ASP.NET](#tab/aspnet) -In ASP.NET, the application is configured through the [Web.config](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/a2da310539aa613b77da1f9e1c17585311ab22b7/WebApp/Web.config#L12-L15) file, lines 12 to 15. --```xml -<?xml version="1.0" encoding="utf-8"?> -<!-- - For more information on how to configure your ASP.NET application, visit - https://go.microsoft.com/fwlink/?LinkId=301880 - --> -<configuration> - <appSettings> - <add key="webpages:Version" value="3.0.0.0" /> - <add key="webpages:Enabled" value="false" /> - <add key="ClientValidationEnabled" value="true" /> - <add key="UnobtrusiveJavaScriptEnabled" value="true" /> - <add key="ida:ClientId" value="[Enter your client ID, as obtained from the app registration portal]" /> - <add key="ida:ClientSecret" value="[Enter your client secret, as obtained from the app registration portal]" /> - <add key="ida:AADInstance" value="https://login.microsoftonline.com/{0}{1}" /> - <add key="ida:RedirectUri" value="https://localhost:44326/" /> - <add key="vs:EnableBrowserLink" value="false" /> - </appSettings> +In ASP.NET, the application is configured through the [appsettings.json](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/a2da310539aa613b77da1f9e1c17585311ab22b7/WebApp/Web.config#L12-L15) file, lines 12 to 15. ++```json +{ + "AzureAd": { + "Instance": "https://login.microsoftonline.com/", + "TenantId": "[Enter the tenantId here]", + "ClientId": "[Enter the Client Id here]", + } +} ``` In the Azure portal, the reply URIs that you register on the **Authentication** page for your application need to match these URLs. That is, they should be `https://localhost:44326/`. For simplicity in this article, the client secret is stored in the configuration The configuration parameters are set in *.env* as environment variables: -```bash -CLIENT_ID=<client id> -CLIENT_SECRET=<client secret> -TENANT_ID=<tenant id> -``` + Those environment variables are referenced in *app_config.py*: The *.env* file should never be checked into source control, since it contains secrets. The quickstart sample includes a *.gitignore* file that prevents the *.env* file from being checked in. In ASP.NET Core web apps (and web APIs), the application is protected because yo 1. Add the [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web) and [Microsoft.Identity.Web.UI](https://www.nuget.org/packages/Microsoft.Identity.Web.UI) NuGet packages to your project. Remove the `Microsoft.AspNetCore.Authentication.AzureAD.UI` NuGet package if it's present. -2. Update the code in `ConfigureServices` so that it uses the `AddMicrosoftIdentityWebAppAuthentication` and `AddMicrosoftIdentityUI` methods. +2. Update the code in `ConfigureServices` so that it uses the `AddMicrosoftIdentityWebApp` and `AddMicrosoftIdentityUI` methods. ```c# public class Startup In ASP.NET Core web apps (and web APIs), the application is protected because yo // This method gets called by the runtime. Use this method to add services to the container. 
public void ConfigureServices(IServiceCollection services) {- services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAd"); + services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme) + .AddMicrosoftIdentityWebApp(Configuration, "AzureAd"); services.AddRazorPages().AddMvcOptions(options => { In ASP.NET Core web apps (and web APIs), the application is protected because yo ``` In that code:-- The `AddMicrosoftIdentityWebAppAuthentication` extension method is defined in **Microsoft.Identity.Web**, which;- - Adds the authentication service. +- The `AddMicrosoftIdentityWebApp` extension method is defined in **Microsoft.Identity.Web**, which; - Configures options to read the configuration file (here from the "AzureAD" section) - Configures the OpenID Connect options so that the authority is the Microsoft identity platform. - Validates the issuer of the token. - Ensures that the claims corresponding to name are mapped from the `preferred_username` claim in the ID token. -- In addition to the configuration object, you can specify the name of the configuration section when calling `AddMicrosoftIdentityWebAppAuthentication`. By default, it's `AzureAd`.+- In addition to the configuration object, you can specify the name of the configuration section when calling `AddMicrosoftIdentityWebApp`. By default, it's `AzureAd`. -- `AddMicrosoftIdentityWebAppAuthentication` has other parameters for advanced scenarios. For example, tracing OpenID Connect middleware events can help you troubleshoot your web application if authentication doesn't work. Setting the optional parameter `subscribeToOpenIdConnectMiddlewareDiagnosticsEvents` to `true` will show you how information is processed by the set of ASP.NET Core middleware as it progresses from the HTTP response to the identity of the user in `HttpContext.User`.+- `AddMicrosoftIdentityWebApp` has other parameters for advanced scenarios. For example, tracing OpenID Connect middleware events can help you troubleshoot your web application if authentication doesn't work. Setting the optional parameter `subscribeToOpenIdConnectMiddlewareDiagnosticsEvents` to `true` will show you how information is processed by the set of ASP.NET Core middleware as it progresses from the HTTP response to the identity of the user in `HttpContext.User`. - The `AddMicrosoftIdentityUI` extension method is defined in **Microsoft.Identity.Web.UI**. It provides a default controller to handle sign-in and sign-out. For more information about how Microsoft.Identity.Web enables you to create web # [ASP.NET](#tab/aspnet) -The code related to authentication in an ASP.NET web app and web APIs is located in the [App_Start/Startup.Auth.cs](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/a2da310539aa613b77da1f9e1c17585311ab22b7/WebApp/App_Start/Startup.Auth.cs#L17-L61) file. +The code related to authentication in an ASP.NET web app and web APIs is located in the [App_Start/Startup.Auth.cs](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/master/WebApp/App_Start/Startup.Auth.cs) file. -```csharp +```c# public void ConfigureAuth(IAppBuilder app) { app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType); app.UseCookieAuthentication(new CookieAuthenticationOptions()); - app.UseOpenIdConnectAuthentication( - new OpenIdConnectAuthenticationOptions - { - // Authority` represents the identity platform endpoint - https://login.microsoftonline.com/common/v2.0. 
- // `Scope` describes the initial permissions that your app will need. - // See https://azure.microsoft.com/documentation/articles/active-directory-v2-scopes/. - ClientId = clientId, - Authority = String.Format(CultureInfo.InvariantCulture, aadInstance, "common", "/v2.0"), - RedirectUri = redirectUri, - Scope = "openid profile", - PostLogoutRedirectUri = redirectUri, - }); + OwinTokenAcquirerFactory factory = TokenAcquirerFactory.GetDefaultInstance<OwinTokenAcquirerFactory>(); + factory.Build(); + app.AddMicrosoftIdentityWebApi(factory); } ``` |
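To illustrate the `subscribeToOpenIdConnectMiddlewareDiagnosticsEvents` parameter mentioned earlier in this section, here is a minimal, illustrative fragment for `ConfigureServices` (the `"AzureAd"` section name is the default; turn the flag off again once you've finished troubleshooting):

```csharp
// Illustrative only: turn on OpenID Connect middleware diagnostics while troubleshooting sign-in.
services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(
        Configuration.GetSection("AzureAd"),
        subscribeToOpenIdConnectMiddlewareDiagnosticsEvents: true);
```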
active-directory | Cross Tenant Access Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md | The default cross-tenant access settings apply to all Azure AD organizations ext - **Organizational settings**: No organizations are added to your Organizational settings by default. This means all external Azure AD organizations are enabled for B2B collaboration with your organization. -- **Cross-tenant sync (preview)**: No users from other tenants are synchronized into your tenant with cross-tenant synchronization.+- **Cross-tenant sync**: No users from other tenants are synchronized into your tenant with cross-tenant synchronization. The behaviors described above apply to B2B collaboration with other Azure AD tenants in your same Microsoft Azure cloud. In cross-cloud scenarios, default settings work a little differently. See [Microsoft cloud settings](#microsoft-cloud-settings) later in this article. You can configure organization-specific settings by adding an organization and m ### Automatic redemption setting -> [!IMPORTANT] -> Automatic redemption is currently in preview. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - [!INCLUDE [automatic-redemption-include](../includes/automatic-redemption-include.md)] To configure this setting using Microsoft Graph, see the [Update crossTenantAccessPolicyConfigurationPartner](/graph/api/crosstenantaccesspolicyconfigurationpartner-update?view=graph-rest-beta&preserve-view=true) API. For information about building your own onboarding experience, see [B2B collaboration invitation manager](external-identities-overview.md#azure-ad-microsoft-graph-api-for-b2b-collaboration). For more information, see [Configure cross-tenant synchronization](../multi-tena ### Cross-tenant synchronization setting -> [!IMPORTANT] -> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in preview. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - [!INCLUDE [cross-tenant-synchronization-include](../includes/cross-tenant-synchronization-include.md)] To configure this setting using Microsoft Graph, see the [Update crossTenantIdentitySyncPolicyPartner](/graph/api/crosstenantidentitysyncpolicypartner-update?view=graph-rest-beta&preserve-view=true) API. For more information, see [Configure cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md). |
active-directory | Cross Tenant Access Settings B2b Collaboration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md | With inbound settings, you select which external users and groups will be able t ### Allow users to sync into this tenant -If you select **Inbound access** of the added organization, you'll see the **Cross-tenant sync (Preview)** tab and the **Allow users sync into this tenant** check box. Cross-tenant synchronization is a one-way synchronization service in Azure AD that automates creating, updating, and deleting B2B collaboration users across tenants in an organization. For more information, see [Configure cross-tenant synchronization](../../active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md) and the [Multi-tenant organizations documentation](../multi-tenant-organizations/index.yml). +If you select **Inbound access** of the added organization, you'll see the **Cross-tenant sync** tab and the **Allow users sync into this tenant** check box. Cross-tenant synchronization is a one-way synchronization service in Azure AD that automates creating, updating, and deleting B2B collaboration users across tenants in an organization. For more information, see [Configure cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md) and the [Multi-tenant organizations documentation](../multi-tenant-organizations/index.yml). :::image type="content" source="media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-sync-tab.png" alt-text="Screenshot that shows the Cross-tenant sync tab with the Allow users sync into this tenant check box." lightbox="media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-sync-tab.png"::: |
active-directory | Cross Tenant Access Settings B2b Direct Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md | With inbound settings, you select which external users and groups will be able t 1. Select **Save**. > [!NOTE]-> When configuring settings for an organization, you'll notice a **Cross-tenant sync (Preview)** tab. This tab doesn't apply to your B2B direct connect configuration. Instead, this feature is used by multi-tenant organizations to enable B2B collaboration across their tenants. For more information, see the [multi-tenant organization documentation](../multi-tenant-organizations/index.yml). +> When configuring settings for an organization, you'll notice a **Cross-tenant sync** tab. This tab doesn't apply to your B2B direct connect configuration. Instead, this feature is used by multi-tenant organizations to enable B2B collaboration across their tenants. For more information, see the [multi-tenant organization documentation](../multi-tenant-organizations/index.yml). ## Modify outbound access settings |
active-directory | External Identities Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-overview.md | Currently, B2B direct connect enables the Teams Connect shared channels feature, You use [cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) to manage trust relationships with other Azure AD organizations and define inbound and outbound policies for B2B direct connect. -For details about the resources, files, and applications, that are available to the B2B direct connect user via the Teams shared channel, refer to [Chat, teams, channels, & apps in Microsoft Teams](/microsoftteams/deploy-chat-teams-channels-microsoft-teams-landing-page). +For details about the resources, files, and applications that are available to the B2B direct connect user via the Teams shared channel, refer to [Chat, teams, channels, & apps in Microsoft Teams](/microsoftteams/deploy-chat-teams-channels-microsoft-teams-landing-page). ## Azure AD B2C The following table gives a detailed comparison of the scenarios you can enable | **Branding** | Host/inviting organization's brand is used. | For sign-in screens, the user's home organization brand is used. In the shared channel, the resource organization's brand is used. | Fully customizable branding per application or organization. | | **More information** | [Blog post](https://blogs.technet.microsoft.com/enterprisemobility/2017/02/01/azure-ad-b2b-new-updates-make-cross-business-collab-easy/), [Documentation](what-is-b2b.md) | [Documentation](b2b-direct-connect-overview.md) | [Product page](https://azure.microsoft.com/services/active-directory-b2c/), [Documentation](../../active-directory-b2c/index.yml) | -Based on your organization's requirements you might use cross-tenant synchronization (preview) in multi-tenant organizations. For more information about this new feature, see the [multi-tenant organization documentation](../multi-tenant-organizations/index.yml) and the [feature comparison](../multi-tenant-organizations/overview.md#compare-multi-tenant-capabilities). +Based on your organization's requirements, you might use cross-tenant synchronization in multi-tenant organizations. For more information about this new feature, see the [multi-tenant organization documentation](../multi-tenant-organizations/index.yml) and the [feature comparison](../multi-tenant-organizations/overview.md#compare-multi-tenant-capabilities). ## Managing External Identities features Azure AD B2B collaboration and B2B direct connect are features Azure AD, and the ### Cross-tenant access settings -Cross-tenant access settings let you manage B2B collaboration and B2B direct connect with other Azure AD organizations. You can determine how other Azure AD organizations collaborate with you (inbound access), and how your users collaborate with other Azure AD organizations (outbound access). Granular controls let you determine the people, groups, and apps, both in your organization and in external Azure AD organizations, that can participate in B2B collaboration and B2B direct connect. You can also trust multi-factor authentication (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations. +Cross-tenant access settings let you manage B2B collaboration and B2B direct connect with other Azure AD organizations. 
You can determine how other Azure AD organizations collaborate with you (inbound access), and how your users collaborate with other Azure AD organizations (outbound access). Granular controls let you determine the people, groups, and apps, both in your organization and in external Azure AD organizations that can participate in B2B collaboration and B2B direct connect. You can also trust multi-factor authentication (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations. - **Default cross-tenant access settings** determine your baseline inbound and outbound settings for both B2B collaboration and B2B direct connect. Initially, your default settings are configured to allow all inbound and outbound B2B collaboration with other Azure AD organizations and to block B2B direct connect with all Azure AD organizations. You can change these initial settings to create your own default configuration. Cross-tenant access settings let you manage B2B collaboration and B2B direct con For more information, see [Cross-tenant access in Azure AD External Identities](cross-tenant-access-overview.md). -Azure AD has a new feature for multi-tenant organizations called cross-tenant synchronization (preview), which allows for a seamless collaboration experience across Azure AD tenants. Cross-tenant synchronization settings are configured under the **Organization-specific access settings**. To learn more about multi-tenant organizations and cross-tenant synchronization see the [Multi-tenant organizations documentation](../multi-tenant-organizations/index.yml). +Azure AD has a feature for multi-tenant organizations called cross-tenant synchronization, which allows for a seamless collaboration experience across Azure AD tenants. Cross-tenant synchronization settings are configured under the **Organization-specific access settings**. To learn more about multi-tenant organizations and cross-tenant synchronization see the [Multi-tenant organizations documentation](../multi-tenant-organizations/index.yml). ### Microsoft cloud settings for B2B collaboration If you offer a Software as a Service (SaaS) application to many organizations, y ### Multi-tenant organizations -A multi-tenant organization is an organization that has more than one instance of Azure AD. There are various reasons for [multi-tenancy](../../active-directory/multi-tenant-organizations/overview.md#what-is-a-multi-tenant-organization), like using multiple clouds or having multiple geographical boundaries. Multi-tenant organizations use a one-way synchronization service in Azure AD, called [cross-tenant synchronization](../../active-directory/multi-tenant-organizations/overview.md#cross-tenant-synchronization-preview). Cross-tenant synchronization enables seamless collaboration for a multi-tenant organization. It improves user experience and ensures that users can access resources, without receiving an invitation email and having to accept a consent prompt in each tenant. Cross-tenant synchronization is currently in preview. +A multi-tenant organization is an organization that has more than one instance of Azure AD. There are various reasons for [multi-tenancy](../multi-tenant-organizations/overview.md#what-is-a-multi-tenant-organization), like using multiple clouds or having multiple geographical boundaries. Multi-tenant organizations use a one-way synchronization service in Azure AD, called [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md). 
Cross-tenant synchronization enables seamless collaboration for a multi-tenant organization. It improves user experience and ensures that users can access resources, without receiving an invitation email and having to accept a consent prompt in each tenant. ## Next steps |
active-directory | Redemption Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md | If you see an error that requires admin consent while accessing an application, ### Automatic redemption setting -> [!IMPORTANT] -> Automatic redemption is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - You might want to automatically redeem invitations so users don't have to accept the consent prompt when they're added to another tenant for B2B collaboration. When configured, a notification email is sent to the B2B collaboration user that requires no action from the user. Users are sent the notification email directly and they don't need to access the tenant first before they receive the email. The following shows an example notification email if you automatically redeem invitations in both tenants. :::image type="content" source="media/redemption-experience/email-consent-prompt-suppressed.png" alt-text="Screenshot that shows B2B notification email when the consent prompt is suppressed."::: |
active-directory | 11 Onboard External User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/11-onboard-external-user.md | + + Title: Onboard external users to Line-of-business applications using Azure Active Directory B2B +description: Learn how to onboard external users to Line-of-business applications using Azure Active Directory B2B +++++++ Last updated : 5/08/2023+++++++# Onboard external users to Line-of-business applications using Azure Active Directory B2B ++Application developers can use Azure Active Directory B2B (Azure AD B2B) to onboard and collaborate with external users within line-of-business (LOB) applications. Similar to the **Share** button in many Office 365 applications, application developers can create a one-click invitation experience within any LOB application that is integrated with Azure AD. ++Benefits include: ++- Simple and easy user onboarding and access to the LOB applications with users able to gain access with a few steps. ++- Enables external users to bring their own identity and perform Single sign-on (SSO). ++- Automatic provisioning of external identities to Azure AD. ++- Apply Azure AD Conditional Access and cross tenant access policies to enforce authorization policies such as requiring multi-factor authentication. ++## Integration flow ++To integrate LOB applications with Azure AD B2B, follow this pattern: ++ ++| Step | Description | +|:-|:--| +| 1. | The end user triggers the **invitation** within the LOB application and provides the email address of the external user. The application checks if the user already exists, and if they don't, proceeds to [step #2](#step-2-create-and-send-invitation)| +| 2. | The application sends a POST to the Microsoft Graph API on behalf of the user. It provides the redirect URL and external user's email that is defined in [step #1](#step-1-check-if-the-external-user-already-exists). | +| 3. | Microsoft Graph API provisions the guest user in Azure AD. | +| 4. | Microsoft Graph API returns the success/failure status of the API call. If successful, the response includes the Azure AD user object ID and the invitation link that is sent to the invited user's email. You can optionally suppress the Microsoft email and send your own custom email. | +| 5. | (Optional) If you want to write more attributes to the invited user or add the invited user to a group, the application makes an extra API call to the Microsoft Graph API. | +| 6. | (Optional) Microsoft Graph API makes the desired updates to Azure AD.| +| 7. | (Optional) Microsoft Graph API returns the success/failure status to the application. | +| 8. | The application provisions the user to its own database/backend user directory using the user's object ID attribute as the **immutable ID**. | +| 9. | The application presents the success/failure status to the end user. | ++If assignment is required to access the LOB application, the invited guest user must also be assigned to the application with an appropriate application role. This can be done as another API call adding the invited guest to a group (steps #5-7) or by automating group membership with Azure AD dynamic groups. Using dynamic groups wouldn't require another API call by the application. However, group membership wouldn't be updated as quickly compared to adding a user to a group immediately after user invitation. ++## Step 1: Check if the external user already exists ++It's possible that the external user has previously been invited and onboarded. 
The LOB application should check whether the user already exists in the directory. There are many approaches; however, the simplest involves making an API call to the Microsoft Graph API and presenting the possible matches to the inviting user for them to pick from. ++For example: ++``` +Application Permission: User.Read.All ++GET https://graph.microsoft.com/v1.0/users?$filter=otherMails/any(id:id eq 'userEmail@contoso.com') +``` +If you receive a user's details in the response, then the user already exists. You should present the users returned to the inviting user and allow them to choose which external user they want to grant access to. You should proceed to make appropriate API calls or trigger other processes to grant this user access to the application rather than proceeding with the invitation step. ++## Step 2: Create and send invitation ++If the external user doesn't already exist in the directory, you can use Azure AD B2B to invite the user and onboard them to your Azure AD tenant. As an application developer, you need to determine what to include in the invitation request to Microsoft Graph API. ++At minimum, you need to: ++- Prompt the end user to provide the external user's email address. ++- Determine the invitation URL. This URL is where the invited user gets redirected to after they authenticate and redeem the B2B invitation. The URL can be a generic landing page for the application or dynamically determined by the LOB application based on where the end user triggered the invitation. ++More flags and attributes to consider for inclusion in the invitation request: ++- Display name of the invited user. +- Determine whether you want to use the default Microsoft invitation email or suppress the default email to create your own. ++Once the application has collected the required information and determined any other flags or information to include, the application must POST the request to the Microsoft Graph API invitation manager. Ensure the application registration has the appropriate permissions in Azure AD. ++For example: ++``` +Delegated Permission: User.Invite.All ++POST https://graph.microsoft.com/v1.0/invitations +Content-type: application/json ++{ +"invitedUserDisplayName": "John Doe", +"invitedUserEmailAddress": "john.doe@contoso.com", +"sendInvitationMessage": true, +"inviteRedirectUrl": "https://customapp.contoso.com" +} +``` ++>[!NOTE] +> To see the full list of available options for the JSON body of the invitation, check out [invitation resource type - Microsoft Graph v1.0](https://learn.microsoft.com/graph/api/resources/invitation?view=graph-rest-1.0). ++Application developers can alternatively onboard external users using Azure AD Self-service sign-up or Entitlement management access packages. You can create your **invitation** button in your LOB application that triggers a custom email containing a predefined Self-service sign-up URL or access package URL. The invited user can then onboard through self-service and access the application. ++## Step 3: Write other attributes to Azure AD (optional) ++>[!IMPORTANT] +>Granting an application permission to update users in your directory is a highly privileged action. You should take steps to secure and monitor your LOB app if you grant the application these highly privileged permissions. ++Your organization or the LOB application may need to store more information for future use, such as emitting claims in tokens or applying granular authorization policies. 
Your application can make another API call to update the external user after they've been invited/created in Azure AD. Doing so requires your application to have extra API permissions and would require an extra call to the Microsoft Graph API. ++To update the user, you need to use the object ID of the newly created guest user received in the response from the invitation API call. This is the **ID** value in the API response from either the existence check or invitation. You can write to any standard attribute or custom extension attributes you may have created. ++For example: ++``` +Application Permission: User.ReadWrite.All ++PATCH https://graph.microsoft.com/v1.0/users/<user's object ID> +Content-type: application/json ++{ +"businessPhones": [ + "+1 234 567 8900" + ], +"givenName": "John", +"surname": "Doe", +"extension_cf4ff515cbf947218d468c96f9dc9021_appRole": "external" +} +``` +For more information, see [Update user - Microsoft Graph v1.0](https://learn.microsoft.com/graph/api/user-update?view=graph-rest-1.0&tabs=http). ++## Step 4: Assign the invited user to a group ++>[!NOTE] +>If user assignment is not required to access the application, you may skip this step. ++If user assignment is required in Azure AD for application access and/or role assignment, the user must be assigned to the application, or else the user is unable to gain access regardless of successful authentication. To achieve this, you should make another API call to add the invited external user to a specific group. The group can be assigned to the application and mapped to a specific application role. ++For example: ++Permissions: Assign the Group updater role or a custom role to the enterprise application and scope the role assignment to only the group(s) this application should be updating. Or assign the `Group.ReadWrite.All` permission in Microsoft Graph API. ++``` +POST https://graph.microsoft.com/v1.0/groups/<insert group id>/members/$ref +Content-type: application/json ++{ +"@odata.id": "https://graph.microsoft.com/v1.0/directoryObjects/<insert user id>" +} +``` +For more information, see [Add members - Microsoft Graph v1.0](https://learn.microsoft.com/graph/api/group-post-members?view=graph-rest-1.0&tabs=http). + +Alternatively, you can use Azure AD dynamic groups, which can automatically assign users to a group based on the user's attributes. However, if end-user access is time-sensitive, this wouldn't be the recommended approach as dynamic groups can take up to 24 hours to populate. ++If you prefer to use dynamic groups, you don't need to add the users to a group explicitly with another API call. Create a dynamic group that automatically adds the user as a member of the group based on available attributes such as userType, email, or a custom attribute. For more information, see [Create or edit a dynamic group and get status](../enterprise-users/groups-create-rule.md). + +## Step 5: Provision the invited user to the application ++Once the invited external user has been provisioned to Azure AD, the Microsoft Graph API returns a response with the necessary user information such as object ID and email. The LOB application can then provision the user to its own directory/database. Depending on the type of application and internal directory type the application uses, the actual implementation of this provisioning varies. ++With the external user provisioned in both Azure AD and the application, the LOB application can now notify the end user who initiated the invitation that the process has been successful. 
The invited user can get SSO with their own identity without the inviting organization needing to onboard and issue extra credentials. Azure AD can enforce authorization policies such as Conditional Access, Azure AD Multi-Factor Authentication, and risk-based Identity Protection. ++## Other considerations ++- Ensure proper error handling is done within the LOB application. The application should validate that each API call is successful. If unsuccessful, extra attempts and/or presenting error messages to the end user would be appropriate. ++- If you need the LOB application to update external users once they've been invited, consider granting a custom role that allows the application to only update users and assign the scope to a dynamic administrative unit. For example, you can create a dynamic administrative unit to contain all users where usertype = guest. Once the external user is onboarded to Azure AD, it takes some time for them to be added to the administrative unit. So, the LOB application needs to attempt to update the user after some time and it may take more than one attempt if there are delays. Despite these delays, this is the best approach available to enable the LOB application to update external users without granting it permission to update any user in the directory. |
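For illustration only, here is a rough C# sketch of steps 1, 2, and 4 of the flow above, using the Microsoft Graph .NET SDK's v4-style request builders (the same style used in the other C# samples in this log). The helper method name, email address, redirect URL, and group ID are placeholders, and the `GraphServiceClient` is assumed to be already authenticated with the permissions listed in each step:

```csharp
using System.Threading.Tasks;
using Microsoft.Graph;

public static class LobInvitationSketch
{
    // Hypothetical helper: check for an existing guest, invite if absent, then add to the access group.
    public static async Task InviteExternalUserAsync(GraphServiceClient graphClient)
    {
        string email = "john.doe@contoso.com";                 // collected from the inviting end user
        string redirectUrl = "https://customapp.contoso.com";  // landing page after invitation redemption
        string groupId = "<insert group id>";                  // group mapped to an application role

        // Step 1: check whether the external user already exists (User.Read.All).
        var existing = await graphClient.Users
            .Request()
            .Filter($"otherMails/any(id:id eq '{email}')")
            .GetAsync();
        if (existing.Count > 0)
        {
            // Present the matches to the inviting user and grant access instead of re-inviting.
            return;
        }

        // Step 2: create and send the invitation (User.Invite.All).
        var invitation = new Invitation
        {
            InvitedUserDisplayName = "John Doe",
            InvitedUserEmailAddress = email,
            SendInvitationMessage = true,
            InviteRedirectUrl = redirectUrl
        };
        Invitation result = await graphClient.Invitations.Request().AddAsync(invitation);

        // Step 4: add the invited guest to the group that grants application access.
        await graphClient.Groups[groupId].Members.References
            .Request()
            .AddAsync(new DirectoryObject { Id = result.InvitedUser.Id });
    }
}
```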
active-directory | Multi Tenant User Management Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-scenarios.md | Synchronized sharing across tenants is the most complex of the patterns describe In automated scenarios, resource tenant admins use an identity provisioning system to automate provisioning and deprovisioning processes. In scenarios within Microsoft's Commercial Cloud instance, we have [cross-tenant synchronization](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/seamless-application-access-and-lifecycle-management-for-multi/ba-p/3728752). In scenarios that span Microsoft Sovereign Cloud instances, you need other approaches because cross-tenant synchronization doesn't yet support cross-cloud. -For example, within a Microsoft Commercial Cloud instance, a multi-national conglomeration has multiple subsidiaries with the following requirements. +For example, within a Microsoft Commercial Cloud instance, a multi-national/regional conglomeration has multiple subsidiaries with the following requirements. - Each has their own Azure AD tenant and need to work together. - In addition to synchronizing new users among tenants, automatically synchronize attribute updates and automate deprovisioning. |
active-directory | Secure External Access Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md | Azure Active Directory (Azure AD) B2B integrates with other tools in Azure AD, a * [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) * [Control access with sensitivity labels](8-secure-access-sensitivity-labels.md) * [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)+* [Convert local guest accounts](10-secure-local-guest.md) +* [Onboard external users to Line-of-business applications](11-onboard-external-user.md) |
active-directory | Security Operations Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md | Sign-in logs where ResourceDisplayName == "Device Registration Service" - where conditionalAccessStatus == "success" + where ConditionalAccessStatus == "success" where AuthenticationRequirement <> "multiFactorAuthentication" ~~~ SigninLogs | where DeviceDetail.isCompliant == false -| where conditionalAccessStatus == "success" +| where ConditionalAccessStatus == "success" ``` **Sign-ins by unknown devices** |
active-directory | Access Reviews Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-overview.md | Depending on what you want to review, you'll either create your access review in [!INCLUDE [Azure AD Premium P2 license](../../../includes/active-directory-p2-license.md)] -### How many licenses must you have? --Your directory needs at least as many Azure AD Premium P2 licenses as the number of employees who will be performing the following tasks: --- Member users who are assigned as reviewers-- Member users who perform a self-review-- Member users as group owners who perform an access review-- Member users as application owners who perform an access review--For guest users, licensing needs will depend on the licensing model you're using. However, the below guest users' activities are considered Azure AD Premium P2 usage: --- Guest users who are assigned as reviewers-- Guest users who perform a self-review-- Guest users as group owners who perform an access review-- Guest users as application owners who perform an access review---Azure AD Premium P2 licenses are **not** required for users with the Global Administrator or User Administrator roles who set up access reviews, configure settings, or apply the decisions from the reviews. --Azure AD guest user access is based on a monthly active users (MAU) billing model, which replaces the 1:5 ratio billing model. For more information, see [Azure AD External Identities pricing](../external-identities/external-identities-pricing.md). --For more information about licenses, see [Assign or remove licenses using the Azure portal](../fundamentals/license-users-groups.md). --### Example license scenarios --Here are some example license scenarios to help you determine the number of licenses you must have. --| Scenario | Calculation | Number of licenses | -| | | | -| An administrator creates an access review of Group A with 75 users and 1 group owner, and assigns the group owner as the reviewer. | 1 license for the group owner as reviewer | 1 | -| An administrator creates an access review of Group B with 500 users and 3 group owners, and assigns the 3 group owners as reviewers. | 3 licenses for each group owner as reviewers | 3 | -| An administrator creates an access review of Group B with 500 users. Makes it a self-review. | 500 licenses for each user as self-reviewers | 500 | -| An administrator creates an access review of Group C with 50 member users and 25 guest users. Makes it a self-review. | 50 licenses for each user as self-reviewers.* | 50 | -| An administrator creates an access review of Group D with 6 member users and 108 guest users. Makes it a self-review. | 6 licenses for each user as self-reviewers. Guest users are billed on a monthly active user (MAU) basis. No additional licenses are required. * | 6 | --\* Azure AD External Identities (guest user) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This model replaces the 1:5 ratio billing model, which allowed up to five guest users for each Azure AD Premium license in your tenant. When your tenant is linked to a subscription and you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU-based billing model. For more information, see [Billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md). 
--> [!NOTE] -> Access Reviews for Service Principals requires an Entra Workload Identities Premium plan in addition to Azure AD Premium P2 license. You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Azure portal. - ## Next steps - [Prepare for an access review of users' access to an application](access-reviews-application-preparation.md) |
active-directory | Cross Tenant Synchronization Configure Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph.md | Title: Configure cross-tenant synchronization using Microsoft Graph API (preview) + Title: Configure cross-tenant synchronization using Microsoft Graph API description: Learn how to configure cross-tenant synchronization in Azure Active Directory using Microsoft Graph API. -# Configure cross-tenant synchronization using Microsoft Graph API (preview) --> [!IMPORTANT] -> Cross-tenant synchronization is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +# Configure cross-tenant synchronization using Microsoft Graph API This article describes the key steps to configure cross-tenant synchronization using Microsoft Graph API. When configured, Azure AD automatically provisions and de-provisions B2B users in your target tenant. For detailed steps using the Azure portal, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md). |
active-directory | Cross Tenant Synchronization Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md | Title: Configure cross-tenant synchronization (preview) + Title: Configure cross-tenant synchronization description: Learn how to configure cross-tenant synchronization in Azure Active Directory using the Azure portal. -# Configure cross-tenant synchronization (preview) --> [!IMPORTANT] -> Cross-tenant synchronization is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +# Configure cross-tenant synchronization This article describes the steps to configure cross-tenant synchronization using the Azure portal. When configured, Azure AD automatically provisions and de-provisions B2B users in your target tenant. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). By the end of this article, you'll be able to: 1. Under **Inbound access** of the added organization, select **Inherited from default**. -1. Select the **Cross-tenant sync (Preview)** tab. +1. Select the **Cross-tenant sync** tab. 1. Check the **Allow users sync into this tenant** check box. In this step, you automatically redeem invitations in the source tenant. <br/>**Source tenant** -1. In the source tenant, select **Azure Active Directory** > **Cross-tenant synchronization (Preview)**. +1. In the source tenant, select **Azure Active Directory** > **Cross-tenant synchronization**. :::image type="content" source="./media/cross-tenant-synchronization-configure/azure-ad-overview.png" alt-text="Screenshot that shows the Azure Active Directory Overview page." lightbox="./media/cross-tenant-synchronization-configure/azure-ad-overview.png"::: Attribute mappings allow you to define how data should flow between the source t Now that you have a configuration, you can test on-demand provisioning with one of your users. -1. In the source tenant, select **Azure Active Directory** > **Cross-tenant synchronization (Preview)**. +1. In the source tenant, select **Azure Active Directory** > **Cross-tenant synchronization**. 1. Select **Configurations** and then select your configuration. Now that you have a configuration, you can test on-demand provisioning with one The provisioning job starts the initial synchronization cycle of all users defined in **Scope** of the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. -1. In the source tenant, select **Azure Active Directory** > **Cross-tenant synchronization (Preview)**. +1. In the source tenant, select **Azure Active Directory** > **Cross-tenant synchronization**. 1. Select **Configurations** and then select your configuration. This setting also applies to B2B collaboration and B2B direct connect, so if you Follows these steps to delete a configuration on the **Configurations** page. -1. In the source tenant, select **Azure Active Directory** > **Cross-tenant synchronization (Preview)**. +1. 
In the source tenant, select **Azure Active Directory** > **Cross-tenant synchronization**. 1. On the **Configurations** page, add a check mark next to the configuration you want to delete. |
active-directory | Cross Tenant Synchronization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md | Title: What is a cross-tenant synchronization in Azure Active Directory? (preview) + Title: What is a cross-tenant synchronization in Azure Active Directory? description: Learn about cross-tenant synchronization in Azure Active Directory. -# What is cross-tenant synchronization? (preview) --> [!IMPORTANT] -> Cross-tenant synchronization is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +# What is cross-tenant synchronization? *Cross-tenant synchronization* automates creating, updating, and deleting [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) users across tenants in an organization. It enables users to access applications and collaborate across tenants, while still allowing the organization to evolve. With cross-tenant synchronization, you can do the following: ## Teams and Microsoft 365 -Users created by cross-tenant synchronization will have the same experience when accessing Microsoft Teams and other Microsoft 365 services as B2B collaboration users created through a manual invitation. The [userType](../external-identities/user-properties.md) property on the B2B user, whether guest or member, does not change the end user experience. Over time, the member userType will be used by the various Microsoft 365 services to provide differentiated end user experiences for users in a multi-tenant organization. +Users created by cross-tenant synchronization will have the same experience when accessing Microsoft Teams and other Microsoft 365 services as B2B collaboration users created through a manual invitation. Microsoft Teams currently does not support the userType `member` with shared channels. If your organization uses shared channels, please create users with type `guest`. Please see the [known issues](../app-provisioning/known-issues.md) document for additional details. Over time, the `member` userType will be used by the various Microsoft 365 services to provide differentiated end user experiences for users in a multi-tenant organization. ## Properties |
active-directory | Cross Tenant Synchronization Topology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-topology.md | Title: Topologies for cross-tenant synchronization (preview) + Title: Topologies for cross-tenant synchronization description: Learn about topologies for cross-tenant synchronization in Azure Active Directory. -# Topologies for cross-tenant synchronization (preview) --> [!IMPORTANT] -> Cross-tenant synchronization is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +# Topologies for cross-tenant synchronization Cross-tenant synchronization provides a flexible solution to enable collaboration, but every organization is different. Each cross-tenant synchronization configuration provides one-way synchronization between two Azure AD tenants, which enables configuration of the following topologies. |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/overview.md | Here's the primary constraint with using B2B direct connect across multiple tena :::image type="content" source="./media/overview/multi-tenant-b2b-direct-connect.png" alt-text="Diagram that shows using B2B direct connect across tenants." lightbox="./media/overview/multi-tenant-b2b-direct-connect.png"::: -## Cross-tenant synchronization (preview) --> [!IMPORTANT] -> Cross-tenant synchronization is currently in PREVIEW. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +## Cross-tenant synchronization If you want users to have a more seamless collaboration experience across tenants, you can use [cross-tenant synchronization](./cross-tenant-synchronization-overview.md). Cross-tenant synchronization is a one-way synchronization service in Azure AD that automates creating, updating, and deleting B2B collaboration users across tenants in an organization. Cross-tenant synchronization builds on the B2B collaboration functionality and utilizes existing B2B cross-tenant access settings. Users are represented in the target tenant as a B2B collaboration user object. |
active-directory | Azure Ad Pci Dss Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/azure-ad-pci-dss-guidance.md | CHD consists of: * **Primary account number (PAN)** - a unique payment card number (credit, debit, or prepaid cards, etc.) that identifies the issuer and the cardholder account * **Cardholder name** – the card owner * **Card expiration date** – the day and month the card expires-* **Service code** - a three- or four-digit value in the magnetic stripe that follows the expiration date of the payment card on the track data. It defines service attributes, differentiating between international and national interchange, or identifying usage restrictions. +* **Service code** - a three- or four-digit value in the magnetic stripe that follows the expiration date of the payment card on the track data. It defines service attributes, differentiating between international and national/regional interchange, or identifying usage restrictions. SAD consists of security-related information used to authenticate cardholders and/or authorize payment card transactions. SAD includes, but isn't limited to: |
active-directory | Memo 22 09 Other Areas Zero Trust | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-other-areas-zero-trust.md | You can use analytics in the following tools to aggregate information from Azure * **Microsoft Sentinel** analyze information from Azure AD: * Microsoft Sentinel User and Entity Behavior Analytics (UEBA) delivers intelligence on potential threats from user, host, IP address, and application entities. * Use analytics rule templates to hunt for threats and alerts in your Azure AD logs. Your security or operation analyst can triage and remediate threats.- * Microsoft Sentinel workbooks help visualize Azure AD data sources. See sign-ins by country, region, or applications. + * Microsoft Sentinel workbooks help visualize Azure AD data sources. See sign-ins by country/region or applications. * See, [Commonly used Microsoft Sentinel workbooks](../../sentinel/top-workbooks.md) * See, [Visualize collected data](../../sentinel/get-visibility.md) * See, [Identify advanced threats with UEBA in Microsoft Sentinel](../../sentinel/identify-threats-with-entity-behavior-analytics.md) |
active-directory | Pci Requirement 6 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-6.md | -|**6.3.1** Security vulnerabilities are identified and managed as follows: </br> New security vulnerabilities are identified using industry-recognized sources for security vulnerability information, including alerts from international and national computer emergency response teams (CERTs). </br> Vulnerabilities are assigned a risk ranking based on industry best practices and consideration of potential impact. </br> Risk rankings identify, at a minimum, all vulnerabilities considered to be a high-risk or critical to the environment. </br> Vulnerabilities for bespoke and custom, and third-party software (for example operating systems and databases) are covered.|Learn about vulnerabilities. [MSRC | Security Updates, Security Update Guide](https://msrc.microsoft.com/update-guide)| +|**6.3.1** Security vulnerabilities are identified and managed as follows: </br> New security vulnerabilities are identified using industry-recognized sources for security vulnerability information, including alerts from international and national/regional computer emergency response teams (CERTs). </br> Vulnerabilities are assigned a risk ranking based on industry best practices and consideration of potential impact. </br> Risk rankings identify, at a minimum, all vulnerabilities considered to be a high-risk or critical to the environment. </br> Vulnerabilities for bespoke and custom, and third-party software (for example operating systems and databases) are covered.|Learn about vulnerabilities. [MSRC | Security Updates, Security Update Guide](https://msrc.microsoft.com/update-guide)| |**6.3.2** An inventory of bespoke and custom software, and third-party software components incorporated into bespoke and custom software is maintained to facilitate vulnerability and patch management.|Generate reports for applications using Azure AD for authentication for inventory. [applicationSignInDetailedSummary resource type](/graph/api/resources/applicationsignindetailedsummary?view=graph-rest-beta&viewFallbackFrom=graph-rest-1.0&preserve-view=true) </br> [Applications listed in Enterprise applications](../manage-apps/application-list.md)| |**6.3.3** All system components are protected from known vulnerabilities by installing applicable security patches/updates as follows: </br> Critical or high-security patches/updates (identified according to the risk ranking process at Requirement 6.3.1) are installed within one month of release. </br> All other applicable security patches/updates are installed within an appropriate time frame as determined by the entity (for example, within three months of release).|Not applicable to Azure AD.| |
active-directory | Linkedin Employment Verification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/linkedin-employment-verification.md | Title: LinkedIn employment verification + Title: Setting up LinkedIn workplace verification description: A design pattern describing how to configure employment verification using LinkedIn -# LinkedIn employment verification +# Setting up place of work verification on LinkedIn -If your organization wants its employees to get verified on LinkedIn, you need to follow these few steps: +If your organization wants its employees to get their place of work verified on LinkedIn, you need to follow these few steps: 1. Set up your Microsoft Entra Verified ID service by following these [instructions](verifiable-credentials-configure-tenant.md). 1. [Create](how-to-use-quickstart-verifiedemployee.md#create-a-verified-employee-credential) a Verified ID Employee credential.-1. Configure the LinkedIn company page with your organization DID (decentralized identity) and URL of the custom Webapp. +1. Deploy the custom webapp from [GitHub](https://github.com/Azure-Samples/VerifiedEmployeeIssuance). +1. Configure the LinkedIn company page with your organization DID (decentralized identity) and URL of the custom Webapp. You cannot self-service the LinkedIn company page. Today, you need to fill in [this form](https://www.linkedin.com/help/linkedin/answer/a1359065) and we can enable your organization. 1. Once you deploy the updated LinkedIn mobile app, your employees can get verified. +>[!IMPORTANT] +> The app version required is Android **4.1.813** or newer, or iOS **9.27.2173** or newer. Keep in mind that inside the app, the version number shows **9.27.2336**, but in the App store the version number would be **9.1.312** or higher. + >[!NOTE] > Review LinkedIn's documentation for information on [verifications on LinkedIn profiles.](https://www.linkedin.com/help/linkedin/answer/a1359065). |
aks | Kubelet Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubelet-logs.md | Title: View kubelet logs in Azure Kubernetes Service (AKS) description: Learn how to view troubleshooting information in the kubelet logs from Azure Kubernetes Service (AKS) nodes Previously updated : 03/05/2019- Last updated : 05/09/2023 #Customer intent: As a cluster operator, I want to view the logs for the kubelet that runs on each node in an AKS cluster to troubleshoot problems. # Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes -As part of operating an AKS cluster, you may need to review logs to troubleshoot a problem. Built-in to the Azure portal is the ability to view logs for the [AKS master components][aks-master-logs] or [containers in an AKS cluster][azure-container-logs]. Occasionally, you may need to get *kubelet* logs from an AKS node for troubleshooting purposes. +When operating an Azure Kubernetes Service (AKS) cluster, you may need to review logs to troubleshoot a problem. Azure portal has a built-in capability that allows you to view logs for AKS [main components][aks-main-logs] and [cluster containers][azure-container-logs]. Occasionally, you may need to get *kubelet* logs from AKS nodes for troubleshooting purposes. -This article shows you how you can use `journalctl` to view the *kubelet* logs on an AKS node. +This article shows you how you can use `journalctl` to view *kubelet* logs on an AKS node. ## Before you begin -This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal]. +This article assumes you have an existing AKS cluster. If you need an AKS cluster, create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [Azure portal][aks-quickstart-portal]. ## Create an SSH connection -First, create an SSH connection with the node on which you need to view *kubelet* logs. This operation is detailed in the [SSH into Azure Kubernetes Service (AKS) cluster nodes][aks-ssh] document. +First, you need to create an SSH connection with the node you need to view *kubelet* logs for. To create this connection, follow the steps in [SSH into AKS cluster nodes][aks-ssh]. ## Get kubelet logs -Once you have connected to the node via `kubectl debug`, run the following command to pull the *kubelet* logs: +Once you connect to the node using `kubectl debug`, run the following command to pull the *kubelet* logs: ```console chroot /host journalctl -u kubelet -o cat ```-> [!NOTE] -> You don't need to use `sudo journalctl` since you are already `root` on the node. 
> [!NOTE] > For Windows nodes, the log data is in `C:\k` and can be viewed using the *more* command:-> ``` +> +> ```console > more C:\k\kubelet.log > ``` -The following sample output shows the *kubelet* log data: +The following example output shows *kubelet* log data: -``` +```output I0508 12:26:17.905042 8672 kubelet_node_status.go:497] Using Node Hostname from cloudprovider: "aks-agentpool-11482510-0" I0508 12:26:27.943494 8672 kubelet_node_status.go:497] Using Node Hostname from cloudprovider: "aks-agentpool-11482510-0" I0508 12:26:28.920125 8672 server.go:796] GET /stats/summary: (10.370874ms) 200 [[Ruby] 10.244.0.2:52292] I0508 12:28:58.344656 8672 kubelet_node_status.go:497] Using Node Hostname fr ## Next steps -If you need additional troubleshooting information from the Kubernetes master, see [view Kubernetes master node logs in AKS][aks-master-logs]. +If you need more troubleshooting information for the Kubernetes main, see [view Kubernetes main node logs in AKS][aks-main-logs]. <!-- LINKS - internal --> [aks-ssh]: ssh.md-[aks-master-logs]: monitor-aks-reference.md#resource-logs +[aks-main-logs]: monitor-aks-reference.md#resource-logs [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md-[aks-master-logs]: monitor-aks-reference.md#resource-logs [azure-container-logs]: ../azure-monitor/containers/container-insights-overview.md |
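Because the kubelet-logs article above refers to connecting through `kubectl debug` without showing the full flow in this excerpt, here is a minimal end-to-end sketch. The node name is hypothetical, and the debug container image is an assumption; substitute values from your own cluster.

```bash
# List nodes, then open a privileged debug session on one of them.
# The node name below is a placeholder; the image is an assumed general-purpose Linux image.
kubectl get nodes
kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0

# Inside the debug pod, switch to the host filesystem and read the kubelet unit logs.
chroot /host
journalctl -u kubelet -o cat | tail -n 100
```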
aks | Kubernetes Helm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-helm.md | Title: Install existing applications with Helm in AKS + Title: Install existing applications with Helm in Azure Kubernetes Service (AKS) description: Learn how to use the Helm packaging tool to deploy containers in an Azure Kubernetes Service (AKS) cluster Previously updated : 12/07/2020 Last updated : 05/09/2023 #Customer intent: As a cluster operator or developer, I want to learn how to deploy Helm into an AKS cluster and then install and manage applications using Helm charts.-[Helm][helm] is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers such as *APT* and *Yum*, Helm is used to manage Kubernetes charts, which are packages of preconfigured Kubernetes resources. +[Helm][helm] is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers, such as *APT* and *Yum*, you can use Helm to manage Kubernetes charts, which are packages of preconfigured Kubernetes resources. -This article shows you how to configure and use Helm in a Kubernetes cluster on AKS. +This article shows you how to configure and use Helm in a Kubernetes cluster on Azure Kubernetes Service (AKS). ## Before you begin -This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal]. --In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr]. --You also need the Helm CLI installed, which is the client that runs on your development system. It allows you to start, stop, and manage applications with Helm. If you use the Azure Cloud Shell, the Helm CLI is already installed. For installation instructions on your local platform, see [Installing Helm][helm-install]. +* This article assumes you have an existing AKS cluster. If you need an AKS cluster, create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [Azure portal][aks-quickstart-portal]. +* Your AKS cluster needs to have **an integrated ACR**. For details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr]. +* You also need the Helm CLI installed, which is the client that runs on your development system. It allows you to start, stop, and manage applications with Helm. If you use the Azure Cloud Shell, the Helm CLI is already installed. For installation instructions on your local platform, see [Installing Helm][helm-install]. > [!IMPORTANT] > Helm is intended to run on Linux nodes. If you have Windows Server nodes in your cluster, you must ensure that Helm pods are only scheduled to run on Linux nodes. You also need to ensure that any Helm charts you install are also scheduled to run on the correct nodes. The commands in this article use [node-selectors][k8s-node-selector] to make sure pods are scheduled to the correct nodes, but not all Helm charts may expose a node selector. You can also consider using other options on your cluster, such as [taints][taints]. 
## Verify your version of Helm -Use the `helm version` command to verify you have Helm 3 installed: --```console -helm version -``` +* Use the `helm version` command to verify you have Helm 3 installed. -The following example shows Helm version 3.0.0 installed: + ```console + helm version + ``` -```console -$ helm version + The following example output shows Helm version 3.0.0 installed: -version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"} -``` + ```output + version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"} + ``` ## Install an application with Helm v3 ### Add Helm repositories -Use the [helm repo][helm-repo-add] command to add the *ingress-nginx* repository. +* Add the *ingress-nginx* repository using the [helm repo][helm-repo-add] command. -```console -helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx -``` + ```console + helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx + ``` ### Find Helm charts -Helm charts are used to deploy applications into a Kubernetes cluster. To search for pre-created Helm charts, use the [helm search][helm-search] command: +1. Search for precreated Helm charts using the [helm search][helm-search] command. -```console -helm search repo ingress-nginx -``` + ```console + helm search repo ingress-nginx + ``` -The following condensed example output shows some of the Helm charts available for use: + The following condensed example output shows some of the Helm charts available for use: -```console -$ helm search repo ingress-nginx + ```output + NAME CHART VERSION APP VERSION DESCRIPTION + ingress-nginx/ingress-nginx 2.12.0 0.34.1 Ingress controller for Kubernetes using NGINX a... + ``` -NAME CHART VERSION APP VERSION DESCRIPTION -ingress-nginx/ingress-nginx 2.12.0 0.34.1 Ingress controller for Kubernetes using NGINX a... -``` +2. Update the list of charts using the [helm repo update][helm-repo-update] command. -To update the list of charts, use the [helm repo update][helm-repo-update] command. + ```console + helm repo update + ``` -```console -helm repo update -``` + The following example output shows a successful repo update: -The following example shows a successful repo update: + ```output + Hang tight while we grab the latest from your chart repositories... + ...Successfully got an update from the "ingress-nginx" chart repository + Update Complete. ΓÄê Happy Helming!ΓÄê + ``` -```console -$ helm repo update +## Import the Helm chart images into your ACR -Hang tight while we grab the latest from your chart repositories... -...Successfully got an update from the "ingress-nginx" chart repository -Update Complete. ΓÄê Happy Helming!ΓÄê -``` +This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. -## Import the images used by the Helm chart into your ACR +* Use `az acr import` to import the NGINX ingress controller images into your ACR. -This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR. 
+ ```azurecli + REGISTRY_NAME=<REGISTRY_NAME> + CONTROLLER_REGISTRY=k8s.gcr.io + CONTROLLER_IMAGE=ingress-nginx/controller + CONTROLLER_TAG=v0.48.1 + PATCH_REGISTRY=docker.io + PATCH_IMAGE=jettech/kube-webhook-certgen + PATCH_TAG=v1.5.1 + DEFAULTBACKEND_REGISTRY=k8s.gcr.io + DEFAULTBACKEND_IMAGE=defaultbackend-amd64 + DEFAULTBACKEND_TAG=1.5 -```azurecli -REGISTRY_NAME=<REGISTRY_NAME> -CONTROLLER_REGISTRY=k8s.gcr.io -CONTROLLER_IMAGE=ingress-nginx/controller -CONTROLLER_TAG=v0.48.1 -PATCH_REGISTRY=docker.io -PATCH_IMAGE=jettech/kube-webhook-certgen -PATCH_TAG=v1.5.1 -DEFAULTBACKEND_REGISTRY=k8s.gcr.io -DEFAULTBACKEND_IMAGE=defaultbackend-amd64 -DEFAULTBACKEND_TAG=1.5 + az acr import --name $REGISTRY_NAME --source $CONTROLLER_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG + az acr import --name $REGISTRY_NAME --source $PATCH_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG + az acr import --name $REGISTRY_NAME --source $DEFAULTBACKEND_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG + ``` -az acr import --name $REGISTRY_NAME --source $CONTROLLER_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG -az acr import --name $REGISTRY_NAME --source $PATCH_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG -az acr import --name $REGISTRY_NAME --source $DEFAULTBACKEND_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG -``` --> [!NOTE] -> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm]. + > [!NOTE] + > In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm]. ### Run Helm charts -To install charts with Helm, use the [helm install][helm-install-command] command and specify a release name and the name of the chart to install. To see installing a Helm chart in action, let's install a basic nginx deployment using a Helm chart. --> [!TIP] -> The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic* and is intended to work within that namespace. Specify a namespace for your own environment as needed. 
--```console -ACR_URL=<REGISTRY_URL> --# Create a namespace for your ingress resources -kubectl create namespace ingress-basic --# Use Helm to deploy an NGINX ingress controller -helm install nginx-ingress ingress-nginx/ingress-nginx \ - --version 4.0.13 \ - --namespace ingress-basic \ - --set controller.replicaCount=2 \ - --set controller.nodeSelector."kubernetes\.io/os"=linux \ - --set controller.image.registry=$ACR_URL \ - --set controller.image.image=$CONTROLLER_IMAGE \ - --set controller.image.tag=$CONTROLLER_TAG \ - --set controller.image.digest="" \ - --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \ - --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \ - --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ - --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ - --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \ - --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \ - --set defaultBackend.image.registry=$ACR_URL \ - --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ - --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \ - --set defaultBackend.image.digest="" -``` --The following condensed example output shows the deployment status of the Kubernetes resources created by the Helm chart: --```console -NAME: nginx-ingress -LAST DEPLOYED: Wed Jul 28 11:35:29 2021 -NAMESPACE: ingress-basic -STATUS: deployed -REVISION: 1 -TEST SUITE: None -NOTES: -The ingress-nginx controller has been installed. -It may take a few minutes for the LoadBalancer IP to be available. -You can watch the status by running 'kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller' -... -``` --Use the `kubectl get services` command to get the *EXTERNAL-IP* of your service. --```console -kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller -``` --For example, the below command shows the *EXTERNAL-IP* for the *nginx-ingress-ingress-nginx-controller* service: --```console -$ kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller --NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR -nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.254.93 <EXTERNAL_IP> 80:30004/TCP,443:30348/TCP 61s app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx -``` +1. Install Helm charts using the [helm install][helm-install-command] command and specify a release name and the name of the chart to install. ++ > [!TIP] + > The following example creates a Kubernetes namespace for the ingress resources named *ingress-basic* and is intended to work within that namespace. Specify a namespace for your own environment as needed. 
++ ```console + ACR_URL=<REGISTRY_URL> ++ # Create a namespace for your ingress resources + kubectl create namespace ingress-basic ++ # Use Helm to deploy an NGINX ingress controller + helm install ingress-nginx ingress-nginx/ingress-nginx \ + --version 4.0.13 \ + --namespace ingress-basic \ + --set controller.replicaCount=2 \ + --set controller.nodeSelector."kubernetes\.io/os"=linux \ + --set controller.image.registry=$ACR_URL \ + --set controller.image.image=$CONTROLLER_IMAGE \ + --set controller.image.tag=$CONTROLLER_TAG \ + --set controller.image.digest="" \ + --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \ + --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \ + --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ + --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ + --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \ + --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \ + --set defaultBackend.image.registry=$ACR_URL \ + --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \ + --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \ + --set defaultBackend.image.digest="" + ``` ++ The following condensed example output shows the deployment status of the Kubernetes resources created by the Helm chart: ++ ```output + NAME: nginx-ingress + LAST DEPLOYED: Wed Jul 28 11:35:29 2021 + NAMESPACE: ingress-basic + STATUS: deployed + REVISION: 1 + TEST SUITE: None + NOTES: + The ingress-nginx controller has been installed. + It may take a few minutes for the LoadBalancer IP to be available. + You can watch the status by running 'kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller' + ... + ``` ++2. Get the *EXTERNAL-IP* of your service using the `kubectl get services` command. ++ ```console + kubectl --namespace ingress-basic get services -o wide -w ingress-nginx-ingress-nginx-controller + ``` ++ The following example output shows the *EXTERNAL-IP* for the *ingress-nginx-ingress-nginx-controller* service: ++ ```output + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR + nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.254.93 <EXTERNAL_IP> 80:30004/TCP,443:30348/TCP 61s app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx + ``` ### List releases -To see a list of releases installed on your cluster, use the `helm list` command. +* Get a list of releases installed on your cluster using the `helm list` command. -```console -helm list --namespace ingress-basic -``` + ```console + helm list --namespace ingress-basic + ``` -The following example shows the *my-nginx-ingress* release deployed in the previous step: + The following example output shows the *ingress-nginx* release deployed in the previous step: -```console -$ helm list --namespace ingress-basic -NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION -nginx-ingress ingress-basic 1 2021-07-28 11:35:29.9623734 -0500 CDT deployed ingress-nginx-3.34.0 0.47.0 -``` + ```output + NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION + ingress-nginx ingress-basic 1 2021-07-28 11:35:29.9623734 -0500 CDT deployed ingress-nginx-3.34.0 0.47.0 + ``` ### Clean up resources -When you deploy a Helm chart, a number of Kubernetes resources are created. These resources include pods, deployments, and services. 
To clean up these resources, use the [helm uninstall][helm-cleanup] command and specify your release name, as found in the previous `helm list` command. +Deploying a Helm chart creates Kubernetes resources like pods, deployments, and services. -```console -helm uninstall --namespace ingress-basic nginx-ingress -``` +* Clean up resources using the [helm uninstall][helm-cleanup] command and specify your release name. -The following example shows the release named *my-nginx-ingress* has been uninstalled: + ```console + helm uninstall --namespace ingress-basic ingress-nginx + ``` -```console -$ helm uninstall --namespace ingress-basic nginx-ingress + The following example output shows the release named *ingress-nginx* has been uninstalled: -release "nginx-ingress" uninstalled -``` + ```output + release "nginx-ingress" uninstalled + ``` -To delete the entire sample namespace, use the `kubectl delete` command and specify your namespace name. All the resources in the namespace are deleted. +* Delete the entire sample namespace along with the resources using the `kubectl delete` command and specify your namespace name. -```console -kubectl delete namespace ingress-basic -``` + ```console + kubectl delete namespace ingress-basic + ``` ## Next steps For more information about managing Kubernetes application deployments with Helm [helm-search]: https://helm.sh/docs/intro/using_helm/#helm-search-finding-charts [helm-repo-update]: https://helm.sh/docs/intro/using_helm/#helm-repo-working-with-repositories [ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx- +[k8s-node-selector]: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector + <!-- LINKS - internal --> [acr-helm]: ../container-registry/container-registry-helm-repos.md [aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md-[taints]: operator-best-practices-advanced-scheduler.md +[taints]: operator-best-practices-advanced-scheduler.md |
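The Helm article above covers install, list, and uninstall. To round out the release lifecycle, here is a hedged sketch of upgrading and rolling back the same release; it reuses the release and namespace names from the article's examples and assumes the chart repository is already added.

```console
# Upgrade the release, keeping previously supplied values and bumping the replica count.
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  --reuse-values \
  --set controller.replicaCount=3

# Review the revision history and roll back to revision 1 if the upgrade misbehaves.
helm history ingress-nginx --namespace ingress-basic
helm rollback ingress-nginx 1 --namespace ingress-basic
```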
aks | Managed Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-azure-ad.md | Title: AKS-managed Azure Active Directory integration description: Learn how to configure Azure AD for your Azure Kubernetes Service (AKS) clusters. Previously updated : 04/17/2023 Last updated : 05/02/2023 Learn more about the Azure AD integration flow in the [Azure AD documentation](c * Enable AKS-managed Azure AD integration on your existing Kubernetes RBAC enabled cluster using the [`az aks update`][az-aks-update] command. Make sure to set your admin group to keep access to your cluster. ```azurecli-interactive- az aks update -g MyResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id-1> [--aad-tenant-id <id>] + az aks update -g MyResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id-1>,<id-2> [--aad-tenant-id <id>] ``` A successful activation of an AKS-managed Azure AD cluster has the following section in the response body: There are some non-interactive scenarios, such as continuous integration pipelin export KUBECONFIG=/path/to/kubeconfig kubelogin convert-kubeconfig ```+ +> [!NOTE] +> If you encounter the `error: The azure auth plugin has been removed.`, you need to run `kubelogin convert-kubeconfig` to convert the kubeconfig format manually. ## Troubleshoot access issues with AKS-managed Azure AD |
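Since the note above points at `kubelogin convert-kubeconfig` without showing the non-interactive flow end to end in this excerpt, here is a sketch. The login modes and environment variable names follow kubelogin's documented `azurecli` and `spn` options; the service principal values are placeholders.

```bash
# Interactive or dev machines: reuse the signed-in Azure CLI context.
az aks get-credentials --resource-group MyResourceGroup --name myManagedCluster
kubelogin convert-kubeconfig -l azurecli
kubectl get nodes

# Headless pipelines: a service principal login is one option.
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<client-id>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<client-secret>
kubelogin convert-kubeconfig -l spn
kubectl get nodes
```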
aks | Private Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md | You can configure private DNS zones using the following parameters: * If the private DNS zone is in a different subscription than the AKS cluster, you need to register the Azure provider **Microsoft.ContainerServices** in both subscriptions. * "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io`. * If your AKS cluster is configured with an Active Directory service principal, AKS doesn't support using a system-assigned managed identity with custom private DNS zone.+ * If you are specifying a `<subzone>` there is a 32 character limit for the `<subzone>` name. ### Create a private AKS cluster with a private DNS zone |
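To illustrate the custom private DNS zone notes above, including the 32-character `<subzone>` limit, here is a hedged `az aks create` sketch. The cluster name, identity, and DNS zone resource ID are all placeholders, and a user-assigned managed identity is shown because the note calls out that a system-assigned identity isn't supported with a custom private DNS zone in the service principal scenario; verify the identity's permissions on the zone against the article's prerequisites.

```azurecli-interactive
# Sketch only: all names and resource IDs are placeholders.
az aks create \
    --resource-group myResourceGroup \
    --name myPrivateCluster \
    --enable-private-cluster \
    --enable-managed-identity \
    --assign-identity <user-assigned-identity-resource-id> \
    --private-dns-zone <custom-private-dns-zone-resource-id> \
    --fqdn-subdomain <subzone-32-characters-or-fewer>
```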
aks | Use Cvm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-cvm.md | Title: Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) description: Learn how to create Confidential Virtual Machines (CVM) node pools with Azure Kubernetes Service (AKS) Previously updated : 10/04/2022 Last updated : 05/08/2023 # Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) cluster -You can use the generally available [confidential VM sizes (DCav5/ECav5)][cvm-announce] to add a node pool to your AKS cluster with CVM. Confidential VMs with AMD SEV-SNP support bring a new set of security features to protect data-in-use with full VM memory encryption. These features enable node pools with CVM to target the migration of highly sensitive container workloads to AKS without any code refactoring while benefiting from the features of AKS. The nodes in a node pool created with CVM use a customized Ubuntu 20.04 image specially configured for CVM. For more details on CVM, see [Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs][cvm]. +You can use [confidential VM sizes (DCav5/ECav5)][cvm-announce] to add a node pool to your AKS cluster with CVM. Confidential VMs with AMD SEV-SNP support bring a new set of security features to protect data-in-use with full VM memory encryption. These features enable node pools with CVM to target the migration of highly sensitive container workloads to AKS without any code refactoring while benefiting from the features of AKS. The nodes in a node pool created with CVM use a customized Ubuntu 20.04 image specially configured for CVM. For more details on CVM, see [Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs][cvm]. Adding a node pool with CVM to your AKS cluster is currently in preview. - ## Before you begin +Before you begin, make sure you have the following: + - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - [Azure CLI installed](/cli/azure/install-azure-cli). - An existing AKS cluster in the *westus*, *eastus*, *westeurope*, or *northeurope* region. Adding a node pool with CVM to your AKS cluster is currently in preview. The following limitations apply when adding a node pool with CVM to AKS: -- You can't use `--enable-fips-image`, ARM64, or Mariner.+- You can't use `--enable-fips-image`, ARM64, or Azure Linux. - You can't upgrade an existing node pool to use CVM. - The [DCasv5 and DCadsv5-series][cvm-subs-dc] or [ECasv5 and ECadsv5-series][cvm-subs-ec] SKUs must be available for your subscription in the region where the cluster is created. ## Add a node pool with the CVM to AKS -To add a node pool with the CVM to AKS, use `az aks nodepool add` and set `node-vm-size` to `Standard_DCa4_v5`. For example: +- Add a node pool with CVM to AKS using the [`az aks nodepool add`][az-aks-nodepool-add] command and set the `node-vm-size` to `Standard_DCa4_v5`. -```azurecli-interactive -az aks nodepool add \ - --resource-group myResourceGroup \ - --cluster-name myAKSCluster \ - --name cvmnodepool \ - --node-count 3 \ - --node-vm-size Standard_DC4as_v5 -``` + ```azurecli-interactive + az aks nodepool add \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --name cvmnodepool \ + --node-count 3 \ + --node-vm-size Standard_DC4as_v5 + ``` ## Verify the node pool uses CVM -To verify a node pool uses CVM, use `az aks nodepool show` and verify the `vmSize` is `Standard_DCa4_v5`. 
For example: +- Verify a node pool uses CVM using the [`az aks nodepool show`][az-aks-nodepool-show] command and verify the `vmSize` is `Standard_DCa4_v5`. -```azurecli-interactive -az aks nodepool show \ - --resource-group myResourceGroup \ - --cluster-name myAKSCluster \ - --name cvmnodepool \ - --query 'vmSize' -``` + ```azurecli-interactive + az aks nodepool show \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --name cvmnodepool \ + --query 'vmSize' + ``` -The following example command and output shows the node pool uses CVM: + The following example command and output shows the node pool uses CVM: -```output -az aks nodepool show \ - --resource-group myResourceGroup \ - --cluster-name myAKSCluster \ - --name cvmnodepool \ - --query 'vmSize' + ```output + az aks nodepool show \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --name cvmnodepool \ + --query 'vmSize' -"Standard_DC4as_v5" -``` + "Standard_DC4as_v5" + ``` ## Remove a node pool with CVM from an AKS cluster -To remove a node pool with CVM from an AKS cluster, use `az aks nodepool delete`. For example: +- Remove a node pool with CVM from an AKS cluster using the [`az aks nodepool delete`][az-aks-nodepool-delete] command. -```azurecli-interactive -az aks nodepool delete \ - --resource-group myResourceGroup \ - --cluster-name myAKSCluster \ - --name cvmnodepool -``` + ```azurecli-interactive + az aks nodepool delete \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --name cvmnodepool + ``` ## Next steps In this article, you learned how to add a node pool with CVM to an AKS cluster. [cvm-announce]: https://techcommunity.microsoft.com/t5/azure-confidential-computing/azure-confidential-vms-using-sev-snp-dcasv5-ecasv5-are-now/ba-p/3573747 [cvm-subs-dc]: ../virtual-machines/dcasv5-dcadsv5-series.md [cvm-subs-ec]: ../virtual-machines/ecasv5-ecadsv5-series.md+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add +[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show +[az-aks-nodepool-delete]: /cli/azure/aks/nodepool#az_aks_nodepool_delete |
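As a quick cross-check of the CVM node pool commands above from the Kubernetes side, the sketch below lists the nodes that joined from that pool. It assumes the default `agentpool` node label that AKS applies and the pool name used in the article's examples.

```bash
# Confirm the CVM node pool's nodes registered with the cluster.
kubectl get nodes -l agentpool=cvmnodepool -o wide
```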
aks | Use Labels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-labels.md | description: Learn how to use labels in an Azure Kubernetes Service (AKS) cluste Previously updated : 03/03/2022 Last updated : 05/09/2023 #Customer intent: As a cluster operator, I want to learn how to use labels in an AKS cluster so that I can set scheduling rules for nodes. # Use labels in an Azure Kubernetes Service (AKS) cluster -If you have multiple node pools, you may want to add a label during node pool creation. [These labels][kubernetes-labels] are visible in Kubernetes for handling scheduling rules for nodes. You can add labels to a node pool anytime, and they'll be set on all nodes in the node pool. +If you have multiple node pools, you may want to add a label during node pool creation. [Kubernetes labels][kubernetes-labels] handle the scheduling rules for nodes. You can add labels to a node pool anytime and apply them to all nodes in the node pool. -In this how-to guide, you'll learn how to use labels in an AKS cluster. +In this how-to guide, you learn how to use labels in an Azure Kubernetes Service (AKS) cluster. ## Prerequisites You need the Azure CLI version 2.2.0 or later installed and configured. Run `az ## Create an AKS cluster with a label -To create an AKS cluster with a label, use [az aks create][az-aks-create]. Specify the `--node-labels` parameter to set your labels. Labels must be a key/value pair and have a [valid syntax][kubernetes-label-syntax]. +1. Create an AKS cluster with a label using the [`az aks create`][az-aks-create] command and specify the `--node-labels` parameter to set your labels. Labels must be a key/value pair and have a [valid syntax][kubernetes-label-syntax]. -```azurecli-interactive -az aks create \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --node-count 2 \ - --nodepool-labels dept=IT costcenter=9000 -``` + ```azurecli-interactive + az aks create \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --node-count 2 \ + --nodepool-labels dept=IT costcenter=9000 + ``` -Verify the labels were set by running `kubectl get nodes --show-labels`. +2. Verify the labels were set using the `kubectl get nodes --show-labels` command. -```bash -kubectl get nodes --show-labels | grep -e "costcenter=9000" -e "dept=IT" -``` + ```bash + kubectl get nodes --show-labels | grep -e "costcenter=9000" -e "dept=IT" + ``` ## Create a node pool with a label -To create a node pool with a label, use [az aks nodepool add][az-aks-nodepool-add]. Specify the name *labelnp* and use the `--labels` parameter to specify *dept=HR* and *costcenter=5000* for labels. Labels must be a key/value pair and have a [valid syntax][kubernetes-label-syntax] --```azurecli-interactive -az aks nodepool add \ - --resource-group myResourceGroup \ - --cluster-name myAKSCluster \ - --name labelnp \ - --node-count 1 \ - --labels dept=HR costcenter=5000 \ - --no-wait -``` --The following example output from the [az aks nodepool list][az-aks-nodepool-list] command shows that *labelnp* is *Creating* nodes with the specified *nodeLabels*: --```azurecli -az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster --```output -[ - { - ... - "count": 1, - ... - "name": "labelnp", - "orchestratorVersion": "1.15.7", - ... - "provisioningState": "Creating", - ... - "nodeLabels": { - "costcenter": "5000", - "dept": "HR" - }, - ... - }, - ... -] -``` --Verify the labels were set by running `kubectl get nodes --show-labels`. 
--```bash -kubectl get nodes --show-labels | grep -e "costcenter=5000" -e "dept=HR" -``` +1. Create a node pool with a label using the [`az aks nodepool add`][az-aks-nodepool-add] command and specify a name for the `--name` parameters and labels for the `--labels` parameter. Labels must be a key/value pair and have a [valid syntax][kubernetes-label-syntax] ++ The following example command creates a node pool named *labelnp* with the labels *dept=HR* and *costcenter=5000*. ++ ```azurecli-interactive + az aks nodepool add \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --name labelnp \ + --node-count 1 \ + --labels dept=HR costcenter=5000 \ + --no-wait + ``` ++ The following example output from the [`az aks nodepool list`][az-aks-nodepool-list] command shows the *labelnp* node pool is *Creating* nodes with the specified *nodeLabels*: ++ ```output + [ + { + ... + "count": 1, + ... + "name": "labelnp", + "orchestratorVersion": "1.15.7", + ... + "provisioningState": "Creating", + ... + "nodeLabels": { + "costcenter": "5000", + "dept": "HR" + }, + ... + }, + ... + ] + ``` ++2. Verify the labels were set using the `kubectl get nodes --show-labels` command. ++ ```bash + kubectl get nodes --show-labels | grep -e "costcenter=5000" -e "dept=HR" + ``` ## Updating labels on existing node pools -To update a label on existing node pools, use [az aks nodepool update][az-aks-nodepool-update]. Updating labels on existing node pools will overwrite the old labels with the new labels. Labels must be a key/value pair and have a [valid syntax][kubernetes-label-syntax]. +1. Update a label on an existing node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command. Updating labels on existing node pools overwrites the old labels with the new labels. Labels must be a key/value pair and have a [valid syntax][kubernetes-label-syntax]. -```azurecli-interactive -az aks nodepool update \ - --resource-group myResourceGroup \ - --cluster-name myAKSCluster \ - --name labelnp \ - --labels dept=ACCT costcenter=6000 \ - --no-wait -``` + ```azurecli-interactive + az aks nodepool update \ + --resource-group myResourceGroup \ + --cluster-name myAKSCluster \ + --name labelnp \ + --labels dept=ACCT costcenter=6000 \ + --no-wait + ``` -Verify the labels were set by running `kubectl get nodes --show-labels`. +2. Verify the labels were set using the `kubectl get nodes --show-labels` command. -```bash -kubectl get nodes --show-labels | grep -e "costcenter=6000" -e "dept=ACCT" -``` + ```bash + kubectl get nodes --show-labels | grep -e "costcenter=6000" -e "dept=ACCT" + ``` ## Unavailable labels ### Reserved system labels -Since the [2021-08-19 AKS release][aks-release-2021-gh], Azure Kubernetes Service (AKS) has stopped the ability to make changes to AKS reserved labels. Attempting to change these labels will result in an error message. +Since the [2021-08-19 AKS release][aks-release-2021-gh], AKS stopped the ability to make changes to AKS reserved labels. Attempting to change these labels results in an error message. -The following labels are reserved for use by AKS. *Virtual node usage* specifies if these labels could be a supported system feature on virtual nodes. --Some properties that these system features change aren't available on the virtual nodes, because they require modifying the host. +The following labels are AKS reserved labels. *Virtual node usage* specifies if these labels could be a supported system feature on virtual nodes. 
Some properties that these system features change aren't available on the virtual nodes because they require modifying the host. | Label | Value | Example/Options | Virtual node usage | | - | | | | Some properties that these system features change aren't available on the virtua ### Reserved prefixes -The following list of prefixes are reserved for usage by AKS and can't be used for any node. +The following prefixes are AKS reserved prefixes and can't be used for any node: * kubernetes.azure.com/ * kubernetes.io/ -For additional reserved prefixes, see [Kubernetes well-known labels, annotations, and taints][kubernetes-well-known-labels]. +For more information on reserved prefixes, see [Kubernetes well-known labels, annotations, and taints][kubernetes-well-known-labels]. ### Deprecated labels -The following labels are planned for deprecation with the release of [Kubernetes v1.24][aks-release-calendar]. Customers should change any label references to the recommended substitute. +The following labels are planned for deprecation with the release of [Kubernetes v1.24][aks-release-calendar]. You should change any label references to the recommended substitute. | Label | Recommended substitute | Maintainer | | | | | The following labels are planned for deprecation with the release of [Kubernetes | Storagetier* | kubernetes.azure.com/storagetier | Azure Kubernetes Service | Accelerator* | kubernetes.azure.com/accelerator | Azure Kubernetes Service -*Newly deprecated. For more information, see [Release Notes][aks-release-notes-gh] on when these labels will no longer be maintained. +*Newly deprecated. For more information, see the [Release Notes][aks-release-notes-gh]. ## Next steps -Learn more about Kubernetes labels at the [Kubernetes labels documentation][kubernetes-labels]. +Learn more about Kubernetes labels in the [Kubernetes labels documentation][kubernetes-labels]. <!-- LINKS - external --> [aks-release-2021-gh]: https://github.com/Azure/AKS/releases/tag/2021-08-19 Learn more about Kubernetes labels at the [Kubernetes labels documentation][kube [az-aks-nodepool-list]: /cli/azure/aks/nodepool#az-aks-nodepool-list [az-aks-nodepool-update]: /cli/azure/aks/nodepool#az-aks-nodepool-update [create-or-update-os-sku]: /rest/api/aks/agent-pools/create-or-update#ossku-[install-azure-cli]: /cli/azure/install-azure-cli +[install-azure-cli]: /cli/azure/install-azure-cli |
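Because the labels article above sets labels but this excerpt doesn't show them driving scheduling, here is a small hedged sketch that schedules a test pod onto the labeled nodes and then filters nodes by label. The pod name and image are arbitrary placeholders; the label values come from the article's `az aks create` example.

```bash
# Schedule a test pod onto nodes that carry the example labels via a nodeSelector.
kubectl run label-test --image=nginx \
  --overrides='{"spec": {"nodeSelector": {"dept": "IT", "costcenter": "9000"}}}'

# Confirm placement and list matching nodes.
kubectl get pod label-test -o wide
kubectl get nodes -l dept=IT,costcenter=9000
```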
aks | Use Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md | Title: Use a managed identity in Azure Kubernetes Service (AKS) description: Learn how to use a system-assigned or user-assigned managed identity in Azure Kubernetes Service (AKS). Previously updated : 04/26/2023 Last updated : 05/10/2023 # Use a managed identity in Azure Kubernetes Service (AKS) AKS uses several managed identities for built-in services and add-ons. | Identity | Name | Use case | Default permissions | Bring your own identity |-|--|-|-| Control plane | AKS Cluster Name | Used by AKS control plane components to manage cluster resources including ingress load balancers and AKS-managed public IPs, Cluster Autoscaler, Azure Disk & File CSI drivers. | Contributor role for Node resource group | Supported +| Control plane | AKS Cluster Name | Used by AKS control plane components to manage cluster resources including ingress load balancers and AKS-managed public IPs, Cluster Autoscaler, Azure Disk, File, Blob CSI drivers. | Contributor role for Node resource group | Supported | Kubelet | AKS Cluster Name-agentpool | Authentication with Azure Container Registry (ACR). | N/A (for kubernetes v1.15+) | Supported | Add-on | AzureNPM | No identity required. | N/A | No | Add-on | AzureCNI network monitoring | No identity required. | N/A | No |
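To complement the identity table above, here is a hedged sketch for inspecting which identities an existing cluster actually uses. The resource group and cluster names are the placeholders used elsewhere in these articles, and the query paths follow the `az aks show` output fields for the control plane and kubelet identities.

```azurecli-interactive
# Show the control plane (cluster) identity and the kubelet identity for an existing cluster.
az aks show \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --query "{controlPlane: identity, kubelet: identityProfile.kubeletidentity}" \
    --output json
```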
aks | Use Windows Hpc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-windows-hpc.md | Title: Use Windows HostProcess containers description: Learn how to use HostProcess & Privileged containers for Windows workloads on AKS Previously updated : 4/6/2022 Last updated : 05/09/2023 HostProcess / Privileged containers extend the Windows container model to enable A privileged DaemonSet can carry out changes or monitor a Linux host on Kubernetes but not Windows hosts. HostProcess containers are the Windows equivalent of host elevation. - ## Limitations * HostProcess containers require Kubernetes 1.23 or greater. A privileged DaemonSet can carry out changes or monitor a Linux host on Kubernetes * Resource limits such as disk, memory, and CPU count work the same way as they do for processes on the host. * Named pipe mounts and Unix domain sockets aren't directly supported, but can be accessed on their host path, for example `\\.\pipe\*`. - ## Run a HostProcess workload To use HostProcess features with your deployment, set *hostProcess: true* and *hostNetwork: true*: spec: hostProcess: true runAsUserName: "NT AUTHORITY\\SYSTEM" command:- - powershell.exe + - C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe - -command - | $AdminRights = ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]"Administrator") Process has admin rights: For more information on HostProcess containers and Microsoft's contribution to Kubernetes upstream, see the [Alpha in v1.22: Windows HostProcess Containers][blog-post]. - <!-- LINKS - External --> [blog-post]: https://kubernetes.io/blog/2021/08/16/windows-hostprocess-containers/ |
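Given the Kubernetes 1.23 or greater requirement called out in the limitations above, a quick pre-flight check from the client side might look like the following sketch. It relies only on the standard `kubernetes.io/os` node label and the kubelet version each node reports.

```bash
# List Windows nodes with their kubelet versions to confirm they meet the 1.23+ requirement.
kubectl get nodes -l kubernetes.io/os=windows \
  -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion,OS:.status.nodeInfo.osImage
```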
aks | Virtual Nodes Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes-portal.md | Title: Create virtual nodes using the portal in Azure Kubernetes Services (AKS) + Title: Create virtual nodes in Azure Kubernetes Service (AKS) using the Azure portal description: Learn how to use the Azure portal to create an Azure Kubernetes Services (AKS) cluster that uses virtual nodes to run pods. Previously updated : 03/15/2021 Last updated : 05/09/2023 # Create and configure an Azure Kubernetes Services (AKS) cluster to use virtual nodes in the Azure portal -This article shows you how to use the Azure portal to create and configure the virtual network resources and an AKS cluster with virtual nodes enabled. +Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and Azure Kubernetes Service (AKS) clusters. To provide this communication, a virtual network subnet is created and delegated permissions are assigned. Virtual nodes only work with AKS clusters created using *advanced* networking (Azure CNI). AKS clusters are created with *basic* networking (kubenet) by default. ++This article shows you how to create a virtual network and subnets, and then deploy an AKS cluster that uses advanced networking using the Azure portal. > [!NOTE]-> [This article](virtual-nodes.md) gives you an overview of the region availability and limitations using virtual nodes. +> For an overview of virtual node region availability and limitations, see [Use virtual nodes in AKS](virtual-nodes.md). ## Before you begin -Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and the AKS cluster. To provide this communication, a virtual network subnet is created and delegated permissions are assigned. Virtual nodes only work with AKS clusters created using *advanced* networking (Azure CNI). By default, AKS clusters are created with *basic* networking (kubenet). This article shows you how to create a virtual network and subnets, then deploy an AKS cluster that uses advanced networking. --If you have not previously used ACI, register the service provider with your subscription. You can check the status of the ACI provider registration using the [az provider list][az-provider-list] command, as shown in the following example: +You need the ACI service provider registered on your subscription. -```azurecli-interactive -az provider list --query "[?contains(namespace,'Microsoft.ContainerInstance')]" -o table -``` +* Check the status of the ACI provider registration using the [`az provider list`][az-provider-list] command. 
-The *Microsoft.ContainerInstance* provider should report as *Registered*, as shown in the following example output: + ```azurecli-interactive + az provider list --query "[?contains(namespace,'Microsoft.ContainerInstance')]" -o table + ``` -```output -Namespace RegistrationState RegistrationPolicy - - -- -Microsoft.ContainerInstance Registered RegistrationRequired -``` + The following example output shows the *Microsoft.ContainerInstance* provider is *Registered*: -If the provider shows as *NotRegistered*, register the provider using the [az provider register][az-provider-register] as shown in the following example: + ```output + Namespace RegistrationState RegistrationPolicy + - -- + Microsoft.ContainerInstance Registered RegistrationRequired + ``` -```azurecli-interactive -az provider register --namespace Microsoft.ContainerInstance -``` +* If the provider is *NotRegistered*, register it using the [`az provider register`][az-provider-register] command. -## Sign in to Azure --Sign in to the Azure portal at https://portal.azure.com. + ```azurecli-interactive + az provider register --namespace Microsoft.ContainerInstance + ``` ## Create an AKS cluster -In the top left-hand corner of the Azure portal, select **Create a resource** > **Kubernetes Service**. --On the **Basics** page, configure the following options: --- *PROJECT DETAILS*: Select an Azure subscription, then select or create an Azure resource group, such as *myResourceGroup*. Enter a **Kubernetes cluster name**, such as *myAKSCluster*.-- *CLUSTER DETAILS*: Select a region and Kubernetes version for the AKS cluster.-- *PRIMARY NODE POOL*: Select a VM size for the AKS nodes. The VM size **cannot** be changed once an AKS cluster has been deployed.- - Select the number of nodes to deploy into the cluster. For this article, set **Node count** to *1*. Node count **can** be adjusted after the cluster has been deployed. --Click **Next: Node Pools**. --On the **Node Pools** page, select *Enable virtual nodes*. +1. Navigate to the Azure portal home page. +2. Select **Create a resource** > **Containers**. +3. On the **Azure Kubernetes Service (AKS)** resource, select **Create**. +4. On the **Basics** page, configure the following options: + * *Project details*: Select an Azure subscription, then select or create an Azure resource group, such as *myResourceGroup*. + * *Cluster details*: Enter a **Kubernetes cluster name**, such as *myAKSCluster*. Select a region and Kubernetes version for the AKS cluster. +5. Select **Next: Node pools** and check **Enable virtual nodes*. + :::image type="content" source="media/virtual-nodes-portal/enable-virtual-nodes.png" alt-text="Screenshot that shows creating a cluster with virtual nodes enabled on the Azure portal. The option 'Enable virtual nodes' is highlighted."::: +6. Select **Review + create**. +7. After the validation completes, select **Create**. +By default, this process creates a managed cluster identity, which is used for cluster communication and integration with other Azure services. For more information, see [Use managed identities](use-managed-identity.md). You can also use a service principal as your cluster identity. -By default, a cluster identity is created. This cluster identity is used for cluster communication and integration with other Azure services. By default, this cluster identity is a managed identity. For more information, see [Use managed identities](use-managed-identity.md). You can also use a service principal as your cluster identity. 
--The cluster is also configured for advanced networking. The virtual nodes are configured to use their own Azure virtual network subnet. This subnet has delegated permissions to connect Azure resources between the AKS cluster. If you don't already have delegated subnet, the Azure portal creates and configures the Azure virtual network and subnet for use with the virtual nodes. --Select **Review + create**. After the validation is complete, select **Create**. --It takes a few minutes to create the AKS cluster and to be ready for use. +This process configures the cluster for advanced networking and the virtual nodes to use their own Azure virtual network subnet. The subnet has delegated permissions to connect Azure resources between the AKS cluster. If you don't already have a delegated subnet, the Azure portal creates and configures an Azure virtual network and subnet with the virtual nodes. ## Connect to the cluster -The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. To manage a Kubernetes cluster, use [kubectl][kubectl], the Kubernetes command-line client. The `kubectl` client is pre-installed in the Azure Cloud Shell. --To open the Cloud Shell, select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press enter to run it. +The Azure Cloud Shell is a free interactive shell you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. To manage a Kubernetes cluster, use [kubectl][kubectl], the Kubernetes command-line client. The `kubectl` client is pre-installed in the Azure Cloud Shell. -Use the [az aks get-credentials][az-aks-get-credentials] command to configure `kubectl` to connect to your Kubernetes cluster. The following example gets credentials for the cluster name *myAKSCluster* in the resource group named *myResourceGroup*: +1. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. The following example gets credentials for the cluster name *myAKSCluster* in the resource group named *myResourceGroup*: -```azurecli-interactive -az aks get-credentials --resource-group myResourceGroup --name myAKSCluster -``` + ```azurecli-interactive + az aks get-credentials --resource-group myResourceGroup --name myAKSCluster + ``` -To verify the connection to your cluster, use the [kubectl get][kubectl-get] command to return a list of the cluster nodes. +2. Verify the connection to your cluster using the [`kubectl get nodes`][kubectl-get]. 
-```console -kubectl get nodes -``` + ```azurecli-interactive + kubectl get nodes + ``` -The following example output shows the single VM node created and then the virtual node for Linux, *virtual-node-aci-linux*: + The following example output shows the single VM node created and the virtual Linux node named *virtual-node-aci-linux*: -```output -NAME STATUS ROLES AGE VERSION -virtual-node-aci-linux Ready agent 28m v1.11.2 -aks-agentpool-14693408-0 Ready agent 32m v1.11.2 -``` + ```output + NAME STATUS ROLES AGE VERSION + virtual-node-aci-linux Ready agent 28m v1.11.2 + aks-agentpool-14693408-0 Ready agent 32m v1.11.2 + ``` ## Deploy a sample app -In the Azure Cloud Shell, create a file named `virtual-node.yaml` and copy in the following YAML. To schedule the container on the node, a [nodeSelector][node-selector] and [toleration][toleration] are defined. These settings allow the pod to be scheduled on the virtual node and confirm that the feature is successfully enabled. --```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: aci-helloworld -spec: - replicas: 1 - selector: - matchLabels: - app: aci-helloworld - template: +1. In the Azure Cloud Shell, create a file named `virtual-node.yaml` and copy in the following YAML: ++ ```yaml + apiVersion: apps/v1 + kind: Deployment metadata:- labels: - app: aci-helloworld + name: aci-helloworld spec:- containers: - - name: aci-helloworld - image: mcr.microsoft.com/azuredocs/aci-helloworld - ports: - - containerPort: 80 - nodeSelector: - kubernetes.io/role: agent - beta.kubernetes.io/os: linux - type: virtual-kubelet - tolerations: - - key: virtual-kubelet.io/provider - operator: Exists -``` --Run the application with the [kubectl apply][kubectl-apply] command. --```azurecli-interactive -kubectl apply -f virtual-node.yaml -``` --Use the [kubectl get pods][kubectl-get] command with the `-o wide` argument to output a list of pods and the scheduled node. Notice that the `virtual-node-helloworld` pod has been scheduled on the `virtual-node-linux` node. --```console -kubectl get pods -o wide -``` --```output -NAME READY STATUS RESTARTS AGE IP NODE -virtual-node-helloworld-9b55975f-bnmfl 1/1 Running 0 4m 10.241.0.4 virtual-node-aci-linux -``` --The pod is assigned an internal IP address from the Azure virtual network subnet delegated for use with virtual nodes. + replicas: 1 + selector: + matchLabels: + app: aci-helloworld + template: + metadata: + labels: + app: aci-helloworld + spec: + containers: + - name: aci-helloworld + image: mcr.microsoft.com/azuredocs/aci-helloworld + ports: + - containerPort: 80 + nodeSelector: + kubernetes.io/role: agent + beta.kubernetes.io/os: linux + type: virtual-kubelet + tolerations: + - key: virtual-kubelet.io/provider + operator: Exists + ``` ++ The YAML defines a [nodeSelector][node-selector] and [toleration][toleration], which allows the pod to be scheduled on the virtual node. The pod is assigned an internal IP address from the Azure virtual network subnet delegated for use with virtual nodes. ++2. Run the application using the [`kubectl apply`][kubectl-apply] command. ++ ```azurecli-interactive + kubectl apply -f virtual-node.yaml + ``` ++3. View the pods scheduled on the node using the [`kubectl get pods`][kubectl-get] command with the `-o wide` argument. ++ ```azurecli-interactive + kubectl get pods -o wide + ``` ++ The following example output shows the `virtual-node-helloworld` pod scheduled on the `virtual-node-linux` node. 
++ ```output + NAME READY STATUS RESTARTS AGE IP NODE + virtual-node-helloworld-9b55975f-bnmfl 1/1 Running 0 4m 10.241.0.4 virtual-node-aci-linux + ``` > [!NOTE]-> If you use images stored in Azure Container Registry, [configure and use a Kubernetes secret][acr-aks-secrets]. A current limitation of virtual nodes is that you can't use integrated Azure AD service principal authentication. If you don't use a secret, pods scheduled on virtual nodes fail to start and report the error `HTTP response status code 400 error code "InaccessibleImage"`. +> If you use images stored in Azure Container Registry, [configure and use a Kubernetes secret][acr-aks-secrets]. A limitation of virtual nodes is you can't use integrated Azure AD service principal authentication. If you don't use a secret, pods scheduled on virtual nodes fail to start and report the error `HTTP response status code 400 error code "InaccessibleImage"`. ## Test the virtual node pod -To test the pod running on the virtual node, browse to the demo application with a web client. As the pod is assigned an internal IP address, you can quickly test this connectivity from another pod on the AKS cluster. Create a test pod and attach a terminal session to it: +To test the pod running on the virtual node, browse to the demo application with a web client. The pod is assigned an internal IP address, so you can easily test the connectivity from another pod on the AKS cluster. ++1. Create a test pod and attach a terminal session to it using the following `kubectl run` command. ++ ```console + kubectl run -it --rm virtual-node-test --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 + ``` -```console -kubectl run -it --rm virtual-node-test --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 -``` +2. Install `curl` in the pod using the following `apt-get` command. -Install `curl` in the pod using `apt-get`: + ```console + apt-get update && apt-get install -y curl + ``` -```console -apt-get update && apt-get install -y curl -``` +3. Access the address of your pod using the following `curl` command and provide your internal IP address. -Now access the address of your pod using `curl`, such as *http://10.241.0.4*. Provide your own internal IP address shown in the previous `kubectl get pods` command: + ```console + curl -L http://10.241.0.4 + ``` -```console -curl -L http://10.241.0.4 -``` + The following condensed example output shows the demo application. -The demo application is displayed, as shown in the following condensed example output: + ```output + <html> + <head> + <title>Welcome to Azure Container Instances!</title> + </head> + [...] + ``` -```output -<html> -<head> - <title>Welcome to Azure Container Instances!</title> -</head> -[...] -``` +4. Close the terminal session to your test pod with `exit`, which also deletes the pod. -Close the terminal session to your test pod with `exit`. When your session is ended, the pod is the deleted. + ```console + exit + ``` ## Next steps -In this article, a pod was scheduled on the virtual node and assigned a private, internal IP address. You could instead create a service deployment and route traffic to your pod through a load balancer or ingress controller. For more information, see [Create a basic ingress controller in AKS][aks-basic-ingress]. +In this article, you scheduled a pod on the virtual node and assigned a private, internal IP address. If you want, you can instead create a service deployment and route traffic to your pod through a load balancer or ingress controller. 
For more information, see [Create a basic ingress controller in AKS][aks-basic-ingress]. Virtual nodes are one component of a scaling solution in AKS. For more information on scaling solutions, see the following articles: -- [Use the Kubernetes horizontal pod autoscaler][aks-hpa]-- [Use the Kubernetes cluster autoscaler][aks-cluster-autoscaler]-- [Check out the Autoscale sample for Virtual Nodes][virtual-node-autoscale]-- [Read more about the Virtual Kubelet open source library][virtual-kubelet-repo]+* [Use the Kubernetes horizontal pod autoscaler][aks-hpa] +* [Use the Kubernetes cluster autoscaler][aks-cluster-autoscaler] +* [Autoscale for virtual nodes][virtual-node-autoscale] +* [Virtual Kubelet open source library][virtual-kubelet-repo] <!-- LINKS - external --> [kubectl]: https://kubernetes.io/docs/reference/kubectl/ Virtual nodes are one component of a scaling solution in AKS. For more informati [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [node-selector]:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ [toleration]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/-[azure-cni]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md -[aks-github]: https://github.com/azure/aks/issues] [virtual-node-autoscale]: https://github.com/Azure-Samples/virtual-node-autoscale [virtual-kubelet-repo]: https://github.com/virtual-kubelet/virtual-kubelet [acr-aks-secrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ <!-- LINKS - internal -->-[aks-network]: ./configure-azure-cni.md [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [aks-hpa]: tutorial-kubernetes-scale.md [aks-cluster-autoscaler]: cluster-autoscaler.md |
api-management | Soft Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/soft-delete.md | You can verify that a soft-deleted API Management instance is available to resto ### Get a soft-deleted instance by name -Use the API Management [Get By Name](/rest/api/apimanagement/current-ga/deleted-services/get-by-name) operation, substituting `{subscriptionId}`, `{location}`, and `{serviceName}` with your Azure subscription, resource location, and API Management instance name: +Use the API Management [Get By Name](/rest/api/apimanagement/current-ga/deleted-services/get-by-name) operation, substituting `{subscriptionId}`, `{location}`, and `{serviceName}` with your Azure subscription, [resource location name](/rest/api/resources/subscriptions/list-locations#location), and API Management instance name: ```rest GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/locations/{location}/deletedservices/{serviceName}?api-version=2021-08-01 |
app-service | App Service Asp Net Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-asp-net-migration.md | Bulk migration provides the following key capabilities: - Download CSV with details of target web app and app service plan name - Track progress of migration using ARM template deployment experience +### Move .NET apps to Azure App Service ++Azure App Service is a cloud platform that offers a fast, easy, and cost-effective way to migrate your .NET web apps from on-premises to the cloud. Start learning today about how Azure empowers you to modernize your .NET apps with the following resources. ++Ready for a migration assessment? Select one of the following options to get started. ++- [Self-service assessment](https://azure.microsoft.com/products/app-service/migration-tools/) +- [Partner assessment](https://aka.ms/app-service-migration-dotnet) ++Want to learn more? ++| More resources to migrate .NET apps to the cloud | +|-| +| **Video** | +| [.NET on Azure for Beginners](https://www.youtube.com/playlist?list=PLdo4fOcmZ0oVSBX3Lde8owu6dSgZLIXfu) | +| [Start Your Cloud Journey with Azure App Service](https://aka.ms/cloudjourney/start/video) | +| **Blog** | +| [Reliable web app pattern for .NET](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/announcing-the-reliable-web-app-pattern-for-net/ba-p/3745270) | +| [Start your cloud journey with Azure App Service](https://aka.ms/cloudjourney/start/part1) | +| [Start your cloud journey with Azure App Service - Move your code](https://aka.ms/cloudjourney/start/part2) | +| [Learn how to modernize your .NET apps from the pros](https://devblogs.microsoft.com/dotnet/learn-how-to-modernize-your-dotnet-apps/) | +| **On-demand event** | +| [Azure Developers - .NET Day](/events/learn-events/azuredeveloper-dotnetday/) +| **Learning path** | +| [Migrate ASP.NET Apps to Azure](/training/paths/migrate-dotnet-apps-azure/) | +| [Host a web application with Azure App Service](/training/modules/host-a-web-app-with-azure-app-service/) | +| [Publish a web app to Azure with Visual Studio](/training/modules/publish-azure-web-app-with-visual-studio/) | ++ ### At-scale migration resources | How-tos | |
app-service | Configure Language Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md | The built-in Java images are based on the [Alpine Linux](https://alpine-linux.re ::: zone-end -### Flight Recorder +### Java Profiler -All Java runtimes on App Service using the Azul JVMs come with the Zulu Flight Recorder. You can use this to record JVM, system, and application events and troubleshoot problems in your Java applications. +All Java runtimes on Azure App Service come with the JDK Flight Recorder for profiling Java workloads. You can use this to record JVM, system, and application events and troubleshoot problems in your applications. --#### Timed Recording --To take a timed recording, you'll need the PID (Process ID) of the Java application. To find the PID, open a browser to your web app's SCM site at `https://<your-site-name>.scm.azurewebsites.net/ProcessExplorer/`. This page shows the running processes in your web app. Find the process named "java" in the table and copy the corresponding PID (Process ID). --Next, open the **Debug Console** in the top toolbar of the SCM site and run the following command. Replace `<pid>` with the process ID you copied earlier. This command will start a 30-second profiler recording of your Java application and generate a file named `timed_recording_example.jfr` in the `D:\home` directory. --``` -jcmd <pid> JFR.start name=TimedRecording settings=profile duration=30s filename="D:\home\timed_recording_example.JFR" -``` ---SSH into your App Service and run the `jcmd` command to see a list of all the Java processes running. In addition to jcmd itself, you should see your Java application running with a process ID number (pid). --```shell -078990bbcd11:/home# jcmd -Picked up JAVA_TOOL_OPTIONS: -Djava.net.preferIPv4Stack=true -147 sun.tools.jcmd.JCmd -116 /home/site/wwwroot/app.jar -``` --Execute the command below to start a 30-second recording of the JVM. This will profile the JVM and create a JFR file named *jfr_example.jfr* in the home directory. (Replace 116 with the pid of your Java app.) --```shell -jcmd 116 JFR.start name=MyRecording settings=profile duration=30s filename="/home/jfr_example.jfr" -``` --During the 30-second interval, you can validate the recording is taking place by running `jcmd 116 JFR.check`. This will show all recordings for the given Java process. --#### Continuous Recording --You can use Zulu Flight Recorder to continuously profile your Java application with minimal impact on runtime performance. To do so, run the following Azure CLI command to create an App Setting named JAVA_OPTS with the necessary configuration. The contents of the JAVA_OPTS App Setting are passed to the `java` command when your app is started. --```azurecli -az webapp config appsettings set -g <your_resource_group> -n <your_app_name> --settings JAVA_OPTS=-XX:StartFlightRecording=disk=true,name=continuous_recording,dumponexit=true,maxsize=1024m,maxage=1d -``` --Once the recording has started, you can dump the current recording data at any time using the `JFR.dump` command. --```shell -jcmd <pid> JFR.dump name=continuous_recording filename="/home/recording1.jfr" -``` ---#### Analyze `.jfr` files --Use [FTPS](deploy-ftp.md) to download your JFR file to your local machine. To analyze the JFR file, download and install [Zulu Mission Control](https://www.azul.com/products/zulu-mission-control/). 
For instructions on Zulu Mission Control, see the [Azul documentation](https://docs.azul.com/zmc/) and the [installation instructions](/java/azure/jdk/java-jdk-flight-recorder-and-mission-control). +To learn more about the Java Profiler, visit the [Azure Application Insights documentation](/azure/azure-monitor/app/java-standalone-profiler). ### App logging To configure the app setting from the Maven plugin, add setting/value tags in th <appSettings> <property> <name>JAVA_OPTS</name>- <value>-Xms512m -Xmx1204m</value> + <value>-Xms1024m -Xmx1024m</value> </property> </appSettings> ``` Microsoft and Adoptium builds of OpenJDK are provided and supported on App Servi | Java Version | Linux | Windows | |--||-|-| Java 8 | 1.8.0_312 (Zulu) * | 1.8.0_312 (Adoptium) | -| Java 11 | 11.0.13 (MSFT) | 11.0.13 (MSFT) | -| Java 17 | 17.0.1 (MSFT) | 17.0.1 (MSFT) | +| Java 8 | 1.8.0_312 (Adoptium) * | 1.8.0_312 (Adoptium) | +| Java 11 | 11.0.13 (Microsoft) | 11.0.13 (Microsoft) | +| Java 17 | 17.0.1 (Microsoft) | 17.0.1 (Microsoft) | \* In following releases, Java 8 on Linux will be distributed from Adoptium builds of the OpenJDK. -If you are [pinned](#choosing-a-java-runtime-version) to an older minor version of Java your site may be using the [Zulu for Azure](https://www.azul.com/downloads/#zulu) binaries provided through [Azul Systems](https://www.azul.com/). You can continue to use these binaries for your site, but any security patches or improvements will only be available in new versions of the OpenJDK, so we recommend that you periodically update your Web Apps to a later version of Java. +If you are [pinned](#choosing-a-java-runtime-version) to an older minor version of Java, your site may be using the deprecated [Azul Zulu for Azure](https://devblogs.microsoft.com/java/end-of-updates-support-and-availability-of-zulu-for-azure/) binaries provided through [Azul Systems](https://www.azul.com/). You can continue to use these binaries for your site, but any security patches or improvements will only be available in new versions of the OpenJDK, so we recommend that you periodically update your Web Apps to a later version of Java. Major version updates will be provided through new runtime options in Azure App Service. Customers update to these newer versions of Java by configuring their App Service deployment and are responsible for testing and ensuring the major update meets their needs. If a supported Java runtime will be retired, Azure developers using the affected ### Local development -Developers can download the Production Edition of Azul Zulu Enterprise JDK for local development from [Azul's download site](https://www.azul.com/downloads/#zulu). +Developers can download the Microsoft Build of OpenJDK for local development from [our download site](/java/openjdk/download). ### Development support |
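For context, the `JAVA_OPTS` heap setting shown in the Maven `appSettings` snippet above can also be applied directly as an App Service app setting. The following is a minimal sketch using the Azure CLI (resource group and app name are placeholders; adjust the heap sizes for your plan):

```azurecli
# Set the JVM heap options via the JAVA_OPTS app setting (placeholder resource names).
az webapp config appsettings set \
    --resource-group <your-resource-group> \
    --name <your-app-name> \
    --settings JAVA_OPTS="-Xms1024m -Xmx1024m"
```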
app-service | Configure Language Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md | Title: Configure PHP apps -description: Learn how to configure a PHP app in the native Windows instances, or in a pre-built PHP container, in Azure App Service. This article shows the most common configuration tasks. +description: Learn how to configure a PHP app in a pre-built PHP container, in Azure App Service. This article shows the most common configuration tasks. ms.devlang: php Previously updated : 06/02/2020 Last updated : 05/09/2023 zone_pivot_groups: app-service-platform-windows-linux |
app-service | Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/creation.md | If you're creating an App Service Environment with an external VIP, you can sele  -5. From the **Review + create** tab, check that your configuration is correct, and select **Create**. Your App Service Environment can more than one hour to create. +5. From the **Review + create** tab, check that your configuration is correct, and select **Create**. Your App Service Environment can take more than one hour to create. When your App Service Environment has been successfully created, you can select it as a location when you're creating your apps. |
app-service | Overview Access Restrictions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-access-restrictions.md | In this scenario, you're accessing your site through a private endpoint and are Traffic from Azure Front Door to your application originates from a well-known set of IP ranges defined in the `AzureFrontDoor.Backend` service tag. Using a service tag restriction rule, you can restrict traffic to only originate from Azure Front Door. To ensure traffic only originates from your specific instance, you need to further filter the incoming requests based on the unique HTTP header that Azure Front Door sends, called X-Azure-FDID. You can find the Front Door ID in the portal. ## Next steps+> [!NOTE] +> Access restriction rules that block public access to your site can also block services such as log streaming. If you require these services, you will need to allow your App Service's IP address in your restrictions. > [!div class="nextstepaction"] > [How to restrict access](app-service-ip-restrictions.md) |
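As a sketch of the Front Door restriction described above, the rule can also be created with the Azure CLI. This assumes the `az webapp config access-restriction add` command; the Front Door ID is a placeholder, and the exact name of the header parameter (`--http-headers` here) can vary by CLI version, so check `az webapp config access-restriction add --help` before relying on it:

```azurecli
# Allow only traffic that arrives through your specific Azure Front Door instance (sketch).
az webapp config access-restriction add \
    --resource-group <resource-group> \
    --name <app-name> \
    --rule-name "Allow-AFD" \
    --action Allow \
    --priority 100 \
    --service-tag AzureFrontDoor.Backend \
    --http-headers x-azure-fdid=<front-door-id>
```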
app-service | Overview Vnet Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md | Title: Integrate your app with an Azure virtual network description: Integrate your app in Azure App Service with Azure virtual networks. Previously updated : 05/01/2023 Last updated : 05/09/2023 The virtual network integration feature supports two virtual interfaces per work Virtual network integration depends on a dedicated subnet. When you create a subnet, the Azure subnet consumes five IPs from the start. One address is used from the integration subnet for each App Service plan instance. If you scale your app to four instances, then four addresses are used. -When you scale up or down in size, the required address space is doubled for a short period of time. The scale operation affects the real, available supported instances for a given subnet size. Platform upgrades need free IP addresses to ensure upgrade can happen without interruptions to outbound traffic. Finally, after scale up, down or in operations complete, there might be a short period of time before IP addresses are released. +When you scale up/down in size or in/out in number of instances, the required address space is doubled for a short period of time. The scale operation affects the real, available supported instances for a given subnet size. Platform upgrades need free IP addresses to ensure upgrade can happen without interruptions to outbound traffic. Finally, after scale up, down or in operations complete, there might be a short period of time before IP addresses are released. Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. You should also reserve IP addresses for platform upgrades. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /27 is required. If the subnet already exists before integrating through the portal, you can use a /28 subnet. |
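As a rough worked example of the sizing guidance above (assuming the five addresses Azure reserves per subnet and the temporary doubling during scale operations): a `/26` subnet provides 64 addresses, 64 − 5 = 59 of which are usable, and 59 ÷ 2 ≈ 29, so a `/26` leaves headroom for an App Service plan of roughly 30 instances even while a scale operation or platform upgrade is in flight. By the same arithmetic, a `/28` (16 addresses) supports only about 5 instances during scaling, which is why it's suitable only for small, pre-existing subnets.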
app-service | Reference App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md | Title: Environment variables and app settings reference description: Describes the commonly used environment variables, and which ones can be modified with app settings. Previously updated : 11/01/2022 Last updated : 05/09/2023 # Environment variables and app settings in Azure App Service APPSVC_REMOTE_DEBUGGING_BREAK | debugArgs+=" -debugWait" --> | Setting name | Description | Example| |-|-|-| | `PHP_Extensions` | Comma-separated list of PHP extensions. | `extension1.dll,extension2.dll,Name1=value1` |-| `PHP_ZENDEXTENSIONS` | For Windows native apps, set to the path of the XDebug extension, such as `D:\devtools\xdebug\2.6.0\php_7.2\php_xdebug-2.6.0-7.2-vc15-nts.dll`. For Linux apps, set to `xdebug` to use the XDebug version of the PHP container. || +| `PHP_ZENDEXTENSIONS` | For Linux apps, set to `xdebug` to use the XDebug version of the PHP container. || | `PHP_VERSION` | Read-only. The selected PHP version. ||-| `PORT` | Read-only. Port that Apache server listens to in the container. || +| `WEBSITE_PORT` | Read-only. Port that Apache server listens to in the container. || | `WEBSITE_ROLE_INSTANCE_ID` | Read-only. ID of the current instance. || | `WEBSITE_PROFILER_ENABLE_TRIGGER` | Set to `TRUE` to add `xdebug.profiler_enable_trigger=1` and `xdebug.profiler_enable=0` to the default `php.ini`. ||-| `WEBSITE_ENABLE_PHP_ACCESS_LOGS` | Set to `TRUE` to log requests to the server (`CustomLog \dev\stderr combined` is added to `/etc/apache2/apache2.conf`). || -| `APACHE_SERVER_LIMIT` | Apache specific variable. The default is `1000`. || -| `APACHE_MAX_REQ_WORKERS` | Apache specific variable. The default is `256`. || <!-- ZEND_BIN_PATH |
applied-ai-services | Project Share Custom Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/project-share-custom-models.md | + + Title: "Share custom model projects using Form Recognizer Studio" ++description: Learn how to share custom model projects using Form Recognizer Studio. +++++ Last updated : 05/08/2023++monikerRange: 'form-recog-3.0.0' +++# Share custom model projects using Form Recognizer Studio ++Form Recognizer Studio is an online tool to visually explore, understand, train, and integrate features from the Form Recognizer service into your applications. Form Recognizer Studio enables the project sharing feature within the custom extraction model. Projects can be shared easily via a project token. The same project token can also be used to import a project. ++## Prerequisite ++In order to share and import your custom extraction projects seamlessly, both users (user who shares and user who imports) need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). Also, both users need to configure permissions to grant access to the Form Recognizer and storage resources. ++## Granted access and permissions ++ > [!IMPORTANT] + > Custom model projects can be imported only if you have access to the storage account that is associated with the project you are trying to import. Check your storage account permission before starting to share or import projects with others. ++### Managed identity ++Enable a system-assigned managed identity for your Form Recognizer resource. A system-assigned managed identity is enabled directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting. ++For more information, *see* [Enable a system-assigned managed identity](../managed-identities.md#enable-a-system-assigned-managed-identity) ++### Role-based access control (RBAC) ++Grant your Form Recognizer managed identity access to your storage account using Azure role-based access control (Azure RBAC). The [Storage Blob Data Contributor](../../..//role-based-access-control/built-in-roles.md#storage-blob-data-reader) role grants read, write, and delete permissions for Azure Storage containers and blobs. ++For more information, *see* [Grant access to your storage account](../managed-identities.md#grant-access-to-your-storage-account) ++### Configure cross origin resource sharing (CORS) ++CORS needs to be configured in your Azure storage account for it to be accessible to the Form Recognizer Studio. You can update the CORS setting in the Azure portal. ++For more information, *see* [Configure CORS](../quickstarts/try-form-recognizer-studio.md#configure-cors) ++### Virtual networks and firewalls ++If a virtual network is enabled on your storage account or there are any firewall constraints, the project can't be shared. If you want to bypass those restrictions, ensure that those settings are turned off. ++A workaround is to manually create a project using the same settings as the project being shared. 
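For the CORS configuration mentioned above, the following is a minimal Azure CLI sketch rather than the article's own example (it assumes the `az storage cors add` command, uses the Form Recognizer Studio URL referenced elsewhere in this article as the allowed origin, and the storage account name is a placeholder):

```azurecli
# Allow the Form Recognizer Studio origin to call the blob service (sketch; adjust values).
az storage cors add \
    --account-name <your-storage-account> \
    --services b \
    --origins "https://formrecognizer.appliedai.azure.com" \
    --methods GET PUT OPTIONS \
    --allowed-headers "*" \
    --exposed-headers "*" \
    --max-age 200
```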
++### User sharing requirements ++Users sharing the project need to create a project [**`ListAccountSAS`**](/rest/api/storagerp/storage-accounts/list-account-sas) to configure the storage account CORS and a [**`ListServiceSAS`**](/rest/api/storagerp/storage-accounts/list-service-sas) to generate a SAS token for *read*, *write* and *list* container's file in addition to blob storage data *update* permissions. ++### User importing requirements ++Users who want to import the project need a [**`ListServiceSAS`**](/rest/api/storagerp/storage-accounts/list-service-sas) to generate a SAS token for *read*, *write* and *list* container's file in addition to blob storage data *update* permissions. ++## Share a custom extraction model with Form Recognizer studio ++Follow these steps to share your project using Form Recognizer studio: ++1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). ++1. In the Studio, select the **Custom extraction models** tile, under the custom models section. ++ :::image type="content" source="../media/how-to/studio-custom-extraction.png" alt-text="Screenshot showing how to select a custom extraction model in the Studio."::: ++1. On the custom extraction models page, select the desired model to share and then select the **Share** button. ++ :::image type="content" source="../media/how-to/studio-project-share.png" alt-text="Screenshot showing how to select the desired model and select the share option."::: ++1. On the share project dialog, copy the project token for the selected project. +++## Import custom extraction model with Form Recognizer studio ++Follow these steps to import a project using Form Recognizer studio. ++1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). ++1. In the Studio, select the **Custom extraction models** tile, under the custom models section. ++ :::image type="content" source="../media/how-to/studio-custom-extraction.png" alt-text="Screenshot: Select custom extraction model in the Studio."::: ++1. On the custom extraction models page, select the **Import** button. ++ :::image type="content" source="../media/how-to/studio-project-import.png" alt-text="Screenshot: Select import within custom extraction model page."::: ++1. On the import project dialog, paste the project token shared with you and select import. +++## Next steps ++> [!div class="nextstepaction"] +> [Back up and recover models](../disaster-recovery.md) |
automanage | Virtual Machines Custom Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-custom-profile.md | The `azureSecurityBaselineAssignmentType` is the audit mode that you can choose You can also specify an existing log analytics workspace by adding this setting to the configuration section of properties below: * "LogAnalytics/Workspace": "/subscriptions/**subscriptionId**/resourceGroups/**resourceGroupName**/providers/Microsoft.OperationalInsights/workspaces/**workspaceName**"-* "LogAnalytics/Behavior": false -Specify your existing workspace in the `LogAnalytics/Workspace` line. Set the `LogAnalytics/Behavior` setting to true if you would like this log analytics workspace to be used in all cases. This means that any machine with this custom profile will use this workspace, even it is already connected to one. By default, the `LogAnalytics/Behavior` is set to false. If your machine is already connected to a workspace, then that workspace will continue to be used. If it's not connected to a workspace, then the workspace specified in `LogAnalytics\Workspace` will be used. +* "LogAnalytics/Reprovision": false +Specify your existing workspace in the `LogAnalytics/Workspace` line. Set the `LogAnalytics/Reprovision` setting to true if you would like this log analytics workspace to be used in all cases. This means that any machine with this custom profile will use this workspace, even if it is already connected to one. By default, the `LogAnalytics/Reprovision` is set to false. If your machine is already connected to a workspace, then that workspace will continue to be used. If it's not connected to a workspace, then the workspace specified in `LogAnalytics\Workspace` will be used.  Also, you can add tags to resources specified in the custom profile like below: |
automation | Automation Webhooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-webhooks.md | Title: Start an Azure Automation runbook from a webhook description: This article tells how to use a webhook to start a runbook in Azure Automation from an HTTP call. Previously updated : 07/21/2021 Last updated : 05/09/2022 Consider the following strategies: write-output "start" write-output ("object type: {0}" -f $WebhookData.gettype()) write-output $WebhookData- #write-warning (Test-Json -Json $WebhookData) - $Payload = $WebhookData | ConvertFrom-Json write-output "`n`n"- write-output $Payload.WebhookName - write-output $Payload.RequestBody - write-output $Payload.RequestHeader + write-output $WebhookData.WebhookName + write-output $WebhookData.RequestBody + write-output $WebhookData.RequestHeader write-output "end" - if ($Payload.RequestBody) { - $names = (ConvertFrom-Json -InputObject $Payload.RequestBody) + if ($WebhookData.RequestBody) { + $names = (ConvertFrom-Json -InputObject $WebhookData.RequestBody) foreach ($x in $names) { |
automation | Overview Monitoring Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md | The following table shows the tracked item limits per machine for change trackin ## Supported operating systems -Change Tracking and Inventory is supported on all operating systems that meet Log Analytics agent requirements. See [supported operating systems](../../azure-monitor/agents/agents-overview.md#supported-operating-systems) for a list of the Windows and Linux operating system versions that are currently supported by the Log Analytics agent. +Change Tracking and Inventory is supported on all operating systems that meet Azure Monitor agent requirements. See [supported operating systems](../../azure-monitor/agents/agents-overview.md#supported-operating-systems) for a list of the Windows and Linux operating system versions that are currently supported by the Azure Monitor agent. To understand client requirements for TLS 1.2, see [TLS 1.2 for Azure Automation](../automation-managing-data.md#tls-12-for-azure-automation). |
automation | Python 3 Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-3-packages.md | After a package has been imported, it's listed on the Python packages page in yo ### Import a package with dependencies -You can import a Python 3.8 package and its dependencies by importing the following Python script into a Python 3.8 runbook, and then running it. +You can import a Python 3.8 package and its dependencies by importing the following Python script into a Python 3.8 runbook. Ensure that managed identity is enabled for your Automation account and that it has Automation Contributor access so the package import succeeds. ```cmd |
automation | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md | description: Significant updates to Azure Automation updated each month. Previously updated : 04/05/2023 Last updated : 05/10/2023 Azure Automation receives improvements on an ongoing basis. To stay up to date w This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md). +## May 2023 ++### General Availability: Python 3.8 runbooks ++Azure Automation announces General Availability of Python 3.8 runbooks. Learn more about Azure Automation [Runbooks](automation-runbook-types.md) and [Python packages](python-3-packages.md). ## April 2023 |
azure-arc | Adding Exporters And Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/adding-exporters-and-pipelines.md | The following properties are currently configurable during the Public Preview: The Telemetry Router supports logs and metrics pipelines. These pipelines are exposed in the custom resource specification of the Arc telemetry router and available for modification. +You can't remove the last pipeline from the telemetry router. If you apply a yaml file that removes the last pipeline, the service rejects the update. + #### Pipeline Settings | Setting | Description | metadata: spec: credentials: certificates:- - certificateName: arcdata-msft-elasticsearch-exporter-internal + - certificateName: arcdata-elasticsearch-exporter - certificateName: cluster-ca-certificate exporters: elasticsearch: - caCertificateName: cluster-ca-certificate- certificateName: arcdata-msft-elasticsearch-exporter-internal + certificateName: arcdata-elasticsearch-exporter endpoint: https://logsdb-svc:9200 index: logstash-otel- name: arcdata/msft/internal + name: arcdata pipelines: logs: exporters:- - elasticsearch/arcdata/msft/internal + - elasticsearch/arcdata ``` spec: secretName: <name_of_secret> secretNamespace: <namespace_with_secret> exporters:- elasticsearch: + Elasticsearch: # Step 2. Declare your Elasticsearch exporter with the needed settings # (certificates, endpoint, and index to export to) - name: myLogs |
azure-arc | Configure Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-managed-instance.md | The following example will scale down the number of replicas from 3 to 2. az sql mi-arc update --name sqlmi1 --replicas 2 --k8s-namespace mynamespace --use-k8s ``` -> [Note] +> [!Note] > If you scale down from 2 replicas to 1 replica, you may run into a conflict with the pre-configured `--readable--secondaries` setting. You can first edit the `--readable--secondaries` before scaling down the replicas. ## Configure Server options -You can configure server configuration settings for Azure Arc-enabled SQL managed instance after creation time. This article describes how to configure settings like enabling or disabling mssql Agent, enable specific trace flags for troubleshooting scenarios. +You can configure certain server configuration settings for Azure Arc-enabled SQL managed instance either during or after creation time. This article describes how to configure settings like enabling "Ad Hoc Distributed Queries" or "backup compression default" etc. ++Currently the following server options can be configured: +- Ad Hoc Distributed Queries +- Default Trace Enabled +- Database Mail XPs +- Backup compression default +- Cost threshold for parallelism +- Optimize for ad hoc workloads ++> [!Note] +> - Currently these options can only be specified via YAML file, either during Arc SQL MI creation or post deployment. +> - The Arc SQL MI image tag has to be at least version v1.19.x or above ++Add the following to your YAML file during deployment to configure any of these options. ++```yml +spec: + serverConfigurations: + - name: "Ad Hoc Distributed Queries" + value: 1 + - name: "Default Trace Enabled" + value: 0 + - name: "Database Mail XPs" + value: 1 + - name: "backup compression default" + value: 1 + - name: "cost threshold for parallelism" + value: 50 + - name: "optimize for ad hoc workloads" + value: 1 +``` ++If you already have an existing Arc SQL MI, you can run `kubectl edit sqlmi <sqlminame> -n <namespace>` and add the above options into the spec. +++Sample Arc SQL MI YAML file: ++```yml +apiVersion: sql.arcdata.microsoft.com/v13 +kind: SqlManagedInstance +metadata: + name: sql1 + annotations: + exampleannotation1: exampleannotationvalue1 + exampleannotation2: exampleannotationvalue2 + labels: + examplelabel1: examplelabelvalue1 + examplelabel2: examplelabelvalue2 +spec: + dev: true #options: [true, false] + licenseType: LicenseIncluded #options: [LicenseIncluded, BasePrice]. BasePrice is used for Azure Hybrid Benefits. 
+ tier: GeneralPurpose #options: [GeneralPurpose, BusinessCritical] + serverConfigurations: + - name: "Ad Hoc Distributed Queries" + value: 1 + - name: "Default Trace Enabled" + value: 0 + - name: "Database Mail XPs" + value: 1 + - name: "backup compression default" + value: 1 + - name: "cost threshold for parallelism" + value: 50 + - name: "optimize for ad hoc workloads" + value: 1 + security: + adminLoginSecret: sql1-login-secret + scheduling: + default: + resources: + limits: + cpu: "2" + memory: 4Gi + requests: + cpu: "1" + memory: 2Gi + + primary: + type: LoadBalancer + storage: + backups: + volumes: + - className: azurefile # Backup volumes require a ReadWriteMany (RWX) capable storage class + size: 5Gi + data: + volumes: + - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment + size: 5Gi + datalogs: + volumes: + - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment + size: 5Gi + logs: + volumes: + - className: default # Use default configured storage class or modify storage class based on your Kubernetes environment + size: 5Gi +``` -### Enable SQL Server agent +## Enable SQL Server agent -SQL Server agent is disabled by default. It can be enabled by running the following command: +SQL Server agent is disabled during a default deployment of Arc SQL MI. It can be enabled by running the following command: ```azurecli az sql mi-arc update -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --agent-enabled true ```+ As an example:+ ```azurecli az sql mi-arc update -n sqlinstance1 --k8s-namespace arc --use-k8s --agent-enabled true ``` -### Enable Trace flags +## Enable trace flags Trace flags can be enabled as follows:+ ```azurecli az sql mi-arc update -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --trace-flags "3614,1234" ``` |
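To confirm that settings like the `serverConfigurations` block shown earlier were applied to the custom resource, you can read the spec back with kubectl. This is a quick sketch using the sample instance name `sql1` from the YAML above; substitute your own instance name and namespace:

```console
kubectl get sqlmi sql1 --namespace <namespace> -o jsonpath='{.spec.serverConfigurations}'
```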
azure-arc | Configure Transparent Data Encryption Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-sql-managed-instance.md | -Enabling service-managed transparent data encryption will require the managed instance to use a service-managed database master key as well as the service-managed server certificate. These credentials will be automatically created when service-managed transparent data encryption is enabled. For more info on TDE, please refer to [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption). -+For more info on TDE, please refer to [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption). Turning on the TDE feature does the following: Before you proceed with this article, you must have an Azure Arc-enabled SQL Man ## Limitations -The following limitations must be considered when deploying Service-Managed TDE: +The following limitations apply when you enable automatic TDE: - Only General Purpose Tier is supported.-- Failover Groups are not supported.+- Failover groups aren't supported. ## Turn on transparent data encryption on the managed instance-### Prerequisites -Turning on TDE on the managed instance will result in the following operations taking place: +When TDE is enabled on Arc-enabled SQL Managed Instance, the data service automatically does the following tasks: ++1. Adds the service-managed database master key in the `master` database. +2. Adds the service-managed certificate protector. +3. Adds the associated Database Encryption Keys (DEK) on all databases on the managed instance. +4. Enables encryption on all databases on the managed instance. ++You can set Azure Arc-enabled SQL Managed Instance TDE in one of two modes: ++- Service-managed +- Customer-managed ++In service-managed mode, transparent data encryption requires the managed instance to use a service-managed database master key as well as the service-managed server certificate. These credentials are automatically created when service-managed transparent data encryption is enabled. ++In customer-managed mode, transparent data encryption uses a service-managed database master key and uses keys you provide for the server certificate. To configure customer-managed mode: -1. Adding the service-managed database master key in the `master` database. -2. Adding the service-managed certificate protector. -3. Adding the associated Database Encryption Keys (DEK) on all databases on the managed instance. -4. Enabling encryption on all databases on the managed instance. +1. Create a certificate. +1. Store the certificate as a secret in the same Kubernetes namespace as the instance. ++> [!NOTE] +> If you need to change from one mode to the other, you must disable TDE from the current mode before you apply the new mode. For details, see [Turn off transparent data encryption on the managed instance](#turn-off-transparent-data-encryption-on-the-managed-instance). +> +> For example, if the service is encrypted using service-managed mode, go to `Disabled` mode before you enable customer-managed mode. +> +> ```console +> kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "Disabled" } } } }' +> ``` +++To proceed, select the mode you want to use. 
### [Service-managed mode](#tab/service-managed-mode) -Run kubectl patch to enable service-managed TDE ++To enable TDE in service-managed mode, run kubectl patch to enable service-managed TDE ```console kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "ServiceManaged" } } } }' kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ Example: ```console-kubectl patch sqlmi contososqlmi --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "ServiceManaged" } } } }' +kubectl patch sqlmi sqlmi-tde --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "ServiceManaged" } } } }' ```++### [Customer-managed mode](#tab/customer-managed-mode) ++To enable TDE in customer managed mode: ++1. Create a certificate. ++ ```console + openssl req -x509 -newkey rsa:2048 -nodes -keyout <key-file> -days 365 -out <cert-file> + ``` ++1. Create a secret for the certificate. ++ > [!IMPORTANT] + > Store the secret in the same namespace as the managed instance ++ ```console + kubectl create secret generic <tde-secret-name> --from-literal=privatekey.pem="$(cat <key-file>)" --from-literal=certificate.pem="$(cat <cert-file>) --namespace <namespace>" + ``` ++1. Run `kubectl patch ...` to enable customer-managed TDE ++ ```console + kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "CustomerManaged", "protectorSecret": "<tde-secret-name>" } } } }' + ``` ++ Example: ++ ```console + kubectl patch sqlmi sqlmi-tde --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "CustomerManaged", "protectorSecret": "sqlmi-tde-protector-cert-secret" } } } }' + ``` + ## Turn off transparent data encryption on the managed instance -Turning off TDE on the managed instance will result in the following operations taking place: +When TDE is disabled on Arc-enabled SQL Managed Instance, the data service automatically does the following tasks: -1. Disabling encryption on all databases on the managed instance. -2. Dropping the associated DEKs on all databases on the managed instance. -3. Dropping the service-managed certificate protector. -4. Dropping the service-managed database master key in the `master` database. +1. Disables encryption on all databases on the managed instance. +2. Drops the associated DEKs on all databases on the managed instance. +3. Drops the service-managed certificate protector. +4. Drops the service-managed database master key in the `master` database. ### [Service-managed mode](#tab/service-managed-mode) -Run kubectl patch to disable service-managed TDE +Run kubectl patch to disable service-managed TDE. ++```console +kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "Disabled" } } } }' +``` ++Example: +```console +kubectl patch sqlmi sqlmi-tde --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "Disabled" } } } }' +``` ++### [Customer-managed mode](#tab/customer-managed-mode) ++Run kubectl patch to disable customer-managed TDE. ++When you disable TDE in customer-managed mode, you need to set `"protectorSecret" : null`. 
```console-kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": null } } } }' +kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "Disabled" , "protectorSecret": null } } } }' ``` + Example:+ ```console-kubectl patch sqlmi contososqlmi --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": null } } } }' +kubectl patch sqlmi sqlmi-tde --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "Disabled" , "protectorSecret": null } } } }' ```+ ## Back up a transparent data encryption credential |
azure-arc | Deploy Telemetry Router | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-telemetry-router.md | apiVersion: arcdata.microsoft.com/v1beta4 namespace: <namespace> spec: credentials:- certificates: - - certificateName: arcdata-msft-elasticsearch-exporter-internal - - certificateName: cluster-ca-certificate exporters:- elasticsearch: - - caCertificateName: cluster-ca-certificate - certificateName: arcdata-msft-elasticsearch-exporter-internal - endpoint: https://logsdb-svc:9200 - index: logstash-otel - name: arcdata/msft/internal pipelines:- logs: - exporters: - - elasticsearch/arcdata/msft/internal - ``` -For the public preview, the pipeline and exporter have a default pre-configuration to Microsoft's deployment of Elasticsearch. This default deployment gives you an example of how the parameters for credentials, exporters, and pipelines are set up within the spec. You can follow this example to export to your own monitoring solutions. See [adding exporters and pipelines](adding-exporters-and-pipelines.md) for more examples. This example deployment will be removed at the conclusion of the public preview. +At the time of creation, no pipeline or exporters are set up. You can [setup your own pipelines and exporters](adding-exporters-and-pipelines.md) to route metrics and logs data to your own instances of Kafka and Elasticsearch. -After the TelemetryRouter is deployed, both TelemetryCollector custom resources should be in a *Ready* state. These resources are system managed and editing them isn't supported. If you look at the pods, you should see the following types of pods: +After the TelemetryRouter is deployed, an instance of Kafka (arc-router-kafka) and a single instance of TelemetryCollector (collector-inbound) should be deployed and in a ready state. These resources are system managed and editing them isn't supported. The following pods will be deployed as a result: -- Two telemetry collector pods - `arctc-collector-inbound-0` and `arctc-collector-outbound-0`+- An inbound collector pod - `arctc-collector-inbound-0` - A kakfa broker pod - `arck-arc-router-kafka-broker-0` - A kakfa controller pod - `arck-arc-router-kafka-controller-0` ++> [!NOTE] +> An outbound collector pod isn't created until at least one pipeline has been added to the telemetry router. +> +> After you create the first pipeline, an additional TelemetryCollector resource (collector-outbound) and pod `arctc-collector-outbound-0` are deployed. + ```bash kubectl get pods -n <namespace> arc-webhook-job-facc4-z7dd7 0/1 Completed 0 15h arck-arc-router-kafka-broker-0 2/2 Running 0 15h arck-arc-router-kafka-controller-0 2/2 Running 0 15h arctc-collector-inbound-0 2/2 Running 0 15h-arctc-collector-outbound-0 2/2 Running 0 15h bootstrapper-8d5bff6f7-7w88j 1/1 Running 0 15h control-vpfr9 2/2 Running 0 15h controldb-0 2/2 Running 0 15h logsui-fwrh9 3/3 Running 0 15h metricsdb-0 2/2 Running 0 15h metricsdc-bc4df 2/2 Running 0 15h metricsdc-fm7jh 2/2 Running 0 15h-metricsdc-qgl26 2/2 Running 0 15h -metricsdc-sndjv 2/2 Running 0 15h -metricsdc-xh78q 2/2 Running 0 15h metricsui-qqgbv 2/2 Running 0 15h ``` |
azure-arc | Point In Time Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md | You can restore a database to a point-in-time within a pre-configured retention You can check the retention setting for an Azure Arc-enabled SQL managed instance as follows: For **Direct** connected mode:-``` ++```azurecli az sql mi-arc show --name <SQL instance name> --resource-group <resource-group> #Example az sql mi-arc show --name sqlmi --resource-group myresourcegroup ```+ For **Indirect** connected mode:-``` ++```azurecli az sql mi-arc show --name <SQL instance name> --k8s-namespace <SQL MI namespace> --use-k8s #Example az sql mi-arc show --name sqlmi --k8s-namespace arc --use-k8s The Retention period for an Azure Arc-enabled SQL managed instance can be reconf -The ```--retention-period``` can be changed for a SQL Managed Instance-Azure Arc as follows. The below command applies to both ```direct``` and ```indirect``` connected modes. +The `--retention-period` can be changed for a SQL Managed Instance-Azure Arc as follows. The below command applies to both `direct` and `indirect` connected modes. ```azurecli az sql mi-arc update --name <SQLMI name> --k8s-namespace <namespace> --use-k8s ``` For example:+ ```azurecli az sql mi-arc update --name sqlmi --k8s-namespace arc --use-k8s --retention-days 10 ``` |
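The retention window above determines how far back a point-in-time restore can go. As a hedged sketch of the restore step itself (this assumes the `az sql midb-arc restore` command and these parameter names; the database names and timestamp are placeholders, so verify against the CLI help for your extension version):

```azurecli
az sql midb-arc restore \
    --managed-instance sqlmi \
    --name <source-database> \
    --dest-name <restored-database> \
    --time "2023-05-09T01:35:00Z" \
    --k8s-namespace arc \
    --use-k8s
```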
azure-arc | Privacy Data Collection And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/privacy-data-collection-and-reporting.md | Operational data is collected for all database instances and for the Azure Arc-e - Metrics – Performance and capacity related metrics, which are collected to an Influx DB provided as part of Azure Arc-enabled data services. You can view these metrics in the provided Grafana dashboard. -- Logs – Records emitted by all components including failure, warning, and informational events are collected to an Elasticsearch database provided as part of Azure Arc-enabled data services. You can view the logs in the provided Kibana dashboard. +- Logs – Records emitted by all components including failure, warning, and informational events are collected to an OpenSearch database provided as part of Azure Arc-enabled data services. You can view the logs in the provided Kibana dashboard. Prior to the May 2023 release, the log database used Elasticsearch. Thereafter, it uses OpenSearch. -The operational data stored locally requires built-in administrative privileges to view it in Grafana/Kibana. +The operational data stored locally requires built-in administrative privileges to view it in Grafana/Kibana. The operational data does not leave your environment unless you choose to export/upload (indirect connected mode) or automatically send (directly connected mode) the data to Azure Monitor/Log Analytics. The data goes into a Log Analytics workspace, which you control. |
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | +## May 9, 2023 ++### Image tag ++`v1.19.0_2023-05-09` ++For complete release version information, review [Version log](version-log.md#may-9-2023). ++New for this release: ++### Release notes ++- Arc data services + - OpenSearch replaces Elasticsearch for log database + - OpenSearch Dashboards replaces Kibana for logs interface + - There is a known issue with user settings migration to OpenSearch Dashboards for some versions of Elasticsearch, including the version used in Arc data services. + + > [!IMPORTANT] + > Before upgrade, save any Kibana configuration externally so that it can be re-created in OpenSearch Dashboards. ++ - Automatic upgrade is disabled for the Arc data services extension + - Error-handling in the `az` CLI is improved during data controller upgrade + - Fixed a bug to preserve the resource limits for Azure Arc Data Controller where the resource limits could get reset during an upgrade. ++- Azure Arc-enabled SQL Managed Instance + - General Purpose: Customer-managed TDE encryption keys (preview). For information, review [Enable transparent data encryption on Azure Arc-enabled SQL Managed Instance](configure-transparent-data-encryption-sql-managed-instance.md). + - Support for customer-managed keytab rotation. For information, review [Rotate Azure Arc-enabled SQL Managed Instance customer-managed keytab](rotate-customer-managed-keytab.md). + - Support for `sp_configure` to manage configuration. For information, review [Configure Azure Arc-enabled SQL managed instance](configure-managed-instance.md). + - Service-managed credential rotation. For information, review [How to rotate service-managed credentials in a managed instance](rotate-sql-managed-instance-credentials.md#how-to-rotate-service-managed-credentials-in-a-managed-instance). + ## April 12, 2023 ### Image tag Data controller sends controller logs to the Log Analytics Workspace if logs upl Removed the `--ad-connector-namespace` parameter from `az sql mi-arc create` command because for now the AD connector resource must always be in the same namespace as the SQL Managed Instance resource. -Updated ElasticSearch to latest version `7.9.1-36fefbab37-205465`. Also Grafana, Kibana, Telegraf, Fluent Bit, Go. +Updated Elasticsearch to latest version `7.9.1-36fefbab37-205465`. Also Grafana, Kibana, Telegraf, Fluent Bit, Go. All container image sizes were reduced by approximately 40% on average. Use the following tools: - Currently, additional basic authentication users can be added to Grafana using the Grafana administrative experience. Customizing Grafana by modifying the Grafana .ini files is not supported. -- Currently, modifying the configuration of ElasticSearch and Kibana is not supported beyond what is available through the Kibana administrative experience. Only basic authentication with a single user is supported.+- Currently, modifying the configuration of Elasticsearch and Kibana is not supported beyond what is available through the Kibana administrative experience. Only basic authentication with a single user is supported. - Custom metrics in Azure portal - preview. |
azure-arc | Rotate Customer Managed Keytab | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-customer-managed-keytab.md | + + Title: Rotate customer-managed keytab +description: How to rotate a customer-managed keytab ++++++ Last updated : 05/05/2023+++# Rotate Azure Arc-enabled SQL Managed Instance customer-managed keytab ++This article describes how to rotate customer-managed keytabs for Azure Arc-enabled SQL Managed Instance. These keytabs are used to enable Active Directory logins for the managed instance. ++## Prerequisites: ++Before you proceed with this article, you must have an active directory connector in customer-managed keytab mode and an Azure Arc-enabled SQL Managed Instance created. ++- [Deploy a customer-managed keytab active directory connector](./deploy-customer-managed-keytab-active-directory-connector.md) +- [Deploy and connect an Azure Arc-enabled SQL Managed Instance](./deploy-active-directory-sql-managed-instance.md) ++## How to rotate customer-managed keytabs in a managed instance ++The following steps need to be followed to rotate the keytab: ++1. Get `kvno` value for the current generation of credentials for the SQL MI Active Directory account. +1. Create a new keytab file with entries for the current generation of credentials. Specifically, the `kvno` value should match from step (1.) above. +1. Update the new keytab file with new entries for the new credentials for the SQL MI Active Directory account. +1. Create a kubernetes secret holding the new keytab file contents in the same namespace as the SQL MI. +1. Edit the SQL MI spec to point the Active Directory keytab secret setting to this new secret. +1. Change the password in the Active Directory domain. ++We have provided the following PowerShell and bash scripts that will take care of steps 1-5 for you: +- [`rotate-sqlmi-keytab.sh`](https://github.com/microsoft/azure_arc/blob/main/arc_data_services/deploy/scripts/rotate-sql-keytab.sh) - This bash script uses `ktutil` or `adutil` (if the `--use-adutil` flag is specified) to generate the new keytab for you. +- [`rotate-sqlmi-keytab.ps1`](https://github.com/microsoft/azure_arc/blob/main/arc_data_services/deploy/scripts/rotate-sql-keytab.ps1) - This PowerShell script uses `ktpass.exe` to generate the new keytab for you. 
++Executing the above script would result in the following keytab file for the user `arcsqlmi@CONTOSO.COM`, secret `sqlmi-keytab-secret-kvno-2-3` and namespace `test`: ++```text +KVNO Timestamp Principal +- - + 2 02/16/2023 17:12:05 arcsqlmiuser@CONTOSO.COM (aes256-cts-hmac-sha1-96) + 2 02/16/2023 17:12:05 arcsqlmiuser@CONTOSO.COM (arcfour-hmac) + 2 02/16/2023 17:12:05 MSSQLSvc/arcsqlmi.contoso.com@CONTOSO.COM (aes256-cts-hmac-sha1-96) + 2 02/16/2023 17:12:05 MSSQLSvc/arcsqlmi.contoso.com@CONTOSO.COM (arcfour-hmac) + 2 02/16/2023 17:12:05 MSSQLSvc/arcsqlmi.contoso.com:31433@CONTOSO.COM (aes256-cts-hmac-sha1-96) + 2 02/16/2023 17:12:05 MSSQLSvc/arcsqlmi.contoso.com:31433@CONTOSO.COM (arcfour-hmac) + 3 02/16/2023 17:13:41 arcsqlmiuser@CONTOSO.COM (aes256-cts-hmac-sha1-96) + 3 02/16/2023 17:13:41 arcsqlmiuser@CONTOSO.COM (arcfour-hmac) + 3 02/16/2023 17:13:41 MSSQLSvc/arcsqlmi.contoso.com@CONTOSO.COM (aes256-cts-hmac-sha1-96) + 3 02/16/2023 17:13:41 MSSQLSvc/arcsqlmi.contoso.com@CONTOSO.COM (arcfour-hmac) + 3 02/16/2023 17:13:41 MSSQLSvc/arcsqlmi.contoso.com:31433@CONTOSO.COM (aes256-cts-hmac-sha1-96) + 3 02/16/2023 17:13:41 MSSQLSvc/arcsqlmi.contoso.com:31433@CONTOSO.COM (arcfour-hmac) +``` ++And the following updated-secret.yaml spec: +```yaml +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: sqlmi-keytab-secret-kvno-2-3 + namespace: test +data: + keytab: + <keytab-contents> +``` ++Finally, change the password for `arcsqlmi` user account in the domain controller for the Active Directory domain `contoso.com`: ++1. Open **Server Manager** on the domain controller for the Active Directory domain `contoso.com`. You can either search for *Server Manager* or open it through the Start menu. +1. Go to **Tools** > **Active Directory Users and Computers** ++ :::image type="content" source="media/rotate-customer-managed-keytab/active-directory-users-and-computers.png" alt-text="Screenshot of Active Directory Users and Computers."::: ++1. Select the user that you want to change password for. Right-click to select the user. Select **Reset password**: ++ :::image type="content" source="media/rotate-customer-managed-keytab/reset-password.png" alt-text="Screenshot of the control to reset the password for an Active Directory user account."::: ++1. Enter new password and select `OK`. ++### Troubleshooting errors after rotation ++In case there are errors when trying to use Active Directory Authentication after completing keytab rotation, the following files in the `arc-sqlmi` container in the SQL MI pod are a good place to start investigating the root cause: +- `security.log` file located at `/var/opt/mssql/log` - This log file has logs for SQL's interactions with the Active Directory domain. +- `errorlog` file located at `/var/opt/mssql/log` - This log file contains logs from the SQL Server running on the container. +- `mssql.keytab` file located at `/var/run/secrets/managed/keytabs/mssql` - Verify that this keytab file contains the newly updated entries and matches the keytab file created by using the scripts provided above. The keytab file can be read using the `klist` command i.e. `klist -k mssql.keytab -e` ++Additionally, after getting the kerberos Ticket-Granting Ticket (TGT) by using `kinit` command, verify the `kvno` of the SQL user matches the highest `kvno` in the `mssql.keytab` file in the `arc-sqlmi` container. For example, for `arcsqlmi@CONTOSO.COM` user: ++- Get the kerberos TGT from the Active Directory domain by running `kinit arcsqlmi@CONTOSO.COM`. 
This prompts for the password of the `arcsqlmi` user. +- Once this succeeds, the `kvno` can be queried by running `kvno arcsqlmi@CONTOSO.COM`. ++You can also enable debug logging for the `kinit` command by running the following: `KRB5_TRACE=/dev/stdout kinit -V arcsqlmi@CONTOSO.COM`. This increases the verbosity and outputs the logs to stdout as the command is being executed. ++## Next steps ++- [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards) +- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md) |
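To run the `klist` check from the troubleshooting notes above without opening a shell in the container, something like the following works. The pod name `sqlmi-0` is hypothetical, while the container name, namespace, and keytab path come from the entry above.

```bash
# Hypothetical pod name (sqlmi-0); substitute your SQL MI pod and namespace.
# Lists the keytab entries and their kvno values from the mounted keytab.
kubectl exec -n test sqlmi-0 -c arc-sqlmi -- \
  klist -k /var/run/secrets/managed/keytabs/mssql/mssql.keytab -e
```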
azure-arc | Rotate Sql Managed Instance Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-sql-managed-instance-credentials.md | -This article describes how to rotate service-managed credentials for Azure Arc-enabled SQL Managed Instance. Arc data services generates various service-managed credentials like certificates and SQL logins used for Monitoring, Backup/Restore, High Availability etc. These credentials are considered custom resource credentials managed by Azure Arc data services. +This article describes how to rotate service-managed credentials for Azure Arc-enabled SQL Managed Instance. Arc data services generate various service-managed credentials like certificates and SQL logins used for Monitoring, Backup/Restore, High Availability etc. These credentials are considered custom resource credentials managed by Azure Arc data services. Service-managed credential rotation is a user-triggered operation that you initiate during a security issue or when periodic rotation is required for compliance. Service-managed credential rotation is a user-triggered operation that you initi Consider the following limitations when you rotate a managed instance service-managed credentials: - SQL Server failover groups aren't supported.-- Automatically pre-scheduled rotation isn't supported.+- Automatically prescheduled rotation isn't supported. - The service-managed DPAPI symmetric keys, keytab, active directory accounts, and service-managed TDE credentials aren't included in this credential rotation.-- SQL Managed Instance Business Critical tier isn't supported.-- This feature should not be used in production currently. There is a known limitation where _rollback_ cannot be triggered unless credential rotation is completed successfully and the SQLMI is in "Ready" state. ## General Purpose tier -During a SQL Managed Instance service-managed credential rotation, the managed instance Kubernetes pod is terminated and reprovisioned when new credentials are generated. This process causes a short amount of downtime as the new managed instance pod is created. To handle the interruption, build resiliency into your application such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on how to architect resiliency and [retry guidance for Azure Services](/azure/architecture/best-practices/retry-service-specific#sql-database-using-adonet). +During General Purpose SQL Managed Instance service-managed credential rotation, the managed instance Kubernetes pod is terminated and reprovisioned with rotated credentials. This process causes a short amount of downtime as the new managed instance pod is created. To handle the interruption, build resiliency into your application such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on how to architect resiliency and [retry guidance for Azure Services](/azure/architecture/best-practices/retry-service-specific#sql-database-using-adonet). 
++## Business Critical tier ++During Business Critical SQL Managed Instance service-managed credential rotation with more than one replica: ++- The secondary replica pods are terminated and reprovisioned with the rotated service-managed credentials +- After the replicas are reprovisioned, the primary will fail over to a reprovisioned replica +- The previous primary pod is terminated and reprovisioned with the rotated service-managed credentials, and becomes a secondary ++There's a brief moment of downtime when the failover occurs. ## Prerequisites: Before you proceed with this article, you must have an Azure Arc-enabled SQL Man Service-managed credentials are associated with a generation within the managed instance. To rotate all service-managed credentials for a managed instance, the generation must be increased by 1. -Run the following commands to get current service-managed credentials generation from spec and generate the new generation of service-managed credentials. This action triggers a service-managed credential rotation. +Run the following commands to get current service-managed credentials generation from spec and generate the new generation of service-managed credentials. This action triggers service-managed credential rotation. ```console-rotateCredentialGeneration=$(($(kubectl get sqlmi <sqlmi-name> -o jsonpath='{.spec.update.managedCredentialsGeneration}' -n <namespace>) + 1))  +rotateCredentialGeneration=$(($(kubectl get sqlmi <sqlmi-name> -o jsonpath='{.spec.update.managedCredentialsGeneration}' -n <namespace>) + 1)) ``` ```console-kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "update": { "managedCredentialsGeneration": '$rotateCredentialGeneration'} } }'  +kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "update": { "managedCredentialsGeneration": '$rotateCredentialGeneration'} } }' ``` -The `managedCredentialsGeneration` identifies the target generation for the service-managed credentials. The rest of the features like configuration and the kubernetes topology remain the same. +The `managedCredentialsGeneration` identifies the target generation for service-managed credentials. The rest of the features like configuration and the kubernetes topology remain the same. ## How to roll back service-managed credentials in a managed instance > [!NOTE]-> Rollback is required when credential rotation failed for any reasons. Rollback to previous credentials generation is supported only once to n-1 where n is current generation. +> Rollback is required when credential rotation fails. Rollback to previous credentials generation is supported only once to n-1 where n is the current generation. +> +> If rollback is triggered while credential rotation is in progress and all the replicas have not been reprovisioned then the rollback __may__ take about 30 minutes to complete for the managed instance to be in a **Ready** state. 
Run the following two commands to get the current service-managed credentials generation from the spec and roll back to the previous generation of service-managed credentials: ```console-rotateCredentialGeneration=$(($(kubectl get sqlmi <sqlmi-name> -o jsonpath='{.spec.update.managedCredentialsGeneration}' -n <namespace>) - 1))  +rotateCredentialGeneration=$(($(kubectl get sqlmi <sqlmi-name> -o jsonpath='{.spec.update.managedCredentialsGeneration}' -n <namespace>) - 1)) ``` ```console-kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "update": { "managedCredentialsGeneration": '$rotateCredentialGeneration'} } }'  +kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "update": { "managedCredentialsGeneration": '$rotateCredentialGeneration'} } }' ``` Triggering rollback is the same as triggering a rotation of service-managed credentials, except that the target generation is the previous generation and it doesn't generate a new generation of credentials. Triggering rollback is the same as triggering a rotation of service-managed cred ## Next steps - [View the SQL managed instance dashboards](azure-data-studio-dashboards.md#view-the-sql-managed-instance-dashboards)-- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md)+- [View SQL Managed Instance in the Azure portal](view-arc-data-services-inventory-in-azure-portal.md) |
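While a rotation or rollback is in progress, it can help to poll the managed instance state and wait for **Ready** before making further changes. This is a minimal sketch that assumes the state is surfaced at `.status.state` on the `sqlmi` resource.

```bash
# Print the current state of the managed instance (for example, "Ready").
kubectl get sqlmi <sqlmi-name> -n <namespace> -o jsonpath='{.status.state}'
```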
azure-arc | Storage Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/storage-configuration.md | Some services in Azure Arc for data services depend upon being configured to use |**Service**|**Persistent Volume Claims**| |||-|**ElasticSearch**|`<namespace>/logs-logsdb-0`, `<namespace>/data-logsdb-0`| +|**OpenSearch**|`<namespace>/logs-logsdb-0`, `<namespace>/data-logsdb-0`| |**InfluxDB**|`<namespace>/logs-metricsdb-0`, `<namespace>/data-metricsdb-0`| |**Controller SQL instance**|`<namespace>/logs-controldb`, `<namespace>/data-controldb`| |**Controller API service**|`<namespace>/data-controller`| |
azure-arc | Troubleshooting Get Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshooting-get-logs.md | The following folder hierarchy is an example. It's organized by pod name, then c │ │ └───controlwatchdog │ │ └───controlwatchdog │ ├───logsdb-0-│ │ └───elasticsearch +│ │ └───opensearch │ │ ├───agent-│ │ ├───elasticsearch +│ │ ├───opensearch │ │ ├───provisioner │ │ └───supervisor │ │ └───log |
azure-arc | Version Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md | +## May 9, 2023 ++|Component|Value| +|--|--| +|Container images tag |`v1.19.0_2023-05-09`| +|**CRD names and version:**| | +|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1| +|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| +|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| +|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| +|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| +|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| +|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| +|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| +|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| +|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| +|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| +|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| +|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| +|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| +|Azure Resource Manager (ARM) API version|2023-01-15-preview| +|`arcdata` Azure CLI extension version|1.5.0 ([Download](https://aka.ms/az-cli-arcdata-ext))| +|Arc-enabled Kubernetes helm chart extension version|1.19.0| +|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| + ## April 11, 2023 |Component|Value| |
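To confirm which API versions of a given CRD are actually installed on a cluster after upgrading, you can query the CRD itself; for example, for the SQL Managed Instance CRD listed above:

```bash
# Lists the served API versions of the SQL Managed Instance CRD,
# which you can compare against the version log entry above.
kubectl get crd sqlmanagedinstances.sql.arcdata.microsoft.com -o jsonpath='{.spec.versions[*].name}'
```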
azure-arc | Agent Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md | Title: Archive for What's new with Azure Arc-enabled servers agent description: Release notes for Azure Connected Machine agent versions older than six months Previously updated : 03/10/2023 Last updated : 05/08/2023 The Azure Connected Machine agent receives improvements on an ongoing basis. Thi - Known issues - Bug fixes +## Version 1.26 - January 2023 ++Download for [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) ++> [!NOTE] +> Version 1.26 is only available for Linux operating systems. ++### Fixed ++- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Microsoft Defender for Endpoint extension (MDE.Linux) on Linux to improve installation reliability ++## Version 1.25 - January 2023 ++Download for [Windows](https://download.microsoft.com/download/2/#installing-a-specific-version-of-the-agent) ++### New features ++- Red Hat Enterprise Linux (RHEL) 9 is now a [supported operating system](prerequisites.md#supported-operating-systems) ++### Fixed ++- Reliability improvements in the machine (guest) configuration policy engine +- Improved error messages in the Windows MSI installer +- Additional improvements to the detection logic for machines running on Azure Stack HCI ## Version 1.24 - November 2022 Download for [Windows](https://download.microsoft.com/download/f/9/d/f9d60cc9-7c2a-4077-b890-f6a54cc55775/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) |
azure-arc | Agent Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md | Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 04/06/2023 Last updated : 05/08/2023 The Azure Connected Machine agent receives improvements on an ongoing basis. To This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md). +## Version 1.30 - May 2023 ++Download for [Windows](https://download.microsoft.com/download/7/7/9/779eae73-a12b-4170-8c5e-abec71bc14cf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) ++### Fixed ++- Resolved an issue that could cause the agent to go offline after rotating its connectivity keys. +- `azcmagent show` no longer shows an incomplete resource ID or Azure portal page URL when the agent isn't configured. + ## Version 1.29 - April 2023 Download for [Windows](https://download.microsoft.com/download/2/7/0/27063536-949a-4b16-a29a-3d1dcb29cff7/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) Download for [Windows](https://download.microsoft.com/download/8/4/5/845d5e04-bb - Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Microsoft Sentinel DNS extension to improve log collection reliability - Tenant IDs are better validated when connecting the server -## Version 1.26 - January 2023 --Download for [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) --> [!NOTE] -> Version 1.26 is only available for Linux operating systems. --### Fixed --- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Microsoft Defender for Endpoint extension (MDE.Linux) on Linux to improve installation reliability--## Version 1.25 - January 2023 --Download for [Windows](https://download.microsoft.com/download/2/#installing-a-specific-version-of-the-agent) --### New features --- Red Hat Enterprise Linux (RHEL) 9 is now a [supported operating system](prerequisites.md#supported-operating-systems)--### Fixed --- Reliability improvements in the machine (guest) configuration policy engine-- Improved error messages in the Windows MSI installer-- Additional improvements to the detection logic for machines running on Azure Stack HCI- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods. |
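A quick local check after updating to version 1.30 is to run the agent's status command on the connected machine and confirm the resource ID and portal URL now display as expected:

```bash
# Shows agent status details, including the Azure resource ID and portal page URL
# mentioned in the 1.30 fix above.
azcmagent show
```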
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | If two agents use the same configuration, you will encounter inconsistent behavi Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. The Azure Connected Machine agent does not run on x86 (32-bit) or ARM-based architectures. -* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022 +* Windows Server 2012 R2, 2016, 2019, and 2022 * Both Desktop and Server Core experiences are supported * Azure Editions are supported on Azure Stack HCI * Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance)) Microsoft doesn't recommend running Azure Arc on short-lived (ephemeral) servers Windows operating systems: * .NET Framework 4.6 or later. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).-* Windows PowerShell 4.0 or later (already included with Windows Server 2012 R2 and later). For Windows Server 2008 R2 SP1, [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616). +* Windows PowerShell 4.0 or later (already included with Windows Server 2012 R2 and later). Linux operating systems: |
azure-cache-for-redis | Cache How To Premium Clustering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md | Clustering doesn't increase the number of connections available for a clustered In Azure, Redis cluster is offered as a primary/replica model where each shard has a primary/replica pair with replication, where the replication is managed by Azure Cache for Redis service. +## Azure Cache for Redis now supports up to 30 shards (preview) ++Azure Cache for Redis now supports up to 30 shards for clustered caches. Clustered caches configured with two replicas can support up to 20 shards, and clustered caches configured with three replicas can support up to 15 shards. ++**Limitations** +* Shard limit for caches with Redis version 4 is 10. +* Shard limit for [caches affected by cloud service retirement](./cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic) is 10. +* Maintenance will take longer as each node takes roughly 20 minutes to update. Other maintenance operations will be blocked while your cache is under maintenance. + ## Set up clustering Clustering is enabled **New Azure Cache for Redis** on the left during cache creation. |
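As an illustration only, a Premium cache with clustering and a higher shard count can be created from the Azure CLI roughly as follows. The names are placeholders, and the shard count you can actually use depends on the replica count and the limitations listed above.

```bash
# Sketch: create a Premium cache with clustering enabled and 15 shards.
az redis create \
  --name mycache \
  --resource-group myresourcegroup \
  --location eastus \
  --sku Premium \
  --vm-size p1 \
  --shard-count 15
```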
azure-cache-for-redis | Cache Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md | Last updated 03/28/2023 # What's New in Azure Cache for Redis +## May 2023 ++### Support for up to 30 shards for clustered Azure Cache for Redis instances ++Azure Cache for Redis now supports clustered caches with up to 30 shards, which means your applications can store more data and scale better with your workloads. ++For more information, see [Configure clustering for Azure Cache for Redis instance](cache-how-to-premium-clustering.md#azure-cache-for-redis-now-supports-up-to-30-shards-preview). + ## March 2023 ### In-place scale up and scale out for the Enterprise tiers (preview) |
azure-functions | Create Resources Azure Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-resources-azure-powershell.md | Title: Create function app resources in Azure using PowerShell description: Azure PowerShell scripts that show you how to create the Azure resources required to host your functions code in Azure. Previously updated : 07/18/2022 Last updated : 05/02/2023 # Create function app resources in Azure using PowerShell This article contains the following examples: * [Create a function app in a Dedicated plan](#create-a-function-app-in-a-dedicated-plan) * [Create a function app with a named Storage connection](#create-a-function-app-with-a-named-storage-connection) * [Create a function app with an Azure Cosmos DB connection](#create-a-function-app-with-an-azure-cosmos-db-connection)-* [Create a function app with an Azure Cosmos DB connection](#create-a-function-app-with-an-azure-cosmos-db-connection) * [Create a function app with continuous deployment](#create-a-function-app-with-continuous-deployment) * [Create a serverless Python function app and mount file share](#create-a-serverless-python-function-app-and-mount-file-share) |
azure-functions | Durable Functions Node Model Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-node-model-upgrade.md | const response = yield context.df.callHttp( :::zone pivot="programming-language-typescript" -## Leverage New Types +## Leverage new types The `durable-functions` package now exposes new types that weren't previously exported! This allows you to more strongly type your functions and provide stronger type safety for your orchestrations, entities, and activities! This also improves intellisense for authoring these functions. |
azure-functions | Functions Bindings Error Pages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md | zone_pivot_groups: programming-languages-set-functions-lang-workers Handling errors in Azure Functions is important to help you avoid lost data, avoid missed events, and monitor the health of your application. It's also an important way to help you understand the retry behaviors of event-based triggers. -This article describes general strategies for error handling and the available retry strategies. +This article describes general strategies for error handling and the available retry strategies. > [!IMPORTANT] > We're removing retry policy support in the runtime for triggers other than Timer, Kafka, and Event Hubs after this feature becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs was removed in December 2022. For more information, see the [Retries](#retries) section. The occurrence of errors when you're processing data can be a problem for your f ## Retries -There are two kinds of retries available for your functions: +There are two kinds of retries available for your functions: * Built-in retry behaviors of individual trigger extensions * Retry policies provided by the Functions runtime The following table indicates which triggers support retries and where the retry behavior is configured. It also links to more information about errors that come from the underlying services. -| Trigger/binding | Retry source | Configuration | -| - | - | -- | +| Trigger/binding | Retry source | Configuration | +| - | - | -- | | Azure Cosmos DB | [Retry policies](#retry-policies) | Function-level | | Azure Blob Storage | [Binding extension](functions-bindings-storage-blob-trigger.md#poison-blobs) | [host.json](functions-bindings-storage-queue.md#host-json) |-| Azure Event Grid | [Binding extension](../event-grid/delivery-and-retry.md) | Event subscription | -| Azure Event Hubs | [Retry policies](#retry-policies) | Function-level | -| Azure Queue Storage | [Binding extension](functions-bindings-storage-queue-trigger.md#poison-messages) | [host.json](functions-bindings-storage-queue.md#host-json) | -| RabbitMQ | [Binding extension](functions-bindings-rabbitmq-trigger.md#dead-letter-queues) | [Dead letter queue](https://www.rabbitmq.com/dlx.html) | -| Azure Service Bus | [Binding extension](../service-bus-messaging/service-bus-dead-letter-queues.md) | [Dead letter queue](../service-bus-messaging/service-bus-dead-letter-queues.md#maximum-delivery-count) | +| Azure Event Grid | [Binding extension](../event-grid/delivery-and-retry.md) | Event subscription | +| Azure Event Hubs | [Retry policies](#retry-policies) | Function-level | +| Azure Queue Storage | [Binding extension](functions-bindings-storage-queue-trigger.md#poison-messages) | [host.json](functions-bindings-storage-queue.md#host-json) | +| RabbitMQ | [Binding extension](functions-bindings-rabbitmq-trigger.md#dead-letter-queues) | [Dead letter queue](https://www.rabbitmq.com/dlx.html) | +| Azure Service Bus | [Binding extension](../service-bus-messaging/service-bus-dead-letter-queues.md) | [Dead letter queue](../service-bus-messaging/service-bus-dead-letter-queues.md#maximum-delivery-count) | |Timer | [Retry policies](#retry-policies) | Function-level | |Kafka | [Retry policies](#retry-policies) | Function-level | ### Retry policies -Starting with version 3.x of the Azure Functions runtime, you can define retry policies for 
Timer, Kafka, and Event Hubs triggers that are enforced by the Functions runtime. +Starting with version 3.x of the Azure Functions runtime, you can define retry policies for Timer, Kafka, and Event Hubs triggers that are enforced by the Functions runtime. -The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached. +The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached. -A retry policy is evaluated when a Timer, Kafka, or Event Hubs-triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry. +A retry policy is evaluated when a Timer, Kafka, or Event Hubs-triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry. > [!IMPORTANT] > Event Hubs checkpoints won't be written until the retry policy for the execution has finished. Because of this behavior, progress on the specific partition is paused until the current batch has finished. You can configure two retry strategies that are supported by policy: # [Fixed delay](#tab/fixed-delay) -A specified amount of time is allowed to elapse between each retry. +A specified amount of time is allowed to elapse between each retry. -# [Exponential backoff](#tab/exponential-backoff) +# [Exponential backoff](#tab/exponential-backoff) -The first retry waits for the minimum delay. On subsequent retries, time is added exponentially to the initial duration for each retry, until the maximum delay is reached. Exponential back-off adds some small randomization to delays to stagger retries in high-throughput scenarios. +The first retry waits for the minimum delay. On subsequent retries, time is added exponentially to the initial duration for each retry, until the maximum delay is reached. Exponential back-off adds some small randomization to delays to stagger retries in high-throughput scenarios. #### Max retry counts -You can configure the maximum number of times that a function execution is retried before eventual failure. The current retry count is stored in memory of the instance. +You can configure the maximum number of times that a function execution is retried before eventual failure. The current retry count is stored in memory of the instance. -It's possible for an instance to have a failure between retry attempts. When an instance fails during a retry policy, the retry count is lost. When there are instance failures, the Event Hubs trigger is able to resume processing and retry the batch on a new instance, with the retry count reset to zero. The timer trigger doesn't resume on a new instance. +It's possible for an instance to have a failure between retry attempts. When an instance fails during a retry policy, the retry count is lost. When there are instance failures, the Event Hubs trigger is able to resume processing and retry the batch on a new instance, with the retry count reset to zero. The timer trigger doesn't resume on a new instance. This behavior means that the maximum retry count is a best effort. In some rare cases, an execution could be retried more than the requested maximum number of times. For Timer triggers, the retries can be less than the maximum number requested. 
public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon ``` |Property | Description |-||-| +||-| |MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| |DelayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.| # [Isolated process](#tab/isolated-process/fixed-delay) -```csharp -[Function("EventHubsFunction")] -[FixedDelayRetry(5, "00:00:10")] -[EventHubOutput("dest", Connection = "EventHubConnectionAppSetting")] -public static string Run([EventHubTrigger("src", Connection = "EventHubConnectionAppSetting")] string[] input, - FunctionContext context) -{ -// ... -} - ``` +Function-level retries are supported with the following NuGet packages: ++- [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) >= 1.9.0 +- [Microsoft.Azure.Functions.Worker.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.EventHubs) >= 5.2.0 +- [Microsoft.Azure.Functions.Worker.Extensions.Kafka](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Kafka) >= 3.8.0 +- [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) >= 4.2.0 ++ |Property | Description |-||-| +||-| |MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| |DelayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.| + # [C# script](#tab/csharp-script/fixed-delay) Here's the retry policy in the *function.json* file: Here's the retry policy in the *function.json* file: ``` |*function.json* property | Description |-||-| +||-| |strategy|Use `fixedDelay`.| |maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| |delayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.| public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon ``` |Property | Description |-||-| +||-| |MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| |MinimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.|-|MaximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.| +|MaximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.| # [Isolated process](#tab/isolated-process/exponential-backoff) -Retry policies aren't yet supported when they're running in an isolated worker process. 
+Function-level retries are supported with the following NuGet packages: ++- [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) >= 1.9.0 +- [Microsoft.Azure.Functions.Worker.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.EventHubs) >= 5.2.0 +- [Microsoft.Azure.Functions.Worker.Extensions.Kafka](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Kafka) >= 3.8.0 +- [Microsoft.Azure.Functions.Worker.Extensions.Timer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Timer) >= 4.2.0 + # [C# script](#tab/csharp-script/exponential-backoff) Here's the retry policy in the *function.json* file: ``` |*function.json* property | Description |-||-| +||-| |strategy|Use `exponentialBackoff`.| |maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| |minimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.|-|maximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.| +|maximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.| ::: zone-end Here's the retry policy in the *function.json* file: Here's the retry policy in the *function.json* file: } ``` -# [Exponential backoff](#tab/exponential-backoff) +# [Exponential backoff](#tab/exponential-backoff) ```json { Here's the retry policy in the *function.json* file: |*function.json* property | Description |-||-| +||-| |strategy|Required. The retry strategy to use. Valid values are `fixedDelay` or `exponentialBackoff`.| |maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| |delayInterval|The delay that's used between retries when you're using a `fixedDelay` strategy. Specify it as a string with the format `HH:mm:ss`.| |minimumInterval|The minimum retry delay when you're using an `exponentialBackoff` strategy. Specify it as a string with the format `HH:mm:ss`.|-|maximumInterval|The maximum retry delay when you're using `exponentialBackoff` strategy. Specify it as a string with the format `HH:mm:ss`.| +|maximumInterval|The maximum retry delay when you're using `exponentialBackoff` strategy. Specify it as a string with the format `HH:mm:ss`.| ::: zone pivot="programming-language-python" Here's a Python sample that uses the retry context in a function: import logging def main(mytimer: azure.functions.TimerRequest, context: azure.functions.Context) -> None: logging.info(f'Current retry count: {context.retry_context.retry_count}')- + if context.retry_context.retry_count == context.retry_context.max_retry_count: logging.warn( f"Max retries of {context.retry_context.max_retry_count} for " f"function {context.function_name} has been reached")- + ``` ::: zone-end public void run( } ``` -# [Exponential backoff](#tab/exponential-backoff) +# [Exponential backoff](#tab/exponential-backoff) ```java @FunctionName("TimerTriggerJava1") public void run( ``` |Element | Description |-||-| +||-| |maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.| |delayInterval|The delay that's used between retries when you're using a `fixedDelay` strategy. Specify it as a string with the format `HH:mm:ss`.| |minimumInterval|The minimum retry delay when you're using an `exponentialBackoff` strategy. 
Specify it as a string with the format `HH:mm:ss`.| |
azure-functions | Functions Reference Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md | As a Java developer, if you're new to Azure Functions, consider first reading on | Getting started | Concepts| Scenarios/samples | | -- | -- | -- |-| <ul><li>[Java function using Visual Studio Code](./create-first-function-vs-code-java.md)</li><li>[Jav)</li></ul> | <ul><li>[Java samples with different triggers](/samples/azure-samples/azure-functions-samples-java/azure-functions-java/)</li><li>[Event Hubs trigger and Azure Cosmos DB output binding](/samples/azure-samples/java-functions-eventhub-cosmosdb/sample/)</li></ul> | +| <ul><li>[Java function using Visual Studio Code](./create-first-function-vs-code-java.md)</li><li>[Jav)</li></ul> | <ul><li>[Java samples with different triggers](/samples/azure-samples/azure-functions-samples-java/azure-functions-java/)</li><li>[Event Hubs trigger and Azure Cosmos DB output binding](/samples/azure-samples/java-functions-eventhub-cosmosdb/sample/)</li><li>[Dependency injection samples](https://github.com/Azure/azure-functions-java-worker/tree/dev/samples/dependency-injection-example)</li></ul> | ## Java function basics public class Function { } ```+## Use dependency injection in Java Functions ++Azure Functions Java supports the dependency injection (DI) software design pattern, which is a technique to achieve [Inversion of Control (IoC)](https://learn.microsoft.com/dotnet/architecture/modern-web-apps-azure/architectural-principles#dependency-inversion) between classes and their dependencies. Java Azure Functions provides a hook to integrate with popular Dependency Injection frameworks in your Functions Apps. [Azure Functions Java SPI](https://github.com/Azure/azure-functions-java-additions/tree/dev/azure-functions-java-spi) contains an interface [FunctionInstanceInjector](https://github.com/Azure/azure-functions-java-additions/blob/dev/azure-functions-java-spi/src/main/java/com/microsoft/azure/functions/spi/inject/FunctionInstanceInjector.java). By implementing this interface, you can return an instance of your function class and your functions will be invoked on this instance. This gives frameworks like [Spring](https://learn.microsoft.com/azure/developer/java/spring-framework/getting-started-with-spring-cloud-function-in-azure?toc=%2Fazure%2Fazure-functions%2Ftoc.json), [Quarkus](https://learn.microsoft.com/azure/azure-functions/functions-create-first-quarkus), Google Guice, Dagger, etc. the ability to create the function instance and register it into their IOC container. This means you can use those Dependency Injection frameworks to manage your functions naturally. ++> [!NOTE] +> Microsoft Azure Functions Java SPI Types ([azure-function-java-spi](https://mvnrepository.com/artifact/com.microsoft.azure.functions/azure-functions-java-spi/1.0.0)) is a package that contains all SPI interfaces for third parties to interact with Microsoft Azure functions runtime. ++### Function instance injector for dependency injection +[azure-function-java-spi](https://mvnrepository.com/artifact/com.microsoft.azure.functions/azure-functions-java-spi/1.0.0) contains an interface FunctionInstanceInjector ++```java +package com.microsoft.azure.functions.spi.inject; ++/** ++ * The instance factory used by DI framework to initialize function instance. ++ * ++ * @since 1.0.0 ++ */ ++public interface FunctionInstanceInjector { ++ /** ++ * This method is used by DI framework to initialize the function instance. 
This method takes in the customer class and returns ++ * an instance created by the DI framework; later, customer functions will be invoked on this instance. ++ * @param functionClass the class that contains customer functions ++ * @param <T> customer functions class type ++ * @return the instance that will be invoked on by the Azure Functions Java worker ++ * @throws Exception any exception that is thrown by the DI framework during instance creation ++ */ ++ <T> T getInstance(Class<T> functionClass) throws Exception; ++} ++``` ++For more examples that use FunctionInstanceInjector to integrate with dependency injection frameworks, refer to [this](https://github.com/Azure/azure-functions-java-worker/tree/dev/samples/dependency-injection-example) repository. ## Next steps |
azure-government | Documentation Government How To Access Enterprise Agreement Billing Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-how-to-access-enterprise-agreement-billing-account.md | + + Title: Access your EA billing account in the Azure Government portal | Microsoft Docs +description: This article describes how to Access your EA billing account in the Azure Government portal. ++cloud: gov +documentationcenter: '' ++++++ na + Last updated : 05/08/2023+++# Access your EA billing account in the Azure Government portal ++As an Azure Government Enterprise Agreement (EA) customer, you can now manage your EA billing account directly from [Azure Government portal](https://portal.azure.us/). This article helps you to get started with your billing account on the Azure Government portal. ++> [!NOTE] +> The Azure Enterprise (EA) portal is getting retired. We recommend that both direct and indirect EA Azure Government customers use Cost Management + Billing in the Azure Government portal to manage their enrollment and billing instead of using the EA portal. ++## Access the Azure Government portal ++You can manage your Enterprise Agreement (EA) billing account using the [Azure Government portal](https://portal.azure.us/). To access the portal, sign in using your Azure Government credentials. ++If you don't have Azure Government credentials, contact the User Administrator or Global Administrator of your Azure Government Active Directory (Azure AD) tenant. Ask them to add you as a new user in Azure Government Active directory. ++A User Administrator or Global Administrator uses the following steps to add a new user: ++1. Sign in to the [Azure Government portal](https://portal.azure.us/) in the User Administrator or Global Administrator role. +1. Navigate to **Azure Active Directory** > **Users**. +1. Select **New user** > **Create new user** from the menu. + :::image type="content" source="./media/documentation-government-how-to-access-enterprise-agreement-billing-account-01.png" alt-text="Screenshot showing the New user option." lightbox="./media/documentation-government-how-to-access-enterprise-agreement-billing-account-01.png" ::: +1. On the **New User** page, provide the new user's information like user name, display name, role etc. +1. Copy the autogenerated password provided in the **Password** box. Share it with the new user to sign in for the first time. +1. Select **Create**. ++Once you have the credentials, sign into the [Azure Government Portal](https://portal.azure.us/) and you should see **Microsoft Azure Government** in the upper left section of the main navigation bar. +++To access your Enterprise Agreement (EA) billing account or enrollment, assign the appropriate permissions to the newly created Azure Government user account. Reach out to an existing Enterprise Administrator and they should be able to assign one of the following roles: ++- Enterprise Administrator +- Enterprise Administrator (read only) +- Department Administrator +- Department Administrator (read only) +- Account Owner ++Each role has a varying degree of user limits and permissions. For more information, see [Organization structure and permissions by role](../cost-management-billing/manage/understand-ea-roles.md#organization-structure-and-permissions-by-role). ++## Access your EA billing account ++Billing administration on the Azure Government portal happens in the context of a billing account scope (or enrollment scope). 
To access your EA billing account, use the following steps: ++1. Sign in to the [Azure Government Portal](https://portal.azure.us/) +1. Search for **Cost Management + Billing** and select it. + :::image type="content" source="./media/documentation-government-how-to-access-enterprise-agreement-billing-account-03.png" alt-text="Screenshot showing search for Cost Management + Billing." lightbox="./media/documentation-government-how-to-access-enterprise-agreement-billing-account-03.png" ::: +1. If you have access to more than one billing account, select **Billing scopes** from the navigation menu. Then, select the billing account that you want to work with. + :::image type="content" source="./media/documentation-government-how-to-access-enterprise-agreement-billing-account-04.png" alt-text="Screenshot showing Billing scopes." lightbox="./media/documentation-government-how-to-access-enterprise-agreement-billing-account-04.png" ::: ++## Next steps ++Once you have access to your enrollment on the Azure Government portal, refer to the following articles. ++- For more information about managing your enrollment, creating a department or subscription, adding administrators and account owners, and other administrative tasks, see [Azure EA billing administration](../cost-management-billing/manage/direct-ea-administration.md). +- To view a usage summary, price sheet, and download reports, see [Review usage charges](../cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md#review-usage-charges). +- To learn more about EA billing roles, read [Understand Azure Enterprise Agreement administrative roles in Azure](../cost-management-billing/manage/understand-ea-roles.md) +- For information on which REST APIs to use with your Azure enterprise enrollment and an explanation for how to resolve common issues with REST APIs, see [Azure Enterprise REST APIs](../cost-management-billing/manage/enterprise-rest-apis.md). |
azure-government | Documentation Government Overview Dod | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-dod.md | Azure Government offers the following regions to DoD mission owners and their pa |Regions|Relevant authorizations|# of IL5 PA services| ||||-|US Gov Arizona </br> US Gov Texas </br> US Gov Virginia|FedRAMP High, DoD IL4, DoD IL5|145| +|US Gov Arizona </br> US Gov Texas </br> US Gov Virginia|FedRAMP High, DoD IL4, DoD IL5|150| |US DoD Central </br> US DoD East|DoD IL5|60| Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia (**US Gov regions**) are intended for US federal (including DoD), state, and local government agencies, and their partners. Azure Government regions US DoD Central and US DoD East (**US DoD regions**) are reserved for exclusive DoD use. Separate DoD IL5 PAs are in place for US Gov regions vs. US DoD regions. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all®ions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). |
azure-maps | Create Data Source Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md | var source = new atlas.source.DataSource(); map.sources.add(source); //Create a polygon and add it to the data source.-source.add(new atlas.data.Polygon([[[/* Coordinates for polygon */]]])); +source.add(new atlas.data.Feature( + new atlas.data.Polygon([[[/* Coordinates for polygon */]]])); //Create a polygon layer to render the filled in area of the polygon. var polygonLayer = new atlas.layer.PolygonLayer(source, 'myPolygonLayer', { var polygonLayer = new atlas.layer.PolygonLayer(source, 'myPolygonLayer', { //Create a line layer for greater control of rendering the outline of the polygon. var lineLayer = new atlas.layer.LineLayer(source, 'myLineLayer', {- color: 'orange', - width: 2 + strokeColor: 'orange', + strokeWidth: 2 }); //Create a bubble layer to render the vertices of the polygon as scaled circles. |
azure-maps | How To Search For Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md | The example in this section uses [Get Search Address] to convert an address into 5. Next, try setting the `query` key to `400 Broa`. -6. Select the **Send** button. The response includes results from multiple countries. To geobias results to the relevant area for your users, always add as many location details as possible to the request. +6. Select the **Send** button. The response includes results from multiple countries/regions. To geobias results to the relevant area for your users, always add as many location details as possible to the request. ## Fuzzy Search The example in this section uses [Get Search Address] to convert an address into ### Search for an address using Fuzzy Search -The example in this section uses `Fuzzy Search` to search the entire world for *pizza*, then searches over the scope of a specific country. Finally, it demonstrates how to use a coordinate location and radius to scope a search over a specific area, and limit the number of returned results. +The example in this section uses `Fuzzy Search` to search the entire world for *pizza*, then searches over the scope of a specific country/region. Finally, it demonstrates how to use a coordinate location and radius to scope a search over a specific area, and limit the number of returned results. > [!IMPORTANT] > To geobias results to the relevant area for your users, always add as many location details as possible. For more information, see [Best Practices for Search]. |
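To see the scoping and geobiasing described in this entry from the command line, a request along the following lines can be used. The coordinates, radius, and result limit are illustrative, and you substitute your own subscription key.

```bash
# Sketch: fuzzy search for "pizza", scoped to the US, biased to a point with a
# 5 km radius, and limited to 5 results.
curl "https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&query=pizza&countrySet=US&lat=47.6062&lon=-122.3321&radius=5000&limit=5&subscription-key={Your-Azure-Maps-Subscription-key}"
```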
azure-maps | Rest Sdk Developer Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md | Azure Maps Java SDK supports [Java 8][Java 8] or above. | [Rendering][java rendering readme]| [azure-maps-rendering][java rendering package]|[rendering sample][java rendering sample] | | [Geolocation][java geolocation readme]|[azure-maps-geolocation][java geolocation package]|[geolocation sample][java geolocation sample] | | [Timezone][java timezone readme] | [azure-maps-timezone][java timezone package] | [timezone samples][java timezone sample] |-| [Elevation][java elevation readme] ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) | [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] | For more information, see the [Java SDK Developers Guide]. For more information, see the [Java SDK Developers Guide]. [java timezone package]: https://repo1.maven.org/maven2/com/azure/azure-maps-timezone [java timezone readme]: https://github.com/Azure/azure-sdk-for-jav [java timezone sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-timezone/src/samples/java/com/azure/maps/timezone/samples-[java elevation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-elevation -[java elevation readme]: https://github.com/Azure/azure-sdk-for-jav -[java elevation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-elevation/src/samples/java/com/azure/maps/elevation/samples |
azure-maps | Tutorial Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md | Title: 'Tutorial: Use Microsoft Azure Maps Creator to create indoor maps' -description: Tutorial on how to use Microsoft Azure Maps Creator to create indoor maps +description: Learn how to use Microsoft Azure Maps Creator to create indoor maps. Last updated 01/28/2022-# Tutorial: Use Creator to create indoor maps +# Tutorial: Use Azure Maps Creator to create indoor maps This tutorial describes how to create indoor maps for use in Microsoft Azure Maps. This tutorial demonstrates how to: > [!div class="checklist"] >-> * Upload your indoor map drawing package. +> * Upload your drawing package for indoor maps. > * Convert your drawing package into map data. > * Create a dataset from your map data. > * Create a tileset from the data in your dataset. > * Get the default map configuration ID from your tileset. -> [!TIP] -> You can also create a dataset from a GeoJSON package. For more information, see [Create a dataset using a GeoJson package (Preview)]. +You can also create a dataset from a GeoJSON package. For more information, see [Create a dataset using a GeoJSON package (preview)]. ## Prerequisites * An [Azure Maps account] * A [subscription key] * A [Creator resource]-* Download the [Sample drawing package] +* The [sample drawing package] downloaded This tutorial uses the [Postman] application, but you can use a different API development environment. >[!IMPORTANT] >-> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services]. -> * Replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key in the URL examples. +> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services]. +> * In the URL examples, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ## Upload a drawing package -Use the [Data Upload API] to upload the drawing package to Azure Maps resources. --The Data Upload API is a long running transaction that implements the pattern defined in [Creator Long-Running Operation API V2]. +Use the [Data Upload API] to upload the drawing package to Azure Maps resources. The Data Upload API is a long-running transaction that implements the pattern defined in [Creator Long-Running Operation API V2]. To upload the drawing package: To upload the drawing package: 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *POST Data Upload*. +3. For **Request name**, enter a name for the request, such as **POST Data Upload**. 4. Select the **POST** HTTP method. -5. Enter the following URL to the [Data Upload API] The request should look like the following URL: +5. Enter the following URL to the [Data Upload API]: ```http https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=dwgzippackage&subscription-key={Your-Azure-Maps-Subscription-key} To upload the drawing package: 6. Select the **Headers** tab. -7. In the **KEY** field, select `Content-Type`. +7. In the **KEY** field, select **Content-Type**. -8. In the **VALUE** field, select `application/octet-stream`. +8. 
In the **VALUE** field, select **application/octet-stream**. - :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-header.png"alt-text="A screenshot of Postman showing the header tab information for data upload that highlights the Content Type key with the value of application forward slash octet dash stream."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-header.png"alt-text="Screenshot of Postman that shows information on the Headers tab, including key and value."::: 9. Select the **Body** tab. -10. Select the **binary** radio button. +10. Select the **binary** option. -11. Select **Select File**, and then select a drawing package. +11. Choose **Select File**, and then select a drawing package. - :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="A screenshot of Postman showing the body tab in the POST window, with Select File highlighted, it's used to select the drawing package to import into Creator."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="Screenshot of Postman that shows the Body tab in the POST window, with the button for selecting a file."::: 12. Select **Send**. 13. In the response window, select the **Headers** tab. -14. Copy the value of the **Operation-Location** key. The Operation-Location key is also known as the `status URL` and is required to check the status of the drawing package upload, which is explained in the next section. +14. Copy the value of the **Operation-Location** key. This key is also known as the *status URL*. You need it to check the status of the drawing package upload in the next section. - :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-response-header.png" alt-text="A screenshot of Postman showing the header tab in the response window, with the Operation Location key highlighted."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-response-header.png" alt-text="Screenshot of Postman that shows the Operation-Location key on the Headers tab in the response window."::: -### Check the drawing package upload status +### Check the upload status of the drawing package To check the status of the drawing package and retrieve its unique ID (`udid`): To check the status of the drawing package and retrieve its unique ID (`udid`): 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *GET Data Upload Status*. +3. For **Request name**, enter a name for the request, such as **GET Data Upload Status**. 4. Select the **GET** HTTP method. -5. Enter the `status URL` you copied as the last step in the previous section. The request should look like the following URL: +5. Enter the status URL that you copied as the last step in the previous section. The request should look like the following URL: ```http https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} To check the status of the drawing package and retrieve its unique ID (`udid`): 7. In the response window, select the **Headers** tab. -8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the drawing package resource. +8. Copy the value of the **Resource-Location** key, which is the resource location URL. 
The resource location URL contains the unique identifier (`udid`) of the drawing package resource. - :::image type="content" source="./media/tutorial-creator-indoor-maps/resource-location-url.png" alt-text="A screenshot of Postman showing the resource location URL in the responses header."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/resource-location-url.png" alt-text="Screenshot of Postman that shows the resource location URL in the response header."::: -### (Optional) Retrieve drawing package metadata +### (Optional) Retrieve metadata from the drawing package You can retrieve metadata from the drawing package resource. The metadata contains information like the resource location URL, creation date, updated date, size, and upload status. To retrieve content metadata: 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *GET Data Upload Metadata*. +3. For **Request name**, enter a name for the request, such as **GET Data Upload Metadata**. -4. . Select the **GET** HTTP method. +4. Select the **GET** HTTP method. -5. Enter the `resource Location URL` you copied as the last step in the previous section: +5. Enter the resource location URL that you copied as the last step in the previous section: ```http https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} To retrieve content metadata: 6. Select **Send**. -7. In the response window, select the **Body** tab. The metadata should like the following JSON fragment: +7. In the response window, select the **Body** tab. The metadata should look like the following JSON fragment: ```json { To retrieve content metadata: ## Convert a drawing package -Now that the drawing package is uploaded, you use the `udid` for the uploaded package to convert the package into map data. The [Conversion API] uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation] article. +Now that the drawing package is uploaded, you use the `udid` value for the uploaded package to convert the package into map data. The [Conversion API] uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation] article. To convert a drawing package: To convert a drawing package: 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *POST Convert Drawing Package*. +3. For **Request name**, enter a name for the request, such as **POST Convert Drawing Package**. 4. Select the **POST** HTTP method. -5. Enter the following URL to the [Conversion service] (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded package): +5. Enter the following URL to the [Conversion service]. Replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. Replace `udid` with the `udid` value of the uploaded package. ```http https://us.atlas.microsoft.com/conversions?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2023-03-01-preview&udid={udid}&inputType=DWG&dwgPackageVersion=2.0 To convert a drawing package: 7. In the response window, select the **Headers** tab. -8. Copy the value of the **Operation-Location** key, it contains the `status URL` that you use to check the status of the conversion. +8. Copy the value of the **Operation-Location** key. 
It contains the status URL that you use to check the status of the conversion. - :::image type="content" source="./media/tutorial-creator-indoor-maps/data-convert-location-url.png" border="true" alt-text="A screenshot of Postman showing the URL value of the operation location key in the responses header."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/data-convert-location-url.png" border="true" alt-text="Screenshot of Postman that shows the URL value of the operation location key in the response header."::: -### Check the drawing package conversion status +### Check the status of the drawing package conversion -After the conversion operation completes, it returns a `conversionId`. We can access the `conversionId` by checking the status of the drawing package conversion process. The `conversionId` can then be used to access the converted data. +After the conversion operation finishes, it returns a `conversionId` value. You can access the `conversionId` value by checking the status of the drawing package's conversion process. You can then use the `conversionId` value to access the converted data. -To check the status of the conversion process and retrieve the `conversionId`: +To check the status of the conversion process and retrieve the `conversionId` value: 1. In the Postman app, select **New**. 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *GET Conversion Status*. +3. For **Request name**, enter a name for the request, such as **GET Conversion Status**. -4. Select the **GET** HTTP method: +4. Select the **GET** HTTP method. -5. Enter the `status URL` you copied in [Convert a drawing package](#convert-a-drawing-package). The request should look like the following URL: +5. Enter the status URL that you copied in the [Convert a drawing package](#convert-a-drawing-package) section. The request should look like the following URL: ```http https://us.atlas.microsoft.com/conversions/operations/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} To check the status of the conversion process and retrieve the `conversionId`: 7. In the response window, select the **Headers** tab. -8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`conversionId`), which is used by other APIs to access the converted map data. +8. Copy the value of the **Resource-Location** key, which is the resource location URL. The resource location URL contains the unique identifier `conversionId`, which other APIs use to access the converted map data. - :::image type="content" source="./media/tutorial-creator-indoor-maps/data-conversion-id.png" alt-text="A screenshot of Postman highlighting the conversion ID value that appears in the resource location key in the responses header."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/data-conversion-id.png" alt-text="Screenshot of Postman that highlights the conversion ID value that appears in the Resource-Location key in the response header."::: -The sample drawing package should be converted without errors or warnings. However, if you receive errors or warnings from your own drawing package, the JSON response includes a link to the [Drawing error visualizer]. You can use the Drawing Error visualizer to inspect the details of errors and warnings. 
To receive recommendations to resolve conversion errors and warnings, see [Drawing conversion errors and warnings]. +The sample drawing package should be converted without errors or warnings. But if you receive errors or warnings from your own drawing package, the JSON response includes a link to the [Drawing Error Visualizer]. You can use the Drawing Error Visualizer to inspect the details of errors and warnings. To get recommendations for resolving conversion errors and warnings, see [Drawing conversion errors and warnings]. The following JSON fragment displays a sample conversion warning: The following JSON fragment displays a sample conversion warning: ## Create a dataset -A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset, use the [Dataset Create API]. The Dataset Create API takes the `conversionId` for the converted drawing package and returns a `datasetId` of the created dataset. +A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset, use the [Dataset Create API]. The Dataset Create API takes the `conversionId` value for the converted drawing package and returns a `datasetId` value for the created dataset. To create a dataset: To create a dataset: 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *POST Dataset Create*. +3. For **Request name**, enter a name for the request, such as **POST Dataset Create**. 4. Select the **POST** HTTP method. -5. Enter the following URL to the [Dataset service]. The request should look like the following URL (replace `{conversionId`} with the `conversionId` obtained in [Check drawing package conversion status](#check-the-drawing-package-conversion-status)): +5. Enter the following URL to the [Dataset service]. Replace `{conversionId}` with the `conversionId` value that you obtained in [Check the status of the drawing package conversion](#check-the-status-of-the-drawing-package-conversion). ```http https://us.atlas.microsoft.com/datasets?api-version=2023-03-01-preview&conversionId={conversionId}&subscription-key={Your-Azure-Maps-Subscription-key} To create a dataset: 7. In the response window, select the **Headers** tab. -8. Copy the value of the **Operation-Location** key, it contains the `status URL` that you use to check the status of the dataset. +8. Copy the value of the **Operation-Location** key. It contains the status URL that you use to check the status of the dataset. - :::image type="content" source="./media/tutorial-creator-indoor-maps/data-dataset-location-url.png" border="true" alt-text="A screenshot of Postman showing the value of the operation location key for dataset in the responses header."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/data-dataset-location-url.png" border="true" alt-text="Screenshot of Postman that shows the value of the Operation-Location key for a dataset in the response header."::: ### Check the dataset creation status -To check the status of the dataset creation process and retrieve the `datasetId`: +To check the status of the dataset creation process and retrieve the `datasetId` value: 1. In the Postman app, select **New**. 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *GET Dataset Status*. +3. For **Request name**, enter a name for the request, such as **GET Dataset Status**. 4. Select the **GET** HTTP method. -5. 
Enter the `status URL` you copied in [Create a dataset](#create-a-dataset). The request should look like the following URL: +5. Enter the status URL that you copied in the [Create a dataset](#create-a-dataset) section. The request should look like the following URL: ```http https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} To check the status of the dataset creation process and retrieve the `datasetId` 6. Select **Send**. -7. In the response window, select the **Headers** tab. The value of the **Resource-Location** key is the `resource location URL`. The `resource location URL` contains the unique identifier (`datasetId`) of the dataset. +7. In the response window, select the **Headers** tab. The value of the **Resource-Location** key is the resource location URL. The resource location URL contains the unique identifier (`datasetId`) of the dataset. 8. Save the `datasetId` value, because you'll use it in the next tutorial. - :::image type="content" source="./media/tutorial-creator-indoor-maps/dataset-id.png" alt-text="A screenshot of Postman highlighting the dataset ID value of the resource location key in the responses header."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/dataset-id.png" alt-text="Screenshot of Postman that shows the dataset ID value of the Resource-Location key in the response header."::: ## Create a tileset -A tileset is a set of vector tiles that render on the map. Tilesets are created from existing datasets. However, a tileset is independent from the dataset from which it was sourced. If the dataset is deleted, the tileset continues to exist. +A tileset is a set of vector tiles that render on the map. Tilesets are created from existing datasets. However, a tileset is independent from the dataset that it comes from. If the dataset is deleted, the tileset continues to exist. To create a tileset: To create a tileset: 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *POST Tileset Create*. +3. For **Request name**, enter a name for the request, such as **POST Tileset Create**. 4. Select the **POST** HTTP method. -5. Enter the following URL to the [Tileset service]. The request should look like the following URL (replace `{datasetId`} with the `datasetId` obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section: +5. Enter the following URL to the [Tileset service]. Replace `{datasetId}` with the `datasetId` value that you obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section. ```http https://us.atlas.microsoft.com/tilesets?api-version=2023-03-01-preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Primary-Subscription-key} To create a tileset: 7. In the response window, select the **Headers** tab. -8. Copy the value of the **Operation-Location** key, it contains the `status URL`, which you use to check the status of the tileset. +8. Copy the value of the **Operation-Location** key. It contains the status URL, which you use to check the status of the tileset. 
- :::image type="content" source="./media/tutorial-creator-indoor-maps/data-tileset-location-url.png" border="true" alt-text="A screenshot of Postman highlighting the status URL that is the value of the operation location key in the responses header."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/data-tileset-location-url.png" border="true" alt-text="Screenshot of Postman that shows the status URL, which is the value of the Operation-Location key in the response header."::: -### Check the tileset creation status +### Check the status of tileset creation -To check the status of the tileset creation process and retrieve the `tilesetId`: +To check the status of the tileset creation process and retrieve the `tilesetId` value: 1. In the Postman app, select **New**. 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *GET Tileset Status*. +3. For **Request name**, enter a name for the request, such as **GET Tileset Status**. 4. Select the **GET** HTTP method. -5. Enter the `status URL` you copied in [Create a tileset](#create-a-tileset). The request should look like the following URL: +5. Enter the status URL that you copied in the [Create a tileset](#create-a-tileset) section. The request should look like the following URL: ```http https://us.atlas.microsoft.com/tilesets/operations/{operationId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} To check the status of the tileset creation process and retrieve the `tilesetId` 6. Select **Send**. -7. In the response window, select the **Headers** tab. The value of the **Resource-Location** key is the `resource location URL`. The `resource location URL` contains the unique identifier (`tilesetId`) of the dataset. +7. In the response window, select the **Headers** tab. The value of the **Resource-Location** key is the resource location URL. The resource location URL contains the unique identifier (`tilesetId`) of the dataset. - :::image type="content" source="./media/tutorial-creator-indoor-maps/tileset-id.png" alt-text="A screenshot of Postman highlighting the tileset ID that is part of the value of the resource location URL in the responses header."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/tileset-id.png" alt-text="Screenshot of Postman that shows the tileset ID, which is part of the value of the resource location URL in the response header."::: -## The map configuration (preview) +## Get the map configuration (preview) -Once your tileset creation completes, you can get the `mapConfigurationId` using the [tileset get] HTTP request: +After you create a tileset, you can get the `mapConfigurationId` value by using the [tileset get] HTTP request: 1. In the Postman app, select **New**. 2. In the **Create New** window, select **HTTP Request**. -3. Enter a **Request name** for the request, such as *GET mapConfigurationId from Tileset*. +3. For **Request name**, enter a name for the request, such as **GET mapConfigurationId from Tileset**. 4. Select the **GET** HTTP method. -5. Enter the following URL to the [Tileset service], passing in the tileset ID you obtained in the previous step. +5. Enter the following URL to the [Tileset service]. Pass in the tileset ID that you obtained in the previous step. 
```http https://us.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} Once your tileset creation completes, you can get the `mapConfigurationId` using 6. Select **Send**. -7. The tileset JSON appears in the body of the response, scroll down to see the `mapConfigurationId`: +7. The tileset JSON appears in the body of the response. Scroll down to see the `mapConfigurationId` value: ```json "defaultMapConfigurationId": "5906cd57-2dba-389b-3313-ce6b549d4396" ``` -For more information, see [Map configuration] in the indoor maps concepts article. +For more information, see [Map configuration] in the article about indoor map concepts. ## Next steps For more information, see [Map configuration] in the indoor maps concepts articl [Sample drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/blob/master/Sample%20-%20Contoso%20Drawing%20Package.zip [Postman]: https://www.postman.com [Access to Creator services]: how-to-manage-creator.md#access-to-creator-services-[Create a dataset using a GeoJson package (Preview)]: how-to-dataset-geojson.md +[Create a dataset using a GeoJSON package (Preview)]: how-to-dataset-geojson.md [Data Upload API]: /rest/api/maps/data-v2/upload [Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md [Conversion API]: /rest/api/maps/v2/conversion |
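Every step in the tutorial above (upload, conversion, dataset, tileset) follows the same long-running-operation pattern: send the request, copy the `Operation-Location` (status URL) header, and poll it until a `Resource-Location` header appears. The following Node.js sketch shows that pattern for the first step, uploading the drawing package and polling for its resource location; the file path, polling interval, and omitted error handling are assumptions, not part of the tutorial.

```javascript
// Sketch of the drawing-package upload and status polling (Node.js 18+ with built-in fetch).
// The subscription key, file path, and 5-second polling interval are placeholders.
const fs = require('fs');

const subscriptionKey = '<Your-Azure-Maps-Subscription-key>';
const uploadUrl = `https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=dwgzippackage&subscription-key=${subscriptionKey}`;

async function uploadDrawingPackage(zipPath) {
    // POST the zipped drawing package as a binary body.
    const uploadResponse = await fetch(uploadUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/octet-stream' },
        body: fs.readFileSync(zipPath)
    });

    // Operation-Location is the status URL of the long-running operation.
    const statusUrl = uploadResponse.headers.get('operation-location');

    // Poll the status URL until the Resource-Location header (containing the udid) appears.
    while (true) {
        const statusResponse = await fetch(`${statusUrl}&subscription-key=${subscriptionKey}`);
        const resourceLocation = statusResponse.headers.get('resource-location');
        if (resourceLocation) {
            return resourceLocation; // for example, .../mapData/metadata/{udid}?api-version=2.0
        }
        await new Promise(resolve => setTimeout(resolve, 5000));
    }
}

uploadDrawingPackage('./Sample - Contoso Drawing Package.zip')
    .then(location => console.log('Resource location:', location));
```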
azure-maps | Understanding Azure Maps Transactions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md | The following table summarizes the Azure Maps services that generate transaction | Azure Maps Service | Billable | Transaction Calculation | Meter | |--|-|-|-| | [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2)<br>[Data registry](/rest/api/maps/data-registry) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData, which are non-billable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|-| [Elevation (DEM)](/rest/api/maps/elevation)([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023))| Yes| One request = 2 transactions<br> <ul><li>If requesting elevation for a single point then one request = 1 transaction| <ul><li>Location Insights Elevation (Gen2 pricing)</li><li>Standard S1 Elevation Service Transactions (Gen1 S1 pricing)</li></ul>| | [Geolocation](/rest/api/maps/geolocation)| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>|-| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction, except microsoft.dem ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table](#azure-maps-creator). |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>| +| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table](#azure-maps-creator). 
|<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>| | [Route](/rest/api/maps/route) | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note, the billable Route transaction usage results generated by the batch request will have **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> | | [Search v1](/rest/api/maps/search)<br>[Search v2](/rest/api/maps/search-v2) | Yes | One request = 1 transaction.<br><ul><li>If using Batch Search, each location in the Batch request generates a billable Search transaction. Note, the billable Search transaction usage results generated by the batch request will have **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Search</li><li>Standard S1 Search Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> | | [Spatial](/rest/api/maps/spatial) | Yes, except for `Spatial.GetBoundingBox`, `Spatial.PostBoundingBox` and `Spatial.PostPointInPolygonBatch`, which are non-billable.| One request = 1 transaction.<br><ul><li>If using Geofence, five requests = 1 transaction</li></ul> | <ul><li>Location Insights Spatial Calculations (Gen2 pricing)</li><li>Standard S1 Spatial Transactions (Gen1 S1 pricing)</li></ul> | |
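To make the transaction calculations in the table above concrete, the following sketch works through a hypothetical month of usage. The usage numbers are invented for illustration and the rounding behavior is an assumption; the pricing pages remain the authoritative source for billing.

```javascript
// Illustrative arithmetic only - usage numbers are invented and rounding up is an assumption.
const usage = {
    mapTilesServed: 4500,      // Render: 15 tiles = 1 transaction
    geofenceRequests: 250,     // Spatial geofence: 5 requests = 1 transaction
    routeMatrixCells: 100,     // Route Matrix: each cell = 1 transaction
    searchRequests: 1200       // Search: 1 request = 1 transaction
};

const transactions = {
    render: Math.ceil(usage.mapTilesServed / 15),   // 300
    geofence: Math.ceil(usage.geofenceRequests / 5), // 50
    route: usage.routeMatrixCells,                   // 100
    search: usage.searchRequests                     // 1200
};

const total = Object.values(transactions).reduce((sum, n) => sum + n, 0);
console.log(transactions, 'total billable transactions:', total); // 1650
```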
azure-monitor | Alerts Common Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md | The common alert schema provides a consistent structure for: - Azure Automation runbook > [!NOTE]-> Alerts generated by [VM insights](../vm/vminsights-overview.md) do not support the common schema. +> - Alerts generated by [VM insights](../vm/vminsights-overview.md) do not support the common schema. +>- Smart detection alerts use the common schema by default. You don't have to enable the common schema for smart detection alerts. ## Structure of the common schema The common schema includes information about the affected resource and the cause If you want to route alert instances to specific teams based on criteria such as a resource group, you can use the fields in the **Essentials** section to provide routing logic for all alert types. The teams that receive the alert notification can then use the context fields for their investigation. - **Alert context**: Fields that vary depending on the type of the alert. The alert context fields describe the cause of the alert. For example, a metric alert would have fields like the metric name and metric value in the alert context. An activity log alert would have information about the event that generated the alert.-- **Custom properties**: You can add more information to the alert payload by adding custom properties if you've configured action groups for a metric alert rule. +- **Custom properties**: Additional information that is not included in the alert payload by default can be included in the alert payload using custom properties. Custom properties are a **Key:Value** pair that can include any information configured in the alert rule. - > [!NOTE] - > Custom properties are currently only supported by metric alerts. For all other alert types, the **custom properties** field is null. ## Sample alert payload ```json For sample alerts that use the common schema, see [Sample alert payloads](alerts |webTestName |If the condition type is `webtest`, the name of the webtest. | |windowStartTime |The start time of the evaluation window in which the alert fired. | |windowEndTime |The end time of the evaluation window in which the alert fired. |-|customProperties || - ### Sample metric alert with a static threshold when the monitoringService = `Platform` ```json See [Azure Monitor managed service for Prometheus rule groups (preview)](../esse ``` ## Custom properties fields -If you've configured action groups for a metric alert rule, you can add more information to the alert payload by adding custom properties. --The custom properties section contains ΓÇ£key: valueΓÇ¥ objects that are added to webhook notifications. +If the alert rule that generated your alert contains action groups, custom properties can contain additional information about the alert. The custom properties section contains ΓÇ£key: valueΓÇ¥ objects that are added to webhook notifications. If custom properties aren't set in the alert rule, the field is null.--> [!NOTE] -> Custom properties are currently only supported by metric alerts. For all other alert types, the **custom properties** field is null. ## Enable the common alert schema Use action groups in the Azure portal or use the REST API to enable the common alert schema. Schemas are defined at the action level. For example, you must separately enable the schema for an email action and a webhook action. -> [!NOTE] -> Smart detection alerts support the common schema by default. 
You don't have to enable the common schema for smart detection alerts. - ### Enable the common schema in the Azure portal  |
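Because the common schema places routing fields under `data.essentials` and rule-defined key/value pairs under `data.customProperties`, a webhook receiver can branch on those fields directly. The following sketch assumes a plain Node.js HTTP endpoint receiving a common-schema payload; the port, the routing rule, and the way the resource group is derived from the target resource ID are illustrative assumptions rather than part of the schema.

```javascript
// Sketch of a webhook receiver that routes common-schema alerts by resource group.
// The port and routing logic are illustrative assumptions.
const http = require('http');

http.createServer((req, res) => {
    let body = '';
    req.on('data', chunk => (body += chunk));
    req.on('end', () => {
        const { essentials, customProperties } = JSON.parse(body).data;

        // Essentials fields are common to every alert type and can drive routing.
        console.log(essentials.alertRule, essentials.severity, essentials.monitorCondition);

        // Derive the resource group from the first target resource ID (an ARM resource ID).
        const targetId = (essentials.alertTargetIDs || [])[0] || '';
        const resourceGroup = (targetId.split('/resourceGroups/')[1] || '').split('/')[0];
        console.log('Route to the team that owns resource group:', resourceGroup);

        // customProperties is null unless key:value pairs were configured on the alert rule.
        console.log('Custom properties:', customProperties);

        res.writeHead(200);
        res.end();
    });
}).listen(8080);
```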
azure-monitor | Alerts Create New Alert Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md | Alerts triggered by these alert rules contain a payload that uses the [common al :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new alert rule."::: +1. (Optional) In the **Custom properties** section, if you've configured action groups for this alert rule, you can add custom properties in key:value pairs to the alert payload to add more information to the payload. Add the property **Name** and **Value** for the custom property you want included in the payload. ++ You can also use custom properties to extract and manipulate data from alert payloads that use the common schema. You can use those values in the action group webhook or logic app. ++ The format for extracting values from the common schema, use a "$", and then the path of the common schema field inside curly brackets. For example: `${data.essentials.monitorCondition}`. ++ For example, you could use these values in the **custom properties** to utilize data from the payload. ++ |Custom properties name |Custom properties value |Result | + |||| + |AdditionalDetails|Evaluation windowStartTime: ${data.alertContext.condition.windowStartTime}. windowEndTime: ${data.alertContext.condition.windowEndTime}|AdditionalDetails": "Evaluation windowStartTime: 2023-04-04T14:39:24.492Z. windowEndTime: 2023-04-04T14:44:24.492Z" | + |Alert ${data.essentials.monitorCondition} reason |ΓÇ£${data.alertContext.condition.allOf[0].metricName} ${data.alertContext.condition.allOf[0].operator}${data.alertContext.condition.allOf[0].threshold} ${data.essentials.monitorCondition}. The value is ${data.alertContext.condition.allOf[0].metricValue}" |Examples of the results could be: <br> - Alert Resolved reason": "Percentage CPU GreaterThan5 Resolved. The value is 3.585 <br>Percentage CPU GreaterThan5 Fired. The value is 10.585 | + 1. On the **Details** tab, define the **Project details**. - Select the **Subscription**. - Select the **Resource group**. Alerts triggered by these alert rules contain a payload that uses the [common al ||| |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.| |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met.<br> If you don't select this checkbox, metric alerts are stateless. Stateless alerts fire each time the condition is met, even if alert already fired.<br> The frequency of notifications for stateless metric alerts differs based on the alert rule's configured frequency:<br>**Alert frequency of less than 5 minutes**: While the condition continues to be met, a notification is sent somewhere between one and six minutes.<br>**Alert frequency of more than 5 minutes**: While the condition continues to be met, a notification is sent between the configured frequency and double the frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent somewhere between 15 to 30 minutes.|- 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload. 
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new alert rule."::: ### [Log alert](#tab/log) Alerts triggered by these alert rules contain a payload that uses the [common al |Mute actions |Select to set a period of time to wait before alert actions are triggered again. If you select this checkbox, the **Mute actions for** field appears to select the amount of time to wait after an alert is fired before triggering actions again.| |Check workspace linked storage|Select if logs workspace linked storage for alerts is configured. If no linked storage is configured, the rule isn't created.| - 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload. -- > [!NOTE] - > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for log alerts. -- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log alert rule."::: ### [Activity log alert](#tab/activity-log) 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select the **Region**.- 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. - 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload. -- > [!NOTE] - > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for activity log alerts. + 1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: ### [Resource Health alert](#tab/resource-health) 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select the **Region**.- 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. - 1. 
(Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload. + 1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. - > [!NOTE] - > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for resource health alerts. -- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: ### [Service Health alert](#tab/service-health) 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select the **Region**.- 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. - 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload. -- > [!NOTE] - > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for service health alerts. + 1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: |
azure-monitor | Monitor Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md | For more advanced use cases, you can modify telemetry by adding spans, updating ### Enable distributed tracing for Java function apps -1. **Option 1**: On the function app **Overview** pane, go to **Application Insights**. Under **Collection Level**, select **Recommended**. +On the function app **Overview** pane, go to **Application Insights**. Under **Collection Level**, select **Recommended**. - > [!div class="mx-imgBorder"] - > :::image type="content" source="./media//functions/collection-level.jpg" lightbox="./media//functions/collection-level.jpg" alt-text="Screenshot that shows the how to enable the AppInsights Java Agent."::: --2. **Option 2**: On the function app **Overview** pane, go to **Configuration**. Under **Application settings**, select **New application setting**. -- > [!div class="mx-imgBorder"] - > :::image type="content" source="./media//functions/create-new-setting.png" lightbox="./media//functions/create-new-setting.png" alt-text="Screenshot that shows the New application setting option."::: -- Add an application setting with the following values and select **Save**. -- ``` - APPLICATIONINSIGHTS_ENABLE_AGENT: true - ``` +> [!div class="mx-imgBorder"] ### Troubleshooting -Your Java functions might have slow startup times if you adopted this feature before February 2023. Follow the steps to fix the issue. +Your Java functions might have slow startup times if you adopted this feature before February 2023. From the function app **Overview** pane, go to **Configuration** in the left-hand side navigation menu. Then click on **Application settings** and follow the steps below to fix the issue. #### Windows Your Java functions might have slow startup times if you adopted this feature be ApplicationInsightsAgent_EXTENSION_VERSION -> ~2 ``` -1. Enable the latest version by adding this setting: +2. Enable the latest version by adding this setting: ``` APPLICATIONINSIGHTS_ENABLE_AGENT: true Your Java functions might have slow startup times if you adopted this feature be ApplicationInsightsAgent_EXTENSION_VERSION -> ~3 ``` -1. Enable the latest version by adding this setting: +2. Enable the latest version by adding this setting: ``` APPLICATIONINSIGHTS_ENABLE_AGENT: true To collect custom telemetry from services such as Redis, Memcached, and MongoDB, * See what [Application Map](./app-map.md?tabs=net) can do for your business. * Read about [requests and dependencies for Java apps](./java-in-process-agent.md). * Learn more about [Azure Monitor](../overview.md) and [Application Insights](./app-insights-overview.md).++ |
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | A connection string in Application Insights defines the target location for send ### [.NET](#tab/net) -Currently unavailable. +Use one of the following three ways to configure the connection string: ++- Add `UseAzureMonitor()` to your application startup. Depending on your version of .NET, this will be in either your `startup.cs` or `program.cs` class. + ```csharp + var builder = WebApplication.CreateBuilder(args); ++ builder.Services.AddOpenTelemetry().UseAzureMonitor(options => { + options.ConnectionString = "<Your Connection String>"; + }); ++ var app = builder.Build(); ++ app.Run(); + ``` +- Set an environment variable: + ```console + APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String> + ``` +- Add the following section to your `appsettings.json` config file: + ```json + { + "AzureMonitor": { + "ConnectionString": "<Your Connection String>" + } + } + ``` + +> [!NOTE] +> If you set the connection string in more than one place, we adhere to the following precedence: +> 1. Code +> 2. Environment Variable +> 3. Configuration File ### [Java](#tab/java) Use one of the following two ways to configure the connection string: ### [Python](#tab/python) -Currently unavailable. +Use one of the following two ways to configure the connection string: ++- Set an environment variable: + + ```console + APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String> + ``` ++- Pass into `configure_azure_monitor`: ++```python +from azure.monitor.opentelemetry import configure_azure_monitor ++configure_azure_monitor( + connection_string="<your-connection-string>", +) +``` export OTEL_TRACES_SAMPLER_ARG=0.1 You might want to enable Azure Active Directory (Azure AD) Authentication for a more secure connection to Azure, which prevents unauthorized telemetry from being ingested into your subscription. #### [.NET](#tab/net)- ++We support the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity#credential-classes). ++- We recommend `DefaultAzureCredential` for local development. +- We recommend `ManagedIdentityCredential` for system-assigned and user-assigned managed identities. + - For system-assigned, use the default constructor without parameters. + - For user-assigned, provide the client ID to the constructor. +- We recommend `ClientSecretCredential` for service principals. + - Provide the tenant ID, client ID, and client secret to the constructor. + ```csharp-Currently unavailable. +var builder = WebApplication.CreateBuilder(args); ++builder.Services.AddOpenTelemetry().UseAzureMonitor(options => { + options.Credential = new DefaultAzureCredential(); +}); ++var app = builder.Build(); ++app.Run(); ``` #### [Java](#tab/java) const appInsights = new ApplicationInsightsClient(config); #### [Python](#tab/python) ```python-Currently unavailable. +from azure.identity import ManagedIdentityCredential +from azure.monitor.opentelemetry import configure_azure_monitor ++configure_azure_monitor( + credential=ManagedIdentityCredential(), +) ``` For more information about Java, see the [Java supplemental documentation](java- #### [Node.js](#tab/nodejs) -1. Install the [OpenTelemetry Collector Exporter](https://www.npmjs.com/package/@opentelemetry/exporter-otlp-http) package in your project. +1. 
Install the [OpenTelemetry Collector Trace Exporter](https://www.npmjs.com/package/@opentelemetry/exporter-trace-otlp-http) package in your project. ```sh- npm install @opentelemetry/exporter-otlp-http + npm install @opentelemetry/exporter-trace-otlp-http ``` 2. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-js/tree/main/examples/otlp-exporter-node). ```javascript const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights");- const { SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base'); - const { OTLPTraceExporter } = require('@opentelemetry/exporter-otlp-http'); + const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base'); + const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http'); const appInsights = new ApplicationInsightsClient(new ApplicationInsightsConfig()); const otlpExporter = new OTLPTraceExporter();- appInsights.getTraceHandler().getTracerProvider().addSpanProcessor(new SimpleSpanProcessor(otlpExporter)); + appInsights.getTraceHandler().addSpanProcessor(new BatchSpanProcessor(otlpExporter)); ``` #### [Python](#tab/python) The following OpenTelemetry configurations can be accessed through environment v ### [.NET](#tab/net) -Currently unavailable. +| Environment variable | Description | +| -- | -- | +| `APPLICATIONINSIGHTS_CONNECTION_STRING` | Set this to the connection string for your Application Insights resource. | +| `APPLICATIONINSIGHTS_STATSBEAT_DISABLED` | Set this to `true` to opt-out of internal metrics collection. | +| `OTEL_RESOURCE_ATTRIBUTES` | Key-value pairs to be used as resource attributes. See the [Resource SDK specification](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.5.0/specification/resource/sdk.md#specifying-resource-information-via-an-environment-variable) for more details. | +| `OTEL_SERVICE_NAME` | Sets the value of the `service.name` resource attribute. If `service.name` is also provided in `OTEL_RESOURCE_ATTRIBUTES`, then `OTEL_SERVICE_NAME` takes precedence. | ### [Java](#tab/java) |
azure-monitor | Opentelemetry Dotnet Exporter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-dotnet-exporter.md | This section provides guidance that shows how to enable OpenTelemetry. The following code demonstrates how to enable OpenTelemetry in a C# console application by setting up OpenTelemetry TracerProvider. This code must be in the application startup. For ASP.NET Core, it's done typically in the `ConfigureServices` method of the application `Startup` class. For ASP.NET applications, it's done typically in `Global.asax.cs`. ```csharp-using System.Diagnostics; -using Azure.Monitor.OpenTelemetry.Exporter; -using OpenTelemetry; -using OpenTelemetry.Trace; - public class Program { private static readonly ActivitySource MyActivitySource = new ActivitySource( Dependencies ### Logs ```csharp-using Azure.Monitor.OpenTelemetry.Exporter; -using OpenTelemetry.Logs; - public class Program { public static void Main() describes the instruments and provides examples of when you might use each one. #### Histogram Example ```csharp-using System.Diagnostics.Metrics; -using Azure.Monitor.OpenTelemetry.Exporter; -using OpenTelemetry; -using OpenTelemetry.Metrics; - public class Program { private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); public class Program #### Counter Example ```csharp-using System.Diagnostics.Metrics; -using Azure.Monitor.OpenTelemetry.Exporter; -using OpenTelemetry; -using OpenTelemetry.Metrics; - public class Program { private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); public class Program #### Gauge Example ```csharp-using System.Diagnostics.Metrics; -using Azure.Monitor.OpenTelemetry.Exporter; -using OpenTelemetry; -using OpenTelemetry.Metrics; - public class Program { private static readonly Meter meter = new("OTel.AzureMonitor.Demo"); using var tracerProvider = Sdk.CreateTracerProviderBuilder() Add `ActivityEnrichingProcessor.cs` to your project with the following code: ```csharp-using System.Diagnostics; -using OpenTelemetry; -using OpenTelemetry.Trace; - public class ActivityEnrichingProcessor : BaseProcessor<Activity> { public override void OnEnd(Activity activity) You might use the following ways to filter out telemetry before it leaves your a Add `ActivityFilteringProcessor.cs` to your project with the following code: ```csharp- using System.Diagnostics; - using OpenTelemetry; - using OpenTelemetry.Trace; - public class ActivityFilteringProcessor : BaseProcessor<Activity> { public override void OnStart(Activity activity) |
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | To enable Azure Monitor Application Insights, you will make a minor modification Add `UseAzureMonitor()` to your application startup. Depending on your version of .NET Core, this will be in either your `startup.cs` or `program.cs` class. ```csharp-using Azure.Monitor.OpenTelemetry.AspNetCore; -using Microsoft.AspNetCore.Builder; -using Microsoft.Extensions.DependencyInjection; - var builder = WebApplication.CreateBuilder(args); -builder.Services.AddOpenTelemetry().UseAzureMonitor( --//Uncomment the line below when setting the Application Insights Connection String via code -//options => options.ConnectionString = "<Your Connection String>" --); +builder.Services.AddOpenTelemetry().UseAzureMonitor(); var app = builder.Build(); Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights ```javascript const { ApplicationInsightsClient, ApplicationInsightsConfig } = require("applicationinsights"); const config = new ApplicationInsightsConfig();--//Uncomment the line below when setting the Application Insights Connection String via code -//config.azureMonitorExporterConfig.connectionString = "<Your Connection String>"; - const appInsights = new ApplicationInsightsClient(config); ``` To paste your Connection String, select from the options below: C. Set via Code - ASP.NET Core, Node.js, and Python Only (Not recommended) - Uncomment the code line with `<Your Connection String>`, and replace the placeholder with *your* unique connection string. + See [Connection String Configuration](opentelemetry-configuration.md#connection-string) for example setting Connection String via code. > [!NOTE] > If you set the connection string in more than one place, we adhere to the following precendence: The following table represents the currently supported custom telemetry types: | Language | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces | |-||-|--|||-|--| | **ASP.NET Core** | | | | | | | |-| OpenTelemetry API | | | Yes | Yes | | Yes | | +| OpenTelemetry API | | Yes | Yes | Yes | | Yes | | | iLogger API | | | | | | | Yes | | AI Classic API | | | | | | | | | | | | | | | | | private static IEnumerable<Measurement<int>> GetThreadState(Process process) } ``` --```csharp -using System.Diagnostics.Metrics; -using Azure.Monitor.OpenTelemetry.Exporter; -using OpenTelemetry; -using OpenTelemetry.Metrics; --public class Program -{ - internal static readonly Meter meter = new("OTel.AzureMonitor.Demo"); -- public static void Main() - { - using var meterProvider = Sdk.CreateMeterProviderBuilder() - .AddMeter("OTel.AzureMonitor.Demo") - .AddAzureMonitorMetricExporter(o => - { - o.ConnectionString = "<Your Connection String>"; - }) - .Build(); -- var process = Process.GetCurrentProcess(); - - ObservableGauge<int> myObservableGauge = meter.CreateObservableGauge("Thread.State", () => GetThreadState(process)); -- System.Console.WriteLine("Press Enter key to exit."); - System.Console.ReadLine(); - } - - private static IEnumerable<Measurement<int>> GetThreadState(Process process) - { - foreach (ProcessThread thread in process.Threads) - { - yield return new((int)thread.ThreadState, new("ProcessId", process.Id), new("ThreadId", thread.Id)); - } - } -} -``` - #### [Java](#tab/java) ```Java To add span attributes, use either of the following two ways: * Add a custom span processor. 
> [!TIP]-> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the httpRequestMessage itself. They can select anything from it and store it as an attribute. +> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the [HttpRequestMessage](/dotnet/api/system.net.http.httprequestmessage) and the [HttpResponseMessage](/dotnet/api/system.net.http.httpresponsemessage) itself. They can select anything from it and store it as an attribute. 1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries: - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)- - [HttpClient and HttpWebRequest](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich) + - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich) 1. Use a custom processor: app.Run(); Add `ActivityEnrichingProcessor.cs` to your project with the following code: ```csharp-using System.Diagnostics; -using OpenTelemetry; -using OpenTelemetry.Trace; - public class ActivityEnrichingProcessor : BaseProcessor<Activity> { public override void OnEnd(Activity activity) class SpanEnrichingProcessor implements SpanProcessor{ onEnd(span: ReadableSpan){ span.attributes["CustomDimension1"] = "value1"; span.attributes["CustomDimension2"] = "value2";- span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>"; } } Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching c #### [Node.js](#tab/nodejs) -Currently unavailable. +Attributes could be added only when calling manual track APIs only, log attributes for console, bunyan and winston are currently not supported. ++```javascript +const config = new ApplicationInsightsConfig(); +config.instrumentations.http = httpInstrumentationConfig; +const appInsights = new ApplicationInsightsClient(config); +const logHandler = appInsights.getLogHandler(); +const attributes = { + "testAttribute1": "testValue1", + "testAttribute2": "testValue2", + "testAttribute3": "testValue3" +}; +logHandler.trackEvent({ + name: "testEvent", + properties: attributes +}); +``` #### [Python](#tab/python) You might use the following ways to filter out telemetry before it leaves your a Add `ActivityFilteringProcessor.cs` to your project with the following code: ```csharp- using System.Diagnostics; - using OpenTelemetry; - using OpenTelemetry.Trace; - public class ActivityFilteringProcessor : BaseProcessor<Activity> { public override void OnStart(Activity activity) To provide feedback: - To enable usage experiences, [enable web or browser user monitoring](javascript.md). ++<!-- PR for Hector--> |
azure-monitor | Data Collection Endpoint Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md | Data collection endpoints (DCEs) provide a connection for certain data sources o ## Data sources that use DCEs The following data sources currently use DCEs: -- [Azure Monitor Agent when network isolation is required](../agents/azure-monitor-agent-data-collection-endpoint.md)-- [Custom logs](../logs/logs-ingestion-api-overview.md)+- [Azure Monitor Agent when network isolation is required](../agents/azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-azure-monitor-agent) +- [Logs ingestion API](../logs/logs-ingestion-api-overview.md) ## Components of a data collection endpoint A DCE includes the following components: A DCE includes the following components: Data collection endpoints are Azure Resource Manager resources created within specific regions. An endpoint in a given region *can only be associated with machines in the same region*. However, you can have more than one endpoint within the same region according to your needs. ## Limitations-Data collection endpoints only support Log Analytics workspaces as a destination for collected data. [Custom metrics (preview)](../essentials/metrics-custom-overview.md) collected and uploaded via Azure Monitor Agent aren't currently controlled by DCEs. They also can't be configured over private links. +Data collection endpoints only support Log Analytics workspaces as a destination for collected data. [Custom metrics (preview)](../essentials/metrics-custom-overview.md) collected and uploaded via Azure Monitor Agent aren't currently controlled by DCEs. Data collection endpoints also can't be configured over private links. ## Create a data collection endpoint |
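As a hedged sketch of the creation step introduced above, a data collection endpoint can be created from the Azure CLI; the resource group, endpoint name, and region below are illustrative assumptions, and depending on your CLI version the command may require the monitor-control-service extension.

```bash
# Assumed example names; adjust the resource group, endpoint name, and region for your environment.
az monitor data-collection endpoint create \
  --resource-group my-resource-group \
  --name my-data-collection-endpoint \
  --location eastus \
  --public-network-access "Enabled"
```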
azure-monitor | Migrate To Batch Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-batch-api.md | + + Title: Migrate from the metrics API to the getBatch API +description: How to migrate from the metrics API to the getBatch API +++ Last updated : 05/07/2023+++#customer-intent: As a customer, I want to understand how to migrate from the metrics API to the getBatch API ++# How to migrate from the metrics API to the getBatch API ++Heavy use of the [metrics API](https://learn.microsoft.com/rest/api/monitor/metrics/list?tabs=HTTP) can result in throttling or performance problems. Migrating to the [metrics:getBatch](https://learn.microsoft.com/rest/api/monitor/metrics-data-plane/batch?tabs=HTTP) API allows you to query multiple resources in a single REST request. The two APIs share a common set of query parameter and response formats that make migration easy. ++## Request format + The metrics:getBatch API request has the following format: + ```http +POST /subscriptions/<subscriptionId>/metrics:getBatch?metricNamespace=<resource type namespace>&api-version=2023-03-01-preview +Host: <region>.metrics.monitor.azure.com +Content-Type: application/json +Authorization: Bearer <token> +{ + "resourceids":[<comma separated list of resource IDs>] +} ++``` ++For example, +```http +POST /subscriptions/12345678-1234-1234-1234-123456789abc/metrics:getBatch?metricNamespace=microsoft.compute/virtualMachines&api-version=2023-03-01-preview +Host: eastus.metrics.monitor.azure.com +Content-Type: application/json +Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhb...TaXzf6tmC4jhog +{ + "resourceids":["/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/rg-vms-01/providers/Microsoft.Compute/virtualMachines/vmss-001_41df4bb9", + "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/rg-vms-02/providers/Microsoft.Compute/virtualMachines/vmss-001_c1187e2f" +] +} + ``` ++## Batching restrictions ++Consider the following restrictions on which resources can be batched together when deciding if the metrics:getBatch API is the correct choice for your scenario. ++- All resources in a batch must be in the same subscription. +- All resources in a batch must be in the same Azure region. +- All resources in a batch must be the same resource type. ++To help identify groups of resources that meet these criteria, run the following Azure Resource Graph query using the [Azure Resource Graph Explorer](https://portal.azure.com/#view/HubsExtension/ArgQueryBlade), or via the [Azure Resource Manager Resources query API](https://learn.microsoft.com/rest/api/azureresourcegraph/resourcegraph(2021-03-01)/resources/resources?tabs=HTTP). ++``` + resources + | project id, subscriptionId, ['type'], location + | order by subscriptionId, ['type'], location +``` ++## Request conversion steps ++To convert an existing metrics API call to a metric:getBatch API call, follow these steps: ++Assume the following API call is being used to request metrics: ++```http +GET https://management.azure.com/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount/providers/microsoft.Insights/metrics?timespan=2023-04-20T12:00:00.000Z/2023-04-22T12:00:00.000Z&interval=PT6H&metricNamespace=microsoft.storage%2Fstorageaccounts&metricnames=Ingress,Egress&aggregation=total,average,minimum,maximum&top=10&orderby=total desc&$filter=ApiName eq '*'&api-version=2019-07-01 +``` ++1. Change the hostname. 
+ Replace `management.azure.com` with a regional endpoint for the Azure Monitor Metrics data plane using the following format: `<region>.metrics.monitor.azure.com` where `region` is the region of the resources you're requesting metrics for. For example, if the resources are in westus2, the hostname is `westus2.metrics.monitor.azure.com`. ++1. Change the API name and path. + The metrics:getBatch API is a subscription level POST API. The resources for which the metrics are requested are specified in the request body rather than in the URL path. + Change the URL path as follows: + from `/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount/providers/microsoft.Insights/metrics` + to `/subscriptions/12345678-1234-1234-1234-123456789abc/metrics:getBatch` ++1. The `metricNamespace` query param is required for metrics:getBatch. For Azure standard metrics, the namespace name is usually the resource type of the resources you've specified. To check the namespace value to use, see the [metrics namespaces API](https://learn.microsoft.com/rest/api/monitor/metric-namespaces/list?tabs=HTTP). +1. Update the api-version query parameter as follows: `&api-version=2023-03-01-preview` +1. The filter query param isn't prefixed with a `$` in the metrics:getBatch API. Change the query param from `$filter=` to `filter=`. +1. The metrics:getBatch API is a POST call with a body that contains a comma-separated list of resource IDs in the following format: + ```http + { + "resourceids": [ + <comma separated list of resource ids> + ] + } + ``` + + For example: + ```http + { + "resourceids": [ + "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount" + ] + } + ``` ++ Up to 50 unique resource IDs may be specified in each call. Each resource must belong to the same subscription and region, and be of the same resource type. ++> [!IMPORTANT] +> + The `resourceids` object property in the body must be lower case. +> + Ensure that there are no trailing commas on your last resource ID in the array list. ++### Converted batch request ++The following example shows the converted batch request.
+```http + POST https://westus2.metrics.monitor.azure.com/subscriptions/12345678-1234-1234-1234-123456789abc/metrics:getBatch?timespan=2023-04-20T12:00:00.000Z/2023-04-22T12:00:00.000Z&interval=PT6H&metricNamespace=microsoft.storage%2Fstorageaccounts&metricnames=Ingress,Egress&aggregation=total,average,minimum,maximum&top=10&orderby=total desc&filter=ApiName eq '*'&api-version=2023-03-01-preview + + { + "resourceids": [ + "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount", + "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample2-test-rg/providers/Microsoft.Storage/storageAccounts/eax252qtemplate", + "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample3/providers/Microsoft.Storage/storageAccounts/sample3diag", + "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample3/providers/Microsoft.Storage/storageAccounts/sample3prefile", + "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample3/providers/Microsoft.Storage/storageAccounts/sample3tipstorage", + "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample3backups/providers/Microsoft.Storage/storageAccounts/pod01account1" + ] + } +``` ++## Response Format ++The response format of the metrics:getBatch API encapsulates a list of individual metrics call responses in the following format: +```http + { + "values": [ + <One individual metrics response per requested resourceId> + ] + } +``` ++A `resourceid` property has been added to each resources' metrics list in the metrics:getBatch API response. ++ The following show sample response formats. ++### [Individual response](#tab/individual-response) ++ ```http + { + "cost": 11516, + "timespan": "2023-04-20T12:00:00Z/2023-04-22T12:00:00Z", + "interval": "P1D", + "value": [ + { + "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount/providers/Microsoft.Insights/metrics/Ingress", + "type": "Microsoft.Insights/metrics", + "name": { + "value": "Ingress", + "localizedValue": "Ingress" + }, + "displayDescription": "The amount of ingress data, in bytes. 
This number includes ingress from an external client into Azure Storage as well as ingress within Azure.", + "unit": "Bytes", + "timeseries": [ + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "EntityGroupTransaction" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 1737897, + "average": 5891.17627118644, + "minimum": 1674, + "maximum": 10976 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 1712543, + "average": 5946.329861111111, + "minimum": 1674, + "maximum": 10980 + } + ] + }, + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "GetBlobServiceProperties" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 1284, + "average": 321, + "minimum": 321, + "maximum": 321 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 1926, + "average": 321, + "minimum": 321, + "maximum": 321 + } + ] + } + ], + "errorCode": "Success" + }, + { + "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount/providers/Microsoft.Insights/metrics/Egress", + "type": "Microsoft.Insights/metrics", + "name": { + "value": "Egress", + "localizedValue": "Egress" + }, + "displayDescription": "The amount of egress data. This number includes egress to external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.", + "unit": "Bytes", + "timeseries": [ + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "EntityGroupTransaction" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 249603, + "average": 846.1118644067797, + "minimum": 839, + "maximum": 1150 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 244652, + "average": 849.4861111111111, + "minimum": 839, + "maximum": 1150 + } + ] + }, + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "GetBlobServiceProperties" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 3772, + "average": 943, + "minimum": 943, + "maximum": 943 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 5658, + "average": 943, + "minimum": 943, + "maximum": 943 + } + ] + } + ], + "errorCode": "Success" + } + ], + "namespace": "microsoft.storage/storageaccounts", + "resourceregion": "westus2" + } +``` +### [metrics:getBatch Response](#tab/batch-response) ++```http + { + "values": [ + { + "cost": 11516, + "timespan": "2023-04-20T12:00:00Z/2023-04-22T12:00:00Z", + "interval": "P1D", + "value": [ + { + "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount/providers/Microsoft.Insights/metrics/Ingress", + "type": "Microsoft.Insights/metrics", + "name": { + "value": "Ingress", + "localizedValue": "Ingress" + }, + "displayDescription": "The amount of ingress data, in bytes. 
This number includes ingress from an external client into Azure Storage as well as ingress within Azure.", + "unit": "Bytes", + "timeseries": [ + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "EntityGroupTransaction" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 1737897, + "average": 5891.17627118644, + "minimum": 1674, + "maximum": 10976 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 1712543, + "average": 5946.329861111111, + "minimum": 1674, + "maximum": 10980 + } + ] + }, + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "GetBlobServiceProperties" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 1284, + "average": 321, + "minimum": 321, + "maximum": 321 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 1926, + "average": 321, + "minimum": 321, + "maximum": 321 + } + ] + } + ], + "errorCode": "Success" + }, + { + "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount/providers/Microsoft.Insights/metrics/Egress", + "type": "Microsoft.Insights/metrics", + "name": { + "value": "Egress", + "localizedValue": "Egress" + }, + "displayDescription": "The amount of egress data. This number includes egress to external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.", + "unit": "Bytes", + "timeseries": [ + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "EntityGroupTransaction" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 249603, + "average": 846.1118644067797, + "minimum": 839, + "maximum": 1150 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 244652, + "average": 849.4861111111111, + "minimum": 839, + "maximum": 1150 + } + ] + }, + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "GetBlobServiceProperties" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 3772, + "average": 943, + "minimum": 943, + "maximum": 943 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 5658, + "average": 943, + "minimum": 943, + "maximum": 943 + } + ] + } + ], + "errorCode": "Success" + } + ], + "namespace": "microsoft.storage/storageaccounts", + "resourceregion": "westus2", + "resourceid": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount" + }, + { + "cost": 11516, + "timespan": "2023-04-20T12:00:00Z/2023-04-22T12:00:00Z", + "interval": "P1D", + "value": [ + { + "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample3/providers/Microsoft.Storage/storageAccounts/sample3diag/providers/Microsoft.Insights/metrics/Ingress", + "type": "Microsoft.Insights/metrics", + "name": { + "value": "Ingress", + "localizedValue": "Ingress" + }, + "displayDescription": "The amount of ingress data, in bytes. 
This number includes ingress from an external client into Azure Storage as well as ingress within Azure.", + "unit": "Bytes", + "timeseries": [ + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "EntityGroupTransaction" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 867509, + "average": 5941.842465753424, + "minimum": 1668, + "maximum": 10964 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 861018, + "average": 5979.291666666667, + "minimum": 1668, + "maximum": 10964 + } + ] + }, + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "GetBlobServiceProperties" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 1268, + "average": 317, + "minimum": 312, + "maximum": 332 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 1560, + "average": 312, + "minimum": 312, + "maximum": 312 + } + ] + } + ], + "errorCode": "Success" + }, + { + "id": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample3/providers/Microsoft.Storage/storageAccounts/sample3diag/providers/Microsoft.Insights/metrics/Egress", + "type": "Microsoft.Insights/metrics", + "name": { + "value": "Egress", + "localizedValue": "Egress" + }, + "displayDescription": "The amount of egress data. This number includes egress to external client from Azure Storage as well as egress within Azure. As a result, this number does not reflect billable egress.", + "unit": "Bytes", + "timeseries": [ + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "EntityGroupTransaction" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 124316, + "average": 851.4794520547945, + "minimum": 839, + "maximum": 1150 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 122943, + "average": 853.7708333333334, + "minimum": 839, + "maximum": 1150 + } + ] + }, + { + "metadatavalues": [ + { + "name": { + "value": "apiname", + "localizedValue": "apiname" + }, + "value": "GetBlobServiceProperties" + } + ], + "data": [ + { + "timeStamp": "2023-04-20T12:00:00Z", + "total": 3384, + "average": 846, + "minimum": 846, + "maximum": 846 + }, + { + "timeStamp": "2023-04-21T12:00:00Z", + "total": 4230, + "average": 846, + "minimum": 846, + "maximum": 846 + } + ] + } + ], + "errorCode": "Success" + } + ], + "namespace": "microsoft.storage/storageaccounts", + "resourceregion": "westus2", + "resourceid": "/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample3/providers/Microsoft.Storage/storageAccounts/sample3diag" + } + ] + } +``` +++## Error response changes ++In the metrics:getBatch error response, the error content is wrapped inside a top level "error" property on the response. For example, +++ Metrics API error response++ ```http + { + "code": "BadRequest", + "message": "Metric: Ingress does not support requested dimension combination: apiname2, supported ones are: GeoType,ApiName,Authentication, TraceId: {ab11d9c2-b17e-4f75-8986-b314ecacbe11}" + } + ``` +++ Batch API error response:++ ```http + { + "error": { + "code": "BadRequest", + "message": "Metric: Egress does not support requested dimension combination: apiname2, supported ones are: GeoType,ApiName,Authentication, TraceId: {cd77617d-5f11-e50d-58c5-ba2f2cdc38ce}" + } + } + ``` ++## Troubleshooting +++ No returned data can be due to the wrong region being specified. 
+ The batch API verifies that all of the resource IDs specified belong to the same subscriptionId and resource type. The batch API doesn't verify that all of the specified resource IDs are in the same region specified in the hostname. The only indicator that the region may be wrong for a given resource is getting empty time series data for that resource. + For example, `"timeseries": [],` +++ Custom metrics aren't currently supported. + The metrics:getBatch API doesn't support querying custom metrics, or queries where the metric namespace name isn't a resource type. This is the case for VM Guest OS metrics that use the namespace "azure.vm.windows.guestmetrics" or "azure.vm.linux.guestmetrics". +++ The top parameter applies per resource ID specified. +How the top parameter works in the context of the batch API can be a little confusing. Rather than enforcing a limit on the total time series returned by the entire call, it enforces a limit on the total time series returned *per metric per resource ID*. For example, if you have a batch query with many '*' filters specified, two metrics, and four resource IDs with a top of 5, the maximum possible number of time series returned by that query is 40, that is, 2 x 4 x 5 time series. ++### 401 authorization errors ++The individual metrics API requires a user to have the [Monitoring Reader](https://learn.microsoft.com/azure/role-based-access-control/built-in-roles#monitoring-reader) permission on the resource being queried. Because the metrics:getBatch API is a subscription level API, users must have the Monitoring Reader permission for the queried subscription to use the batch API. Even if users have Monitoring Reader on all the resources being queried in the batch API, the request fails if the user doesn't have Monitoring Reader on the subscription itself. ++### 529 throttling errors ++While the data plane batch API is designed to help mitigate throttling problems, 529 error codes can still occur, which indicates that the metrics backend is currently throttling some customers. The recommended action is to implement an exponential backoff retry scheme. ++## Paging +Paging is not supported by the metrics:getBatch API. The most common use case for this API is calling frequently, every few minutes, for the same metrics and resources over the latest timeframe. Low latency is an important consideration, so many customers parallelize their queries as much as possible. Paging forces customers into a sequential calling pattern that introduces additional query latency. In scenarios where requests return volumes of metric data large enough that paging would be beneficial, it's recommended to split the query into multiple parallel queries. ++## Billing +All metrics data plane and batching calls are billed. For more information, see the **Azure Monitor native metrics** section in [Basic Log Search Queries](https://azure.microsoft.com/pricing/details/monitor/#pricing). |
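To make the request flow described above concrete, here is a minimal curl sketch of a metrics:getBatch call; the bearer token, subscription ID, and resource ID are placeholders, and the query parameters follow the converted example earlier in this row.

```bash
# Placeholder token and IDs; region, namespace, and api-version mirror the converted example above.
TOKEN="<bearer-token>"
curl -X POST \
  "https://westus2.metrics.monitor.azure.com/subscriptions/12345678-1234-1234-1234-123456789abc/metrics:getBatch?metricNamespace=microsoft.storage%2Fstorageaccounts&metricnames=Ingress,Egress&aggregation=total,average&interval=PT6H&api-version=2023-03-01-preview" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"resourceids":["/subscriptions/12345678-1234-1234-1234-123456789abc/resourceGroups/sample-test/providers/Microsoft.Storage/storageAccounts/testaccount"]}'
```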
azure-monitor | Prometheus Metrics Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md | Use any of the following methods to install the Azure Monitor agent on your AKS - Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in the Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`. - The aks-preview extension must be installed by using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).-- The aks-preview version 0.5.136 or higher is required for this feature. Check the aks-preview version by using the `az version` command.+- The aks-preview version 0.5.138 or higher is required for this feature. Check the aks-preview version by using the `az version` command. #### Install the metrics add-on |
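The prerequisites called out above can be run from the Azure CLI; a short sketch follows (subscription context and timing of feature registration are left to your environment):

```bash
# Register the preview feature flag in the AKS cluster's subscription.
az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview

# Install or update the aks-preview extension (0.5.138 or higher is required).
az extension add --name aks-preview
az extension update --name aks-preview

# Check the installed CLI and extension versions.
az version
```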
azure-monitor | Prometheus Remote Write Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-active-directory.md | Title: Remote-write in Azure Monitor Managed Service for Prometheus using Azure Active Directory (preview) description: Describes how to configure remote-write to send data from self-managed Prometheus running in your Kubernetes cluster running on-premises or in another cloud using Azure Active Directory authentication. -+ Last updated 11/01/2022 This step is only required if you didn't enable Azure Key Vault Provider for Sec | Value | Description | |:|:| | `<CLUSTER-NAME>` | Name of your AKS cluster |- | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230323.1`<br>This is the remote write container image version. | + | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230505.1`<br>This is the remote write container image version. | | `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace | | `<APP-REGISTRATION -CLIENT-ID> ` | Client ID of your application | | `<TENANT-ID> ` | Tenant ID of the Azure Active Directory application | |
azure-monitor | Prometheus Remote Write Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-managed-identity.md | Title: Remote-write in Azure Monitor Managed Service for Prometheus using managed identity (preview) description: Describes how to configure remote-write to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using managed identity authentication. -+ Last updated 11/01/2022 This step isn't required if you're using an AKS identity since it will already h | Value | Description | |:|:| | `<AKS-CLUSTER-NAME>` | Name of your AKS cluster |- | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230323.1`<br>This is the remote write container image version. | + | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230505.1`<br>This is the remote write container image version. | | `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace | | `<MANAGED-IDENTITY-CLIENT-ID>` | **Client ID** from the **Overview** page for the managed identity | | `<CLUSTER-NAME>` | Name of the cluster Prometheus is running on | |
azure-monitor | Tutorial Logs Ingestion Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md | A [data collection endpoint](../essentials/data-collection-endpoint-overview.md) ## Create new table in Log Analytics workspace Before you can send data to the workspace, you need to create the custom table where the data will be sent.+ > [!NOTE]-> The table creation for a log ingestion API custom log below can't be used to create a [agent custom log table](..agents/data-collection-text-log.md). You must use the CLI or custom template process to create the table. If you do not have sufficent rights to run CLI or custom tempate you must ask your adminitrator to add the table for you. +> The table creation for a log ingestion API custom log below can't be used to create an [agent custom log table](../agents/data-collection-text-log.md). You must use the CLI or custom template process to create the table. If you do not have sufficient rights to run the CLI or custom template process, you must ask your administrator to add the table for you. 1. Go to the **Log Analytics workspaces** menu in the Azure portal and select **Tables**. The tables in the workspace will appear. Select **Create** > **New custom log (DCR based)**. |
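As the note above says, an agent custom log table has to be created through the CLI or a custom template rather than this portal flow. A hedged sketch using the Azure CLI follows; the resource group, workspace, table name, and columns are illustrative assumptions only.

```bash
# Hypothetical workspace and table names; custom table names must end in _CL.
az monitor log-analytics workspace table create \
  --resource-group my-resource-group \
  --workspace-name my-workspace \
  --name MyTextLogs_CL \
  --columns TimeGenerated=datetime RawData=string
```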
azure-monitor | Vminsights Log Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md | VM insights collects performance and connection metrics, computer and process in ## Map records +> [!IMPORTANT] +> If your virtual machine is using VM insights with Azure Monitor agent, then you must have [processes and dependencies enabled](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent) for these tables to be created. + One record is generated per hour for each unique computer and process, in addition to the records that are generated when a process or computer starts or is added to VM insights. The fields and values in the ServiceMapComputer_CL events map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the ServiceMapProcess_CL events map to the fields of the Process resource in the ServiceMap Azure Resource Manager API. The ResourceName_s field matches the name field in the corresponding Resource Manager resource. There are internally generated properties you can use to identify unique processes and computers: The performance counters currently collected into the *InsightsMetrics* table ar |:|:|:|:|:| | Computer | Heartbeat | Computer Heartbeat | | | | Memory | AvailableMB | Memory Available Bytes | Megabytes | memorySizeMB - Total memory size|-| Network | WriteBytesPerSecond | Network Write Bytes Per Second | BytesPerSecond | NetworkDeviceId - Id of the device<br>bytes - Total sent bytes | -| Network | ReadBytesPerSecond | Network Read Bytes Per Second | BytesPerSecond | networkDeviceId - Id of the device<br>bytes - Total received bytes | +| Network | WriteBytesPerSecond | Network Write Bytes Per Second | BytesPerSecond | NetworkDeviceId - ID of the device<br>bytes - Total sent bytes | +| Network | ReadBytesPerSecond | Network Read Bytes Per Second | BytesPerSecond | networkDeviceId - ID of the device<br>bytes - Total received bytes | | Processor | UtilizationPercentage | Processor Utilization Percentage | Percent | totalCpus - Total CPUs | | LogicalDisk | WritesPerSecond | Logical Disk Writes Per Second | CountPerSecond | mountId - Mount ID of the device | | LogicalDisk | WriteLatencyMs | Logical Disk Write Latency Millisecond | MilliSeconds | mountId - Mount ID of the device | The performance counters currently collected into the *InsightsMetrics* table ar * If you are new to writing log queries in Azure Monitor, review [how to use Log Analytics](../logs/log-analytics-tutorial.md) in the Azure portal to write log queries. -* Learn about [writing search queries](../logs/get-started-queries.md). +* Learn about [writing search queries](../logs/get-started-queries.md). |
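For readers who prefer a shell over the portal, the VM insights tables described in this row can also be queried with the Azure CLI; a hedged sketch follows, where the workspace GUID is a placeholder and the command relies on the log-analytics CLI extension.

```bash
# Placeholder workspace customer ID (GUID); sample query against the InsightsMetrics table.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "InsightsMetrics | where Namespace == 'Processor' | take 10" \
  --output table
```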
azure-netapp-files | Azacsnap Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md | For specific information on Preview features, refer to the [AzAcSnap Preview](az ## May-2023 -### AzAcSnap 8 (Build: 1AC073A) +### AzAcSnap 8 (Build: 1AC279E) AzAcSnap 8 is being released with the following fixes and improvements: AzAcSnap 8 is being released with the following fixes and improvements: - Backup (`-c backup`) changes: - Fix for incorrect error output when using `-c backup` and the database has 'backint' configured. - Remove lower-case conversion for anfBackup rename-only option using `-c backup` so the snapshot name maintains case of Volume name.+ - Fix for when a snapshot is created even though SAP HANA wasn't put into backup-mode. Now if SAP HANA cannot be put into backup-mode, AzAcSnap will immediately exit with an error. - Details (`-c details`) changes: - Fix for listing snapshot details with `-c details` when using Azure Large Instance storage. - Logging enhancements: |
azure-netapp-files | Performance Oracle Multiple Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-oracle-multiple-volumes.md | + + Title: Oracle database performance on Azure NetApp Files multiple volumes | Microsoft Docs +description: Migrating highly performant Exadata grade databases to the cloud is increasingly becoming an imperative for Microsoft customers. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 05/04/2023++++# Oracle database performance on Azure NetApp Files multiple volumes ++Migrating highly performant Exadata grade databases to the cloud is increasingly becoming an imperative for Microsoft customers. Supply chain software suites typically set the bar high due to the intense demands on storage I/O with a mixed read and write workload driven by a single compute node. Azure infrastructure in combination with Azure NetApp Files is able to meet the needs of this highly demanding workload. This article presents an example of how this demand was met for one customer and how Azure can meet the demands of your critical Oracle workloads. ++## Enterprise scale Oracle performance ++When exploring the upper limits of performance, it's important to recognize and reduce any constraints that could falsely skew results. For example, if the intent is to prove performance capabilities of a storage system, the client should ideally be configured so that CPU does not become a limiting factor before storage performance limits are reached. To that end, testing started with the E104ids_v5 instance type, as this VM comes equipped not just with a 100 GBps network interface, but with an equally large (100 GBps) egress limit. ++The testing occurred in two phases: ++1. The first phase focused on testing using Kevin Closson's now industry standard SLOB2 (Silly Little Oracle Benchmark) tool - [version 2.5.4](https://github.com/therealkevinc/SLOB_2.5.4). The goal was to drive as much Oracle I/O as possible from one virtual machine (VM) to multiple Azure NetApp Files volumes, and then scale out using more databases to demonstrate linear scaling. +2. After testing scaling limits, our testing pivoted to the less expensive but almost as capable E96ds_v5 for a customer phase of testing using a true Supply Chain application workload and real-world data. ++### SLOB2 scale-up performance ++The following charts capture the performance profile of a single E104ids_v5 Azure VM running a single Oracle 19c database against eight Azure NetApp Files volumes with eight storage endpoints. The volumes are spread across three ASM disk groups: data, log, and archive. Five volumes were allocated to the data disk group, two volumes to the log disk group, and one volume to the archive disk group. All results captured throughout this article were collected using production Azure regions and active production Azure services. ++#### Single-host architecture ++The following diagram depicts the architecture that testing was completed against; note the Oracle database spread across multiple Azure NetApp Files volumes and endpoints. +++#### Single-host storage IO ++The following diagram shows a 100% randomly selected workload with a database buffer hit ratio of about 8%. SLOB2 was able to drive approximately 850,000 I/O requests per second while maintaining a submillisecond DB file sequential read event latency. With a database block size of 8K, that amounts to approximately 6,800 MiB/s of storage throughput.
++++#### Single-host throughput ++The following diagram demonstrates that, for bandwidth intensive sequential IO workloads such as full table scans or RMAN activities, Azure NetApp Files can deliver the full bandwidth capabilities of the E104ids_v5 VM itself. +++>[!NOTE] +>As the compute instance is at the theoretical maximum of its bandwidth, adding additional application concurrency results only in increased client-side latency. This results in SLOB2 workloads exceeding the targeted completion timeframe; therefore, thread count was capped at six. ++### SLOB2 scale-out performance ++The following charts capture the performance profile of three E104ids_v5 Azure VMs each running a single Oracle 19c database and each with their own set of Azure NetApp Files volumes and an identical ASM disk group layout as described in the Scale up performance section. The graphics show that with Azure NetApp Files multi-volume/multi-endpoint, performance easily scales out with consistency and predictability. ++#### Multi-host architecture ++The following diagram depicts the architecture that testing was completed against; note the three Oracle databases spread across multiple Azure NetApp Files volumes and endpoints. Endpoints can be dedicated to a single host as shown with Oracle VM 1 or shared among hosts as shown with Oracle VM 2 and Oracle VM 3. +++#### Multi-host storage IO ++The following diagram shows a 100% randomly selected workload with a database buffer hit ratio of about 8%. SLOB2 was able to drive approximately 850,000 I/O requests per second across all three hosts individually. SLOB2 was able to accomplish this while executing in parallel to a collective total of about 2,500,000 I/O requests per second, with each host still maintaining a submillisecond db file sequential read event latency. With a database block size of 8K, this amounts to approximately 20,000 MiB/s between the three hosts. +++#### Multi-host throughput ++The following diagram demonstrates that for sequential workloads, Azure NetApp Files can still deliver the full bandwidth capabilities of the E104ids_v5 VM itself even as it scales outward. SLOB2 was able to drive I/O totaling over 30,000 MiB/s across the three hosts while running in parallel. +++#### Real-world performance ++After scaling limits were tested with SLOB2, tests were conducted with a real-world supply chain application suite against Oracle on Azure NetApp Files with excellent results. The following data from the Oracle Automatic Workload Repository (AWR) report is a highlighted look at how one specific critical job performed. ++This database has significant extra IO going on in addition to the application workload due to flashback being enabled and has a database block size of 16k. From the IO profile section of the AWR report, it's apparent that there is a heavy ratio of writes in comparison to reads. ++| | Read and write per second | Read per second | Write per second | +| - | -- | -- | -- | +| Total (MB) | 4,988.1 | 1,395.2 | 3,592.9 | ++Despite the db file sequential read wait event showing a higher latency at 2.2 ms than in the SLOB2 testing, this customer saw a fifteen-minute reduction in job execution time coming from a RAC database on Exadata to a single instance database in Azure.
Finding these resource constraints and working through them is vital to any successful deployment. This section illuminates various constraints you may expect to encounter in just such an environment and how to work through them. In each subsection, expect to learn both best practices and rationale behind them. ++### Virtual machines ++This section details the criteria to be considered in selecting [VMs](../virtual-machines/sizes.md) for best performance and the rationale behind selections made for testing. Azure NetApp Files is a Network Attached Storage (NAS) service; therefore, appropriate network bandwidth sizing is critical for optimal performance. ++#### Chipsets ++The first topic of interest is chipset selection. Make sure that whatever VM SKU you select is built on a single chipset for consistency reasons. The Intel variant of E_v5 VMs runs on a third-generation Intel Xeon Platinum 8370C (Ice Lake) configuration. All VMs in this family come equipped with a single 100 GBps network interface. In contrast, the E_v3 series, mentioned by way of example, is built on four separate chipsets, with various physical network bandwidths. The four chipsets used in the E_v3 family (Broadwell, Skylake, Cascade Lake, Haswell) have different processor speeds, which affect the performance characteristics of the machine. ++Read the [Azure Compute documentation](/azure/architecture/guide/technology-choices/compute-decision-tree) carefully, paying attention to chipset options. Also refer to [Azure VM SKUs best practices for Azure NetApp Files](performance-virtual-machine-sku.md). Selecting a VM with a single chipset is preferable for best consistency. ++#### Available network bandwidth ++It's important to understand the difference between the available bandwidth of the VM network interface and the metered bandwidth applied against the same. When [Azure Compute documentation](../virtual-network/virtual-machine-network-throughput.md) speaks to network bandwidth limits, these limits are applied on egress (write) only. Ingress (read) traffic is not metered and as such is limited only by the physical bandwidth of the NIC itself. The network bandwidth of most VMs outpaces the egress limit applied against the machine. ++As Azure NetApp Files volumes are network attached, the egress limit can be understood as being applied against writes specifically, whereas ingress is defined as reads and read-like workloads. While the egress limit of most machines is less than the network bandwidth of the NIC, the same cannot be said for the E104_v5 used in testing for this article. The E104_v5 has a 100 GBps NIC with the egress limit set at 100 GBps as well. By comparison, the E96_v5, with its 100 GBps NIC, has an egress limit of 35 GBps with ingress unfettered at 100 GBps. As VMs decrease in size, egress limits decrease but ingress remains unfettered by logically imposed limits. ++Egress limits are VM-wide and are applied as such against all network-based workloads. When using Oracle Data Guard, all writes are doubled to archive logs and must be factored into egress limit considerations. This is also true for archive log with multi-destination and RMAN, if used. When selecting VMs, familiarize yourself with command line tools such as `ethtool`, which expose the configuration of the NIC, as Azure does not document network interface configurations. ++#### Network concurrency ++Azure VMs and Azure NetApp Files volumes come equipped with specific amounts of bandwidth.
As shown earlier, so long as a VM has sufficient CPU headroom, a workload can in theory consume the bandwidth made available to it--that is, within the limits of the network card and/or egress limit applied. In practice, however, the amount of throughput achievable is predicated upon the concurrency of the workload at the network, that is, the number of network flows and network endpoints. ++Read the [network flow limits](../virtual-network/virtual-machine-network-throughput.md#network-flow-limits) section of the VM network bandwidth document for a greater understanding of these limits. The takeaway: the more network flows connecting client to storage, the richer the potential performance. ++Oracle supports two separate NFS clients, Kernel NFS and [Direct NFS (dNFS)](https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/about-direct-nfs-client-mounts-to-nfs-storage-devices.html). Kernel NFS, until recently, supported a single network flow between two endpoints (compute - storage). Direct NFS, the more performant of the two, supports a variable number of network flows - tests have shown hundreds of unique connections per endpoint - increasing or decreasing as load demands. Due to the scaling of network flows between two endpoints, Direct NFS is far preferred over Kernel NFS, and as such, is the recommended configuration. The Azure NetApp Files product group does not recommend using Kernel NFS with Oracle workloads. For more information, refer to the [Benefits of using Azure NetApp Files with Oracle Database](solutions-benefits-azure-netapp-files-oracle-database.md). ++#### Execution concurrency ++Utilizing Direct NFS, a single chipset for consistency, and understanding network bandwidth constraints only takes you so far. In the end, the application drives performance. Proofs of concept using SLOB2 and proofs of concept using a real-world supply chain application suite against real customer data were able to drive significant amounts of throughput only because the applications ran at high degrees of concurrency; the former using a significant number of threads per schema, the latter using multiple connections from multiple application servers. In short, concurrency drives workload: low concurrency--low throughput, high concurrency--high throughput, so long as the infrastructure is in place to support the same. ++#### Accelerated networking ++Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, which reduces latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types. When deploying VMs through configuration management utilities such as terraform or command line, be aware that accelerated networking is not enabled by default. For optimal performance, enable accelerated networking. Take note, accelerated networking is enabled or disabled on a network interface by network interface basis. The accelerated networking feature is one that may be enabled or disabled dynamically. ++>[!NOTE] +>This article contains references to the term `SLAVE`, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. ++An authoritative approach to ensuring accelerated networking is enabled for a NIC is via the Linux terminal. If accelerated networking is enabled for a NIC, a second virtual NIC is present, associated with the first NIC.
This second NIC is configured by the system with the `SLAVE` flag enabled. If no NIC is present with the `SLAVE` flag, accelerated networking is not enabled for that interface. ++In the scenario where multiple NICs are configured, you need to determine which `SLAVE` interface is associated with the NIC used to mount the NFS volume. Adding network interface cards to the VM has no effect on performance. ++Use the following process to identify the mapping between a configured network interface and its associated virtual interface. This process validates that accelerated networking is enabled for a specific NIC on your Linux machine and displays the physical ingress speed the NIC can potentially achieve. ++1. Execute the `ip a` command: + :::image type="content" alt-text="Screenshot of output of ip a command." source="../media/azure-netapp-files/ip-a-command-output.png"::: +1. List the `/sys/class/net/` directory of the NIC ID you are verifying (`eth0` in the example) and `grep` for the word lower: + ```bash + ls /sys/class/net/eth0 | grep lower + lower_eth1 + ``` +1. Execute the `ethtool` command against the ethernet device identified as the lower device in the previous step. + :::image type="content" alt-text="Screenshot of output of settings for eth1." source="../media/azure-netapp-files/ethtool-output.png"::: ++#### Azure VM: Network vs. disk bandwidth limits ++A level of expertise is required when reading Azure VM performance limits documentation. Be aware of: +* Temp storage throughput and IOPS numbers refer to the performance capabilities of the ephemeral on-box storage directly attached to the VM. +* Uncached disk throughput and I/O numbers refer specifically to Azure Disk (Premium, Premium v2, and Ultra) and have no bearing on network attached storage such as Azure NetApp Files. +* Attaching additional NICs to the VM has no impact on performance limits or performance capabilities of the VM (documented and tested to be true). +* Maximum network bandwidth refers to egress limits (that is, writes when Azure NetApp Files is involved) applied against the VM network bandwidth. No ingress limits (that is, reads when Azure NetApp Files is involved) are applied. Given enough CPU, enough network concurrency, and rich enough endpoints, a VM could theoretically drive ingress traffic to the limits of the NIC. As mentioned in the [Available network bandwidth](#available-network-bandwidth) section, use tools such as `ethtool` to see the bandwidth of the NIC. ++A sample chart is shown for reference: +++### Azure NetApp Files ++The Azure first-party storage service Azure NetApp Files provides a highly available, fully managed storage solution capable of supporting the demanding Oracle workloads introduced earlier. ++As the limits of scale-up storage performance in an Oracle database are well [understood](performance-oracle-single-volumes.md), this article intentionally focuses on scale-out storage performance. Scaling out storage performance implies giving a single Oracle instance access to many Azure NetApp Files volumes where these volumes are distributed over multiple storage endpoints. ++By scaling a database workload across multiple volumes in such a way, the performance of the database is untethered from both volume and endpoint upper limits. With the storage no longer imposing performance limitations, the VM architecture (CPU, NIC, and VM egress limits) becomes the chokepoint to contend with.
As noted in the [VM section](#virtual-machines), selection of the E104ids_v5 and E96ds_v5 instances was made keeping this in mind. ++Whether a database is placed on a single large capacity volume or spread across multiple smaller volumes, the total financial cost is the same. The advantage of distributing I/O across multiple volumes and endpoints, in contrast to a single volume and endpoint, is the avoidance of bandwidth limits--you get to use entirely what you pay for. ++>[!IMPORTANT] +>To deploy using Azure NetApp Files in a `multiple volume:multiple endpoint` configuration, reach out to your Azure NetApp Files Specialist or Cloud Solution Architect for assistance. ++### Database ++Oracle's database version 19c is Oracle's current [long term release](https://www.oracle.com/us/assets/lifetime-support-technology-069183.pdf) version and the one used to produce all test results discussed in this paper. ++For best performance, all database volumes were mounted using the [Direct NFS](https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/about-direct-nfs-client-mounts-to-nfs-storage-devices.html#GUID-31591084-74BD-4B66-8C5B-68BF0FEE8750) client; Kernel NFS is recommended against due to performance constraints. For a performance comparison between the two clients, refer to [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md). Note that all relevant [dNFS patches (Oracle Support ID 1495104)](https://support.oracle.com/knowledge/Oracle%20Cloud/1495104_1.html) were applied, as were the best practices outlined in the [Oracle Databases on Microsoft Azure using Azure NetApp Files](https://www.netapp.com/media/17105-tr4780.pdf) report. ++While Oracle and Azure NetApp Files support both NFSv3 and NFSv4.1, as NFSv3 is the more mature protocol, it's generally viewed as having the most stability and is the more reliable option for environments that are highly sensitive to disruption. The testing described in this article was all completed over NFSv3. ++>[!IMPORTANT] +>Some of the recommended patches that Oracle documents in Support ID 1495104 are critical for maintaining data integrity when using dNFS. Application of such patches is strongly advised for production environments. + +Automatic Storage Management (ASM) is supported for NFS volumes. Though typically associated with block-based storage where ASM replaces logical volume management (LVM) and filesystem both, ASM plays a valuable role in multi-volume NFS scenarios and is worthy of strong consideration. One such advantage of ASM, dynamic online addition of and rebalance across newly added NFS volumes and endpoints, simplifies management, allowing for expansion of both performance and capacity at will. Though ASM does not in and of itself increase the performance of a database, its use avoids hot files and the need to manually maintain file distribution--a benefit easy to see. ++An ASM over dNFS configuration was used to produce all test results discussed in this article. The following diagram illustrates the ASM file layout within the Azure NetApp Files volumes and the file allocation to the ASM disk groups. +++There are some limitations with the use of ASM over Azure NetApp Files NFS mounted volumes when it comes to storage snapshots that can be overcome with certain architectural considerations. Contact your Azure NetApp Files specialist or cloud solutions architect for an in-depth review of these considerations.
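To make the multi-volume layout described above concrete, here is a minimal sketch of mounting one Azure NetApp Files volume over NFSv3 using the mount options listed in the Azure NetApp Files configuration section that follows; the server address, export name, and mount point are placeholders.

```bash
# Placeholder server address, export, and mount point; options match the article's NFSv3 settings.
sudo mkdir -p /u02/oradata/data01
sudo mount -t nfs -o rw,hard,rsize=262144,wsize=262144,sec=sys,vers=3,tcp \
  10.0.0.4:/ora-data01 /u02/oradata/data01
```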
++## Synthetic test tools and tunables ++This section describes the test architecture, tunables, and configuration detail in specifics. While the previous section is focused reasons why configuration decisions are made, this section focuses specifically on the "what" of configuration decisions. ++### Automated deployment ++* The database VMs are deployed using bash scripts available on [GitHub](https://github.com/Azure/Oracle-Workloads-for-Azure/tree/main/oravm). +* The layout and allocation of multiple Azure NetApp Files volumes and endpoints are completed manually. You need to work with your Azure NetApp Files Specialist or Cloud Solution Architect for assistance. +* The grid installation, ASM configuration, database creation and configuration, and SLOB2 environment on each machine is configured using Ansible for consistency. +* Parallel SLOB2 test executions across multiple hosts are also completed using Ansible for consistency and simultaneous execution. ++### VM configuration ++| Configuration | Value | +| - | - | +| Azure region | West Europe | +| VM SKU | E104ids_v5 | +|NIC count | 1 NOTE: Adding vNICs has no effect on system count | +| Max egress networking bandwidth (Mbps) | 100,000 | +| Temp storage (SSD) GiB | 3,800 | ++### System configuration ++All Oracle required system configuration settings for version 19c were implemented according to Oracle documentation. ++The following parameters were added to the `/etc/sysctl.conf` Linux system file: +* `sunrpc.max_tcp_slot_table_entries: 128` +* `sunrpc.tcp_slot_table_entries = 128 ` ++### Azure NetApp Files ++All Azure NetApp Files volumes were mounted with the following NFS mount options. ++`nfs rw,hard,rsize=262144,wsize=262144,sec=sys,vers=3,tcp` ++### Database parameters ++| Parameters | Value | +| - | - | +| `db_cache_size` | 2g | +| `large_pool_size` | 2g | +| `pga_aggregate_target` | 3g | +| `pga_aggregate_limit`| 3g | +| `sga_target` | 25g | +| `shared_io_pool_size` | 500m | +| `shared_pool_size` | 5g | +| `db_files` | 500 | +| `filesystemio_options` | SETALL | +| `job_queue_processes` | 0 | +| `db_flash_cache_size` | 0 | +| `_cursor_obsolete_threshold` | 130 | +| `_db_block_prefetch_limit` | 0 | +| `_db_block_prefetch_quota` | 0 | +| `_db_file_noncontig_mblock_read_count` | 0 | ++### SLOB2 configuration ++All workload generation for testing was completed using the SLOB2 tool version 2.5.4. ++Fourteen SLOB2 schemas were loaded into a standard Oracle tablespace and executed against, which in combination with the slob configuration file settings listed, put the SLOB2 data set at 7 TiB. The following settings reflect a random read execution for SLOB2. The configuration parameter `SCAN_PCT=0` was changed to `SCAN_PCT=100` during sequential testing. ++* `UPDATE_PCT=0` +* `SCAN_PCT=0` +* `RUN_TIME=600` +* `SCALE=450G` +* `SCAN_TABLE_SZ=50G` +* `WORK_UNIT=32` +* `REDO_STRESS=LITE` +* `THREADS_PER_SCHEMA=1` +* `DATABASE_STATISTICS_TYPE=awr` ++For random read testing, nine SLOB2 executions were performed. The thread count was increased by six with each test iteration starting from one. ++For sequential testing, seven SLOB2 executions were performed. The thread count was increased by two with each test iteration starting from one. The thread count was capped at six due to reaching network bandwidth maximum limits. ++### AWR metrics ++All performance metrics were reported through the Oracle Automatic Workload Repository (AWR). 
The following are the metrics presented in the results: ++* Throughput: the sum of the average read throughput and write throughput from the AWR Load Profile section +* Average read IO requests from the AWR Load Profile section +* db file sequential read wait event average wait time from the AWR Foreground Wait Events section ++## Migrating from purpose-built, engineered systems to the cloud ++Oracle Exadata is an engineered system--a combination of hardware and software that is considered the most optimized solution for running Oracle workloads. Although the cloud has significant advantages in the overall scheme of the technical world, these specialized systems can look incredibly attractive to those who have read and viewed the optimizations Oracle has built around their particular workload(s). ++When it comes to running Oracle on Exadata, there are some common reasons Exadata is chosen: ++* 1-2 high IO workloads that are natural fit for Exadata features and as these workloads require significant Exadata engineered features, the rest of the databases running along with them were consolidated to the Exadata. +* Complicated or difficult OLTP workloads that require RAC to scale and are difficult to architect with proprietary hardware without deep knowledge of Oracle optimization or may be technical debt unable to be optimized. +* Under-utilized existing Exadata with various workloads: this exists either due to previous migrations, end-of-life on a previous Exadata, or due to a desire to work/test an Exadata in-house. ++It's essential for any migration from an Exadata system to be understood from the perspective of the workloads and how simple or complex the migration may be. A secondary need is to understand the reason for the Exadata purchase from a status perspective. Exadata and RAC skills are in higher demand and may have driven the recommendation to purchase one by the technical stakeholders. ++>[!IMPORTANT] +>No matter the scenario, the overall take-away should be, for any database workload coming from an Exadata, the more Exadata proprietary features used, the more complex the migration and planning is. Environments that are not heavily utilizing Exadata proprietary features have opportunities for a simpler migration and planning process. ++There are several tools that can be used to assess these workload opportunities: ++* The Automatic Workload Repository (AWR): + * All Exadata databases are licensed to use AWR reports and connected performance and diagnostic features. + * Is always on and collects data that can be used to view historical workload information and assess usage. Peak values can assess the high usage on the system, + * Larger window AWR reports can assess the overall workload, providing valuable insight into feature usage and how to migrate the workload to non-Exadata effectively. Peak AWR reports in contrast are best for performance optimization and troubleshooting. +* The Global (RAC-Aware) AWR report for Exadata also includes an Exadata specific section, which drills down into specific Exadata feature usage and provides valuable insight info flash cache, flash logging, IO and other feature usage by database and cell node. ++### Decoupling from Exadata ++When identifying Oracle Exadata workloads to migrate to the cloud, consider the following questions and data points: ++* Is the workload consuming multiple Exadata features, outside of hardware benefits? 
+ * Smart scans + * Storage indices + * Flash cache + * Flash logging + * Hybrid columnar compression +* Is the workload using Exadata offloading efficiently? In the top time foreground events, what is the ratio (more than 10% of DB time) of workload using: + * Cell smart table scan (optimal) + * Cell multiblock physical read (less optimal) + * Cell single block physical read (least optimal) +* Hybrid Columnar Compression (HCC/EHCC): What is the compressed vs. uncompressed ratios: + * Is the database spending over 10% of database time on compressing and decompressing data? + * Inspect the performance gains for predicates using the compression in queries: is the value gained worth it versus the amount saved with compression? +* Cell physical IO: Inspect the savings provided from: + * the amount directed to the DB node to balance CPU. + * identifying the number of bytes returned by smart scan. These values can be subtracted in IO for the percentage of cell single block physical reads once it migrates off Exadata. +* Note the number of logical reads from cache. Determine if flash cache will be required in a cloud IaaS solution for the workload. +* Compare the physical read and write total bytes to the amount performed total in cache. Can memory be raised to eliminate physical read requirements (it's common for some to shrink down SGA to force offloading for Exadata)? +* In **System Statistics**, identify what objects are impacted by what statistic. If tuning SQL, further indexing, partitioning, or other physical tuning may optimize the workload dramatically. +* Inspect **Initialization Parameters** for underscore (_) or deprecated parameters, which should be justified due to database level impact they may be causing on performance. ++## Exadata server configuration ++In Oracle version 12.2 and above, an Exadata specific addition will be included in the AWR global report. This report has sections that provide exceptional value to a migration from Exadata. +* Exadata version and system details +* Cell node alerts detail +* Exadata nononline disks +* Outlier data for any Exadata OS statistics + * Yellow/Pink: Of concern. Exadata is not running optimally. + * Red: Exadata performance is impacted significantly. + * Exadata OS CPU statistic: top cells + * These statistics are collected by the OS on the cells and are not restricted to this database or instances + * A `v` and a dark yellow background indicate an outlier value below the low range + * A `^` and a light yellow background indicate an outlier value above the high range + * The top cells by percentage CPU are display and are in descending order of percentage CPU + * Average: 39.34% CPU, 28.57% user, 10.77% sys ++ :::image type="content" alt-text="Screenshot of a table showing top cells by percentage CPU." source="../media/azure-netapp-files/exadata-top-cells.png"::: ++* Single cell physical block reads +* Flash cache usage +* Temp IO +* Columnar cache efficiency ++### Top database by IO throughput ++Although sizing assessments can be performed, there are some questions about the averages and the simulated peaks that are built into these values for large workloads. This section, found at the end of an AWR report, is exceptionally valuable as it shows both the average flash and disk usage of the top 10 databases on Exadata. 
Although many may assume they want to size databases for peak performance in the cloud, this doesn't make sense for most deployments (over 95% is in the average range; with a simulated peak calculated in, the average range would be greater than 98%). It's important to pay for what is needed, even for the most demanding Oracle workloads, and inspecting the **Top Databases by IO Throughput** section can be enlightening for understanding the resource needs for the database.
+
+### Right-size Oracle using the AWR on Exadata
+
+When performing capacity planning for on-premises systems, it's only natural to have significant overhead built into the hardware. The over-provisioned hardware needs to serve the Oracle workload for several years to come, no matter the workload additions due to data growth, code changes, or upgrades.
+
+One of the benefits of the cloud is that the resources of a VM host and its storage can be scaled as demands increase. This helps conserve cloud costs and the licensing costs that are attached to processor usage (pertinent with Oracle).
+
+Right-sizing involves removing the hardware from the traditional lift and shift migration and using the workload information provided by Oracle's Automatic Workload Repository (AWR) to lift and shift the workload to compute and storage that is specially designed to support it in the cloud of the customer's choice. The right-sizing process ensures that the architecture going forward removes infrastructure technical debt and the architecture redundancy that would occur if the on-premises system were duplicated in the cloud, and implements cloud services whenever possible.
+
+Microsoft Oracle subject matter experts have estimated that more than 80% of Oracle databases are over-provisioned and experience either the same cost or savings going to the cloud if they take the time to right-size the Oracle database workload before migrating to the cloud. This assessment requires the database specialists on the team to shift their mindset on how they may have performed capacity planning in the past, but it's worth the stakeholder's investment in the cloud and the business's cloud strategy.
+
+## Next steps
+
+* [Run Your Most Demanding Oracle Workloads in Azure without Sacrificing Performance or Scalability](https://techcommunity.microsoft.com/t5/azure-architecture-blog/run-your-most-demanding-oracle-workloads-in-azure-without/ba-p/3264545)
+* [Solution architectures using Azure NetApp Files - Oracle](azure-netapp-files-solution-architectures.md#oracle)
+* [Design and implement an Oracle database in Azure](../virtual-machines/workloads/oracle/oracle-design.md)
+* [Estimate Tool for Sizing Oracle Workloads to Azure IaaS VMs](https://techcommunity.microsoft.com/t5/data-architecture-blog/estimate-tool-for-sizing-oracle-workloads-to-azure-iaas-vms/ba-p/1427183)
+* [Reference architectures for Oracle Database Enterprise Edition on Azure](../virtual-machines/workloads/oracle/oracle-reference-architecture.md)
+* [Understand Azure NetApp Files application volume groups for SAP HANA](application-volume-group-introduction.md) |
azure-resource-manager | Add Template To Azure Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md | Title: CI/CD with Azure Pipelines and Bicep files description: In this quickstart, you learn how to configure continuous integration in Azure Pipelines by using Bicep files. It shows how to use an Azure CLI task to deploy a Bicep file. Previously updated : 01/10/2023 Last updated : 05/05/2023 # Quickstart: Integrate Bicep with Azure Pipelines steps: az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile) ``` +To override the parameters, update the last line of `inlineScript` to: ++```bicep +az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile) --parameters storageAccountType='Standard_GRS' location='eastus' +``` + For the descriptions of the task inputs, see [Azure CLI task](/azure/devops/pipelines/tasks/reference/azure-cli-v2). When using the task on air-gapped cloud, you must set the `useGlobalConfig` property of the task to `true`. The default value is `false`. Select **Save**. The build pipeline automatically runs. Go back to the summary for your build pipeline, and watch the status. |
azure-resource-manager | Operator Safe Dereference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/operator-safe-dereference.md | + + Title: Bicep safe-dereference operator +description: Describes Bicep safe-dereference operator. + Last updated : 05/09/2023+++# Bicep safe-dereference operator ++The safe-dereference operator provides a way to access properties of an object or elements of an array in a safe manner. It helps to prevent errors that can occur when attempting to access properties or elements without proper knowledge of their existence or value. ++## safe-dereference ++`<base>.?<property>` +`<base>[?<index>]` ++A safe-dereference operator applies a member access, `.?<property>`, or element access, `[?<index>]`, operation to its operand only if that operand evaluates to non-null; otherwise, it returns null. That is, ++- If `a` evaluates to `null`, the result of `a.?x` or `a[?x]` is `null`. +- If `a` is an object that doesn't have an `x` property, then `a.?x` is `null`. +- If `a` is an array whose length is less than or equal to `x`, then `a[?x]` is `null`. +- If `a` is non-null and has a property named `x`, the result of `a.?x` is the same as the result of `a.x`. +- If `a` is non-null and has an element at index `x`, the result of `a[?x]` is the same as the result of `a[x]` ++The safe-dereference operators are short-circuiting. That is, if one operation in a chain of conditional member or element access operations returns `null`, the rest of the chain doesn't execute. In the following example, `.?name` isn't evaluated if `storageAccountsettings[?i]` evaluates to `null`: ++```bicep +param storageAccountSettings array = [] +param storageCount int +param location string = resourceGroup().location ++resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = [for i in range(0, storageCount): { + name: storageAccountSettings[?i].?name ?? 'defaultname' + location: storageAccountSettings[?i].?location ?? location + kind: storageAccountSettings[?i].?kind ?? 'StorageV2' + sku: { + name: storageAccountSettings[?i].?sku ?? 'Standard_GRS' + } +}] ++``` ++## Next steps ++- To run the examples, use Azure CLI or Azure PowerShell to [deploy a Bicep file](./quickstart-create-bicep-use-visual-studio-code.md#deploy-the-bicep-file). +- To create a Bicep file, see [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md). +- For information about how to resolve Bicep type errors, see [Any function for Bicep](./bicep-functions-any.md). |
azure-resource-manager | Operators | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/operators.md | -This article describes the Bicep operators. Operators are used to calculate values, compare values, or evaluate conditions. There are five types of Bicep operators: +This article describes the Bicep operators. Operators are used to calculate values, compare values, or evaluate conditions. There are six types of Bicep operators: - [accessor](#accessor) - [comparison](#comparison) - [logical](#logical) - [null-forgiving](#null-forgiving) - [numeric](#numeric)+- [safe-dereference](#safe-dereference) ## Operator precedence and associativity The numeric operators use integers to do calculations and return integer values. > Subtract and minus use the same operator. The functionality is different because subtract uses two > operands and minus uses one operand. +## Safe-dereference ++The safe-dereference operator helps to prevent errors that can occur when attempting to access properties or elements without proper knowledge of their existence or value. ++| Operator | Name | Description | +| - | - | - | +| `<base>.?<property>`, `<base>[?<index>]` | [Safe-dereference](./operator-safe-dereference.md#safe-dereference) | Applies an object member access or an array element access operation to its operand only if that operand evaluates to non-null, otherwise, it returns `null`. | ## Next steps |
azure-resource-manager | Azure Subscription Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md | For Azure Database for PostgreSQL limits, see [Limitations in Azure Database for [!INCLUDE [Deployment Environments limits](../../../includes/deployment-environments-limits.md)] +## Azure Files and Azure File Sync ++To learn more about the limits for Azure Files and File Sync, see [Azure Files scalability and performance targets](../../storage/files/storage-files-scale-targets.md). + ## Azure Functions limits [!INCLUDE [functions-limits](../../../includes/functions-limits.md)] The following limits apply to [Azure role-based access control (Azure RBAC)](../ To learn more about the limits for Azure Spring Apps, see [Quotas and service plans for Azure Spring Apps](../../spring-apps/quotas.md). +## Azure Storage limits ++This section lists the following limits for Azure Storage: ++- [Standard storage account limits](#standard-storage-account-limits) +- [Azure Storage resource provider limits](#azure-storage-resource-provider-limits) +- [Azure Blob Storage limits](#azure-blob-storage-limits) +- [Azure Queue storage limits](#azure-queue-storage-limits) +- [Azure Table storage limits](#azure-table-storage-limits) ++### Standard storage account limits ++<!--like # storage accts --> ++### Azure Storage resource provider limits +++### Azure Blob Storage limits +++### Azure Queue storage limits +++### Azure Table storage limits ++ ## Azure subscription creation limits To learn more about the creation limits for Azure subscriptions, see [Billing accounts and scopes in the Azure portal](../../cost-management-billing/manage/view-all-accounts.md). The maximum number of private endpoints per Azure SQL Database logical server is [!INCLUDE [synapse-analytics-limits](../../../includes/synapse-analytics-limits.md)] -## Azure Files and Azure File Sync -To learn more about the limits for Azure Files and File Sync, see [Azure Files scalability and performance targets](../../storage/files/storage-files-scale-targets.md). --## Storage limits --<!--like # storage accts --> --For more information on limits for standard storage accounts, see [Scalability targets for standard storage accounts](../../storage/common/scalability-targets-standard-account.md). --### Storage resource provider limits ---### Azure Blob storage limits ---### Azure Queue storage limits ---### Azure Table storage limits -- <!-- conceptual info about disk limits -- applies to unmanaged and managed --> ### Virtual machine disk limits |
azure-video-indexer | Logic Apps Connector Arm Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md | The following image shows the first flow:
 |Key| Value|
 |-|-|
 | Connection name| <*Enter a name for the connection*>, in this case `aviconnection`.|
- | API key| This is your personal API key, which is available under **Profile** in the [developer portal](https://api-portal.videoindexer.ai/profile)|
+ | API key| This is your personal API key, which is available under **Profile** in the [developer portal](https://api-portal.videoindexer.ai/profile). Because this logic app is for ARM accounts, the actual API key isn't needed; you can fill in a dummy value such as `12345`.|
 Select **Create**.
+
1. Fill **Upload video and index** action parameters.
 > [!TIP] |
backup | Backup Azure Microsoft Azure Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-microsoft-azure-backup.md | Once the extraction process completes, check the box to launch the freshly extra When you use your own instance of SQL, make sure you add builtin\Administrators to sysadmin role to master DB.
- **SSRS Configuration with SQL 2017**
+ **SSRS Configuration with SQL**
- When you're using your own instance of SQL 2017, you need to manually configure SSRS. After SSRS configuration, ensure that *IsInitialized* property of SSRS is set to *True*. When this is set to True, MABS assumes that SSRS is already configured and will skip the SSRS configuration.
+ When you're using your own instance of SQL 2019 or 2022 with MABS V4, you need to manually configure SSRS. After SSRS configuration, ensure that *IsInitialized* property of SSRS is set to *True*. When this is set to True, MABS assumes that SSRS is already configured and will skip the SSRS configuration.
 Use the following values for SSRS configuration:
 * Service Account: 'Use built-in account' should be Network Service |
backup | Backup Mabs Unattended Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-unattended-install.md | Title: Silent installation of Azure Backup Server V2 -description: Use a PowerShell script to silently install Azure Backup Server V2. This kind of installation is also called an unattended installation. + Title: Silent installation of Azure Backup Server V4 +description: Use a PowerShell script to silently install Azure Backup Server V4. This kind of installation is also called an unattended installation. Last updated 11/13/2018 -These steps don't apply if you're installing Azure Backup Server V1. +These steps don't apply if you're installing older version of Azure Backup Server like MABS V1, V2 and V3. ## Install Backup Server -1. On the server that hosts Azure Backup Server V2 or later, create a text file. (You can create the file in Notepad or in another text editor.) Save the file as MABSSetup.ini. --2. Paste the following code in the MABSSetup.ini file. Replace the text inside the brackets (\< \>) with values from your environment. The following text is an example: +1. Ensure that there's a directory under Program Files called "Microsoft Azure Recovery Services Agent" by running the following command in an elevated command prompt. + ```cmd + mkdir "C:\Program Files\Microsoft Azure Recovery Services Agent" + ``` +2. Install the pre-requisites for MABS ahead of time in an elevated command prompt. The following command can result in an automatic server restart, but if that does not happen, a manual restart is recommended. + ```cmd + start /wait dism.exe /Online /Enable-feature /All /FeatureName:Microsoft-Hyper-V /FeatureName:Microsoft-Hyper-V-Management-PowerShell /quiet + ``` +3. On the server that hosts Azure Backup Server V4 or later, create a text file. (You can create the file in Notepad or in another text editor.) Save the file as MABSSetup.ini. +4. Paste the following code in the MABSSetup.ini file. Replace the text inside the brackets (\< \>) with values from your environment. The following text is an example: ```text [OPTIONS] These steps don't apply if you're installing Azure Backup Server V1. SQLMachinePassword=<admin password> SQLMachineDomainName=<machine domain> ReportingMachineName=localhost- ReportingInstanceName=<reporting instance name> + ReportingInstanceName=SSRS SqlAccountPassword=<admin password> ReportingMachineUserName=<username> ReportingMachinePassword=<reporting admin password> ReportingMachineDomainName=<domain>- VaultCredentialFilePath=<vault credential full path and complete name> + VaultCredentialFilePath=<vault credential full path and complete name, without spaces in both> SecurityPassphrase=<passphrase>- PassphraseSaveLocation=<passphrase save location> + PassphraseSaveLocation=<passphrase save location, an existing directory where the passphrase file can be created> UseExistingSQL=<1/0 use or do not use existing SQL> ```--3. Save the file. Then, at an elevated command prompt on the installation server, enter this command: +5. Save the file. Then, at an elevated command prompt on the installation server, enter this command: ```cmd start /wait <cdlayout path>/Setup.exe /i /f <.ini file path>/setup.ini /L <log path>/setup.log |
batch | Batch Docker Container Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md | To configure a container-enabled pool without prefetched container images, defin image_ref_to_use = batch.models.ImageReference( publisher='microsoft-azure-batch', offer='ubuntu-server-container',- sku='16-04-lts', + sku='20-04-lts', version='latest') """ new_pool = batch.models.PoolAddParameter( virtual_machine_configuration=batch.models.VirtualMachineConfiguration( image_reference=image_ref_to_use, container_configuration=container_conf,- node_agent_sku_id='batch.node.ubuntu 16.04'), + node_agent_sku_id='batch.node.ubuntu 20.04'), vm_size='STANDARD_D1_V2', target_dedicated_nodes=1) ... new_pool = batch.models.PoolAddParameter( ImageReference imageReference = new ImageReference( publisher: "microsoft-azure-batch", offer: "ubuntu-server-container",- sku: "16-04-lts", + sku: "20-04-lts", version: "latest"); // Specify container configuration. This is required even though there are no prefetched images. ContainerConfiguration containerConfig = new ContainerConfiguration(); // VM configuration VirtualMachineConfiguration virtualMachineConfiguration = new VirtualMachineConfiguration( imageReference: imageReference,- nodeAgentSkuId: "batch.node.ubuntu 16.04"); + nodeAgentSkuId: "batch.node.ubuntu 20.04"); virtualMachineConfiguration.ContainerConfiguration = containerConfig; // Create pool The following basic Python example shows how to prefetch a standard Ubuntu conta image_ref_to_use = batch.models.ImageReference( publisher='microsoft-azure-batch', offer='ubuntu-server-container',- sku='16-04-lts', + sku='20-04-lts', version='latest') """ new_pool = batch.models.PoolAddParameter( virtual_machine_configuration=batch.models.VirtualMachineConfiguration( image_reference=image_ref_to_use, container_configuration=container_conf,- node_agent_sku_id='batch.node.ubuntu 16.04'), + node_agent_sku_id='batch.node.ubuntu 20.04'), vm_size='STANDARD_D1_V2', target_dedicated_nodes=1) ... 
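Once a container-enabled pool like the ones above exists, tasks run inside containers by supplying container settings. The following Python snippet is a minimal sketch and is not part of the original sample; the job ID (`myjob`) and the public `ubuntu` image are hypothetical placeholders, and it assumes the azure-batch package and an existing `BatchServiceClient`.

```python
import azure.batch.models as batchmodels

# Minimal sketch: run a task inside a container on a container-enabled pool.
# The image name and job ID below are placeholders for illustration only.
task_container_settings = batchmodels.TaskContainerSettings(
    image_name='ubuntu',
    container_run_options='--rm'
)

task = batchmodels.TaskAddParameter(
    id='containertask',
    command_line='/bin/sh -c "echo hello from the container"',
    container_settings=task_container_settings
)

# Assumes an existing BatchServiceClient named batch_client and a job named 'myjob':
# batch_client.task.add('myjob', task)
```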
The following C# example assumes that you want to prefetch a TensorFlow image fr ImageReference imageReference = new ImageReference( publisher: "microsoft-azure-batch", offer: "ubuntu-server-container",- sku: "16-04-lts", + sku: "20-04-lts", version: "latest"); ContainerRegistry containerRegistry = new ContainerRegistry( containerConfig.ContainerRegistries = new List<ContainerRegistry> { containerReg // VM configuration VirtualMachineConfiguration virtualMachineConfiguration = new VirtualMachineConfiguration( imageReference: imageReference,- nodeAgentSkuId: "batch.node.ubuntu 16.04"); + nodeAgentSkuId: "batch.node.ubuntu 20.04"); virtualMachineConfiguration.ContainerConfiguration = containerConfig; // Set a native host command line start task You can also prefetch container images by authenticating to a private container image_ref_to_use = batch.models.ImageReference( publisher='microsoft-azure-batch', offer='ubuntu-server-container',- sku='16-04-lts', + sku='20-04-lts', version='latest') # Specify a container registry new_pool = batch.models.PoolAddParameter( virtual_machine_configuration=batch.models.VirtualMachineConfiguration( image_reference=image_ref_to_use, container_configuration=container_conf,- node_agent_sku_id='batch.node.ubuntu 16.04'), + node_agent_sku_id='batch.node.ubuntu 20.04'), vm_size='STANDARD_D1_V2', target_dedicated_nodes=1) ``` containerConfig.ContainerRegistries = new List<ContainerRegistry> { containerReg // VM configuration VirtualMachineConfiguration virtualMachineConfiguration = new VirtualMachineConfiguration( imageReference: imageReference,- nodeAgentSkuId: "batch.node.ubuntu 16.04"); + nodeAgentSkuId: "batch.node.ubuntu 20.04"); virtualMachineConfiguration.ContainerConfiguration = containerConfig; // Create pool containerConfig.ContainerRegistries = new List<ContainerRegistry> { containerReg // VM configuration VirtualMachineConfiguration virtualMachineConfiguration = new VirtualMachineConfiguration( imageReference: imageReference,- nodeAgentSkuId: "batch.node.ubuntu 16.04"); + nodeAgentSkuId: "batch.node.ubuntu 20.04"); virtualMachineConfiguration.ContainerConfiguration = containerConfig; // Create pool |
cdn | Cdn Pop Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-pop-locations.md | This article lists current metros containing point-of-presence (POP) locations,
 | Africa | Johannesburg, South Africa <br/> Nairobi, Kenya | South Africa |
 | Middle East | Muscat, Oman<br />Fujairah, United Arab Emirates | Qatar<br />United Arab Emirates |
 | India | Bengaluru (Bangalore), India<br />Chennai, India<br />Mumbai, India<br />New Delhi, India<br /> | India |
-| Asia | Hong Kong SAR<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong SAR<br />Indonesia<br />Israel<br />Japan<br />Macao<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />Türkiye<br />Vietnam |
+| Asia | Hong Kong SAR<br />Jakarta, Indonesia<br />Osaka, Japan<br />Tokyo, Japan<br />Singapore<br />Kaohsiung, Taiwan<br />Taipei, Taiwan <br />Manila, Philippines | Hong Kong SAR<br />Indonesia<br />Israel<br />Japan<br />Macao SAR<br />Malaysia<br />Philippines<br />Singapore<br />South Korea<br />Taiwan<br />Thailand<br />Türkiye<br />Vietnam |
 | Australia and New Zealand | Melbourne, Australia<br />Sydney, Australia<br />Auckland, New Zealand | Australia<br />New Zealand |
 ## Next steps |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | +## May 2023 Guest OS ++>[!NOTE] ++>The May Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the May Guest OS. This list is subject to change. ++| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | +| | | | | | +| Rel 23-05 | [5026363] | Latest Cumulative Update(LCU) | 5.81 | May 9, 2023 | +| Rel 23-05 | [5017397] | IE Cumulative Updates | 2.137, 3.125, 4.117 | Sep 13, 2022 | +| Rel 23-05 | [5026370] | Latest Cumulative Update(LCU) | 7.25 | May 9, 2023 | +| Rel 23-05 | [5026362] | Latest Cumulative Update(LCU) | 6.57 | May 9, 2023 | +| Rel 23-05 | [5022523] | .NET Framework 3.5 Security and Quality Rollup LKG  | 2.137 | Feb 14, 2023 | +| Rel 23-05 | [5022515] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 2.137 | Feb 14, 2023 | +| Rel 23-05 | [5022525] | .NET Framework 3.5 Security and Quality Rollup LKG  | 4.117 | Feb 14, 2023 | +| Rel 23-05 | [5022513] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 4.117 | Feb 14, 2023 | +| Rel 23-05 | [5022574] | .NET Framework 3.5 Security       and Quality Rollup LKG  | 3.125 | Feb 14, 2023 | +| Rel 23-05 | [5022512] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 3.125 | Feb 14, 2023 | +| Rel 23-05 | [5022511] | . NET Framework 4.7.2 Cumulative Update LKG  | 6.57 | Feb 14, 2023 | +| Rel 23-05 | [5022507] | .NET Framework 4.8 Security and Quality Rollup LKG  | 7.25 | Feb 14, 2023 | +| Rel 23-05 | [5026413] | Monthly Rollup  | 2.137 | May 9, 2023 | +| Rel 23-05 | [5026419] | Monthly Rollup  | 3.125 | May 9, 2023 | +| Rel 23-05 | [5026415] | Monthly Rollup  | 4.117 | May 9, 2023 | +| Rel 23-05 | [5023791] | Servicing Stack Update LKG  | 3.125 | Mar 14, 2023 | +| Rel 23-05 | [5023790] | Servicing Stack Update LKG  | 4.117 | Mar 14, 2022 | +| Rel 23-05 | [4578013] | OOB Standalone Security Update  | 4.117 | Aug 19, 2020 | +| Rel 23-05 | [5023788] | Servicing Stack Update LKG  | 5.81 | Mar 14, 2023 | +| Rel 23-05 | [5017397] | Servicing Stack Update LKG  | 2.137 | Sep 13, 2022 | +| Rel 23-05 | [4494175] | Microcode  | 5.81 | Sep 1, 2020 | +| Rel 23-05 | [4494174] | Microcode  | 6.57 | Sep 1, 2020 | +| Rel 23-05 | [5026493] | Servicing Stack Update  | 7.25 | | ++[5026363]: https://support.microsoft.com/kb/5026363 +[5017397]: https://support.microsoft.com/kb/5017397 +[5026370]: https://support.microsoft.com/kb/5026370 +[5026362]: https://support.microsoft.com/kb/5026362 +[5022523]: https://support.microsoft.com/kb/5022523 +[5022515]: https://support.microsoft.com/kb/5022515 +[5022525]: https://support.microsoft.com/kb/5022525 +[5022513]: https://support.microsoft.com/kb/5022513 +[5022574]: https://support.microsoft.com/kb/5022574 +[5022512]: https://support.microsoft.com/kb/5022512 +[5022511]: https://support.microsoft.com/kb/5022511 +[5022507]: https://support.microsoft.com/kb/5022507 +[5026413]: https://support.microsoft.com/kb/5026413 +[5026419]: https://support.microsoft.com/kb/5026419 +[5026415]: https://support.microsoft.com/kb/5026415 +[5023791]: https://support.microsoft.com/kb/5023791 +[5023790]: https://support.microsoft.com/kb/5023790 +[4578013]: 
https://support.microsoft.com/kb/4578013 +[5023788]: https://support.microsoft.com/kb/5023788 +[5017397]: https://support.microsoft.com/kb/5017397 +[4494175]: https://support.microsoft.com/kb/4494175 +[4494174]: https://support.microsoft.com/kb/4494174 + ## April 2023 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | |
cognitive-services | Call Analyze Image 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image-40.md | Define an **ImageAnalysisOptions** object, which specifies visual features you'd var analysisOptions = new ImageAnalysisOptions() { // Mandatory. You must set one or more features to analyze. Here we use the full set of features.- // Note that 'Captions' is only supported in Azure GPU regions (East US, France Central, Korea Central, - // North Europe, Southeast Asia, West Europe, West US) + // Note that 'Caption' is only supported in Azure GPU regions (East US, France Central, Korea Central, + // North Europe, Southeast Asia, West Europe, West US and East Asia) Features = ImageAnalysisFeature.CropSuggestions- | ImageAnalysisFeature.Captions + | ImageAnalysisFeature.Caption | ImageAnalysisFeature.Objects | ImageAnalysisFeature.People | ImageAnalysisFeature.Text Specify which visual features you'd like to extract in your analysis. image_analysis_options = visionsdk.ImageAnalysisOptions() # Mandatory. You must set one or more features to analyze. Here we use the full set of features.-# Note that 'Captions' is only supported in Azure GPU regions (East US, France Central, Korea Central, +# Note that 'Caption' is only supported in Azure GPU regions (East US, France Central, Korea Central, # North Europe, Southeast Asia, West Europe, West US) image_analysis_options.features = ( visionsdk.ImageAnalysisFeature.CROP_SUGGESTIONS |- visionsdk.ImageAnalysisFeature.CAPTIONS | + visionsdk.ImageAnalysisFeature.CAPTION | visionsdk.ImageAnalysisFeature.OBJECTS | visionsdk.ImageAnalysisFeature.PEOPLE | visionsdk.ImageAnalysisFeature.TEXT | You can specify which features you want to use by setting the URL query paramete |||--| |`features`|`Read` | reads the visible text in the image and outputs it as structured JSON data.| |`features`|`Caption` | describes the image content with a complete sentence in supported languages.|-|`features`|`DenseCaption` | generates detailed captions for individual regions in the image. | +|`features`|`DenseCaption` | generates detailed captions for up to 10 prominent image regions. | |`features`|`SmartCrops` | finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest.| |`features`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.| |`features`|`Tags` | tags the image with a detailed list of words related to the image content.| See the following list of possible errors and their causes: * Explore the [concept articles](../concept-describe-images-40.md) to learn more about each feature. * Explore the [code samples on GitHub](https://github.com/Azure-Samples/azure-ai-vision-sdk/blob/main/samples/).-* See the [API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality. +* See the [API reference](https://aka.ms/vision-4-0-ref) to learn more about the API functionality. |
cognitive-services | Audio Processing Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/audio-processing-overview.md | |
cognitive-services | Call Center Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/call-center-overview.md | |
cognitive-services | Devices Sdk Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md | |
cognitive-services | Get Started Speech Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-translation.md | |
cognitive-services | How To Custom Commands Developer Flow Test | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-developer-flow-test.md | |
cognitive-services | How To Custom Speech Continuous Integration Continuous Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-continuous-integration-continuous-deployment.md | |
cognitive-services | How To Custom Speech Human Labeled Transcriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-human-labeled-transcriptions.md | |
cognitive-services | How To Custom Speech Model And Endpoint Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md | |
cognitive-services | How To Use Audio Input Streams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-audio-input-streams.md | Title: Speech SDK audio input stream concepts -description: An overview of the capabilities of the Speech SDK audio input stream API. +description: An overview of the capabilities of the Speech SDK audio input stream. Previously updated : 04/12/2023 Last updated : 05/09/2023 ms.devlang: csharp - # How to use the audio input stream The Speech SDK provides a way to stream audio into the recognizer as an alternative to microphone or file input. Identify the format of the audio stream. The format must be supported by the Spe Supported audio samples are: - - PCM format (int-16) + - PCM format (int-16, signed) - One channel - 16 bits per sample, 8,000 or 16,000 samples per second (16,000 bytes or 32,000 bytes per second) - Two-block aligned (16 bit including padding for a sample) int samplesPerSecond = 16000; // or 8000 var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels); ``` -Make sure that your code provides the RAW audio data according to these specifications. Also, make sure that 16-bit samples arrive in little-endian format. Signed samples are also supported. If your audio source data doesn't match the supported formats, the audio must be transcoded into the required format. +Make sure that your code provides the RAW audio data according to these specifications. Also, make sure that 16-bit samples arrive in little-endian format. If your audio source data doesn't match the supported formats, the audio must be transcoded into the required format. ## Create your own audio input stream class You can create your own audio input stream class derived from `PullAudioInputStreamCallback`. Implement the `Read()` and `Close()` members. The exact function signature is language-dependent, but the code looks similar to this code sample: ```csharp-public class ContosoAudioStream : PullAudioInputStreamCallback { - ContosoConfig config; -- public ContosoAudioStream(const ContosoConfig& config) { - this.config = config; - } -- public int Read(byte[] buffer, uint size) { - // Returns audio data to the caller. - // E.g., return read(config.YYY, buffer, size); - } -- public void Close() { - // Close and clean up resources. - } -}; +public class ContosoAudioStream : PullAudioInputStreamCallback +{ + public ContosoAudioStream() {} ++ public override int Read(byte[] buffer, uint size) + { + // Returns audio data to the caller. + // E.g., return read(config.YYY, buffer, size); + return 0; + } ++ public override void Close() + { + // Close and clean up resources. + } +} ``` -Create an audio configuration based on your audio format and input stream. Pass in both your regular speech configuration and the audio input configuration when you create your recognizer. For example: +Create an audio configuration based on your audio format and custom audio input stream. For example: ```csharp-var audioConfig = AudioConfig.FromStreamInput(new ContosoAudioStream(config), audioFormat); --var speechConfig = SpeechConfig.FromSubscription(...); -var recognizer = new SpeechRecognizer(speechConfig, audioConfig); +var audioConfig = AudioConfig.FromStreamInput(new ContosoAudioStream(), audioFormat); +``` -// Run stream through recognizer. 
-var result = await recognizer.RecognizeOnceAsync(); +Here's how the custom audio input stream is used in the context of a speech recognizer: -var text = result.GetText(); +```csharp +using System; +using System.IO; +using System.Threading.Tasks; +using Microsoft.CognitiveServices.Speech; +using Microsoft.CognitiveServices.Speech.Audio; ++public class ContosoAudioStream : PullAudioInputStreamCallback +{ + public ContosoAudioStream() {} ++ public override int Read(byte[] buffer, uint size) + { + // Returns audio data to the caller. + // E.g., return read(config.YYY, buffer, size); + return 0; + } ++ public override void Close() + { + // Close and clean up resources. + } +} ++class Program +{ + static string speechKey = Environment.GetEnvironmentVariable("SPEECH_KEY"); + static string speechRegion = Environment.GetEnvironmentVariable("SPEECH_REGION"); ++ async static Task Main(string[] args) + { + byte channels = 1; + byte bitsPerSample = 16; + uint samplesPerSecond = 16000; // or 8000 + var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels); + var audioConfig = AudioConfig.FromStreamInput(new ContosoAudioStream(), audioFormat); ++ var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion); + speechConfig.SpeechRecognitionLanguage = "en-US"; + var speechRecognizer = new SpeechRecognizer(speechConfig, audioConfig); ++ Console.WriteLine("Speak into your microphone."); + var speechRecognitionResult = await speechRecognizer.RecognizeOnceAsync(); + Console.WriteLine($"RECOGNIZED: Text={speechRecognitionResult.Text}"); + } +} ``` - ## Next steps -- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)-- [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnet)+- [Speech to text quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp) +- [How to recognize speech](./how-to-recognize-speech.md?pivots=programming-language-csharp) |
cognitive-services | Migrate V3 0 To V3 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v3-0-to-v3-1.md | |
cognitive-services | Releasenotes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md | |
cognitive-services | Speech Synthesis Markup Pronunciation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-pronunciation.md | The following content types are supported for the `interpret-as` and `format` at | `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." | | `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." | | `duration` | hms, hm, ms | The text is spoken as a duration. The `format` attribute specifies the duration's format (*h=hour, m=minute, and s=second*). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="duration">01:18:30</say-as>`<br /><br /> As "one hour eighteen minutes and thirty seconds".<br />Pronounces:<br /><br />`<say-as interpret-as="duration" format="ms">01:18</say-as>`<br /><br /> As "one minute and eighteen seconds".<br />This tag is only supported on English and Spanish. |-| `telephone` | None | The text is spoken as a telephone number. The `format` attribute can contain digits that represent a country code. Examples are "1" for the United States or "39" for Italy. The speech synthesis engine can use this information to guide its pronunciation of a phone number. The phone number might also include the country code, and if so, takes precedence over the country code in the `format` attribute. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." | +| `telephone` | None | The text is spoken as a telephone number. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." | | `currency` | None | The text is spoken as a currency. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="currency">99.9 USD</say-as>`<br /><br />As "ninety-nine US dollars and ninety cents."| | `address`| None | The text is spoken as an address. The speech synthesis engine pronounces:<br /><br />`I'm at <say-as interpret-as="address">150th CT NE, Redmond, WA</say-as>`<br /><br />As "I'm at 150th Court Northeast Redmond Washington."| | `name` | None | The text is spoken as a person's name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />As [æd]. <br />In Chinese names, some characters pronounce differently when they appear in a family name. For example, the speech synthesis engine says 仇 in <br /><br />`<say-as interpret-as="name">仇先生</say-as>`<br /><br /> As [qiú] instead of [chóu]. | |
cognitive-services | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/troubleshooting.md | |
cognitive-services | Create Sas Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-sas-tokens.md | Go to the [Azure portal](https://portal.azure.com/#home) and navigate to your co
 * Consider setting a longer duration period for the time you're using your storage account for Translator Service operations.
 * The value for the expiry time is a maximum of seven days from the creation of the SAS token.
-1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails.
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or range of IP addresses must be public, not private. For more information, *see* [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
 1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS. |
cognitive-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/service-limits.md | + + Title: Service limits - Translator Service ++description: This article lists service limits for the Translator text and document translation. Charges are incurred based on character count, not request frequency with a limit of 50,000 characters per request. Character limits are subscription-based, with F0 limited to 2 million characters per hour. ++++++ Last updated : 05/08/2023++++# Service limits for Azure Translator Service ++This article provides both a quick reference and detailed description of Azure Translator Service character and array limits for text and document translation. ++## Text translation ++Charges are incurred based on character count, not request frequency. Character limits are subscription-based. ++### Character and array limits per request ++Each translate request is limited to 50,000 characters, across all the target languages. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 × 3 = 9,000 characters and meets the request limit. You're charged per character, not by the number of requests, therefore, it's recommended that you send shorter requests. ++The following table lists array element and character limits for each text translation operation. ++| Operation | Maximum Size of Array Element | Maximum Number of Array Elements | Maximum Request Size (characters) | +|:-|:-|:-|:-| +| **Translate** | 50,000| 1,000| 50,000 | +| **Transliterate** | 5,000| 10| 5,000 | +| **Detect** | 50,000 |100 |50,000 | +| **BreakSentence** | 50,000| 100 |50,000 | +| **Dictionary Lookup** | 100 |10| 1,000 | +| **Dictionary Examples** | 100 for text and 100 for translation (200 total)| 10|2,000 | ++### Character limits per hour ++Your character limit per hour is based on your Translator subscription tier. ++The hourly quota should be consumed evenly throughout the hour. For example, at the F0 tier limit of 2 million characters per hour, characters should be consumed no faster than roughly 33,300 characters per minute. The sliding window range is 2 million characters divided by 60 minutes. ++You're likely to receive an out-of-quota response under the following circumstances: ++* You've reached or surpass the quota limit. +* You've sent a large portion of the quota in too short a period of time. ++There are no limits on concurrent requests. ++| Tier | Character limit | +||--| +| F0 | 2 million characters per hour | +| S1 | 40 million characters per hour | +| S2 / C2 | 40 million characters per hour | +| S3 / C3 | 120 million characters per hour | +| S4 / C4 | 200 million characters per hour | ++Limits for [multi-service subscriptions](./reference/v3-0-reference.md#authentication) are the same as the S1 tier. ++These limits are restricted to Microsoft's standard translation models. Custom translation models that use Custom Translator are limited to 3,600 characters per second, per model. ++### Latency ++The Translator has a maximum latency of 15 seconds using standard models and 120 seconds when using custom models. Typically, responses *for text within 100 characters* are returned in 150 milliseconds to 300 milliseconds. The custom translator models have similar latency characteristics on sustained request rate and may have a higher latency when your request rate is intermittent. Response times vary based on the size of the request and language pair. 
If you don't receive a translation or an [error response](./reference/v3-0-reference.md#errors) within that time frame, check your code, your network connection, and retry.
+
+## Document Translation
+
+This table lists the content limits for data sent using Document Translation:
+
+|Attribute | Limit|
+|||
+|Document size| ≤ 40 MB |
+|Total number of files.|≤ 1000 |
+|Total content size in a batch | ≤ 250 MB|
+|Number of target languages in a batch| ≤ 10 |
+|Size of Translation memory file| ≤ 10 MB|
+
+> [!NOTE]
+> Document Translation can't be used to translate secured documents such as those with an encrypted password or with restricted access to copy content.
+
+## Next steps
+
+* [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/)
+* [Regional availability](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
+* [v3 Translator reference](./reference/v3-0-reference.md) |
cognitive-services | Cognitive Services Container Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md | Azure Cognitive Services containers provide the following set of Docker containe | [Language service][ta-containers-language] | **Text Language Detection** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/language/about)) | For up to 120 languages, detects which language the input text is written in and report a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the score. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Language service][ta-containers-sentiment] | **Sentiment Analysis** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/sentiment/about)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). | | [Language service][ta-containers-health] | **Text Analytics for health** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/healthcare/about))| Extract and label medical information from unstructured clinical text. | Generally available |+| [Language service][ta-containers-cner] | **Custom Named Entity Recognition** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/customner/about))| Extract named entities from text, using a custom model you create using your data. | Preview | | [Translator][tr-containers] | **Translator** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/translator/text-translation/about))| Translate text in several languages and dialects. | Generally available. Gated - [request access](https://aka.ms/csgate-translator). <br>This container can also [run in disconnected environments](containers/disconnected-containers.md). | ### Speech containers Install and explore the functionality provided by containers in Azure Cognitive [ta-containers-language]: language-service/language-detection/how-to/use-containers.md [ta-containers-sentiment]: language-service/sentiment-opinion-mining/how-to/use-containers.md [ta-containers-health]: language-service/text-analytics-for-health/how-to/use-containers.md+[ta-containers-cner]: language-service/custom-named-entity-recognition/how-to/use-containers.md [tr-containers]: translator/containers/translator-how-to-install-container.md [request-access]: https://aka.ms/csgate |
cognitive-services | Use Autolabeling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/use-autolabeling.md | + + Title: How to use autolabeling in custom named entity recognition ++description: Learn how to use autolabeling in custom named entity recognition. +++++++ Last updated : 03/20/2023++++# How to use autolabeling for Custom Named Entity Recognition ++The [labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires both time and effort, you can use the autolabeling feature to automatically label your entities. You can start autolabeling jobs based on a model you've previously trained or using GPT models. With autolabeling based on a model you've previously trained, you can start labeling a few of your documents, train a model, then create an autolabeling job to produce entity labels for other documents based on that model. With GPT-based autolabeling, you can immediately trigger an autolabeling job without any prior model training. This feature can save you the time and effort of manually labeling your entities. ++## Prerequisites ++### [Autolabel based on a model you've trained](#tab/autolabel-model) ++Before you can use autolabeling based on a model you've trained, you need: +* A successfully [created project](create-project.md) with a configured Azure blob storage account. +* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account. +* [Labeled data](tag-data.md) +* A [successfully trained model](train-model.md) +++### [Autolabel with GPT](#tab/autolabel-gpt) +Before you can use autolabeling with GPT, you need: +* A successfully [created project](create-project.md) with a configured Azure blob storage account. +* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account. +* Entity names that are meaningful. The GPT models label entities in your documents based on the name of the entity you've provided. +* [Labeled data](tag-data.md) isn't required. +* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md). ++++## Trigger an autolabeling job ++### [Autolabel based on a model you've trained](#tab/autolabel-model) ++When you trigger an autolabeling job based on a model you've trained, there's a limit of 5,000 text records per month, per resource. The same limit applies across all projects within the same resource. ++> [!TIP] +> A text record is calculated as the ceiling of (Number of characters in a document / 1,000). For example, if a document has 8,921 characters, the number of text records is: +> +> `ceil(8921/1000) = ceil(8.921)`, which is 9 text records. ++1. From the left navigation menu, select **Data labeling**. +2. Select the **Autolabel** button under the Activity pane to the right of the page. +++ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job." lightbox="../media/trigger-autotag.png"::: + +3. Choose **Autolabel based on a model you've trained** and select **Next**. ++ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png"::: + +4. Choose a trained model. It's recommended to check the model performance before using it for autolabeling. 
++
+ :::image type="content" source="../media/choose-model-trained.png" alt-text="A screenshot showing how to choose trained model for autotagging." lightbox="../media/choose-model-trained.png"::: ++5. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. You can see the total labels, precision, and recall of each entity. It's recommended to include entities that perform well to ensure the quality of the automatically labeled entities. ++ :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png"::: ++6. Choose the documents you want to be automatically labeled. The number of text records of each document is displayed. When you select one or more documents, you should see the number of text records selected. It's recommended to choose the unlabeled documents from the filter. ++ > [!NOTE] + > * If an entity was automatically labeled, but has a user-defined label, only the user-defined label is used and visible. + > * You can view the documents by clicking on the document name. + + :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png"::: ++7. Select **Autolabel** to trigger the autolabeling job. +You should see the model used, the number of documents included in the autolabeling job, and the number of text records and entities to be automatically labeled. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included. ++ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png"::: ++### [Autolabel with GPT](#tab/autolabel-gpt) ++When you trigger an autolabeling job with GPT, the charges accrue to your Azure OpenAI resource based on your consumption. The charge is estimated from the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token of different models. ++1. From the left navigation menu, select **Data labeling**. +2. Select the **Autolabel** button under the Activity pane to the right of the page. ++ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png"::: ++3. Choose **Autolabel with GPT** and select **Next**. ++ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png"::: ++4. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed. ++ :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png"::: + +5. Choose the entities you want to be included in the autolabeling job. By default, all entities are selected. Having descriptive names for labels and including examples for each label is recommended to achieve good quality labeling with GPT. 
++
+ :::image type="content" source="../media/choose-entities.png" alt-text="A screenshot showing which entities to be included in autotag job." lightbox="../media/choose-entities.png"::: + +6. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter. ++ > [!NOTE] + > * If an entity was automatically labeled, but has a user-defined label, only the user-defined label is used and visible. + > * You can view the documents by clicking on the document name. + + :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png"::: ++7. Select **Start job** to trigger the autolabeling job. +You should be directed to the autolabeling page displaying the autolabeling jobs you've initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included. ++ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png"::: +++++## Review the auto labeled documents ++When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied. +++Entities that have been automatically labeled appear with a dotted line. These entities have two selectors (a checkmark and an "X") that allow you to accept or reject the automatic label. ++Once an entity is accepted, the dotted line changes to a solid one, and the label is included in any further model training, becoming a user-defined label. ++Alternatively, you can accept or reject all automatically labeled entities within the document, using **Accept all** or **Reject all** in the top right corner of the screen. ++After you accept or reject the labeled entities, select **Save labels** to apply the changes. ++> [!NOTE] +> * We recommend validating automatically labeled entities before accepting them. +> * All labels that were not accepted are deleted when you train your model. +++## Next steps ++* Learn more about [labeling your data](tag-data.md). |
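The text record quota for model-based autolabeling, described in the tip earlier in this article, is easy to estimate before you start a job. The following Python sketch is a minimal illustration of that calculation; the document contents are stand-ins for your own data.

```python
import math

MONTHLY_TEXT_RECORD_LIMIT = 5000  # per resource, across all projects

def text_records(document: str) -> int:
    # A text record is ceil(number of characters / 1,000).
    return math.ceil(len(document) / 1000)

documents = ["x" * 8921, "y" * 450, "z" * 12000]  # stand-ins for real document contents
total = sum(text_records(doc) for doc in documents)

print(f"Total text records: {total}")            # 9 + 1 + 12 = 22
print(f"Within monthly limit: {total <= MONTHLY_TEXT_RECORD_LIMIT}")
```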
cognitive-services | Use Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/use-containers.md | + + Title: Use Docker containers for Custom Named Entity Recognition on-premises ++description: Learn how to use Docker containers for Custom Named Entity Recognition on-premises. ++++++ Last updated : 05/08/2023+++keywords: on-premises, Docker, container, natural language processing +++# Install and run Custom Named Entity Recognition containers +++Containers enable you to host the Custom Named Entity Recognition API on your own infrastructure using your own trained model. If you have security or data governance requirements that can't be fulfilled by calling Custom Named Entity Recognition remotely, then containers might be a good option. ++> [!NOTE] +> * The free account is limited to 5,000 text records per month and only the **Free** and **Standard** [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics) are valid for containers. For more information on transaction request rates, see [Data and service limits](../../concepts/data-limits.md). +++## Prerequisites ++* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/). +* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure. + * On Windows, Docker must also be configured to support Linux containers. + * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/). +* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/). +* A [trained and deployed Custom Named Entity Recognition model](../quickstart.md) +++## Host computer requirements and recommendations +++The following table describes the minimum and recommended specifications for Custom Named Entity Recognition containers. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed. ++| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS| +|||-|--|--| +| **Custom Named Entity Recognition** | 1 core, 2 GB memory | 1 core, 4 GB memory |15 | 30| ++CPU core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command. ++## Export your Custom Named Entity Recognition model ++Before you proceed with running the docker image, you will need to export your own trained model to expose it to your container. Use the following command to extract your model and replace the placeholders below with your own values: ++| Placeholder | Value | Format or example | +|-|-|| +| **{API_KEY}** | The key for your Custom Named Entity Recognition resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`| +| **{ENDPOINT_URI}** | The endpoint for accessing the Custom Named Entity Recognition API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. 
| `https://<your-custom-subdomain>.cognitiveservices.azure.com` | +| **{PROJECT_NAME}** | The name of the project containing the model that you want to export. You can find it on your projects tab in the Language Studio portal. |`myProject`| +| **{TRAINED_MODEL_NAME}** | The name of the trained model you want to export. You can find your trained models on your model evaluation tab under your project in the Language Studio portal. |`myTrainedModel`| ++```bash +curl --location --request PUT '{ENDPOINT_URI}/language/authoring/analyze-text/projects/{PROJECT_NAME}/exported-models/{TRAINED_MODEL_NAME}?api-version=2023-04-15-preview' \ +--header 'Ocp-Apim-Subscription-Key: {API_KEY}' \ +--header 'Content-Type: application/json' \ +--data-raw '{ +    "TrainedmodelLabel": "{TRAINED_MODEL_NAME}" +}' +``` ++## Get the container image with `docker pull` ++The Custom Named Entity Recognition container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/textanalytics/` repository and is named `customner`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/textanalytics/customner`. ++To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/textanalytics/customner/about). ++Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry. ++``` +docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/customner:latest +``` +++## Run the container with `docker run` ++Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. The container will continue to run until you stop it. ++> [!IMPORTANT] +> * The docker commands in the following sections use the backslash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements. +> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing). ++To run the *Custom Named Entity Recognition* container, execute the following `docker run` command. Replace the placeholders below with your own values: ++| Placeholder | Value | Format or example | +|-|-|| +| **{API_KEY}** | The key for your Custom Named Entity Recognition resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`| +| **{ENDPOINT_URI}** | The endpoint for accessing the Custom Named Entity Recognition API. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` | +| **{PROJECT_NAME}** | The name of the project containing the model that you want to export. You can find it on your projects tab in the Language Studio portal. |`myProject`| +| **{LOCAL_PATH}** | The path where the exported model from the previous step will be downloaded. You can choose any path you like. |`C:/custom-ner-model`| +| **{TRAINED_MODEL_NAME}** | The name of the trained model you want to export. You can find your trained models on your model evaluation tab under your project in the Language Studio portal. 
|`myTrainedModel`| +++```bash +docker run --rm -it -p5000:5000 --memory 4g --cpus 1 \ +-v {LOCAL_PATH}:/modelPath \ +mcr.microsoft.com/azure-cognitive-services/textanalytics/customner:latest \ +EULA=accept \ +BILLING={ENDPOINT_URI} \ +APIKEY={API_KEY} \ +projectName={PROJECT_NAME} \ +exportedModelName={TRAINED_MODEL_NAME} +``` ++This command: ++* Runs a *Custom Named Entity Recognition* container and downloads your exported model to the local path specified. +* Allocates one CPU core and 4 gigabytes (GB) of memory. +* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. +* Automatically removes the container after it exits. The container image is still available on the host computer. +++## Query the container's prediction endpoint ++The container provides REST-based query prediction endpoint APIs. ++Use the host, `http://localhost:5000`, for container APIs. ++++## Stop the container +++## Troubleshooting ++If you run the container with an output [mount](../../concepts/configure-containers.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container. +++## Billing ++The Custom Named Entity Recognition containers send billing information to Azure, using a _Custom Named Entity Recognition_ resource on your Azure account. +++## Summary ++In this article, you learned concepts and workflow for downloading, installing, and running Custom Named Entity Recognition containers. In summary: ++* Custom Named Entity Recognition provides Linux containers for Docker. +* Container images are downloaded from the Microsoft Container Registry (MCR). +* Container images run in Docker. +* You can use either the REST API or SDK to call operations in Custom Named Entity Recognition containers by specifying the host URI of the container. +* You must specify billing information when instantiating a container. ++> [!IMPORTANT] +> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (for example, text that is being analyzed) to Microsoft. ++## Next steps ++* See [Configure containers](../../concepts/configure-containers.md) for configuration settings. |
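Before sending prediction traffic to the container started with the `docker run` command above, it can help to confirm locally that it's up. Cognitive Services containers generally expose `/ready` and `/status` probes on the same port; the following Python sketch assumes those endpoints and the default `http://localhost:5000` host, so treat it as an illustrative check rather than part of the article's required steps.

```python
import urllib.request

HOST = "http://localhost:5000"  # the host URI used for the container above

def probe(path: str) -> None:
    """Print the HTTP status and body returned by a container health endpoint."""
    with urllib.request.urlopen(f"{HOST}{path}") as response:
        print(f"GET {path} -> HTTP {response.status}")
        print(response.read().decode("utf-8", errors="replace"))

# /ready reports whether the container is able to accept queries;
# /status additionally verifies that the billing arguments are valid.
probe("/ready")
probe("/status")
```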
cognitive-services | Use Autolabeling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/use-autolabeling.md | + + Title: How to use autolabeling in custom text classification ++description: Learn how to use autolabeling in custom text classification. +++++++ Last updated : 3/15/2023++++# How to use autolabeling for Custom Text Classification ++The [labeling process](tag-data.md) is an important part of preparing your dataset. Since this process requires much time and effort, you can use the autolabeling feature to automatically label your documents with the classes you want to categorize them into. You can currently start autolabeling jobs using GPT models, which let you immediately trigger an autolabeling job without any prior model training. This feature can save you the time and effort of manually labeling your documents. ++## Prerequisites ++Before you can use autolabeling with GPT, you need: +* A successfully [created project](create-project.md) with a configured Azure blob storage account. +* Text data that [has been uploaded](design-schema.md#data-preparation) to your storage account. +* Class names that are meaningful. The GPT models label documents based on the names of the classes you've provided. +* [Labeled data](tag-data.md) isn't required. +* An Azure OpenAI [resource and deployment](../../../openai/how-to/create-resource.md). ++++## Trigger an autolabeling job ++When you trigger an autolabeling job with GPT, the charges accrue to your Azure OpenAI resource based on your consumption. The charge is estimated from the number of tokens in each document being autolabeled. Refer to the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for a detailed breakdown of pricing per token of different models. ++1. From the left navigation menu, select **Data labeling**. +2. Select the **Autolabel** button under the Activity pane to the right of the page. ++ :::image type="content" source="../media/trigger-autotag.png" alt-text="A screenshot showing how to trigger an autotag job from the activity pane." lightbox="../media/trigger-autotag.png"::: ++3. Choose **Autolabel with GPT** and select **Next**. ++ :::image type="content" source="../media/choose-models.png" alt-text="A screenshot showing model choice for auto labeling." lightbox="../media/choose-models.png"::: ++4. Choose your Azure OpenAI resource and deployment. You must [create an Azure OpenAI resource and deploy a model](../../../openai/how-to/create-resource.md) in order to proceed. ++ :::image type="content" source="../media/autotag-choose-open-ai.png" alt-text="A screenshot showing how to choose OpenAI resource and deployments" lightbox="../media/autotag-choose-open-ai.png"::: + +5. Select the classes you want to be included in the autolabeling job. By default, all classes are selected. Having descriptive names for classes and including examples for each class is recommended to achieve good quality labeling with GPT. ++ :::image type="content" source="../media/choose-classes.png" alt-text="A screenshot showing which labels to be included in autotag job." lightbox="../media/choose-classes.png"::: + +6. Choose the documents you want to be automatically labeled. It's recommended to choose the unlabeled documents from the filter. ++ > [!NOTE] + > * If a document was automatically labeled, but this label was already user defined, only the user-defined label is used. 
+
+ > * You can view the documents by clicking on the document name. + + :::image type="content" source="../media/choose-files.png" alt-text="A screenshot showing which documents to be included in the autotag job." lightbox="../media/choose-files.png"::: ++7. Select **Start job** to trigger the autolabeling job. +You should be directed to the autolabeling page displaying the autolabeling jobs you've initiated. Autolabeling jobs can take anywhere from a few seconds to a few minutes, depending on the number of documents you included. ++ :::image type="content" source="../media/review-autotag.png" alt-text="A screenshot showing the review screen for an autotag job." lightbox="../media/review-autotag.png"::: +++++## Review the auto labeled documents ++When the autolabeling job is complete, you can see the output documents in the **Data labeling** page of Language Studio. Select **Review documents with autolabels** to view the documents with the **Auto labeled** filter applied. +++Documents that have been automatically classified have suggested labels in the activity pane highlighted in purple. Each suggested label has two selectors (a checkmark and a cancel icon) that allow you to accept or reject the automatic label. ++Once a label is accepted, the purple color changes to the default blue one, and the label is included in any further model training, becoming a user-defined label. ++After you accept or reject the labels for the autolabeled documents, select **Save labels** to apply the changes. ++> [!NOTE] +> * We recommend validating automatically labeled documents before accepting them. +> * All labels that were not accepted are deleted when you train your model. +++## Next steps ++* Learn more about [labeling your data](tag-data.md). |
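Because GPT-based autolabeling is billed through your Azure OpenAI resource by tokens, you may want to estimate token counts before triggering a large job. The following Python sketch is an illustration under stated assumptions: it uses the open-source `tiktoken` package with the `cl100k_base` encoding as a rough proxy for your deployed model's tokenizer, and the document contents are placeholders, so treat the result as an estimate rather than a billing figure.

```python
import tiktoken  # pip install tiktoken

# cl100k_base is the encoding used by recent GPT chat models; this is only a proxy.
encoding = tiktoken.get_encoding("cl100k_base")

documents = {
    "doc-001.txt": "The quarterly earnings call covered revenue growth and hiring plans.",
    "doc-002.txt": "Support ticket: the mobile app crashes when uploading large images.",
}

total_tokens = 0
for name, text in documents.items():
    tokens = len(encoding.encode(text))
    total_tokens += tokens
    print(f"{name}: ~{tokens} tokens")

print(f"Estimated total: ~{total_tokens} tokens across {len(documents)} documents")
```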
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md | If you want to clean up and remove a Cognitive Services subscription, you can de * [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources) -> [!div class="nextstepaction"] -> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Key-phrase-extraction&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a> + ## Next steps |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md | If you want to clean up and remove a Cognitive Services subscription, you can de * [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources) -> [!div class="nextstepaction"] -> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Language-detection&Page=quickstart&Section=Clean-up-resources" target="_target" target="_target">I ran into an issue</a> + ## Next steps |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md | If you want to clean up and remove a Cognitive Services subscription, you can de * [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources) -> [!div class="nextstepaction"] -> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=JAVA&Pillar=Language&Product=Named-entity-recognition&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a> + ## Next steps |
cognitive-services | Extract Excel Information | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/tutorials/extract-excel-information.md | The Excel file will get updated in your OneDrive account. It will look like the ## Next steps -> [!div class="nextstepaction"] -> [Call NER using the REST API or client library](../quickstart.md) + |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md | If you want to clean up and remove a Cognitive Services subscription, you can de * [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources) -> [!div class="nextstepaction"] -> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=PYTHON&Pillar=Language&Product=Personally-identifying-info&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a> + ## Next steps |
cognitive-services | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/best-practices.md | Question answering allows users to collaborate on a project. Users need access t ## Next steps -> [!div class="nextstepaction"] -> [Edit a project](../How-to/manage-knowledge-base.md) + |
cognitive-services | Confidence Score | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/confidence-score.md | If you have a project in different regions, each region uses its own Azure Cogni When no good match is found by the ranker, the confidence score of 0.0 or "None" is returned and the default response is returned. You can change the [default response](../how-to/change-default-answer.md). ## Next steps-> [!div class="nextstepaction"] -> [Best practices](./best-practices.md) + |
cognitive-services | Project Development Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/concepts/project-development-lifecycle.md | The *published project* is the version that's used in your chat bot or applicati ## Next steps -> [!div class="nextstepaction"] -> [Active learning suggestions](../tutorials/active-learning.md) |
cognitive-services | Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/analytics.md | AzureDiagnostics ## Next steps -> [!div class="nextstepaction"] -> [Choose capactiy](../../../qnamaker/how-to/improve-knowledge-base.md) + |
cognitive-services | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/best-practices.md | Punctuation is ignored in user query before sending it to the ranking stack. Ide ## Next steps -> [!div class="nextstepaction"] -> [Get started with Question Answering](../quickstart/sdk.md) |
cognitive-services | Chit Chat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/chit-chat.md | Select the **manage sources** pane, and choose your chitchat source. Your specif ## Next steps -> [!div class="nextstepaction"] -> [Import a project](./migrate-knowledge-base.md) + |
cognitive-services | Create Test Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/create-test-deploy.md | If you will not continue to test custom question answering, you can delete the a ## Next steps -> [!div class="nextstepaction"] -> [Add questions with metadata](../../../qnamaker/quickstarts/add-question-metadata-portal.md) + |
cognitive-services | Migrate Knowledge Base | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-knowledge-base.md | There is no way to move chat logs with projects. If diagnostic logs are enabled, ## Next steps <!-- TODO: Replace Link-->-> [!div class="nextstepaction"] -> [Edit a project](../../../qnamaker/How-To/edit-knowledge-base.md) + |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/language-support.md | This additional ranking is an internal working of the custom question answering' ## Next steps -> [!div class="nextstepaction"] -> [Language selection](../index.yml) + |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/overview.md | We offer quickstarts in most popular programming languages, each designed to tea ## Next steps Question answering provides everything you need to build, manage, and deploy your custom project. -> [!div class="nextstepaction"] -> [Review the latest changes](../whats-new.md) |
cognitive-services | Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/quickstart/sdk.md | If you want to clean up and remove a Cognitive Services subscription, you can de * [Portal](../../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../../cognitive-services-apis-create-account-cli.md#clean-up-resources) -> [!div class="nextstepaction"] -> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=PYTHON&Pillar=Language&Product=Question-answering&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a> + ## Explore the REST API |
cognitive-services | Active Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/active-learning.md | By adding alternate questions along with active learning, we further enrich the ## Next steps -> [!div class="nextstepaction"] -> [Improve the quality of responses with synonyms](adding-synonyms.md) + |
cognitive-services | Adding Synonyms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/adding-synonyms.md | As you can see, when `troubleshoot` was not added as a synonym, we got a low con ## Next steps -> [!div class="nextstepaction"] -> [Create projects in multiple languages](multiple-languages.md) |
cognitive-services | Bot Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/bot-service.md | If you're not going to continue to use this application, delete the associate qu ## Next steps Advance to the next article to learn how to customize your FAQ bot with multi-turn prompts.-> [!div class="nextstepaction"] -> [Multi-turn prompts](guided-conversations.md) |
cognitive-services | Guided Conversations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/guided-conversations.md | Using the editor, we add a new QnA pair with a follow-up prompt by clicking on * ## Next steps -> [!div class="nextstepaction"] -> [Enrich your knowlege base with active learning](active-learning.md) |
cognitive-services | Document Summarization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/document-summarization.md | curl -X GET https://<your-language-resource-endpoint>/language/analyze-text/jobs -H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" ``` -> [!div class="nextstepaction"] -> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Summarization&Page=quickstart&Section=Document-summarization" target="_target">I ran into an issue</a> + ### Abstractive document summarization example JSON response |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md | If you want to clean up and remove a Cognitive Services subscription, you can de * [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources) -> [!div class="nextstepaction"] -> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Summarization&Page=quickstart&Section=Clean-up-resources" target="_target" target="_target">I ran into an issue</a> + ## Next steps |
cognitive-services | Assertion Detection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/assertion-detection.md | Text Analytics for health returns assertion modifiers, which are informative att * **Subject** [Default]: the concept is associated with the subject of the text, usually the patient. * **Other**: the concept is associated with someone who is not the subject of the text. +**TEMPORAL** - provides additional temporal information for a concept detailing whether it is an occurrence related to the past, present, or future. +* **Current** [Default]: the concept is related to conditions/events that belong to the current encounter. For example, medical symptoms that have brought the patient to seek medical attention (e.g., “started having headaches 5 days prior to their arrival to the ER”). This includes newly made diagnoses, symptoms experienced during or leading to this encounter, treatments and examinations done within the encounter. +* **Past**: the concept is related to conditions, examinations, treatments, medication events that are mentioned as something that existed or happened prior to the current encounter, as might be indicated by hints like s/p, recently, ago, previously, in childhood, at age X. For example, diagnoses that were given in the past, treatments that were done, past examinations and their results, past admissions, etc. Medical background is considered as PAST. +* **Future**: the concept is related to conditions/events that are planned/scheduled/suspected to happen in the future, e.g., will be obtained, will undergo, is scheduled in two weeks from now. Assertion detection represents negated entities as a negative value for the certainty category, for example: |
cognitive-services | Relation Extraction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/relation-extraction.md | The following list presents all the recognized relations by the Text Analytics f **ABBREVIATION** +**AMOUNT_OF_SUBSTANCE_USE** + **BODY_SITE_OF_CONDITION** **BODY_SITE_OF_TREATMENT** The following list presents all the recognized relations by the Text Analytics f **FREQUENCY_OF_MEDICATION** +**FREQUENCY_OF_SUBSTANCE_USE** + **FREQUENCY_OF_TREATMENT** **MUTATION_TYPE_OF_GENE** The following list presents all the recognized relations by the Text Analytics f ## Next steps -* [How to call the Text Analytics for health](../how-to/call-api.md) +* [How to call the Text Analytics for health](../how-to/call-api.md) |
cognitive-services | Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/call-api.md | There are two ways to call the service: ## Specify the Text Analytics for health model -By default, Text Analytics for health will use the latest available AI model on your text. You can also configure your API requests to use a specific model version. The model you specify will be used to perform operations provided by the Text Analytics for health. Extraction of social determinants of health entities is supported with the new preview model version "2023-01-01-preview". +By default, Text Analytics for health will use the "2022-03-01" model version on your text. You can also configure your API requests to use a specific model version. The model you specify will be used to perform operations provided by Text Analytics for health. Extraction of social determinants of health entities along with their assertions and relationships (**only in English**) is supported with the latest preview model version "2023-04-15-preview". -| Supported Versions | latest version | +| Supported Versions | Status | |--|--|-| `2023-01-01-preview` | `2023-01-01-preview` | -| `2022-08-15-preview` | `2022-08-15-preview` | -| `2022-03-01` | `2022-03-01` | +| `2023-04-15-preview` | Preview | +| `2023-04-01` | Generally available | +| `2023-01-01-preview` | Preview | +| `2022-08-15-preview` | Preview | +| `2022-03-01` | Generally available | +## Specify the Text Analytics for health API version ++When making a Text Analytics for health API call, you must specify an API version. The latest generally available API version is "2023-04-01", which supports relationship confidence scores in the results. The latest preview API version is "2023-04-15-preview", which adds support for [temporal assertions](../concepts/assertion-detection.md). ++| Supported Versions | Status | +|--|--| +| `2023-04-15-preview`| Preview | +| `2023-04-01`| Generally available | +| `2022-10-01-preview` | Preview | +| `2022-05-01` | Generally available | ### Text Analytics for health container |
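Both versions appear on the request: the API version goes in the `api-version` query parameter, and the model version goes in the task's `modelVersion` parameter. The following Python sketch shows one way this could look against the asynchronous `analyze-text/jobs` endpoint; the endpoint, key, and exact payload shape are assumptions based on the Language REST API conventions, so check the reference for the API version you target.

```python
import json
import urllib.request

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-language-resource-key>"                                        # placeholder

body = {
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "Patient reported headaches that started five days ago."}
        ]
    },
    "tasks": [
        {
            "kind": "Healthcare",
            # modelVersion selects the AI model; api-version (below) selects the REST API surface.
            "parameters": {"modelVersion": "2023-04-15-preview"},
        }
    ],
}

request = urllib.request.Request(
    f"{endpoint}/language/analyze-text/jobs?api-version=2023-04-15-preview",
    data=json.dumps(body).encode("utf-8"),
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    # The async API returns 202 Accepted; poll the URL in the operation-location header for results.
    print(response.status, response.headers.get("operation-location"))
```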
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md | If you want to clean up and remove a Cognitive Services subscription, you can de * [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources) -> [!div class="nextstepaction"] -> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CSHARP&Pillar=Language&Product=Text-analytics-for-health&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a> + ## Next steps |
cognitive-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md | +## May 2023 ++* [Custom Named Entity Recognition (NER) Docker containers](./custom-named-entity-recognition/how-to/use-containers.md) are now available for on-premises deployment. + ## April 2023 * You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below. |
cognitive-services | Understand Embeddings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/understand-embeddings.md | Embeddings make it easier to do machine learning on large inputs representing wo ## Cosine similarity -One method of identifying similar documents is to count the number of common words between documents. Unfortunately, this approach doesn't scale since an expansion in document size is likely to lead to a greater number of common words detected even among completely disparate topics. For this reason, cosine similarity can offer a more effective alternative. +Azure OpenAI embeddings rely on cosine similarity to compute similarity between documents and a query. -From a mathematic perspective, cosine similarity measures the cosine of the angle between two vectors projected in a multi-dimensional space. This is beneficial because if two documents are far apart by Euclidean distance because of size, they could still have a smaller angle between them and therefore higher cosine similarity. For more information on cosine similarity and the [underlying formula](https://en.wikipedia.org/wiki/Cosine_similarity). +From a mathematical perspective, cosine similarity measures the cosine of the angle between two vectors projected in a multi-dimensional space. This is beneficial because if two documents are far apart by Euclidean distance because of size, they could still have a smaller angle between them and therefore higher cosine similarity. For more information about cosine similarity equations, see [this article on Wikipedia](https://en.wikipedia.org/wiki/Cosine_similarity). -Azure OpenAI embeddings rely on cosine similarity to compute similarity between documents and a query. +An alternative method of identifying similar documents is to count the number of common words between documents. Unfortunately, this approach doesn't scale since an expansion in document size is likely to lead to a greater number of common words detected even among completely disparate topics. For this reason, cosine similarity can offer a more effective alternative. ## Next steps |
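As a concrete illustration of the formula, the following dependency-free Python sketch computes cosine similarity between two small vectors. The vectors here are made up; real embedding vectors returned by the service have far more dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """cos(theta) = (a . b) / (||a|| * ||b||)"""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for real vectors.
query_vector = [0.1, 0.8, 0.3]
document_vector = [0.2, 0.7, 0.4]

print(round(cosine_similarity(query_vector, document_vector), 4))
```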
communication-services | Meeting Interop Features Inline Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-interop/meeting-interop-features-inline-image.md | -# Tutorial: Enable inline interoperability features in your Chat app +# Tutorial: Enable inline image support in your Chat app ## Add inline image support The Chat SDK is designed to work with Microsoft Teams seamlessly. Specifically, Chat SDK provides a solution to receive inline images sent by users from Microsoft Teams. Currently this feature is only available in the Chat SDK for JavaScript. -The Chat SDK for JavaScript provides `previewUrl` and `url` for each inline images. Please note that some GIF images fetched from `previewUrl` might not be animated and a static preview image would be returned instead. Developers are expected to use the `url` if the intention is to fetch animated images only. +The Chat SDK for JavaScript provides `previewUrl` and `url` for each inline image. Please note that some GIF images fetched from `previewUrl` might not be animated and a static preview image would be returned instead. Developers are expected to use the `url` if the intention is to fetch animated images only. [!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)] |
confidential-computing | Attestation Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/attestation-solutions.md | -Computing is an essential part of our daily lives, powering everything from our smartphones to critical infrastructure. However, increasing regulatory environments, prevalence of cyberattacks, and growing sophistication of attackers have made it difficult to trust the authenticity and integrity of the computing technologies we depend on. Attestation, a technique to verify the software and hardware components of a system, is a critical process for establishing trust and ensuring that computing technologies we depend on are trustworthy. +Computing is an essential part of our daily lives, powering everything from our smartphones to critical infrastructure. However, increasing regulatory environments, prevalence of cyberattacks, and growing sophistication of attackers have made it difficult to trust the authenticity and integrity of the computing technologies we depend on. Attestation, a technique to verify the software and hardware components of a system, is a critical process for establishing trust and ensuring that computing technologies we rely on are trustworthy. In this document, we are looking at what attestation is, types of attestation Microsoft offers today, and how customers can utilize these types of attestation scenarios in Microsoft solutions. In remote attestation, "one peer (the "Attester") produces believable informat ### Passport Model #### Passport Model - Immigration Desk-1. A Citizen wants a passport to travel to a Foreign Country. The Citizen submits evidence requirements to their Host Country. -2. Host country receives the evidence of policy compliance from the individual and verifies whether the supplied evidence proves that the individual complies with the policies for being issued a passport. +1. A Citizen wants a passport to travel to a Foreign Country/Region. The Citizen submits evidence requirements to their Host Country/Region. +2. Host country/region receives the evidence of policy compliance from the individual and verifies whether the supplied evidence proves that the individual complies with the policies for being issued a passport. - Birth certificate is valid and hasn't been altered. - Issuer of the birth certificate is trusted - Individual isn't part of a restricted list-3. If the Host Country decides the evidence meets their policies, the Host Country will issue a passport for a Citizen. -4. The Citizen travels to a foreign nation, but first must present their passport to the Foreign Country Border Patrol Agent for evaluation. -5. The Foreign Country Border Patrol Agent checks a series of rules on the passport before trusting it +3. If the Host Country/Region decides the evidence meets their policies, the Host Country/Region will issue a passport for a Citizen. +4. The Citizen travels to a foreign nation, but first must present their passport to the Foreign Country/Region Border Patrol Agent for evaluation. +5. The Foreign Country/Region Border Patrol Agent checks a series of rules on the passport before trusting it - Passport is authentic and hasn't been altered.- - Passport was produced by a trusted country. + - Passport was produced by a trusted country/region. - Passport isn't expired or revoked. - Passport conforms to policy of a Visa or age requirement.-6. The Foreign Country Border Patrol Agent approves of the Passport and the Citizen can enter the Foreign Country. +6. 
The Foreign Country/Region Border Patrol Agent approves of the Passport and the Citizen can enter the Foreign Country/Region.  |
container-registry | Container Registry Auto Purge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auto-purge.md | For more information about image storage, see [Container image storage in Azure <!-- LINKS - Internal --> [azure-cli-install]: /cli/azure/install-azure-cli-[az-acr-run]: /cli/azure/acr#az_acr_run +[az-acr-run]: /cli/azure/acr#az-acr-run [az-acr-task-create]: /cli/azure/acr/task#az-acr-task-create [az-acr-task-show]: /cli/azure/acr/task#az-acr-task-show |
container-registry | Container Registry Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md | az group delete --name $RESOURCE_GROUP To pull content from a registry with private link enabled, clients must allow access to the registry REST endpoint, as well as all regional data endpoints. The client proxy or firewall must allow access to -REST endpoint: `.azurecr.io` -Data endpoint(s): `..data.azurecr.io` +REST endpoint: `{REGISTRY_NAME}.azurecr.io` +Data endpoint(s): `{REGISTRY_NAME}.{REGISTRY_LOCATION}.data.azurecr.io` For a geo-replicated registry, customers need to configure access to the data endpoint for each regional replica. |
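As a concrete illustration, for a hypothetical registry named `myregistry` that is geo-replicated to East US and West Europe, the allow list expands as in the following Python sketch (the registry name and regions are placeholders).

```python
registry_name = "myregistry"                  # hypothetical registry name
replica_locations = ["eastus", "westeurope"]  # regions the registry is replicated to

# One REST endpoint plus one data endpoint per regional replica.
allow_list = [f"{registry_name}.azurecr.io"] + [
    f"{registry_name}.{location}.data.azurecr.io" for location in replica_locations
]

for hostname in allow_list:
    print(hostname)
# myregistry.azurecr.io
# myregistry.eastus.data.azurecr.io
# myregistry.westeurope.data.azurecr.io
```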
cosmos-db | Continuous Backup Restore Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md | However, there could be scenarios where you don't know the exact time of acciden ## Permissions -Azure Cosmos DB allows you to isolate and restrict the restore permissions for continuous backup account to a specific role or a principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. To learn more, see the [Permissions](continuous-backup-restore-permissions.md) article. +Azure Cosmos DB allows you to isolate and restrict the restore permissions for continuous backup account to a specific role or a principal. To learn more, see the [Permissions](continuous-backup-restore-permissions.md) article. ## <a id="continuous-backup-pricing"></a>Pricing Currently the point in time restore functionality has the following limitations: * [Migrate to an account from periodic backup to continuous backup](migrate-continuous-backup.md). * [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode. * [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)++ |
cosmos-db | Bulk Executor Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/bulk-executor-dotnet.md | Modify the parameters, as described in the following table: | Parameter|Description | |||-|`ConnectionString`| Your .NET SDK endpoint, which you'll find in the **Overview** section of your Azure Cosmos DB for Gremlin database account. It's formatted as `https://your-graph-database-account.documents.azure.com:443/`. -`DatabaseName`, `ContainerName`|The names of the target database and container.| +|`ConnectionString`| Your service connection string, which you'll find in the **Keys** section of your Azure Cosmos DB for Gremlin account. It's formatted as `AccountEndpoint=https://<account-name>.documents.azure.com:443/;AccountKey=<account-key>;`. | +|`DatabaseName`, `ContainerName`|The names of the target database and container.| |`DocumentsToInsert`| The number of documents to be generated (relevant only to synthetic data).| |`PartitionKey` | Ensures that a partition key is specified with each document during data ingestion.| |`NumberOfRUs` | Is relevant only if a container doesn't already exist and it needs to be created during execution.| |
cosmos-db | Indexing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/indexing.md | You could create the same single field index on `name` in the Azure portal: One query uses multiple single field indexes where available. You can create up to 500 single field indexes per collection. ### Compound indexes (MongoDB server version 3.6+)-In the API for MongoDB, compound indexes are **required** if your query needs the ability to sort on multiple fields at once. For queries with multiple filters that don't need to sort, create multiple single field indexes instead of a compound index to save on indexing costs. -A compound index or single field indexes for each field in the compound index will result in the same performance for filtering in queries. +In the API for MongoDB, compound indexes are **required** if your query needs the ability to sort on multiple fields at once. For queries with multiple filters that don't need to sort, create multiple single field indexes instead of a compound index to save on indexing costs. +Either a compound index or single field indexes for each field in the compound index result in the same performance for filtering in queries. -Compounded indexes on nested fields are not supported by default due to limiations with arrays. If your nested field does not contain an array, the index will work as intended. If your nested field contains an array (anywhere on the path), that value will be ignored in the index. +Compound indexes on nested fields aren't supported by default due to limitations with arrays. If your nested field doesn't contain an array, the index works as intended. If your nested field contains an array (anywhere on the path), that value is ignored in the index. -For example a compound index containing people.tom.age will work in this case since there's no array on the path: -```javascript -{ "people": { "tom": { "age": "25" }, "mark": { "age": "30" } } } ++As an example, a compound index containing `people.dylan.age` works in this case since there's no array on the path: ++```json +{ + "people": { + "dylan": { + "name": "Dylan", + "age": "25" + }, + "reed": { + "name": "Reed", + "age": "30" + } + } +} ```-but won't won't work in this case since there's an array in the path: -```javascript -{ "people": { "tom": [ { "age": "25" } ], "mark": [ { "age": "30" } ] } } ++This same compound index doesn't work in this case since there's an array in the path: ++```json +{ + "people": [ + { + "name": "Dylan", + "age": "25" + }, + { + "name": "Reed", + "age": "30" + } + ] +} ``` This feature can be enabled for your database account by [enabling the 'EnableUniqueCompoundNestedDocs' capability](how-to-configure-capabilities.md). |
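For illustration, creating a compound index over the nested, non-array path shown above could look like the following PyMongo sketch. The connection string, database, and collection names are placeholders, and the field choice simply mirrors the example document.

```python
from pymongo import ASCENDING, DESCENDING, MongoClient

# Placeholder connection string for an Azure Cosmos DB for MongoDB account.
client = MongoClient(
    "mongodb://<account-name>:<account-key>@<account-name>.mongo.cosmos.azure.com:10255/?ssl=true"
)
collection = client["my-database"]["my-collection"]

# Compound index over a nested (non-array) field plus a top-level field,
# which enables sorting on both fields at once.
index_name = collection.create_index(
    [("people.dylan.age", ASCENDING), ("name", DESCENDING)]
)
print(f"Created index: {index_name}")
```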
cosmos-db | Sdk Java Spring Data V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v3.md | You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Sp ### Spring Boot Version Support -This project supports multiple Spring Boot Versions. Visit [spring boot support policy](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-boot-support-policy) for more information. Maven users can inherit from the `spring-boot-starter-parent` project to obtain a dependency management section to let Spring manage the versions for dependencies. Visit [spring boot version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-boot-version-support) for more information. +This project supports multiple Spring Boot Versions. Visit [spring boot support policy](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/azure-spring-data-cosmos#spring-boot-support-policy) for more information. Maven users can inherit from the `spring-boot-starter-parent` project to obtain a dependency management section to let Spring manage the versions for dependencies. Visit [spring boot version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/azure-spring-data-cosmos#spring-boot-version-support) for more information. ### Spring Data Version Support -This project supports different spring-data-commons versions. Visit [spring data version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-data-version-support) for more information. +This project supports different spring-data-commons versions. Visit [spring data version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/azure-spring-data-cosmos#spring-data-version-support) for more information. ### Which Version of Azure Spring Data Azure Cosmos DB Should I Use -Azure Spring Data Azure Cosmos DB library supports multiple versions of Spring Boot / Spring Cloud. Refer to [azure Spring Data Azure Cosmos DB version mapping](https://github.com/Azure/azure-sdk-for-jav#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Data Azure Cosmos DB to use with Spring Boot / Spring Cloud version. +Azure Spring Data Azure Cosmos DB library supports multiple versions of Spring Boot / Spring Cloud. Refer to [azure Spring Data Azure Cosmos DB version mapping](https://github.com/Azure/azure-sdk-for-jav#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Data Azure Cosmos DB to use with Spring Boot / Spring Cloud version. > [!IMPORTANT] > These release notes are for version 3 of Spring Data Azure Cosmos DB. 
Azure Spring Data Azure Cosmos DB library supports multiple versions of Spring B | Content | Link | |||-| **Release notes** | [Release notes for Spring Data Azure Cosmos DB SDK v3](https://github.com/Azure/azure-sdk-for-jav) | -| **SDK Documentation** | [Azure Spring Data Azure Cosmos DB SDK v3 documentation](https://github.com/Azure/azure-sdk-for-jav) | +| **Release notes** | [Release notes for Spring Data Azure Cosmos DB SDK v3](https://github.com/Azure/azure-sdk-for-jav) | +| **SDK Documentation** | [Azure Spring Data Azure Cosmos DB SDK v3 documentation](https://github.com/Azure/azure-sdk-for-jav) | | **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos) | | **API documentation** | [Java API reference documentation](/java/api/overview/azure/spring-data-cosmos-readme?view=azure-java-stable&preserve-view=true) |-| **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-spring-data-cosmos) | +| **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/azure-spring-data-cosmos) | | **Get started** | [Quickstart: Build a Spring Data Azure Cosmos DB app to manage Azure Cosmos DB for NoSQL data](./quickstart-java-spring-data.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started) | | **Basic code samples** | [Azure Cosmos DB: Spring Data Azure Cosmos DB examples for the API for NoSQL](samples-java-spring-data.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples)| | **Performance tips**| [Performance tips for Java SDK v4 (applicable to Spring Data)](performance-tips-java-sdk-v4.md)| Azure Spring Data Azure Cosmos DB library supports multiple versions of Spring B | **Azure Cosmos DB workshops and labs** |[Azure Cosmos DB workshops home page](https://aka.ms/cosmosworkshop) ## Release history-Release history is maintained in the azure-sdk-for-java repo, for detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-jav). +Release history is maintained in the azure-sdk-for-java repo, for detailed list of releases, see the [changelog file](https://github.com/Azure/azure-sdk-for-jav). ## Recommended version |
cosmos-db | Periodic Backup Restore Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/periodic-backup-restore-introduction.md | The following steps show how Azure Cosmos DB performs data backup: - The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database. +With the periodic backup mode, the backups are taken only in the write region of your Azure Cosmos DB account. The restore action always restores data into a new account that's located in the write region of the source account. ++## What is restored into a new account? ++- You can choose to restore any combination of provisioned throughput containers, shared throughput database, or the entire account. +- The restore action restores all data and its index properties into a new account. +- The duration of the restore will depend on the amount of data that needs to be restored. +- The newly restored database account's consistency setting will be the same as the source database account's consistency settings. ++## What isn't restored? ++The following configurations aren't restored after the point-in-time recovery. +- A subset of containers under a shared throughput database cannot be restored. The entire database can be restored as a whole. +- Database account keys. The restored account will be generated with new database account keys. +- Firewall, VNET, data plane RBAC, or private endpoint settings. Enabling/disabling public network access can be provided as an input to the restore request. +- Regions. The restored account will only be a single-region account, which is the write region of the source account. +- Stored procedures, triggers, and UDFs. +- Role-based access control assignments. These will need to be reassigned. +- Documents that were deleted because of expired TTL. +- Analytical data when Azure Synapse Link is enabled. +- Materialized views. ++Some of these configurations can be added to the restored account after the restore is completed. + ## Azure Cosmos DB Backup with Azure Synapse Link For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Azure Synapse Link is enabled, Azure Cosmos DB continues to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store isn't supported at this time. With Azure Cosmos DB API for NoSQL accounts, you can also maintain your own back Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage solution of your choice. + ### Azure Cosmos DB change feed Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage. Use Azure Cosmos DB [change feed](change-feed.md) to read data periodically for > [!div class="nextstepaction"] > [Periodic backup storage redundancy](periodic-backup-storage-redundancy.md)+ |
cosmos-db | Concepts Performance Tuning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-performance-tuning.md | mind there are helper libraries for several popular application frameworks that make it easier. Here are instructions: * [Ruby on Rails](https://docs.citusdata.com/en/stable/develop/migration_mt_ror.html),-* [Django](https://docs.citusdata.com/en/stable/develop/migration_mt_django.html), +* [Django](https://django-multitenant.readthedocs.io/en/latest/migration_mt_django.html), * [ASP.NET](https://docs.citusdata.com/en/stable/develop/migration_mt_asp.html), * [Java Hibernate](https://www.citusdata.com/blog/2018/02/13/using-hibernate-and-spring-to-build-multitenant-java-apps/). |
cosmos-db | Quickstart Build Scalable Apps Model Multi Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-build-scalable-apps-model-multi-tenant.md | There are helper libraries for several popular application frameworks that make it easy to include a tenant ID in queries. Here are instructions: * [Ruby on Rails instructions](https://docs.citusdata.com/en/stable/develop/migration_mt_ror.html)-* [Django instructions](https://docs.citusdata.com/en/stable/develop/migration_mt_django.html) +* [Django instructions](https://django-multitenant.readthedocs.io/en/latest/migration_mt_django.html) * [ASP.NET](https://docs.citusdata.com/en/stable/develop/migration_mt_asp.html) * [Java Hibernate](https://www.citusdata.com/blog/2018/02/13/using-hibernate-and-spring-to-build-multitenant-java-apps/) |
data-factory | Data Factory Service Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-service-identity.md | If you find your service instance doesn't have a managed identity associated fol > >- Managed identity cannot be modified. Updating a service instance which already has a managed identity won't have any impact, and the managed identity is kept unchanged. >- If you update a service instance which already has a managed identity without specifying the "identity" parameter in the factory objects or without specifying "identity" section in REST request body, you will get an error.->- When you delete a service instance, the associated managed identity will be deleted along. +>- When you delete a service instance, the associated managed identity will also be deleted. #### Generate system-assigned managed identity using PowerShell See the following topics that introduce when and how to use managed identity: See [Managed Identities for Azure Resources Overview](../active-directory/managed-identities-azure-resources/overview.md) for more background on managed identities for Azure resources, on which managed identity in Azure Data Factory is based. -See [Limitations](../active-directory/managed-identities-azure-resources/managed-identities-faq.md#limitations) of managed identities, which also apply to managed identities in Azure Data Factory. +See [Limitations](../active-directory/managed-identities-azure-resources/managed-identities-faq.md#limitations) of managed identities, which also apply to managed identities in Azure Data Factory. |
data-factory | Source Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/source-control.md | When creating a new data factory in the Azure portal, you can configure Git repo Visual authoring with Azure Repos Git integration supports source control and collaboration for work on your data factory pipelines. You can associate a data factory with an Azure Repos Git organization repository for source control, collaboration, versioning, and so on. A single Azure Repos Git organization can have multiple repositories, but an Azure Repos Git repository can be associated with only one data factory. If you don't have an Azure Repos organization or repository, follow [these instructions](/azure/devops/organizations/accounts/create-organization?view=azure-devops&preserve-view=true) to create your resources. ++ > [!NOTE] > You can store script and data files in an Azure Repos Git repository. However, you have to upload the files manually to Azure Storage. A data factory pipeline doesn't automatically upload script or data files stored in an Azure Repos Git repository to Azure Storage. Visual authoring with GitHub integration supports source control and collaborati The GitHub integration with Data Factory supports public GitHub (that is, [https://github.com](https://github.com)), GitHub Enterprise Cloud, and GitHub Enterprise Server. You can use both public and private GitHub repositories with Data Factory as long as you have read and write permission to the repository in GitHub. To connect with a public repository, select the **Use Link Repository** option, as public repositories aren't visible in the dropdown menu of **Repository name**. ADF’s GitHub Enterprise Server integration only works with [officially supported versions of GitHub Enterprise Server](https://docs.github.com/en/enterprise-server@3.1/admin/all-releases). +For repositories owned by a GitHub organization account, the admin has to authorize the ADF app. For repositories owned by a GitHub user account, a user with at least collaborator permission can authorize the ADF app. This doesn't give the ADF app direct access to all the repositories owned by the account or organization; it only allows the ADF app to act on behalf of the user to access repositories based on the user's access permissions. + > [!NOTE] > If you are using Microsoft Edge, GitHub Enterprise versions earlier than 2.1.4 don't work with it. GitHub officially supports >=3.0 and these should all be fine for ADF. As GitHub changes its minimum version, ADF supported versions will also change. |
defender-for-cloud | Adaptive Application Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md | Title: Adaptive application controls in Microsoft Defender for Cloud description: This document helps you use adaptive application control in Microsoft Defender for Cloud to create an allowlist of applications running for Azure machines.--++ Last updated 02/06/2023 |
defender-for-cloud | Adaptive Network Hardening | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md | Title: Adaptive network hardening in Microsoft Defender for Cloud description: Learn how to use actual traffic patterns to harden your network security groups (NSG) rules and further improve your security posture.--++ Last updated 12/13/2022 |
defender-for-cloud | Alert Validation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md | Title: Alert validation in Microsoft Defender for Cloud description: Learn how to validate that your security alerts are correctly configured in Microsoft Defender for Cloud Last updated 10/06/2022--++ # Alert validation in Microsoft Defender for Cloud |
defender-for-cloud | Alerts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-overview.md | Title: Security alerts and incidents in Microsoft Defender for Cloud description: Learn how Microsoft Defender for Cloud generates security alerts and correlates them into incidents. --++ Last updated 11/29/2022 |
defender-for-cloud | Alerts Schemas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-schemas.md | Title: Schemas for the Microsoft Defender for Cloud alerts description: This article describes the different schemas used by Microsoft Defender for Cloud for security alerts. --++ Last updated 11/09/2021 |
defender-for-cloud | Alerts Suppression Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md | Title: Suppressing false positives or other unwanted security alerts - Microsoft description: This article explains how to use Microsoft Defender for Cloud's suppression rules to hide unwanted security alerts, such as false positives Last updated 01/09/2023 --++ # Suppress alerts from Microsoft Defender for Cloud |
defender-for-cloud | Apply Security Baseline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/apply-security-baseline.md | Title: Harden your Windows and Linux OS with Azure security baseline and Microso description: Learn how Microsoft Defender for Cloud uses the guest configuration to compare your OS hardening with the guidance from Microsoft cloud security benchmark --++ Last updated 11/09/2021 # Apply Azure security baselines to machines |
defender-for-cloud | Asset Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md | Title: Using the asset inventory to view your security posture with Microsoft De description: Learn about Microsoft Defender for Cloud's asset management experience providing full visibility over all your Defender for Cloud monitored resources. Last updated 01/03/2023 --++ # Use asset inventory to manage your resources' security posture |
defender-for-cloud | Auto Deploy Azure Monitoring Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md | Title: Deploy the Azure Monitor Agent with Microsoft Defender for Cloud description: Learn how to deploy the Azure Monitor Agent on your Azure, multicloud, and on-premises servers to support Microsoft Defender for Cloud protections.--++ Last updated 03/01/2023 |
defender-for-cloud | Concept Agentless Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md | Title: Agentless scanning of cloud machines using Microsoft Defender for Cloud description: Learn how Defender for Cloud can gather information about your multicloud compute resources without installing an agent on your machines.--++ Last updated 09/28/2022 |
defender-for-cloud | Concept Data Security Posture Prepare | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md | Title: Support and prerequisites for data-aware security posture - Microsoft Defender for Cloud description: Learn about the requirements for data-aware security posture in Microsoft Defender for Cloud--++ Last updated 03/23/2023 The table summarizes support for data-aware posture management. | What Azure data resources can I discover? | [Block blob](../storage/blobs/storage-blobs-introduction.md) storage accounts in Azure Storage v1/v2<br/><br/> Azure Data Lake Storage Gen2<br/><br/>Storage accounts behind private networks are supported.<br/><br/> Storage accounts encrypted with a customer-managed server-side key are supported.<br/><br/> Accounts aren't supported if any of these settings are enabled: [Public network access is disabled](../storage/common/storage-network-security.md#change-the-default-network-access-rule); Storage account is defined as [Azure DNS Zone](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466); The storage account endpoint has a [custom domain mapped to it](../storage/blobs/storage-custom-domain-name.md). What AWS data resources can I discover? | AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.-What permissions do I need for discovery? | Storage account: Subscription Owner or Microsoft.Storage/storageaccounts/{read/write} and Microsoft.Authorization/roleAssignments/{read/write/delete}<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role). +What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> Microsoft.Authorization/roleAssignments/* (read, write, delete) **and** Microsoft.Security/pricings/* (read, write, delete) **and** Microsoft.Security/pricings/SecurityOperators (read, write)<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role). What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt., xml, .parquet, .avro, .orc. What Azure regions are supported? | You can discover Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West: Jio India West: North Central US; North Europe; Norway East; South Africa North: South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West: West Central US; West Europe; West US, West US3.<br/><br/> Discovery is done locally in the region. What AWS regions are supported? | Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California): US West (Oregon).<br/><br/> Discovery is done locally in the region. |
defender-for-cloud | Concept Data Security Posture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md | Title: Data-aware security posture in Microsoft Defender for Cloud description: Learn how Defender for Cloud helps improve data security posture in a multicloud environment.--++ Last updated 03/09/2023 |
defender-for-cloud | Concept Defender For Cosmos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md | description: Learn about the benefits and features of Microsoft Defender for Azu --++ Last updated 11/27/2022 |
defender-for-cloud | Configure Email Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-email-notifications.md | Title: Configure email notifications for Microsoft Defender for Cloud alerts description: Learn how to fine-tune the Microsoft Defender for Cloud security alert emails. --++ Last updated 11/09/2021 |
defender-for-cloud | Continuous Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md | Title: Continuous export can send Microsoft Defender for Cloud's alerts and recommendations to Log Analytics or Azure Event Hubs description: Learn how to configure continuous export of security alerts and recommendations to Log Analytics or Azure Event Hubs--++ Last updated 01/19/2023 |
defender-for-cloud | Cross Tenant Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/cross-tenant-management.md | description: Learn how to set up cross-tenant management to manage the security documentationcenter: na ms.assetid: 7d51291a-4b00-4e68-b872-0808b60e6d9c --++ na Last updated 11/09/2021 |
defender-for-cloud | Custom Dashboards Azure Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md | Title: Workbooks gallery in Microsoft Defender for Cloud description: Learn how to create rich, interactive reports of your Microsoft Defender for Cloud data with the integrated Azure Monitor Workbooks gallery --++ Last updated 02/02/2023 |
defender-for-cloud | Data Security Posture Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-posture-enable.md | Title: Enable data-aware security posture for Azure datastores - Microsoft Defender for Cloud description: Learn how to enable data-aware security posture in Defender for Cloud--++ Last updated 04/13/2023 |
defender-for-cloud | Data Security Review Risks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-review-risks.md | Title: Explore risks to sensitive data in Microsoft Defender for Cloud description: Learn how to use attack paths and security explorer to find and remediate sensitive data risks.--++ Last updated 03/14/2023 |
defender-for-cloud | Data Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security.md | Title: Microsoft Defender for Cloud data security description: Learn how data is managed and safeguarded in Microsoft Defender for Cloud. --++ Last updated 11/09/2021 # Microsoft Defender for Cloud data security |
defender-for-cloud | Data Sensitivity Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-sensitivity-settings.md | Title: Customize data sensitivity settings in Microsoft Defender for Cloud description: Learn how to customize data sensitivity settings in Defender for Cloud--++ Last updated 03/22/2023 |
defender-for-cloud | Defender For Apis Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-introduction.md | Defender for APIs currently provides security for APIs published in Azure API Ma - **Security findings**: Analyze API security findings, including information about external, unused, or unauthenticated APIs. - **Security posture**: Review and implement security recommendations to improve API security posture, and harden at-risk surfaces. - **API data classification**: Classify APIs that receive or respond with sensitive data, to support risk prioritization.-- **Real time threat detection**: Ingest API traffic and monitor it with runtime anomaly detection, using machine-learning and rule-based analytics, to detect API security threats, including the [OWASP Top 10](https://owasp.org/www-project-top-ten/) critical threats.+- **Threat detection**: Ingest API traffic and monitor it with runtime anomaly detection, using machine-learning and rule-based analytics, to detect API security threats, including the [OWASP API Top 10](https://owasp.org/www-project-api-security/) critical threats. - **Defender CSPM integration**: Integrate with Cloud Security Graph in [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) for API visibility and risk assessment across your organization. - **Azure API Management integration**: With the Defender for APIs plan enabled, you can receive API security recommendations and alerts in the Azure API Management portal. - **SIEM integration**: Integrate with security information and event management (SIEM) systems, making it easier for security teams to investigate with existing threat response workflows. [Learn more](tutorial-security-incident.md). Review the inventory and security findings for onboarded APIs in the Defender fo :::image type="content" source="media/defender-for-apis-introduction/inventory.png" alt-text="Screenshot that shows the onboarded API inventory."::: -You can drill down into API collection to review security findings for onboarded API endpoints. +You can drill down into the API collection to review security findings for onboarded API endpoints. :::image type="content" source="media/defender-for-apis-introduction/endpoint-details.png" alt-text="Screenshot for reviewing the API endpoint details."::: API endpoint information includes: - **Endpoint name**: The name of API endpoint/operation as defined in Azure API Management.-- **Endpoint**: The URL path of the API endpoints, and the HTTPS method. +- **Endpoint**: The URL path of the API endpoints, and the HTTP method. Last called data (UTC): The date when API traffic was last observed going to/from API endpoints (in UTC time zone). -- **30 days unused**: Shows whether API endpoints have received any API call traffic in the last 30 days. APIs that haven't received any traffic in the last 30 days are marked as Inactive. -- **Authentication**: Shows when a monitored API endpoint has no authentication. Defender for APIs assesses the authentication state using the subscription keys, JSON web token (JWT), and client certificate configured in Azure API Management. If none of these authentication mechanisms are present or executed, the API is marked as "unauthenticated".+- **30 days unused**: Shows whether API endpoints have received any API call traffic in the last 30 days. APIs that haven't received any traffic in the last 30 days are marked as *Inactive*. 
+- **Authentication**: Shows when a monitored API endpoint has no authentication. Defender for APIs assesses the authentication state using the subscription keys, JSON web token (JWT), and client certificate configured in Azure API Management. If none of these authentication mechanisms are present or executed, the API is marked as *unauthenticated*. - **External traffic observed date**: The date when external API traffic was observed going to/from the API endpoint. - **Data classification**: Classifies API request and response bodies based on supported data types. Defender for API provides a number of recommendations, including recommendations -## Detecting runtime threats +## Detecting threats -Defender for APIs monitors runtime traffic and threat intelligence feeds, and issues threat detection alerts. API alerts detect the top 10 OWASP threats, data exfiltration, volumetric attacks, anomalous and suspicious API parameters, traffic and IP access anomalies, and usage patterns. +Defender for APIs monitors runtime traffic and threat intelligence feeds, and issues threat detection alerts. API alerts detect the top 10 OWASP API threats, data exfiltration, volumetric attacks, anomalous and suspicious API parameters, traffic and IP access anomalies, and usage patterns. [Review the security alerts reference](alerts-reference.md). ## Responding to threats -Act on recommendations and alerts to mitigate threats and risk. Defender for Cloud alerts and recommendations can be exported into SIEM systems such as Microsoft Sentinel, for investigation within existing threat response workflows for fast and efficient remediation. [Learn more](export-to-siem.md). +Act on alerts to mitigate threats and risk. Defender for Cloud alerts and recommendations can be exported into SIEM systems such as Microsoft Sentinel, for investigation within existing threat response workflows for fast and efficient remediation. [Learn more](export-to-siem.md). ## Investigating Cloud Security Graph insights - [Cloud Security Graph](concept-attack-path.md) in the Defender CSPM plan analyses assets and connections across your organization, to expose risks, vulnerabilities, and possible lateral movement paths. **When Defender for APIs is enabled together with the Defender CSPM plan**, you can use Cloud Security Explorer to proactively and efficiently query your organizational information to locate, identify, and remediate API assets, security issues, and risks. ## Next steps -[Review support and prerequisites](defender-for-apis-prepare.md) for Defender for APIs deployment. +[Review support and prerequisites](defender-for-apis-prepare.md) for Defender for APIs deployment. |
defender-for-cloud | Defender For Apis Posture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-posture.md | This article describes how to investigate API security findings, alerts, and sec 1. In the API collection page, to drill down into an API endpoint, select the ellipses (...) > **View resource**. - :::image type="content" source="media/defender-for-apis-posture/view-resource.png" alt-text="Screenshot that shows an API endpoint details." lightbox="media/defender-for-apis-posture/view-resource.png"::: + :::image type="content" source="media/defender-for-apis-posture/view-resource.png" alt-text="Screenshot that shows API endpoint details." lightbox="media/defender-for-apis-posture/view-resource.png"::: 1. In the **Resource health** page, review the endpoint settings. 1. In the **Recommendations** tab, review recommendation details and status.-1. In the **Alerts** tab review security alerts for the endpoint. Defender for Endpoint monitors API traffic to and from endpoints, to provide runtime protection against suspicious behavior and malicious attacks. +1. In the **Alerts** tab, review security alerts for the endpoint. Defender for Endpoint monitors API traffic to and from endpoints, to provide runtime protection against suspicious behavior and malicious attacks. :::image type="content" source="media/defender-for-apis-posture/resource-health.png" alt-text="Screenshot that shows the health of an endpoint." lightbox="media/defender-for-apis-posture/resource-health.png"::: This article describes how to investigate API security findings, alerts, and sec In Defender for Cloud you can use sample alerts to evaluate your Defender for Cloud plans, and validate your security configuration. [Follow these instructions](alert-validation.md#generate-sample-security-alerts) to set up sample alerts, and select the relevant APIs within your subscriptions. +## Simulate alerts ++To see the alert process in action, you can simulate an action that triggers a Defender for APIs alert. [Follow the instructions in our Tech Community blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/validating-microsoft-defender-for-apis-alerts/ba-p/3803874) to do that. + ## Build queries in Cloud Security Explorer In Defender CSPM, [Cloud Security Graph](concept-attack-path.md) collects data to provide a map of assets and connections across organization, to expose security risks, vulnerabilities, and possible lateral movement paths. -When the Defender CSPM plan is enabled together with Defender for APIs, you can use Cloud Security Explorer to query Cloud Security Graph, to identify, review and analyze API security risks across your organization. +When the Defender CSPM plan is enabled together with Defender for APIs, you can use Cloud Security Explorer to identify, review and analyze API security risks across your organization. 1. In the Defender for Cloud portal, select **Cloud Security Explorer**.-1. You can build your own query, or select the API query template. - 1. To build your own query, in **What would you like to search?** select the **APIs** category. You can query: - - API collections that contain one or more API endpoints. - - API endpoints for Azure API Management operations. -- :::image type="content" source="media/defender-for-apis-posture/api-insights.png" alt-text="Screenshot that shows the predefined API query." 
lightbox="media/defender-for-apis-posture/api-insights.png"::: - - The search resultS display each API resource with its associated insights, so that you can review, prioritize, and fix any issues. -- Alternatively, you can select the predefined query **Unauthenticated API endpoints containing sensitive data are outside the virtual network** > **Open query**. The query returns all unauthenticated API endpoints that contain sensitive data and aren't part of the Azure API management network. - - :::image type="content" source="media/defender-for-apis-posture/predefined-query.png" alt-text="Screenshot that shows a predefined API query."::: - +1. In **What would you like to search?** select the **APIs** category. +1. Review the search results so that you can review, prioritize, and fix any API issues. + ## Next steps |
defender-for-cloud | Defender For App Service Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-app-service-introduction.md | Title: Microsoft Defender for App Service - the benefits and features description: Learn about the capabilities of Microsoft Defender for App Service and how to enable it on your subscription Last updated 01/10/2023 --++ # Overview of Defender for App Service to protect your Azure App Service web apps and APIs |
defender-for-cloud | Defender For Container Registries Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md | description: Learn about the benefits and features of Microsoft Defender for con Last updated 04/07/2022 --++ # Introduction to Microsoft Defender for container registries (deprecated) |
defender-for-cloud | Defender For Containers Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md | Title: Container security architecture in Microsoft Defender for Cloud description: Learn about the architecture of Microsoft Defender for Containers for each container platform--++ Last updated 06/19/2022 |
defender-for-cloud | Defender For Containers Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md | Title: How to enable Microsoft Defender for Containers in Microsoft Defender for Cloud description: Enable the container protections of Microsoft Defender for Containers --++ zone_pivot_groups: k8s-host Last updated 10/30/2022 |
defender-for-cloud | Defender For Containers Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md | Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers --++ Last updated 09/11/2022 |
defender-for-cloud | Defender For Containers Vulnerability Assessment Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md | Title: Identify vulnerabilities in Azure Container Registry with Microsoft Defender for Cloud description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities.-- Previously updated : 01/11/2023++ Last updated : 05/09/2023 The triggers for an image scan are: - Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster. -When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete. -+Once a scan is triggered, scan results will typically appear in the Defender for Cloud recommendations after a few minutes, but in some cases it may take up to an hour. ## Prerequisites Before you can scan your ACR images: To create a rule: 1. To view, override, or delete a rule: 1. Select **Disable rule**.- 1. From the scope list, subscriptions with active rules show as **Rule applied**. - :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Modify or delete an existing rule."::: + 1. From the scope list, subscriptions with active rules appear as **Rule applied**. + :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Screenshot showing the scope list."::: 1. To view or delete the rule, select the ellipsis menu ("..."). ## View vulnerabilities for images running on your AKS clusters Defender for Containers pulls the image from the registry and runs it in an isol Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts. +### What is the difference between Not Applicable Resources and Unverified Resources? ++- **Not applicable resources** are resources for which the recommendation can't give a definitive answer. The not applicable tab includes reasons for each resource that could not be assessed. +- **Unverified resources** are resources that have been scheduled to be assessed, but have not been assessed yet. + ### Does Microsoft share any information with Qualys in order to perform image scans? No, the Qualys scanner is hosted by Microsoft, and no customer data is shared with Qualys. |
defender-for-cloud | Defender For Containers Vulnerability Assessment Elastic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-elastic.md | Title: Identify vulnerabilities in Amazon AWS Elastic Container Registry with Microsoft Defender for Cloud description: Learn how to use Defender for Containers to scan images in your Amazon AWS Elastic Container Registry (ECR) to find vulnerabilities.--++ Last updated 09/11/2022 |
defender-for-cloud | Defender For Databases Enable Cosmos Protections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-enable-cosmos-protections.md | Title: Enable Microsoft Defender for Azure Cosmos DB description: Learn how to enable enhanced security features in Microsoft Defender for Azure Cosmos DB. --++ Last updated 11/28/2022 |
defender-for-cloud | Defender For Databases Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-introduction.md | Title: Microsoft Defender for open-source relational databases - the benefits an description: Learn about the benefits and features of Microsoft Defender for open-source relational databases such as PostgreSQL, MySQL, and MariaDB Last updated 06/19/2022 --++ # Overview of Microsoft Defender for open-source relational databases |
defender-for-cloud | Defender For Databases Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-usage.md | Title: Setting up and responding to alerts from Microsoft Defender for open-sour description: Learn how to configure Microsoft Defender for open-source relational databases to detect anomalous database activities indicating potential security threats to the database. Last updated 11/09/2021 --++ # Enable Microsoft Defender for open-source relational databases and respond to alerts |
defender-for-cloud | Defender For Dns Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-alerts.md | Title: Respond to Microsoft Defender for DNS alerts - Microsoft Defender for Clo description: Learn best practices for responding to alerts that indicate security risks in DNS services. Last updated 6/21/2022 --++ # Respond to Microsoft Defender for DNS alerts |
defender-for-cloud | Defender For Dns Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-introduction.md | Title: Microsoft Defender for DNS - the benefits and features description: Learn about the benefits and features of Microsoft Defender for DNS Last updated 01/10/2023 --++ # Overview of Microsoft Defender for DNS |
defender-for-cloud | Defender For Key Vault Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-key-vault-introduction.md | Title: Microsoft Defender for Key Vault - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Key Vault. Last updated 11/09/2021 --++ |
defender-for-cloud | Defender For Kubernetes Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md | Title: Microsoft Defender for Kubernetes - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Kubernetes. Last updated 07/11/2022--++ |
defender-for-cloud | Defender For Resource Manager Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-usage.md | Title: How to respond to Microsoft Defender for Resource Manager alerts description: Learn about the steps necessary for responding to alerts from Microsoft Defender for Resource Manager Last updated 11/09/2021 --++ # Respond to Microsoft Defender for Resource Manager alerts |
defender-for-cloud | Defender For Sql Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md | description: Learn how Microsoft Defender for Azure SQL protects your Azure SQL Last updated 07/28/2022 --++ # Overview of Microsoft Defender for Azure SQL |
defender-for-cloud | Defender For Sql On Machines Vulnerability Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-on-machines-vulnerability-assessment.md | Title: Scan for vulnerabilities on on-premises and Azure Arc-enabled SQL servers description: Learn about Microsoft Defender for SQL servers on machines' integrated vulnerability assessment scanner --++ Last updated 11/09/2021 |
defender-for-cloud | Defender For Sql Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md | Title: How to enable Microsoft Defender for SQL servers on machines description: Learn how to protect your Microsoft SQL servers on Azure VMs, on-premises, and in hybrid and multicloud environments with Microsoft Defender for Cloud. --++ Last updated 07/28/2022 Learn more about [vulnerability assessment for Azure SQL servers on machines](de |Release state:|General availability (GA)| |Pricing:|**Microsoft Defender for SQL servers on machines** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)| |Protected SQL versions:|SQL Server version: 2012, 2014, 2016, 2017, 2019, 2022 <br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>- On-premises SQL servers on Windows machines without Azure Arc<br>|-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet| +|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet **(Advanced Threat Protection Only)**| ## Set up Microsoft Defender for SQL servers on machines |
defender-for-cloud | Defender For Storage Classic Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-enable.md | Title: Enable and configure Microsoft Defender for Storage (classic) - Microsoft Defender for Cloud description: Learn about how to enable and configure Microsoft Defender for Storage (classic). Last updated 03/16/2023--++ |
defender-for-cloud | Defender For Storage Classic Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-migrate.md | Title: Migrate from Defender for Storage (classic) - Microsoft Defender for Cloud description: Learn about how to migrate from Defender for Storage (classic) to the new Defender for Storage plan to take advantage of its enhanced capabilities and pricing. Last updated 03/16/2023--++ |
defender-for-cloud | Defender For Storage Classic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic.md | Title: Microsoft Defender for Storage (classic) - Microsoft Defender for Cloud description: Learn about the benefits and features of Microsoft Defender for Storage (classic). Last updated 03/16/2023--++ |
defender-for-cloud | Defender For Storage Configure Malware Scan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-configure-malware-scan.md | Title: Setting up response to Malware Scanning - Microsoft Defender for Cloud description: Learn about how to configure response to malware scanning to prevent harmful files from being uploaded to Azure Storage. Last updated 03/16/2023--++ |
defender-for-cloud | Defender For Storage Data Sensitivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-data-sensitivity.md | Title: Detect threats to sensitive data - Microsoft Defender for Cloud description: Learn about using security alerts to protect your sensitive data from exposure. Last updated 03/16/2023--++ |
defender-for-cloud | Defender For Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md | Title: Microsoft Defender for Storage - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Storage. Last updated 03/23/2023--++ |
defender-for-cloud | Defender For Storage Malware Scan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md | Title: Malware Scanning in Defender for Storage - Microsoft Defender for Cloud description: Learn about the benefits and features of malware scanning in Microsoft Defender for Storage. Last updated 03/16/2023--++ |
defender-for-cloud | Defender For Storage Test | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-test.md | Title: Test the Defender for Storage data security features - Microsoft Defender for Cloud description: Learn how to test the Malware Scanning, sensitive data threat detection, and activity monitoring provided by Defender for Storage.--++ Last updated 03/23/2023 |
defender-for-cloud | Defender For Storage Threats Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-threats-alerts.md | Title: List of security threats and security alerts - Microsoft Defender for Cloud description: Learn about the security threats and alerts Microsoft Defender for Storage provides to detect and respond to potential security risks. Last updated 03/16/2023--++ |
defender-for-cloud | Deploy Vulnerability Assessment Byol Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md | Title: BYOL VM vulnerability assessment in Microsoft Defender for Cloud description: Deploy a BYOL vulnerability assessment solution on your Azure virtual machines to get recommendations in Microsoft Defender for Cloud that can help you protect your virtual machines. --++ Last updated 05/03/2023 |
defender-for-cloud | Deploy Vulnerability Assessment Defender Vulnerability Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-defender-vulnerability-management.md | description: Enable, deploy, and use Microsoft Defender Vulnerability Management Last updated 11/24/2022--++ # Investigate weaknesses with Microsoft Defender Vulnerability Management |
defender-for-cloud | Deploy Vulnerability Assessment Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md | Title: Defender for Cloud's integrated vulnerability assessment solution for Azure, hybrid, and multicloud machines description: Install a vulnerability assessment solution on your Azure machines to get recommendations in Microsoft Defender for Cloud that can help you protect your Azure and hybrid machines--++ Last updated 07/12/2022 |
defender-for-cloud | Enable Vulnerability Assessment Agentless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-vulnerability-assessment-agentless.md | Title: Find software and vulnerabilities with agentless scanning - Microsoft Defender for Cloud description: Find installed software and software vulnerabilities on your Azure machines and AWS machines without installing an agent.--++ Previously updated : 04/24/2023 Last updated : 05/09/2023 + # Find vulnerabilities and collect software inventory with agentless scanning Agentless scanning provides visibility into installed software and software vulnerabilities on your workloads to extend vulnerability assessment coverage to server workloads without a vulnerability assessment agent installed. Agentless vulnerability assessment uses the Microsoft Defender Vulnerability Man ## Compatibility with agent-based vulnerability assessment solutions -Defender for Cloud already supports different agent-based vulnerability scans, including [Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md), [BYOL](deploy-vulnerability-assessment-byol-vm.md) and [Qualys](deploy-vulnerability-assessment-vm.md). Agentless scanning extends the visibility of Defender for Cloud to reach more devices. +Defender for Cloud already supports different agent-based vulnerability scans, including [Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md) (MDVM), [BYOL](deploy-vulnerability-assessment-byol-vm.md) and [Qualys](deploy-vulnerability-assessment-vm.md). Agentless scanning extends the visibility of Defender for Cloud to reach more devices. When you enable agentless vulnerability assessment: -- If you have **no existing integrated vulnerability** assessment solutions, Defender for Cloud automatically displays vulnerability assessment results from agentless scanning.-- If you have **Defender Vulnerability Management** as part of an [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md), Defender for Cloud shows a unified and consolidated view that optimizes coverage and freshness.+- If you have **no existing integrated vulnerability** assessment solutions enabled on any of your VMs on your subscription, Defender for Cloud automatically enables MDVM by default. ++- If you select **Microsoft Defender Vulnerability Management** as part of an [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md), Defender for Cloud shows a unified and consolidated view that optimizes coverage and freshness. - Machines covered by just one of the sources (Defender Vulnerability Management or agentless) show the results from that source. - Machines covered by both sources show the agent-based results only for increased freshness. -- If you have **Vulnerability assessment with Qualys or BYOL integrations** - Defender for Cloud shows the agent-based results by default. Results from the agentless scan will be shown for machines that don't have an agent installed or from machines that aren't reporting findings correctly.+- If you select **Vulnerability assessment with Qualys or BYOL integrations** - Defender for Cloud shows the agent-based results by default. Results from the agentless scan are shown for machines that don't have an agent installed or from machines that aren't reporting findings correctly. 
- If you want to change the default behavior so that Defender for Cloud always displays results from Defender Vulnerability Management (regardless of a third-party agent solution), select the [Defender Vulnerability Management](auto-deploy-vulnerability-assessment.md#automatically-enable-a-vulnerability-assessment-solution) setting in the vulnerability assessment solution. + If you want to change the default behavior so that Defender for Cloud always displays results from MDVM (regardless of a third-party agent solution), select the [Microsoft Defender Vulnerability Management](auto-deploy-vulnerability-assessment.md#automatically-enable-a-vulnerability-assessment-solution) setting in the vulnerability assessment solution. ## Enabling agentless scanning for machines If you have Defender for Servers P2 already enabled and agentless scanning is tu :::image type="content" source="media/enable-vulnerability-assessment-agentless/defender-plan-settings-azure.png" alt-text="Screenshot of link for the settings of the Defender plans for Azure accounts." lightbox="media/enable-vulnerability-assessment-agentless/defender-plan-settings-azure.png"::: - The agentless scanning setting is shared by both Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2. When you enable agentless scanning on either plan, the setting is enabled for both plans. + The agentless scanning settings are shared by both Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2. When you enable agentless scanning on either plan, the setting is enabled for both plans. 1. In the settings pane, turn on **Agentless scanning for machines**. If you have Defender for Servers P2 already enabled and agentless scanning is tu 1. Select **Update**. -After you enable agentless scanning, software inventory and vulnerability information is updated automatically in Defender for Cloud. +After you enable agentless scanning, software inventory and vulnerability information are updated automatically in Defender for Cloud. ## Exclude machines from scanning |
defender-for-cloud | Endpoint Protection Recommendations Technical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md | Title: Endpoint protection recommendations in Microsoft Defender for Cloud description: How the endpoint protection solutions are discovered and identified as healthy. --++ Last updated 03/08/2022 # Endpoint protection assessment and recommendations in Microsoft Defender for Cloud |
defender-for-cloud | Exempt Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md | Title: Exempt a Microsoft Defender for Cloud recommendation from a resource, sub description: Learn how to create rules to exempt security recommendations from subscriptions or management groups and prevent them from impacting your secure score --++ Last updated 01/02/2022 |
defender-for-cloud | Export To Siem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md | Title: Stream your alerts from Microsoft Defender for Cloud to Security Information and Event Management (SIEM) systems and other monitoring solutions description: Learn how to stream your security alerts to Microsoft Sentinel, third-party SIEMs, SOAR, or ITSM solutions --++ Last updated 04/04/2022 |
defender-for-cloud | Export To Splunk Or Qradar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-splunk-or-qradar.md | Title: Set up the required Azure resources to export security alerts to IBM QRadar and Splunk description: Learn how to configure the required Azure resources in the Azure portal to stream security alerts to IBM QRadar and Splunk--++ Last updated 04/04/2022 |
defender-for-cloud | File Integrity Monitoring Enable Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-ama.md | Title: Enable File Integrity Monitoring (Azure Monitor Agent) description: Learn how to enable File Integrity Monitor when you collect data with the Azure Monitor Agent (AMA)--++ Last updated 11/14/2022 |
defender-for-cloud | File Integrity Monitoring Enable Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-log-analytics.md | Title: Enable File Integrity Monitoring (Log Analytics agent) description: Learn how to enable File Integrity Monitoring when you collect data with the Log Analytics agent--++ Last updated 11/14/2022 |
defender-for-cloud | File Integrity Monitoring Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-overview.md | Title: Track changes to system files and registry keys description: Learn about tracking changes to system files and registry keys with file integrity monitoring in Microsoft Defender for Cloud.--++ Last updated 11/14/2022 |
defender-for-cloud | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/get-started.md | Title: Microsoft Defender for Cloud's enhanced security features description: Learn how to enable Microsoft Defender for Cloud's enhanced security features. --++ Last updated 11/09/2021 |
defender-for-cloud | Harden Docker Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/harden-docker-hosts.md | Title: Use Microsoft Defender for Cloud to harden your Docker hosts and protect the containers description: How-to protect your Docker hosts and verify they're compliant with the CIS Docker benchmark--++ Last updated 11/09/2021 |
defender-for-cloud | Implement Security Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md | Title: Implement security recommendations in Microsoft Defender for Cloud description: This article explains how to respond to recommendations in Microsoft Defender for Cloud to protect your resources and satisfy security policies. --++ Last updated 10/20/2022 # Implement security recommendations in Microsoft Defender for Cloud |
defender-for-cloud | Incidents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/incidents.md | Title: Manage security incidents in Microsoft Defender for Cloud description: This document helps you to use Microsoft Defender for Cloud to manage security incidents. --++ Last updated 11/09/2021 # Manage security incidents in Microsoft Defender for Cloud |
defender-for-cloud | Information Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md | Title: Prioritize security actions by data sensitivity - Microsoft Defender for Cloud description: Use Microsoft Purview's data sensitivity classifications in Microsoft Defender for Cloud--++ Last updated 06/29/2022 |
defender-for-cloud | Integration Defender For Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md | Title: Using Microsoft Defender for Endpoint in Microsoft Defender for Cloud to protect native, on-premises, and AWS machines. description: Learn about deploying Microsoft Defender for Endpoint from Microsoft Defender for Cloud to protect Azure, hybrid, and multicloud machines.--++ Last updated 04/24/2023 |
defender-for-cloud | Just In Time Access Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md | Title: Just-in-time virtual machine access in Microsoft Defender for Cloud description: Learn how just-in-time VM access (JIT) in Microsoft Defender for Cloud helps you control access to your Azure virtual machines. --++ Last updated 12/11/2022 |
defender-for-cloud | Kubernetes Workload Protections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md | Title: Kubernetes data plane hardening description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes data plane hardening security recommendations --++ Last updated 03/08/2022 |
defender-for-cloud | Managing And Responding Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/managing-and-responding-alerts.md | Title: Manage security alerts in Microsoft Defender for Cloud description: This document helps you to use Microsoft Defender for Cloud capabilities to manage and respond to security alerts.--++ Last updated 07/14/2022 |
defender-for-cloud | Monitoring Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md | Title: Overview of the extensions that collect data from your workloads description: Learn about the extensions that collect data from your workloads to let you protect your workloads with Microsoft Defender for Cloud.--++ Last updated 11/27/2022 |
defender-for-cloud | Plan Defender For Servers Agents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-agents.md | Title: Plan Defender for Servers agents and extensions deployment description: Plan for agent deployment to protect Azure, AWS, GCP, and on-premises servers with Microsoft Defender for Servers. --++ Last updated 11/06/2022 # Plan agents, extensions, and Azure Arc for Defender for Servers |
defender-for-cloud | Plan Defender For Servers Data Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-data-workspace.md | Title: Plan Defender for Servers data residency and workspaces description: Review data residency and workspace design for Microsoft Defender for Servers. --++ Last updated 11/06/2022 |
defender-for-cloud | Plan Defender For Servers Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-roles.md | Title: Plan Defender for Servers roles and permissions description: Review roles and permissions for Microsoft Defender for Servers. --++ Last updated 11/06/2022 # Plan roles and permissions for Defender for Servers |
defender-for-cloud | Plan Defender For Servers Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-scale.md | Title: Scale a Defender for Servers deployment description: Scale protection of Azure, AWS, GCP, and on-premises servers by using Microsoft Defender for Servers. --++ Last updated 11/06/2022 # Scale a Defender for Servers deployment |
defender-for-cloud | Plan Defender For Servers Select Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-select-plan.md | Title: Select a Defender for Servers plan in Microsoft Defender for Cloud description: Select a Microsoft Defender for Servers plan in Microsoft Defender for Cloud to protect Azure, AWS, and GCP servers and on-premises machines. --++ Last updated 11/06/2022 # Select a Defender for Servers plan |
defender-for-cloud | Plan Defender For Servers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers.md | Title: Plan a Defender for Servers deployment to protect on-premises and multicl description: Design a solution to protect on-premises and multicloud servers with Microsoft Defender for Servers. Last updated 11/06/2022--++ # Plan your Defender for Servers deployment |
defender-for-cloud | Plan Multicloud Security Automate Connector Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-automate-connector-deployment.md | Title: Defender for Cloud planning multicloud security automating connector deployment description: Learn about automating connector deployment when planning multicloud deployment with Microsoft Defender for Cloud. --++ Last updated 10/03/2022 |
defender-for-cloud | Plan Multicloud Security Define Adoption Strategy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-define-adoption-strategy.md | Title: Defender for Cloud Planning multicloud security defining adoption strateg description: Learn about defining broad requirements for business needs and ownership in multicloud environment with Microsoft Defender for Cloud. --++ Last updated 10/03/2022 |
defender-for-cloud | Plan Multicloud Security Determine Access Control Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-access-control-requirements.md | Title: Defender for Cloud Planning multicloud security determine access control requirements guidance description: Learn about determining access control requirements to meet business goals in multicloud environment with Microsoft Defender for Cloud. --++ Last updated 10/03/2022 |
defender-for-cloud | Plan Multicloud Security Determine Business Needs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-business-needs.md | Title: Defender for Cloud Planning multicloud security determining business needs guidance description: Learn about determining business needs to meet business goals in multicloud environment with Microsoft Defender for Cloud. --++ Last updated 10/03/2022 |
defender-for-cloud | Plan Multicloud Security Determine Compliance Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-compliance-requirements.md | Title: Defender for Cloud Planning multicloud security compliance requirements guidance AWS standards GCP standards description: Learn about determining compliance requirements in multicloud environment with Microsoft Defender for Cloud. --++ Last updated 10/03/2022 |
defender-for-cloud | Plan Multicloud Security Determine Data Residency Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-data-residency-requirements.md | Title: Defender for Cloud Planning multicloud security determine data residency requirements GDPR agent considerations guidance description: Learn about determining data residency requirements when planning multicloud deployment with Microsoft Defender for Cloud. --++ Last updated 10/03/2022 Agents are used in the Defender for Servers plan as follows: - Log Analytics workspace: - You define the Log Analytics workspace you use at the subscription level. It can be either a default workspace, or a custom-created workspace. - There are [several reasons](../azure-monitor/logs/workspace-design.md) to select the default workspace rather than the custom workspace.- - The location of the default workspace depends on your Azure Arc machine region. [Learn more](https://learn.microsoft.com/azure/defender-for-cloud/faq-data-collection-agents#where-is-the-default-log-analytics-workspace-created-). - - The location of the custom-created workspace is set by your organization. [Learn more](https://learn.microsoft.com/azure/defender-for-cloud/faq-data-collection-agents#how-can-i-use-my-existing-log-analytics-workspace-) about using a custom workspace. + - The location of the default workspace depends on your Azure Arc machine region. [Learn more](/azure/defender-for-cloud/faq-data-collection-agents#where-is-the-default-log-analytics-workspace-created-). + - The location of the custom-created workspace is set by your organization. [Learn more](/azure/defender-for-cloud/faq-data-collection-agents#how-can-i-use-my-existing-log-analytics-workspace-) about using a custom workspace. ## Defender for Containers plan |
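If your organization opts for a custom-created workspace rather than the default one, the workspace can be created and registered for the subscription with Azure PowerShell. This is a minimal sketch, assuming the `Az.OperationalInsights` and `Az.Security` modules are installed; the resource names, region, and subscription ID are placeholders.

```powershell
# Create (or reference) the custom workspace in the region your organization requires.
$workspace = New-AzOperationalInsightsWorkspace `
    -ResourceGroupName "rg-security" `
    -Name "law-defender-custom" `
    -Location "westeurope" `
    -Sku "PerGB2018"

# Point Defender for Cloud data collection for the subscription at the custom workspace
# instead of the default workspace.
Set-AzSecurityWorkspaceSetting `
    -Name "default" `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000" `
    -WorkspaceId $workspace.ResourceId
```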
defender-for-cloud | Plan Multicloud Security Determine Multicloud Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-multicloud-dependencies.md | Title: Defender for Cloud Planning multicloud security determine multicloud dependencies CSPM CWPP guidance cloud workload protection description: Learn about determining multicloud dependencies when planning multicloud deployment with Microsoft Defender for Cloud. --++ Last updated 10/03/2022 |
defender-for-cloud | Plan Multicloud Security Determine Ownership Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-ownership-requirements.md | Title: Defender for Cloud Planning multicloud security determine ownership requirements security functions team alignment best practices guidance description: Learn about determining ownership requirements when planning multicloud deployment with Microsoft Defender for Cloud. --++ Last updated 10/03/2022 |
defender-for-cloud | Plan Multicloud Security Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-get-started.md | Title: Defender for Cloud Planning multicloud security get started guidance before you begin cloud solution description: Learn about designing a solution for securing and protecting your multicloud environment with Microsoft Defender for Cloud. --++ Last updated 10/03/2022 |
defender-for-cloud | Powershell Sample Vulnerability Assessment Azure Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-azure-sql.md | Title: PowerShell script sample - Enable vulnerability assessment on a SQL server description: In this article, learn how to enable vulnerability assessments on Azure SQL databases with the express configuration using a PowerShell script. --++ Last updated 11/29/2022 |
defender-for-cloud | Powershell Sample Vulnerability Assessment Baselines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-baselines.md | Title: PowerShell script sample - Set up baselines on Azure SQL databases description: In this article, learn how to set up baselines for vulnerability assessments on Azure SQL databases using a PowerShell script. --++ Last updated 11/29/2022 |
defender-for-cloud | Quickstart Enable Database Protections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-enable-database-protections.md | Title: Enable database protection for your subscription description: Learn how to enable Microsoft Defender for Cloud for all of your database types for your entire subscription.--++ Last updated 11/27/2022 |
defender-for-cloud | Quickstart Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md | Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Last updated 04/23/2023--++ zone_pivot_groups: connect-aws-accounts |
defender-for-cloud | Quickstart Onboard Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md | Title: Connect your non-Azure machines to Microsoft Defender for Cloud description: Learn how to connect your non-Azure machines to Microsoft Defender for Cloud Last updated 02/27/2022--++ zone_pivot_groups: non-azure-machines |
defender-for-cloud | Regulatory Compliance Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md | Title: 'Tutorial: Regulatory compliance checks - Microsoft Defender for Cloud' description: 'Tutorial: Learn how to Improve your regulatory compliance using Microsoft Defender for Cloud.' Previously updated : 01/24/2023 Last updated : 05/09/2023 # Tutorial: Improve your regulatory compliance To customize the regulatory compliance dashboard, and focus only on the standard After you take action to resolve recommendations, wait 12 hours to see the changes to your compliance data. Assessments are run approximately every 12 hours, so you'll see the effect on your compliance data only after the assessments run. ### What permissions do I need to access the compliance dashboard?-To view compliance data, you need to have at least **Reader** access to the policy compliance data as well; so Security Reader alone won't suffice. If you're a global reader on the subscription, that will be enough too. ++To access all compliance data in your tenant, you need to have at least a **Reader** level of permissions on the applicable scope of your tenant, or all relevant subscriptions. The minimum set of roles for accessing the dashboard and managing standards is **Resource Policy Contributor** and **Security Admin**. |
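To grant the read access described above at subscription scope, a standard role assignment is enough. The following Azure PowerShell sketch uses placeholder object and subscription IDs; the role names are the built-in Azure roles referenced in the article.

```powershell
# Placeholders: the user's or group's object ID and the subscription scope.
$objectId = "11111111-1111-1111-1111-111111111111"
$scope    = "/subscriptions/00000000-0000-0000-0000-000000000000"

# Reader is enough to view compliance data on the dashboard.
New-AzRoleAssignment -ObjectId $objectId -RoleDefinitionName "Reader" -Scope $scope

# Managing standards additionally needs these built-in roles.
foreach ($role in "Resource Policy Contributor", "Security Admin") {
    New-AzRoleAssignment -ObjectId $objectId -RoleDefinitionName $role -Scope $scope
}
```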
defender-for-cloud | Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md | Title: Archive of what's new in Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud from six months ago and earlier.--++ Previously updated : 04/17/2023 Last updated : 05/03/2023 # Archive for what's new in Defender for Cloud? |
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 05/08/2023 Last updated : 05/09/2023 # What's new in Microsoft Defender for Cloud? Updates in May include: - [Revised JIT (Just-In-Time) rule naming conventions in Defender for Cloud](#revised-jit-just-in-time-rule-naming-conventions-in-defender-for-cloud) - [Onboard selected AWS regions](#onboard-selected-aws-regions) - [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations)+- [Deprecation of legacy standards in compliance dashboard](#deprecation-of-legacy-standards-in-compliance-dashboard) +- [Two Defender for DevOps recommendations now include Azure DevOps scan findings](#two-defender-for-devops-recommendations-now-include-azure-devops-scan-findings) +- [New default setting for Defender for Servers vulnerability assessment solution](#new-default-setting-for-defender-for-servers-vulnerability-assessment-solution) ### Agentless scanning now supports encrypted disks in AWS The following security recommendations are now deprecated: We recommend updating your custom scripts, workflows, and governance rules to correspond with the V2 recommendations. +### Deprecation of legacy standards in compliance dashboard ++Legacy PCI DSS v3.2.1 and legacy SOC TSP have been fully deprecated in the Defender for Cloud compliance dashboard, and replaced by the [SOC 2 Type 2](https://learn.microsoft.com/azure/compliance/offerings/offering-soc-2) and [PCI DSS v4](https://learn.microsoft.com/azure/compliance/offerings/offering-pci-dss) initiative-based compliance standards. +We have also fully deprecated support for the [PCI DSS](https://learn.microsoft.com/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet. ++Learn how to [customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md). ++### Two Defender for DevOps recommendations now include Azure DevOps scan findings ++Defender for DevOps Code and IaC has expanded its recommendation coverage in Microsoft Defender for Cloud to include Azure DevOps security findings for the following two recommendations: ++- `Code repositories should have code scanning findings resolved` ++- `Code repositories should have infrastructure as code scanning findings resolved` ++Previously, coverage for Azure DevOps security scanning only included the secrets recommendation. A Resource Graph query for these findings is sketched after this entry. ++Learn more about [Defender for DevOps](defender-for-devops-introduction.md). ++### New default setting for Defender for Servers vulnerability assessment solution ++Vulnerability assessment (VA) solutions are essential to safeguard machines from cyberattacks and data breaches. ++Microsoft Defender Vulnerability Management (MDVM) is now enabled by default as the built-in solution for subscriptions with the Defender for Servers plan that don't have a VA solution selected. ++If a subscription has a VA solution enabled on any of its VMs, no changes will be made and MDVM will not be enabled by default on the remaining VMs in that subscription. You can choose to [enable a VA solution](deploy-vulnerability-assessment-defender-vulnerability-management.md) on the remaining VMs in your subscriptions. 
++Learn how to [Find vulnerabilities and collect software inventory with agentless scanning (Preview)](enable-vulnerability-assessment-agentless.md). + ## April 2023 Updates in April include: |
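To see which resources currently report unresolved findings for the two expanded Defender for DevOps recommendations, the security assessments can be queried with Azure Resource Graph. This is a hedged sketch using the `Az.ResourceGraph` module; the display-name filters assume the recommendation names quoted above appear verbatim in the assessment metadata.

```powershell
# Hedged sketch: list unhealthy resources for the two Defender for DevOps recommendations.
# Assumption: the assessment displayName matches the recommendation names quoted above.
$query = @"
securityresources
| where type == 'microsoft.security/assessments'
| extend displayName = tostring(properties.displayName),
         status = tostring(properties.status.code)
| where displayName in (
    'Code repositories should have code scanning findings resolved',
    'Code repositories should have infrastructure as code scanning findings resolved')
| where status == 'Unhealthy'
| project displayName, resourceId = tostring(properties.resourceDetails.Id)
"@

Search-AzGraph -Query $query
```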
defender-for-cloud | Remediate Vulnerability Findings Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/remediate-vulnerability-findings-vm.md | Title: View findings from vulnerability assessment solutions in Microsoft Defender for Cloud description: Microsoft Defender for Cloud includes a fully integrated vulnerability assessment solution from Qualys. Learn more about this Defender for Cloud extension on this page. --++ Last updated 11/09/2021 |
defender-for-cloud | Sql Azure Vulnerability Assessment Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-enable.md | Title: Enable vulnerability assessment on your Azure SQL databases using Microsoft Defender for Cloud description: Learn how to enable SQL vulnerability assessment on Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics.--++ Last updated 10/06/2022 |
defender-for-cloud | Sql Azure Vulnerability Assessment Find | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-find.md | Title: Find vulnerabilities in your Azure SQL databases using Microsoft Defender for Cloud description: Learn how to find software vulnerabilities with the express configuration on Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics.--++ Last updated 11/29/2022 |
defender-for-cloud | Sql Azure Vulnerability Assessment Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-manage.md | Title: Manage vulnerability findings in your Azure SQL databases using Microsoft Defender for Cloud description: Learn how to remediate software vulnerabilities and disable findings with the express configuration on Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics.--++ Last updated 11/29/2022 To change an Azure SQL database from the express vulnerability assessment config -ScanResultsContainerName "vulnerability-assessment" ``` - You may have tweak `Update-AzSqlServerVulnerabilityAssessmentSetting` according to [Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&view=azuresql). + You may have to tweak `Update-AzSqlServerVulnerabilityAssessmentSetting` according to [Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage). ### Errors |
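If the classic, storage-based configuration is needed instead of the express one, the cmdlet mentioned above takes the storage details directly. This is a minimal sketch assuming the `Az.Sql` module; the resource group, server, storage account, and container names are placeholders, and storage accounts that are only reachable behind firewalls or VNets need the extra steps from the linked article.

```powershell
# Minimal sketch: point the classic SQL vulnerability assessment configuration at a
# storage account. All resource names below are placeholders.
Update-AzSqlServerVulnerabilityAssessmentSetting `
    -ResourceGroupName "rg-sql" `
    -ServerName "sql-contoso" `
    -StorageAccountName "stvascans" `
    -ScanResultsContainerName "vulnerability-assessment"
```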
defender-for-cloud | Sql Azure Vulnerability Assessment Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-overview.md | Title: Scan your Azure SQL databases for vulnerabilities using Microsoft Defender for Cloud description: Learn how to configure SQL vulnerability assessment and interpret the assessment reports on Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics.--++ Last updated 11/29/2022 |
defender-for-cloud | Sql Information Protection Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-information-protection-policy.md |