Updates from: 01/27/2024 02:11:33
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Analytics With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/analytics-with-application-insights.md
Previously updated : 01/11/2024 Last updated : 01/26/2024 zone_pivot_groups: b2c-policy-type
In Azure Active Directory B2C (Azure AD B2C), you can send event data directly t
- Measure performance. - Create notifications from Application Insights. + ## Overview To enable custom event logs, add an Application Insights technical profile. In the technical profile, you define the Application Insights instrumentation key, the event name, and the claims to record. To post an event, add the technical profile as an orchestration step in a [user journey](userjourneys.md).
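For illustration, a minimal sketch of such a technical profile pair, modeled on the starter-pack pattern; the handler string, claim names, and instrumentation key are placeholders and assumptions to verify against the full article:

```xml
<ClaimsProvider>
  <DisplayName>Application Insights</DisplayName>
  <TechnicalProfiles>
    <!-- Shared settings: instrumentation key and telemetry behavior -->
    <TechnicalProfile Id="AppInsights-Common">
      <DisplayName>Application Insights</DisplayName>
      <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.Insights.AzureApplicationInsightsProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
      <Metadata>
        <!-- Replace with your Application Insights instrumentation key -->
        <Item Key="InstrumentationKey">00000000-0000-0000-0000-000000000000</Item>
        <Item Key="DeveloperMode">false</Item>
        <Item Key="DisableTelemetry">false</Item>
      </Metadata>
    </TechnicalProfile>
    <!-- Event-specific profile: sets the event name and inherits the common settings -->
    <TechnicalProfile Id="AppInsights-SignInRequest">
      <InputClaims>
        <!-- The event name recorded in Application Insights -->
        <InputClaim ClaimTypeReferenceId="EventType" PartnerClaimType="eventName" DefaultValue="SignInRequest" />
      </InputClaims>
      <IncludeTechnicalProfile ReferenceId="AppInsights-Common" />
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```

Each event-specific profile is then referenced from an orchestration step, as shown later in this section.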
Open the *TrustFrameworkExtensions.xml* file from the starter pack. Add the tech
## Add the technical profiles as orchestration steps
-Add new orchestration steps that refer to the technical profiles.
+Add new orchestration steps that refer to the technical profiles.
> [!IMPORTANT] > After you add the new orchestration steps, renumber the steps sequentially without skipping any integers from 1 to N.
+1. Identify the policy file that contains your user journey, such as `SocialAndLocalAccounts/SignUpOrSignin.xml`, then open it.
+ 1. Call `AppInsights-SignInRequest` as the second orchestration step. This step tracks that a sign-up or sign-in request has been received. ```xml
Add new orchestration steps that refer to the technical profiles.
</OrchestrationStep> ```
-1. Before the `SendClaims` orchestration step, add a new step that calls `AppInsights-UserSignup`. It's triggered when the user selects the sign-up button in a sign-up or sign-in journey.
+1. Before the `SendClaims` orchestration step, add a new step that calls `AppInsights-UserSignup`. It's triggered when the user selects the sign-up button in a sign-up or sign-in journey. You may need to update the orchestration step, `Order="8"`, to make sure you don't skip any integer from the first to the last orchestration step.
```xml
- <!-- Handles the user selecting the sign-up link in the local account sign-in page -->
+ <!-- Handles the user selecting the sign-up link in the local account sign-in page
+ The `SendClaims` orchestration step comes after this one.
+ -->
<OrchestrationStep Order="8" Type="ClaimsExchange"> <Preconditions> <Precondition Type="ClaimsExist" ExecuteActionsIf="false">
Add new orchestration steps that refer to the technical profiles.
</OrchestrationStep> ```
-1. After the `SendClaims` orchestration step, call `AppInsights-SignInComplete`. This step shows a successfully completed journey.
+1. After the `SendClaims` orchestration step, call `AppInsights-SignInComplete`. This step shows a successfully completed journey. You may need to update the orchestration step, `Order="10"`, to make sure you don't skip any integer from the first to the last orchestration step.
```xml
- <!-- Track that we have successfully sent a token -->
+ <!-- Track that we have successfully sent a token
+ The `SendClaims` orchestration step comes before this one.
+ -->
<OrchestrationStep Order="10" Type="ClaimsExchange"> <ClaimsExchanges> <ClaimsExchange Id="TrackSignInComplete" TechnicalProfileReferenceId="AppInsights-SignInComplete" />
active-directory-b2c B2c Global Identity Funnel Based Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-funnel-based-design.md
Title: Build a global identity solution with funnel-based approach description: Learn the funnel-based design consideration for Azure AD B2C to provide customer identity management for global customers.- - - Previously updated : 12/15/2022 Last updated : 01/26/2024 +
+#customer intent: I'm a developer, and I need to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
# Build a global identity solution with funnel-based approach
active-directory-b2c B2c Global Identity Proof Of Concept Funnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-proof-of-concept-funnel.md
Previously updated : 12/15/2022 Last updated : 01/26/2024 +
+#customer intent: As a developer, I want to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
# Azure Active Directory B2C global identity framework proof of concept for funnel-based configuration
active-directory-b2c B2c Global Identity Proof Of Concept Regional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-proof-of-concept-regional.md
Title: Azure Active Directory B2C global identity framework proof of concept for region-based configuration description: Learn how to create a proof of concept regional based approach for Azure AD B2C to provide customer identity and access management for global customers.- - - Previously updated : 12/15/2022 Last updated : 01/24/2024 +
+#customer intent: I'm a developer implementing Azure Active Directory B2C, and I want to configure region-based sign-up, sign-in, and password reset journeys. My goal is for users to be directed to the correct region and their data managed accordingly.
# Azure Active Directory B2C global identity framework proof of concept for region-based configuration
active-directory-b2c B2c Global Identity Region Based Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-region-based-design.md
Title: Build a global identity solution with region-based approach description: Learn the region-based design consideration for Azure AD B2C to provide customer identity management for global customers.- - - Previously updated : 12/15/2022 Last updated : 01/26/2024 +
+#customer intent: I'm a developer implementing a global identity solution. I need to understand the different scenarios and workflows for the region-based design approach in Azure AD B2C. My goal is to design and implement the authentication and sign-up processes effectively for users from different regions.
# Build a global identity solution with region-based approach
active-directory-b2c B2c Global Identity Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-solutions.md
Title: Azure Active Directory B2C global identity framework description: Learn how to configure Azure AD B2C to provide customer identity and access management for global customers.- - - Previously updated : 12/15/2022 Last updated : 01/26/2024 +
+#customer intent: I'm a developer building a customer-facing application. I need to understand the different approaches to implement an identity platform using Azure AD B2C tenants for a globally operating business model. I want to make an informed decision about the architecture that best suits my application's requirements.
# Azure Active Directory B2C global identity framework
active-directory-b2c B2clogin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2clogin.md
Previously updated : 01/11/2024 Last updated : 01/26/2024
-#Customer intent: As an Azure AD B2C application developer, I want to update the redirect URLs in my identity provider's applications to reference b2clogin.com or a custom domain, so that I can authenticate users with Azure AD B2C using the updated endpoints and policies.
+#Customer intent: As an Azure AD B2C application developer, I want to update the redirect URLs in my identity provider's applications to reference b2clogin.com or a custom domain, so that I can authenticate users with Azure AD B2C using the updated endpoints.
When you set up an identity provider for sign-up and sign-in in your Azure Activ
The transition to b2clogin.com only applies to authentication endpoints that use Azure AD B2C policies (user flows or custom policies) to authenticate users. These endpoints have a `<policy-name>` parameter, which specifies the policy Azure AD B2C should use. [Learn more about Azure AD B2C policies](technical-overview.md#identity-experiences-user-flows-or-custom-policies). Old endpoints may look like:-- <code>https://<b>login.microsoftonline.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code>-- <code>https://<b>login.microsoftonline.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/authorize<b>?p=\<policy-name\></b></code>
+- <code>https://<b>login.microsoftonline.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code> or <code>https://<b>login.microsoftonline.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/authorize<b>?p=\<policy-name\></b></code> for the `/authorize` endpoint.
+- <code>https://<b>login.microsoftonline.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/logout</code> or <code>https://<b>login.microsoftonline.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/logout<b>?p=\<policy-name\></b></code> for the `/logout` endpoint.
-A corresponding updated endpoint would look like:
-- <code>https://<b>\<tenant-name\>.b2clogin.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code>-- <code>https://<b>\<tenant-name\>.b2clogin.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/authorize?<b>p=\<policy-name\></b></code>
+A corresponding updated endpoint would look similar to the following examples:
+- <code>https://<b>\<tenant-name\>.b2clogin.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code> or <code>https://<b>\<tenant-name\>.b2clogin.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/authorize?<b>p=\<policy-name\></b></code> for the `/authorize` endpoint.
+- <code>https://<b>\<tenant-name\>.b2clogin.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/logout</code> or <code>https://<b>\<tenant-name\>.b2clogin.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/logout?<b>p=\<policy-name\></b></code> for the `/logout` endpoint.
-With Azure AD B2C [custom domain](./custom-domain.md) the corresponding updated endpoint would look like:
-- <code>https://<b>login.contoso.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code>-- <code>https://<b>login.contoso.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/authorize?<b>p=\<policy-name\></b></code>
+With an Azure AD B2C [custom domain](./custom-domain.md), the corresponding updated endpoints would look similar to the following examples. You can use either of these endpoints:
+
+- <code>https://<b>login.contoso.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code> or <code>https://<b>login.contoso.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/authorize?<b>p=\<policy-name\></b></code> for the `/authorize` endpoint.
+- <code>https://<b>login.contoso.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/logout</code> or <code>https://<b>login.contoso.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/logout?<b>p=\<policy-name\></b></code> for the `/logout` endpoint.
## Endpoints that are not affected
This change doesn't affect all endpoints, which don't contain a policy parameter
https://login.microsoftonline.com/<tenant-name>.onmicrosoft.com/oauth2/v2.0/token ```
+However, if you want to obtain a token to authenticate users, you can specify the policy that your application uses to authenticate them. In this case, the updated `/token` endpoints would look similar to the following examples.
+
+- <code>https://<b>\<tenant-name\>.b2clogin.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/token</code> or <code>https://<b>\<tenant-name\>.b2clogin.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/token?<b>p=\<policy-name\></b></code> when you use *b2clogin.com*.
+
+- <code>https://<b>login.contoso.com</b>/\<tenant-name\>.onmicrosoft.com/<b>\<policy-name\></b>/oauth2/v2.0/token</code> or <code>https://<b>login.contoso.com</b>/\<tenant-name\>.onmicrosoft.com/oauth2/v2.0/token?<b>p=\<policy-name\></b></code> when you use a custom domain.
+ ## Overview of required changes There are several modifications you might need to make to migrate your applications from *login.microsoftonline.com* using Azure AD B2C endpoints:
For migrating Azure API Management APIs protected by Azure AD B2C, see the [Migr
[msal-dotnet]: https://github.com/AzureAD/microsoft-authentication-library-for-dotnet [msal-dotnet-b2c]: https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/AAD-B2C-specifics [msal-js]: https://github.com/AzureAD/microsoft-authentication-library-for-js
-[msal-js-b2c]: ../active-directory/develop/msal-b2c-overview.md
+[msal-js-b2c]: ../active-directory/develop/msal-b2c-overview.md
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
Previously updated : 01/11/2024 Last updated : 01/26/2024
In the following redirect URI:
https://<custom-domain-name>/<tenant-name>/oauth2/authresp ``` -- Replace **&lt;custom-domain-name&gt;** with your custom domain name.-- Replace **&lt;tenant-name&gt;** with the name of your tenant, or your tenant ID.
+- Replace &lt;`custom-domain-name`&gt; with your custom domain name.
+- Replace &lt;`tenant-name`&gt; with the name of your tenant, or your tenant ID.
The following example shows a valid OAuth redirect URI:
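For example (using the hypothetical custom domain `login.contoso.com` and the tenant `contoso.onmicrosoft.com`, as an illustration only): `https://login.contoso.com/contoso.onmicrosoft.com/oauth2/authresp`.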
The custom domain integration applies to authentication endpoints that use Azure
- <code>https://\<custom-domain\>/<tenant-name\>/<b>\<policy-name\></b>/oauth2/v2.0/token</code> Replace:-- **custom-domain** with your custom domain-- **tenant-name** with your tenant name or tenant ID-- **policy-name** with your policy name.
+- &lt;`custom-domain`&gt; with your custom domain
+- &lt;`tenant-name`&gt; with your tenant name or tenant ID
+- &lt;`policy-name`&gt; with your policy name.
The [SAML service provider](./saml-service-provider.md) metadata may look like the following sample:
active-directory-b2c External Identities Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/external-identities-videos.md
Title: Microsoft Azure Active Directory B2C external identity video series description: Learn about external identities in Azure AD B2C in the Microsoft identity platform - - Previously updated : 06/08/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer working with Azure Active Directory B2C. I need videos that provide a deep-dive into the architecture and features of the service. My goal is to gain a better understanding of how to implement and utilize Azure AD B2C in my applications.
# Microsoft Azure Active Directory B2C external identity video series
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
Title: Identity proofing and verification for Azure AD B2C description: Learn about our partners who integrate with Azure AD B2C to provide identity proofing and verification solutions - - Previously updated : 01/18/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer integrating Azure AD B2C, and I want to configure an identity verification and proofing provider. I need to combat identity fraud and create a trusted user experience for account registration.
# Identity verification and proofing partners
active-directory-b2c Partner Akamai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai.md
Title: Configure Azure Active Directory B2C with Akamai Web Application Protector description: Configure Akamai Web Application Protector with Azure AD B2C- - Previously updated : 05/04/2023 Last updated : 01/26/2024 +
+#customer intent: I'm an IT admin, and I want to configure Azure Active Directory B2C with Akamai Web Application Protector, so I can protect my organization's web applications and Azure AD B2C endpoints from malicious attacks.
# Configure Azure Active Directory B2C with Akamai Web Application Protector
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md
Title: Tutorial to configure Azure Active Directory B2C with the Arkose Labs platform description: Learn to configure Azure Active Directory B2C with the Arkose Labs platform to identify risky and fraudulent users- - Previously updated : 01/18/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer integrating Azure Active Directory B2C with the Arkose Labs platform. I need to configure the integration, so I can protect against bot attacks, account takeover, and fraudulent account openings.
+ # Tutorial: Configure Azure Active Directory B2C with the Arkose Labs platform
active-directory-b2c Partner Asignio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-asignio.md
Title: Configure Asignio with Azure Active Directory B2C for multifactor authentication description: Configure Azure Active Directory B2C with Asignio for multifactor authentication- - Previously updated : 05/04/2023 Last updated : 01/26/2024 zone_pivot_groups: b2c-policy-type+
+#customer intent: I'm a developer integrating Asignio with Azure AD B2C for multifactor authentication. I want to configure an application with Asignio and set it up as an identity provider (IdP) in Azure AD B2C, so I can provide a passwordless, soft biometric, and multifactor authentication experience to customers.
# Configure Asignio with Azure Active Directory B2C for multifactor authentication
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
Title: Configure Transmit Security with Azure Active Directory B2C for passwordless authentication description: Configure Azure AD B2C with Transmit Security hosted sign in for passwordless customer authentication- - Previously updated : 01/23/2024 Last updated : 01/26/2024 zone_pivot_groups: b2c-policy-type+
+#customer intent: I'm a developer integrating Azure Active Directory B2C with Transmit Security BindID. I need instructions to configure integration, so I can enable passwordless authentication using FIDO2 biometrics for my application.
# Configure Transmit Security with Azure Active Directory B2C for passwordless authentication
active-directory-b2c Partner Biocatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-biocatch.md
Title: Tutorial to configure BioCatch with Azure Active Directory B2C description: Tutorial to configure Azure Active Directory B2C with BioCatch to identify risky and fraudulent users- - Previously updated : 03/13/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer integrating Azure AD B2C authentication with BioCatch technology. I need to configure the custom UI, policies, and user journey. My goal is to enhance the security of my Customer Identity and Access Management (CIAM) system by analyzing user physical and cognitive behaviors.
# Tutorial: Configure BioCatch with Azure Active Directory B2C
active-directory-b2c Partner Bloksec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bloksec.md
Title: Tutorial to configure Azure Active Directory B2C with BlokSec for passwordless authentication description: Learn how to integrate Azure AD B2C authentication with BlokSec for Passwordless authentication- - Previously updated : 03/09/2023 Last updated : 01/26/2024 zone_pivot_groups: b2c-policy-type+
+#customer intent: I'm a developer integrating Azure Active Directory B2C with BlokSec for passwordless authentication. I need to configure integration, so I can simplify user sign-in and protect against identity-related attacks.
# Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication
active-directory-b2c Partner Cloudflare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-cloudflare.md
Title: Tutorial to configure Azure Active Directory B2C with Cloudflare Web Application Firewall description: Tutorial to configure Azure Active Directory B2C with Cloudflare Web application firewall and protect applications from malicious attacks - - Previously updated : 12/6/2022 Last updated : 01/26/2024 +
+#customer intent: I'm a developer configuring Azure AD B2C with Cloudflare WAF. I need to enable and configure the Web Application Firewall, so I can protect my application from malicious attacks such as SQL Injection and cross-site scripting (XSS).
# Tutorial: Configure Cloudflare Web Application Firewall with Azure Active Directory B2C
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-datawiza.md
Title: Tutorial to configure Azure Active Directory B2C with Datawiza description: Learn how to integrate Azure AD B2C authentication with Datawiza for secure hybrid access - - Previously updated : 01/23/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C with Datawiza Access Proxy (DAP). My goal is to enable single sign-on (SSO) and granular access control for on-premises legacy applications, without rewriting them.
# Tutorial: Configure Azure Active Directory B2C with Datawiza to provide secure hybrid access
active-directory-b2c Partner Deduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md
Title: Configure Azure Active Directory B2C with Deduce description: Learn how to integrate Azure AD B2C authentication with Deduce for identity verification - - Previously updated : 8/22/2022 Last updated : 01/26/2024 +
+#customer intent: As an Azure AD B2C administrator, I want to integrate Deduce with Azure AD B2C authentication. I want to combat identity fraud and create a trusted user experience for my organization.
# Configure Azure Active Directory B2C with Deduce to combat identity fraud and create a trusted user experience
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
Title: Tutorial to configure Azure Active Directory B2C with Microsoft Dynamics 365 Fraud Protection description: Tutorial to configure Azure AD B2C with Microsoft Dynamics 365 Fraud Protection to identify risky and fraudulent accounts- - Previously updated : 02/27/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer, and I want to integrate Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C. I need to assess risk during attempts to create fraudulent accounts and sign-ins, and then block or challenge suspicious attempts.
# Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C
active-directory-b2c Partner Eid Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-eid-me.md
Title: Configure Azure Active Directory B2C with Bluink eID-Me for identity verification description: Learn how to integrate Azure AD B2C authentication with eID-Me for identity verification - - Previously updated : 03/10/2023 Last updated : 01/26/2024 zone_pivot_groups: b2c-policy-type+
+#customer intent: I'm an Azure AD B2C administrator, and I want to configure eID-Me as an identity provider (IdP). My goal is to enable users to verify their identity and sign in using eID-Me.
# Configure Azure Active Directory B2C with Bluink eID-Me for identity verification
active-directory-b2c Partner Experian https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-experian.md
Title: Tutorial to configure Azure Active Directory B2C with Experian description: Learn how to integrate Azure AD B2C authentication with Experian for Identification verification and proofing based on user attributes to prevent fraud.- - Previously updated : 12/6/2022 Last updated : 01/26/2024 +
+#customer intent: I'm an Azure AD B2C administrator, and I want to integrate Experian CrossCore with Azure AD B2C. I need to verify user identification and perform risk analysis based on user attributes during sign-up.
# Tutorial: Configure Experian with Azure Active Directory B2C
active-directory-b2c Partner F5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-f5.md
- Previously updated : 04/05/2023 Last updated : 01/26/2024+
+#customer intent: As an IT admin responsible for securing applications, I want to integrate Azure Active Directory B2C with F5 BIG-IP Access Policy Manager. I want to expose legacy applications securely to the internet with preauthentication, Conditional Access, and single sign-on (SSO) capabilities.
# Tutorial: Enable secure hybrid access for applications with Azure Active Directory B2C and F5 BIG-IP
active-directory-b2c Partner Grit App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-app-proxy.md
Title: Migrate applications to Azure AD B2C with Grit's app proxy description: Learn how Grit's app proxy can migrate your applications to Azure AD B2C with no code change- - Previously updated : 1/25/2023 Last updated : 01/26/2024 +
+#customer intent: I'm an application developer using header-based authentication, and I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I need to enable modern authentication experiences, enhance security, and save on licensing costs.
# Migrate applications using header-based authentication to Azure Active Directory B2C with Grit's app proxy
active-directory-b2c Partner Grit Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-authentication.md
Title: Configure Grit's biometric authentication with Azure Active Directory B2C description: Learn how Grit's biometric authentication with Azure AD B2C secures your account- - Previously updated : 1/25/2023 Last updated : 01/26/2024 +
+#customer intent: I'm an application developer, and I want to configure Grit's biometric authentication with Azure Active Directory B2C, so I can secure user accounts with biometric authentication.
# Configure Grit's biometric authentication with Azure Active Directory B2C
active-directory-b2c Partner Grit Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-editor.md
Title: Edit identity experience framework XML with Grit Visual Identity Experience Framework (IEF) Editor description: Learn how Grit Visual IEF Editor enables fast authentication deployments in Azure AD B2C- - Previously updated : 10/10/2022 Last updated : 01/26/2024 +
+#customer intent: I'm an Azure AD B2C administrator, and I want to use the Visual IEF Editor tool to create, modify, and deploy Azure AD B2C policies, without writing code.
active-directory-b2c Partner Grit Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-iam.md
Title: Configure the Grit IAM B2B2C solution with Azure Active Directory B2C description: Learn how to integrate Azure AD B2C authentication with the Grit IAM B2B2C solution- - Previously updated : 9/15/2022 Last updated : 01/26/2024 +
+#customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C authentication with the Grit IAM B2B2C solution. I need to provide secure and user-friendly identity and access management for my customers.
# Tutorial: Configure the Grit IAM B2B2C solution with Azure Active Directory B2C
active-directory-b2c Partner Haventec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-haventec.md
- Previously updated : 03/10/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer integrating Haventec Authenticate with Azure AD B2C. I need instructions to configure integration, so I can enable single-step, multi-factor passwordless authentication for my web and mobile applications.
# Tutorial: Configure Haventec Authenticate with Azure Active Directory B2C for single-step, multi-factor passwordless authentication
active-directory-b2c Partner Hypr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-hypr.md
Title: Tutorial to configure Azure Active Directory B2C with HYPR description: Tutorial to configure Azure Active Directory B2C with Hypr for true passwordless strong customer authentication- - Previously updated : 12/7/2022 Last updated : 01/26/2024 +
+#customer intent: I'm a developer integrating HYPR with Azure AD B2C. I want a tutorial to configure the Azure AD B2C policy to enable passwordless authentication using HYPR for my customer applications.
# Tutorial for configuring HYPR with Azure Active Directory B2C
active-directory-b2c Partner Idemia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idemia.md
Title: Configure IDEMIA Mobile ID with Azure Active Directory B2C description: Learn to integrate Azure AD B2C authentication with IDEMIA Mobile ID for a relying party to consume Mobile ID, or US state-issued mobile IDs- - Previously updated : 03/10/2023 Last updated : 01/26/2024 zone_pivot_groups: b2c-policy-type+
+#customer intent: I'm an Azure AD B2C administrator, and I want to configure IDEMIA Mobile ID integration with Azure AD B2C. I want users to authenticate using biometric authentication services and benefit from a trusted, government-issued digital ID.
# Tutorial: Configure IDEMIA Mobile ID with Azure Active Directory B2C
active-directory-b2c Partner Jumio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-jumio.md
Title: Tutorial to configure Azure Active Directory B2C with Jumio description: Configure Azure Active Directory B2C with Jumio for automated ID verification, safeguarding customer data.- - Previously updated : 12/7/2022 Last updated : 01/26/2024 +
+#customer intent: I'm an Azure AD B2C administrator, and I want to integrate Jumio with Azure AD B2C. I need to enable real-time automated ID verification for user accounts and protect customer data.
# Tutorial for configuring Jumio with Azure Active Directory B2C
active-directory-b2c Partner Keyless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-keyless.md
Title: Tutorial to configure Keyless with Azure Active Directory B2C description: Tutorial to configure Sift Keyless with Azure Active Directory B2C for passwordless authentication - - Previously updated : 03/06/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer integrating Azure AD B2C with Keyless for passwordless authentication. I need to configure Keyless with Azure AD B2C, so I can provide a secure and convenient passwordless authentication experience for my customer applications.
+ # Tutorial: Configure Keyless with Azure Active Directory B2C
-Learn to configure Azure Active Directory B2C (Azure AD B2C) with the Sift Keyless passwordless solution. With Azure AD B2C as an identity provider (IdP), integrate Keyless with customer applications to provide passwordless authentication. The Keyless Zero-Knowledge Biometric (ZKB) is passwordless multi-factor authentication that helps eliminate fraud, phishing, and credential reuse, while enhancing the customer experience and protecting privacy.
+Learn to configure Azure Active Directory B2C (Azure AD B2C) with the Sift Keyless passwordless solution. With Azure AD B2C as an identity provider (IdP), integrate Keyless with customer applications to provide passwordless authentication. The Keyless Zero-Knowledge Biometric (ZKB) is passwordless multifactor authentication that helps eliminate fraud, phishing, and credential reuse, while enhancing the customer experience and protecting privacy.
Go to keyless.io to learn about:
active-directory-b2c Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-lexisnexis.md
- Previously updated : 12/7/2022 Last updated : 01/26/2024 +
+#customer intent: I'm a developer integrating Azure Active Directory B2C with LexisNexis ThreatMetrix. I want to configure the API and UI components, so I can verify user identities and perform risk analysis based on user attributes and device profiling information.
+ # Tutorial for configuring LexisNexis with Azure Active Directory B2C
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md
Title: Configure TheAccessHub Admin Tool by using Azure Active Directory B2C description: Configure TheAccessHub Admin Tool with Azure Active Directory B2C for customer account migration and customer service request (CSR) administration- - Previously updated : 12/6/2022 Last updated : 01/26/2024 +
+#customer intent: As an administrator managing customer accounts in Azure AD B2C, I want to configure TheAccessHub Admin Tool with Azure AD B2C. My goal is to migrate customer accounts, administer CSR requests, synchronize data, and customize notifications.
+ # Configure TheAccessHub Admin Tool with Azure Active Directory B2C
active-directory-b2c Partner Nevis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nevis.md
Title: Tutorial to configure Azure Active Directory B2C with Nevis description: Learn how to integrate Azure AD B2C authentication with Nevis for passwordless authentication - - Previously updated : 12/8/2022 Last updated : 01/26/2024 +
+#customer intent: I'm a developer, and I want to configure Nevis with Azure Active Directory B2C for passwordless authentication. I need to enable customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements.
# Tutorial to configure Nevis with Azure Active Directory B2C for passwordless authentication
-In this tutorial, learn to enable passwordless authentication in Azure Active Directory B2C (Azure AD B2C) with the [Nevis](https://www.nevis.net/en/solution/authentication-cloud) Access app to enable customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements. PSD2 is a European Union (EU) directive, administered by the European Commission (Directorate General Internal Market) to regulate payment services and payment service providers throughout the EU and European Economic Area (EEA).
+In this tutorial, learn to enable passwordless authentication in Azure Active Directory B2C (Azure AD B2C) with the Nevis Access app to enable customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements. PSD2 is a European Union (EU) directive, administered by the European Commission (Directorate General Internal Market) to regulate payment services and payment service providers throughout the EU and European Economic Area (EEA).
## Prerequisites
active-directory-b2c Partner Nok Nok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nok-nok.md
Title: Tutorial to configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication description: Configure Nok Nok Passport with Azure AD B2C to enable passwordless FIDO2 authentication- - Previously updated : 03/13/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party authentication provider. I want to learn how to configure Nok Nok Passport as an identity provider (IdP) in Azure AD B2C. My goal is to enable passwordless FIDO authentication for my users.
# Tutorial: Configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication
-Learn to integrate the Nok Nok S3 Authentication Suite into your Azure Active Directory B2C (Azure AD B2C) tenant. Nok Nok solutions enable FIDO certified multi-factor authentication such as FIDO UAF, FIDO U2F, WebAuthn, and FIDO2 for mobile and web applications. Nok Nok solutions improve security posture while balancing user experience.
-
+Learn to integrate the Nok Nok S3 Authentication Suite into your Azure Active Directory B2C (Azure AD B2C) tenant. The Nok Nok solutions enable FIDO certified multi-factor authentication such as FIDO UAF, FIDO U2F, WebAuthn, and FIDO2 for mobile and web applications. Nok Nok solutions improve security posture while balancing the user experience.
-To to noknok.com to learn more: [Nok Nok Labs, Inc.](https://noknok.com/)
+Go to noknok.com to learn more: [Nok Nok Labs, Inc.](https://noknok.com/)
## Prerequisites
To get started, you need:
* If you don't have one, get an [Azure free account](https://azure.microsoft.com/free/) * An Azure AD B2C tenant linked to the Azure subscription * [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
-* Go to [noknok.com](https://noknok.com/). On the top menu, select **Demo**.
+* Go to [noknok.com](https://noknok.com/).
+ * On the top menu, select **Demo**.
## Scenario description
-To enable passwordless FIDO authentication for your users, enable Nok Nok as an identity provider (IdP) in your Azure AD B2C tenant. Nok Nok solution integration includes the following components:
+To enable passwordless FIDO authentication for your users, enable Nok Nok as an identity provider (IdP) in your Azure AD B2C tenant. The Nok Nok solution integration includes the following components:
* **Azure AD B2C** – authorization server that verifies user credentials * **Web and mobile applications** – mobile or web apps to protect with Nok Nok solutions and Azure AD B2C
To enable passwordless FIDO authentication for your users, enable Nok Nok as an
* Go to the Apple App Store for [Nok Nok Passport](https://apps.apple.com/us/app/nok-nok-passport/id1050437340) * Or, Google Play [Nok Nok Passport](https://play.google.com/store/apps/details?id=com.noknok.android.passport2&hl=en&gl=US)
-The following diagram illustrates the Nok Nok solution as IdP for Azure AD B2C using OpenID Connect (OIDC) for passwordless authentication.
+The following diagram illustrates the Nok Nok solution as an IdP for Azure AD B2C by using OpenID Connect (OIDC) for passwordless authentication.
![Diagram of Nok Nok as IdP for Azure AD B2C using OpenID Connect (OIDC) for passwordless authentication.](./media/partner-nok-nok/nok-nok-architecture-diagram.png)
-1. At the sign-in page, user selects sign-in or sign-up and enters the username.
-2. Azure AD B2C redirects user to the Nok Nok OIDC authentication provider.
+1. At the sign-in page, select sign-in or sign-up, and enter the username.
+2. Azure AD B2C redirects the user to the Nok Nok OIDC authentication provider.
3. For mobile authentications, a QR code appears or push notification goes to the user device. For desktop sign-in, the user is redirected to the web app sign-in page for passwordless authentication.
-4. User scans the QR code with Nok Nok app SDK or Passport app. Or, username is sign-in page input.
-5. User is prompted for authentication. User does passwordless authentication: biometrics, device PIN, or any roaming authenticator. Authentication prompt appears on web application. User does passwordless authentication: biometrics, device PIN, or any roaming authenticator.
-6. Nok Nok server validates FIDO assertion and sends OIDC authentication response to Azure AD B2C.
-7. User is granted or denied access.
+4. Scan the QR code with the Nok Nok app SDK or Passport app. Alternatively, enter the username on the sign-in page.
+5. A prompt appears for authentication. Perform passwordless authentication: biometrics, device PIN, or any roaming authenticator.
+6. The authentication prompt appears on the web application.
+7. Perform passwordless authentication: biometrics, device PIN, or any roaming authenticator.
+8. The Nok Nok server validates the FIDO assertion and sends an OIDC authentication response to Azure AD B2C.
+9. The user is granted or denied access.
## Get started with Nok Nok
-1. Go to the noknok.com [Contact](https://noknok.com/contact/) page.
+1. Go to noknok.com [Contact](https://noknok.com/contact/).
2. Fill out the form for a Nok Nok tenant. 3. An email arrives with tenant access information and links to documentation. 4. Use the Nok Nok integration documentation to complete the tenant OIDC configuration. ## Integrate with Azure AD B2C
-Use the following instructions to add and configure an IdP then configure a user flow.
+Use the following instructions to add and configure an IdP, and then configure a user flow.
### Add a new Identity provider For the following instructions, use the directory with the Azure AD B2C tenant. To add a new IdP:
-1. Sign in to the **[Azure portal](https://portal.azure.com/#home)** as Global Administrator of the Azure AD B2C tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com/#home) as Global Administrator of the Azure AD B2C tenant.
2. In the portal toolbar, select the **Directories + subscriptions**. 3. On **Portal settings, Directories + subscriptions**, in the **Directory name** list, locate the Azure AD B2C directory. 4. Select **Switch**.
For the following instructions, use the directory with the Azure AD B2C tenant.
To configure an IdP: 1. Select **Identity provider type** > **OpenID Connect (Preview)**.
-2. For **Name**, enter Nok Nok Authentication Provider, or another name.
-3. For **Metadata URL**, enter hosted Nok Nok Authentication app URI, followed by the path such as `https://demo.noknok.com/mytenant/oidc/.well-known/openid-configuration`
+2. For **Name**, enter the Nok Nok Authentication Provider, or another name.
+3. For **Metadata URL**, enter the hosted Nok Nok Authentication app URI, followed by the path such as `https://demo.noknok.com/mytenant/oidc/.well-known/openid-configuration`.
4. For **Client Secret**, use the Client Secret from Nok Nok.
-5. For **Client ID**, use the client ID provided by Nok Nok.
+5. For **Client ID**, use the Client ID provided by Nok Nok.
6. For **Scope**, use **OpenID profile email**. 7. For **Response type**, use **code**. 8. For **Response mode**, use **form_post**.
For the following instructions, Nok Nok is a new OIDC IdP in the B2C identity pr
## Test the user flow
-1. Open the Azure AD B2C tenant and under **Policies** select **Identity Experience Framework**.
+1. Open the Azure AD B2C tenant. Under **Policies**, select **Identity Experience Framework**.
2. Select the created **SignUpSignIn**. 3. Select **Run user flow**. 4. For **Application**, select the registered app. The example is JWT.
active-directory-b2c Partner Onfido https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-onfido.md
Title: Tutorial to configure Azure Active Directory B2C with Onfido description: Learn how to integrate Azure AD B2C authentication with Onfido for document ID and facial biometrics verification - - Previously updated : 12/8/2022 Last updated : 01/26/2024 +
+#customer intent: I'm a developer integrating Azure Active Directory B2C with Onfido. I need to configure the Onfido service to verify identity in the sign-up or sign-in flow. My goal is to meet Know Your Customer and identity requirements and provide a reliable onboarding experience, while reducing fraud.
# Tutorial for configuring Onfido with Azure Active Directory B2C
active-directory-b2c Partner Ping Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-ping-identity.md
Title: Tutorial to configure Azure Active Directory B2C with Ping Identity description: Learn how to integrate Azure AD B2C authentication with Ping Identity- - Previously updated : 01/20/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer, and I want to learn how to configure Ping Identity with Azure Active Directory B2C for secure hybrid access (SHA). I need to extend the capabilities of Azure AD B2C and enable secure hybrid access using PingAccess and PingFederate.
# Tutorial: Configure Ping Identity with Azure Active Directory B2C for secure hybrid access
active-directory-b2c Partner Saviynt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-saviynt.md
Title: Tutorial to configure Saviynt with Azure Active Directory B2C description: Learn to configure Azure AD B2C with Saviynt for cross-application integration for better security, governance, and compliance. - - Previously updated : 05/23/2023 Last updated : 01/26/2024 +
+#customer intent: As a security manager, I want to integrate Azure Active Directory B2C with Saviynt. I need visibility, security, and governance over user life-cycle management and access control.
# Tutorial to configure Saviynt with Azure Active Directory B2C
active-directory-b2c Partner Strata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-strata.md
Title: Tutorial to configure Azure Active Directory B2C with Strata description: Learn how to integrate Azure AD B2C authentication with whoIam for user verification - - Previously updated : 12/16/2022 Last updated : 01/26/2024 +
+#customer intent: As an IT admin, I want to integrate Azure Active Directory B2C with Strata Maverics Identity Orchestrator. I need to protect on-premises applications and enable customer single sign-on (SSO) to hybrid apps.
+ # Tutorial to configure Azure Active Directory B2C with Strata
The following architecture diagram shows the implementation.
1. The user requests access to the on-premises hosted application. Maverics Identity Orchestrator proxies the request to the application. 2. Orchestrator checks the user authentication state. If there's no session token, or the token is invalid, the user goes to Azure AD B2C for authentication. 3. Azure AD B2C sends the authentication request to the configured social IdP.
-4. The IdP challenges the user for credential. Multi-factor authentication (MFA) might be required.
+4. The IdP challenges the user for credentials. Multifactor authentication (MFA) might be required.
5. The IdP sends the authentication response to Azure AD B2C. The user can create a local account in the Azure AD B2C directory. 6. Azure AD B2C sends the user request to the endpoint specified during the Orchestrator app registration in the Azure AD B2C tenant. 7. The Orchestrator evaluates access policies and attribute values for HTTP headers forwarded to the app. Orchestrator might call to other attribute providers to retrieve information to set the header values. The Orchestrator sends the request to the app.
active-directory-b2c Partner Trusona https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-trusona.md
Title: Trusona Authentication Cloud with Azure AD B2C description: Learn how to add Trusona Authentication Cloud as an identity provider on Azure AD B2C to enable a "tap-and-go" passwordless authentication- - Previously updated : 03/10/2023 Last updated : 01/26/2024 zone_pivot_groups: b2c-policy-type+
+#customer intent: I'm a developer integrating Azure AD B2C authentication with Trusona Authentication Cloud. I want to configure Trusona Authentication Cloud as an identity provider (IdP) in Azure AD B2C, so I can enable passwordless authentication and provide a better user experience for my web application users.
# Configure Trusona Authentication Cloud with Azure Active Directory B2C
active-directory-b2c Partner Typingdna https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-typingdna.md
description: Learn how to integrate Azure AD B2C authentication with TypingDNA to help with Identity verification and proofing based on user typing pattern, provides ID verification solutions forcing multifactor authentication and helps to comply with SCA requirements for Payment Services Directive 2 (PSD2). - - Previously updated : 06/25/2020 Last updated : 01/26/2024 +
+#customer intent: I'm an Azure AD B2C administrator, and I want to integrate TypingDNA with Azure AD B2C. I need to comply with Payment Services Directive 2 (PSD2) transaction requirements through keystroke dynamics and strong customer authentication.
# Tutorial for configuring TypingDNA with Azure Active Directory B2C
active-directory-b2c Partner Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-web-application-firewall.md
Title: Tutorial to configure Azure Active Directory B2C with Azure Web Application Firewall description: Learn to configure Azure AD B2C with Azure Web Application Firewall to protect applications from malicious attacks - - Previously updated : 03/08/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer configuring Azure Active Directory B2C with Azure Web Application Firewall. I want to enable the WAF service for my B2C tenant with a custom domain, so I can protect my web applications from common exploits and vulnerabilities.
+ # Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall
active-directory-b2c Partner Whoiam Rampart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam-rampart.md
Title: Configure WhoIAM Rampart with Azure Active Directory B2C description: Learn how to integrate Azure AD B2C authentication with WhoIAM Rampart- - Last updated 05/02/2023 +
+#customer intent: I'm a developer integrating WhoIAM Rampart with Azure AD B2C. I need to configure and integrate Rampart with Azure AD B2C using custom policies. My goal is to enable an integrated helpdesk and invitation-gated user registration experience for my application.
+ # Configure WhoIAM Rampart with Azure Active Directory B2C
active-directory-b2c Partner Whoiam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam.md
Title: Tutorial to configure Azure Active Directory B2C with WhoIAM description: In this tutorial, learn how to integrate Azure AD B2C authentication with WhoIAM for user verification. - - Previously updated : 01/18/2023 Last updated : 01/26/2024 +
+#customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party identity management system. I need a tutorial to configure WhoIAM Branded Identity Management System (BRIMS) with Azure AD B2C. My goal is to enable user verification with voice, SMS, and email in my application.
# Tutorial to configure Azure Active Directory B2C with WhoIAM
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
Title: Configure xID with Azure Active Directory B2C for passwordless authentication description: Configure Azure Active Directory B2C with xID for passwordless authentication- - Previously updated : 05/04/2023 Last updated : 01/26/2024 +
+#customer intent: As an Azure AD B2C administrator, I want to configure xID as an identity provider, so users can sign in using xID and authenticate with their digital identity on their device.
# Configure xID with Azure Active Directory B2C for passwordless authentication
active-directory-b2c Partner Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-zscaler.md
Title: Tutorial - Configure Zscaler Private access with Azure Active Directory B2C - description: Learn how to integrate Azure AD B2C authentication with Zscaler.- - Previously updated : 01/18/2023 Last updated : 01/26/2024 +
+#customer intent: As an IT admin, I want to integrate Azure Active Directory B2C authentication with Zscaler Private Access. I need to provide secure access to private applications and assets without the need for a virtual private network (VPN).
# Tutorial: Configure Zscaler Private Access with Azure Active Directory B2C
active-directory-b2c Troubleshoot With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot-with-application-insights.md
Previously updated : 01/11/2024 Last updated : 01/22/2024 zone_pivot_groups: b2c-policy-type
zone_pivot_groups: b2c-policy-type
::: zone pivot="b2c-custom-policy" + This article provides steps for collecting logs from Azure Active Directory B2C (Azure AD B2C) so that you can diagnose problems with your custom policies. Application Insights provides a way to diagnose exceptions and visualize application performance issues. Azure AD B2C includes a feature for sending data to Application Insights. The detailed activity logs described here should be enabled **ONLY** during the development of your custom policies.
After you save the settings, the Application insights logs appear on the **Azure
## Configure Application Insights in Production
-To improve your production environment performance and better user experience, it's important to configure your policy to ignore messages that are unimportant. Use the following configuration in production environments and no logs are sent to your application insights.
+To improve performance and the user experience in your production environment, it's important to configure your policy to ignore messages that are unimportant. You also need to make sure that you don't log personally identifiable information (PII). Use the following configuration in production environments so that no logs are sent to Application Insights.
1. Set the `DeploymentMode` attribute of the [TrustFrameworkPolicy](trustframeworkpolicy.md) to `Production`.
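For illustration, a minimal sketch of a policy header with this attribute set; the tenant and policy names are placeholders:

```xml
<TrustFrameworkPolicy
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
  PolicySchemaVersion="0.3.0.0"
  TenantId="yourtenant.onmicrosoft.com"
  PolicyId="B2C_1A_TrustFrameworkExtensions"
  PublicPolicyUri="http://yourtenant.onmicrosoft.com/B2C_1A_TrustFrameworkExtensions"
  DeploymentMode="Production">
  <!-- Policy contents omitted -->
</TrustFrameworkPolicy>
```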
active-directory-b2c Trustframeworkpolicy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/trustframeworkpolicy.md
Previously updated : 01/11/2024 Last updated : 01/23/2024
The **TrustFrameworkPolicy** element contains the following elements:
To inherit a policy from another policy, a **BasePolicy** element must be declared under the **TrustFrameworkPolicy** element of the policy file. The **BasePolicy** element is a reference to the base policy from which this policy is derived. + The **BasePolicy** element contains the following elements: | Element | Occurrences | Description |
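For illustration, a minimal sketch of policy inheritance; the tenant and policy names are placeholders:

```xml
<TrustFrameworkPolicy
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
  PolicySchemaVersion="0.3.0.0"
  TenantId="yourtenant.onmicrosoft.com"
  PolicyId="B2C_1A_TrustFrameworkExtensions"
  PublicPolicyUri="http://yourtenant.onmicrosoft.com/B2C_1A_TrustFrameworkExtensions">

  <!-- This policy derives from, and extends, the base policy -->
  <BasePolicy>
    <TenantId>yourtenant.onmicrosoft.com</TenantId>
    <PolicyId>B2C_1A_TrustFrameworkBase</PolicyId>
  </BasePolicy>

  <!-- Elements defined here override or extend the base policy -->
</TrustFrameworkPolicy>
```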
active-directory-b2c View Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/view-audit-logs.md
Previously updated : 01/11/2024 Last updated : 01/22/2024
-#Customer intent: As an Azure AD B2C administrator, I want to access and view the audit logs for my B2C tenant, so that I can monitor activity, track user sign-ins, and troubleshoot any issues related to B2C resources and applications.
+#Customer intent: As an Azure AD B2C administrator, I want to access and view the audit logs for my Azure AD B2C tenant, so that I can monitor activity, track user sign-ins, and troubleshoot any issues related to B2C resources and applications.
# Accessing Azure AD B2C audit logs + Azure Active Directory B2C (Azure AD B2C) emits audit logs containing activity information about B2C resources, tokens issued, and administrator access. This article provides a brief overview of the information available in audit logs and instructions on how to access this data for your Azure AD B2C tenant. Audit log events are only retained for **seven days**. Plan to download and store your logs using one of the methods shown below if you require a longer retention period.
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/gpt-with-vision.md
To try out GPT-4 Turbo with Vision, see the [quickstart](/azure/ai-services/open
The GPT-4 Turbo with Vision model answers general questions about what's present in the images or videos you upload. - ## Enhancements Enhancements let you incorporate other Azure AI services (such as Azure AI Vision) to add new functionality to the chat-with-vision experience. **Object grounding**: Azure AI Vision complements GPT-4 Turbo with VisionΓÇÖs text response by identifying and locating salient objects in the input images. This lets the chat model give more accurate and detailed responses about the contents of the image.
+> [!IMPORTANT]
+> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+ :::image type="content" source="../media/concepts/gpt-v/object-grounding.png" alt-text="Screenshot of an image with object grounding applied. Objects have bounding boxes with labels."::: :::image type="content" source="../media/concepts/gpt-v/object-grounding-response.png" alt-text="Screenshot of a chat response to an image prompt about an outfit. The response is an itemized list of clothing items seen in the image."::: **Optical Character Recognition (OCR)**: Azure AI Vision complements GPT-4 Turbo with Vision by providing high-quality OCR results as supplementary information to the chat model. It allows the model to produce higher quality responses for images with dense text, transformed images, and numbers-heavy financial documents, and increases the variety of languages the model can recognize in text.
+> [!IMPORTANT]
+> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+ :::image type="content" source="../media/concepts/gpt-v/receipts.png" alt-text="Photo of several receipts."::: :::image type="content" source="../media/concepts/gpt-v/ocr-response.png" alt-text="Screenshot of the JSON response of an OCR call."::: **Video prompt**: The **video prompt** enhancement lets you use video clips as input for AI chat, enabling the model to generate summaries and answers about video content. It uses Azure AI Vision Video Retrieval to sample a set of frames from a video and create a transcript of the speech in the video.
-In order to use the video prompt enhancement, you need both an Azure AI Vision resource and an Azure Video Indexer resource, in addition to your Azure OpenAI resource.
- > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RW1eHRf]
+> [!NOTE]
+> In order to use the video prompt enhancement, you need both an Azure AI Vision resource and an Azure Video Indexer resource, in the paid (S0) tier, in addition to your Azure OpenAI resource.
## Special pricing information
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
### GPT-4 and GPT-4 Turbo Preview model availability
-| Model Availability | gpt-4 (0314) | gpt-4 (0613) | gpt-4 (1106-preview) | gpt-4 (vision-preview) |
-||:|:|:|:|
-| Available to all subscriptions with Azure OpenAI access | | Australia East <br> Canada East <br> France Central <br> Sweden Central <br> Switzerland North | Australia East <br> Canada East <br> East US 2 <br> France Central <br> Norway East <br> South India <br> Sweden Central <br> UK South <br> West US | Sweden Central <br> Switzerland North <br> West US |
-| Available to subscriptions with current access to the model version in the region | East US <br> France Central <br> South Central US <br> UK South | East US <br> East US 2 <br> Japan East <br> UK South | | Australia East |
+
+| Model | Regions where model is available to all subscriptions with Azure OpenAI access | Regions where model is available only to subscriptions with previous access to that model/region |
+||:|:|
+| gpt-4 (0314) | | East US <br> France Central <br> South Central US <br> UK South |
+| gpt-4 (0613) | Australia East <br> Canada East <br> France Central <br> Sweden Central <br> Switzerland North | East US <br> East US 2 <br> Japan East <br> UK South |
+| gpt-4 (1106-preview) | Australia East <br> Canada East <br> East US 2 <br> France Central <br> Norway East <br> South India <br> Sweden Central <br> UK South <br> West US | |
+| gpt-4 (vision-preview) | | Sweden Central <br> Switzerland North <br> Australia East <br> West US |
+
+> [!NOTE]
+> As a temporary measure, GPT-4 Turbo with Vision is currently unavailable to new customers.
### GPT-3.5 models
ai-services Gpt V Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/gpt-v-quickstart.md
zone_pivot_groups: openai-quickstart-gpt-v
::: zone-end +++ ## Next steps * Learn more about these APIs in the [GPT-4 Turbo with Vision how-to guide](./gpt-v-quickstart.md)
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md
The GPT-4 Turbo with Vision model answers general questions about what's present
## Call the Chat Completion APIs
-The following REST command shows the most basic way to use the GPT-4 Turbo with Vision model with code. If this is your first time using these models programmatically, we recommend starting with our [GPT-4 Turbo with Vision quickstart](../gpt-v-quickstart.md).
+The following command shows the most basic way to use the GPT-4 Turbo with Vision model with code. If this is your first time using these models programmatically, we recommend starting with our [GPT-4 Turbo with Vision quickstart](../gpt-v-quickstart.md).
+
+#### [REST](#tab/rest)
Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/chat/completions?api-version=2023-12-01-preview` where
Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployme
- `Content-Type`: application/json - `api-key`: {API_KEY} ++ **Body**:
-The following is a sample request body. The format is the same as the chat completions API for GPT-4, except that the message content can be an array containing strings and images (either a valid HTTP or HTTPS URL to an image, or a base-64-encoded image). Remember to set a `"max_tokens"` value, or the return output will be cut off.
+The following is a sample request body. The format is the same as the chat completions API for GPT-4, except that the message content can be an array containing text and images (either a valid HTTP or HTTPS URL to an image, or a base-64-encoded image).
+
+> [!IMPORTANT]
+> Remember to set a `"max_tokens"` value, or the return output will be cut off.
```json {
The following is a sample request body. The format is the same as the chat compl
{ "type": "image_url", "image_url": {
- "url": "<URL or base-64-encoded image>"
+ "url": "<image URL>"
} } ]
The following is a sample request body. The format is the same as the chat compl
} ```
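+
+If you want to see the request end to end, the following is a minimal sketch that sends the sample body above with the Python `requests` package. The resource name, deployment name, and key placeholders are assumptions you replace with your own values; the URL, headers, and body mirror what's documented above.
+
+```python
+import requests
+
+resource_name = "<RESOURCE_NAME>"      # placeholder: your Azure OpenAI resource name
+deployment_name = "<DEPLOYMENT_NAME>"  # placeholder: your GPT-4 Turbo with Vision deployment name
+api_key = "<API_KEY>"                  # placeholder: your Azure OpenAI key
+
+url = (
+    f"https://{resource_name}.openai.azure.com/openai/deployments/"
+    f"{deployment_name}/chat/completions?api-version=2023-12-01-preview"
+)
+headers = {"Content-Type": "application/json", "api-key": api_key}
+body = {
+    "messages": [
+        {"role": "system", "content": "You are a helpful assistant."},
+        {"role": "user", "content": [
+            {"type": "text", "text": "Describe this picture:"},
+            {"type": "image_url", "image_url": {"url": "<image URL>"}}
+        ]}
+    ],
+    "max_tokens": 2000
+}
+
+response = requests.post(url, headers=headers, json=body)
+print(response.json())
+```
+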
+#### [Python](#tab/python)
+
+1. Define your Azure OpenAI resource endpoint and key.
+1. Enter the name of your GPT-4 Turbo with Vision model deployment.
+1. Create a client object using those values.
+
+ ```python
+ from openai import AzureOpenAI  # requires the openai package, version 1.0 or later
+
+ api_base = '<your_azure_openai_endpoint>' # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
+ api_key="<your_azure_openai_key>"
+ deployment_name = '<your_deployment_name>'
+ api_version = '2023-12-01-preview' # this might change in the future
+
+ client = AzureOpenAI(
+ api_key=api_key,
+ api_version=api_version,
+ base_url=f"{api_base}openai/deployments/{deployment_name}/extensions",
+ )
+ ```
+
+1. Then call the client's **create** method. The following code shows a sample request body. The format is the same as the chat completions API for GPT-4, except that the message content can be an array containing text and images (either a valid HTTP or HTTPS URL to an image, or a base-64-encoded image).
+
+ > [!IMPORTANT]
+ > Remember to set a `"max_tokens"` value, or the return output will be cut off.
+
+ ```python
+ response = client.chat.completions.create(
+ model=deployment_name,
+ messages=[
+ { "role": "system", "content": "You are a helpful assistant." },
+ { "role": "user", "content": [
+ {
+ "type": "text",
+ "text": "Describe this picture:"
+ },
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": "<image URL>"
+ }
+ }
+ ] }
+ ],
+ max_tokens=2000
+ )
+ print(response)
+ ```
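+
+1. Optionally, read back just the generated text. A minimal sketch, assuming the `response` object returned by the previous call:
+
+ ```python
+ # The reply text is on the first choice of the chat completion object.
+ print(response.choices[0].message.content)
+ ```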
+++
+> [!TIP]
+> ### Use a local image
+>
+> If you want to use a local image, you can use the following Python code to convert it to base64 so it can be passed to the API. Alternative file conversion tools are available online.
+>
+> ```python
+> import base64
+> from mimetypes import guess_type
+>
+> # Function to encode a local image into data URL
+> def local_image_to_data_url(image_path):
+> # Guess the MIME type of the image based on the file extension
+> mime_type, _ = guess_type(image_path)
+> if mime_type is None:
+> mime_type = 'application/octet-stream' # Default MIME type if none is found
+>
+> # Read and encode the image file
+> with open(image_path, "rb") as image_file:
+> base64_encoded_data = base64.b64encode(image_file.read()).decode('utf-8')
+>
+> # Construct the data URL
+> return f"data:{mime_type};base64,{base64_encoded_data}"
+>
+> # Example usage
+> image_path = '<path_to_image>'
+> data_url = local_image_to_data_url(image_path)
+> print("Data URL:", data_url)
+> ```
+>
+> When your base64 image data is ready, you can pass it to the API in the request body like this:
+>
+> ```json
+> ...
+> "type": "image_url",
+> "image_url": {
+> "url": "data:image/jpeg;base64,<your_image_data>"
+> }
+> ...
+> ```
+ ### Output The API response should look like the following.
The API response should look like the following.
} ```
-Every response includes a `"finish_reason"` field. It has the following possible values:
+Every response includes a `"finish_details"` field. It has the following possible values:
- `stop`: API returned complete model output. - `length`: Incomplete model output due to the `max_tokens` input parameter or model's token limit. - `content_filter`: Omitted content due to a flag from our content filters. ## Detail parameter settings in image processing: Low, High, Auto
-The detail parameter in the model offers three choices: `low`, `high`, or `auto`, to adjust the way the model interprets and processes images. The default setting is auto, where the model decides between low or high based on the size of the image input.
+The _detail_ parameter in the model offers three choices: `low`, `high`, or `auto`, to adjust the way the model interprets and processes images. The default setting is auto, where the model decides between low or high based on the size of the image input.
- `low` setting: the model doesn't activate "high res" mode; instead, it processes a lower-resolution 512x512 version, resulting in quicker responses and reduced token consumption for scenarios where fine detail isn't crucial. - `high` setting: the model activates "high res" mode. Here, the model initially views the low-resolution image and then generates detailed 512x512 segments from the input image. Each segment uses double the token budget, allowing for a more detailed interpretation of the image.
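
For example, you could request high-detail processing for a single image by adding the setting to the image entry in the message content. This is an illustrative sketch; it assumes the option is passed as a `detail` field inside the `image_url` object, alongside the image URL, as in the chat message format shown earlier.

```python
# Sketch: message content with an explicit detail setting ("low", "high", or "auto").
message_content = [
    {"type": "text", "text": "Describe this picture:"},
    {
        "type": "image_url",
        "image_url": {
            "url": "<image URL>",
            "detail": "high"  # force "high res" mode; use "low" for faster, cheaper processing
        }
    }
]
```
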
The **Optical character recognition (OCR)** integration allows the model to prod
The **object grounding** integration brings a new layer to data analysis and user interaction, as the feature can visually distinguish and highlight important elements in the images it processes. > [!IMPORTANT]
-> To use Vision enhancement, you need a Computer Vision resource, and it must be in the same Azure region as your GPT-4 Turbo with Vision resource.
+> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
> [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges.
+#### [REST](#tab/rest)
+ Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/extensions/chat/completions?api-version=2023-12-01-preview` where - RESOURCE_NAME is the name of your Azure OpenAI resource
Send a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployme
The format is similar to that of the chat completions API for GPT-4, but the message content can be an array containing strings and images (either a valid HTTP or HTTPS URL to an image, or a base-64-encoded image).
-You must also include the `enhancements` and `dataSources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` property, which both have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service. `dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVision"` and a `parameters` property. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource. Remember to set a `"max_tokens"` value, or the return output will be cut off.
+You must also include the `enhancements` and `dataSources` objects. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` property, which both have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service. `dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVision"` and a `parameters` property. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource.
+
+> [!IMPORTANT]
+> Remember to set a `"max_tokens"` value, or the return output will be cut off.
+ ```json {
You must also include the `enhancements` and `dataSources` objects. `enhancement
{ "type": "image_url", "image_url": {
- "url":"<URL or base-64-encoded image>"
+ "url":"<image URL>"
} } ]
You must also include the `enhancements` and `dataSources` objects. `enhancement
} ```
+#### [Python](#tab/python)
+
+You call the same method as in the previous step, but include the new *extra_body* parameter. It contains the `enhancements` and `dataSources` fields.
+
+`enhancements` represents the specific Vision enhancement features requested in the chat. It has a `grounding` and `ocr` field, which both have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service.
+
+`dataSources` represents the Computer Vision resource data that's needed for Vision enhancement. It has a `type` field which should be `"AzureComputerVision"` and a `parameters` field. Set the `endpoint` and `key` to the endpoint URL and access key of your Computer Vision resource.
+
+> [!IMPORTANT]
+> Remember to set a `"max_tokens"` value, or the return output will be cut off.
++
+```python
+response = client.chat.completions.create(
+ model=deployment_name,
+ messages=[
+ { "role": "system", "content": "You are a helpful assistant." },
+ { "role": "user", "content": [
+ {
+ "type": "text",
+ "text": "Describe this picture:"
+ },
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": "<image URL>"
+ }
+ }
+ ] }
+ ],
+ extra_body={
+ "dataSources": [
+ {
+ "type": "AzureComputerVision",
+ "parameters": {
+ "endpoint": "<your_computer_vision_endpoint>",
+ "key": "<your_computer_vision_key>"
+ }
+ }],
+ "enhancements": {
+ "ocr": {
+ "enabled": True
+ },
+ "grounding": {
+ "enabled": True
+ }
+ }
+ },
+ max_tokens=2000
+)
+print(response)
+```
++++ ### Output The chat responses you receive from the model should now include enhanced information about the image, such as object labels and bounding boxes, and OCR results. The API response should look like the following.
The chat responses you receive from the model should now include enhanced inform
} ```
-Every response includes a `"finish_reason"` field. It has the following possible values:
+Every response includes a `"finish_details"` field. It has the following possible values:
- `stop`: API returned complete model output. - `length`: Incomplete model output due to the `max_tokens` input parameter or model's token limit. - `content_filter`: Omitted content due to a flag from our content filters.
Every response includes a `"finish_reason"` field. It has the following possible
GPT-4 Turbo with Vision provides exclusive access to Azure AI Services tailored enhancements. The **video prompt** integration uses Azure AI Vision video retrieval to sample a set of frames from a video and create a transcript of the speech in the video. It enables the AI model to give summaries and answers about video content. > [!IMPORTANT]
-> To use Vision enhancement, you need a Computer Vision resource, and it must be in the same Azure region as your GPT-4 Turbo with Vision resource.
+> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
> [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
-Follow these steps to set up a video retrieval system and integrate it with your AI chat model:
+### Set up video retrieval
+
+Follow these steps to set up a video retrieval system to integrate with your AI chat model:
1. Get an Azure AI Vision resource in the same region as the Azure OpenAI resource you're using. 1. Follow the instructions in [Do video retrieval using vectorization](/azure/ai-services/computer-vision/how-to/video-retrieval) to create a video retrieval index. Return to this guide once your index is created. 1. Save the index name, the `documentId` values of your videos, and the blob storage SAS URLs of your videos to a temporary location. You'll need these values in the next steps.+
+### Call the Chat Completion API
+
+#### [REST](#tab/rest)
+ 1. Prepare a POST request to `https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/extensions/chat/completions?api-version=2023-12-01-preview` where - RESOURCE_NAME is the name of your Azure OpenAI resource
Follow these steps to set up a video retrieval system and integrate it with your
1. Fill in all the `<placeholder>` fields above with your own information: enter the endpoint URLs and keys of your OpenAI and AI Vision resources where appropriate, and retrieve the video index information from the earlier step. 1. Send the POST request to the API endpoint. It should contain your OpenAI and AI Vision credentials, the name of your video index, and the ID and SAS URL of a single video.
+#### [Python](#tab/python)
+
+Call the client's **create** method as in the previous sections, but include the *extra_body* parameter. Here, it contains the `enhancements` and `dataSources` fields. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `video` field, which has a boolean `enabled` property. Use this to request the video retrieval service.
+
+`dataSources` represents the external resource data that's needed for Vision enhancement. It has a `type` field which should be `"AzureComputerVisionVideoIndex"` and a `parameters` field.
+
+Set the `computerVisionBaseUrl` and `computerVisionApiKey` to the endpoint URL and access key of your Computer Vision resource. Set `indexName` to the name of your video index. Set `videoUrls` to a list of SAS URLs of your videos.
+
+> [!IMPORTANT]
+> Remember to set a `"max_tokens"` value, or the return output will be cut off.
+
+```python
+response = client.chat.completions.create(
+ model=deployment_name,
+ messages=[
+ { "role": "system", "content": "You are a helpful assistant." },
+ { "role": "user", "content": [
+ {
+ "type": "text",
+ "text": "Describe this video:"
+ },
+ {
+ "type": "acv_document_id",
+ "acv_document_id": "<your_video_ID>"
+ }
+ ] }
+ ],
+ extra_body={
+ "dataSources": [
+ {
+ "type": "AzureComputerVisionVideoIndex",
+ "parameters": {
+ "computerVisionBaseUrl": "<your_computer_vision_endpoint>", # your endpoint should look like the following https://YOUR_RESOURCE_NAME.cognitiveservices.azure.com/computervision
+ "computerVisionApiKey": "<your_computer_vision_key>",
+ "indexName": "<name_of_your_index>",
+ "videoUrls": ["<your_video_SAS_URL>"]
+ }
+ }],
+ "enhancements": {
+ "video": {
+ "enabled": True
+ }
+ }
+ },
+ max_tokens=100
+)
+
+print(response)
+```
+ ### Output
The chat responses you receive from the model should include information about t
} ```
-Every response includes a `"finish_reason"` field. It has the following possible values:
+Every response includes a `"finish_details"` field. It has the following possible values:
- `stop`: API returned complete model output. - `length`: Incomplete model output due to the `max_tokens` input parameter or model's token limit. - `content_filter`: Omitted content due to a flag from our content filters.
ai-services Provisioned Throughput Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md
For each new commitment you need to create, follow these steps:
### Adding Provisioned Throughput Units to existing commitments
-The steps are the same as in the previous example, but you'll increase the **amount to commit (PTU)** value.
+The steps are the same as in the previous example, but you'll increase the **amount to commit (PTU)** value. The value you enter is the total number of PTUs purchased, not the increment; for example, if you currently have 300 PTUs and want to add 100 more, enter 400. The additional charge displayed is a pro-rated amount for the added PTUs over the time remaining in the commitment period.
:::image type="content" source="../media/how-to/provisioned-onboarding/increase-commitment.png" alt-text="Screenshot of commitment purchase UI with an increase in the amount to commit value." lightbox="../media/how-to/provisioned-onboarding/increase-commitment.png":::
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
Azure OpenAI provides two methods for authentication. you can use either API Ke
- **API Key authentication**: For this type of authentication, all API requests must include the API Key in the ```api-key``` HTTP header. The [Quickstart](./quickstart.md) provides guidance for how to make calls with this type of authentication. -- **Microsoft Entra authentication**: You can authenticate an API call using a Microsoft Entra token. Authentication tokens are included in a request as the ```Authorization``` header. The token provided must be preceded by ```Bearer```, for example ```Bearer YOUR_AUTH_TOKEN```. You can read our how-to guide on [authenticating with Microsoft Entra ID](./how-to/managed-identity.md).
+- **Microsoft Entra ID authentication**: You can authenticate an API call using a Microsoft Entra token. Authentication tokens are included in a request as the ```Authorization``` header. The token provided must be preceded by ```Bearer```, for example ```Bearer YOUR_AUTH_TOKEN```. You can read our how-to guide on [authenticating with Microsoft Entra ID](./how-to/managed-identity.md).
### REST API versioning
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) #### Example request
-You can make requests using [Azure AI Search](./concepts/use-your-data.md?tabs=ai-search#ingesting-your-data), [Azure Cosmos DB for MongoDB vCore](./concepts/use-your-data.md?tabs=mongo-db#ingesting-your-data), [Azure Machine learning](/azure/machine-learning/overview-what-is-azure-machine-learning), [Pinecone](https://www.pinecone.io/), and [Elasticsearch](https://www.elastic.co/).
+You can make requests using [Azure AI Search](./concepts/use-your-data.md?tabs=ai-search#ingesting-your-data), [Azure Cosmos DB for MongoDB vCore](./concepts/use-your-data.md?tabs=mongo-db#ingesting-your-data), [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning), [Pinecone](https://www.pinecone.io/), and [Elasticsearch](https://www.elastic.co/).
##### Azure AI Search
The following parameters are used inside of the optional `embeddingDependency` p
}, ```
-### Azure CosmosDB for MongoDB vCore parameters
+### Azure Cosmos DB for MongoDB vCore parameters
The following parameters are used for Azure Cosmos DB for MongoDB vCore.
The following parameters are used for Pinecone.
| `filepathField` (found inside of `fieldsMapping`) | string | Required | null | The name of the index field to use as a file path. | | `contentFields` (found inside of `fieldsMapping`) | string | Required | null | The name of the index fields that should be treated as content. | | `vectorFields` | dictionary | Optional | null | The names of fields that represent vector data |
-| `contentFieldsSeparator` (found inside of `fieldsMapping`) | string | Required | null | The separator for the your content fields. Use `\n` by default. |
+| `contentFieldsSeparator` (found inside of `fieldsMapping`) | string | Required | null | The separator for your content fields. Use `\n` by default. |
The following parameters are used inside of the optional `embeddingDependency` parameter, which contains details of a vectorization source that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
recommendations: false
GPT-4 Turbo with Vision on Azure OpenAI service is now in public preview. GPT-4 Turbo with Vision is a large multimodal model (LMM) developed by OpenAI that can analyze images and provide textual responses to questions about them. It incorporates both natural language processing and visual understanding. With enhanced mode, you can use the [Azure AI Vision](/azure/ai-services/computer-vision/overview) features to generate additional insights from the images. - Explore the capabilities of GPT-4 Turbo with Vision in a no-code experience using the [Azure Open AI Playground](https://oai.azure.com/). Learn more in the [Quickstart guide](./gpt-v-quickstart.md).-- Vision enhancement using GPT-4 Turbo with Vision is now available in the [Azure Open AI Playground](https://oai.azure.com/) and includes support for Optical Character Recognition, object grounding, image support for "add your data," and support for video prompt.
+- Vision enhancement using GPT-4 Turbo with Vision is now available in the [Azure Open AI Playground](https://oai.azure.com/) and includes support for Optical Character Recognition, object grounding, image support for "add your data," and support for video prompt.
- Make calls to the chat API directly using the [REST API](https://aka.ms/gpt-v-api-ref). - Region availability is currently limited to `SwitzerlandNorth`, `SwedenCentral`, `WestUS`, and `AustraliaEast` - Learn more about the known limitations of GPT-4 Turbo with Vision and other [frequently asked questions](/azure/ai-services/openai/faq#gpt-4-with-vision).
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md
Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz.
Note that the following neural voices are retired. -- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-ntts.md#get-the-container-image-with-docker-pull) and deploy the latest version. All requests with previous versions won't succeed starting from October 30, 2021.-- The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria."
+- The English (United Kingdom) voice `en-GB-MiaNeural` was retired on October 30, 2021. All service requests to `en-GB-MiaNeural` are redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-ntts.md#get-the-container-image-with-docker-pull) and deploy the latest version. All requests with previous versions won't succeed starting from October 30, 2021.
+- The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria."
+- The Chinese (Mandarin, Simplified) voice `zh-CN-XiaoxuanNeural` will be retired on February 29, 2024. All service requests to `zh-CN-XiaoxuanNeural` will be redirected to `zh-CN-XiaozhenNeural` automatically as of February 29, 2024. If you're using container Neural TTS, [download](speech-container-ntts.md#get-the-container-image-with-docker-pull) and deploy the latest version. All requests with previous versions won't succeed starting from February 29, 2024.
### Custom neural voice
ai-services Personal Voice How To Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-how-to-use.md
You can use the SSML via the [Speech SDK](./get-started-text-to-speech.md), [RES
## Reference documentation
-The API reference documentation is made available to approved customers. You can apply for access [here](https://aka.ms/customneural).
+> [!div class="nextstepaction"]
+> [Custom voice API specification - 2023-12-01-preview](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/Speech/TextToSpeech/preview/2023-12-01-preview/texttospeech.json/)
## Next steps
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
This article shows you how to use Azure CNI networking to create and use a virtu
## Configure networking
+For information on planning IP addressing for your AKS cluster, see [Plan IP addressing for your cluster](./azure-cni-overview.md#plan-ip-addressing-for-your-cluster).
+ # [**Portal**](#tab/configure-networking-portal) 1. Sign in to the [Azure portal](https://portal.azure.com/).
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
You can create and use an internal load balancer to restrict access to your appl
## Specify an IP address
-When you specify an IP address for the load balancer, the specified IP address must reside in the same subnet as the AKS cluster, but it can't already be assigned to a resource. For example, you shouldn't use an IP address in the range designated for the Kubernetes subnet within the AKS cluster.
+When you specify an IP address for the load balancer, the specified IP address must reside in the same virtual network as the AKS cluster, but it can't already be assigned to another resource in the virtual network. For example, you shouldn't use an IP address in the range designated for the Kubernetes subnet within the AKS cluster. Using an IP address that's already assigned to another resource in the same virtual network can cause issues with the load balancer.
You can use the [`az network vnet subnet list`][az-network-vnet-subnet-list] Azure CLI command or the [`Get-AzVirtualNetworkSubnetConfig`][get-azvirtualnetworksubnetconfig] PowerShell cmdlet to get the subnets in your virtual network.
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
To operate properly, each self-hosted gateway needs outbound connectivity on por
| Hostname of Azure Table Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<table-storage-account-name>.table.core.windows.net`) | | Endpoints for Azure Resource Manager | ✔️ | Optional<sup>3</sup> | Required endpoints are `management.azure.com`. | | Endpoints for Microsoft Entra integration | ✔️ | Optional<sup>4</sup> | Required endpoints are `<region>.login.microsoft.com` and `login.microsoftonline.com`. |
-| Endpoints for [Azure Application Insights integration](api-management-howto-app-insights.md) | Optional<sup>5</sup> | Optional<sup>5</sup> | Minimal required endpoints are:<ul><li>`rt.services.visualstudio.com:443`</li><li>`dc.services.visualstudio.com:443`</li><li>`{region}.livediagnostics.monitor.azure.com:443`</li></ul>Learn more in [Azure Monitor docs](../azure-monitor/app/ip-addresses.md#outgoing-ports) |
+| Endpoints for [Azure Application Insights integration](api-management-howto-app-insights.md) | Optional<sup>5</sup> | Optional<sup>5</sup> | Minimal required endpoints are:<ul><li>`rt.services.visualstudio.com:443`</li><li>`dc.services.visualstudio.com:443`</li><li>`{region}.livediagnostics.monitor.azure.com:443`</li></ul>Learn more in [Azure Monitor docs](../azure-monitor/ip-addresses.md#outgoing-ports) |
| Endpoints for [Event Hubs integration](api-management-howto-log-event-hubs.md) | Optional<sup>5</sup> | Optional<sup>5</sup> | Learn more in [Azure Event Hubs docs](../event-hubs/network-security.md) | | Endpoints for [external cache integration](api-management-howto-cache-external.md) | Optional<sup>5</sup> | Optional<sup>5</sup> | This requirement depends on the external cache that is being used |
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
Enable publishing the [developer portal](api-management-howto-developer-portal.m
You're not required to allow inbound requests from service tag `AzureLoadBalancer` for the Developer SKU, since only one compute unit is deployed behind it. However, inbound connectivity from `AzureLoadBalancer` becomes **critical** when scaling to a higher SKU, such as Premium, because failure of the health probe from load balancer then blocks all inbound access to the control plane and data plane. ## Application Insights
- If you enabled [Azure Application Insights](api-management-howto-app-insights.md) monitoring on API Management, allow outbound connectivity to the [telemetry endpoint](../azure-monitor/app/ip-addresses.md#outgoing-ports) from the VNet.
+ If you enabled [Azure Application Insights](api-management-howto-app-insights.md) monitoring on API Management, allow outbound connectivity to the [telemetry endpoint](../azure-monitor/ip-addresses.md#outgoing-ports) from the VNet.
## KMS endpoint
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
description: Significant updates to Azure Automation updated each month.
Previously updated : 12/23/2023 Last updated : 01/26/2024
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
+## January 2024
+
+### Public Preview: Azure Automation Runtime environment & support for Azure CLI commands in runbooks
+
+Azure Automation introduces [Runtime environment](runtime-environment-overview.md) (preview) that provides a simple and hassle-free experience for [updating scripts](quickstart-update-runbook-in-runtime-environment.md) to the latest language versions. It also provides complete control to configure the script execution environment, without worrying about conflicting module versions in a single Automation account. All existing runbooks are automatically available in the new Runtime environment experience with zero manual effort. You can navigate seamlessly between the old and new experience with a single click. [Learn more](manage-runtime-environment.md).
+
+Additionally, this feature enables support for [Azure CLI commands](quickstart-cli-support-powershell-runbook-runtime-environment.md) (preview) in PowerShell 7.2 runbooks. You can now use the rich command set of Azure CLI in Azure Automation runbooks to streamline management of Azure resources.
++ ## December 2023 ### Restore deleted Automation runbooks
azure-arc Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/deploy-marketplace.md
+
+ Title: "Deploy and manage applications from Azure Marketplace on Azure Arc-enabled Kubernetes clusters"
Last updated : 01/26/2024++
+description: "Learn how to discover Kubernetes applications in Azure Marketplace and deploy them to your Arc-enabled Kubernetes clusters."
++
+# Deploy and manage applications from Azure Marketplace on Azure Arc-enabled Kubernetes clusters
+
+[Azure Marketplace](/marketplace/azure-marketplace-overview) is an online store that contains thousands of IT software applications and services built by industry-leading technology companies. In Azure Marketplace, you can find, try, buy, and deploy the software and services that you need to build new solutions and manage your cloud infrastructure. The catalog includes solutions for different industries and technical areas, free trials, and consulting services from Microsoft partners.
+
+Included among these solutions are Kubernetes application-based container offers. These offers contain applications that can run on Azure Arc-enabled Kubernetes clusters, represented as [cluster extensions](conceptual-extensions.md). Deploying an offer from Azure Marketplace creates a new instance of the extension on your Arc-enabled Kubernetes cluster.
+
+This article shows you how to:
+
+- Discover applications that support Azure Arc-enabled Kubernetes clusters.
+- Purchase an application.
+- Deploy the application on your cluster.
+- Monitor usage and billing information.
+
+You can use Azure CLI or the Azure portal to perform these tasks.
+
+## Prerequisites
+
+Before you begin, make sure you have the following prerequisites:
+
+- An existing Azure Arc-enabled Kubernetes connected cluster, with at least one node of operating system and architecture type `linux/amd64`. If deploying [Flux (GitOps)](extensions-release.md#flux-gitops), you can use an ARM64-based cluster without a `linux/amd64` node.
+ - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
+- If using Azure CLI to review, deploy, and manage Azure Marketplace applications:
+ - The latest version of [Azure CLI](/cli/azure/install-azure-cli).
+ - The latest version of the `k8s-extension` Azure CLI extension. Install the extension by running `az extension add --name k8s-extension`. If the `k8s-extension` extension is already installed, make sure it's updated to the latest version by running `az extension update --name k8s-extension`.
+
+> [!NOTE]
+> This feature is currently supported only in the following regions:
+>
>- East US, East US 2, East US 2 EUAP, West US, West US 2, Central US, West Central US, South Central US, West Europe, North Europe, Canada Central, Southeast Asia, Australia East, Central India, Japan East, Korea Central, UK South, UK West, Germany West Central, France Central, East Asia, West US 3, Norway East, South Africa North, North Central US, Australia Southeast, Switzerland North, Japan West, South India
+
+## Discover Kubernetes applications that support Azure Arc-enabled clusters
+
+### [Azure portal](#tab/azure-portal)
+
+To discover Kubernetes applications in the Azure Marketplace from within the Azure portal:
+
+1. In the Azure portal, search for **Marketplace**. In the results, under **Services**, select **Marketplace**.
+1. From **Marketplace**, you can search for an offer or publisher directly by name, or you can browse all offers. To find Kubernetes application offers, select **Containers** from the **Categories** section in the left menu.
+
+ > [!IMPORTANT]
+ > The **Containers** category includes both Kubernetes applications and standalone container images. Be sure to select only Kubernetes application offers when following these steps. Container images have a different deployment process, and generally can't be deployed on Arc-enabled Kubernetes clusters.
+
+ :::image type="content" source="media/deploy-marketplace/marketplace-containers.png" alt-text="Screenshot of Azure Marketplace showing the Containers menu item." lightbox="media/deploy-marketplace/marketplace-containers.png":::
+
+1. You'll see several Kubernetes application offers displayed on the page. To view all of the Kubernetes application offers, select **See more**.
+
+ :::image type="content" source="media/deploy-marketplace/marketplace-see-more.png" alt-text="Screenshot showing the See more link for the Containers category in Azure Marketplace." lightbox="media/deploy-marketplace/marketplace-see-more.png":::
+
+1. Alternatively, you can search for a specific `publisherId` to view that publisher's Kubernetes applications in Azure Marketplace. For details on how to find publisher IDs, see the Azure CLI tab for this article.
+
+ :::image type="content" source="media/deploy-marketplace/marketplace-search-by-publisher.png" alt-text="Screenshot showing the option to search by publisher in Azure Marketplace." lightbox="media/deploy-marketplace/marketplace-search-by-publisher.png":::
+
+Once you find an application that you want to deploy, move on to the next section.
+
+### [Azure CLI](#tab/azure-cli)
+
+You can use Azure CLI to get a list of extensions, including Azure Marketplace applications, that can be deployed on Azure Arc-enabled connected clusters. To do so, run this command, providing the name of your connected cluster and the resource group where the cluster is located.
+
+```azurecli-interactive
+az k8s-extension extension-types list-by-cluster --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName>
+```
+
+The command returns a list of extension types that can be deployed on the connected cluster, similar to the following example.
+
+```json
+"id": "/subscriptions/{sub}/resourceGroups/{rg} /providers/Microsoft.Kubernetes/connectedClusters/{clustername} /providers/Microsoft.KubernetesConfiguration/extensiontypes/contoso",
+"name": "contoso",
+"type": "Microsoft.KubernetesConfiguration/extensionTypes",
+"properties": {
+ "extensionType": "contoso",
+ "description": "Contoso extension",
+ "isSystemExtension": false,
+ "publisher": "contoso",
+ "isManagedIdentityRequired": false,
+ "supportedClusterTypes": [
+ "managedclusters",
+ "connectedclusters"
+ ],
+ "supportedScopes": {
+ "defaultScope": "namespace",
+ "clusterScopeSettings": {
+ "allowMultipleInstances": false,
+ "defaultReleaseNamespace": null
+ }
+ },
+ "planInfo": {
+ "offerId": "contosoOffer",
+ "planId": "contosoPlan",
+ "publisherId": "contoso"
+ }
+ }
+}
+```
+
+When you find an application that you want to deploy, note the following values from the response received: `planId`, `publisherId`, `offerID`, and `extensionType`. You'll need these values to accept the application's terms and deploy the application.
+++
+## Deploy a Kubernetes application
+
+### [Azure portal](#tab/azure-portal)
+
+Once you've identified an offer you want to deploy, follow these steps:
+
+1. In the **Plans + Pricing** tab, review the options. If there are multiple plans available, find the one that meets your needs. Review the terms on the page to make sure they're acceptable, and then select **Create**.
+
+ :::image type="content" source="media/deploy-marketplace/marketplace-plans-pricing.png" alt-text="Screenshot of the Plans + Pricing page for a Kubernetes offer in Azure Marketplace." lightbox="media/deploy-marketplace/marketplace-plans-pricing.png":::
+
+1. Select the resource group and Arc-enabled cluster to which you want to deploy the application.
+
+ :::image type="content" source="media/deploy-marketplace/marketplace-select-cluster.png" alt-text="Screenshot showing the option to select a resource group and cluster for the Marketplace offer.":::
+
+1. Complete all pages of the deployment wizard to specify all configuration options that the application requires.
+
+ :::image type="content" source="media/deploy-marketplace/marketplace-configuration.png" alt-text="Screenshot showing configuration options for an Azure Marketplace offer.":::
+
+1. When you're finished, select **Review + Create**, then select **Create** to deploy the offer.
+
+### [Azure CLI](#tab/azure-cli)
+
+#### Accept terms and agreements
+
+Before you can deploy a Kubernetes application, you need to accept its terms and agreements. Be sure to read these terms carefully so that you understand costs and any other requirements.
+
+To view the details of the terms, run the following command, providing the values for `offerID`, `planID`, and `publisherID`:
+
+```azurecli-interactive
+az vm image terms show --offer <offerID> --plan <planId> --publisher <publisherId>
+```
+
+To accept the terms, run the following command, using the same values for `offerID`, `planID`, and `publisherID`.
+
+```azurecli-interactive
+az vm image terms accept --offer <offerID> --plan <planId> --publisher <publisherId>
+```
+
+> [!NOTE]
+> Although this command is for VMs, it also works for containers, including Arc-enabled Kubernetes clusters. For more information, see the [az vm image terms](/cli/azure/vm/image/terms) reference.
+
+#### Deploy the application
+
+To deploy the application (extension) through Azure CLI, follow the steps outlined in [Deploy and manage Azure Arc-enabled Kubernetes cluster extensions](extensions.md). An example command might look like this:
+
+```azurecli-interactive
+az k8s-extension create --name <offerID> --extension-type <extensionType> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --plan-name <planId> --plan-product <offerID> --plan-publisher <publisherId>
+```
+++
+## Verify the deployment
+
+Deploying an offer from Azure Marketplace creates a new extension instance on your Arc-enabled Kubernetes cluster. You can verify that the deployment was successful by confirming the extension is running successfully.
+
+### [Azure portal](#tab/azure-portal)
+
+Verify the deployment by navigating to the cluster where you recently installed the extension, then selecting **Extensions**, where you'll see the extension status.
++
+If the deployment was successful, the **Status** will be **Succeeded**. If the status is **Creating**, the deployment is still in progress. Wait a few minutes then check again.
+
+If the deployment fails, see [Troubleshoot the failed deployment of a Kubernetes application offer](/troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer).
+
+### [Azure CLI](#tab/azure-cli)
+
+Verify the deployment by using the following command to list the extensions that are already running or being deployed on your cluster:
+
+```azurecli-interactive
+az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters
+```
+
+If the deployment was successful, `provisioningState` is `Succeeded`. If `provisioningState` is `Creating`, the deployment is still in progress. Wait a few minutes then check again.
+
+If the deployment fails, see [Troubleshoot the failed deployment of a Kubernetes application offer](/troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer).
+
+To view the extension instance from the cluster, run the following command:
+
+```azurecli-interactive
+az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters
+```
+++
+## Monitor billing and usage information
+
+You can monitor billing and usage information for a deployed extension in the Azure portal.
+
+1. In the Azure portal, navigate to your cluster's resource group.
+
+1. Select **Cost Management** > **Cost analysis**. Under **Product**, you can see a cost breakdown for the plan that you selected.
+
+ :::image type="content" source="media/deploy-marketplace/extension-cost-analysis.png" alt-text="Screenshot of the Azure portal page for a resource group, with billing information broken down by offer plan." lightbox="media/deploy-marketplace/extension-cost-analysis.png":::
+
+## Remove an application
+
+You can delete a purchased plan for a Kubernetes offer by deleting the extension instance on the cluster.
+
+### [Azure portal](#tab/azure-portal)
+
+To delete the extension instance in the Azure portal, select **Extensions** within your cluster. Select the application you want to remove, then select **Uninstall**.
++
+### [Azure CLI](#tab/azure-cli)
+
+The following command deletes an extension from the cluster:
+
+```azurecli-interactive
+az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters
+```
+++
+## Troubleshooting
+
+For help with resolving issues, see [Troubleshoot the failed deployment of a Kubernetes application offer](/troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer).
+
+## Next steps
+
+- Learn about [extensions for Arc-enabled Kubernetes](conceptual-extensions.md).
+- Use our quickstart to [connect a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md).
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
To learn just about deploying an isolated worker model project to Azure, see [De
There are two modes in which you can run your .NET class library functions: either [in the same process](functions-dotnet-class-library.md) as the Functions host runtime (_in-process_) or in an isolated worker process. When your .NET functions run in an isolated worker process, you can take advantage of the following benefits:
-+ **Fewer conflicts:** Because your functions run in a separate process, assemblies used in your app don't conflict with different version of the same assemblies used by the host process.
++ **Fewer conflicts:** Because your functions run in a separate process, assemblies used in your app don't conflict with different versions of the same assemblies used by the host process. + **Full control of the process**: You control the start-up of the app, which means that you can manage the configurations used and the middleware started. + **Standard dependency injection:** Because you have full control of the process, you can use current .NET behaviors for dependency injection and incorporating middleware into your function app.
-+ **.NET version flexibility:** Running outside of the host process means that your functions can on versions of .NET not natively supported by the Functions runtime, including the .NET Framework.
++ **.NET version flexibility:** Running outside of the host process means that your functions can run on versions of .NET not natively supported by the Functions runtime, including the .NET Framework. If you have an existing C# function app that runs in-process, you need to migrate your app to take advantage of these benefits. For more information, see [Migrate .NET apps from the in-process model to the isolated worker model][migrate].
A function can have zero or more input bindings that can pass data to a function
### Output bindings
-To write to an output binding, you must apply an output binding attribute to the function method, which define how to write to the bound service. The value returned by the method is written to the output binding. For example, the following example writes a string value to a message queue named `output-queue` by using an output binding:
+To write to an output binding, you must apply an output binding attribute to the function method, which defines how to write to the bound service. The value returned by the method is written to the output binding. For example, the following example writes a string value to a message queue named `output-queue` by using an output binding:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_output_binding" :::
The following snippet shows this configuration in the context of a project file:
```xml <ItemGroup> <FrameworkReference Include="Microsoft.AspNetCore.App" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.19.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.20.1" />
<PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" /> </ItemGroup> ```
azure-functions Durable Functions Isolated Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-isolated-create-first-csharp.md
Add the following to your app project:
```xml <ItemGroup>
- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.10.0" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.DurableTask" Version="1.0.0" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.0.13" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.7.0" OutputItemType="Analyzer" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.20.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.DurableTask" Version="1.1.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.1.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" OutputItemType="Analyzer" />
</ItemGroup> ``` ## Add functions to the app
Add the following to your app project:
```xml <ItemGroup>
- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.10.0" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.DurableTask" Version="1.0.0" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.0.13" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.7.0" OutputItemType="Analyzer" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.20.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.DurableTask" Version="1.1.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.1.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" OutputItemType="Analyzer" />
</ItemGroup> ``` ## Add functions to the app
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md
This extension is available by installing the [Microsoft.Azure.WebJobs.Extension
Using the .NET CLI: ```dotnetcli
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage.Queues --version 5.0.0
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage.Queues
``` [!INCLUDE [functions-bindings-storage-extension-v5-tables-note](../../includes/functions-bindings-storage-extension-v5-tables-note.md)]
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Using the .NET CLI:
```dotnetcli # Install the Azure Tables extension
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Tables --version 1.0.0
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Tables
# Update the combined Azure Storage extension (to a version which no longer includes Azure Tables)
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage
``` [!INCLUDE [functions-bindings-storage-extension-v5-tables-note](../../includes/functions-bindings-storage-extension-v5-tables-note.md)]
azure-functions Migrate Cosmos Db Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md
Update your `.csproj` project file to use the latest extension version for your
<OutputType>Exe</OutputType> </PropertyGroup> <ItemGroup>
- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.14.1" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.4.1" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.10.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.20.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.5.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" />
</ItemGroup> <ItemGroup> <None Update="host.json">
azure-functions Migrate Service Bus Version 4 Version 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-service-bus-version-4-version-5.md
Update your `.csproj` project file to use the latest extension version for your
<OutputType>Exe</OutputType> </PropertyGroup> <ItemGroup>
- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.14.1" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.ServiceBus" Version="5.14.1" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.10.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.20.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.ServiceBus" Version="5.15.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" />
</ItemGroup> <ItemGroup> <None Update="host.json">
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
Application Insights (part of Azure Monitor) enables the same features in both A
**SDK endpoint modifications** – In order to send data from Application Insights to an Azure Government region, you'll need to modify the default endpoint addresses that are used by the Application Insights SDKs. Each SDK requires slightly different modifications, as described in [Application Insights overriding default endpoints](/previous-versions/azure/azure-monitor/app/create-new-resource#override-default-endpoints).
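As a rough illustration only, the following minimal sketch shows how an endpoint override can be expressed through the connection string in the Application Insights JavaScript (web) SDK; the endpoint values are placeholders, not the actual Azure Government addresses, which you should take from the linked guidance.

```javascript
// Hypothetical sketch: pointing the Application Insights JavaScript (web) SDK
// at non-default endpoints through the connection string. The endpoint values
// below are placeholders, not the real Azure Government addresses.
const { ApplicationInsights } = require("@microsoft/applicationinsights-web");

const appInsights = new ApplicationInsights({
  config: {
    connectionString:
      "InstrumentationKey=<your-instrumentation-key>;" +
      "IngestionEndpoint=<azure-government-ingestion-endpoint>;" +
      "LiveEndpoint=<azure-government-live-metrics-endpoint>",
  },
});
appInsights.loadAppInsights();
```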
-**Firewall exceptions** – Application Insights uses several IP addresses. You might need to know these addresses if the app that you're monitoring is hosted behind a firewall. For more information, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) from where you can download Azure Government IP addresses.
+**Firewall exceptions** – Application Insights uses several IP addresses. You might need to know these addresses if the app that you're monitoring is hosted behind a firewall. For more information, see [IP addresses used by Azure Monitor](../azure-monitor/ip-addresses.md) from where you can download Azure Government IP addresses.
>[!NOTE] >Although these addresses are static, it's possible that we'll need to change them from time to time. All Application Insights traffic represents outbound traffic except for availability monitoring and webhooks, which require inbound firewall rules.
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
# JavaScript/TypeScript REST SDK Developers Guide (preview)
-The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searching using the Azure Maps [Search service], like searching for an address, fuzzy searching for a point of interest (POI), and searching by coordinates. This article helps you get started building location-aware applications that incorporate the power of Azure Maps.
+The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searching using the Azure Maps [Search service], like searching for an address, searching for the boundary of a city or country, and searching by coordinates. This article helps you get started building location-aware applications that incorporate the power of Azure Maps.
> [!NOTE] > Azure Maps JavaScript SDK supports the LTS version of Node.js. For more information, see [Node.js Release Working Group].
const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY);
const client = MapsSearch(credential); ```
-## Fuzzy search an entity
+### Using a Shared Access Signature (SAS) Token Credential
-The following code snippet demonstrates how, in a simple console application, to import the `azure-maps-search` package and perform a fuzzy search on "Starbucks" near Seattle:
+Shared access signature (SAS) tokens are authentication tokens created using the JSON Web token (JWT) format and are cryptographically signed to prove authentication for an application to the Azure Maps REST API.
-```JavaScript
-
-const MapsSearch = require("@azure-rest/maps-search").default;
-const { isUnexpected } = require("@azure-rest/maps-search");
-const { AzureKeyCredential } = require("@azure/core-auth");
-require("dotenv").config();
-
-async function main() {
- // Authenticate with Azure Map Subscription Key
- const credential = new AzureKeyCredential(
- process.env. MAPS_SUBSCRIPTION_KEY
- );
- const client = MapsSearch(credential);
-
- // Setup the fuzzy search query
- const response = await client.path("/search/fuzzy/{format}", "json").get({
- queryParameters: {
- query: "Starbucks",
- lat: 47.61559,
- lon: -122.33817,
- countrySet: ["US"],
- },
- });
-
- // Handle the error response
- if (isUnexpected(response)) {
- throw response.body.error;
- }
- // Log the result
- console.log(`Starbucks search result nearby Seattle:`);
- response.body.results.forEach((result) => {
- console.log(`\
- * ${result.address.streetNumber} ${result.address.streetName}
- ${result.address.municipality} ${result.address.countryCode} ${
- result.address.postalCode
- }
- Coordinate: (${result.position.lat.toFixed(4)}, ${result.position.lon.toFixed(4)})\
- `);
- });
-}
+You can get a SAS token by using the [`AzureMapsManagementClient.accounts.listSas`][listSas] method. First, follow the section [Create and authenticate an `AzureMapsManagementClient`][azureMapsManagementClient] to set up the management client.
-main().catch((err) => {
- console.error(err);
-});
+Second, follow [Managed identities for Azure Maps][managedIdentity] to create a managed identity for your Azure Maps account. Copy the principal ID (object ID) of the managed identity.
+Next, install the [Azure Core Authentication Package] to use `AzureSASCredential`:
+```bash
+npm install @azure/core-auth
```
-This code snippet shows how to use the `MapsSearch` method from the Azure Maps Search client library to create a `client` object with your Azure credentials. You can use either your Azure Maps subscription key or the [Microsoft Entra credential](#using-a-microsoft-entra-credential) from the previous section. The `path` parameter specifies the API endpoint, which is "/search/fuzzy/{format}" in this case. The `get` method sends an HTTP GET request with the query parameters, such as `query`, `coordinates`, and `countryFilter`. The query searches for Starbucks locations near Seattle in the US. The SDK returns the results as a FuzzySearchResult object and writes them to the console. For more information, see the FuzzySearchRequest documentation.
-
-Run `search.js` with Node.js:
-
-```powershell
-node search.js
+Finally, you can use the SAS token to authenticate the client:
+
+```javascript
+ const MapsSearch = require("@azure-rest/maps-search").default;
+ const { AzureSASCredential } = require("@azure/core-auth");
+ const { DefaultAzureCredential } = require("@azure/identity");
+ const { AzureMapsManagementClient } = require("@azure/arm-maps");
+
+ const subscriptionId = "<subscription ID of the map account>";
+ const resourceGroupName = "<resource group name of the map account>";
+ const accountName = "<name of the map account>";
+ const mapsAccountSasParameters = {
+ start: "<start time in ISO format>", // e.g. "2023-11-24T03:51:53.161Z"
+ expiry: "<expiry time in ISO format>", // maximum value is start + 1 day
+ maxRatePerSecond: 500,
+ principalId: "<principal ID (object ID) of the managed identity>",
+ signingKey: "primaryKey",
+ };
+ const credential = new DefaultAzureCredential();
+ const managementClient = new AzureMapsManagementClient(credential, subscriptionId);
+ const {accountSasToken} = await managementClient.accounts.listSas(
+ resourceGroupName,
+ accountName,
+ mapsAccountSasParameters
+ );
+ if (accountSasToken === undefined) {
+ throw new Error("No accountSasToken was found for the Maps Account.");
+ }
+ const sasCredential = new AzureSASCredential(accountSasToken);
+ const client = MapsSearch(sasCredential);
```
-## Search an Address
+## Geocoding
-The searchAddress query can be used to get the coordinates of an address. Modify the `search.js` from the sample as follows:
+The following code snippet demonstrates how, in a simple console application, to import the `@azure-rest/maps-search` package and get the coordinates of an address by using the [GetGeocoding] query:
```JavaScript const MapsSearch = require("@azure-rest/maps-search").default;
async function main() {
); const client = MapsSearch(credential);
- const response = await client.path("/search/address/{format}", "json").get({
+ const response = await client.path("/geocode", "json").get({
queryParameters: { query: "1301 Alaskan Way, Seattle, WA 98101, US", },
async function main() {
if (isUnexpected(response)) { throw response.body.error; }
- const { lat, lon } = response.body.results[0].position;
+ const [ lon, lat ] = response.body.features[0].geometry.coordinates;
console.log(`The coordinate is: (${lat}, ${lon})`); }
main().catch((err) => {
```
-The results are ordered by confidence score and in this example only the first result returned with be displayed to the screen.
+This code snippet shows how to use the `MapsSearch` method from the Azure Maps Search client library to create a `client` object with your Azure credentials. You can use either your Azure Maps subscription key or the [Microsoft Entra credential](#using-a-microsoft-entra-credential). The `path` parameter specifies the API endpoint, which in this case is "/geocode". The `get` method sends an HTTP GET request with the query parameters. The query searches for the coordinates of "1301 Alaskan Way, Seattle, WA 98101, US". The SDK returns the results as a [GeocodingResponseOutput] object and writes them to the console. The results are ordered by confidence score; in this example, only the first result is displayed. For more information, see [GetGeocoding].
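For clarity, the updated snippet assembled end to end might look like the following minimal sketch. It assumes, as in the earlier examples, that the subscription key is available in the `MAPS_SUBSCRIPTION_KEY` environment variable, and it calls the `/geocode` path without the legacy format argument:

```javascript
const MapsSearch = require("@azure-rest/maps-search").default;
const { isUnexpected } = require("@azure-rest/maps-search");
const { AzureKeyCredential } = require("@azure/core-auth");
require("dotenv").config();

async function main() {
  // Authenticate with the Azure Maps subscription key (loaded from a .env file)
  const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY);
  const client = MapsSearch(credential);

  // Ask the geocoding API for the coordinates of a street address
  const response = await client.path("/geocode").get({
    queryParameters: { query: "1301 Alaskan Way, Seattle, WA 98101, US" },
  });

  // Bail out on an error response
  if (isUnexpected(response)) {
    throw response.body.error;
  }

  // GeoJSON coordinates are ordered [longitude, latitude]
  const [lon, lat] = response.body.features[0].geometry.coordinates;
  console.log(`The coordinate is: (${lat}, ${lon})`);
}

main().catch((err) => {
  console.error(err);
});
```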
-## Batch reverse search
-
-Azure Maps Search also provides some batch query methods. These methods return Long Running Operations (LRO) objects. The requests might not return all the results immediately, so you can use the poller to wait until completion. The following example demonstrates how to call batched reverse search method:
+Run `search.js` with Node.js:
-```JavaScript
- const batchItems = await createBatchItems([
- // This is an invalid query
- { query: [148.858561, 2.294911] },
- { query: [47.61010, -122.34255] },
- { query: [47.61559, -122.33817], radius: 5000 },
- ]);
- const initialResponse = await client.path("/search/address/reverse/batch/{format}", "json").post({
- body: { batchItems },
- });
+```powershell
+node search.js
```
-In this example, three queries are passed into the helper function `createBatchItems` that is imported from `@azure-rest/maps-search`. This helper function composed the body of the batch request. The first query is invalid, see [Handing failed requests](#handing-failed-requests) for an example showing how to handle the invalid query.
-
-Use the `getLongRunningPoller` method with the `initialResponse` to get the poller. Then you can use `pollUntilDone` to get the final result:
-
-```JavaScript
- const poller = getLongRunningPoller(client, initialResponse);
- const response = await poller.pollUntilDone();
- logResponseBody(response.body);
-```
+## Batch reverse geocoding
-A common scenario for LRO is to resume a previous operation later. Do that by serializing the pollerΓÇÖs state with the `toString` method, and rehydrating the state with a new poller using the `resumeFrom` option in `getLongRunningPoller`:
+Azure Maps Search also provides some batch query methods. The following example demonstrates how to call the batch reverse-geocoding method:
```JavaScript
- const serializedState = poller.toString();
- const rehydratedPoller = getLongRunningPoller(client, initialResponse, {
- resumeFrom: serializedState,
+ const batchItems = [
+ // This is an invalid query
+ { coordinates: [2.294911, 148.858561] },
+ {
+ coordinates: [-122.34255, 47.6101],
+ },
+ { coordinates: [-122.33817, 47.6155] },
+ ];
+ const response = await client.path("/reverseGeocode:batch").post({
+ body: { batchItems },
  });
-
- const resumeResponse = await rehydratedPoller.pollUntilDone();
- logResponseBody(response);
```
+In this example, three coordinates are included in the `batchItems` of the request body. The first item is invalid; see [Handling failed requests](#handling-failed-requests) for an example that shows how to handle the invalid item.
+ Once you get the response, you can log it: ```JavaScript
function logResponseBody(resBody) {
console.log(`Error in ${idx + 1} request: ${response.error.message}`); } else { console.log(`Results in ${idx + 1} request:`);
- response.addresses.forEach(({ address }) => {
- console.log(` ${address.freeformAddress}`);
+ response.features.forEach((feature) => {
+ console.log(` ${feature.properties.address.freeformAddress}`);
}); } });
function logResponseBody(resBody) {
```
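For reference, a complete `logResponseBody` helper consistent with the fragments above might look like the following sketch; iterating `resBody.batchItems` directly is an assumption about the batch response shape, and only the per-item `error`/`features` handling comes from the snippet itself.

```javascript
// A minimal sketch of the logging helper referenced above. Iterating
// resBody.batchItems directly is an assumption about the batch response shape;
// the per-item error/features handling is taken from the fragments shown.
function logResponseBody(resBody) {
  resBody.batchItems.forEach((response, idx) => {
    if (response.error) {
      // This batch item failed; log the error message
      console.log(`Error in ${idx + 1} request: ${response.error.message}`);
    } else {
      // This batch item succeeded; log every address that was found
      console.log(`Results in ${idx + 1} request:`);
      response.features.forEach((feature) => {
        console.log(`  ${feature.properties.address.freeformAddress}`);
      });
    }
  });
}
```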
-### Handing failed requests
+### Handling failed requests
Handle failed requests by checking for the `error` property in the response batch item. See the `logResponseBody` function in the complete batch reverse-geocoding example that follows.
The complete code for the reverse address batch search example:
```JavaScript const MapsSearch = require("@azure-rest/maps-search").default,
- { createBatchItems, getLongRunningPoller } = require("@azure-rest/maps-search");
+ { isUnexpected } = require("@azure-rest/maps-search");
const { AzureKeyCredential } = require("@azure/core-auth"); require("dotenv").config();
async function main() {
const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY); const client = MapsSearch(credential);
- const batchItems = createBatchItems([
+ const batchItems = [
// This is an invalid query
- { query: [148.858561, 2.294911] },
+ { coordinates: [2.294911, 148.858561] },
{
- query: [47.6101, -122.34255],
+ coordinates: [-122.34255, 47.6101],
},
- { query: [47.6155, -122.33817], radius: 5000 },
- ]);
+ { coordinates: [-122.33817, 47.6155] },
+ ];
- const initialResponse = await client.path("/search/address/reverse/batch/{format}", "json").post({
+ const response = await client.path("/reverseGeocode:batch").post({
body: { batchItems }, });
- const poller = getLongRunningPoller(client, initialResponse);
- const response = await poller.pollUntilDone();
- logResponseBody(response.body);
+ if (isUnexpected(response)) {
+ throw response.body.error;
+ }
- const serializedState = poller.toString();
- const rehydratedPoller = getLongRunningPoller(client, initialResponse, {
- resumeFrom: serializedState,
- });
- const resumeResponse = await rehydratedPoller.pollUntilDone();
logResponseBody(response.body); }
function logResponseBody(resBody) {
console.log(`Error in ${idx + 1} request: ${response.error.message}`); } else { console.log(`Results in ${idx + 1} request:`);
- response.addresses.forEach(({ address }) => {
- console.log(` ${address.freeformAddress}`);
+ response.features.forEach((feature) => {
+ console.log(` ${feature.properties.address.freeformAddress}`);
}); } });
-}
+}
main().catch(console.error); ```
+## Use V1 SDK
+
+We're working to make all V1 features available in V2. Until then, install the following V1 SDK packages if needed:
+
+```bash
+npm install @azure-rest/maps-search-v1@npm:@azure-rest/maps-search@^1.0.0
+npm install @azure-rest/maps-search-v2@npm:@azure-rest/maps-search@^2.0.0
+```
+
+Then, you can import the two packages:
+
+```javascript
+const MapsSearchV1 = require("@azure-rest/maps-search-v1").default;
+const MapsSearchV2 = require("@azure-rest/maps-search-v2").default;
+```
+
+The following example demonstrates creating a function that accepts an address and searches for POIs around it. Use the V2 SDK to get the coordinates of the address (`/geocode`) and the V1 SDK to search for POIs around it (`/search/nearby`).
+
+```javascript
+const MapsSearchV1 = require("@azure-rest/maps-search-v1").default;
+const MapsSearchV2 = require("@azure-rest/maps-search-v2").default;
+const { AzureKeyCredential } = require("@azure/core-auth");
+const { isUnexpected: isUnexpectedV1 } = require("@azure-rest/maps-search-v1");
+const { isUnexpected: isUnexpectedV2 } = require("@azure-rest/maps-search-v2");
+require("dotenv").config();
+
+/** Initialize the MapsSearchClient */
+const clientV1 = MapsSearchV1(new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY));
+const clientV2 = MapsSearchV2(new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY));
+
+async function searchNearby(address) {
+ /** Make a request to the geocoding API */
+ const geocodeResponse = await clientV2
+ .path("/geocode")
+ .get({ queryParameters: { query: address } });
+ /** Handle error response */
+ if (isUnexpectedV2(geocodeResponse)) {
+ throw geocodeResponse.body.error;
+ }
+
+ const [lon, lat] = geocodeResponse.body.features[0].geometry.coordinates;
+
+ /** Make a request to the search nearby API */
+ const nearByResponse = await clientV1.path("/search/nearby/{format}", "json").get({
+ queryParameters: { lat, lon },
+ });
+ /** Handle error response */
+ if (isUnexpectedV1(nearByResponse)) {
+ throw nearByResponse.body.error;
+ }
+ /** Log response body */
+ for (const result of nearByResponse.body.results) {
+ console.log(
+ `${result.poi ? result.poi.name + ":" : ""} ${result.address.freeformAddress}. (${
+ result.position.lat
+ }, ${result.position.lon})\n`
+ );
+ }
+}
+
+async function main(){
+ await searchNearby("15127 NE 24th Street, Redmond, WA 98052");
+}
+
+main().catch((err) => {
+ console.log(err);
+})
+```
+ ## Additional information - The [Azure Maps Search client library for JavaScript/TypeScript][JS-SDK].
main().catch(console.error);
[Azure Core Authentication Package]: /javascript/api/@azure/core-auth/ [Azure Identity library]: /javascript/api/overview/azure/identity-readme [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[DefaultAzureCredential]: https://github.com/Azure/azure-sdk-for-js/tree/@azure/maps-search_1.0.0-beta.1/sdk/identity/identity#defaultazurecredential
+[DefaultAzureCredential]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/identity/identity#defaultazurecredential
+[azureMapsManagementClient]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/arm-maps#create-and-authenticate-a-azuremapsmanagementclient
+[GetGeocoding]: /javascript/api/@azure-rest/maps-search/getgeocoding
+[GeocodingResponseOutput]: /javascript/api/@azure-rest/maps-search/geocodingresponseoutput
[dotenv]: https://github.com/motdotla/dotenv#readme
-[FuzzySearchRequest]: /javascript/api/@azure-rest/maps-search/fuzzysearch
-[FuzzySearchResult]: /javascript/api/@azure-rest/maps-search/searchfuzzysearch200response
[Host a daemon on non-Azure resources]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources [js geolocation package]: https://www.npmjs.com/package/@azure-rest/maps-geolocation [js geolocation readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-geolocation-rest/README.md
-[js geolocation sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-geolocation-rest/samples/v1-beta
+[js geolocation sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-geolocation-rest/samples/
[js render package]: https://www.npmjs.com/package/@azure-rest/maps-render [js render readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-render-rest/README.md
-[js render sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-render-rest/samples/v1-beta
+[js render sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-render-rest/samples/
[js route package]: https://www.npmjs.com/package/@azure-rest/maps-route [js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md
-[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
+[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/
[JS-SDK]: /javascript/api/@azure-rest/maps-search
+[listSas]: /javascript/api/%40azure/arm-maps/accounts#@azure-arm-maps-accounts-listsas
+[managedIdentity]: https://techcommunity.microsoft.com/t5/azure-maps-blog/managed-identities-for-azure-maps/ba-p/3666312
[Node.js Release Working Group]: https://github.com/nodejs/release#release-schedule [Node.js]: https://nodejs.org/en/download/ [search package]: https://www.npmjs.com/package/@azure-rest/maps-search [search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search-rest/README.md
-[search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search-rest/samples/v2-beta
-[Search service]: /rest/api/maps/search?view=rest-maps-1.0
-[searchAddress]: /javascript/api/@azure-rest/maps-search/searchaddress
+[search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search-rest/samples/
+[Search service]: /rest/api/maps/search
[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (latest)
+### [3.1.1] (January 26, 2024)
+
+#### New features (3.1.1)
+
+- Added a new option, `enableAccessibilityLocationFallback`, to enable or disable reverse-geocoding API fallback for accessibility (screen reader).
+
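As a rough illustration only (not part of the release notes), the new option might be set when the map control is constructed, assuming it's exposed as an `atlas.Map` option:

```javascript
// Hypothetical sketch: the option name comes from the release note above, but
// exposing it as an atlas.Map constructor option is an assumption.
const map = new atlas.Map("myMap", {
  center: [-122.33, 47.6],
  zoom: 12,
  // Turn off the reverse-geocoding fallback used for screen-reader announcements
  enableAccessibilityLocationFallback: false,
  authOptions: {
    authType: "subscriptionKey",
    subscriptionKey: "<your-azure-maps-key>",
  },
});
```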
+#### Bug fixes (3.1.1)
+
+- Resolved an issue where ApplicationInsights v3.0.5 was potentially sending a large number of requests.
+ ### [3.1.0] (January 12, 2024) #### New features (3.1.0)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.1.1]: https://www.npmjs.com/package/azure-maps-control/v/3.1.1
[3.1.0]: https://www.npmjs.com/package/azure-maps-control/v/3.1.0 [3.0.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.3 [3.0.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.2
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
Try the following steps to resolve the problem:
1. Try running the query in Azure Monitor Logs, and fix any syntax issues. 2. If your query syntax is valid, check the connection to the service. - Flush the DNS cache on your local machine, by opening a command prompt and running the following command: `ipconfig /flushdns`, and then check again. If you still get the same error message, try the next step.
- - Copy and paste this URL into the browser: [https://api.loganalytics.io/v1/version](https://api.loganalytics.io/v1/version). If you get an error, contact your IT administrator to allow the IP addresses associated with **api.loganalytics.io** listed [here](../app/ip-addresses.md#application-insights-and-log-analytics-apis).
+ - Copy and paste this URL into the browser: [https://api.loganalytics.io/v1/version](https://api.loganalytics.io/v1/version). If you get an error, contact your IT administrator to allow the IP addresses associated with **api.loganalytics.io** listed [here](../ip-addresses.md#application-insights-and-log-analytics-apis).
## Next steps
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md
If you can see a fired alert in the portal, but its configured action did not tr
1. **Have the source IP addresses been blocked?**
- Add the [IP addresses](../app/ip-addresses.md) that the webhook is called from to your allowlist.
+ Add the [IP addresses](../ip-addresses.md) that the webhook is called from to your allowlist.
1. **Does your webhook endpoint work correctly?**
azure-monitor Test Action Group Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/test-action-group-errors.md
This article describes troubleshooting steps for error messages you may get when using the test action group feature.
+> [!NOTE]
+> Azure Action Groups doesn't support the use of private links today. It's recommended that customers use public network access to avoid any issues. For other networking issues, ensure you're properly using the [Action Group Service Tag](./../../virtual-network/service-tags-overview.md).
+ ## Troubleshooting error codes for actions The error messages in this section are related to these **actions**:
The error messages in this section are related to these **actions**:
| | | |HTTP 400: The \<action\> returned a 'bad request' error. |Check the alert payload received on your endpoint, and make sure the endpoint can process the request successfully. |HTTP 400: The \<action\> couldn't be triggered because this alert type doesn't support the common alert schema. |1. Check if the alert type supports common alert schema.</br>2. Change the "Enable the common alert schema" in the action group action to "No" and retry.
-|HTTP 400: The \<action\> could not be triggered because the payload is empty or invalid. | Check if the payload is valid, and if it's included as part of the request.
-|HTTP 400: The \<action\> could not be triggered because Microsoft Entra auth is enabled but no auth context provided in the request. | 1. Check your Secure Webhook action settings.</br>2. Check your Microsoft Entra configuration. For more information, see [action groups](action-groups.md). |
+|HTTP 400: The \<action\> couldn't be triggered because the payload is empty or invalid. | Check if the payload is valid, and included as part of the request.
+|HTTP 400: The \<action\> couldn't be triggered because Microsoft Entra auth is enabled but no auth context provided in the request. | 1. Check your Secure Webhook action settings.</br>2. Check your Microsoft Entra configuration. For more information, see [action groups](action-groups.md). |
|HTTP 400: ServiceNow returned error: No such host is known | Check your ServiceNow host url to make sure it's valid and retry. For more information, see [Connect ServiceNow with IT Service Management Connector](./itsmc-connections-servicenow.md) | |</br>HTTP 401: The \<action\> returned an "Unauthorized" error.</br>HTTP 401: The request was rejected by the \<action\> endpoint. Make sure you have the required authorization. | 1. Check if the credential in the request is present and valid.</br>2. Check if your endpoint correctly validates the credentials from the request. |
-|</br>HTTP 403: The \<action\> returned a "Forbidden" response.</br>HTTP 403: Couldn't trigger the \<action\>. Make sure you have the required authorization.</br>HTTP 403: The \<action\> returned a 'Forbidden' response. Make sure you have the proper permissions to access it.</br>HTTP 403: The \<action\> is "Forbidden".</br>HTTP 403: Could not access the ITSM system. Make sure you have the required authorization. | 1. Check if the credential in the request is present, and valid.</br>2. Check if your endpoint correctly validates the credentials.</br>3. If it's Secure Webhook, make sure the Microsoft Entra authentication is set up correctly. For more information, see [action groups](action-groups.md).|
+|</br>HTTP 403: The \<action\> returned a "Forbidden" response.</br>HTTP 403: Couldn't trigger the \<action\>. Make sure you have the required authorization.</br>HTTP 403: The \<action\> returned a 'Forbidden' response. Make sure you have the proper permissions to access it.</br>HTTP 403: The \<action\> is "Forbidden".</br>HTTP 403: Couldn't access the ITSM system. Make sure you have the required authorization. | 1. Check if the credential in the request is present, and valid.</br>2. Check if your endpoint correctly validates the credentials.</br>3. If it's Secure Webhook, make sure the Microsoft Entra authentication is set up correctly. For more information, see [action groups](action-groups.md).|
| HTTP 403: The access token needs to be refreshed.| Refresh the access token and retry. For more information, see [Connect ServiceNow with IT Service Management Connector](./itsmc-connections-servicenow.md) |
-|HTTP 404: The \<action\> was not found.</br>HTTP 404: The \<action\> target workflow was not found.</br>HTTP 404: The \<action\> target was not found.</br>HTTP 404: The \<action\> endpoint could not be found.</br>HTTP 404: The \<action\> was deleted. | 1. Check if the endpoints included in the requests are valid, up and running and accepting the requests.</br>2. For ITSM, check if the ITSM connector is still active.|
+|HTTP 404: The \<action\> wasn't found.</br>HTTP 404: The \<action\> target workflow wasn't found.</br>HTTP 404: The \<action\> target wasn't found.</br>HTTP 404: The \<action\> endpoint couldn't be found.</br>HTTP 404: The \<action\> was deleted. | 1. Check if the endpoints included in the requests are valid, up and running and accepting the requests.</br>2. For ITSM, check if the ITSM connector is still active.|
|HTTP 408: The call to the \<action\> timed out.</br>HTTP 408: The call to the Azure App service endpoint timed out. | 1.Check the client network connection, and retry.</br>2. Check if your endpoint is up and running and can process the request successfully.</br>3. Clear the browser cache, and retry. | |HTTP 409: The \<action\> returned a 'conflict' error. |Check the alert payload received on your endpoint, and make sure the endpoint and its downstream service(s) can process the request successfully. |
-|HTTP 429: The \<action\> could not be triggered because it is handling too many requests right now. |Check if your endpoint can handle the requests.</br>2. Wait a few minutes and retry. |
-|HTTP 500: The \<action\> encountered an internal server error.</br>HTTP 500: Could not reach the Azure \<action\> server.</br>HTTP 500: The \<action\> returned an 'internal server' error.</br>HTTP 500: The ServiceNow endpoint returned an 'Unexpected' response.</li></ul> |Check the alert payload received on your endpoint, and make sure the endpoint and its downstream service(s) can process the request successfully. |
+|HTTP 429: The \<action\> couldn't be triggered because it's handling too many requests right now. |1. Check if your endpoint can handle the requests.</br>2. Wait a few minutes and retry. |
+|HTTP 500: The \<action\> encountered an internal server error.</br>HTTP 500: Couldn't reach the Azure \<action\> server.</br>HTTP 500: The \<action\> returned an 'internal server' error.</br>HTTP 500: The ServiceNow endpoint returned an 'Unexpected' response.</li></ul> |Check the alert payload received on your endpoint, and make sure the endpoint and its downstream service(s) can process the request successfully. |
|HTTP 502: The \<action\> returned a bad gateway error. |Check if your endpoint, and its downstream service(s) are up and running and are accepting requests. |
-|HTTP 503: The \<action\> host is not running.</br>HTTP 503: The service providing the \<action\> endpoint is temporarily unavailable.</br>HTTP 503: The ServiceNow returned Service Unavailable|Check if your endpoint is up and running and is accepting requests. |
-| The \<action\> could not be triggered because the \<action\> has not succeeded after XXX retries. Calls to the \<action\> will be blocked for up to XXX minutes. Try again in XXX minutes. |Check the alert payload received on your endpoint, and make sure the endpoint and its downstream service(s) can process the request successfully.|
+|HTTP 503: The \<action\> host isn't running.</br>HTTP 503: The service providing the \<action\> endpoint is temporarily unavailable.</br>HTTP 503: The ServiceNow returned Service Unavailable|Check if your endpoint is up and running and is accepting requests. |
+| The \<action\> couldn't be triggered because the \<action\> didn't succeed after XXX retries. Calls to the \<action\> will be blocked for up to XXX minutes. Try again in XXX minutes. |Check the alert payload received on your endpoint, and make sure the endpoint and its downstream service(s) can process the request successfully.|
## Troubleshooting error codes for notifications
The error messages in this section are related to these **notifications**:
| Error Codes | Troubleshooting Steps | | | |
-| The email could not be sent because the recipient address was not found.</br>The email could not be sent because the email domain is invalid, or the MX resource record does not exist on the Domain Name Server (DNS). |Verify the email address(es) is/are valid and try again. |
-|</br>The email was sent but the delivery status could not be verified.</br>The email could not be sent because of a permanent error. |Wait a few minutes and retry. If the issue persists, file a support ticket. |
-|</br>Invalid destination number.</br>Invalid source address.</br>Invalid phone number. | Verify that the phone number is valid and retry.
-| The message could not be sent because it was blocked by the recipient's provider. | 1. Verify if you can receive SMS from other sources.</br>2. Check with your service provider. |
-|</br>The message could not be sent because the delivery timed out.</br> The message could not be delivered to the recipient. |Wait a few minutes and retry. If the issue still persists, file a support ticket. |
+| The email couldn't be sent because the recipient address wasn't found.</br> The email couldn't be sent because the email domain is invalid, or the MX resource record doesn't exist on the Domain Name Server (DNS). |Verify the email address(es) is/are valid and try again. |
+|</br>The email was sent but the delivery status couldn't be verified.</br> The email couldn't be sent because of a permanent error. |Wait a few minutes and retry. If the issue persists, file a support ticket. |
+|</br>Invalid destination number.</br> Invalid source address.</br> Invalid phone number. | Verify that the phone number is valid and retry.
+| The message couldn't be sent because it was blocked by the recipient's provider. | 1. Verify if you can receive SMS from other sources.</br>2. Check with your service provider. |
+|</br>The message couldn't be sent because the delivery timed out.</br> The message couldn't be delivered to the recipient. |Wait a few minutes and retry. If the issue still persists, file a support ticket. |
|The message was sent successfully, but there was no confirmation of delivery from the recipient's device. | 1. Make sure your device is on, and service is available.</br>2. Wait for a few minutes and retry. |
-|The call could not go through because the recipient's line was busy. | 1. Make sure your device is on, and service is available, and not busy.</br>2. Wait for a few minutes and retry. |
-| The call went through, but the recipient did not select any response. The call might have been picked up by a voice mail service. |Make sure your device is on, the line is not busy, your service is not interrupted, and call does not go into voice mail. |
-| HTTP 500: There was a problem connecting the call. Please contact Azure support for assistance. | Wait a few minutes and retry. If the issue still persists, file a support ticket. |
+|The call couldn't go through because the recipient's line was busy. | 1. Make sure your device is on, service is available, and the line isn't busy.</br>2. Wait for a few minutes and retry. |
| The call went through, but the recipient did not select any response. The call might have been picked up by a voice mail service. |Make sure your device is on, the line isn't busy, your service isn't interrupted, and the call doesn't go into voice mail. |
+| HTTP 500: There was a problem connecting the call. Contact Azure support for assistance. | Wait a few minutes and retry. If the issue still persists, file a support ticket. |
> [!NOTE] > If your issue persists after you try to troubleshoot, please fill out a support ticket here: [Help + support - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
The logic model diagram visualizes components of Application Insights and how th
:::image type="content" source="media/app-insights-overview/app-insights-overview-blowout.svg" alt-text="Diagram that shows the path of data as it flows through the layers of the Application Insights service." lightbox="media/app-insights-overview/app-insights-overview-blowout.svg"::: > [!Note]
-> Firewall settings must be adjusted for data to reach ingestion endpoints. For more information, see [IP addresses used by Azure Monitor](./ip-addresses.md).
+> Firewall settings must be adjusted for data to reach ingestion endpoints. For more information, see [IP addresses used by Azure Monitor](../ip-addresses.md).
## Supported languages
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
This section provides answers to common questions.
Our [web tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) run on points of presence that are distributed around the globe. There are two solutions:
-* **Firewall door**: Allow requests to your server from [the long and changeable list of web test agents](./ip-addresses.md).
+* **Firewall door**: Allow requests to your server from [the long and changeable list of web test agents](../ip-addresses.md).
* **Custom code**: Write your own code to send periodic requests to your server from inside your intranet. You could run Visual Studio web tests for this purpose. The tester could send the results to Application Insights by using the `TrackAvailability()` API. ## Next steps
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
For more information, see the [Authentication](./azure-ad-authentication.md) doc
## HTTP proxy
-If your application is behind a firewall and can't connect directly to Application Insights, refer to [IP addresses used by Application Insights](./ip-addresses.md).
+If your application is behind a firewall and can't connect directly to Application Insights, refer to [IP addresses used by Application Insights](../ip-addresses.md).
To work around this issue, you can configure Application Insights Java 3.x to use an HTTP proxy.
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Basic metrics include request, dependency, and exception rate. Performance metri
## Troubleshooting
-Live Metrics uses different IP addresses than other Application Insights telemetry. Make sure [those IP addresses](./ip-addresses.md) are open in your firewall. Also check that [outgoing ports for Live Metrics](./ip-addresses.md#outgoing-ports) are open in the firewall of your servers.
+Live Metrics uses different IP addresses than other Application Insights telemetry. Make sure [those IP addresses](../ip-addresses.md) are open in your firewall. Also check that [outgoing ports for Live Metrics](../ip-addresses.md#outgoing-ports) are open in the firewall of your servers.
As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Live Metrics now only supports TLS 1.2. If you're using an older version of TLS, Live Metrics doesn't display any data. For applications based on .NET Framework 4.5.1, see [Enable Transport Layer Security (TLS) 1.2 on clients - Configuration Manager](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client#bkmk_net) to support the newer TLS version.
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Because the SDK batches data for submission, there might be a delay before items
* Continue to use the application. Take more actions to generate more telemetry. * Select **Refresh** in the portal resource view. Charts periodically refresh on their own, but manually refreshing forces them to refresh immediately.
-* Verify that [required outgoing ports](./ip-addresses.md) are open.
+* Verify that [required outgoing ports](../ip-addresses.md) are open.
* Use [Search](./transaction-search-and-diagnostics.md?tabs=transaction-search) to look for specific events. * Check the [FAQ][FAQ].
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Scenarios most affected by this change:
- Firewall exceptions or proxy redirects:
- In cases where monitoring for intranet web server is required, our earlier solution asked you to add individual service endpoints to your configuration. For more information, see the [Can I monitor an intranet web server?](./ip-addresses.md#can-i-monitor-an-intranet-web-server). Connection strings offer a better alternative by reducing this effort to a single setting. A simple prefix, suffix amendment, allows automatic population and redirection of all endpoints to the right services.
+ In cases where monitoring for intranet web server is required, our earlier solution asked you to add individual service endpoints to your configuration. For more information, see the [Can I monitor an intranet web server?](../ip-addresses.md#can-i-monitor-an-intranet-web-server). Connection strings offer a better alternative by reducing this effort to a single setting. A simple prefix, suffix amendment, allows automatic population and redirection of all endpoints to the right services.
- Sovereign or hybrid cloud environments:
azure-monitor Prometheus Remote Write Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-active-directory.md
Title: Remote-write in Azure Monitor Managed Service for Prometheus using Microsoft Entra ID
-description: Describes how to configure remote-write to send data from self-managed Prometheus running in your Kubernetes cluster running on-premises or in another cloud using Microsoft Entra authentication.
+ Title: Set up Prometheus remote write by using Microsoft Entra authentication
+description: Learn how to set up remote write in Azure Monitor managed service for Prometheus. Use Microsoft Entra authentication to send data from a self-managed Prometheus server running in your Azure Kubernetes Server (AKS) cluster or Azure Arc-enabled Kubernetes cluster on-premises or in a different cloud.
Last updated 11/01/2022
-# Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Entra authentication
-This article describes how to configure [remote-write](prometheus-remote-write.md) to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using Microsoft Entra authentication.
+# Send Prometheus data to Azure Monitor by using Microsoft Entra authentication
+
+This article describes how to set up [remote write](prometheus-remote-write.md) to send data from a self-managed Prometheus server running in your Azure Kubernetes Service (AKS) cluster or Azure Arc-enabled Kubernetes cluster by using Microsoft Entra authentication.
## Cluster configurations+ This article applies to the following cluster configurations: -- Azure Kubernetes service (AKS)
+- Azure Kubernetes Service cluster
- Azure Arc-enabled Kubernetes cluster-- Kubernetes cluster running in another cloud or on-premises
+- Kubernetes cluster running in a different cloud or on-premises
> [!NOTE]
-> For Azure Kubernetes service (AKS) or Azure Arc-enabled Kubernetes cluster, managed identify authentication is recommended. See [Azure Monitor managed service for Prometheus remote write - managed identity](prometheus-remote-write-managed-identity.md).
+> For an AKS cluster or an Azure Arc-enabled Kubernetes cluster, we recommend that you use managed identity authentication. For more information, see [Azure Monitor managed service for Prometheus remote write for managed identity](prometheus-remote-write-managed-identity.md).
## Prerequisites
-See prerequisites at [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#prerequisites).
+
+The prerequisites that are described in [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#prerequisites) apply to the processes that are described in this article.
+
+## Set up an application for Microsoft Entra ID
+
+The process to set up Prometheus remote write for an application by using Microsoft Entra authentication involves completing the following tasks:
+
+1. Register an application with Microsoft Entra ID.
+1. Get the client ID of the Microsoft Entra application.
+1. Assign the Monitoring Metrics Publisher role on the workspace data collection rule to the application.
+1. Create an Azure key vault and generate a certificate.
+1. Add a certificate to the Microsoft Entra application.
+1. Add a CSI driver and storage for the cluster.
+1. Deploy a sidecar container to set up remote write.
+
+The tasks are described in the following sections.
<a name='create-azure-active-directory-application'></a>
-## Create Microsoft Entra application
-Follow the procedure at [Register an application with Microsoft Entra ID and create a service principal](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) to register an application for Prometheus remote-write and create a service principal.
+### Register an application with Microsoft Entra ID
+Complete the steps to [register an application with Microsoft Entra ID](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) and create a service principal.
<a name='get-the-client-id-of-the-azure-active-directory-application'></a>
-## Get the client ID of the Microsoft Entra application.
+### Get the client ID of the Microsoft Entra application
-1. From the **Microsoft Entra ID** menu in the Azure portal, select **App registrations**.
-2. Locate your application and note the client ID.
+1. In the Azure portal, go to the **Microsoft Entra ID** menu and select **App registrations**.
+1. In the list of applications, copy the value for **Application (client) ID** for the registered application.
- :::image type="content" source="media/prometheus-remote-write-active-directory/application-client-id.png" alt-text="Screenshot showing client ID of Microsoft Entra application." lightbox="media/prometheus-remote-write-active-directory/application-client-id.png":::
-## Assign Monitoring Metrics Publisher role on the data collection rule to the application
-The application requires the *Monitoring Metrics Publisher* role on the data collection rule associated with your Azure Monitor workspace.
+### Assign the Monitoring Metrics Publisher role on the workspace data collection rule to the application
-1. From the menu of your Azure Monitor Workspace account, select the **Data collection rule** to open the **Overview** page for the data collection rule.
+The application must be assigned the Monitoring Metrics Publisher role on the data collection rule that is associated with your Azure Monitor workspace.
- :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png" alt-text="Screenshot showing data collection rule used by Azure Monitor workspace." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png":::
+1. On the resource menu for your Azure Monitor workspace, select **Overview**. For **Data collection rule**, select the link.
-2. Select **Access control (IAM)** in the **Overview** page for the data collection rule.
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png" alt-text="Screenshot that shows the data collection rule that's used by Azure Monitor workspace." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png":::
- :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png" alt-text="Screenshot showing Access control (IAM) menu item on the data collection rule Overview page." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png":::
+1. On the resource menu for the data collection rule, select **Access control (IAM)**.
-3. Select **Add** and then **Add role assignment**.
+1. Select **Add**, and then select **Add role assignment**.
- :::image type="content" source="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png" alt-text="Screenshot showing adding a role assignment on Access control pages." lightbox="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png":::
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png" alt-text="Screenshot that shows adding a role assignment on Access control pages." lightbox="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png":::
-4. Select **Monitoring Metrics Publisher** role and select **Next**.
+1. Select the **Monitoring Metrics Publisher** role, and then select **Next**.
- :::image type="content" source="media/prometheus-remote-write-managed-identity/add-role-assignment.png" alt-text="Screenshot showing list of role assignments." lightbox="media/prometheus-remote-write-managed-identity/add-role-assignment.png":::
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/add-role-assignment.png" alt-text="Screenshot that shows a list of role assignments." lightbox="media/prometheus-remote-write-managed-identity/add-role-assignment.png":::
-5. Select **User, group, or service principal** and then select **Select members**. Select the application that you created and click **Select**.
+1. Select **User, group, or service principal**, and then choose **Select members**. Select the application that you created, and then choose **Select**.
- :::image type="content" source="media/prometheus-remote-write-active-directory/select-application.png" alt-text="Screenshot showing selection of application." lightbox="media/prometheus-remote-write-active-directory/select-application.png":::
+ :::image type="content" source="media/prometheus-remote-write-active-directory/select-application.png" alt-text="Screenshot that shows selecting the application." lightbox="media/prometheus-remote-write-active-directory/select-application.png":::
-6. Select **Review + assign** to complete the role assignment.
+1. To complete the role assignment, select **Review + assign**.
+### Create an Azure key vault and generate a certificate
-## Create an Azure key vault and generate certificate
-
-1. If you don't already have an Azure key vault, then create a new one using the guidance at [Create a vault](../../key-vault/general/quick-create-portal.md#create-a-vault).
-2. Create a certificate using the guidance at [Add a certificate to Key Vault](../../key-vault/certificates/quick-create-portal.md#add-a-certificate-to-key-vault).
-3. Download the newly generated certificate in CER format using the guidance at [Export certificate from Key Vault](../../key-vault/certificates/quick-create-portal.md#export-certificate-from-key-vault).
+1. If you don't already have an Azure key vault, [create a vault](../../key-vault/general/quick-create-portal.md#create-a-vault).
+1. Create a certificate by using the guidance in [Add a certificate to Key Vault](../../key-vault/certificates/quick-create-portal.md#add-a-certificate-to-key-vault).
+1. Download the certificate in CER format by using the guidance in [Export a certificate from Key Vault](../../key-vault/certificates/quick-create-portal.md#export-certificate-from-key-vault).
<a name='add-certificate-to-the-azure-active-directory-application'></a>
-## Add certificate to the Microsoft Entra application
+### Add a certificate to the Microsoft Entra application
-1. From the menu for your Microsoft Entra application, select **Certificates & secrets**.
-2. Select **Upload certificate** and select the certificate that you downloaded.
+1. On the resource menu for your Microsoft Entra application, select **Certificates & secrets**.
+1. On the **Certificates** tab, select **Upload certificate** and select the certificate that you downloaded.
- :::image type="content" source="media/prometheus-remote-write-active-directory/upload-certificate.png" alt-text="Screenshot showing upload of certificate for Microsoft Entra application." lightbox="media/prometheus-remote-write-active-directory/upload-certificate.png":::
+ :::image type="content" source="media/prometheus-remote-write-active-directory/upload-certificate.png" alt-text="Screenshot that shows uploading a certificate for a Microsoft Entra application." lightbox="media/prometheus-remote-write-active-directory/upload-certificate.png":::
> [!WARNING]
-> Certificates have an expiration date, and it's the responsibility of the user to keep these certificates valid.
+> Certificates have an expiration date. It's the responsibility of the user to keep certificates valid.
-## Add CSI driver and storage for cluster
+### Add a CSI driver and storage for the cluster
> [!NOTE]
-> Azure Key Vault CSI driver configuration is just one of the ways to get certificate mounted on the pod. The remote write container only needs a local path to a certificate in the pod for the setting `AZURE_CLIENT_CERTIFICATE_PATH` value in the [Deploy Side car and configure remote write on the Prometheus server](#deploy-side-car-and-configure-remote-write-on-the-prometheus-server) step below.
+> Azure Key Vault CSI driver configuration is only one of the ways to get a certificate mounted on a pod. The remote write container needs a local path to a certificate in the pod only for the `<AZURE_CLIENT_CERTIFICATE_PATH>` value in the step [Deploy a sidecar container to set up remote write](#deploy-a-sidecar-container-to-set-up-remote-write).
-This step is only required if you didn't enable Azure Key Vault Provider for Secrets Store CSI Driver when you created your cluster.
+This step is required only if you didn't turn on Azure Key Vault Provider for Secrets Store CSI Driver when you created your cluster.
-1. Run the following Azure CLI command to enable Azure Key Vault Provider for Secrets Store CSI Driver for your cluster.
+1. To turn on Azure Key Vault Provider for Secrets Store CSI Driver for your cluster, run the following Azure CLI command:
```azurecli az aks enable-addons --addons azure-keyvault-secrets-provider --name <aks-cluster-name> --resource-group <resource-group-name> ```
-2. Run the following commands to give the identity access to the key vault.
+1. To give the identity access to the key vault, run these commands:
```azurecli # show client id of the managed identity of the cluster
This step is only required if you didn't enable Azure Key Vault Provider for Sec
az keyvault set-policy -n <keyvault-name> --certificate-permissions get --spn <identity-client-id> ```
-3. Create a *SecretProviderClass* by saving the following YAML to a file named *secretproviderclass.yml*. Replace the values for `userAssignedIdentityID`, `keyvaultName`, `tenantId` and the objects to retrieve from your key vault. See [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver](../../aks/csi-secrets-store-identity-access.md) for details on values to use.
+1. Create `SecretProviderClass` by saving the following YAML to a file named *secretproviderclass.yml*. Replace the values for `userAssignedIdentityID`, `keyvaultName`, `tenantId`, and the objects to retrieve from your key vault. For information about what values to use, see [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver](../../aks/csi-secrets-store-identity-access.md).
[!INCLUDE [secret-provider-class-yaml](../includes/secret-procider-class-yaml.md)]
-4. Apply the *SecretProviderClass* by running the following command on your cluster.
+1. Apply `SecretProviderClass` by running the following command on your cluster:
- ```
+ ```bash
kubectl apply -f secretproviderclass.yml ```
-## Deploy Side car and configure remote write on the Prometheus server
+### Deploy a sidecar container to set up remote write
-1. Copy the YAML below and save to a file. This YAML assumes you're using 8081 as your listening port. Modify that value if you use a different port.
+1. Copy the following YAML and save it to a file. The YAML uses port 8081 as the listening port. If you use a different port, modify that value in the YAML.
[!INCLUDE [prometheus-sidecar-remote-write-entra-yaml](../includes/prometheus-sidecar-remote-write-entra-yaml.md)]
-2. Replace the following values in the YAML.
+1. Replace the following values in the YAML file:
| Value | Description | |:|:|
- | `<CLUSTER-NAME>` | Name of your AKS cluster |
+ | `<CLUSTER-NAME>` | The name of your AKS cluster. |
| `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230906.1`<br>The remote write container image version. |
- | `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace |
- | `<APP-REGISTRATION -CLIENT-ID> ` | Client ID of your application |
- | `<TENANT-ID> ` | Tenant ID of the Microsoft Entra application |
- | `<CERT-NAME>` | Name of the certificate |
- | `<CLUSTER-NAME>` | Name of the cluster Prometheus is running on |
-
-
+ | `<INGESTION-URL>` | The value for **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace. |
+ | `<APP-REGISTRATION-CLIENT-ID>` | The client ID of your application. |
+ | `<TENANT-ID>` | The tenant ID of the Microsoft Entra application. |
+ | `<CERT-NAME>` | The name of the certificate. |
+ | `<CLUSTER-NAME>` | The name of the cluster that Prometheus is running on. |
--
-3. Open Azure Cloud Shell and upload the YAML file.
-4. Use helm to apply the YAML file to update your Prometheus configuration with the following CLI commands.
+1. Open Azure Cloud Shell and upload the YAML file.
+1. Use Helm to apply the YAML file and update your Prometheus configuration:
```azurecli
- # set context to your cluster
+ # set the context to your cluster
az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
- # use helm to update your remote write config
+ # use Helm to update your remote write config
   helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides> ``` ## Verification and troubleshooting
-See [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#verify-remote-write-is-working-correctly).
-## Next steps
+For verification and troubleshooting information, see [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#verify-remote-write-is-working-correctly).
+
+## Related content
- [Collect Prometheus metrics from an AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)-- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md).-- [Remote-write in Azure Monitor Managed Service for Prometheus](prometheus-remote-write.md)-- [Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication](./prometheus-remote-write-managed-identity.md)-- [Configure remote write for Azure Monitor managed service for Prometheus using Azure Workload Identity (preview)](./prometheus-remote-write-azure-workload-identity.md)-- [Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Entra pod identity (preview)](./prometheus-remote-write-azure-ad-pod-identity.md)
+- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md)
+- [Remote write in Azure Monitor managed service for Prometheus](prometheus-remote-write.md)
+- [Send Prometheus data to Azure Monitor by using managed identity authentication](./prometheus-remote-write-managed-identity.md)
+- [Send Prometheus data to Azure Monitor by using Microsoft Entra Workload ID (preview) authentication](./prometheus-remote-write-azure-workload-identity.md)
+- [Send Prometheus data to Azure Monitor by using Microsoft Entra pod-managed identity (preview) authentication](./prometheus-remote-write-azure-ad-pod-identity.md)
azure-monitor Prometheus Remote Write Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-azure-ad-pod-identity.md
Title: Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Entra pod identity (preview)
-description: Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Entra pod identity (preview)
+ Title: Set up Prometheus remote write by using Microsoft Entra pod-managed identity authentication
+description: Learn how to set up remote write for Azure Monitor managed service for Prometheus by using Microsoft Entra pod-managed identity (preview) authentication.
Last updated 05/11/2023
-# Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Entra pod identity (preview)
+# Send Prometheus data to Azure Monitor by using Microsoft Entra pod-managed identity (preview) authentication
+This article describes how to set up remote write for Azure Monitor managed service for Prometheus by using Microsoft Entra pod-managed identity (preview) authentication.
-> [!NOTE]
-> The remote write sidecar should only be configured via the following steps only if the AKS cluster already has the Microsoft Entra pod enabled. This approach is not recommended as Microsoft Entra pod identity has been deprecated to be replace by [Azure Workload Identity](/azure/active-directory/workload-identities/workload-identities-overview)
+> [!NOTE]
+> The remote write sidecar container that's described in this article should be set up only by using the following steps, and only if the Azure Kubernetes Service (AKS) cluster already has Microsoft Entra pod-managed identity enabled. Microsoft Entra pod-managed identity is deprecated and is replaced by [Microsoft Entra Workload ID](/azure/active-directory/workload-identities/workload-identities-overview). We recommend that you use Microsoft Entra Workload ID authentication.
+## Prerequisites
-To configure remote write for Azure Monitor managed service for Prometheus using Microsoft Entra pod identity, follow the steps below.
+The prerequisites that are described in [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#prerequisites) apply to the processes that are described in this article.
-1. Create user assigned identity or use an existing user assigned managed identity. For information on creating the managed identity, see [Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication](./prometheus-remote-write-managed-identity.md#get-the-client-id-of-the-user-assigned-identity).
-1. Assign the `Managed Identity Operator` and `Virtual Machine Contributor` roles to the managed identity created/used in the previous step.
+## Set up an application for Microsoft Entra pod-managed identity
- ```azurecli
- az role assignment create --role "Managed Identity Operator" --assignee <managed identity clientID> --scope <NodeResourceGroupResourceId>
+The process to set up Prometheus remote write for an application by using Microsoft Entra pod-managed identity authentication involves completing the following tasks:
+
+1. Register a user-assigned managed identity with Microsoft Entra ID.
+1. Assign the Managed Identity Operator and Virtual Machine Contributor roles to the managed identity.
+1. Assign the Monitoring Metrics Publisher role to the user-assigned managed identity.
+1. Create an Azure identity binding.
+1. Add the aadpodidbinding label to the Prometheus pod.
+1. Deploy a sidecar container to set up remote write.
+
+The tasks are described in the following sections.
+
+### Register a managed identity with Microsoft Entra ID
+
+Create a user-assigned managed identity or register an existing user-assigned managed identity.
+
+For information about creating a managed identity, see [Set up remote write for Azure Monitor managed service for Prometheus by using managed identity authentication](./prometheus-remote-write-managed-identity.md#get-the-client-id-of-the-user-assigned-managed-identity).
+
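If you prefer the Azure CLI to the portal, a minimal sketch for creating a user-assigned managed identity and capturing its client ID might look like the following; the resource group and identity names are placeholders that you supply.

```bash
# Create a user-assigned managed identity (placeholder names).
az identity create --resource-group <resource-group-name> --name <identity-name>

# Capture the client ID for use in the role assignments that follow.
az identity show --resource-group <resource-group-name> --name <identity-name> --query clientId -o tsv
```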
+### Assign the Managed Identity Operator and Virtual Machine Contributor roles to the managed identity
+
+```azurecli
+az role assignment create --role "Managed Identity Operator" --assignee <managed identity clientID> --scope <NodeResourceGroupResourceId>
- az role assignment create --role "Virtual Machine Contributor" --assignee <managed identity clientID> --scope <Node ResourceGroup Id>
- ```
+az role assignment create --role "Virtual Machine Contributor" --assignee <managed identity clientID> --scope <Node ResourceGroup Id>
+```
+
+The node resource group of the AKS cluster contains resources that you use in other steps in this process. This resource group has the name `MC_<AKS-RESOURCE-GROUP>_<AKS-CLUSTER-NAME>_<REGION>`. You can find the resource group name by using the **Resource groups** menu in the Azure portal.
+
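If you'd rather not browse the portal, the node resource group name can also be read from the cluster itself. A minimal sketch, assuming placeholder resource group and cluster names:

```bash
# Print the node resource group (MC_...) of the AKS cluster.
az aks show --resource-group <aks-resource-group> --name <aks-cluster-name> --query nodeResourceGroup -o tsv
```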
+### Assign the Monitoring Metrics Publisher role to the managed identity
+
+```azurecli
+az role assignment create --role "Monitoring Metrics Publisher" --assignee <managed identity clientID> --scope <NodeResourceGroupResourceId>
+```
+
+### Create an Azure identity binding
+
+The user-assigned managed identity requires an identity binding for the identity to be used as a pod-managed identity.
+
+Copy the following YAML to the *aadpodidentitybinding.yaml* file:
+
+```yaml
+
+apiVersion: "aadpodidentity.k8s.io/v1"
- The node resource group of the AKS cluster contains resources that you will require for other steps in this process. This resource group has the name MC_\<AKS-RESOURCE-GROUP\>_\<AKS-CLUSTER-NAME\>_\<REGION\>. You can locate it from the Resource groups menu in the Azure portal.
+kind: AzureIdentityBinding
+metadata:
+  name: demo1-azure-identity-binding
+spec:
+  AzureIdentity: "<AzureIdentityName>"
+  Selector: "<AzureIdentityBindingSelector>"
+```
-1. Grant user-assigned managed identity `Monitoring Metrics Publisher` roles.
+Run the following command:
- ```azurecli
- az role assignment create --role "Monitoring Metrics Publisher" --assignee <managed identity clientID> --scope <NodeResourceGroupResourceId>
- ```
+```azurecli
+kubectl create -f aadpodidentitybinding.yaml
+```
-1. Create AzureIdentityBinding
+### Add the aadpodidbinding label to the Prometheus pod
- The user assigned managed identity requires identity binding in order to be used as a pod identity. Run the following commands:
-
- Copy the following YAML to the `aadpodidentitybinding.yaml` file.
+The `aadpodidbinding` label must be added to the Prometheus pod for the pod-managed identity to take effect. You can add the label by updating the *deployment.yaml* file or by injecting labels when you deploy the sidecar container as described in the next section.
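For example, one way to add the label without editing *deployment.yaml* by hand is to patch the pod template of the Prometheus deployment. This is a sketch only; the deployment name, namespace, and selector value are placeholders that you should replace with your own.

```bash
# Add the aadpodidbinding label to the Prometheus pod template (placeholder names).
kubectl patch deployment <prometheus-deployment-name> -n <prometheus-namespace> \
  --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"aadpodidbinding":"<AzureIdentityBindingSelector>"}}}}}'
```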
- ```yml
+### Deploy a sidecar container to set up remote write
- apiVersion: "aadpodidentity.k8s.io/v1"
+1. Copy the following YAML and save it to a file. The YAML uses port 8081 as the listening port. If you use a different port, modify that value in the YAML.
- kind: AzureIdentityBinding
- metadata:
- name: demo1-azure-identity-binding
- spec:
 - AzureIdentity: "<AzureIdentityName>"
 - Selector: "<AzureIdentityBindingSelector>"
- ```
+ [!INCLUDE[pod-identity-yaml](../includes/prometheus-sidecar-remote-write-pod-identity-yaml.md)]
- Run the following command:
+1. Use Helm to apply the YAML file and update your Prometheus configuration:
- ```azurecli
- kubectl create -f aadpodidentitybinding.yaml
- ```
-
-1. Add a `aadpodidbinding` label to the Prometheus pod.
- The `aadpodidbinding` label must be added to the Prometheus pod for the pod identity to take effect. This can be achieved by updating the `deployment.yaml` or injecting labels while deploying the sidecar as mentioned in the next step.
+ ```azurecli
+ # set the context to your cluster
+ az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
-1. Deploy side car and configure remote write on the Prometheus server.
+ # use Helm to update your remote write config
+ helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides>
+ ```
- 1. Copy the YAML below and save to a file.
+## Verification and troubleshooting
- [!INCLUDE[pod-identity-yaml](../includes/prometheus-sidecar-remote-write-pod-identity-yaml.md)]
+For verification and troubleshooting information, see [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#verify-remote-write-is-working-correctly).
- b. Use helm to apply the YAML file to update your Prometheus configuration with the following CLI commands.
-
- ```azurecli
- # set context to your cluster
- az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
- # use helm to update your remote write config
- helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides>
- ```
-## Next steps
+## Related content
- [Collect Prometheus metrics from an AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) - [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md)-- [Remote-write in Azure Monitor Managed Service for Prometheus](prometheus-remote-write.md)-- [Remote-write in Azure Monitor Managed Service for Prometheus using Microsoft Entra ID](./prometheus-remote-write-active-directory.md)-- [Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication](./prometheus-remote-write-managed-identity.md)-- [Configure remote write for Azure Monitor managed service for Prometheus using Azure Workload Identity (preview)](./prometheus-remote-write-azure-workload-identity.md)
+- [Remote write in Azure Monitor managed service for Prometheus](prometheus-remote-write.md)
+- [Send Prometheus data to Azure Monitor by using Microsoft Entra authentication](./prometheus-remote-write-active-directory.md)
+- [Send Prometheus data to Azure Monitor by using managed identity authentication](./prometheus-remote-write-managed-identity.md)
+- [Send Prometheus data to Azure Monitor by using Microsoft Entra Workload ID (preview) authentication](./prometheus-remote-write-azure-workload-identity.md)
azure-monitor Prometheus Remote Write Azure Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-azure-workload-identity.md
Title: Configure remote write for Azure managed service for Prometheus using Microsoft Entra Workload ID (preview)
-description: Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Entra Workload ID (preview)
+ Title: Set up Prometheus remote write by using Microsoft Entra Workload ID authentication
+description: Learn how to set up remote write in Azure Monitor managed service for Prometheus. Use Microsoft Entra Workload ID (preview) authentication to send data from a self-managed Prometheus server to your Azure Monitor workspace.
-+ Last updated 09/10/2023
-# Configure remote write for Azure managed service for Prometheus using Microsoft Entra Workload ID (preview)
+# Send Prometheus data to Azure Monitor by using Microsoft Entra Workload ID (preview) authentication
-This article describes how to configure [remote-write](prometheus-remote-write.md) to send data from your Azure managed Prometheus cluster using Microsoft Entra Workload ID.
+This article describes how to set up [remote write](prometheus-remote-write.md) to send data from your Azure Monitor managed Prometheus cluster by using Microsoft Entra Workload ID authentication.
## Prerequisites
-* The cluster must have OIDC-specific feature flags and an OIDC issuer URL:
- * For managed clusters (AKS/EKS/GKE), see [Managed Clusters - Microsoft Entra Workload ID](https://azure.github.io/azure-workload-identity/docs/installation/managed-clusters.html)
- * For self-managed clusters, see [Self-Managed Clusters - Microsoft Entra Workload ID](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters.html)
-* Installed mutating admission webhook. For more information, see [Mutating Admission Webhook - Microsoft Entra Workload ID](https://azure.github.io/azure-workload-identity/docs/installation/mutating-admission-webhook.html)
-* The cluster already has Prometheus running. This guide assumes that the Prometheus is set up using [kube-prometheus-stack](https://azure.github.io/azure-workload-identity/docs/installation/managed-clusters.html), however, you can set up Prometheus any other way.
+To send data from a Prometheus server by using remote write with Microsoft Entra Workload ID authentication, you need:
-## Configure workload identity
+- A cluster that has feature flags that are specific to OpenID Connect (OIDC) and an OIDC issuer URL:
+ - For managed clusters (Azure Kubernetes Service, Amazon Elastic Kubernetes Service, and Google Kubernetes Engine), see [Managed Clusters - Microsoft Entra Workload ID](https://azure.github.io/azure-workload-identity/docs/installation/managed-clusters.html).
+ - For self-managed clusters, see [Self-Managed Clusters - Microsoft Entra Workload ID](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters.html).
+- An installed mutating admission webhook. For more information, see [Mutating Admission Webhook - Microsoft Entra Workload ID](https://azure.github.io/azure-workload-identity/docs/installation/mutating-admission-webhook.html).
+- Prometheus running in the cluster. This article assumes that Prometheus is set up by using the [kube-prometheus stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack), but you can set up Prometheus by using other methods.
-1. Export the following environment variables:
+## Set up a workload for Microsoft Entra Workload ID
- ```bash
- # [OPTIONAL] Only set this if you're using a Azure AD Application
- export APPLICATION_NAME="<your application name>"
+The process to set up Prometheus remote write for a workload by using Microsoft Entra Workload ID authentication involves completing the following tasks:
+
+1. Set up the workload identity.
+1. Create a Microsoft Entra application or user-assigned managed identity and grant permissions.
+1. Assign the Monitoring Metrics Publisher role on the workspace data collection rule to the application.
+1. Create or update your Kubernetes service account Prometheus pod.
+1. Establish federated identity credentials between the identity and the service account issuer and subject.
+1. Deploy a sidecar container to set up remote write.
+
+The tasks are described in the following sections.
+
+### Set up the workload identity
+
+To set up the workload identity, export the following environment variables:
+
+```bash
+# [OPTIONAL] Set this if you're using a Microsoft Entra application
+export APPLICATION_NAME="<your application name>"
- # [OPTIONAL] Only set this if you're using a user-assigned managed identity
- export USER_ASSIGNED_IDENTITY_NAME="<your user-assigned managed identity name>"
+# [OPTIONAL] Set this only if you're using a user-assigned managed identity
+export USER_ASSIGNED_IDENTITY_NAME="<your user-assigned managed identity name>"
- # environment variables for the Kubernetes service account & federated identity credential
- export SERVICE_ACCOUNT_NAMESPACE="<namespace of Prometheus pod>"
- export SERVICE_ACCOUNT_NAME="<name of service account associated with Prometheus pod>"
- export SERVICE_ACCOUNT_ISSUER="<your service account issuer url>"
- ```
+# Environment variables for the Kubernetes service account and federated identity credential
+export SERVICE_ACCOUNT_NAMESPACE="<namespace of Prometheus pod>"
+export SERVICE_ACCOUNT_NAME="<name of service account associated with Prometheus pod>"
+export SERVICE_ACCOUNT_ISSUER="<your service account issuer URL>"
+```
- For `SERVICE_ACCOUNT_NAME`, check if there's a service account (apart from the "default" service account) already associated with Prometheus pod, check for the value of `serviceaccountName` or `serviceAccount` (deprecated) in the `spec` of your Prometheus pod and use this value if it exists. If not, provide the name of the service account you would like to associate with your Prometheus pod.
+For `SERVICE_ACCOUNT_NAME`, check to see whether a service account (separate from the *default* service account) is already associated with the Prometheus pod. Look for the value of `serviceAccountName` or `serviceAccount` (deprecated) in the `spec` of your Prometheus pod, and use that value if it exists. If neither exists, enter the name of the service account that you want to associate with your Prometheus pod.
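For example, you can read the service account name directly from the running pod. This sketch assumes placeholder pod and namespace names:

```bash
# Show the service account associated with the Prometheus pod (placeholder names).
kubectl get pod <prometheus-pod-name> -n <prometheus-namespace> -o jsonpath='{.spec.serviceAccountName}'
```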
-1. Create a Microsoft Entra app or user assigned managed identity and grant permission to publish metrics to Azure Monitor workspace.
- ```azurecli
- # create an Azure Active Directory application
- az ad sp create-for-rbac --name "${APPLICATION_NAME}"
+### Create a Microsoft Entra application or user-assigned managed identity and grant permissions
- # create a user-assigned managed identity if using user-assigned managed identity for this tutorial
- az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}"
- ```
+Create a Microsoft Entra application or a user-assigned managed identity and grant permission to publish metrics to Azure Monitor workspace:
- Assign the *Monitoring Metrics Publisher* role to the Microsoft Entra app or user-assigned managed identity. For more information, see [Assign Monitoring Metrics Publisher role on the data collection rule to the managed identity](prometheus-remote-write-managed-identity.md#assign-monitoring-metrics-publisher-role-on-the-data-collection-rule-to-the-managed-identity).
+```azurecli
+# create a Microsoft Entra application
+az ad sp create-for-rbac --name "${APPLICATION_NAME}"
-1. Create or Update your Kubernetes service account Prometheus pod.
- Often there's a Kubernetes service account created and associated with the pod running the Prometheus container. If you're using kube-prometheus-stack, it automatically creates `prometheus-kube-prometheus-prometheus` service account.
+# create a user-assigned managed identity if you use a user-assigned managed identity for this article
+az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}"
+```
- If there's no Kubernetes service account associated with Prometheus besides the "default" service account, create a new service account specifically for Pod running Prometheus by running the following kubectl command:
-
- ```bash
- cat <<EOF | kubectl apply -f -
- apiVersion: v1
- kind: service account
- metadata:
- annotations:
- azure.workload.identity/client-id: ${APPLICATION_CLIENT_ID:-$USER_ASSIGNED_IDENTITY_CLIENT_ID}
- name: ${SERVICE_ACCOUNT_NAME}
- namespace: ${SERVICE_ACCOUNT_NAMESPACE}
- EOF
- ```
+### Assign the Monitoring Metrics Publisher role on the workspace data collection rule to the application or managed identity
- If there's a Kubernetes service account associated with your pod other than "default" service account, add the following annotation to your service account:
+For information about assigning the role, see [Assign the Monitoring Metrics Publisher role on the workspace data collection rule to the managed identity](prometheus-remote-write-managed-identity.md#assign-the-monitoring-metrics-publisher-role-on-the-workspace-data-collection-rule-to-the-managed-identity).
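If you assign the role with the Azure CLI instead of the portal, a minimal sketch might look like the following; the data collection rule resource ID and the client ID are placeholders that you supply.

```bash
# Assign Monitoring Metrics Publisher on the workspace's data collection rule (placeholder values).
az role assignment create \
  --role "Monitoring Metrics Publisher" \
  --assignee <app-or-identity-client-id> \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>
```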
- ```bash
 - kubectl annotate sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} azure.workload.identity/client-id="${APPLICATION_OR_USER_ASSIGNED_IDENTITY_CLIENT_ID}" --overwrite
- ```
+### Create or update your Kubernetes service account Prometheus pod
- If your Microsoft Entra app or user assigned managed identity isn't in the same tenant as your cluster, add the following annotation to your service account:
-
- ```bash
 - kubectl annotate sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} azure.workload.identity/tenant-id="${APPLICATION_OR_USER_ASSIGNED_IDENTITY_TENANT_ID}" --overwrite
- ```
+Often, a Kubernetes service account is created and associated with the pod running the Prometheus container. If you're using the kube-prometheus stack, it automatically creates the `prometheus-kube-prometheus-prometheus` service account.
-1. Establish federated identity credentials between the identity and the service account issuer and subject
-
- Create federated credentials (Azure CLI)
-
- * User-Assigned Managed identity
- ```cli
- az identity federated-credential create \
- --name "kubernetes-federated-credential" \
- --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" \
- --resource-group "${RESOURCE_GROUP}" \
- --issuer "${SERVICE_ACCOUNT_ISSUER}" \
- --subject "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
- ```
-
- * Microsoft Entra ID
- ```CLI
- # Get the ObjectID of the Microsoft Entra app.
-
- export APPLICATION_OBJECT_ID="$(az ad app show --id ${APPLICATION_CLIENT_ID} --query id -otsv)"
-
- #Add federated identity credential.
-
- cat <<EOF > params.json
- {
- "name": "kubernetes-federated-credential",
- "issuer": "${SERVICE_ACCOUNT_ISSUER}",
- "subject": "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}",
- "description": "Kubernetes service account federated credential",
- "audiences": [
- "api://AzureADTokenExchange"
- ]
- }
- EOF
-
- az ad app federated-credential create --id ${APPLICATION_OBJECT_ID} --parameters @params.json
- ```
+If no Kubernetes service account except the default service account is associated with Prometheus, create a new service account specifically for the pod running Prometheus.
+
+To create the service account, run the following kubectl command:
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ annotations:
+ azure.workload.identity/client-id: ${APPLICATION_CLIENT_ID:-$USER_ASSIGNED_IDENTITY_CLIENT_ID}
+ name: ${SERVICE_ACCOUNT_NAME}
+ namespace: ${SERVICE_ACCOUNT_NAMESPACE}
+EOF
+```
+
+If a Kubernetes service account other than the default service account is associated with your pod, add the following annotation to your service account:
+
+```bash
+kubectl annotate sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} azure.workload.identity/client-id="${APPLICATION_OR_USER_ASSIGNED_IDENTITY_CLIENT_ID}" --overwrite
+```
+
+If your Microsoft Entra application or user-assigned managed identity isn't in the same tenant as your cluster, add the following annotation to your service account:
+
+```bash
+kubectl annotate sa ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} azure.workload.identity/tenant-id="${APPLICATION_OR_USER_ASSIGNED_IDENTITY_TENANT_ID}" --overwrite
+```
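To confirm that the annotations landed on the service account, you can inspect it. A quick check, assuming the same environment variables as in the preceding commands:

```bash
# Verify the workload identity annotations on the service account.
kubectl get serviceaccount ${SERVICE_ACCOUNT_NAME} -n ${SERVICE_ACCOUNT_NAMESPACE} -o yaml
```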
+
+### Establish federated identity credentials between the identity and the service account issuer and subject
+
+Create federated credentials by using the Azure CLI.
+
+#### User-assigned managed identity
+
+```cli
+az identity federated-credential create \
+ --name "kubernetes-federated-credential" \
+ --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" \
+ --resource-group "${RESOURCE_GROUP}" \
+ --issuer "${SERVICE_ACCOUNT_ISSUER}" \
+ --subject "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
+```
+
+#### Microsoft Entra application
+
+```cli
+# Get the ObjectID of the Microsoft Entra app.
+
+export APPLICATION_OBJECT_ID="$(az ad app show --id ${APPLICATION_CLIENT_ID} --query id -otsv)"
+
+# Add a federated identity credential.
+
+cat <<EOF > params.json
+{
+ "name": "kubernetes-federated-credential",
+ "issuer": "${SERVICE_ACCOUNT_ISSUER}",
+ "subject": "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}",
+ "description": "Kubernetes service account federated credential",
+ "audiences": [
+ "api://AzureADTokenExchange"
+ ]
+}
+EOF
+
+az ad app federated-credential create --id ${APPLICATION_OBJECT_ID} --parameters @params.json
+```
+
+### Deploy a sidecar container to set up remote write
- ## Deploy the side car container
-
> [!IMPORTANT]
-> * The Prometheus pod must have the following label: `azure.workload.identity/use: "true"`
-> * The remote write sidecar container requires the following environment values:
-> * `INGESTION_URL` - The metrics ingestion endpoint as shown on the Overview page for the Azure Monitor workspace.
-> * `LISTENING_PORT` - `8081` (Any port is acceptable).
-> * `IDENTITY_TYPE` - `workloadIdentity`.
+>
> The Prometheus pod must have the following label: `azure.workload.identity/use: "true"`.
+>
+> The remote write sidecar container requires the following environment values:
+>
+> - `INGESTION_URL`: The metrics ingestion endpoint that's shown on the **Overview** page for the Azure Monitor workspace
+> - `LISTENING_PORT`: `8081` (any port is supported)
+> - `IDENTITY_TYPE`: `workloadIdentity`
+
+1. Copy the following YAML and save it to a file. The YAML uses port 8081 as the listening port. If you use a different port, modify that value in the YAML.
-Use the sample yaml below if you're using kube-prometheus-stack:
+ [!INCLUDE [prometheus-sidecar-remote-write-workload-identity-yaml](../includes/prometheus-sidecar-remote-write-workload-identity-yaml.md)]
-1. Replace the following values in the YAML.
+1. Replace the following values in the YAML:
| Value | Description | |:|:|
- | `<CLUSTER-NAME>` | Name of your AKS cluster |
+ | `<CLUSTER-NAME>` | The name of your AKS cluster. |
| `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230906.1` <br>The remote write container image version. |
- | `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace |
+ | `<INGESTION-URL>` | The value for **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace. |
-1. Use helm to apply the YAML file to update your Prometheus configuration with the following CLI commands.
+1. Use Helm to apply the YAML file and update your Prometheus configuration:
```azurecli
- # set context to your cluster
+ # set the context to your cluster
az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
- # use helm to update your remote write config
+ # use Helm to update your remote write config
   helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides> ```
-## Next steps
+## Verification and troubleshooting
+
+For verification and troubleshooting information, see [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#verify-remote-write-is-working-correctly).
+
+## Related content
-* [Collect Prometheus metrics from an AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
-* [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md)
-* [Remote-write in Azure Monitor Managed Service for Prometheus](prometheus-remote-write.md)
-* [Remote-write in Azure Monitor Managed Service for Prometheus using Microsoft Entra ID](./prometheus-remote-write-active-directory.md)
-* [Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication](./prometheus-remote-write-managed-identity.md)
-* [Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Entra pod identity (preview)](./prometheus-remote-write-azure-ad-pod-identity.md)
+- [Collect Prometheus metrics from an AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
+- [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md)
+- [Remote write in Azure Monitor managed service for Prometheus](prometheus-remote-write.md)
+- [Send Prometheus data to Azure Monitor by using Microsoft Entra authentication](./prometheus-remote-write-active-directory.md)
+- [Send Prometheus data to Azure Monitor by using managed identity authentication](./prometheus-remote-write-managed-identity.md)
+- [Send Prometheus data to Azure Monitor by using Microsoft Entra pod-managed identity (preview) authentication](./prometheus-remote-write-azure-ad-pod-identity.md)
azure-monitor Prometheus Remote Write Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-managed-identity.md
Title: Remote-write in Azure Monitor Managed Service for Prometheus using managed identity
-description: Describes how to configure remote-write to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using managed identity authentication.
+ Title: Set up Prometheus remote write by using managed identity authentication
+description: Learn how to set up remote write in Azure Monitor managed service for Prometheus. Use managed identity authentication to send data from a self-managed Prometheus server running in your Azure Kubernetes Server (AKS) cluster or Azure Arc-enabled Kubernetes cluster.
Last updated 11/01/2022
-# Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication
+# Send Prometheus data to Azure Monitor by using managed identity authentication
-This article describes how to configure [remote-write](prometheus-remote-write.md) to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using managed identity authentication. You either use an existing identity created by AKS or [create one of your own](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Both options are described here.
+This article describes how to set up [remote write](prometheus-remote-write.md) to send data from a self-managed Prometheus server running in your Azure Kubernetes Service (AKS) cluster or Azure Arc-enabled Kubernetes cluster by using managed identity authentication. You can either use an existing identity that's created by AKS or [create your own](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Both options are described here.
## Cluster configurations+ This article applies to the following cluster configurations: -- Azure Kubernetes service (AKS)
+- Azure Kubernetes Service cluster
- Azure Arc-enabled Kubernetes cluster > [!NOTE]
-> For a Kubernetes cluster running in another cloud or on-premises, see [Azure Monitor managed service for Prometheus remote write - Microsoft Entra ID](prometheus-remote-write-active-directory.md).
+> For information about setting up remote write for a Kubernetes cluster running in a different cloud or on-premises, see [Send Prometheus data to Azure Monitor by using Microsoft Entra authentication](prometheus-remote-write-active-directory.md).
## Prerequisites
-See prerequisites at [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#prerequisites).
-## Locate AKS node resource group
-The node resource group of the AKS cluster contains resources that you will require for other steps in this process. This resource group has the name `MC_<AKS-RESOURCE-GROUP>_<AKS-CLUSTER-NAME>_<REGION>`. You can locate it from the **Resource groups** menu in the Azure portal. Start by making sure that you can locate this resource group since other steps below will refer to it.
+The prerequisites that are described in [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#prerequisites) apply to the processes that are described in this article.
+
+## Set up an application for managed identity
+
+The process to set up Prometheus remote write for an application by using managed identity authentication involves completing the following tasks:
+
+1. Get the name of the AKS node resource group.
+1. Get the client ID of the user-assigned managed identity.
+1. Assign the Monitoring Metrics Publisher role on the workspace data collection rule to the managed identity.
+1. Give the AKS cluster access to the managed identity.
+1. Deploy a sidecar container to set up remote write.
+
+The tasks are described in the following sections.
+
+### Get the name of the AKS node resource group
+The node resource group of the AKS cluster contains resources that you use in other steps in this process. This resource group has the name `MC_<AKS-RESOURCE-GROUP>_<AKS-CLUSTER-NAME>_<REGION>`. You can find the resource group name by using the **Resource groups** menu in the Azure portal.
-## Get the client ID of the user assigned identity
-You will require the client ID of the identity that you're going to use. Note this value for use in later steps in this process.
-Instead of creating your own ID, you can use one of the identities created by AKS, which are listed in [Use a managed identity in Azure Kubernetes Service](../../aks/use-managed-identity.md). This article uses the `Kubelet` identity. The name of this identity is `<AKS-CLUSTER-NAME>-agentpool` and is located in the node resource group of the AKS cluster.
+### Get the client ID of the user-assigned managed identity
+You must get the client ID of the identity that you're going to use. Copy the client ID to use later in the process.
-Click on the `<AKS-CLUSTER-NAME>-agentpool` managed identity and copy the **Client ID** from the **Overview** page. To learn more about managed identity, visit [Managed Identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+Instead of creating your own client ID, you can use one of the identities that are created by AKS. To learn more about the identities, see [Use a managed identity in Azure Kubernetes Service](../../aks/use-managed-identity.md).
+This article uses the kubelet identity. The name of this identity is `<AKS-CLUSTER-NAME>-agentpool`, and it's in the node resource group of the AKS cluster.
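As an alternative to the portal, the kubelet identity's client ID can be read from the cluster with the Azure CLI. A sketch with placeholder resource group and cluster names:

```bash
# Print the client ID of the AKS kubelet (agentpool) identity.
az aks show --resource-group <aks-resource-group> --name <aks-cluster-name> \
  --query identityProfile.kubeletidentity.clientId -o tsv
```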
-## Assign Monitoring Metrics Publisher role on the data collection rule to the managed identity
-The managed identity requires the *Monitoring Metrics Publisher* role on the data collection rule associated with your Azure Monitor workspace.
-1. From the menu of your Azure Monitor Workspace account, select the **Data collection rule** to open the **Overview** page for the data collection rule.
+Select the `<AKS-CLUSTER-NAME>-agentpool` managed identity. On the **Overview** page, copy the value for **Client ID**. For more information, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
- :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png" alt-text="Screenshot showing data collection rule used by Azure Monitor workspace." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png":::
-2. Select **Access control (IAM)** in the **Overview** page for the data collection rule.
+### Assign the Monitoring Metrics Publisher role on the workspace data collection rule to the managed identity
- :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png" alt-text="Screenshot showing Access control (IAM) menu item on the data collection rule Overview page." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png":::
+The managed identity must be assigned the Monitoring Metrics Publisher role on the data collection rule that is associated with your Azure Monitor workspace.
-3. Select **Add** and then **Add role assignment**.
+1. On the resource menu for your Azure Monitor workspace, select **Overview**. For **Data collection rule**, select the link.
- :::image type="content" source="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png" alt-text="Screenshot showing adding a role assignment on Access control pages." lightbox="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png":::
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png" alt-text="Screenshot that shows the data collection rule that's associated with an Azure Monitor workspace." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png":::
-4. Select **Monitoring Metrics Publisher** role and select **Next**.
+1. On the resource menu for the data collection rule, select **Access control (IAM)**.
- :::image type="content" source="media/prometheus-remote-write-managed-identity/add-role-assignment.png" alt-text="Screenshot showing list of role assignments." lightbox="media/prometheus-remote-write-managed-identity/add-role-assignment.png":::
+1. Select **Add**, and then select **Add role assignment**.
-5. Select **Managed Identity** and then select **Select members**. Choose the subscription the user assigned identity is located in and then select **User-assigned managed identity**. Select the User Assigned Identity that you're going to use and click **Select**.
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png" alt-text="Screenshot that shows adding a role assignment on Access control pages." lightbox="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png":::
- :::image type="content" source="media/prometheus-remote-write-managed-identity/select-managed-identity.png" alt-text="Screenshot showing selection of managed identity." lightbox="media/prometheus-remote-write-managed-identity/select-managed-identity.png":::
+1. Select the **Monitoring Metrics Publisher** role, and then select **Next**.
-6. Select **Review + assign** to complete the role assignment.
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/add-role-assignment.png" alt-text="Screenshot that shows a list of role assignments." lightbox="media/prometheus-remote-write-managed-identity/add-role-assignment.png":::
+1. Select **Managed Identity**, and then choose **Select members**. Select the subscription that contains the user-assigned identity, and then select **User-assigned managed identity**. Select the user-assigned identity that you want to use, and then choose **Select**.
-## Grant AKS cluster access to the identity
-This step isn't required if you're using an AKS identity since it will already have access to the cluster.
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/select-managed-identity.png" alt-text="Screenshot that shows selecting a user-assigned managed identity." lightbox="media/prometheus-remote-write-managed-identity/select-managed-identity.png":::
+
+1. To complete the role assignment, select **Review + assign**.
+
+### Give the AKS cluster access to the managed identity
+
+This step isn't required if you're using an AKS identity. An AKS identity already has access to the cluster.
> [!IMPORTANT]
-> You must have owner/user access administrator access on the cluster.
+> To complete the steps in this section, you must have owner or user access administrator permissions for the cluster.
-1. Identify the virtual machine scale sets in the [node resource group](#locate-aks-node-resource-group) for your AKS cluster.
+1. Identify the virtual machine scale sets in the [node resource group](#get-the-name-of-the-aks-node-resource-group) for your AKS cluster.
- :::image type="content" source="media/prometheus-remote-write-managed-identity/resource-group-details-virtual-machine-scale-sets.png" alt-text="Screenshot showing virtual machine scale sets in the node resource group." lightbox="media/prometheus-remote-write-managed-identity/resource-group-details-virtual-machine-scale-sets.png":::
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/resource-group-details-virtual-machine-scale-sets.png" alt-text="Screenshot that shows virtual machine scale sets in the node resource group." lightbox="media/prometheus-remote-write-managed-identity/resource-group-details-virtual-machine-scale-sets.png":::
-2. Run the following command in Azure CLI for each virtual machine scale set.
+2. For each virtual machine scale set, run the following command in the Azure CLI:
```azurecli az vmss identity assign -g <AKS-NODE-RESOURCE-GROUP> -n <AKS-VMSS-NAME> --identities <USER-ASSIGNED-IDENTITY-RESOURCE-ID> ```
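If you want to script the preceding two steps, a sketch that lists the scale sets in the node resource group and assigns the identity to each might look like this; the resource group and identity resource ID are placeholders.

```bash
# List the scale sets in the node resource group and assign the user-assigned identity to each (placeholder values).
for vmss in $(az vmss list --resource-group <AKS-NODE-RESOURCE-GROUP> --query "[].name" -o tsv); do
  az vmss identity assign --resource-group <AKS-NODE-RESOURCE-GROUP> --name "$vmss" \
    --identities <USER-ASSIGNED-IDENTITY-RESOURCE-ID>
done
```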
-## Deploy Side car and configure remote write on the Prometheus server
+### Deploy a sidecar container to set up remote write
-1. Copy the YAML below and save to a file. This YAML uses 8081 as the listening port but you can modify that value if you wish to use a different port.
+1. Copy the following YAML and save it to a file. The YAML uses port 8081 as the listening port. If you use a different port, modify the port in the YAML.
[!INCLUDE[managed-identity-yaml](../includes/prometheus-sidecar-remote-write-managed-identity-yaml.md)]
-2. Replace the following values in the YAML.
+1. Replace the following values in the YAML:
| Value | Description | |:|:|
- | `<AKS-CLUSTER-NAME>` | Name of your AKS cluster |
+ | `<AKS-CLUSTER-NAME>` | The name of your AKS cluster. |
| `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20230906.1`<br> The remote write container image version. |
- | `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace |
- | `<MANAGED-IDENTITY-CLIENT-ID>` | **Client ID** from the **Overview** page for the managed identity |
- | `<CLUSTER-NAME>` | Name of the cluster Prometheus is running on |
+ | `<INGESTION-URL>` | The value for **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace. |
+ | `<MANAGED-IDENTITY-CLIENT-ID>` | The value for **Client ID** from the **Overview** page for the managed identity. |
+ | `<CLUSTER-NAME>` | The name of the cluster that Prometheus is running on. |
-> [!IMPORTANT]
-> For Azure Government cloud, add the following environment variables in the "env" section of the yaml: - name: INGESTION_AAD_AUDIENCE value: `https://monitor.azure.us/`
+ > [!IMPORTANT]
+ > For Azure Government cloud, add the following environment variables in the `env` section of the YAML file:
+ >
+ > `- name: INGESTION_AAD_AUDIENCE value: https://monitor.azure.us/`
-3. Open Azure Cloud Shell and upload the YAML file.
-4. Use helm to apply the YAML file to update your Prometheus configuration with the following CLI commands.
+1. Open Azure Cloud Shell and upload the YAML file.
+1. Use Helm to apply the YAML file and update your Prometheus configuration:
```azurecli # set context to your cluster az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
- # use helm to update your remote write config
+ # use Helm to update your remote write config
helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides> ``` ## Verification and troubleshooting
-See [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#verify-remote-write-is-working-correctly).
-## Next steps
+For verification and troubleshooting information, see [Azure Monitor managed service for Prometheus remote write](prometheus-remote-write.md#verify-remote-write-is-working-correctly).
+
+## Related content
- [Collect Prometheus metrics from an AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) - [Learn more about Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md)-- [Remote-write in Azure Monitor Managed Service for Prometheus](prometheus-remote-write.md)-- [Remote-write in Azure Monitor Managed Service for Prometheus using Microsoft Entra ID](./prometheus-remote-write-active-directory.md)-- [Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication](./prometheus-remote-write-managed-identity.md)-- [Configure remote write for Azure Monitor managed service for Prometheus using Microsoft Entra pod identity (preview)](./prometheus-remote-write-azure-ad-pod-identity.md)
+- [Remote write in Azure Monitor managed service for Prometheus](prometheus-remote-write.md)
+- [Send Prometheus data to Azure Monitor by using Microsoft Entra authentication](./prometheus-remote-write-active-directory.md)
+- [Send Prometheus data to Azure Monitor by using Microsoft Entra Workload ID (preview) authentication](./prometheus-remote-write-azure-workload-identity.md)
+- [Send Prometheus data to Azure Monitor by using Microsoft Entra pod-managed identity (preview) authentication](./prometheus-remote-write-azure-ad-pod-identity.md)
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
For example, usage from Log Analytics can be found by first filtering on the **M
Add a filter on the **Instance ID** column for **contains workspace** or **contains cluster**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column.
-#### View data benefits used
+## View data allocation benefits
-Since the usage export has both the number of units of usage and their cost, you can use this export to see the amount of benefits you are receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). In the usage export, to see the benefits, look for the meters named:
+Because the usage export has both the number of units of usage and their cost, you can use this export to see the amount of benefit you're receiving from offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). To see the benefits in the usage export, filter the *Instance ID* column to your workspace. (To select all of your workspaces in the spreadsheet, filter the *Instance ID* column to "contains /workspaces/".) Then filter the *Meter* column to either of the following two meters:
-- **Standard Data Included per Node**: this meter is under the service "Insight and Analytics" and tracks the benefits received when a workspace in either in Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance and/or has [Defender for Servers](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) enabled.
+- **Standard Data Included per Node**: This meter is under the service "Insight and Analytics" and tracks the benefit received when a workspace is in the Log Analytics [Per Node pricing tier](logs/cost-logs.md#per-node-pricing-tier) and/or has [Defender for Servers](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) enabled. Each of these provides a 500-MB/server/day data allowance.
- **Free Benefit - M365 Defender Data Ingestion**: this meter, under the service "Azure Monitor", tracks the benefit from the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). > [!NOTE] > See [Azure Monitor billing meter names](cost-meters.md) for a reference of the billing meter names used by Azure Monitor in Azure Cost Management + Billing.
+You can also see these data benefits on the Log Analytics **Usage and estimated costs** page. If the workspace is receiving these benefits, a sentence below the cost estimate table gives the data volume of the benefits used over the last 31 days. ## Usage and estimated costs You can get additional usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
## Usage and estimated costs You can get additional usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
B. Billable data ingestion by table from the past month.
To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named *Data point volume*, and then select the *Apply splitting* option to split the data by "Telemetry item type".
-## View data allocation benefits
-
-To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to [export your usage details](#export-usage-details).
-
-1. Open the exported usage spreadsheet and filter the *Instance ID* column to your workspace. To select all of your workspaces in the spreadsheet, filter the *Instance ID* column to "contains /workspaces/".
-2. Filter the *ResourceRate* column to show only rows where this is equal to zero. Now you will see the data allocations from these various sources.
-
-> [!NOTE]
-> Data allocations from Defender for Servers 500 MB/server/day will appear in rows with the meter name "Standard Data Included per Node" and the meter category to "Insight and Analytics" (the name of a legacy offer still used with this meter.) If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter will also include the data allocations from this Log Analytics pricing tier.
-- ## Operations Management Suite subscription entitlements Customers who purchased Microsoft Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for Log Analytics and Application Insights. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost.
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/ip-addresses.md
+
+ Title: IP addresses used by Azure Monitor | Microsoft Docs
+description: This article discusses server firewall exceptions that are required by Azure Monitor
+ Last updated : 11/15/2023
ms.service: azure-monitor
+++++
+# IP addresses used by Azure Monitor
+
+[Azure Monitor](./overview.md) uses several IP addresses. Azure Monitor is made up of core platform metrics and logs in addition to Log Analytics and Application Insights. You might need to know IP addresses if the app or infrastructure that you're monitoring is hosted behind a firewall.
+
+> [!NOTE]
+> Although these addresses are static, it's possible that we'll need to change them from time to time. All Application Insights traffic represents outbound traffic with the exception of availability monitoring and webhook action groups, which also require inbound firewall rules.
+
+You can use Azure [network service tags](../virtual-network/service-tags-overview.md) to manage access if you're using Azure network security groups. If you're managing access for hybrid/on-premises resources, you can download the equivalent IP address lists as [JSON files](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files), which are updated each week. To cover all the exceptions in this article, use the service tags `ActionGroup`, `ApplicationInsightsAvailability`, and `AzureMonitor`.
+
+## Outgoing ports
+
+You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK or Application Insights Agent to send data to the portal.
+
+> [!NOTE]
+> These addresses are listed by using Classless Interdomain Routing notation. As an example, an entry like `51.144.56.112/28` is equivalent to 16 IPs that start at `51.144.56.112` and end at `51.144.56.127`.
+
+| Purpose | URL | Type | IP | Ports |
+| | | | | |
+| Telemetry | dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com<br/>\*.in.applicationinsights.azure.com<br/><br/> |Global<br/>Global<br/>Global<br/>Regional<br/>|| 443 |
+| Live Metrics | live.applicationinsights.azure.com<br/>rt.applicationinsights.microsoft.com<br/>rt.services.visualstudio.com<br/><br/>{region}.livediagnostics.monitor.azure.com<br/><br/>*Example for {region}: westus2<br/>Find all supported regions in [this table](#addresses-grouped-by-region-azure-public-cloud).*|Global<br/>Global<br/>Global<br/><br/>Regional<br/>|20.49.111.32/29<br/>13.73.253.112/29| 443 |
+
+> [!NOTE]
+> Application Insights ingestion endpoints are IPv4 only.
+
+> [!IMPORTANT]
+> For Live Metrics, you must also allow the [IPs for the respective region](#addresses-grouped-by-region-azure-public-cloud) in addition to the global IPs.
+
+## Application Insights Agent
+
+Application Insights Agent configuration is needed only when you're making changes.
+
+| Purpose | URL | Ports |
+| | | |
+| Configuration |`management.core.windows.net` |`443` |
+| Configuration |`management.azure.com` |`443` |
+| Configuration |`login.windows.net` |`443` |
+| Configuration |`login.microsoftonline.com` |`443` |
+| Configuration |`secure.aadcdn.microsoftonline-p.com` |`443` |
+| Configuration |`auth.gfx.ms` |`443` |
+| Configuration |`login.live.com` |`443` |
+| Installation | `globalcdn.nuget.org`, `packages.nuget.org`, `api.nuget.org/v3/index.json`, `nuget.org`, `api.nuget.org`, `dc.services.vsallin.net` |`443` |
+
+## Availability tests
+
+This is the list of addresses from which [availability web tests](./app/availability-overview.md) are run. If you want to run web tests on your app but your web server is restricted to serving specific clients, you'll have to permit incoming traffic from our availability test servers.
+
+> [!NOTE]
+> For resources located inside private virtual networks that can't allow direct inbound communication with the availability test agents in public Azure, the only option is to [create and host your own custom availability tests](app/availability-azure-functions.md#review-trackavailability-test-results).
+
+### Service tag
+
+If you're using Azure network security groups, add an *inbound port rule* to allow traffic from Application Insights availability tests. Select **Service Tag** as the **Source** and **ApplicationInsightsAvailability** as the **Source service tag**.
+
+>[!div class="mx-imgBorder"]
+>:::image type="content" source="./app/media/ip-addresses/add-inbound-security-rule.png" lightbox="./app/media/ip-addresses/add-inbound-security-rule.png" alt-text="Screenshot that shows selecting Inbound security rules and then selecting Add.":::
+
+>[!div class="mx-imgBorder"]
+>:::image type="content" source="./app/media/ip-addresses/add-inbound-security-rule2.png" lightbox="./app/media/ip-addresses/add-inbound-security-rule2.png" alt-text="Screenshot that shows the Add inbound security rule tab.":::
+
+Open port 80 (HTTP) and port 443 (HTTPS) for incoming traffic from these addresses. IP addresses are grouped by location.
+
+### IP addresses
+
+If you're looking for the actual IP addresses so that you can add them to the list of allowed IPs in your firewall, download the JSON file that describes Azure IP ranges. These files contain the most up-to-date information. After you download the appropriate file, open it by using your favorite text editor. Search for **ApplicationInsightsAvailability** to go straight to the section of the file that describes the service tag for availability tests.
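
One way to pull the relevant ranges out of the downloaded file without scrolling through it by hand is a short script. The sketch below assumes Python and a locally downloaded service-tags file (the file name shown is a placeholder, because the published file changes every week); it relies on the file's top-level `values` array, where each entry carries `properties.addressPrefixes`:

```python
import json

# Placeholder file name; use whichever weekly service-tags file you downloaded.
with open("ServiceTags_Public.json") as f:
    service_tags = json.load(f)

# Pull out the availability-test ranges discussed in this section.
availability = next(
    tag for tag in service_tags["values"]
    if tag["name"] == "ApplicationInsightsAvailability"
)

for prefix in availability["properties"]["addressPrefixes"]:
    print(prefix)
```

The same approach works for the `AzureMonitor` and `ActionGroup` tags mentioned elsewhere in this article.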
+
+For Azure public cloud, you need to allow both the global IP ranges and the ranges specific to the region of the Application Insights resource that receives live data. You can find the global IP ranges in the [Outgoing ports](#outgoing-ports) table at the top of this document, and the regional IP ranges in the [Addresses grouped by region](#addresses-grouped-by-region-azure-public-cloud) table below.
+
+#### Azure public cloud
+
+Download [public cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=56519).
+
+#### Azure US Government cloud
+
+Download [US Government cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=57063).
+
+#### Microsoft Azure operated by 21Vianet cloud
+
+Download [China cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=57062).
+
+#### Addresses grouped by region (Azure public cloud)
+
+Add the subdomain of the corresponding region to the Live Metrics URL from the [Outgoing ports](#outgoing-ports) table.
+
+> [!NOTE]
+> As described in the [Azure TLS 1.2 migration announcement](https://azure.microsoft.com/updates/azuretls12/), Application Insights connection-string based regional telemetry endpoints only support TLS 1.2. Global telemetry endpoints continue to support TLS 1.0 and TLS 1.1.
+>
+> If you're using an older version of TLS, Application Insights will not ingest any telemetry. For applications based on .NET Framework, see [Transport Layer Security (TLS) best practices with the .NET Framework](/dotnet/framework/network-programming/tls) to support the newer TLS version.
+
+| Continent/Country | Region | Subdomain | IP |
+| | | | |
+|Asia|East Asia|eastasia|52.229.216.48/28<br/>20.189.111.16/29|
+||Southeast Asia|southeastasia|52.139.250.96/28<br/>23.98.106.152/29|
+|Australia|Australia Central|australiacentral|20.37.227.104/29<br/><br/>|
+||Australia Central 2|australiacentral2|20.53.60.224/31<br/><br/>|
+||Australia East|australiaeast|20.40.124.176/28<br/>20.37.198.232/29|
+||Australia Southeast|australiasoutheast|20.42.230.224/29<br/><br/>|
+|Brazil|Brazil South|brazilsouth|191.233.26.176/28<br/>191.234.137.40/29|
+||Brazil Southeast|brazilsoutheast|20.206.0.196/31<br/><br/>|
+|Canada|Canada Central|canadacentral|52.228.86.152/29<br/><br/>|
+|Europe|North Europe|northeurope|52.158.28.64/28<br/>20.50.68.128/29|
+||West Europe|westeurope|51.144.56.96/28<br/>40.113.178.32/29|
+|France|France Central|francecentral|20.40.129.32/28<br/>20.43.44.216/29|
+||France South|francesouth|20.40.129.96/28<br/>52.136.191.12/31|
+|Germany|Germany West Central|germanywestcentral|20.52.95.50/31<br/><br/>|
+|India|Central India|centralindia|52.140.108.216/29<br/><br/>|
+||South India|southindia|20.192.153.106/31<br/><br/>|
+|Japan|Japan East|japaneast|52.140.232.160/28<br/>20.43.70.224/29|
+||Japan West|japanwest|20.189.194.102/31<br/><br/>|
+|Korea|Korea Central|koreacentral|20.41.69.24/29<br/><br/>|
+|Norway|Norway East|norwayeast|51.120.235.248/29<br/><br/>|
+||Norway West|norwaywest|51.13.143.48/31<br/><br/>|
+|Qatar|Qatar Central|qatarcentral|20.21.39.224/29<br/><br/>|
+|South Africa|South Africa North|southafricanorth|102.133.219.136/29<br/><br/>|
+|Switzerland|Switzerland North|switzerlandnorth|51.107.52.200/29<br/><br/>|
+||Switzerland West|switzerlandwest|51.107.148.8/29<br/><br/>|
+|United Arab Emirates|UAE North|uaenorth|20.38.143.44/31<br/>40.120.87.204/31|
+|United Kingdom|UK South|uksouth|51.105.9.128/28<br/>51.104.30.160/29|
+||UK West|ukwest|20.40.104.96/28<br/>51.137.164.200/29|
+|United States|Central US|centralus|13.86.97.224/28<br/>20.40.206.232/29|
+||East US|eastus|20.42.35.32/28<br/>20.49.111.32/29|
+||East US 2|eastus2|20.49.102.24/29<br/><br/>|
+||North Central US|northcentralus|23.100.224.16/28<br/>20.49.114.40/29|
+||South Central US|southcentralus|20.45.5.160/28<br/>13.73.253.112/29|
+||West US|westus|40.91.82.48/28<br/>52.250.228.8/29|
+||West US 2|westus2|40.64.134.128/29<br/><br/>|
+||West US 3|westus3|20.150.241.64/29<br/><br/>|
+
+#### Upcoming regions (Azure public cloud)
+
+> [!NOTE]
+> The following regions are not supported yet, but will be added in the near future.
+
+| Continent/Country | Region | Subdomain | IP |
+| | | | |
+|Canada|Canada East|TBD|52.242.40.208/31<br/><br/>|
+|Germany|Germany North|TBD|51.116.75.92/31<br/><br/>|
+|India|West India|TBD|20.192.84.164/31<br/><br/>|
+||Jio India Central|TBD|20.192.50.200/29<br/><br/>|
+||Jio India West|TBD|20.193.194.32/29<br/><br/>|
+|Israel|Israel Central|TBD|20.217.44.250/31<br/><br/>|
+|Poland|Poland Central|TBD|20.215.4.250/31<br/><br/>|
+|South Africa|South Africa West|TBD|102.37.86.196/31<br/><br/>|
+|Sweden|Sweden Central|TBD|51.12.25.192/29<br/><br/>|
+||Sweden South|TBD|51.12.17.128/29<br/><br/>|
+|Taiwan|Taiwan North|TBD|51.53.28.214/31<br/><br/>|
+||Taiwan Northwest|TBD|51.53.172.214/31<br/><br/>|
+|United Arab Emirates|UAE Central|TBD|20.45.95.68/31<br/><br/>|
+|United States|West Central US|TBD|52.150.154.24/29<br/><br/>|
+
+### Discovery API
+
+You might also want to [programmatically retrieve](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) the current list of service tags together with IP address range details.
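
As a rough sketch of that programmatic route, the Azure SDK for Python exposes the same discovery API through the network management client. The packages (`azure-identity`, `azure-mgmt-network`), the placeholder subscription ID, and the exact attribute names are assumptions here; check the SDK reference before relying on them:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# "<subscription-id>" is a placeholder. The location argument chooses which
# regional copy of the service-tag list is served; it doesn't filter the tags.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
result = client.service_tags.list("westus2")

wanted = {"AzureMonitor", "ApplicationInsightsAvailability", "ActionGroup"}
for tag in result.values:
    if tag.name in wanted:
        print(tag.name, len(tag.properties.address_prefixes), "prefixes")
```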
+
+## Application Insights and Log Analytics APIs
+
+| Purpose | URI | IP | Ports |
+| | | | |
+| API |`api.applicationinsights.io`<br/>`api1.applicationinsights.io`<br/>`api2.applicationinsights.io`<br/>`api3.applicationinsights.io`<br/>`api4.applicationinsights.io`<br/>`api5.applicationinsights.io`<br/>`dev.applicationinsights.io`<br/>`dev.applicationinsights.microsoft.com`<br/>`dev.aisvc.visualstudio.com`<br/>`www.applicationinsights.io`<br/>`www.applicationinsights.microsoft.com`<br/>`www.aisvc.visualstudio.com`<br/>`api.loganalytics.io`<br/>`*.api.loganalytics.io`<br/>`dev.loganalytics.io`<br>`docs.loganalytics.io`<br/>`www.loganalytics.io` |20.37.52.188 <br/> 20.37.53.231 <br/> 20.36.47.130 <br/> 20.40.124.0 <br/> 20.43.99.158 <br/> 20.43.98.234 <br/> 13.70.127.61 <br/> 40.81.58.225 <br/> 20.40.160.120 <br/> 23.101.225.155 <br/> 52.139.8.32 <br/> 13.88.230.43 <br/> 52.230.224.237 <br/> 52.242.230.209 <br/> 52.173.249.138 <br/> 52.229.218.221 <br/> 52.229.225.6 <br/> 23.100.94.221 <br/> 52.188.179.229 <br/> 52.226.151.250 <br/> 52.150.36.187 <br/> 40.121.135.131 <br/> 20.44.73.196 <br/> 20.41.49.208 <br/> 40.70.23.205 <br/> 20.40.137.91 <br/> 20.40.140.212 <br/> 40.89.189.61 <br/> 52.155.118.97 <br/> 52.156.40.142 <br/> 23.102.66.132 <br/> 52.231.111.52 <br/> 52.231.108.46 <br/> 52.231.64.72 <br/> 52.162.87.50 <br/> 23.100.228.32 <br/> 40.127.144.141 <br/> 52.155.162.238 <br/> 137.116.226.81 <br/> 52.185.215.171 <br/> 40.119.4.128 <br/> 52.171.56.178 <br/> 20.43.152.45 <br/> 20.44.192.217 <br/> 13.67.77.233 <br/> 51.104.255.249 <br/> 51.104.252.13 <br/> 51.143.165.22 <br/> 13.78.151.158 <br/> 51.105.248.23 <br/> 40.74.36.208 <br/> 40.74.59.40 <br/> 13.93.233.49 <br/> 52.247.202.90 |80,443 |
+| Azure Pipeline annotations extension | aigs1.aisvc.visualstudio.com |dynamic|443 |
+
+## Application Insights analytics
+
+| Purpose | URI | IP | Ports |
+| | | | |
+| Analytics portal | analytics.applicationinsights.io | dynamic | 80,443 |
+| CDN | applicationanalytics.azureedge.net | dynamic | 80,443 |
+| Media CDN | applicationanalyticsmedia.azureedge.net | dynamic | 80,443 |
+
+The `*.applicationinsights.io` domain is owned by the Application Insights team.
+
+## Log Analytics portal
+
+| Purpose | URI | IP | Ports |
+| | | | |
+| Portal | portal.loganalytics.io | dynamic | 80,443 |
+| CDN | applicationanalytics.azureedge.net | dynamic | 80,443 |
+
+The `*.loganalytics.io` domain is owned by the Log Analytics team.
+
+## Application Insights Azure portal extension
+
+| Purpose | URI | IP | Ports |
+| | | | |
+| Application Insights extension | stamp2.app.insightsportal.visualstudio.com | dynamic | 80,443 |
+| Application Insights extension CDN | insightsportal-prod2-cdn.aisvc.visualstudio.com<br/>insightsportal-prod2-asiae-cdn.aisvc.visualstudio.com<br/>insightsportal-cdn-aimon.applicationinsights.io | dynamic | 80,443 |
+
+## Application Insights SDKs
+
+| Purpose | URI | IP | Ports |
+| | | | |
+| Application Insights JS SDK CDN | az416426.vo.msecnd.net<br/>js.monitor.azure.com | dynamic | 80,443 |
++
+## Action group webhooks
+
+You can query the list of IP addresses used by action groups by using the [Get-AzNetworkServiceTag PowerShell command](/powershell/module/az.network/Get-AzNetworkServiceTag).
+
+### Action group service tag
+
+Managing changes to source IP addresses can be time consuming. Using *service tags* eliminates the need to update your configuration. A service tag represents a group of IP address prefixes from a specific Azure service. Microsoft manages the IP addresses and automatically updates the service tag as addresses change, so you don't have to update the network security rules for an action group yourself.
+
+1. In the Azure portal under **Azure Services**, search for **Network Security Group**.
+1. Select **Add** and create a network security group:
+
+ 1. Add the resource group name, and then enter **Instance details** information.
+ 1. Select **Review + Create**, and then select **Create**.
+
+ :::image type="content" source="alerts/media/action-groups/action-group-create-security-group.png" alt-text="Screenshot that shows how to create a network security group." border="true":::
+
+1. Go to **Resource Group**, and then select the network security group you created:
+
+ 1. Select **Inbound security rules**.
+ 1. Select **Add**.
+
+ :::image type="content" source="alerts/media/action-groups/action-group-add-service-tag.png" alt-text="Screenshot that shows how to add inbound security rules." border="true":::
+
+1. A new window opens in the right pane:
+
+ 1. Under **Source**, enter **Service Tag**.
+ 1. Under **Source service tag**, enter **ActionGroup**.
+ 1. Select **Add**.
+
+ :::image type="content" source="alerts/media/action-groups/action-group-service-tag.png" alt-text="Screenshot that shows how to add a service tag." border="true":::
+
+## Profiler
+
+| Purpose | URI | IP | Ports |
+| | | | |
+| Agent | agent.azureserviceprofiler.net<br/>*.agent.azureserviceprofiler.net | 20.190.60.38<br/>20.190.60.32<br/>52.173.196.230<br/>52.173.196.209<br/>23.102.44.211<br/>23.102.45.216<br/>13.69.51.218<br/>13.69.51.175<br/>138.91.32.98<br/>138.91.37.93<br/>40.121.61.208<br/>40.121.57.2<br/>51.140.60.235<br/>51.140.180.52<br/>52.138.31.112<br/>52.138.31.127<br/>104.211.90.234<br/>104.211.91.254<br/>13.70.124.27<br/>13.75.195.15<br/>52.185.132.101<br/>52.185.132.170<br/>20.188.36.28<br/>40.89.153.171<br/>52.141.22.239<br/>52.141.22.149<br/>102.133.162.233<br/>102.133.161.73<br/>191.232.214.6<br/>191.232.213.239 | 443
+| Portal | gateway.azureserviceprofiler.net | dynamic | 443
+| Storage | *.core.windows.net | dynamic | 443
+
+## Snapshot Debugger
+
+> [!NOTE]
+> Profiler and Snapshot Debugger share the same set of IP addresses.
+
+| Purpose | URI | IP | Ports |
+| | | | |
+| Agent | agent.azureserviceprofiler.net<br/>*.agent.azureserviceprofiler.net | 20.190.60.38<br/>20.190.60.32<br/>52.173.196.230<br/>52.173.196.209<br/>23.102.44.211<br/>23.102.45.216<br/>13.69.51.218<br/>13.69.51.175<br/>138.91.32.98<br/>138.91.37.93<br/>40.121.61.208<br/>40.121.57.2<br/>51.140.60.235<br/>51.140.180.52<br/>52.138.31.112<br/>52.138.31.127<br/>104.211.90.234<br/>104.211.91.254<br/>13.70.124.27<br/>13.75.195.15<br/>52.185.132.101<br/>52.185.132.170<br/>20.188.36.28<br/>40.89.153.171<br/>52.141.22.239<br/>52.141.22.149<br/>102.133.162.233<br/>102.133.161.73<br/>191.232.214.6<br/>191.232.213.239 | 443
+| Portal | gateway.azureserviceprofiler.net | dynamic | 443
+| Storage | *.core.windows.net | dynamic | 443
+
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Can I monitor an intranet web server?
+
+Yes, but you need to allow traffic to our services by either firewall exceptions or proxy redirects:
+
+- QuickPulse `https://rt.services.visualstudio.com:443`
+- ApplicationIdProvider `https://dc.services.visualstudio.com:443`
+- TelemetryChannel `https://dc.services.visualstudio.com:443`
+
+See [IP addresses used by Azure Monitor](./ip-addresses.md) to review our full list of services and IP addresses.
+
+### How do I reroute traffic from my server to a gateway on my intranet?
+
+Route traffic from your server to a gateway on your intranet by overwriting endpoints in your configuration. If the `Endpoint` properties aren't present in your config, the telemetry modules, telemetry channel, and application ID provider use the default values shown in the following ApplicationInsights.config example.
+
+Your gateway should route traffic to our endpoint's base address. In your configuration, replace the default values with `http://<your.gateway.address>/<relative path>`.
+
+#### Example ApplicationInsights.config with default endpoints:
+
+```xml
+<ApplicationInsights>
+...
+<TelemetryModules>
+ <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector">
+ <QuickPulseServiceEndpoint>https://rt.services.visualstudio.com/QuickPulseService.svc</QuickPulseServiceEndpoint>
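    <!-- Behind an intranet gateway, this value would become, for example, http://<your.gateway.address>/QuickPulseService.svc (the gateway address is a placeholder) -->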
+ </Add>
+</TelemetryModules>
+ ...
+<TelemetryChannel>
+ <EndpointAddress>https://dc.services.visualstudio.com/v2/track</EndpointAddress>
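  <!-- Behind an intranet gateway, this value would become, for example, http://<your.gateway.address>/v2/track -->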
+</TelemetryChannel>
+...
+<ApplicationIdProvider Type="Microsoft.ApplicationInsights.Extensibility.Implementation.ApplicationId.ApplicationInsightsApplicationIdProvider, Microsoft.ApplicationInsights">
+ <ProfileQueryEndpoint>https://dc.services.visualstudio.com/api/profiles/{0}/appId</ProfileQueryEndpoint>
+</ApplicationIdProvider>
+...
+</ApplicationInsights>
+```
+
+> [!NOTE]
+> `ApplicationIdProvider` is available starting in v2.6.0.
azure-monitor Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/availability-zones.md
Azure Monitor currently supports data resilience for availability-zone-enabled d
||||||
| Brazil South | France Central | Qatar Central | South Africa North | Australia East |
| Canada Central | Germany West Central | UAE North | | Central India |
- | Central US | North Europe | | | Japan East |
+ | Central US | North Europe | Israel Central | | Japan East |
| East US | Norway East | | | Korea Central |
| East US 2 | UK South | | | Southeast Asia |
| South Central US | West Europe | | | East Asia |
| West US 2 | Sweden Central | | | |
| West US 3 | Switzerland North | | | |
- | | Poland Central | | | |
+ | | Poland Central | | | |
+ | | Italy North | | | |
> [!NOTE] > Moving to a dedicated cluster in a region that supports availability zones protects data ingested after the move, not historical data.
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clust
Data ingested in the last 14 days or recently used in queries is kept in hot-cache (SSD-backed) for query efficiency. SSD data is encrypted with Microsoft keys regardless of the customer-managed key configuration, but your control over SSD access adheres to [key revocation](#key-revocation).
-Log Analytics Dedicated Clusters [pricing model](./logs-dedicated-clusters.md#cluster-pricing-model) requires commitment Tier starting at 500 GB per day, and can have values of 500, 1000, 2000 or 5000 GB per day.
+Log Analytics Dedicated Clusters [pricing model](./logs-dedicated-clusters.md#cluster-pricing-model) requires commitment Tier starting at 100 GB per day.
## How Customer-managed key works in Azure Monitor
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
You should configure the daily cap setting for both Application Insights and Log
The maximum cap for an Application Insights classic resource is 1,000 GB/day unless you request a higher maximum for a high-traffic application. When you create a resource in the Azure portal, the daily cap is set to 100 GB/day. When you create a resource in Visual Studio, the default is small (only 32.3 MB/day). The daily cap default is set to facilitate testing. It's intended that the user will raise the daily cap before deploying the app into production. > [!NOTE]
-> If you are using connection strings to send data to Application Insights using [regional ingestion endpoints](../app/ip-addresses.md#outgoing-ports), then the Application Insights and Log Analytics daily cap settings are effective per region. If you are using only instrumentation key (ikey) to send data to Application Insights using the [global ingestion endpoint](../app/ip-addresses.md#outgoing-ports), then the Application Insights daily cap setting may not be effective across regions, but the Log Analytics daily cap setting will still apply.
+> If you are using connection strings to send data to Application Insights using [regional ingestion endpoints](../ip-addresses.md#outgoing-ports), then the Application Insights and Log Analytics daily cap settings are effective per region. If you are using only instrumentation key (ikey) to send data to Application Insights using the [global ingestion endpoint](../ip-addresses.md#outgoing-ports), then the Application Insights daily cap setting may not be effective across regions, but the Log Analytics daily cap setting will still apply.
We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription has a spending limit, the daily cap dialog has instructions to remove the spending limit and enable the daily cap to be raised beyond 32.3 MB/day.
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Log Analytics Dedicated Clusters use a commitment tier pricing model of at least
## Prerequisites -- Dedicated clusters require a minimum ingestion commitment of 500 GB per day.
+- Dedicated clusters require a minimum ingestion commitment of 100 GB per day.
- When creating a dedicated cluster, you can't name it with the same name as a cluster that was deleted within the past two weeks. ## Required permissions
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
You can configure default [data retention and archive settings](data-retention-a
### Commitment tiers [Commitment tiers](../logs/cost-logs.md#commitment-tiers) provide a discount to your workspace ingestion costs when you commit to a specific amount of daily data. You might choose to consolidate data in a single workspace to reach the level of a particular tier. This same volume of data spread across multiple workspaces wouldn't be eligible for the same tier, unless you have a dedicated cluster.
-If you can commit to daily ingestion of at least 500 GB per day, you should implement a [dedicated cluster](../logs/cost-logs.md#dedicated-clusters) that provides extra functionality and performance. Dedicated clusters also allow you to combine the data from multiple workspaces in the cluster to reach the level of a commitment tier.
+If you can commit to daily ingestion of at least 100 GB per day, you should implement a [dedicated cluster](../logs/cost-logs.md#dedicated-clusters) that provides extra functionality and performance. Dedicated clusters also allow you to combine the data from multiple workspaces in the cluster to reach the level of a commitment tier.
-- **If you'll ingest at least 500 GB per day across all resources:** Create a dedicated cluster and set the appropriate commitment tier.
+- **If you'll ingest at least 100 GB per day across all resources:** Create a dedicated cluster and set the appropriate commitment tier.
- **If you'll ingest at least 100 GB per day across resources:** Consider combining them into a single workspace to take advantage of a commitment tier. ### Legacy agent limitations
backup Backup Azure Alternate Dpm Server Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-alternate-dpm-server-troubleshoot.md
+
+ Title: Troubleshoot data recovery from Microsoft Azure Backup Server by using Azure Backup
+description: Learn how to troubleshoot data recovery from Microsoft Azure Backup Server.
+ Last updated : 01/26/2024++++++
+# Troubleshoot data recovery from Microsoft Azure Backup Server
+
+This article provides troubleshooting steps that help you resolve error messages that occur during data recovery from Microsoft Azure Backup Server.
+
+## Troubleshoot error messages
+
+| Error Message | Cause | Resolution |
+|: |: |: |
+|This server is not registered to the vault specified by the vault credential. | This error appears when the vault credential file selected doesn't belong to the Recovery Services vault associated with Azure Backup Server on which the recovery is attempted. | Download the vault credential file from the Recovery Services vault to which the Azure Backup Server is registered. |
+|Either the recoverable data isn't available or the selected server isn't a DPM server. | There are no other Azure Backup Servers registered to the Recovery Services vault, or the servers haven't yet uploaded the metadata, or the selected server isn't an Azure Backup Server (using Windows Server or Windows Client). | If there are other Azure Backup Servers registered to the Recovery Services vault, ensure that the latest Azure Backup agent is installed. <br>If there are other Azure Backup Servers registered to the Recovery Services vault, wait for a day after installation to start the recovery process. The nightly job will upload the metadata for all the protected backups to cloud. The data will be available for recovery. |
+|No other DPM server is registered to this vault. | There are no other Azure Backup Servers that are registered to the vault from which the recovery is being attempted. | If there are other Azure Backup Servers registered to the Recovery Services vault, ensure that the latest Azure Backup agent is installed.<br>If there are other Azure Backup Servers registered to the Recovery Services vault, wait for a day after installation to start the recovery process. The nightly job uploads the metadata for all protected backups to cloud. The data will be available for recovery. |
+|The encryption passphrase provided does not match with passphrase associated with the following server: **\<server name>** | The encryption passphrase used to encrypt the Azure Backup Server's data that's being recovered doesn't match the encryption passphrase provided. The agent is unable to decrypt the data, and so the recovery fails. | Provide the exact same encryption passphrase associated with the Azure Backup Server whose data is being recovered. |
+
+## Next steps
+
+- [Common questions](backup-azure-vm-backup-faq.yml) about Azure VM backups.
+- [Common questions](backup-azure-file-folder-backup-faq.yml) about the Azure Backup agent.
+- [Recover data from Azure Backup Server](backup-azure-alternate-dpm-server.md).
backup Backup Azure Alternate Dpm Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-alternate-dpm-server.md
Title: Recover data from an Azure Backup Server
+ Title: Recover data from an Azure Backup Server by using Azure Backup
description: Recover the data you've protected to a Recovery Services vault from any Azure Backup Server registered to that vault. Previously updated : 01/24/2023 Last updated : 01/26/2024 -+
To recover data from an Azure Backup Server, follow these steps:
![Screenshot shows how to clear external DPM.](./media/backup-azure-alternate-dpm-server/clear-external-dpm.png)
-## Troubleshoot error messages
-
-| Error Message | Cause | Resolution |
-|: |: |: |
-|This server is not registered to the vault specified by the vault credential. | This error appears when the vault credential file selected doesn't belong to the Recovery Services vault associated with Azure Backup Server on which the recovery is attempted. | Download the vault credential file from the Recovery Services vault to which the Azure Backup Server is registered. |
-|Either the recoverable data isn't available or the selected server isn't a DPM server. | There are no other Azure Backup Servers registered to the Recovery Services vault, or the servers haven't yet uploaded the metadata, or the selected server isn't an Azure Backup Server (using Windows Server or Windows Client). | If there are other Azure Backup Servers registered to the Recovery Services vault, ensure that the latest Azure Backup agent is installed. <br>If there are other Azure Backup Servers registered to the Recovery Services vault, wait for a day after installation to start the recovery process. The nightly job will upload the metadata for all the protected backups to cloud. The data will be available for recovery. |
-|No other DPM server is registered to this vault. | There are no other Azure Backup Servers that are registered to the vault from which the recovery is being attempted. | If there are other Azure Backup Servers registered to the Recovery Services vault, ensure that the latest Azure Backup agent is installed.<br>If there are other Azure Backup Servers registered to the Recovery Services vault, wait for a day after installation to start the recovery process. The nightly job uploads the metadata for all protected backups to cloud. The data will be available for recovery. |
-|The encryption passphrase provided does not match with passphrase associated with the following server: **\<server name>** | The encryption passphrase used in the process of encrypting the data from the Azure Backup ServerΓÇÖs data that's being recovered doesn't match the encryption passphrase provided. The agent is unable to decrypt the data, and so the recovery fails. | Provide the exact same encryption passphrase associated with the Azure Backup Server whose data is being recovered. |
## Next steps Read the other FAQs:
-* [Common questions](backup-azure-vm-backup-faq.yml) about Azure VM backups
-* [Common questions](backup-azure-file-folder-backup-faq.yml) about the Azure Backup agent
+- [Common questions](backup-azure-vm-backup-faq.yml) about Azure VM backups.
+- [Common questions](backup-azure-file-folder-backup-faq.yml) about the Azure Backup agent.
+- [Troubleshoot error messages](backup-azure-alternate-dpm-server-troubleshoot.md) caused during data recovery from Microsoft Azure Backup Server.
backup Backup Azure Backup Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-import-export.md
Title: Offline seeding workflow for MARS using customer-owned disks with Azure I
description: Learn how you can use Azure Backup to send data off the network by using the Azure Import/Export service. This article explains the offline seeding of the initial backup data by using the Azure Import/Export service. Previously updated : 12/05/2022 Last updated : 01/26/2024
This article describes how to send the initial full backup data from MARS to Azu
Azure Backup has several built-in efficiencies that save network and storage costs during the initial full backups of data to Azure. Initial full backups typically transfer large amounts of data and require more network bandwidth when compared to subsequent backups that transfer only the deltas/incrementals. Through the process of offline seeding, Azure Backup can use disks to upload the offline backup data to Azure.
-In this article, you'll learn about:
+## Offline-seeding flow
-> [!div class="checklist"]
-> - Offline-seeding process
-> - Supported configurations
-> - Prerequisites
-> - Workflow
-> - How to initiate offline backup
-> - How to prepare SATA drives and ship to Azure
-> - How to update the tracking and shipping details on the Azure import job
+The Azure Backup offline-seeding process is tightly integrated with the [Azure Import/Export service](../import-export/storage-import-export-service.md). You can use this service to transfer initial backup data to Azure by using disks. If you have terabytes (TBs) of initial backup data that need to be transferred over a high-latency and low-bandwidth network, you can use the offline-seeding workflow to ship the initial backup copy on one or more hard drives to an Azure datacenter.
-## Offline-seeding process
-
-The Azure Backup offline-seeding process is tightly integrated with the [Azure Import/Export service](../import-export/storage-import-export-service.md). You can use this service to transfer initial backup data to Azure by using disks. If you have terabytes (TBs) of initial backup data that need to be transferred over a high-latency and low-bandwidth network, you can use the offline-seeding workflow to ship the initial backup copy, on one or more hard drives to an Azure datacenter. The following image provides an overview of the steps in the workflow.
-
- :::image type="content" source="./media/backup-azure-backup-import-export/offlinebackupworkflowoverview.png" alt-text="Screenshot shows the overview of offline import workflow process.":::
-
-The offline backup process involves these steps:
+To perform the offline backup:
1. Instead of sending the backup data over the network, write the backup data to a staging location.
1. Use the *AzureOfflineBackupDiskPrep* utility to write the data in the staging location to one or more SATA disks.
The offline backup process involves these steps:
1. At the Azure datacenter, the data on the disks is copied to an Azure storage account.
1. Azure Backup copies the backup data from the storage account to the Recovery Services vault, and incremental backups are scheduled.
+The following diagram provides an overview of the offline-seeding flow:
+
+ :::image type="content" source="./media/backup-azure-backup-import-export/offlinebackupworkflowoverview.png" alt-text="Diagram shows the overview of offline import workflow process.":::
+ >[!Note] >Ensure that you use the latest MARS agent (version 2.0.9250.0 or higher) before following the below sections. [Learn more](backup-azure-mars-troubleshoot.md#mars-offline-seeding-using-customer-owned-disks-importexport-is-not-working).
backup Backup Azure Backup Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-sql.md
Title: Back up SQL Server to Azure as a DPM workload description: An introduction to backing up SQL Server databases by using the Azure Backup service Previously updated : 01/17/2023 Last updated : 01/26/2024 -+ # Back up SQL Server to Azure as a DPM workload
This article describes how to back up and restore the SQL Server databases using
Azure Backup helps you to back up SQL Server databases to Azure via an Azure account. If you don't have one, you can create a free account in just a few minutes. For more information, see [Create your Azure free account](https://azure.microsoft.com/pricing/free-trial/).
-## SQL Server database backup workflow
+## Backup flow for SQL Server database
To back up a SQL Server database to Azure and to recover it from Azure:
To back up a SQL Server database to Azure and to recover it from Azure:
* DPM supports multi-site cluster configurations for an instance of SQL Server. * When you protect databases that use the Always On feature, DPM has the following limitations: * DPM will honor the backup policy for availability groups that's set in SQL Server based on the backup preferences, as follows:
- * Prefer secondary - Backups should occur on a secondary replica except when the primary replica is the only replica online. If there are multiple secondary replicas available, then the node with the highest backup priority will be selected for backup. IF only the primary replica is available, then the backup should occur on the primary replica.
+ * Prefer secondary - Backups should occur on a secondary replica except when the primary replica is the only replica online. If there are multiple secondary replicas available, then the node with the highest backup priority will be selected for backup. If only the primary replica is available, then the backup should occur on the primary replica.
* Secondary only - Backup shouldn't be performed on the primary replica. If the primary replica is the only one online, the backup shouldn't occur. * Primary - Backups should always occur on the primary replica. * Any Replica - Backups can happen on any of the availability replicas in the availability group. The node to be backed up from will be based on the backup priorities for each of the nodes.
- * Note the following:
- * Backups can happen from any readable replica - that is, primary, synchronous secondary, asynchronous secondary.
- * If any replica is excluded from backup, for example **Exclude Replica** is enabled or is marked as not readable, then that replica won't be selected for backup under any of the options.
- * If multiple replicas are available and readable, then the node with the highest backup priority will be selected for backup.
- * If the backup fails on the selected node, then the backup operation fails.
- * Recovery to the original location isn't supported.
+ >[!Note]
+ >- Backups can happen from any readable replica - that is, primary, synchronous secondary, asynchronous secondary.
+ >- If any replica is excluded from backup, for example **Exclude Replica** is enabled or is marked as not readable, then that replica won't be selected for backup under any of the options.
+ >- If multiple replicas are available and readable, then the node with the highest backup priority will be selected for backup.
+ >- If the backup fails on the selected node, then the backup operation fails.
+ >- Recovery to the original location isn't supported.
* SQL Server 2014 or above backup issues: * SQL server 2014 added a new feature to create a [database for on-premises SQL Server in Microsoft Azure Blob storage](/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure). DPM can't be used to protect this configuration. * There are some known issues with "Prefer secondary" backup preference for the SQL Always On option. DPM always takes a backup from secondary. If no secondary can be found, then the backup fails.
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md
Title: Troubleshoot the Azure Backup agent description: In this article, learn how to troubleshoot the installation and registration of the Azure Backup agent. Previously updated : 12/05/2022 Last updated : 05/31/2023 +
Unable to find changes in a file. This could be due to various reasons. Please r
## MARS offline seeding using customer-owned disks (Import/Export) is not working
-Azure Import/Export now uses Azure Data Box APIs for offline seeding on customer-owned disks. The Azure portal also list the Import/Export jobs created using the new API under [Azure Data Box jobs](../import-export/storage-import-export-view-drive-status.md?tabs=azure-portal-preview) with the Model column as Import/Export.
+Azure Import/Export now uses Azure Data Box APIs for offline seeding on customer-owned disks. The Azure portal also lists the Import/Export jobs created using the new API under [Azure Data Box jobs](../import-export/storage-import-export-view-drive-status.md?tabs=azure-portal-preview) with the Model column as Import/Export.
MARS agent versions lower than *2.0.9250.0* used the [old Azure Import/Export APIs](/rest/api/storageimportexport/), which will be discontinued after February 28, 2023, and the old MARS agents (version lower than 2.0.9250.0) can't do offline seeding using your own disks. So, we recommend that you use MARS agent 2.0.9250 or higher, which uses the new Azure Data Box APIs for offline seeding on your own disks.
backup Backup Center Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-center-actions.md
Title: Perform actions using Backup center description: This article explains how to perform actions using Backup center Previously updated : 12/08/2022 Last updated : 03/27/2023 +
backup Backup Sql Server On Availability Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-on-availability-groups.md
Title: Back up SQL Server always on availability groups description: In this article, learn how to back up SQL Server on availability groups. Previously updated : 08/11/2022 Last updated : 01/25/2024
The backup preference used by Azure Backup SQL AG supports full and differential
| Prefer Secondary | Primary replica | Secondary replicas are preferred, but backups can run on primary replica also. |
| None/Any | Primary replica | Any replica |
-The workload backup extension gets installed on the node when it is registered with the Azure Backup service. When an AG database is configured for backup, the backup schedules are pushed to all the registered nodes of the AG. The schedules fire on all the AG nodes and the workload backup extensions on these nodes synchronize between themselves to decide which node will perform the backup. The node selection depends on the backup type and the backup preference as explained in section 1.
+The workload backup extension gets installed on the node when it's registered with the Azure Backup service. When an AG database is configured for backup, the backup schedules are pushed to all the registered nodes of the AG. The schedules fire on all the AG nodes and the workload backup extensions on these nodes synchronize between themselves to decide which node will perform the backup. The node selection depends on the backup type and the backup preference as explained in section 1.
The selected node proceeds with the backup job, whereas the job triggered on the other nodes bails out, that is, it skips the job.
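
To make that coordination easier to picture, here's a small illustrative Python model of the selection rules summarized in the table above. It isn't the workload backup extension's actual logic, and the field names are invented for the example:

```python
def pick_backup_node(backup_type, preference, replicas):
    """Illustrative only: full/differential backups stay on the primary
    replica; log backups follow the AG backup preference, and the eligible
    replica with the highest backup priority runs the job."""
    primary = next(r for r in replicas if r["role"] == "primary")
    secondaries = [
        r for r in replicas
        if r["role"] == "secondary"
        and r.get("readable", True)
        and not r.get("excluded", False)
    ]

    if backup_type in ("full", "differential"):
        return primary

    if preference == "secondary_only":
        candidates = secondaries                  # never the primary
    elif preference == "prefer_secondary":
        candidates = secondaries or [primary]     # primary only as a fallback
    elif preference == "primary":
        candidates = [primary]
    else:                                         # none / any replica
        candidates = secondaries + [primary]

    if not candidates:
        raise RuntimeError("No eligible replica; the backup job fails.")
    return max(candidates, key=lambda r: r.get("backup_priority", 50))
```
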
Based on the above sample AG deployment, following are various considerations:
- VM4 can't be registered to Vault 1 as it's in a different region.
- If the backup preference is _secondary only_, VM1 (Primary) and VM2 (Secondary) must be registered to the Vault 1 (because full backups require the primary node and logs require a secondary node). For other backup preferences, VM1 (Primary) must be registered to Vault 1, VM2 is optional (because all backups can run on primary node).
- While VM3 could be registered to vault 2 in subscription 2 and the AG databases would then show up for protection in vault 2 but due to absence of the primary node in vault 2, configuring backups would fail.
-- Similarly, while VM4 could be registered to vault 4 in region 2, configuring backups would fail since the primary node is not registered in vault 4.
+- Similarly, while VM4 could be registered to vault 4 in region 2, configuring backups would fail since the primary node isn't registered in vault 4.
## Handle failover
Based on the above sample AG deployment, following are the various failover poss
- If the backup preference isn't secondary-only, backups can be configured now in Vault 2, because the primary node is registered in this vault. But this can lead to conflicts/backup failures. More about this in [Configure backups for a multi-region AG](#configure-backups-for-a-multi-region-ag). - Failover to VM4 (another region) - As backups aren't configured in Vault 4, no backups would happen.
- - If the backup preference is not secondary-only, backups can be configured now in Vault 4, because the primary node is registered in this vault. But this can lead to conflicts/backup failures. More about this in [Configure backups for a multi-region AG](#configure-backups-for-a-multi-region-ag).
+ - If the backup preference isn't secondary-only, backups can be configured now in Vault 4, because the primary node is registered in this vault. But this can lead to conflicts/backup failures. More about this in [Configure backups for a multi-region AG](#configure-backups-for-a-multi-region-ag).
## Configure backups for a multi-region AG Recovery services vault doesn't support cross-subscription or cross-region backups. This section summarizes how to enable backups for AGs that are spanning subscriptions or Azure regions and the associated considerations. -- Evaluate if you really need to enable backups from all nodes. If one region/subscription has most of the AG nodes and failover to other nodes happens very rarely, setting up the backup in that first region may be enough. If the failovers to other region/subscription happen frequently and for prolonged duration, then you may want to set aup backups proactively in the other region as well.
+- Evaluate if you really need to enable backups from all nodes. If one region/subscription has most of the AG nodes and failover to other nodes happens very rarely, setting up the backup in that first region may be enough. If the failovers to other region/subscription happen frequently and for prolonged duration, then you may want to set up backups proactively in the other region as well.
- Each vault where the backup gets enabled will have its own set of recovery point chains. Restores from these recovery points can be done to VMs registered in that vault only. - Full/differential backups will happen successfully only in the vault that has the primary node. These backups in other vaults will keep failing. -- Log backups will keep working in the previous vault till a log backup runs in the new vault (that is, in the vault where the new primary node is present) and _breaks_ the log chain for old vault.
+- Log backups will keep working in the previous vault till a log backup runs in the new vault (that is, in the vault where the new primary node is present) and _breaks_ the log chain for the old vault.
>[!Note] >There's a hard limit of 15 days beyond which log backups will start failing.
The database must be configured for protection from under the standalone instanc
When a new node gets added to an AG that is configured for backups, the workload backup extensions running on the already registered AG nodes detect the AG topology change and inform the Azure Backup service during the next scheduled database discovery job. When this new node gets registered for backups to the same Recovery Services vault as the other existing nodes, Azure Backup service triggers a workflow that configures this new node with the necessary metadata for performing AG backups.
-After this, the new node syncs the AG backup schedule information from the Azure Backup service and starts participating in the synchronized backup process. If the new node is not able to sync the backup schedules and participate in backups, triggering a re-registration on the node forces reconfiguration of the node for AG backups as well. Similarly, node addition, the workload extensions detect the AG topology change in this case and inform the Azure Backup service. The service starts a node _un-configuration_ workflow in the removed node to clear the backup schedules for AG databases and delete the AG related metadata.
+After this, the new node syncs the AG backup schedule information from the Azure Backup service and starts participating in the synchronized backup process. If the new node isn't able to sync the backup schedules and participate in backups, triggering a re-registration on the node forces reconfiguration of the node for AG backups as well. Similarly, node addition, the workload extensions detect the AG topology change in this case and inform the Azure Backup service. The service starts a node _un-configuration_ workflow in the removed node to clear the backup schedules for AG databases and delete the AG related metadata.
## Un-register an AG node from Azure Backup If a node is part of an AG that has one or more databases configured for backup, then Azure Backup doesn't allow un-registration of that node. This is to prevent future backup failures in case the backup preference can't be met without this node. To unregister the node, first you need to remove it from the AG. When the node _un-configuration_ workflow completes, cleaning up that node, you can unregister it.
-Restore a database from Azure Backup to an AG
-SQL Availability Groups do not support directly restoring a database into AG. The database needs to be restored to a standalone SQL instance and then needs to be joined to an AG.
+Restore a database from Azure Backup to an AG: SQL Availability Groups don't support directly restoring a database into an AG. The database needs to be restored to a standalone SQL instance and then needs to be joined to an AG.
++++
+## Availability group re-creation scenarios for the SQL database server
+
+When you re-create an Availability group (AG), duplicate AGs and their backup items can get listed as *protectable items* or *protected items* in the following scenarios:
+
+- Re-creating AGs that are already protected causes them to appear as duplicate AGs on the **Configure Backup** page and in the **Protected items** list. If you want to retain the backup data that is already present in the older AG, then stop the backup by using the **Stop protection and retain data** option before re-creating and scheduling backups on the new AG items.
+
+ By design, Azure Backup lists the duplicate items on the **Protected items** list and on the **Configure Backup** page or **Protectable items** list, and displays these items for as long as you choose to retain the backup data.
+
+- If you don't want the backup data from the older AG, then stop the backup operation by using the **Stop protection and delete data** option for the older item before re-creating and scheduling backups on the new AG.
+
+ >[!Caution]
+ >Stop protection and delete data is a destructive operation.
+
+- You can re-create the AG after performing one of the above Stop protection operations to avoid backup failures.
## Next steps
backup Offline Backup Azure Data Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/offline-backup-azure-data-box.md
Title: Offline backup by using Azure Data Box description: Learn how you can use Azure Data Box to seed large initial backup data offline from the MARS Agent to a Recovery Services vault. Previously updated : 1/23/2023 Last updated : 10/25/2023 -+ # Azure Backup offline backup by using Azure Data Box
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
The following are known limitations in Chaos Studio.
- **Resource Move not supported** - Azure Chaos Studio tracked resources (for example, Experiments) currently do NOT support Resource Move. Experiments can be easily copied (by copying Experiment JSON) for use in other subscriptions, resource groups, or regions. Experiments can also already target resources across regions. Extension resources (Targets and Capabilities) do support Resource Move. - **VMs require network access to Chaos studio** - For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service: - Regional endpoints to allowlist are listed in [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md#network-security).
- - If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) are also required.
+ - If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/ip-addresses.md) are also required.
- **Supported VM operating systems** - If you run an experiment that makes use of the Chaos Studio agent, the virtual machine must run one of the following operating systems:
communication-services Calling Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/calling-chat.md
Title: Teams calling and chat interoperability
+ Title: Teams calling interoperability
-description: Teams calling and chat interoperability
+description: Teams calling interoperability
Last updated 10/15/2021
-# Teams Interoperability: Calling and chat
+# Teams Interoperability: Calling
-> [!IMPORTANT]
-> Calling and chat interoperability is in private preview and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/F3WLqPjw0D), and we'll review your scenario(s) and evaluate your participation in the preview.
->
-> Private Preview APIs and SDKs are provided without a service-level agreement, aren't appropriate for production workloads, and should only be used with test users and data. Certain features may not be supported or have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> For support, questions, or to provide feedback or report issues, please use the [Teams interop ad hoc calling and chat channel](https://teams.microsoft.com/l/channel/19%3abfc7d5e0b883455e80c9509e60f908fb%40thread.tacv2/Teams%2520Interop%2520ad%2520hoc%2520calling%2520and%2520chat?groupId=d78f76f3-4229-4262-abfb-172587b7a6bb&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47). You must be a member of the Azure Communication Service TAP team.
-As part of this preview, the Azure Communication Services SDKs can be used to build applications that enable bring your own identity (BYOI) users to start 1:1 calls or 1:n chats with Teams users. [Standard Azure Communication Services pricing](https://azure.microsoft.com/pricing/details/communication-services/) applies to these users, but there's no extra fee for the interoperability capability. Custom applications built with Azure Communication Services to connect and communicate with Teams users or Teams voice applications can be used by end users or by bots, and there's no differentiation in how they appear to Teams users in Teams applications unless explicitly indicated by the developer of the application with a display name.
+As part of this preview, the Azure Communication Services SDKs can be used to build applications that enable bring your own identity (BYOI) users to start 1:1 calls with Teams users. [Standard Azure Communication Services pricing](https://azure.microsoft.com/pricing/details/communication-services/) applies to these users, but there's no extra fee for the interoperability capability. Custom applications built with Azure Communication Services to connect and communicate with Teams users or Teams voice applications can be used by end users or by bots, and there's no differentiation in how they appear to Teams users in Teams applications unless explicitly indicated by the developer of the application with a display name.
-To enable calling and chat between your Communication Services users and Teams tenant, allow your tenant via the [form](https://forms.office.com/r/F3WLqPjw0D) and enable the connection between the tenant and Communication Services resource.
+To enable calling between your Communication Services users and Teams tenant, allow your tenant via the [form](https://forms.office.com/r/F3WLqPjw0D) and enable the connection between the tenant and Communication Services resource.
[!INCLUDE [Enable interoperability in your Teams tenant](./../includes/enable-interoperability-for-teams-tenant.md)] ## Get Teams user ID
-To start a call or chat with a Teams user or Teams Voice application, you need an identifier of the target. You have the following options to retrieve the ID:
+To start a call with a Teams user or Teams Voice application, you need an identifier of the target. You have the following options to retrieve the ID:
- User interface of [Microsoft Entra ID](../troubleshooting-info.md?#getting-user-id) or with on-premises directory synchronization [Microsoft Entra Connect](../../../active-directory/hybrid/how-to-connect-sync-whatis.md) - Programmatically via [Microsoft Graph API](/graph/api/resources/users)
const call = callAgent.startCall([teamsCallee]);
- Many features in the Teams client don't work as expected during 1:1 calls with Communication Services users. - Third-party [devices for Teams](/MicrosoftTeams/devices/teams-ip-phones) and [Skype IP phones](/skypeforbusiness/certification/devices-ip-phones) aren't supported.
-## Chat
-With the Chat SDK, Communication Services users or endpoints can have group chats with Teams users, identified by their Microsoft Entra object ID. You can easily modify an existing application that creates chats with other Communication Services users to create chats with Teams users instead. Here is an example of how to use the Chat SDK to add Teams users as participants. To learn how to use Chat SDK to send a message, manage participants, and more, see our [quickstart](../../quickstarts/chat/get-started.md?pivots=programming-language-javascript).
-
-Creating a chat with a Teams user:
-```js
-async function createChatThread() {
-const createChatThreadRequest = { topic: "Hello, World!" };
-const createChatThreadOptions = {
- participants: [ {
- id: { microsoftTeamsUserId: '<Teams User AAD Object ID>' },
- displayName: '<USER_DISPLAY_NAME>' }
- ] };
-const createChatThreadResult = await chatClient.createChatThread(
-createChatThreadRequest, createChatThreadOptions );
-const threadId = createChatThreadResult.chatThread.id; return threadId; }
-```
-
-To make testing easier, we've published a sample app [here](https://github.com/Azure-Samples/communication-services-web-chat-hero/tree/teams-interop-chat-adhoc). Update the app with your Communication Services resource, and interop enabled Teams tenant to get started.
-
-**Limitations and known issues** </br>
-While in private preview, a Communication Services user can do various actions using the Communication Services Chat SDK, including sending and receiving plain and rich text messages, typing indicators, read receipts, real-time notifications, and more. However, most of the Teams chat features aren't supported. Here are some key behaviors and known issues:
-- Communication Services users can only initiate chats. -- Communication Services users can't send images, or files to the Teams user. But they can receive images and files from the Teams user. Links to files and images can also be shared.-- Communication Services users can delete the chat. This action removes the Teams user from the chat thread and hides the message history from the Teams client.-- Known issue: Communication Services users aren't displayed correctly in the participant list. They're currently displayed as External, but their people cards show inconsistent data. In addition, their displayname might not be shown properly in the Teams client.-- Known issue: The typing event from Teams side might contain a blank display name.-- Known issue: Read receipts aren't supported for Teams users.-- Known issue: A chat can't be escalated to a call from within the Teams app. -- Known issue: Editing of messages by the Teams user isn't supported.-
-Please refer to [Chat Capabilities](../interop/guest/capabilities.md) to learn more.
- ## Privacy
-Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chats. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
+Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls and meetings. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
Microsoft will indicate via the Azure Communication Services API that recording or transcription has commenced. You must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred due to your failure to comply with this obligation.
communication-services Phone Number Management For Australia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-australia.md
Use the below tables to find all the relevant information on number availability
| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :- | :- | :- | : |
-| Toll-Free |- | - | - | Public Preview\* |
-| Local |- | - | Public Preview | Public Preview\* |
+| Toll-Free |- | - | - | General Availability\* |
+| Local |- | - | General Availability | General Availability\* |
| Alphanumeric Sender ID\** | General Availability | - | - | - | \* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
communication-services Record Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/record-calls.md
zone_pivot_groups: acs-plat-web-ios-android-windows
[!INCLUDE [Record Calls Client-side Windows](./includes/record-calls/record-calls-windows.md)] ::: zone-end
+### Compliance Recording
+Compliance recording is Teams policy-based recording that can be enabled by following this tutorial: [Introduction to Teams policy-based recording for calling](/microsoftteams/teams-recording-policy).<br>
+Policy-based recording starts automatically when a user with this policy joins a call. To get notifications from Azure Communication Services about the recording, use the Cloud Recording section of this article.
+
+```js
+const callRecordingApi = call.feature(Features.Recording);
+
+const isComplianceRecordingActive = callRecordingApi.isRecordingActive;
+
+const isComplianceRecordingActiveChangedHandler = () => {
+ console.log(callRecordingApi.isRecordingActive);
+};
+
+callRecordingApi.on('isRecordingActiveChanged', isComplianceRecordingActiveChangedHandler);
+```
+
+Compliance recording can be implemented by using a custom recording bot: [GitHub Example](https://github.com/microsoftgraph/microsoft-graph-comms-samples/tree/a3943bafd73ce0df780c0e1ac3428e3de13a101f/Samples/BetaSamples/LocalMediaSamples/ComplianceRecordingBot).<br>
+To hide this bot from the participant roster, add the following metadata to the bot; the Azure Communication Services SDK and the Teams client use it:
+```json
+ "metadata": {"__platform":{"ui":{"hidden":true}
+```
++ ## Next steps - [Learn how to manage calls](./manage-calls.md) - [Learn how to manage video](./manage-video.md)
communication-services Calling Widget Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial.md
Following this tutorial will:
- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). - [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions [Node 18 LTS](https://nodejs.org/en) is recommended. Use the `node --version` command to check your version. - An Azure Communication Services resource. [Create a Communications Resource](../../quickstarts/create-communication-resource.md)-- Complete the Teams tenant setup in [Teams calling and chat interoperability](../../concepts/interop/calling-chat.md)
+- Complete the Teams tenant setup in [Teams Call Queues](../../quickstarts/voice-video-calling/get-started-teams-call-queue.md)
- Working with [Teams Call Queues](../../quickstarts/voice-video-calling/get-started-teams-call-queue.md) and Azure Communication Services. - Working with [Teams Auto Attendants](../../quickstarts/voice-video-calling/get-started-teams-auto-attendant.md) and Azure Communication Services.
communication-services Proxy Calling Support Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/proxy-calling-support-tutorial.md
Title: Tutorial - Proxy your Azure Communication Services calling traffic across your own servers
+ Title: 'Tutorial: Proxy your Azure Communication Services calling traffic across your own servers'
-description: Learn how to have your media and signaling traffic be proxied to servers that you can control.
+description: Learn how to have your media and signaling traffic proxied to servers that you can control.
zone_pivot_groups: acs-plat-web-ios-android-windows
-# How to force calling traffic to be proxied across your own server
+# Force calling traffic to be proxied across your own server
-In certain situations, it might be useful to have all your client traffic proxied to a server that you can control. When the SDK is initializing, you can provide the details of your servers that you would like the traffic to route to. Once enabled all the media traffic (audio/video/screen sharing) travel through the provided TURN servers instead of the Azure Communication Services defaults. This tutorial guides on how to have calling traffic be proxied to servers that you control.
+In this tutorial, you learn how to proxy your Azure Communication Services calling traffic across your own servers.
+
+In certain situations, it might be useful to have all your client traffic proxied to a server that you can control. When the SDK is initializing, you can provide the details of your servers that you want the traffic to route to. Once enabled, all the media traffic (audio/video/screen sharing) travels through the provided TURN servers instead of the Azure Communication Services defaults.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Set up a TURN server.
+> * Set up a signaling proxy server.
+
+## Prerequisites
+
+None
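The platform-specific includes that follow cover the SDK configuration. For orientation only, here is a minimal sketch of standing up a TURN server with the open-source coturn package; the package choice, user, password, and realm values are assumptions, and any standards-compliant TURN server works:

```bash
# Install coturn on Ubuntu and start a basic TURN server.
sudo apt-get update && sudo apt-get install -y coturn

# Listen on the default TURN port with a static username/password pair (placeholders).
turnserver --listening-port 3478 --lt-cred-mech --user acsuser:acspassword --realm example.org
```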
::: zone pivot="platform-web" [!INCLUDE [Proxy support with JavaScript](./includes/proxy-calling-support-tutorial-web.md)]
container-apps Code To Cloud Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/code-to-cloud-options.md
Previously updated : 08/30/2023 Last updated : 01/26/2024 # Select the right code-to-cloud path for Azure Container Apps
-You have several options available as you develop and deploy your apps to Azure Container Apps. As evaluate your goals and the needs of your team, consider the following questions:
+You have several options available as you develop and deploy your apps to Azure Container Apps. As you evaluate your goals and the needs of your team, consider the following questions.
-- Do you want to focus more on application changes, or infrastructure configuration?-- Are you working on a team or as an individual?-- How fast do you need to see changes reflected in the application or infrastructure?-- How important is an automated workflow vs. an experimental workflow?
+- Are you new to containers?
+- Is your focus more on your application or your infrastructure?
+- Are you innovating rapidly or in a stable steady state with your application?
-Based on your situation, your answers to these questions affect your preferred development and deployment strategies. Individuals who want to rapidly iterate features have different needs than structured teams deploying to mature production environments.
+Your answers to these questions affect your preferred development and deployment strategies. This article helps you select the most appropriate option for how you develop and deploy your applications to Azure Container Apps.
-This article helps you select the most appropriate option for how you develop and deploy your applications to Azure Container Apps.
+Depending on your situation, you might want to deploy from a [code editor](#code-editor), through the [Azure portal](#azure-portal), with a hosted [code repository](#code-repository), or via [infrastructure as code](#infrastructure-as-code). However, if you're new to containers, you can [learn more](#new-to-containers) about how containers can help your development process.
-Depending on your situation, you may want to deploy from a [code editor](#code-editor), through the [Azure portal](#azure-portal), with a hosted [code repository](#code-repository), or via [infrastructure as code](#infrastructure-as-code).
+## New to containers
+
+You can simplify the development and deployment of your application by packaging your app into a "container". Containers allow you to wrap up your application and all its dependencies into a single unit that is portable and can run easily on any container platform.
+
+If you're interested in deploying your application to Azure Container Apps, but don't want to define a container ahead of time, Container Apps can create a container. The Container Apps cloud build feature automatically identifies your application stack and uses [CNCF Buildpacks](https://buildpacks.io/) to generate a container image for you.
++
+Defining containers ahead of time often requires using Docker and publishing your container on a container registry. When you use the Container Apps cloud build, you don't have to worry about special container tooling or registries.
+
+If your application currently doesn't use a container, consider using the Container Apps cloud build to deploy your application.
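As a rough sketch of what that looks like, the following command builds from local source with no Dockerfile and lets Container Apps generate the image; the resource names are placeholders, and the full walkthrough is in the quickstarts listed under Resources:

```azurecli
# Build and deploy from local source without a Dockerfile.
# Container Apps detects the application stack and builds the image with Buildpacks.
az containerapp up \
  --name my-album-api \
  --resource-group my-resource-group \
  --environment my-environment \
  --ingress external \
  --target-port 8080 \
  --source .
```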
+
+### Resources
+
+- [Build and deploy your app to Azure Container Apps](tutorial-code-to-cloud.md)
+- [Deploy an artifact file (JAR) to Azure Container Apps](deploy-artifact.md)
## Code editor
-If you spend most your time editing code and favor rapid iteration of your applications, then you may want to use [Visual Studio](https://visualstudio.microsoft.com/) or [Visual Studio Code](https://code.visualstudio.com/). These editors allow you to easily build Docker files a deploy your applications directly to Azure Container Apps.
+If you spend most of your time editing code and favor rapid iteration of your applications, then you might want to use [Visual Studio](https://visualstudio.microsoft.com/) or [Visual Studio Code](https://code.visualstudio.com/). These editors allow you to easily build Dockerfiles and deploy your applications directly to Azure Container Apps.
This approach allows you to experiment with configuration options made in the early stages of an application's life.
In Azure Container Apps, you can use the [Azure CLI](/cli/azure/) or the [Azure
- **Azure CLI** - [Build and deploy your container app from a repository](quickstart-code-to-cloud.md)
- - [Deploy your first container app with containerapp up](get-started.md)
+ - [Deploy your first container app using the command line](get-started.md)
- [Set up GitHub Actions with Azure CLI](github-actions-cli.md) - [Build and deploy your container app from a repository](tutorial-deploy-first-app-cli.md)
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
Title: "Quickstart: Build and deploy your app from a repository to Azure Container Apps"
-description: Build your container app from a local or GitHub source repository and deploy in Azure Container Apps using az containerapp up.
+ Title: "Quickstart: Build and deploy your app from your local filesystem to Azure Container Apps"
+description: Build your container app from local source and deploy in Azure Container Apps using az containerapp up.
- devx-track-azurecli - ignite-2023 Previously updated : 03/29/2023 Last updated : 01/26/2024
-zone_pivot_groups: container-apps-image-build-from-repo
+zone_pivot_groups: container-apps-code-to-cloud-segmemts
-# Quickstart: Build and deploy your container app from a repository in Azure Container Apps
+# Quickstart: Build and deploy from local source code to Azure Container Apps
-This article demonstrates how to build and deploy a microservice to Azure Container Apps from a source repository using the programming language of your choice.
-
-In this quickstart, you create a backend web API service that returns a static collection of music albums. After completing this quickstart, you can continue to [Tutorial: Communication between microservices in Azure Container Apps](communicate-between-microservices.md) to learn how to deploy a front end application that calls the API.
+This article demonstrates how to build and deploy a microservice to Azure Container Apps from local source code using the programming language of your choice. In this quickstart, you create a backend web API service that returns a static collection of music albums.
> [!NOTE]
-> You can also build and deploy this sample application using the `az containerapp up` command. For more information, see [Tutorial: Build and deploy your app to Azure Container Apps](tutorial-code-to-cloud.md).
+> This sample application is available in two versions. One version where the source contains a Dockerfile. The other version has no Dockerfile. Select the version that best reflects your source code. If you are new to containers, select the **No Dockerfile** option at the top.
The following screenshot shows the output from the album API service you deploy.
The following screenshot shows the output from the album API service you deploy.
To complete this project, you need the following items: -- | Requirement | Instructions | |--|--| | Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
-| GitHub Account | Get one for [free](https://github.com/join). |
-| git | [Install git](https://git-scm.com/downloads) |
| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).| --
-| Requirement | Instructions |
-|--|--|
-| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
-| GitHub Account | Get one for [free](https://github.com/join). |
-| git | [Install git](https://git-scm.com/downloads) |
-| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
- ## Setup
az provider register --namespace Microsoft.OperationalInsights
Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article. - # [Bash](#tab/bash) Define the following variables in your bash shell. ```azurecli
-RESOURCE_GROUP="album-containerapps"
-LOCATION="canadacentral"
-ENVIRONMENT="env-album-containerapps"
-API_NAME="album-api"
-FRONTEND_NAME="album-ui"
-GITHUB_USERNAME="<YOUR_GITHUB_USERNAME>"
-```
-
-Before you run this command, make sure to replace `<YOUR_GITHUB_USERNAME>` with your GitHub username.
-
-Next, define a container registry name unique to you.
-
-```azurecli
-ACR_NAME="acaalbums"$GITHUB_USERNAME
+export RESOURCE_GROUP="album-containerapps"
+export LOCATION="canadacentral"
+export ENVIRONMENT="env-album-containerapps"
+export API_NAME="album-api"
``` # [Azure PowerShell](#tab/azure-powershell)
$RESOURCE_GROUP="album-containerapps"
$LOCATION="canadacentral" $ENVIRONMENT="env-album-containerapps" $API_NAME="album-api"
-$FRONTEND_NAME="album-ui"
-$GITHUB_USERNAME="<YOUR_GITHUB_USERNAME>"
-```
-
-Before you run this command, make sure to replace `<YOUR_GITHUB_USERNAME>` with your GitHub username.
-
-Next, define a container registry name unique to you.
-
-```powershell
-$ACR_NAME="acaalbums"+$GITHUB_USERNAME
```
+## Get the sample code
+Download and extract the API sample application in the language of your choice.
-# [Bash](#tab/bash)
-Define the following variables in your bash shell.
+# [C#](#tab/csharp)
-```azurecli
-RESOURCE_GROUP="album-containerapps"
-LOCATION="canadacentral"
-ENVIRONMENT="env-album-containerapps"
-API_NAME="album-api"
-```
+[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-csharp/zip/refs/heads/main) to your machine.
-# [Azure PowerShell](#tab/azure-powershell)
+Extract the download and change into the *containerapps-albumapi-csharp-main/src* folder.
-Define the following variables in your PowerShell console.
-```powershell
-$RESOURCE_GROUP="album-containerapps"
-$LOCATION="canadacentral"
-$ENVIRONMENT="env-album-containerapps"
-$API_NAME="album-api"
-```
+# [Java](#tab/java)
-
+[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-java/zip/refs/heads/main) to your machine.
+Extract the download and change into the *containerapps-albumapi-java-main/src* folder.
-## Prepare the GitHub repository
-In a browser window, go to the GitHub repository for your preferred language and fork the repository.
+# [JavaScript](#tab/javascript)
-# [C#](#tab/csharp)
+[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-javascript/zip/refs/heads/main) to your machine.
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-csharp) to fork the repo to your account.
+Extract the download and change into the *containerapps-albumapi-javascript-main/src* folder.
-Now you can clone your fork of the sample repository.
+# [Python](#tab/python)
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-python/zip/refs/heads/main) to your machine.
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-csharp.git code-to-cloud
-```
+Extract the download and change into the *containerapps-albumapi-python-main/src* folder.
# [Go](#tab/go)
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-go) to fork the repo to your account.
+[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-go/zip/refs/heads/main) to your machine.
+Extract the download and navigate into the *containerapps-albumapi-go-main/src* folder.
-Now you can clone your fork of the sample repository.
-
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
-
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-go.git code-to-cloud
-```
::: zone-end
-# [Java](#tab/java)
-
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-java) to fork the repo to your account.
+# [C#](#tab/csharp)
-Now you can clone your fork of the sample repository.
+[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-csharp/zip/refs/heads/buildpack) to your machine.
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+Extract the download and change into the *containerapps-albumapi-csharp-buildpack/src* folder.
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-java.git code-to-cloud
-```
+# [Java](#tab/java)
-> [!NOTE]
-> The Java sample only supports a Maven build, which results in an executable JAR file. The build uses the default settings, as passing in environment variables is not supported.
+[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-java/zip/refs/heads/buildpack) to your machine.
-# [JavaScript](#tab/javascript)
+Extract the download and change into the *containerapps-albumapi-java-buildpack/src* folder.
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-javascript) to fork the repo to your account.
+> [!NOTE]
+> The Java Buildpack currently supports the [Maven tool](https://maven.apache.org/what-is-maven.html) to build your application.
-Now you can clone your fork of the sample repository.
+# [JavaScript](#tab/javascript)
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-javascript/zip/refs/heads/buildpack) to your machine.
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-javascript.git code-to-cloud
-```
+Extract the download and change into the *containerapps-albumapi-javascript-buildpack/src* folder.
# [Python](#tab/python)
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-python) to fork the repo to your account.
+[Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-python/zip/refs/heads/buildpack) to your machine.
+Extract the download and change into the *containerapps-albumapi-python-buildpack/src* folder.
-Now you can clone your fork of the sample repository.
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+# [Go](#tab/go)
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-python.git code-to-cloud
-```
+Azure Container Apps cloud build doesn't currently support Buildpacks for Go.
::: zone-end - ## Build and deploy the container app
-Build and deploy your first container app from your local git repository with the `containerapp up` command. This command will:
+Build and deploy your first container app with the `containerapp up` command. This command will:
- Create the resource group - Create an Azure Container Registry - Build the container image and push it to the registry - Create the Container Apps environment with a Log Analytics workspace-- Create and deploy the container app using a public container image-
-The `up` command uses the Docker file in the root of the repository to build the container image. The target port is defined by the EXPOSE instruction in the Docker file. A Docker file isn't required to build a container app.
+- Create and deploy the container app using the built container image
-# [Bash](#tab/bash)
+- Create the resource group
+- Create a default registry as part of your environment
+- Detect the language and runtime of your application and build the image using the appropriate Buildpack
+- Push the image into the Azure Container Apps default registry
+- Create the Container Apps environment with a Log Analytics workspace
+- Create and deploy the container app using the built container image
-```azurecli
-az containerapp up \
- --name $API_NAME \
- --resource-group $RESOURCE_GROUP \
- --location $LOCATION \
- --environment $ENVIRONMENT \
- --source code-to-cloud/src
-```
-# [Azure PowerShell](#tab/azure-powershell)
+The `up` command uses the Dockerfile in the root of the repository to build the container image. The `EXPOSE` instruction in the Dockerfile defines the target port, which is the port used to send ingress traffic to the container.
-```powershell
-az containerapp up `
- --name $API_NAME `
- --resource-group $RESOURCE_GROUP `
- --location $LOCATION `
- --environment $ENVIRONMENT `
- --source code-to-cloud/src
-```
-
+If the `up` command doesn't find a Dockerfile, it automatically uses Buildpacks to turn your application source into a runnable container. Since the Buildpack is trying to run the build on your behalf, you need to tell the `up` command which port to send ingress traffic to.
::: zone-end
-## Build and deploy the container app
-Build and deploy your first container app from your forked GitHub repository with the `containerapp up` command. This command will:
+In the following code example, the `.` (dot) tells `containerapp up` to run in the `src` directory of the extracted sample API application.
-- Create the resource group-- Create an Azure Container Registry-- Build the container image and push it to the registry-- Create the Container Apps environment with a Log Analytics workspace-- Create and deploy the container app using a public container image-- Create a GitHub Action workflow to build and deploy the container app
+# [Bash](#tab/bash)
-The `up` command uses the Docker file in the root of the repository to build the container image. The target port is defined by the EXPOSE instruction in the Docker file. A Docker file isn't required to build a container app.
-Replace the `<YOUR_GITHUB_REPOSITORY_NAME>` with your GitHub repository name in the form of `https://github.com/<owner>/<repository-name>` or `<owner>/<repository-name>`.
+```azurecli
+az containerapp up \
+ --name $API_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --environment $ENVIRONMENT \
+ --source .
+```
-# [Bash](#tab/bash)
```azurecli az containerapp up \
az containerapp up \
--resource-group $RESOURCE_GROUP \ --location $LOCATION \ --environment $ENVIRONMENT \
- --context-path ./src \
- --repo <YOUR_GITHUB_REPOSITORY_NAME>
+ --ingress external \
+ --target-port 8080 \
+ --source .
``` ++ # [Azure PowerShell](#tab/azure-powershell) + ```powershell az containerapp up ` --name $API_NAME ` --resource-group $RESOURCE_GROUP ` --location $LOCATION ` --environment $ENVIRONMENT `
- --context-path ./src `
- --repo <YOUR_GITHUB_REPOSITORY_NAME>
+ --source .
``` -
-Using the URL and the user code displayed in the terminal, go to the GitHub device activation page in a browser and enter the user code to the page. Follow the prompts to authorize the Azure CLI to access your GitHub repository.
+```powershell
+az containerapp up `
+ --name $API_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION `
+ --environment $ENVIRONMENT `
+ --ingress external `
+ --target-port 8080 `
+ --source .
+```
::: zone-end
-The `up` command creates a GitHub Action workflow in your repository *.github/workflows* folder. The workflow is triggered to build and deploy your container app when you push changes to the repository.
+ ## Verify deployment Copy the FQDN to a web browser. From your web browser, go to the `/albums` endpoint of the FQDN.
az group delete --name $RESOURCE_GROUP
## Next steps
+After completing this quickstart, you can continue to [Tutorial: Communication between microservices in Azure Container Apps](communicate-between-microservices.md) to learn how to deploy a front end application that calls the API.
+ > [!div class="nextstepaction"] > [Tutorial: Communication between microservices](communicate-between-microservices.md)
container-apps Quickstart Repo To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-repo-to-cloud.md
+
+ Title: "Quickstart: Build and deploy from a repository to Azure Container Apps"
+description: Build your container app from a code repository and deploy in Azure Container Apps using az containerapp up.
++++
+ - devx-track-azurecli
+ - ignite-2023
+ Last updated : 01/26/2024+
+zone_pivot_groups: container-apps-code-to-cloud-segmemts
+++
+# Quickstart: Build and deploy from a repository to Azure Container Apps
+
+This article demonstrates how to build and deploy a microservice to Azure Container Apps from a GitHub repository using the programming language of your choice. In this quickstart, you create a sample microservice, which represents a backend web API service that returns a static collection of music albums.
+
+This sample application is available in two versions. One version includes a container, where the source contains a Dockerfile. The other version has no Dockerfile. Select the version that best reflects your source code. If you're new to containers, select the **No Dockerfile** option at the top.
+
+> [!NOTE]
+> You can also build and deploy this sample application from your local filesystem. For more information, see [Build from local source code and deploy your application in Azure Container Apps](quickstart-code-to-cloud.md).
+
+The following screenshot shows the output from the album API service you deploy.
+++
+## Prerequisites
+
+To complete this project, you need the following items:
+
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | Get one for [free](https://github.com/join). |
+| git | [Install git](https://git-scm.com/downloads) |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
++
+## Setup
+
+To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az login
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+az login
+```
+++
+Ensure you're running the latest version of the CLI via the upgrade command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az upgrade
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+az upgrade
+```
+++
+Next, install or update the Azure Container Apps extension for the CLI.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az extension add --name containerapp --upgrade
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
++
+```azurepowershell
+az extension add --name containerapp --upgrade
+```
+++
+Register the `Microsoft.App` and `Microsoft.OperationalInsights` namespaces in your Azure subscription.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az provider register --namespace Microsoft.App
+```
+
+```azurecli
+az provider register --namespace Microsoft.OperationalInsights
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+az provider register --namespace Microsoft.App
+```
+
+```azurepowershell
+az provider register --namespace Microsoft.OperationalInsights
+```
+++
+Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
++
+# [Bash](#tab/bash)
+
+Define the following variables in your bash shell.
+
+```azurecli
+export RESOURCE_GROUP="album-containerapps"
+export LOCATION="canadacentral"
+export ENVIRONMENT="env-album-containerapps"
+export API_NAME="album-api"
+export GITHUB_USERNAME="<YOUR_GITHUB_USERNAME>"
+```
+
+Before you run this command, make sure to replace `<YOUR_GITHUB_USERNAME>` with your GitHub username.
+
+Next, define a container registry name unique to you.
+
+```azurecli
+export ACR_NAME="acaalbums"$GITHUB_USERNAME
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Define the following variables in your PowerShell console.
+
+```powershell
+$RESOURCE_GROUP="album-containerapps"
+$LOCATION="canadacentral"
+$ENVIRONMENT="env-album-containerapps"
+$API_NAME="album-api"
+$GITHUB_USERNAME="<YOUR_GITHUB_USERNAME>"
+```
+
+Before you run this command, make sure to replace `<YOUR_GITHUB_USERNAME>` with your GitHub username.
+
+Next, define a container registry name unique to you.
+
+```powershell
+$ACR_NAME="acaalbums"+$GITHUB_USERNAME
+```
++++++
+## Prepare the GitHub repository
++
+In a browser window, go to the GitHub repository for your preferred language and fork the repository.
+
+# [C#](#tab/csharp)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-csharp) to fork the repo to your account. Then copy the repo URL to use it
+in the next step.
++
+# [Java](#tab/java)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-java) to fork the repo to your account. Then copy the repo URL to use it
+in the next step.
++
+# [JavaScript](#tab/javascript)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-javascript) to fork the repo to your account. Then copy the repo URL to use it
+in the next step.
++
+# [Python](#tab/python)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-python) to fork the repo to your account. Then copy the repo URL to use it
+in the next step.
++
+# [Go](#tab/go)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-go) to fork the repo to your account. Then copy the repo URL to use it
+in the next step.
++++
+In a browser window, go to the GitHub repository for your preferred language and fork the repository **including branches**.
+
+# [C#](#tab/csharp)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-csharp) to fork the repo to your account. Uncheck "Copy the `main` branch only"
+to also fork the `buildpack` branch.
++
+# [Java](#tab/java)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-java) to fork the repo to your account. Uncheck "Copy the `main` branch only"
+to also fork the `buildpack` branch.
++
+# [JavaScript](#tab/javascript)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-javascript) to fork the repo to your account. Uncheck "Copy the `main` branch only"
+to also fork the `buildpack` branch.
++
+# [Python](#tab/python)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-python) to fork the repo to your account. Uncheck "Copy the `main` branch only"
+to also fork the `buildpack` branch.
++
+# [Go](#tab/go)
+
+Azure Container Apps cloud build doesn't currently support Buildpacks for Go.
+++++
+## Build and deploy the container app
+
+Build and deploy your first container app from your forked GitHub repository with the `containerapp up` command. This command will:
+
+- Create the resource group
+- Create the Container Apps environment with a Log Analytics workspace
+- Create an Azure Container Registry
+- Create a GitHub Action workflow to build and deploy the container app
+
+- Create the resource group
+- Create the Container Apps environment with a Log Analytics workspace
+- Automatically create a default registry as part of your environment
+- Create a GitHub Action workflow to build and deploy the container app
+
+When you push new code to the repository, the GitHub Action will:
+
+- Build the container image and push it to the Azure Container Registry
+- Deploy the container image to the created container app
+
+The `up` command uses the Dockerfile in the root of the repository to build the container image. The `EXPOSE` instruction in the Dockerfile defines the target port. A Dockerfile isn't required to build a container app.
+
+- Automatically detect the language and runtime
+- Build the image using the appropriate Buildpack
+- Push the image into the Azure Container Apps default registry
+
+The container app needs to be accessible to ingress traffic. Make sure to expose port 8080 so the app can listen for incoming requests.
+
+In the following command, replace the `<YOUR_GITHUB_REPOSITORY_NAME>` with your GitHub repository name in the form of `https://github.com/<OWNER>/<REPOSITORY-NAME>` or `<OWNER>/<REPOSITORY-NAME>`.
+
+In the following command, replace the `<YOUR_GITHUB_REPOSITORY_NAME>` with your GitHub repository name in the form of `https://github.com/<OWNER>/<REPOSITORY-NAME>` or `<OWNER>/<REPOSITORY-NAME>`. Use the `--branch buildpack` option to point to the sample source without a Dockerfile.
+
+# [Bash](#tab/bash)
++
+```azurecli
+az containerapp up \
+ --name $API_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --environment $ENVIRONMENT \
+ --context-path ./src \
+ --repo <YOUR_GITHUB_REPOSITORY_NAME>
+```
++
+```azurecli
+az containerapp up \
+ --name $API_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --environment $ENVIRONMENT \
+ --context-path ./src \
+ --ingress external \
+ --target-port 8080 \
+ --repo <YOUR_GITHUB_REPOSITORY_NAME> \
+ --branch buildpack
+```
+++
+# [Azure PowerShell](#tab/azure-powershell)
++
+```powershell
+az containerapp up `
+ --name $API_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION `
+ --environment $ENVIRONMENT `
+ --context-path ./src `
+ --repo <YOUR_GITHUB_REPOSITORY_NAME>
+```
++
+```powershell
+az containerapp up `
+ --name $API_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION `
+ --environment $ENVIRONMENT `
+ --context-path ./src `
+ --ingress external `
+ --target-port 8080 `
+ --repo <YOUR_GITHUB_REPOSITORY_NAME> `
+ --branch buildpack
+```
+++++
+Using the URL and the user code displayed in the terminal, go to the GitHub device activation page in a browser and enter the user code to the page. Follow the prompts to authorize the Azure CLI to access your GitHub repository.
+
+The `up` command creates a GitHub Action workflow in your repository's *.github/workflows* folder. The workflow is triggered to build and deploy your container app when you push changes to the repository.
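To confirm the wiring from the CLI, you can inspect the GitHub Actions configuration attached to the container app; this is a sketch that assumes the `containerapp` CLI extension's `github-action show` command and the variables defined earlier:

```azurecli
# Show the GitHub Actions configuration created by `az containerapp up --repo`.
az containerapp github-action show --name $API_NAME --resource-group $RESOURCE_GROUP
```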
+++
+## Verify deployment
+
+Copy the domain name returned by the `containerapp up` command to a web browser. From your web browser, go to the `/albums` endpoint of the URL.
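Alternatively, a quick check from the command line; the placeholder stands in for the domain name that `containerapp up` printed:

```bash
# Replace the placeholder with the domain name returned by `az containerapp up`.
curl "https://<YOUR_CONTAINER_APP_FQDN>/albums"
```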
++
+## Clean up resources
+
+If you're not going to continue on to the [Deploy a frontend](communicate-between-microservices.md) tutorial, you can remove the Azure resources created during this quickstart with the following command.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If the group contains resources outside the scope of this quickstart, they are also deleted.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+az group delete --name $RESOURCE_GROUP
+```
+++
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+After completing this quickstart, you can continue to [Tutorial: Communication between microservices in Azure Container Apps](communicate-between-microservices.md) to learn how to deploy a front end application that calls the API.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Communication between microservices](communicate-between-microservices.md)
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
The *Is Configurable* column in the following tables denotes a feature maximum m
### Dedicated workload profiles | Feature | Scope | Default | Is Configurable | Remarks |
-||||||
+|--|--|--|--|--|
+| Cores | Subscription | 2000 | Yes | Maximum number of dedicated workload profile cores within one subscription |
| Cores | Replica | Up to maximum cores a workload profile supports | No | Maximum number of cores available to a revision replica. |
-| Cores | Environment | 100 | Yes | Maximum number of cores all Dedicated workload profiles in a Dedicated plan environment can accommodate. Calculated by the sum of cores available in each node of all workload profile in a Dedicated plan environment. |
| Cores | General Purpose Workload Profiles | 100 | Yes | The total cores available to all general purpose (D-series) profiles within an environment. | | Cores | Memory Optimized Workload Profiles | 50 | Yes | The total cores available to all memory optimized (E-series) profiles within an environment. |
container-apps Storage Mounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts.md
Azure Files storage has the following characteristics:
* All containers that mount the share can access files written by any other container or method. * More than one Azure Files volume can be mounted in a single container.
-To enable Azure Files storage in your container, you need to set up your container in the following ways:
+To enable Azure Files storage in your container, you need to set up your container as follows:
* Create a storage definition in the Container Apps environment. * Define a volume of type `AzureFile` in a revision. * Define a volume mount in one or more containers in the revision.
+* The Azure Files storage account used must be accessible from your container app's virtual network. For more information, see [Grant access from a virtual network](/azure/storage/common/storage-network-security#grant-access-from-a-virtual-network).
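For the first item in this list, a minimal sketch of creating the storage definition with the Azure CLI; the environment, storage account, and file share names are placeholders:

```azurecli
# Link an Azure Files share to the Container Apps environment as a storage definition.
az containerapp env storage set \
  --name my-environment \
  --resource-group my-resource-group \
  --storage-name mystoragemount \
  --azure-file-account-name mystorageaccount \
  --azure-file-account-key $STORAGE_ACCOUNT_KEY \
  --azure-file-share-name myfileshare \
  --access-mode ReadWrite
```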
### Prerequisites
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
The following steps show how to create a self-signed certificate for testing pur
In this tutorial, if the image has already been built and is stored in the registry, the tag serves as an identifier for that image for convenience. ```bash
- IMAGE=$REGISTRY/${REPO}@$TAG
+ IMAGE=$REGISTRY/${REPO}:$TAG
``` 3. Get the Key ID of the signing key. A certificate in AKV can have multiple versions, the following command gets the Key ID of the latest version.
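A sketch of one way to do that, assuming the `$AKV_NAME` and `$CERT_NAME` variables defined earlier in that tutorial:

```bash
# Get the Key ID (kid) of the latest version of the signing certificate in Azure Key Vault.
KEY_ID=$(az keyvault certificate show --name $CERT_NAME --vault-name $AKV_NAME --query 'kid' --output tsv)
```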
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
description: This article explains how to explore preview features and provides a list of the recent previews you might be interested in. Previously updated : 01/25/2024 Last updated : 01/26/2024
You can choose the latest or any of the previous dataset schema versions during
And, you can enhance security and compliance by configuring exports to storage accounts behind a firewall, which provides access control for the public endpoint of the storage account.
+>[!NOTE]
+> After you enable **Exports (preview)** in Cost Management Labs, you might have to refresh your browser to see the new **Export** menu item in the Cost Management menu.
+ :::image type="content" source="./media/enable-preview-features-cost-management-labs/export-preview.png" alt-text="Screenshot showing the Export window with various fields." lightbox="./media/enable-preview-features-cost-management-labs/export-preview.png" ::: ## How to share feedback
deployment-environments How To Create Environment With Azure Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-environment-with-azure-developer.md
Previously updated : 12/18/2023 Last updated : 01/26/2023+
+#customer intent: As a developer, I want to be able to create an environment by using AZD so that I can create my coding environment.
+ # Create an environment by using the Azure Developer CLI
To enable Azure Developer CLI features in Visual Studio Code, install the Azure
powershell -ex AllSigned -c "Invoke-RestMethod 'https://aka.ms/install-azd.ps1' | Invoke-Expression" ```
+# [Visual Studio](#tab/visual-studio)
+
+In Visual Studio 2022 17.3 Preview 2 or later, you can enable integration with azd as a preview feature.
+
+To enable the azd feature, go to **Tools** > **Options** > **Environment** > **Preview Features** and select **Integration with azd, the Azure Developer CLI**.
++
+When the feature is enabled, you can use the Azure Developer CLI from your terminal of choice on Windows, Linux, or macOS.
+ ### Sign in with Azure Developer CLI
Sign in to Azure at the CLI using the following command:
:::image type="content" source="media/how-to-create-environment-with-azure-developer/login.png" alt-text="Screenshot showing the azd auth login command and its result in the terminal." lightbox="media/how-to-create-environment-with-azure-developer/login.png":::
+# [Visual Studio](#tab/visual-studio)
+
+Access your Azure resources by signing in. When you initiate a sign-in, a browser window opens and prompts you to sign in to Azure. After you sign in, the terminal displays a message that you're signed in to Azure.
+
+To open the Developer Command prompt:
+
+1. From the Tools menu, select **Terminal**.
+1. In the **Terminal** window, select **Developer Command Prompt**.
+
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/visual-studio-developer-command-prompt.png" alt-text="Screenshot showing the terminal window menu with Developer Command Prompt highlighted." lightbox="media/how-to-create-environment-with-azure-developer/visual-studio-developer-command-prompt.png":::
+
+Sign in to AZD using the Developer Command Prompt:
+
+```bash
+ azd auth login
+```
++ ### Enable AZD support for ADE
You can configure AZD to provision and deploy resources to your deployment envir
azd config set platform.type devcenter ```
+# [Visual Studio](#tab/visual-studio)
-
+```bash
+ azd config set platform.type devcenter
+```
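To double-check the setting afterward, you can read the value back; this assumes your azd version supports the `azd config get` command:

```bash
azd config get platform.type
```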
## Create an environment from existing code
AZD uses an *azure.yaml* file to define the environment. The azure.yaml file def
1. `azd init` identifies the services defined in your app code and prompts you to confirm and continue, remove a service, or add a service. Select ***Confirm and continue initializing my app***.
- :::image type="content" source="media/how-to-create-environment-with-azure-developer/init-services.png" alt-text="Screenshot showing the AZD init prompt to confirm and continue, remove a service, or add a service." lightbox="media/how-to-create-environment-with-azure-developer/init-services.png":::
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/initialize-services.png" alt-text="Screenshot showing the AZD init prompt to confirm and continue, remove a service, or add a service." lightbox="media/how-to-create-environment-with-azure-developer/initialize-services.png":::
1. `azd init` continues to gather information to configure your app. For this example application, you're prompted for the name of your MongoDB database instance, and ports that the services listen on.
- :::image type="content" source="media/how-to-create-environment-with-azure-developer/init-app-services.png" alt-text="Screenshot showing the azd init prompt for a database name." lightbox="media/how-to-create-environment-with-azure-developer/init-app-services.png":::
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/initialize-app-services.png" alt-text="Screenshot showing the azd init prompt for a database name." lightbox="media/how-to-create-environment-with-azure-developer/initialize-app-services.png":::
1. Enter a name for your local AZD environment.
- :::image type="content" source="media/how-to-create-environment-with-azure-developer/init-new-environment-name.png" alt-text="Screenshot showing azd init prompt Enter a new environment name." lightbox="media/how-to-create-environment-with-azure-developer/init-new-environment-name.png":::
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/initialize-new-environment-name.png" alt-text="Screenshot showing azd init prompt Enter a new environment name." lightbox="media/how-to-create-environment-with-azure-developer/initialize-new-environment-name.png":::
1. `azd init` displays a list of the projects you have access to. Select the project for your environment.
- :::image type="content" source="media/how-to-create-environment-with-azure-developer/init-select-project.png" alt-text="Screenshot showing azd init prompt Select project." lightbox="media/how-to-create-environment-with-azure-developer/init-select-project.png":::
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/initialize-select-project.png" alt-text="Screenshot showing azd init prompt Select project." lightbox="media/how-to-create-environment-with-azure-developer/initialize-select-project.png":::
1. `azd init` displays a list of environment definitions in the project. Select an environment definition.
AZD uses an *azure.yaml* file to define the environment. The azure.yaml file def
``` 1. In the AZD terminal, select ***Use code in the current directory***.
- :::image type="content" source="media/how-to-create-environment-with-azure-developer/init-folder.png" alt-text="Screenshot showing the az init command and the prompt How do you want to initialize your app." lightbox="media/how-to-create-environment-with-azure-developer/init-folder.png":::
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/initialize-folder.png" alt-text="Screenshot showing the az init command and the prompt How do you want to initialize your app." lightbox="media/how-to-create-environment-with-azure-developer/initialize-folder.png":::
AZD scans the current directory and gathers more information depending on the type of app you're building. Follow the prompts to configure your AZD environment. 1. `azd init` identifies the services defined in your app code and prompts you to confirm and continue, remove a service, or add a service. Select ***Confirm and continue initializing my app***.
- :::image type="content" source="media/how-to-create-environment-with-azure-developer/init-services.png" alt-text="Screenshot showing the AZD init prompt to confirm and continue, remove a service, or add a service." lightbox="media/how-to-create-environment-with-azure-developer/init-services.png":::
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/initialize-services.png" alt-text="Screenshot showing the AZD init prompt to confirm and continue, remove a service, or add a service." lightbox="media/how-to-create-environment-with-azure-developer/initialize-services.png":::
1. `azd init` continues to gather information to configure your app. For this example application, you're prompted for the name of your MongoDB database instance, and ports that the services listen on.
- :::image type="content" source="media/how-to-create-environment-with-azure-developer/init-app-services.png" alt-text="Screenshot showing the azd init prompt for a database name." lightbox="media/how-to-create-environment-with-azure-developer/init-app-services.png":::
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/initialize-app-services.png" alt-text="Screenshot showing the azd init prompt for a database name." lightbox="media/how-to-create-environment-with-azure-developer/initialize-app-services.png":::
1. Enter a name for your local AZD environment.
- :::image type="content" source="media/how-to-create-environment-with-azure-developer/init-new-environment-name.png" alt-text="Screenshot showing azd init prompt Enter a new environment name." lightbox="media/how-to-create-environment-with-azure-developer/init-new-environment-name.png":::
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/initialize-new-environment-name.png" alt-text="Screenshot showing azd init prompt Enter a new environment name." lightbox="media/how-to-create-environment-with-azure-developer/initialize-new-environment-name.png":::
-1. `azd init` displays a list of the projects you have access to. Select the project for your environment
+1. `azd init` displays a list of the projects you have access to. Select the project for your environment.
- :::image type="content" source="media/how-to-create-environment-with-azure-developer/init-select-project.png" alt-text="Screenshot showing azd init prompt Select project." lightbox="media/how-to-create-environment-with-azure-developer/init-select-project.png":::
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/initialize-select-project.png" alt-text="Screenshot showing azd init prompt Select project." lightbox="media/how-to-create-environment-with-azure-developer/initialize-select-project.png":::
1. `azd init` displays a list of environment definitions in the project. Select an environment definition. :::image type="content" source="media/how-to-create-environment-with-azure-developer/select-environment-definition.png" alt-text="Screenshot showing azd init prompt Select environment definitions." lightbox="media/how-to-create-environment-with-azure-developer/select-environment-definition.png"::: AZD creates the project resources, including an *azure.yaml* file in the root of your project. +
+# [Visual Studio](#tab/visual-studio)
+
+1. At the CLI, navigate to the folder that contains your application code.
+
+1. Run the following command to initialize your application and supply information when prompted:
+
+ ```bash
+ azd init
+ ```
+1. In the AZD terminal, select ***Use code in the current directory***.
+
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/visual-studio-initialize-folder.png" alt-text="Screenshot showing the az init command and the prompt How do you want to initialize your app." lightbox="media/how-to-create-environment-with-azure-developer/visual-studio-initialize-folder.png":::
+
+ AZD scans the current directory and gathers more information depending on the type of app you're building. Follow the prompts to configure your AZD environment.
+
+1. `azd init` identifies the services defined in your app code and prompts you to confirm and continue, remove a service, or add a service. Select ***Confirm and continue initializing my app***.
+
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/visual-studio-initialize-services.png" alt-text="Screenshot showing the AZD init prompt to confirm and continue, remove a service, or add a service." lightbox="media/how-to-create-environment-with-azure-developer/visual-studio-initialize-services.png":::
+
+1. `azd init` continues to gather information to configure your app. For this example application, you're prompted for the name of your MongoDB database instance, and ports that the services listen on.
+
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/visual-studio-initialize-app-services.png" alt-text="Screenshot showing the azd init prompt for a database name." lightbox="media/how-to-create-environment-with-azure-developer/visual-studio-initialize-app-services.png":::
+
+1. Enter a name for your local AZD environment.
+
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/visual-studio-new-environment-name.png" alt-text="Screenshot showing azd init prompt Enter a new environment name." lightbox="media/how-to-create-environment-with-azure-developer/visual-studio-new-environment-name.png":::
+
+1. `azd init` displays a list of the projects you have access to. Select the project for your environment.
+
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/visual-studio-initialize-select-project.png" alt-text="Screenshot showing azd init prompt Select project." lightbox="media/how-to-create-environment-with-azure-developer/visual-studio-initialize-select-project.png":::
+
+1. `azd init` displays a list of environment definitions in the project. Select an environment definition.
+
+ AZD creates the project resources, including an *azure.yaml* file in the root of your project.
+ ### Provision infrastructure to Azure Deployment Environment
azd provision
1. You can view the resources created in the Azure portal or in the [developer portal](https://devportal.microsoft.com).
+# [Visual Studio](#tab/visual-studio)
+
+Provision your application to Azure using the following command:
+
+```bash
+azd provision
+```
+
+1. `azd provision` displays a list of the projects that you have access to. Select the project that you want to provision your application to.
+
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/visual-studio-select-project.png" alt-text="Screenshot showing the azd init prompt to select a project." lightbox="media/how-to-create-environment-with-azure-developer/visual-studio-select-project.png":::
+
+1. `azd provision` displays a list of environment definitions in the selected project. Select the environment definition that you want to use to provision your application.
+
+ :::image type="content" source="media/how-to-create-environment-with-azure-developer/visual-studio-select-environment-type.png" alt-text="Screenshot showing the azd init prompt to select an environment type." lightbox="media/how-to-create-environment-with-azure-developer/visual-studio-select-environment-type.png":::
+
+1. `azd provision` displays a list of environment types in the selected project. Select the environment type that you want to use to provision your application.
+
+1. AZD instructs ADE to create a new environment based on the information you gave in the previous step.
+
+1. You can view the resources created in the Azure portal or in the [developer portal](https://devportal.microsoft.com).
### List existing environments (optional)
azd env list
:::image type="content" source="media/how-to-create-environment-with-azure-developer/environments-list.png" alt-text="Screenshot showing the local AZD environment and the remote Azure environment." lightbox="media/how-to-create-environment-with-azure-developer/environments-list.png":::
+# [Visual Studio](#tab/visual-studio)
+
+Use the following command to view the environments that you have access to: the local AZD environment and the remote Azure Deployment Environments environment.
+
+```bash
+azd env list
+```
+
+`azd env list` prompts you to select a project and an environment definition.
++ ### Deploy code to Azure Deployment Environments
For this sample application, you see something like this:
:::image type="content" source="media/how-to-create-environment-with-azure-developer/test-swagger.png" alt-text="Screenshot showing application in swagger interface." lightbox="media/how-to-create-environment-with-azure-developer/test-swagger.png":::
+# [Visual Studio](#tab/visual-studio)
+
+Deploy your application code to the remote Azure Deployment Environments environment you provisioned using the following command:
+
+```bash
+azd deploy
+```
+Deploying your code to the remote environment can take several minutes.
+
+You can view the progress of the deployment in the Azure portal:
++
+When deployment completes, you can view the resources that were provisioned in the Azure portal:
++
+You can verify that your code is deployed by selecting the endpoint URLs listed in the AZD terminal.
+ ## Clean up resources
Delete your Azure resources by using the following command:
```bash azd down --environment <environmentName> ```+
+# [Visual Studio](#tab/visual-studio)
+
+Delete your Azure resources by using the following command:
+
+```bash
+azd down --environment <environmentName>
+```
+ ## Related content
event-grid Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/transport-layer-security-enforce-minimum-version.md
Be careful to restrict assignment of these roles only to those who require the a
When a client sends a request to an Event Grid topic or domain, the client establishes a connection with the Event Grid topic or domain endpoint first, before processing any requests. The minimum TLS version setting is checked after the TLS connection is established. If the request uses an earlier version of TLS than that specified by the setting, the connection continues to succeed, but the request will eventually fail.
-> [!NOTE]
-> Due to limitations in the confluent library, errors coming from an invalid TLS version will not surface when connecting through the Kafka protocol. Instead a general exception will be shown.
- Here are a few important points to consider: - A network trace would show the successful establishment of a TCP connection and successful TLS negotiation, before a 401 is returned if the TLS version used is less than the minimum TLS version configured.
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
Previously updated : 11/16/2023 Last updated : 01/25/2024
Additionally, you can downgrade the virtual network gateway SKU. The following d
For all other downgrade scenarios, you need to delete and recreate the gateway. Recreating a gateway incurs downtime.
+## Virtual network gateway limitations and performance
+ ### <a name="gatewayfeaturesupport"></a>Feature support by gateway SKU
-The following table shows the features supported across each gateway type.
+The following table shows the features supported by each gateway type and the maximum number of ExpressRoute circuit connections supported by each gateway SKU.
-|Gateway SKU|VPN Gateway and ExpressRoute coexistence|FastPath|Max Number of Circuit Connections|
-| | | | |
-|**Standard SKU/ERGw1Az**|Yes|No|4|
-|**High Perf SKU/ERGw2Az**|Yes|No|8
-|**Ultra Performance SKU/ErGw3Az**|Yes|Yes|16
+| Gateway SKU | VPN Gateway and ExpressRoute coexistence | FastPath | Max Number of Circuit Connections |
+|--|--|--|--|
+| **Standard SKU/ERGw1Az** | Yes | No | 4 |
+| **High Perf SKU/ERGw2Az** | Yes | No | 8 |
+| **Ultra Performance SKU/ErGw3Az** | Yes | Yes | 16 |
+| **ErGwScale (Preview)** | Yes | Yes - minimum of 10 scale units | 4 - minimum of 1 scale unit<br>8 - minimum of 2 scale units<br>16 - minimum of 10 scale units |
>[!NOTE] > The maximum number of ExpressRoute circuits from the same peering location that can connect to the same virtual network is 4 for all gateways.
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 11/06/2023 Last updated : 01/26/2024
The following table shows connectivity locations and the service providers for e
| **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | Supported | Equinix<br/>InterCloud<br/>Orange | | **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Equinix<br/>Megaport<br/>NextDC | | **Phoenix** | [EdgeConneX PHX01](https://www.cyrusone.com/data-centers/north-america/arizona/phx1-phx8-phoenix) | 1 | West US 3 | Supported | Cox Business Cloud Port<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Megaport<br/>Zayo |
+| **Phoenix2** | [PhoenixNAP](https://phoenixnap.com/) | 1 | West US 3 | Supported | n/a |
| **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | Supported | | | **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | Supported | Airtel<br/>Lightstorm<br/>Tata Communications | | **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada<br/>Equinix<br/>Megaport<br/>RISQ<br/>Telus |
The following table shows connectivity locations and the service providers for e
| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>PacketFabric<br/>Telus<br/>Zayo | | **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | Supported | KINX<br/>KT<br/>LG CNS<br/>LGUplus<br/>Equinix<br/>Sejong Telecom<br/>SK Telecom | | **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | KT |
-| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Equinix<br/>InterCloud<br/>Internet2<br/>IX Reach<br/>Packet<br/>PacketFabric<br/>Level 3 Communications<br/>Megaport<br/>Orange<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Equinix<br/>InterCloud<br/>Internet2<br/>IX Reach<br/>Packet<br/>PacketFabric<br/>Level 3 Communications<br/>Megaport<br/>Momentum Telecom<br/>Orange<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
| **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | Supported | Colt<br/>Coresite | | **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>China Mobile International<br/>Epsilon Global Communications<br/>Equinix<br/>GTT<br/>InterCloud<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>SingTel<br/>Tata Communications<br/>Telstra Corporation<br/>Telefonica<br/>Verizon<br/>Vodafone | | **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | Supported | CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Epsilon Global Communications<br/>Equinix<br/>Lightstorm<br/>Megaport<br/>PCCW Global Limited<br/>SingTel<br/>Telehouse - KDDI |
The following table shows connectivity locations and the service providers for e
| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond<br/>Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Equinix<br/>IX Reach Megaport<br/>Orange<br/>Telus<br/>Verizon<br/>Zayo | | **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | Fibrenoire<br/>Zayo | | **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | Supported | Bell Canada<br/>Cologix<br/>Megaport<br/>Telus<br/>Zayo |
-| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | Supported | Equinix, Orange Poland, T-mobile Poland |
+| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | Supported | Equinix<br/>Orange Poland<br/>T-mobile Poland |
| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/)<br/>[Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US<br/>East US 2 | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Crown Castle<br/>Equinix<br/>Internet2<br/>InterCloud<br/>Iron Mountain<br/>IX Reach<br/>Level 3 Communications<br/>Lightpath<br/>Megaport<br/>Neutrona Networks<br/>NTT Communications<br/>Orange<br/>PacketFabric<br/>SES<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Telefonica<br/>Verizon<br/>Zayo | | **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US<br/>East US 2 | n/a | CenturyLink Cloud Connect<br/>Coresite<br/>Intelsat<br/>Megaport<br/>Momentum Telecom<br/>Viasat<br/>Zayo | | **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt<br/>Equinix<br/>Intercloud<br/>Interxion<br/>Megaport<br/>Swisscom<br/>Zayo |
If your connectivity provider isn't listed in previous sections, you can still c
* [InterXion](https://www.interxion.com/) * [NextDC](https://www.nextdc.com/) * [Megaport](https://www.megaport.com/services/microsoft-expressroute/)
+ * [Momentum Telecom](https://gomomentum.com/)
* [PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure) * [Teraco](https://www.teraco.co.za/platform-teraco/africa-cloud-exchange/)
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Previously updated : 01/12/2024 Last updated : 01/26/2024
The following table shows locations by service provider. If you want to view ava
| **[Liquid Intelligent Technologies](https://liquidcloud.africa/connect/)** | Supported | Supported | Cape Town<br/>Johannesburg | | **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported | Seoul | | **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Auckland<br/>Chicago<br/>Dallas<br/>Denver<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>Las Vegas<br/>London<br/>London2<br/>Los Angeles<br/>Madrid<br/>Melbourne<br/>Miami<br/>Minneapolis<br/>Montreal<br/>Munich<br/>New York<br/>Osaka<br/>Oslo<br/>Paris<br/>Perth<br/>Phoenix<br/>Quebec City<br/>Queretaro (Mexico)<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stavanger<br/>Stockholm<br/>Sydney<br/>Sydney2<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich |
-| **[Momentum Telecom](https://gomomentum.com/)** | Supported | Supported | Chicago<br/>New York<br/>Washington DC2 |
+| **[Momentum Telecom](https://gomomentum.com/)** | Supported | Supported | Chicago<br/>New York<br/>Washington DC2<br/>Silicon Valley |
| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | Supported | Supported | London | | **MTN Global Connect** | Supported | Supported | Cape Town<br/>Johannesburg| | **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** | Supported | Supported | Bangkok |
If your connectivity provider isn't listed in previous sections, you can still c
* [Interxion](https://www.interxion.com/products/interconnection/cloud-connect/) * [IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/) * [Megaport](https://www.megaport.com/services/microsoft-expressroute/)
+ * [Momentum Telecom](https://gomomentum.com/)
* [NextDC](https://www.nextdc.com/) * [PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure) * [Teraco](https://www.teraco.co.za/platform-teraco/africa-cloud-exchange/)
If you're remote and don't have fiber connectivity, or you want to explore other
| **[Proximus](https://www.proximus.be/en/id_cl_explore/companies-and-public-sector/networks/corporate-networks/explore.html)**| Bics | Amsterdam<br/>Dublin<br/>London<br/>Paris | | **[QSC AG](https://www2.qbeyond.de/en/)** |Interxion | Frankfurt | | **[RETN](https://retn.net/products/cloud-connect)** | Equinix | Amsterdam |
-| **[Rogers]** | Cologix<br/>Equinix | Montreal<br/>Toronto |
+| **Rogers** | Cologix<br/>Equinix | Montreal<br/>Toronto |
| **[Spectrum Enterprise](https://enterprise.spectrum.com/services/internet-networking/wan/cloud-connect.html)** | Equinix | Chicago<br/>Dallas<br/>Los Angeles<br/>New York<br/>Silicon Valley |
-| **[Tamares Telecom](http://www.tamarestelecom.com/our-services/#Connectivity)** | Equinix | London |
+| **[Tamares Telecom](https://www.tamarestelecom.com/services/)** | Equinix | London |
| **[Tata Teleservices](https://www.tatatelebusiness.com/data-services/ez-cloud-connect/)** | Tata Communications | Chennai<br/>Mumbai | | **[TDC Erhverv](https://tdc.dk/)** | Equinix | Amsterdam | | **[Telecom Italia Sparkle](https://www.tisparkle.com/our-platform/enterprise-platform/sparkle-cloud-connect)**| Equinix | Amsterdam |
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
The Azure API for FHIR supports the following query parameters. All of these par
| \_typefilter | Yes | To request finer-grained filtering, you can use \_typefilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results | | \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container. | | \_till | No | Allows you to only export resources that have been modified till the time provided. This parameter is applicable to only System-Level export. In this case, if historical versions have not been disabled or purged, export guarantees true snapshot view, or, in other words, enables time travel. |
-|\_includeHistory | No | Allows you to export versioned resources. This filter does'nt work with '_typeFilter' query parameter. |
-|\_includeDeleted | No | Allows you to export soft deleted resources. To learn more about delete, refer to [delete using FHIR specification](https://www.hl7.org/fhir/http.html#delete). This filter does'nt work with '_typeFilter' query parameter. |
+|includeAssociatedData | No | Allows you to export history and soft deleted resources. This filter doesn't work with the '_typeFilter' query parameter. Include the value 'history' to export historical (non-latest) resource versions, and the value 'deleted' to export soft deleted resources. |
> [!NOTE] > Only storage accounts in the same subscription as that for Azure API for FHIR are allowed to be registered as the destination for $export operations.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
The FHIR service supports the following query parameters for filtering exported
| `_typeFilter` | Yes | To request finer-grained filtering, you can use `_typeFilter` along with the `_type` parameter. The value of the `_typeFilter` parameter is a comma-separated list of FHIR queries that further limit the results. | | `_container` | No | Specifies the name of the container in the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container with an autogenerated name. | | `_till` | No | Allows you to export resources that have been modified till the specified time. This parameter is applicable only with System-Level export. In this case, if historical versions have not been disabled or purged, export guarantees true snapshot view, or, in other words, enables time travel. |
-|`__includeHistory` | No | Allows you to export versioned resources. This filter does'nt work with '_typeFilter' query parameter. |
-|`_includeDeleted` | No | Allows you to export soft deleted resources. To learn more about delete, refer to [delete using FHIR specification](https://www.hl7.org/fhir/http.html#delete). This filter does'nt work with '_typeFilter' query parameter. |
+|`includeAssociatedData` | No | Allows you to export history and soft deleted resources. This filter doesn't work with the `_typeFilter` query parameter. Include the value 'history' to export historical (non-latest) resource versions, and the value 'deleted' to export soft deleted resources. |
> [!NOTE] > Only storage accounts in the same subscription as the FHIR service are allowed to be registered as the destination for `$export` operations.
industrial-iot Tutorial Deploy Industrial Iot Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-deploy-industrial-iot-platform.md
The deployment script allows you to select which set of components to deploy.
Other hosting and deployment methods: - For production deployments that require staging, rollback, scaling, and resilience, the platform can be deployed into [Azure Kubernetes Service (AKS)](/azure/aks/learn/quick-kubernetes-deploy-cli)-- Deploying Azure Industrial IoT Platform microservices into an existing Kubernetes cluster using [Helm](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md).-- Deploying [Azure Kubernetes Service (AKS) cluster on top of Azure Industrial IoT Platform created by deployment script and adding Azure Industrial IoT components into the cluster](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-add-aks-to-ps1.md).
+- Deploying Azure Industrial IoT Platform microservices into an existing Kubernetes cluster using [Helm](/azure/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm).
References: - [Deploying Azure Industrial IoT Platform](/azure/industrial-iot/tutorial-deploy-industrial-iot-platform)
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
description: This tutorial walks through setting up your development machine and
Previously updated : 05/02/2023 Last updated : 01/23/2024
The [IoT Edge Dev Tool](https://github.com/Azure/iotedgedev) simplifies Azure Io
```bash mkdir c:\dev\iotedgesolution
+ cd c:\dev\iotedgesolution
``` 1. Use the **iotedgedev solution init** command to create a solution and set up your Azure IoT Hub in the development language of your choice.
The latest stable IoT Edge system module version is 1.4. Set your system modules
::: zone-end + ### Provide your registry credentials to the IoT Edge agent The environment file stores the credentials for your container registry and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your container images onto the IoT Edge device.
Check to see if your credentials exist. If not, add them now:
> [!NOTE] > This tutorial uses administrator login credentials for Azure Container Registry that are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals or repository-scoped tokens. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry). + ### Target architecture You need to select the architecture you're targeting with each solution, because that affects how the container is built and runs. The default is Linux AMD64. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device and keep the default **amd64**.
If you need to change the target architecture for your solution, use the followi
::: zone pivot="iotedge-dev-cli"
+# [C\#](#tab/csharp)
+
+The target architecture is set when you create the container image in a later step.
+
+# [C, Java, Node.js, Python](#tab/c+java+node+python)
+ 1. Open or create **settings.json** in the **.vscode** directory of your solution. 1. Change the *platform* value to `amd64`, `arm32v7`, `arm64v8`, or `windows-amd64`. For example:
If you need to change the target architecture for your solution, use the followi
} ``` ++ ::: zone-end ### Update module with custom code
The sample C# code that comes with the project template uses the [ModuleClient C
} ```
-1. Find the `SetupCallbacksForModule` function. Replace the function with the following code that adds an **else if** statement to check if the module twin has been updated.
+1. Find the `SetupCallbacksForModule` function. Replace the function with the following code that adds an **else if** statement to check if the module twin is updated.
```c static int SetupCallbacksForModule(IOTHUB_MODULE_CLIENT_LL_HANDLE iotHubModuleClientHandle)
The sample C# code that comes with the project template uses the [ModuleClient C
import com.microsoft.azure.sdk.iot.device.DeviceTwin.TwinPropertyCallBack; ```
-1. Add the following definition into class **App**. This variable sets a temperature threshold. The measured machine temperature won't be reported to IoT Hub until it goes over this value.
+1. Add the following definition into class **App**. This variable sets a temperature threshold. The measured machine temperature isn't reported to IoT Hub until it goes over this value.
```java private static final String TEMP_THRESHOLD = "TemperatureThreshold";
The sample C# code that comes with the project template uses the [ModuleClient C
# [Python](#tab/python)
-In this section, add the code that expands the *filtermodule* to analyze the messages before sending them. You'll add code that filters messages where the reported machine temperature is within the acceptable limits.
+In this section, add the code that expands the *filtermodule* to analyze the messages before sending them. You add code that filters messages where the reported machine temperature is within the acceptable limits.
1. In the Visual Studio Code explorer, open **modules** > **filtermodule** > **main.py**.
In this section, add the code that expands the *filtermodule* to analyze the mes
## Build and push your solution
-You've updated the module code and the deployment template to help understand some key deployment concepts. Now, you're ready to build your module container image and push it to your container registry.
-
-### Sign in to Docker
-
-Provide your container registry credentials to Docker so that it can push your container image to storage in the registry.
-
-1. Open the Visual Studio Code integrated terminal by selecting **Terminal** > **New Terminal**.
-
-1. Sign in to Docker with the Azure Container Registry (ACR) credentials that you saved after creating the registry.
-
- ```bash
- docker login -u <ACR username> -p <ACR password> <ACR login server>
- ```
-
- You may receive a security warning recommending the use of `--password-stdin`. While that's a recommended best practice for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-
-3. Sign in to the Azure Container Registry. You may need to [Install Azure CLI](/cli/azure/install-azure-cli) to use the `az` command. This command asks for your user name and password found in your container registry in **Settings** > **Access keys**.
-
- ```azurecli
- az acr login -n <ACR registry name>
- ```
->[!TIP]
->If you get logged out at any point in this tutorial, repeat the Docker and Azure Container Registry sign in steps to continue.
-
-### Build and push
-
-Visual Studio Code now has access to your container registry, so it's time to turn the solution code into a container image.
+You updated the module code and the deployment template to help understand some key deployment concepts. Now, you're ready to build your module container image and push it to your container registry.
In Visual Studio Code, open the **deployment.template.json** deployment manifest file. The [deployment manifest](module-deployment-monitoring.md#deployment-manifest) describes the modules to be configured on the targeted IoT Edge device. Before deployment, you need to update your Azure Container Registry credentials and your module images with the proper `createOptions` values. For more information about createOption values, see [How to configure container create options for IoT Edge modules](how-to-use-create-options.md).
For example, the *filtermodule* configuration should be similar to:
#### Build module Docker image
-Use the module's Dockerfile to [build](https://docs.docker.com/engine/reference/commandline/build/) the module Docker image.
+Open the Visual Studio Code integrated terminal by selecting **Terminal** > **New Terminal**.
+
+# [C\#](#tab/csharp)
+
+Use the `dotnet publish` command to build the container image for Linux and amd64 architecture. Change directory to the *filtermodule* directory in your project and run the *dotnet publish* command.
+
+```bash
+dotnet publish --os linux --arch x64 /t:PublishContainer
+```
+
+Currently, the *iotedgedev* tool template targets .NET 7.0. If you want to target a different version of .NET, you can edit the *filtermodule.csproj* file and change the *TargetFramework* and *PackageReference* values. For example, to target .NET 8.0, your *filtermodule.csproj* file should look like this:
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk.Worker">
+ <PropertyGroup>
+ <TargetFramework>net8.0</TargetFramework>
+ <Nullable>enable</Nullable>
+ <ImplicitUsings>enable</ImplicitUsings>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.Azure.Devices.Client" Version="1.42.0" />
+ <PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.0" />
+ <PackageReference Include="Microsoft.NET.Build.Containers" Version="8.0.101" />
+ </ItemGroup>
+</Project>
+```
+
+Tag the docker image with your container registry information, version, and architecture. Replace **myacr** with your own registry name.
+
+```bash
+docker tag filtermodule myacr.azurecr.io/filtermodule:0.0.1-amd64
+```
+
+# [C, Java, Node.js, Python](#tab/c+java+node+python)
+
+Use the module's Dockerfile to [build](https://docs.docker.com/engine/reference/commandline/build/) and tag the module Docker image.
```bash docker build --rm -f "<DockerFilePath>" -t <ImageNameAndTag> "<ContextPath>"
docker build --rm -f "<DockerFilePath>" -t <ImageNameAndTag> "<ContextPath>"
For example, to build the image for the local registry or an Azure container registry, use the following commands: ```bash
-# Build the image for the local registry
+# Build and tag the image for the local registry
docker build --rm -f "./modules/filtermodule/Dockerfile.amd64.debug" -t localhost:5000/filtermodule:0.0.1-amd64 "./modules/filtermodule"
-# Or build the image for an Azure Container Registry
+# Or build and tag the image for an Azure Container Registry. Replace myacr with your own registry name.
docker build --rm -f "./modules/filtermodule/Dockerfile.amd64.debug" -t myacr.azurecr.io/filtermodule:0.0.1-amd64 "./modules/filtermodule" ``` ++ #### Push module Docker image
-[Push](https://docs.docker.com/engine/reference/commandline/push/) your module image to the local registry or a container registry.
+Provide your container registry credentials to Docker so that it can push your container image to storage in the registry.
-```bash
-docker push <ImageName>
-```
+1. Sign in to Docker with the Azure Container Registry (ACR) credentials.
-For example:
+ ```bash
+ docker login -u <ACR username> -p <ACR password> <ACR login server>
+ ```
-```bash
-# Push the Docker image to the local registry
+ You might receive a security warning recommending the use of `--password-stdin`. While that's a recommended best practice for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-docker push localhost:5000/filtermodule:0.0.1-amd64
+1. Sign in to the Azure Container Registry. You need to [Install Azure CLI](/cli/azure/install-azure-cli) to use the `az` command. This command asks for your user name and password found in your container registry in **Settings** > **Access keys**.
-# Or push the Docker image to an Azure Container Registry
-az acr login --name myacr
-docker push myacr.azurecr.io/filtermodule:0.0.1-amd64
-```
+ ```azurecli
+ az acr login -n <ACR registry name>
+ ```
+ >[!TIP]
+ >If you get logged out at any point in this tutorial, repeat the Docker and Azure Container Registry sign in steps to continue.
+
+1. [Push](https://docs.docker.com/engine/reference/commandline/push/) your module image to the local registry or a container registry.
+ ```bash
+ docker push <ImageName>
+ ```
+
+ For example:
+
+ ```bash
+ # Push the Docker image to the local registry
+
+ docker push localhost:5000/filtermodule:0.0.1-amd64
+
+ # Or push the Docker image to an Azure Container Registry. Replace myacr with your Azure Container Registry name.
+
+ az acr login --name myacr
+ docker push myacr.azurecr.io/filtermodule:0.0.1-amd64
+ ```
+
#### Update the deployment template Update the deployment template *deployment.template.json* with the container registry image location. For example, if you're using an Azure Container Registry *myacr.azurecr.io* and your image is *filtermodule:0.0.1-amd64*, update the *filtermodule* configuration to:
This process may take several minutes the first time, but is faster the next tim
::: zone-end
-#### Update the build and image
+#### Optional: Update the module and image
-If you make changes to your module code, you need to rebuild and push the module image to your container registry. Use the steps in this section to update the build and image. You can skip this section if you didn't make any changes to your module code.
+If you make changes to your module code, you need to rebuild and push the module image to your container registry. Use the steps in this section to update the build and container image. You can skip this section if you didn't make any changes to your module code.
::: zone pivot="iotedge-dev-ext"
Notice that the two parameters that had placeholders now contain their proper va
::: zone pivot="iotedge-dev-cli"
-Build and push the updated image with a *0.0.2* version tag.
+Build and push the updated image with a *0.0.2* version tag. For example, to build and push the image for the local registry or an Azure container registry, use the following commands:
+
+# [C\#](#tab/csharp)
+
+```bash
+
+# Build the container image for Linux and amd64 architecture.
+
+dotnet publish --os linux --arch x64
+
+# For local registry:
+# Tag the image with version 0.0.2, x64 architecture, and the local registry.
+
+docker tag filtermodule localhost:5000/filtermodule:0.0.2-amd64
+
+# For Azure Container Registry:
+# Tag the image with version 0.0.2, x64 architecture, and your container registry information. Replace myacr with your own registry name.
+
+docker tag filtermodule myacr.azurecr.io/filtermodule:0.0.2-amd64
+```
+
+# [C, Java, Node.js, Python](#tab/c+java+node+python)
-For example, to build and push the image for the local registry or an Azure container registry, use the following commands:
```bash # Build and push the 0.0.2 image for the local registry
docker build --rm -f "./modules/filtermodule/Dockerfile.amd64.debug" -t localhos
docker push localhost:5000/filtermodule:0.0.2-amd64
-# Or build and push the 0.0.2 image for an Azure Container Registry
+# Or build and push the 0.0.2 image for an Azure Container Registry. Replace myacr with your own registry name.
docker build --rm -f "./modules/filtermodule/Dockerfile.amd64.debug" -t myacr.azurecr.io/filtermodule:0.0.2-amd64 "./modules/filtermodule" docker push myacr.azurecr.io/filtermodule:0.0.2-amd64 ``` +++ ::: zone-end ::: zone pivot="iotedge-dev-ext"
az iot edge set-modules --hub-name my-iot-hub --device-id my-device --content ./
1. Under your device, expand **Modules** to see a list of deployed and running modules. Select the refresh button. You should see the new *tempSensor* and *filtermodule* modules running on your device.
- It may take a few minutes for the modules to start. The IoT Edge runtime needs to receive its new deployment manifest, pull down the module images from the container runtime, then start each new module.
+ It might take a few minutes for the modules to start. The IoT Edge runtime needs to receive its new deployment manifest, pull down the module images from the container runtime, then start each new module.
## View messages from device
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
Title: Import and export Azure IoT Hub device identities
-description: How to use the Azure IoT service SDK to run bulk operations against the identity registry to import and export device identities. Import operations enable you to create, update, and delete device identities in bulk.
+ Title: Import and export device identities
+
+description: Use the Azure IoT service SDK to import and export device identities so that you can create, update, and delete device identities in bulk.
Previously updated : 06/16/2023 Last updated : 01/25/2024 # Import and export IoT Hub device identities in bulk
-Each IoT hub has an identity registry you can use to create per-device resources in the service. The identity registry also enables you to control access to the device-facing endpoints. This article describes how to import and export device identities in bulk to and from an identity registry, using the ImportExportDeviceSample sample included with the [Microsoft Azure IoT SDK for .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main). For more information about how you can use this capability when migrating an IoT hub to a different region, see [How to manually migrate an Azure IoT hub using an Azure Resource Manager template](migrate-hub-arm.md).
+Each IoT hub has an identity registry that you can use to create device resources in the service. The identity registry also enables you to control access to the device-facing endpoints. This article describes how to import and export device identities in bulk to and from an identity registry, using the ImportExportDeviceSample sample included with the [Microsoft Azure IoT SDK for .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main). For more information about how you can use this capability when migrating an IoT hub to a different region, see [How to manually migrate an Azure IoT hub using an Azure Resource Manager template](migrate-hub-arm.md).
> [!NOTE]
-> IoT Hub has recently added virtual network support in a limited number of regions. This feature secures import and export operations and eliminates the need to pass keys for authentication. Initially, virtual network support is available only in these regions: *WestUS2*, *EastUS*, and *SouthCentralUS*. To learn more about virtual network support and the API calls to implement it, see [IoT Hub Support for virtual networks](virtual-network-support.md).
+> IoT Hub recently added virtual network support in a limited number of regions. This feature secures import and export operations and eliminates the need to pass keys for authentication. Currently, virtual network support is available only in these regions: *WestUS2*, *EastUS*, and *SouthCentralUS*. To learn more about virtual network support and the API calls to implement it, see [IoT Hub Support for virtual networks](virtual-network-support.md).
-Import and export operations take place in the context of *Jobs* that enable you to execute bulk service operations against an IoT hub.
+Import and export operations take place in the context of *jobs* that enable you to execute bulk service operations against an IoT hub.
The **RegistryManager** class in the SDK includes the **ExportDevicesAsync** and **ImportDevicesAsync** methods that use the **Job** framework. These methods enable you to export, import, and synchronize the entirety of an IoT hub identity registry.
-This topic discusses using the **RegistryManager** class and **Job** system to perform bulk imports and exports of devices to and from an IoT hub's identity registry. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](../iot-dps/index.yml).
+This article discusses using the **RegistryManager** class and **Job** system to perform bulk imports and exports of devices to and from an IoT hub's identity registry. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs. To learn more, see the [provisioning service documentation](../iot-dps/index.yml).
> [!NOTE] > Some of the code snippets in this article are included from the ImportExportDevicesSample service sample provided with the [Microsoft Azure IoT SDK for .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main). The sample is located in the `/iothub/service/samples/how to guides/ImportExportDevicesSample` folder of the SDK and, where specified, code snippets are included from the `ImportExportDevicesSample.cs` file for that SDK sample. For more information about the ImportExportDevicesSample sample and other service samples included in the Azure IoT SDK for.NET, see [Azure IoT hub service samples for C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/how%20to%20guides). ## What are jobs?
-Identity registry operations use the **Job** system when the operation:
+Identity registry operations use the job system when the operation:
* Has a potentially long execution time compared to standard run-time operations. * Returns a large amount of data to the user.
-Instead of a single API call waiting or blocking on the result of the operation, the operation asynchronously creates a **Job** for that IoT hub. The operation then immediately returns a **JobProperties** object.
+Instead of a single API call waiting or blocking on the result of the operation, the operation asynchronously creates a job for that IoT hub. The operation then immediately returns a **JobProperties** object.
The following C# code snippet shows how to create an export job: ```csharp
-// Call an export job on the IoT Hub to retrieve all devices
+// Call an export job on the IoT hub to retrieve all devices
JobProperties exportJob = await registryManager.ExportDevicesAsync(containerSasUri, false); ```
RegistryManager registryManager =
To find the connection string for your IoT hub, in the Azure portal: -- Navigate to your IoT hub.
+1. Navigate to your IoT hub.
-- Select **Shared access policies**.
+1. Select **Shared access policies**.
-- Select a policy, taking into account the permissions you need.
+1. Select a policy, taking into account the permissions you need.
-- Copy the connection string from the panel on the right-hand side of the screen.
+1. Copy the connection string for that policy.
The following C# code snippet, from the **WaitForJobAsync** method in the SDK sample, shows how to poll every five seconds to see if the job has finished executing:
Only one active device import or export job is allowed at a time for all IoT Hub
## Export devices
-Use the **ExportDevicesAsync** method to export the entirety of an IoT hub identity registry to an Azure Storage blob container using a shared access signature (SAS). For more information about shared access signatures, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../storage/common/storage-sas-overview.md).
-
-This method enables you to create reliable backups of your device information in a blob container that you control.
+Use the **ExportDevicesAsync** method to export the entirety of an IoT hub identity registry to an Azure Storage blob container using a shared access signature (SAS). This method enables you to create reliable backups of your device information in a blob container that you control.
The **ExportDevicesAsync** method requires two parameters:
If a device has twin data, then the twin data is also exported together with the
"id":"export-6d84f075-0", "eTag":"MQ==", "status":"enabled",
- "statusReason":"firstUpdate",
"authentication":null, "twinETag":"AAAAAAAAAAI=", "tags":{
If a device has twin data, then the twin data is also exported together with the
"Temperature":75.1, "Unit":"F" },
- "$metadata":{
- "$lastUpdated":"2017-03-09T18:30:52.3167248Z",
- "$lastUpdatedVersion":2,
- "Thermostat":{
- "$lastUpdated":"2017-03-09T18:30:52.3167248Z",
- "$lastUpdatedVersion":2,
- "Temperature":{
- "$lastUpdated":"2017-03-09T18:30:52.3167248Z",
- "$lastUpdatedVersion":2
- },
- "Unit":{
- "$lastUpdated":"2017-03-09T18:30:52.3167248Z",
- "$lastUpdatedVersion":2
- }
- }
- },
- "$version":2
},
- "reported":{
- "$metadata":{
- "$lastUpdated":"2017-03-09T18:30:51.1309437Z"
- },
- "$version":1
- }
+ "reported":{}
} } ```
-If you need access to this data in code, you can easily deserialize this data using the **ExportImportDevice** class. The following C# code snippet, from the **ReadFromBlobAsync** method in the SDK sample, shows how to read device information that was previously exported from **ExportImportDevice** into a **BlobClient** instance:
+If you need access to this data in code, you can deserialize this data using the **ExportImportDevice** class. The following C# code snippet, from the **ReadFromBlobAsync** method in the SDK sample, shows how to read device information that was previously exported from **ExportImportDevice** into a **BlobClient** instance:
```csharp private static async Task<List<string>> ReadFromBlobAsync(BlobClient blobClient)
Take care using the **ImportDevicesAsync** method because in addition to provisi
The **ImportDevicesAsync** method takes two parameters:
-* A *string* that contains a URI of an [Azure Storage](../storage/index.yml) blob container to use as *input* to the job. This URI must contain a SAS token that grants read access to the container. This container must contain a blob with the name **devices.txt** that contains the serialized device data to import into your identity registry. The import data must contain device information in the same JSON format that the **ExportImportDevice** job uses when it creates a **devices.txt** blob. The SAS token must include these permissions:
+* A *string* that contains a URI of an Azure Storage blob container to use as *input* to the job. This URI must contain a SAS token that grants read access to the container. This container must contain a blob with the name **devices.txt** that contains the serialized device data to import into your identity registry. The import data must contain device information in the same JSON format that the **ExportImportDevice** job uses when it creates a **devices.txt** blob. The SAS token must include these permissions:
```csharp SharedAccessBlobPermissions.Read ```
-* A *string* that contains a URI of an [Azure Storage](../storage/index.yml) blob container to use as *output* from the job. The job creates a block blob in this container to store any error information from the completed import **Job**. The SAS token must include these permissions:
+* A *string* that contains a URI of an Azure Storage blob container to use as *output* from the job. The job creates a block blob in this container to store any error information from the completed import job. The SAS token must include these permissions:
```csharp SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Read
JobProperties importJob =
await registryManager.ImportDevicesAsync(containerSasUri, containerSasUri); ```
-This method can also be used to import the data for the device twin. The format for the data input is the same as the format shown in the **ExportDevicesAsync** section. In this way, you can reimport the exported data. The **$metadata** is optional.
+This method can also be used to import the data for the device twin. The format for the data input is the same as the format shown in the **ExportDevicesAsync** section. In this way, you can reimport the exported data.
## Import behavior
You can use the **ImportDevicesAsync** method to perform the following bulk oper
You can perform any combination of the preceding operations within a single **ImportDevicesAsync** call. For example, you can register new devices and delete or update existing devices at the same time. When used along with the **ExportDevicesAsync** method, you can completely migrate all your devices from one IoT hub to another.
-If the import file includes twin metadata, then this metadata overwrites the existing twin metadata. If the import file doesn't include twin metadata, then only the `lastUpdateTime` metadata is updated using the current time.
- Use the optional **importMode** property in the import serialization data for each device to control the import process per-device. The **importMode** property has the following options:
-| importMode | Description |
-| | |
-| **Create** |If a device doesn't exist with the specified **ID**, it's newly registered. If the device already exists, an error is written to the log file. |
-| **CreateOrUpdate** |If a device doesn't exist with the specified **ID**, it's newly registered. If the device already exists, existing information is overwritten with the provided input data without regard to the **ETag** value. |
-| **CreateOrUpdateIfMatchETag** |If a device doesn't exist with the specified **ID**, it's newly registered. If the device already exists, existing information is overwritten with the provided input data only if there's an **ETag** match. If there's an **ETag** mismatch, an error is written to the log file. |
-| **Delete** |If a device already exists with the specified **ID**, it's deleted without regard to the **ETag** value. If the device doesn't exist, an error is written to the log file. |
-| **DeleteIfMatchETag** |If a device already exists with the specified **ID**, it's deleted only if there's an **ETag** match. If the device doesn't exist, an error is written to the log file. If there's an ETag mismatch, an error is written to the log file. |
-| **Update** |If a device already exists with the specified **ID**, existing information is overwritten with the provided input data without regard to the **ETag** value. If the device doesn't exist, an error is written to the log file. |
-| **UpdateIfMatchETag** |If a device already exists with the specified **ID**, existing information is overwritten with the provided input data only if there's an **ETag** match. If the device doesn't exist or there's an **ETag** mismatch, an error is written to the log file. |
-| **UpdateTwin** |If a twin already exists with the specified **ID**, existing information is overwritten with the provided input data without regard to the twin's **ETag** value. |
-| **UpdateTwinIfMatchETag** |If a twin already exists with the specified **ID**, existing information is overwritten with the provided input data only if there's a match on the twin's **ETag** value. The twin's **ETag** is processed independently from the device's **ETag**. If there's a mismatch with the existing twin's **ETag**, an error is written to the log file. |
+* **Create**
+* **CreateOrUpdate** (default)
+* **CreateOrUpdateIfMatchETag**
+* **Delete**
+* **DeleteIfMatchETag**
+* **Update**
+* **UpdateIfMatchETag**
+* **UpdateTwin**
+* **UpdateTwinIfMatchETag**
-> [!NOTE]
-> If the serialization data doesn't explicitly define an **importMode** flag for a device, it defaults to **createOrUpdate** during the import operation.
+For details about each of these import mode options, see [ImportMode](/dotnet/api/microsoft.azure.devices.importmode).
-## Import troubleshooting
+## Troubleshoot import jobs
-Using an import job to create devices may fail with a quota issue when it's close to the device count limit of the IoT hub. This failure can happen even if the total device count is still lower than the quota limit. The **IotHubQuotaExceeded (403002)** error is returned with the following error message: "Total number of devices on IotHub exceeded the allocated quota.ΓÇ¥
+Using an import job to create devices might fail with a quota issue when it's close to the device count limit of the IoT hub. This failure can happen even if the total device count is still lower than the quota limit. The **IotHubQuotaExceeded (403002)** error is returned with the following error message: "Total number of devices on IotHub exceeded the allocated quota."
If you get this error, you can use the following query to return the total number of devices registered on your IoT hub:
SELECT COUNT() as totalNumberOfDevices FROM devices
For information about the total number of devices that can be registered to an IoT hub, see [IoT Hub limits](iot-hub-devguide-quotas-throttling.md#other-limits).
-If there's still quota available, you can examine the job output blob for devices that failed with the **IotHubQuotaExceeded (403002)** error. You can then try adding these devices individually to the IoT hub. For example, you can use the **AddDeviceAsync** or **AddDeviceWithTwinAsync** methods. Don't try to add the devices using another job as you may likely encounter the same error.
+If there's still quota available, you can examine the job output blob for devices that failed with the **IotHubQuotaExceeded (403002)** error. You can then try adding these devices individually to the IoT hub. For example, you can use the **AddDeviceAsync** or **AddDeviceWithTwinAsync** methods. Don't try to add the devices using another job as you might encounter the same error.
## Import devices example – bulk device provisioning
private async Task GenerateDevicesAsync(RegistryManager registryManager, int num
Console.WriteLine($"GenerateDevices, time elapsed = {stopwatch.Elapsed}."); } ```
-
+ ## Import devices example – bulk deletion The following C# code snippet, from the **DeleteFromHubAsync** method in the SDK sample, shows you how to delete all of the devices from an IoT hub:
iot-hub Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-error-codes.md
This error typically occurs when the daily message quota for the IoT hub is exce
* To understand how operations are counted toward the quota, such as twin queries and direct methods, see [Understand IoT Hub pricing](iot-hub-devguide-pricing.md#charges-per-operation). * To set up monitoring for daily quota usage, set up an alert with the metric *Total number of messages used*. For step-by-step instructions, see [Set up metrics and alerts with IoT Hub](tutorial-use-metrics-and-diags.md#set-up-metrics).
-This error may also be returned by a bulk import job when the number of devices registered to your IoT hub approaches or exceeds the quota limit for an IoT hub. To learn more, see [Troubleshoot import jobs](iot-hub-bulk-identity-mgmt.md#import-troubleshooting).
+This error may also be returned by a bulk import job when the number of devices registered to your IoT hub approaches or exceeds the quota limit for an IoT hub. To learn more, see [Troubleshoot import jobs](iot-hub-bulk-identity-mgmt.md#troubleshoot-import-jobs).
## 403004 Device maximum queue depth exceeded
iot-operations Howto Configure Destination Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-destination-fabric.md
The _Fabric Lakehouse_ destination stage JSON configuration defines the details
| WorkspaceId | String | The lakehouse workspace ID. | Yes | - | |
| LakehouseId | String | The lakehouse ID. | Yes | - | |
| Table | String | The name of the table to write to. | Yes | - | |
-| File path<sup>1</sup> | [Template](../process-dat#templates) | The file path for where to write the parquet file to. | No | `{instanceId}/{pipelineId}/{partitionId}/{YYYY}/{MM}/{DD}/{HH}/{mm}/{fileNumber}` | |
+| File path<sup>1</sup> | [Template](../process-dat#templates) | The file path for where to write the parquet file to. | No | `{{{instanceId}}}/{{{pipelineId}}}/{{{partitionId}}}/{{{YYYY}}}/{{{MM}}}/{{{DD}}}/{{{HH}}}/{{{mm}}}/{{{fileNumber}}}` | |
| Batch<sup>2</sup> | [Batch](../process-dat#batch) data. | No | `60s` | `10s` |
| Authentication<sup>3</sup> | The authentication details to connect to Microsoft Fabric. | Service principal | Yes | - |
| Columns&nbsp;>&nbsp;Name | string | The name of the column. | Yes | | `temperature` |
iot-operations Howto Configure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/monitor/howto-configure-diagnostics.md
The diagnostics service processes and collates diagnostic signals from various A
| logLevel | false | String | `info` | Log level. `trace`, `debug`, `info`, `warn`, or `error`. |
| maxDataStorageSize | false | Unsigned integer | `16` | Maximum data storage size in MiB |
| metricsPort | false | Int32 | `9600` | Port for metrics |
-| openTelemetryCollectorAddr | false | String | `null` | Endpoint URL of the OpenTelemetry collector |
+| openTelemetryTracesCollectorAddr | false | String | `null` | Endpoint URL of the OpenTelemetry collector |
| staleDataTimeoutSeconds | false | Int32 | `600` | Data timeouts in seconds |

Here's an example of a Diagnostics service resource with basic configuration:
key-vault Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/built-in-roles.md
Previously updated : 11/06/2023 Last updated : 01/25/2024
To manage control plane permissions for the Managed HSM resource, you must use [
|Managed HSM Crypto Auditor|Grants read permissions to read (but not use) key attributes.|2c18b078-7c48-4d3a-af88-5a3a1b3f82b3|
|Managed HSM Crypto Service Encryption User| Grants permissions to use a key for service encryption. |33413926-3206-4cdd-b39a-83574fe37a17|
|Managed HSM Backup| Grants permissions to perform single-key or whole-HSM backup.|7b127d3c-77bd-4e3e-bbe0-dbb8971fa7f8|
+|Managed HSM Crypto Service Release User| Grants permissions to release a key to a trusted execution environment. |21dbd100-6940-42c2-9190-5d6cb909625c|
## Permitted operations
To manage control plane permissions for the Managed HSM resource, you must use [
> - All the data action names have the prefix **Microsoft.KeyVault/managedHsm**, which is omitted in the table for brevity. > - All role names have the prefix **Managed HSM**, which is omitted in the following table for brevity.
-|Data action | Administrator | Crypto Officer | Crypto User | Policy Administrator | Crypto Service Encryption User | Backup | Crypto Auditor|
-|||||||||
-|**Security domain management**|
-/securitydomain/download/action|<center>X</center>||||||
-/securitydomain/upload/action|<center>X</center>||||||
-/securitydomain/upload/read|<center>X</center>||||||
-/securitydomain/transferkey/read|<center>X</center>||||||
-|**Key management**|
-|/keys/read/action|||<center>X</center>||<center>X</center>||<center>X</center>|
-|/keys/write/action|||<center>X</center>||||
-|/keys/rotate/action|||<center>X</center>||||
-|/keys/create|||<center>X</center>||||
-|/keys/delete|||<center>X</center>||||
-|/keys/deletedKeys/read/action||<center>X</center>|||||
-|/keys/deletedKeys/recover/action||<center>X</center>|||||
-|/keys/deletedKeys/delete||<center>X</center>|||||<center>X</center>|
-|/keys/backup/action|||<center>X</center>|||<center>X</center>|
-|/keys/restore/action|||<center>X</center>||||
-|/keys/release/action|||<center>X</center>||||
-|/keys/import/action|||<center>X</center>||||
-|**Key cryptographic operations**|
-|/keys/encrypt/action|||<center>X</center>||||
-|/keys/decrypt/action|||<center>X</center>||||
-|/keys/wrap/action|||<center>X</center>||<center>X</center>||
-|/keys/unwrap/action|||<center>X</center>||<center>X</center>||
-|/keys/sign/action|||<center>X</center>||||
-|/keys/verify/action|||<center>X</center>||||
-|**Role management**|
-|/roleAssignments/read/action|<center>X</center>|<center>X</center>|<center>X</center>|<center>X</center>|||<center>X</center>
-|/roleAssignments/write/action|<center>X</center>|<center>X</center>||<center>X</center>|||
-|/roleAssignments/delete/action|<center>X</center>|<center>X</center>||<center>X</center>|||
-|/roleDefinitions/read/action|<center>X</center>|<center>X</center>|<center>X</center>|<center>X</center>|||<center>X</center>
-|/roleDefinitions/write/action|<center>X</center>|<center>X</center>||<center>X</center>|||
-|/roleDefinitions/delete/action|<center>X</center>|<center>X</center>||<center>X</center>|||
-|**Backup and restore management**|
-|/backup/start/action|<center>X</center>|||||<center>X</center>|
-|/backup/status/action|<center>X</center>|||||<center>X</center>|
-|/restore/start/action|<center>X</center>||||||
-|/restore/status/action|<center>X</center>||||||
-||||||||
+|Data action | Administrator | Crypto Officer | Crypto User | Policy Administrator | Crypto Service Encryption User | Backup | Crypto Auditor| Crypto Service Release User|
+|---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+|**Security domain management**|||||||||
+|/securitydomain/download/action|X||||||||
+|/securitydomain/upload/action|X||||||||
+|/securitydomain/upload/read|X||||||||
+|/securitydomain/transferkey/read|X||||||||
+|**Key management**|||||||||
+|/keys/read/action|||X||X||X||
+|/keys/write/action|||X||||||
+|/keys/rotate/action|||X||||||
+|/keys/create|||X||||||
+|/keys/delete|||X||||||
+|/keys/deletedKeys/read/action||X|||||||
+|/keys/deletedKeys/recover/action||X|||||||
+|/keys/deletedKeys/delete||X|||||X||
+|/keys/backup/action|||X|||X|||
+|/keys/restore/action|||X||||||
+|/keys/release/action|||X|||||X |
+|/keys/import/action|||X||||||
+|**Key cryptographic operations**|||||||||
+|/keys/encrypt/action|||X||||||
+|/keys/decrypt/action|||X||||||
+|/keys/wrap/action|||X||X||||
+|/keys/unwrap/action|||X||X||||
+|/keys/sign/action|||X||||||
+|/keys/verify/action|||X||||||
+|**Role management**|||||||||
+|/roleAssignments/read/action|X|X|X|X|||X||
+|/roleAssignments/write/action|X|X||X|||||
+|/roleAssignments/delete/action|X|X||X|||||
+|/roleDefinitions/read/action|X|X|X|X|||X||
+|/roleDefinitions/write/action|X|X||X|||||
+|/roleDefinitions/delete/action|X|X||X|||||
+|**Backup and restore management**|||||||||
+|/backup/start/action|X|||||X|||
+|/backup/status/action|X|||||X|||
+|/restore/start/action|X||||||||
+|/restore/status/action|X||||||||
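As a usage sketch, here's how one of these built-in roles might be assigned with the Azure CLI; the HSM name, assignee, and scope are placeholders to adapt to your environment:

```azurecli
# Grant a user the Managed HSM Crypto User role over all keys in the managed HSM.
az keyvault role assignment create \
    --hsm-name ContosoMHSM \
    --role "Managed HSM Crypto User" \
    --assignee user@contoso.com \
    --scope /keys
```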
## Next steps
load-balancer Load Balancer Standard Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-diagnostics.md
The various load balancer configurations provide the following metrics:
| Packet count | Public and internal load balancer | A standard load balancer reports the packets processed per front end.| Sum |

>[!NOTE]
- >When using distributing traffic from an internal load balancer through an NVA or firewall syn packet, byte count, and packet count metrics are not be available and will show as zero.
+ >Bandwidth-related metrics such as SYN packet, byte count, and packet count will not capture any traffic to an internal load balancer via a UDR (for example, from an NVA or firewall).
>
>Max and min aggregations are not available for the SYN count, packet count, SNAT connection count, and byte count metrics.
>Count aggregation is not recommended for Data path availability and health probe status. Use average instead for best represented health data.
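To retrieve these metrics from the command line, here's a minimal sketch with the Azure CLI. The resource ID is a placeholder, and the metric name `VipAvailability` (data path availability) is an assumption about the underlying metric identifier:

```azurecli
# List one-minute averages of the data path availability metric for a load balancer.
az monitor metrics list \
    --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/loadBalancers/<lb-name>" \
    --metric VipAvailability \
    --aggregation Average \
    --interval PT1M
```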
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
Title: What are compute targets
+ Title: Understand compute targets
description: Learn how to designate a compute resource or environment to train or deploy your model with Azure Machine Learning.
Previously updated : 10/19/2022 Last updated : 01/23/2024 - ignite-fall-2021 - event-tier1-build-2022
monikerRange: 'azureml-api-2 || azureml-api-1'
A *compute target* is a designated compute resource or environment where you run your training script or host your service deployment. This location might be your local machine or a cloud-based compute resource. Using compute targets makes it easy for you to later change your compute environment without having to change your code.
-In a typical model development lifecycle, you might:
+Azure Machine Learning has varying support across different compute targets. In a typical model development lifecycle, you might:
1. Start by developing and experimenting on a small amount of data. At this stage, use your local environment, such as a local computer or cloud-based virtual machine (VM), as your compute target. 1. Scale up to larger data, or do [distributed training](how-to-train-distributed-gpu.md) by using one of these [training compute targets](#training-compute-targets).
The compute resources you use for your compute targets are attached to a [worksp
## Training compute targets
-Azure Machine Learning has varying support across different compute targets. A typical model development lifecycle starts with development or experimentation on a small amount of data. At this stage, use a local environment like your local computer or a cloud-based VM. As you scale up your training on larger datasets or perform [distributed training](how-to-train-distributed-gpu.md), use Azure Machine Learning compute to create a single- or multi-node cluster that autoscales each time you submit a job. You can also attach your own compute resource, although support for different scenarios might vary.
+As you scale up your training on larger datasets or perform [distributed training](how-to-train-distributed-gpu.md), use Azure Machine Learning compute to create a single- or multi-node cluster that autoscales each time you submit a job. You can also attach your own compute resource, although support for different scenarios might vary.
[!INCLUDE [aml-compute-target-train](includes/aml-compute-target-train.md)]

## Compute targets for inference

When performing inference, Azure Machine Learning creates a Docker container that hosts the model and associated resources needed to use it. This container is then used in a compute target.
-The compute target you use to host your model will affect the cost and availability of your deployed endpoint. Use this table to choose an appropriate compute target.
+The compute target you use to host your model affects the cost and availability of your deployed endpoint. Use this table to choose an appropriate compute target.
:::moniker range="azureml-api-2" | Compute target | Used for | GPU support | Description |
-| -- | -- | -- | -- |
-| [Azure Machine Learning endpoints](~/articles/machine-learning/concept-endpoints.md) | Real-time inference <br/><br/>Batch&nbsp;inference | Yes | Fully managed computes for real-time (managed online endpoints) and batch scoring (batch endpoints) on serverless compute. |
-| [Azure Machine Learning Kubernetes](~/articles/machine-learning/how-to-attach-kubernetes-anywhere.md) | Real-time inference <br/><br/> Batch inference | Yes | Run inferencing workloads on on-premises, cloud, and edge Kubernetes clusters. |
+| -- | -- | -- | -- |
+| [Azure Machine Learning endpoints](~/articles/machine-learning/concept-endpoints.md) | Real-time inference <br><br>Batch inference | Yes | Fully managed computes for real-time (managed online endpoints) and batch scoring (batch endpoints) on serverless compute. |
+| [Azure Machine Learning Kubernetes](~/articles/machine-learning/how-to-attach-kubernetes-anywhere.md) | Real-time inference <br><br> Batch inference | Yes | Run inference workloads on on-premises, cloud, and edge Kubernetes clusters. |
:::moniker-end :::moniker range="azureml-api-1" | Compute target | Used for | GPU support | Description |
-| -- | -- | -- | -- |
-| [Local&nbsp;web&nbsp;service](~/articles/machine-learning/v1/how-to-deploy-local-container-notebook-vm.md) | Testing/debugging | &nbsp; | Use for limited testing and troubleshooting. Hardware acceleration depends on use of libraries in the local system. |
-| [Azure Machine Learning Kubernetes](~/articles/machine-learning/v1/how-to-deploy-azure-kubernetes-service.md) | Real-time inference | Yes | Run inferencing workloads in the cloud. |
-| [Azure Container Instances](~/articles/machine-learning/v1/how-to-deploy-azure-container-instance.md) | Real-time inference <br/><br/> Recommended for dev/test purposes only.| &nbsp; | Use for low-scale CPU-based workloads that require less than 48 GB of RAM. Doesn't require you to manage a cluster.<br/><br/> Only suitable for models less than 1 GB in size.<br/><br/> Supported in the designer. |
+| -- | -- | -- | -- |
+| [Local web service](~/articles/machine-learning/v1/how-to-deploy-local-container-notebook-vm.md) | Testing/debugging | &nbsp; | Use for limited testing and troubleshooting. Hardware acceleration depends on use of libraries in the local system. |
+| [Azure Machine Learning Kubernetes](~/articles/machine-learning/v1/how-to-deploy-azure-kubernetes-service.md) | Real-time inference | Yes | Run inference workloads in the cloud. |
+| [Azure Container Instances](~/articles/machine-learning/v1/how-to-deploy-azure-container-instance.md) | Real-time inference <br><br> Recommended for dev/test purposes only.| &nbsp; | Use for low-scale CPU-based workloads that require less than 48 GB of RAM. Doesn't require you to manage a cluster.<br><br> Only suitable for models less than 1 GB in size.<br><br> Supported in the designer. |
:::moniker-end > [!NOTE] > When choosing a cluster SKU, first scale up and then scale out. Start with a machine that has 150% of the RAM your model requires, profile the result and find a machine that has the performance you need. Once you've learned that, increase the number of machines to fit your need for concurrent inference. :::moniker range="azureml-api-2"
-Learn [where and how to deploy your model to a compute target](how-to-deploy-online-endpoints.md).
+[Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md).
:::moniker-end :::moniker range="azureml-api-1"
-Learn [where and how to deploy your model to a compute target](./v1/how-to-deploy-and-where.md).
+[Deploy machine learning models to Azure](./v1/how-to-deploy-and-where.md).
:::moniker-end ## Azure Machine Learning compute (managed)
Azure Machine Learning creates and manages the managed compute resources. This t
There's no need to create serverless compute. You can create Azure Machine Learning compute instances or compute clusters from:
-* [Azure Machine Learning studio](how-to-create-attach-compute-studio.md).
+* [Azure Machine Learning studio](how-to-create-attach-compute-studio.md)
* The Python SDK and the Azure CLI:
- * [Compute instance](how-to-create-compute-instance.md).
- * [Compute cluster](how-to-create-attach-compute-cluster.md).
+ * [Compute instance](how-to-create-compute-instance.md)
+ * [Compute cluster](how-to-create-attach-compute-cluster.md)
* An Azure Resource Manager template. For an example template, see [Create an Azure Machine Learning compute cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-amlcompute). [!INCLUDE [serverless compute](./includes/serverless-compute.md)] When created, these compute resources are automatically part of your workspace, unlike other kinds of compute targets. - |Capability |Compute cluster |Compute instance | |||| |Single- or multi-node cluster | **&check;** | Single node cluster |
When created, these compute resources are automatically part of your workspace,
|Automatic cluster management and job scheduling | **&check;** | **&check;** | |Support for both CPU and GPU resources | **&check;** | **&check;** | - > [!NOTE] > To avoid charges when the compute is idle:
-> * For compute *cluster* make sure the minimum number of nodes is set to 0, or use [serverless compute](./how-to-use-serverless-compute.md).
+> * For a compute *cluster*, make sure the minimum number of nodes is set to 0, or use [serverless compute](./how-to-use-serverless-compute.md).
> * For a compute *instance*, [enable idle shutdown](how-to-create-compute-instance.md#configure-idle-shutdown). ### Supported VM series and sizes [!INCLUDE [retiring vms](./includes/retiring-vms.md)] - When you select a node size for a managed compute resource in Azure Machine Learning, you can choose from among select VM sizes available in Azure. Azure offers a range of sizes for Linux and Windows for different workloads. To learn more, see [VM types and sizes](../virtual-machines/sizes.md). There are a few exceptions and limitations to choosing a VM size:
See the following table to learn more about supported series.
| [NCasT4_v3](../virtual-machines/nct4-v3-series.md) | GPU | Compute clusters and instance | | [NDasrA100_v4](../virtual-machines/nda100-v4-series.md) | GPU | Compute clusters and instance | - While Azure Machine Learning supports these VM series, they might not be available in all Azure regions. To check whether VM series are available, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines). > [!NOTE]
-> Azure Machine Learning doesn't support all VM sizes that Azure Compute supports. To list the available VM sizes, use one of the following methods:
+> Azure Machine Learning doesn't support all VM sizes that Azure Compute supports. To list the available VM sizes, use the following method:
> * [REST API](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/machinelearningservices/resource-manager/Microsoft.MachineLearningServices/stable/2020-08-01/examples/ListVMSizesResult.json) :::moniker range="azureml-api-2"
+> [!NOTE]
+> Azure Machine Learning doesn't support all VM sizes that Azure Compute supports. To list the available VM sizes, use one of the following methods:
+> * [REST API](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/machinelearningservices/resource-manager/Microsoft.MachineLearningServices/stable/2020-08-01/examples/ListVMSizesResult.json)
> * The [Azure CLI extension 2.0 for machine learning](how-to-configure-cli.md) command, [az ml compute list-sizes](/cli/azure/ml/compute#az-ml-compute-list-sizes). :::moniker-end
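For example, a quick sketch of listing the supported sizes with the v2 CLI; the resource group and workspace names are placeholders:

```azurecli
# List the VM sizes available to Azure Machine Learning in the workspace's region.
az ml compute list-sizes --resource-group my-resource-group --workspace-name my-workspace
```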
-If using the GPU-enabled compute targets, it is important to ensure that the correct CUDA drivers are installed in the training environment. Use the following table to determine the correct CUDA version to use:
+If you use the GPU-enabled compute targets, it's important to ensure that the correct CUDA drivers are installed in the training environment. Use the following table to determine the correct CUDA version to use:
-| **GPU Architecture** | **Azure VM Series** | **Supported CUDA versions** |
+| **GPU Architecture** | **Azure VM series** | **Supported CUDA versions** |
|---|---|---|
| Ampere | NDA100_v4 | 11.0+ |
| Turing | NCT4_v3 | 10.0+ |
If using the GPU-enabled compute targets, it is important to ensure that the cor
| Maxwell | NV, NVv3 | 9.0+ |
| Kepler | NC, NC Promo| 9.0+ |
-In addition to ensuring the CUDA version and hardware are compatible, also ensure that the CUDA version is compatible with the version of the machine learning framework you are using:
+In addition to ensuring the CUDA version and hardware are compatible, also ensure that the CUDA version is compatible with the version of the machine learning framework you're using:
-- For PyTorch, you can check the compatibility by visiting [Pytorch's previous versions page](https://pytorch.org/get-started/previous-versions/).
+- For PyTorch, you can check the compatibility by visiting [Pytorch's previous versions page](https://pytorch.org/get-started/previous-versions/).
- For Tensorflow, you can check the compatibility by visiting [Tensorflow's build from source page](https://www.tensorflow.org/install/source#gpu). ### Compute isolation
-Azure Machine Learning compute offers VM sizes that are isolated to a specific hardware type and dedicated to a single customer. Isolated VM sizes are best suited for workloads that require a high degree of isolation from other customers' workloads for reasons that include meeting compliance and regulatory requirements. Utilizing an isolated size guarantees that your VM will be the only one running on that specific server instance.
+Azure Machine Learning compute offers VM sizes that are isolated to a specific hardware type and dedicated to a single customer. Isolated VM sizes are best suited for workloads that require a high degree of isolation from other customers' workloads for reasons that include meeting compliance and regulatory requirements. Utilizing an isolated size guarantees that your VM is the only one running on that specific server instance.
The current isolated VM offerings include:

* Standard_M128ms
* Standard_F72s_v2
* Standard_NC24s_v3
-* Standard_NC24rs_v3*
-
-*RDMA capable
+* Standard_NC24rs_v3 (RDMA capable)
To learn more about isolation, see [Isolation in the Azure public cloud](../security/fundamentals/isolation-choices.md).

## Unmanaged compute
-An unmanaged compute target is *not* managed by Azure Machine Learning. You create this type of compute target outside Azure Machine Learning and then attach it to your workspace. Unmanaged compute resources can require additional steps for you to maintain or to improve performance for machine learning workloads.
+Azure Machine Learning doesn't manage an *unmanaged* compute target. You create this type of compute target outside Azure Machine Learning and then attach it to your workspace. Unmanaged compute resources can require extra steps for you to maintain or to improve performance for machine learning workloads.
Azure Machine Learning supports the following unmanaged compute types:
Azure Machine Learning supports the following unmanaged compute types:
* Azure Databricks * Azure Data Lake Analytics :::moniker range="azureml-api-1"
-* [Azure Synapse Spark pool](v1/how-to-link-synapse-ml-workspaces.md) (preview)
-
- > [!TIP]
- > Currently this requires the Azure Machine Learning SDK v1.
+* [Azure Kubernetes Service](./v1/how-to-create-attach-kubernetes.md)
+* [Azure Synapse Spark pool](v1/how-to-link-synapse-ml-workspaces.md) (deprecated)
:::moniker-end :::moniker range="azureml-api-2" * [Kubernetes](how-to-attach-kubernetes-anywhere.md) :::moniker-end
-* [Azure Kubernetes Service](./v1/how-to-create-attach-kubernetes.md)
For more information, see [Manage compute resources](how-to-create-attach-compute-studio.md).
-## Next steps
+## Next step
-Learn how to:
:::moniker range="azureml-api-2"
-* [Deploy your model to a compute target](how-to-deploy-online-endpoints.md)
+* [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
:::moniker-end :::moniker range="azureml-api-1"
-* [Deploy your model](./v1/how-to-deploy-and-where.md)
+* [Deploy machine learning models to Azure](./v1/how-to-deploy-and-where.md)
:::moniker-end
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
If you haven't [secured Azure Monitor](./v1/how-to-secure-workspace-vnet.md#secu
* `dc.services.visualstudio.com`
* `*.in.applicationinsights.azure.com`
-For a list of IP addresses for these hosts, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md).
+For a list of IP addresses for these hosts, see [IP addresses used by Azure Monitor](../azure-monitor/ip-addresses.md).
## Next steps
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
In this section, we'll run the server locally with [sample files](https://github
Use the `curl` command to send an example request to the server and receive a scoring result. ```bash
- curl --request POST "127.0.0.1:5001/score" --header 'Content-Type:application/json' --data @sample-request.json
+ curl --request POST "127.0.0.1:5001/score" --header "Content-Type:application/json" --data @sample-request.json
``` The scoring result will be returned if there's no problem in your scoring script. If you find something wrong, you can try to update the scoring script, and launch the server again to test the updated script.
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
The following limits on assets apply on a *per-workspace* basis.
| Datasets | 10 million |
| Runs | 10 million |
| Models | 10 million|
+| Components | 10 million|
| Artifacts | 10 million |

In addition, the maximum **run time** is 30 days and the maximum number of **metrics logged per run** is 1 million.
To request an exception from the Azure Machine Learning product team, use the st
| Steps in a pipeline | 30,000 |
| Workspaces per resource group | 800 |
+
+### Azure Machine Learning job schedules
+[Azure Machine Learning job schedules](how-to-schedule-pipeline-job.md) have the following limits.
+
+| **Resource** | **Limit** |
+| | |
+| Schedules per region | 100 |
+ ### Azure Machine Learning integration with Synapse Azure Machine Learning serverless Spark provides easy access to distributed computing capability for scaling Apache Spark jobs. Serverless Spark utilizes the same dedicated quota as Azure Machine Learning Compute. Quota limits can be increased by submitting a support ticket and [requesting for quota and limit increase](#request-quota-and-limit-increases) for ESv3 series under the "Machine Learning Service: Virtual Machine Quota" category.
machine-learning How To Troubleshoot Secure Connection Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-secure-connection-workspace.md
Previously updated : 06/09/2022 Last updated : 01/24/2024
-# Troubleshoot connection to a workspace with a private endpoint
+# Troubleshoot private endpoint connection problems
-When connecting to a workspace that has been configured with a private endpoint, you may encounter a 403 or a messaging saying that access is forbidden. Use the information in this article to check for common configuration problems that can cause this error.
+When you connect to an Azure Machine Learning workspace that's configured with a private endpoint, you might encounter a *403* error or a message saying that access is forbidden. This article explains how you can check for common configuration problems that cause this error.
> [!TIP] > Before using the steps in this article, try the Azure Machine Learning workspace diagnostic API. It can help identify configuration problems with your workspace. For more information, see [How to use workspace diagnostics](how-to-workspace-diagnostic-api.md). ## DNS configuration
-The troubleshooting steps for DNS configuration differ based on whether you're using Azure DNS or a custom DNS. Use the following steps to determine which one you're using:
+The troubleshooting steps for DNS configuration differ based on whether you use Azure DNS or a custom DNS. Use the following steps to determine which one you're using:
1. In the [Azure portal](https://portal.azure.com), select the private endpoint for your Azure Machine Learning workspace.
-1. From the __Overview__ page, select the __Network Interface__ link.
- :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/private-endpoint-overview.png" alt-text="Screenshot of the private endpoint overview with network interface link highlighted.":::
+1. From the **Overview** page, select the **Network Interface** link.
-1. Under __Settings__, select __IP Configurations__ and then select the __Virtual network__ link.
+ :::image type="content" source="media/how-to-troubleshoot-secure-connection-workspace/private-endpoint-overview.png" alt-text="Screenshot of the private endpoint overview with network interface link highlighted." lightbox="media/how-to-troubleshoot-secure-connection-workspace/private-endpoint-overview.png":::
- :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/network-interface-ip-configurations.png" alt-text="Screenshot of the IP configuration with virtual network link highlighted.":::
+1. Under **Settings**, select **IP Configurations** and then select the **Virtual network** link.
-1. From the __Settings__ section on the left of the page, select the __DNS servers__ entry.
+ :::image type="content" source="media/how-to-troubleshoot-secure-connection-workspace/network-interface-ip-configurations.png" alt-text="Screenshot of the IP configuration with virtual network link highlighted." lightbox="media/how-to-troubleshoot-secure-connection-workspace/network-interface-ip-configurations.png":::
+
+1. From the **Settings** section on the left of the page, select the **DNS servers** entry.
:::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/dns-servers.png" alt-text="Screenshot of the DNS servers configuration.":::
- * If this value is __Default (Azure-provided)__ or __168.63.129.16__, then the VNet is using Azure DNS. Skip to the [Azure DNS troubleshooting](#azure-dns-troubleshooting) section.
- * If there's a different IP address listed, then the VNet is using a custom DNS solution. Skip to the [Custom DNS troubleshooting](#custom-dns-troubleshooting) section.
+ * If this value is **Default (Azure-provided)** or **168.63.129.16**, then the virtual network is using Azure DNS. Skip to the [Azure DNS troubleshooting](#azure-dns-troubleshooting) section.
+ * If there's a different IP address listed, then the virtual network is using a custom DNS solution. Skip to the [Custom DNS troubleshooting](#custom-dns-troubleshooting) section.
### Custom DNS troubleshooting
Use the following steps to verify if your custom DNS solution is correctly resol
| Azure region | URL | | -- | -- |
- | Azure Government | https://portal.azure.us/?feature.privateendpointmanagedns=false |
- | Microsoft Azure operated by 21Vianet | https://portal.azure.cn/?feature.privateendpointmanagedns=false |
- | All other regions | https://portal.azure.com/?feature.privateendpointmanagedns=false |
+ | Azure Government | <https://portal.azure.us/?feature.privateendpointmanagedns=false> |
+ | Microsoft Azure operated by 21Vianet | <https://portal.azure.cn/?feature.privateendpointmanagedns=false> |
+ | All other regions | <https://portal.azure.com/?feature.privateendpointmanagedns=false> |
1. In the portal, select the private endpoint for the workspace. Make a list of FQDNs listed for the private endpoint.
- :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/custom-dns-settings.png" alt-text="Screenshot of the private endpoint with custom DNS settings highlighted.":::
+ :::image type="content" source="media/how-to-troubleshoot-secure-connection-workspace/custom-dns-settings.png" alt-text="Screenshot of the private endpoint with custom DNS settings highlighted." lightbox="media/how-to-troubleshoot-secure-connection-workspace/custom-dns-settings.png":::
-1. Open a command prompt, PowerShell, or other command line and run the following command for each FQDN returned from the previous step. Each time you run the command, verify that the IP address returned matches the IP address listed in the portal for the FQDN:
+1. Open a command prompt, PowerShell, or other command line and run the following command for each FQDN returned from the previous step. Each time you run the command, verify that the IP address returned matches the IP address listed in the portal for the FQDN:
`nslookup <fqdn>`
- For example, running the command `nslookup 29395bb6-8bdb-4737-bf06-848a6857793f.workspace.eastus.api.azureml.ms` would return a value similar to the following text:
+ For example, running the command `nslookup 29395bb6-8bdb-4737-bf06-848a6857793f.workspace.eastus.api.azureml.ms` returns a value similar to the following text:
- ```
+ ```output
Server: yourdnsserver Address: yourdnsserver-IP-address
- Name: 29395bb6-8bdb-4737-bf06-848a6857793f.workspace.eastus.api.azureml.ms
+ Name: 29395bb6-8bdb-4737-bf06-848a6857793f.workspace.eastus.api.azureml.ms
Address: 10.3.0.5 ```
-1. If the `nslookup` command returns an error, or returns a different IP address than displayed in the portal, then the custom DNS solution isn't configured correctly. For more information, see [How to use your workspace with a custom DNS server](how-to-custom-dns.md)
+1. If the `nslookup` command returns an error, or returns a different IP address than displayed in the portal, then the custom DNS solution isn't configured correctly. For more information, see [How to use your workspace with a custom DNS server](how-to-custom-dns.md).
### Azure DNS troubleshooting When using Azure DNS for name resolution, use the following steps to verify that the Private DNS integration is configured correctly:
-1. On the Private Endpoint, select __DNS configuration__. For each entry in the __Private DNS zone__ column, there should also be an entry in the __DNS zone group__ column.
+1. On the Private Endpoint, select **DNS configuration**. For each entry in the **Private DNS zone** column, there should also be an entry in the **DNS zone group** column.
- :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/dns-zone-group.png" alt-text="Screenshot of the DNS configuration with Private DNS zone and group highlighted.":::
+ :::image type="content" source="media/how-to-troubleshoot-secure-connection-workspace/dns-zone-group.png" alt-text="Screenshot of the DNS configuration with Private DNS zone and group highlighted." lightbox="media/how-to-troubleshoot-secure-connection-workspace/dns-zone-group.png":::
- * If there's a Private DNS zone entry, but __no DNS zone group entry__, delete and recreate the Private Endpoint. When recreating the private endpoint, __enable Private DNS zone integration__.
- * If __DNS zone group__ isn't empty, select the link for the __Private DNS zone__ entry.
-
- From the Private DNS zone, select __Virtual network links__. There should be a link to the VNet. If there isn't one, then delete and recreate the private endpoint. When recreating it, select a Private DNS Zone linked to the VNet or create a new one that is linked to it.
+ * If there's a **Private DNS zone** entry, but no **DNS zone group** entry, delete and recreate the Private Endpoint. When recreating the private endpoint, enable **Private DNS zone integration**.
+ * If **DNS zone group** isn't empty, select the link for the **Private DNS zone** entry.
+
+ From the Private DNS zone, select **Virtual network links**. There should be a link to the virtual network. If there isn't one, then delete and recreate the private endpoint. When recreating it, select a Private DNS Zone linked to the virtual network or create a new one that is linked to it.
:::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/virtual-network-links.png" alt-text="Screenshot of the virtual network links for the Private DNS zone.":::
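As an alternative to checking the DNS zone group in the portal, you can list it from the command line. This is a sketch with placeholder names for the private endpoint and resource group:

```azurecli
# Show the private DNS zone group attached to the workspace's private endpoint.
az network private-endpoint dns-zone-group list \
    --endpoint-name my-workspace-pe \
    --resource-group my-resource-group
```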
Check if DNS over HTTP is enabled in your web browser. DNS over HTTP can prevent
* Mozilla Firefox: For more information, see [Disable DNS over HTTPS in Firefox](https://support.mozilla.org/en-US/kb/firefox-dns-over-https). * Microsoft Edge:
- 1. In Edge, select __...__ and then select __Settings__.
- 1. From settings, search for `DNS` and then disable __Use secure DNS to specify how to look up the network address for websites__.
-
+ 1. Select **...** in the top right corner, then select **Settings**.
+ 1. From settings, search for **DNS** and then disable **Use secure DNS to specify how to look up the network address for websites**.
+ :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/disable-dns-over-http.png" alt-text="Screenshot of the use secure DNS setting in Microsoft Edge.":::

## Proxy configuration
-If you use a proxy, it may prevent communication with a secured workspace. To test, use one of the following options:
+If you use a proxy, it might prevent communication with a secured workspace. To test, use one of the following options:
* Temporarily disable the proxy setting and see if you can connect.
* Create a [Proxy auto-config (PAC)](https://wikipedia.org/wiki/Proxy_auto-config) file that allows direct access to the FQDNs listed on the private endpoint. It should also allow direct access to the FQDN for any compute instances.
* Configure your proxy server to forward DNS requests to Azure DNS.
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md
If you are using compute instance runtime AI studio, this is not scenario curren
This type of error related to runtime lacks required packages. If you're using a default environment, make sure the image of your runtime is using the latest version. For more information, see [Runtime update](../how-to-create-manage-runtime.md#update-a-runtime-on-the-ui). If you're using a custom image and you're using a conda environment, make sure you installed all the required packages in your conda environment. For more information, see [Customize a prompt flow environment](../how-to-customize-environment-runtime.md#customize-environment-with-docker-context-for-runtime).
+### Where can I find the serverless instance used by the automatic runtime?
+
+The automatic runtime runs on a serverless instance. You can find the serverless instance on the compute quota page; see [View your usage and quotas in the Azure portal](../../how-to-manage-quotas.md#view-your-usage-and-quotas-in-the-azure-portal). The serverless instances have names like `sessionxxxxyyyy`.
++ ### Request timeout issue You might experience timeout issues.
machine-learning How To Save Write Experiment Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-save-write-experiment-files.md
To resolve this error, store your experiment files on a datastore. If you can't
Experiment&nbsp;description|Storage limit solution |
-Less than 2000 files & can't use a datastore| Override snapshot size limit with <br> `azureml._restclient.snapshots_client.SNAPSHOT_MAX_SIZE_BYTES = 'insert_desired_size'`<br> This may take several minutes depending on the number and size of files.
+Less than 2000 files & can't use a datastore| Override snapshot size limit with <br> `azureml._restclient.snapshots_client.SNAPSHOT_MAX_SIZE_BYTES = 'insert_desired_size'` and `azureml._restclient.constants.SNAPSHOT_MAX_SIZE_BYTES = 'insert_desired_size'`<br> This may take several minutes depending on the number and size of files.
Must use specific script directory| [!INCLUDE [amlinclude-info](../includes/machine-learning-amlignore-gitignore.md)]
Pipeline|Use a different subdirectory for each step
Jupyter notebooks| Create a `.amlignore` file or move your notebook into a new, empty, subdirectory and run your code again.
Otherwise, write files to the `./outputs` and/or `./logs` folder.
* Learn more about [accessing data from storage](how-to-access-data.md).
-* Learn more about [Create compute targets for model training and deployment](../how-to-create-attach-compute-studio.md)
+* Learn more about [Create compute targets for model training and deployment](../how-to-create-attach-compute-studio.md)
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/how-to-decide-on-right-migration-tools.md
To help you select the right tools for migrating to Azure Database for MySQL, co
| Migration Scenario | Tool(s) | Details | More information |
|--|--|--|--|
| Single to Flexible Server (Azure portal) | Database Migration Service (classic) and the Azure portal | [Tutorial: DMS (classic) with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) | Suitable for < 1TB workloads; cross-region, cross-storage type and cross-version migrations. |
-| Single to Flexible Server (Azure CLI) | Azure MySQL Import CLI | [Tutorial: Azure MySQL Import](../migrate/migrate-single-flexible-mysql-import-cli.md) | **Recommended** - Suitable for all sizes of workloads, extremely performant for > 500 GB workloads.|
+| Single to Flexible Server (Azure CLI) | Azure Database for MySQL Import CLI | [Tutorial: Azure Database for MySQL Import](../migrate/migrate-single-flexible-mysql-import-cli.md) | **Recommended** - Suitable for all sizes of workloads, extremely performant for > 500 GB workloads.|
| MySQL databases (>= 1 TB) to Azure Database for MySQL | Dump and Restore using **MyDumper/MyLoader** + High Compute VM | [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md) | [Best Practices for migrating large databases to Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699) | ### Online
mysql Migrate External Mysql Import Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-external-mysql-import-cli.md
Title: "Migrate MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server using Azure MySQL Import CLI"
-description: This tutorial describes how to use the Azure MySQL Import CLI to migrate MySQL on-premises or VM workload to Azure Database for MySQL - Flexible Server.
+ Title: "Migrate MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server using Azure Database for MySQL Import CLI"
+description: This tutorial describes how to use the Azure Database for MySQL Import CLI to migrate MySQL on-premises or VM workload to Azure Database for MySQL - Flexible Server.
- mode-api ms.devlang: azurecli
-# Migrate MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server using Azure MySQL Import CLI
+# Migrate MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server using Azure Database for MySQL Import CLI
-Azure MySQL Import enables you to migrate your MySQL on-premises or Virtual Machine (VM) workload seamlessly to Azure Database for MySQL - Flexible Server. It uses a user-provided physical backup file and restores the source server's physical data files to the target server offering a simple and fast migration path. Post MySQL Import operation, you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows.
+Azure Database for MySQL Import enables you to migrate your MySQL on-premises or Virtual Machine (VM) workload seamlessly to Azure Database for MySQL - Flexible Server. It uses a user-provided physical backup file and restores the source server's physical data files to the target server, offering a simple and fast migration path. After the import operation, you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows.
Based on user-inputs, it takes up the responsibility of provisioning your target Flexible Server and then restoring the user-provided physical backup of the source server stored in the Azure Blob storage account to the target Flexible Server instance.
-This tutorial shows how to use the Azure MySQL Import CLI command to migrate your Migrate MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server.
+This tutorial shows how to use the Azure Database for MySQL Import CLI command to migrate your MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server.
## Launch Azure Cloud Shell
The following are the steps for using Percona XtraBackup to take a full backup :
## Limitations * Source server configuration isn't migrated. You must configure the target Flexible server appropriately.
-* Users and privileges aren't migrated as part of MySQL Import. You must take a manual dump of users and privileges before initiating MySQL Import to migrate logins post import operation by restoring them on the target Flexible Server.
+* Users and privileges aren't migrated as part of Azure Database for MySQL Import. You must take a manual dump of users and privileges before initiating Azure Database for MySQL Import, and then migrate logins after the import operation by restoring them on the target Flexible Server.
* High Availability (HA) enabled Flexible Servers are returned as HA disabled servers to increase the speed of migration operation post the import migration. Enable HA for your target Flexible Server post migration.

## Recommendations for an optimal migration experience

* Consider keeping the Azure Blob storage account and the target Flexible Server to be deployed in the same region for better import performance.
* Recommended SKU configuration for target Azure Database for MySQL Flexible Server –
- * Setting Burstable SKU for target isn't recommended in order to optimize migration time when running the MySQL Import operation. We recommend scaling to General Purpose/ Business Critical for the course of the import operation, post, which you can scale down to Burstable SKU.
+ * Setting a Burstable SKU for the target isn't recommended when you want to optimize migration time for the Azure Database for MySQL Import operation. We recommend scaling to General Purpose/Business Critical for the duration of the import operation, after which you can scale down to Burstable SKU (see the sketch after this list).
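A minimal sketch of that scale-up/scale-down pattern with the Azure CLI follows; the server and resource group names are placeholders and the SKU names are examples only:

```azurecli
# Scale the target server up to General Purpose before running the import operation.
az mysql flexible-server update --resource-group test-rg --name test-flexible-server --tier GeneralPurpose --sku-name Standard_D4ds_v4

# After the import completes, scale back down to Burstable if that fits your workload.
az mysql flexible-server update --resource-group test-rg --name test-flexible-server --tier Burstable --sku-name Standard_B2s
```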
-## Trigger a MySQL Import operation to migrate from Azure Database for MySQL -Flexible Server
+## Trigger an Azure Database for MySQL Import operation to migrate from Azure Database for MySQL -Flexible Server
-Trigger a MySQL Import operation with the `az mysql flexible-server import create` command. The following command creates a target Flexible Server and performs instance-level import from backup file to target destination using your Azure CLI's local context:
+Trigger an Azure Database for MySQL Import operation with the `az mysql flexible-server import create` command. The following command creates a target Flexible Server and performs instance-level import from backup file to target destination using your Azure CLI's local context:
```azurecli az mysql flexible-server import create --data-source-type
Here are the details for the arguments above:
**Setting** | **Sample value** | **Description** ||
-data-source-type | azure_blob | The type of data source that serves as the source destination for triggering MySQL Import. Accepted values: [azure_blob]. Description of accepted values- azure_blob: Azure Blob storage.
+data-source-type | azure_blob | The type of data source that serves as the source destination for triggering Azure Database for MySQL Import. Accepted values: [azure_blob]. Description of accepted values- azure_blob: Azure Blob storage.
data-source | {resourceID} | The resource ID of the Azure Blob container. data-source-backup-dir | mysql_percona_backup | The directory of the Azure Blob storage container in which the backup file was uploaded. This value is required only when the backup file isn't stored in the root folder of Azure Blob container. data-source-sas-token | {sas-token} | The Shared Access Signature (SAS) token generated for granting access to import from the Azure Blob storage container. resource-group | test-rg | The name of the Azure resource group of the target Azure Database for MySQL Flexible Server.
-mode | Offline | The mode of MySQL import. Accepted values: [Offline]; Default value: Offline.
+mode | Offline | The mode of Azure Database for MySQL import. Accepted values: [Offline]; Default value: Offline.
location | westus | The Azure location for the source Azure Database for MySQL Flexible Server. name | test-flexible-server | Enter a unique name for your target Azure Database for MySQL Flexible Server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters. Note: This server is deployed in the same subscription, resource group, and region as the source. admin-user | adminuser | The username for the administrator sign-in for your target Azure Database for MySQL Flexible Server. It can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
iops | 500 | Number of IOPS to be allocated for the target Azure Database for My
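Putting the arguments together, here's a sketch of a complete invocation that uses the sample values from the table; the blob container resource ID, SAS token, and admin password are placeholders:

```azurecli
# Create a target Flexible Server and restore the Percona XtraBackup file from Azure Blob storage into it.
az mysql flexible-server import create \
    --data-source-type azure_blob \
    --data-source "<blob-container-resource-id>" \
    --data-source-backup-dir mysql_percona_backup \
    --data-source-sas-token "<sas-token>" \
    --mode Offline \
    --resource-group test-rg \
    --location westus \
    --name test-flexible-server \
    --admin-user adminuser \
    --admin-password "<admin-password>"
```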
## Migrate to Flexible Server with minimal downtime
-In order to perform an online migration after completing the initial seeding from backup file using MySQL import, you can configure data-in replication between the source and target by following steps [here](../flexible-server/how-to-data-in-replication.md?tabs=bash%2Ccommand-line). You can use the bin-log position captured while taking the backup file using Percona XtraBackup to set up Bin-log position based replication.
+In order to perform an online migration after completing the initial seeding from backup file using Azure Database for MySQL import, you can configure data-in replication between the source and target by following steps [here](../flexible-server/how-to-data-in-replication.md?tabs=bash%2Ccommand-line). You can use the bin-log position captured while taking the backup file using Percona XtraBackup to set up Bin-log position based replication.
-## How long does MySQL Import take to migrate my MySQL instance?
+## How long does Azure Database for MySQL Import take to migrate my MySQL instance?
Benchmarked performance based on storage size.
- | Backup file Storage Size | MySQL Import time |
+ | Backup file Storage Size | Import time |
| - |:-:|
| 1 GiB | 0 min 23 secs |
| 10 GiB | 4 min 24 secs |
mysql Migrate Single Flexible Mysql Import Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-mysql-import-cli.md
Title: "Migrate Azure Database for MySQL - Single Server to Flexible Server using Azure MySQL Import CLI"
-description: This tutorial describes how to use the Azure MySQL Import CLI to migrate an Azure Database for MySQL Single Server to Flexible Server.
+ Title: "Migrate Azure Database for MySQL - Single Server to Flexible Server using Azure Database for MySQL Import CLI"
+description: This tutorial describes how to use the Azure Database for MySQL Import CLI to migrate an Azure Database for MySQL Single Server to Flexible Server.
- mode-api ms.devlang: azurecli
-# Migrate Azure Database for MySQL - Single Server to Flexible Server using Azure MySQL Import CLI
+# Migrate Azure Database for MySQL - Single Server to Flexible Server using Azure Database for MySQL Import CLI
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
-Azure MySQL Import (Generally Available) enables you to migrate your Azure Database for MySQL seamlessly - Single Server to Flexible Server. It uses snapshot backup and restores technology to offer a simple and fast migration path to restore the source server's physical data files to the target server. Post MySQL Import operation, you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows.
+Azure Database for MySQL Import CLI (Generally Available) enables you to migrate your Azure Database for MySQL - Single Server seamlessly to Flexible Server. It uses snapshot backup and restore technology to offer a simple and fast migration path to restore the source server's physical data files to the target server. After the import operation, you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows.
Based on user-inputs, it takes up the responsibility of provisioning your target Flexible Server and then taking the backup of the source server and restoring the target. It copies the data files, server parameters, compatible firewall rules and server properties - tier, version, sku-name, storage-size, location, geo-redundant-backup, public-access, tags, auto grow, backup-retention-days, admin-user and admin-password from Single to Flexible Server instance.
-Azure MySQL Import supports a near-zero downtime migration by first performing an offline import operation and consequently users can set up data-in replication between source and target to perform an online migration.
+Azure Database for MySQL Import CLI supports a near-zero downtime migration by first performing an offline import operation and consequently users can set up data-in replication between source and target to perform an online migration.
-This tutorial shows how to use the Azure MySQL Import CLI command to migrate your Azure Database for MySQL Single Server to Flexible Server.
+This tutorial shows how to use the Azure Database for MySQL Import CLI command to migrate your Azure Database for MySQL Single Server to Flexible Server.
## Launch Azure Cloud Shell
az account set --subscription <subscription id>
## Limitations and pre-requisites - If your source Azure Database for MySQL Single Server has engine version v8.x, ensure to upgrade your source server's .NET client driver version to 8.0.32 to avoid any encoding incompatibilities post migration to Flexible Server.-- The source Azure Database for MySQL - Single Server and the target Azure Database for MySQL - Flexible Server must be in the same subscription, resource group, region, and on the same MySQL version. MySQL Import across subscriptions, resource groups, regions, and versions isn't possible.-- MySQL versions supported by Azure MySQL Import are 5.7 and 8.0. If you are on a different major MySQL version on Single Server, make sure to upgrade your version on your Single Server instance before triggering the import command.-- If the Azure Database for MySQL - Single Server instance has server parameter 'lower_case_table_names' set to 2 and your application used partition tables, MySQL Import will result in corrupted partition tables. The recommendation is to set 'lower_case_table_names' to 1 for your Azure Database for MySQL - Single Server instance in order to proceed with corruption-free MySQL Import operation.-- MySQL Import for Single Servers with Legacy Storage architecture (General Purpose storage V1) isn't supported. You must upgrade your storage to the latest storage architecture (General Purpose storage V2) to trigger a MySQL Import operation. Find your storage type and upgrade steps by following directions [here](../single-server/concepts-pricing-tiers.md#how-can-i-determine-which-storage-type-my-server-is-running-on).-- MySQL Import to an existing Azure MySQL Flexible Server isn't supported. The CLI command initiates the import of a new Azure MySQL Flexible Server.
+- The source Azure Database for MySQL - Single Server and the target Azure Database for MySQL - Flexible Server must be in the same subscription, resource group, region, and on the same MySQL version. Import across subscriptions, resource groups, regions, and versions isn't possible.
+- MySQL versions supported by Azure Database for MySQL Import CLI are 5.7 and 8.0. If you are on a different major MySQL version on Single Server, make sure to upgrade your version on your Single Server instance before triggering the import command.
+- If the Azure Database for MySQL - Single Server instance has the server parameter 'lower_case_table_names' set to 2 and your application used partition tables, the import operation results in corrupted partition tables. The recommendation is to set 'lower_case_table_names' to 1 for your Azure Database for MySQL - Single Server instance in order to proceed with a corruption-free import operation.
+- Import operation for Single Servers with Legacy Storage architecture (General Purpose storage V1) isn't supported. You must upgrade your storage to the latest storage architecture (General Purpose storage V2) to trigger an Import operation. Find your storage type and upgrade steps by following directions [here](../single-server/concepts-pricing-tiers.md#how-can-i-determine-which-storage-type-my-server-is-running-on).
+- Import to an existing Azure MySQL Flexible Server isn't supported. The CLI command initiates the import of a new Azure MySQL Flexible Server.
- If the flexible target server is provisioned as non-HA (High Availability disabled) when updating the CLI command parameters, it can later be switched to Same-Zone HA but not Zone-Redundant HA.-- For CMK enabled Single Server instances, MySQL Import command requires you to provide mandatory input parameters for enabling CMK on target Flexible Server.-- If the Single Server instance has ' Infrastructure Double Encryption' enabled, enabling Customer Managed Key (CMK) on target Flexible Server instance is recommended to support similar functionality. You can choose to enable CMK on target server with MySQL Import CLI input parameters or post migration as well.
+- For CMK enabled Single Server instances, Azure Database for MySQL Import command requires you to provide mandatory input parameters for enabling CMK on target Flexible Server.
+- If the Single Server instance has 'Infrastructure Double Encryption' enabled, enabling Customer Managed Key (CMK) on the target Flexible Server instance is recommended to support similar functionality. You can choose to enable CMK on the target server with Azure Database for MySQL Import CLI input parameters or post migration as well.
- Only instance-level import is supported. No option to import selected databases within an instance is provided.-- Below items should be copied from source to target by the user post MySQL Import operation:
+- The following items should be copied from source to target by the user after the Import operation:
- Read-Replicas - Monitoring page settings (Alerts, Metrics, and Diagnostic settings) - Any Terraform/CLI scripts hosted by you to manage your Single Server instance should be updated with Flexible Server references
-## Trigger a MySQL Import operation to migrate from Azure Database for MySQL - Single Server to Flexible Server
+## Trigger an Azure Database for MySQL Import operation to migrate from Azure Database for MySQL - Single Server to Flexible Server
-Trigger a MySQL Import operation with the `az mysql flexible-server import create` command. The following command creates a target Flexible Server and performs instance-level import from source to target destination using service defaults and values from your Azure CLI's local context:
+Trigger an Azure Database for MySQL Import operation with the `az mysql flexible-server import create` command. The following command creates a target Flexible Server and performs instance-level import from source to target destination using service defaults and values from your Azure CLI's local context:
```azurecli az mysql flexible-server import create --data-source-type
az mysql flexible-server import create --data-source-type
[--zone] ```
-The following example takes in the data source information for Single Server named 'test-single-server' and target Flexible Server information, creates a target Flexible Server named `test-flexible-server` in the `westus` location (same location as that of the source Single Server) and performs an import from source to target. MySQL Import command maps over the corresponding tier, version, sku-name, storage-size, location, geo-redundant-backup, public-access, tags, auto grow, backup-retention-days, admin-user and admin-password properties from Single Server to Flexible Server as smart defaults if no inputs are provided to the CLI command. You can chose to override the smart defaults by providing inputs for these optional parameters.
+The following example takes in the data source information for Single Server named 'test-single-server' and target Flexible Server information, creates a target Flexible Server named `test-flexible-server` in the `westus` location (same location as that of the source Single Server) and performs an import from source to target. The Azure Database for MySQL Import command maps over the corresponding tier, version, sku-name, storage-size, location, geo-redundant-backup, public-access, tags, auto grow, backup-retention-days, admin-user and admin-password properties from Single Server to Flexible Server as smart defaults if no inputs are provided to the CLI command. You can choose to override the smart defaults by providing inputs for these optional parameters.
```azurecli-interactive az mysql flexible-server import create --data-source-type "mysql_single" --data-source "test-single-server" --resource-group "test-rg" --name "test-flexible-server"
az network private-dns zone create -g testGroup -n myserver.private.contoso.com
az mysql flexible-server import create --data-source-type "mysql_single" --data-source "test-single-server" --resource-group "test-rg" --name "test-flexible-server" --high-availability ZoneRedundant --zone 1 --standby-zone 3 --vnet "myVnet" --subnet "mySubnet" --private-dns-zone "myserver.private.contoso.com" ```
-The following example takes in the data source information for Single Server named 'test-single-server' with Customer Managed Key (CMK) enabled and target Flexible Server information, creates a target Flexible Server named `test-flexible-server` and performs an import from source to target. For CMK enabled Single Server instances, MySQL Import command requires you to provide mandatory input parameters for enabling CMK : --key keyIdentifierOfTestKey --identity testIdentity.
+The following example takes in the data source information for Single Server named 'test-single-server' with Customer Managed Key (CMK) enabled and target Flexible Server information, creates a target Flexible Server named `test-flexible-server` and performs an import from source to target. For CMK enabled Single Server instances, the Azure Database for MySQL Import command requires you to provide mandatory input parameters for enabling CMK: `--key keyIdentifierOfTestKey --identity testIdentity`.
```azurecli-interactive # create keyvault
identityPrincipalId=$(az identity create -g testGroup --name testIdentity \
az keyvault set-policy -g testGroup -n testVault --object-id $identityPrincipalId \ --key-permissions wrapKey unwrapKey get list
-# trigger mysql import for CMK enabled single server
+# trigger azure database for mysql import for CMK enabled single server
az mysql flexible-server import create --data-source-type "mysql_single" --data-source "test-single-server" --resource-group "test-rg" --name "test-flexible-server" --key $keyIdentifier --identity testIdentity ```
Here are the details for the arguments above:
**Setting** | **Sample value** | **Description** ||
-data-source-type | mysql_single | The type of data source that serves as the source destination for triggering MySQL Import. Accepted values: [mysql_single]. Description of accepted values- mysql_single: Azure Database for MySQL Single Server.
+data-source-type | mysql_single | The type of data source that serves as the source destination for triggering Azure Database for MySQL Import. Accepted values: [mysql_single]. Description of accepted values- mysql_single: Azure Database for MySQL Single Server.
data-source | test-single-server | The name or resource ID of the source Azure Database for MySQL Single Server. resource-group | test-rg | The name of the Azure resource group of the source Azure Database for MySQL Single Server.
-mode | Offline | The mode of MySQL import. Accepted values: [Offline]; Default value: Offline.
+mode | Offline | The mode of Azure Database for MySQL import. Accepted values: [Offline]; Default value: Offline.
location | westus | The Azure location for the source Azure Database for MySQL Single Server. name | test-flexible-server | Enter a unique name for your target Azure Database for MySQL Flexible Server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters. Note: This server is deployed in the same subscription, resource group, and region as the source. admin-user | adminuser | The username for the administrator sign-in for your target Azure Database for MySQL Flexible Server. It can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
iops | 500 | Number of IOPS to be allocated for the target Azure Database for My
## Steps for online migration
-After completing the above-stated MySQL Import operation :
+After completing the Azure Database for MySQL Import operation described above:
-- Log in to the target Azure Database for MySQL Flexible Server and run the following command to get the bin-log filename and position corresponding to the backup snapshot used by Azure MySQL Import to restore to the target server.
+- Log in to the target Azure Database for MySQL Flexible Server and run the following command to get the bin-log filename and position corresponding to the backup snapshot used by Azure Database for MySQL Import CLI to restore to the target server.
```sql CALL mysql.az_show_binlog_file_and_pos_for_mysql_import();
CALL mysql.az_show_binlog_file_and_pos_for_mysql_import();
- Set up data-in replication between the source and target server instances using bin-log position by following steps listed [here](../flexible-server/how-to-data-in-replication.md) and when replication status reflects that the target server has caught up with the source, stop replication and perform cutover.
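  As a rough sketch of that step, the stored procedures below (documented in the data-in replication guide linked above) show how the bin-log file name and position returned earlier might be used on the target Flexible Server. The host, user, password, file name, and position are placeholders; replace them with your own values.

  ```sql
  -- Point the target Flexible Server at the source using the bin-log file and position
  -- returned by mysql.az_show_binlog_file_and_pos_for_mysql_import().
  CALL mysql.az_replication_change_master(
      'test-single-server.mysql.database.azure.com', -- source host (placeholder)
      'replication_user',                            -- replication user on the source (placeholder)
      '<password>',                                  -- replication user's password (placeholder)
      3306,                                          -- MySQL port
      'mysql-bin.000002',                            -- bin-log file name from the earlier call (placeholder)
      120,                                           -- bin-log position from the earlier call (placeholder)
      '');                                           -- CA certificate; empty string if SSL isn't enforced

  -- Start replication, then monitor until the target catches up with the source.
  CALL mysql.az_replication_start;
  ```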
-## Best practices for configuring Azure MySQL Import CLI command parameters
+## Best practices for configuring Azure Database for MySQL Import CLI command parameters
- Before you trigger the Azure MySQL Import command, consider the following parameter configuration guidance to help ensure faster data loads using Azure MySQL Import.
+ Before you trigger the Azure Database for MySQL Import CLI command, consider the following parameter configuration guidance to help ensure faster data loads.
- If you want to override smart defaults, select the compute tier and SKU name for the target flexible server based on the source single server's pricing tier and VCores based on the detail in the following table.
CALL mysql.az_show_binlog_file_and_pos_for_mysql_import();
- The MySQL version, region, subscription and resource for the target flexible server must be equal to that of the source single server. - The storage size for target flexible server should be equal to or greater than on the source single server.-- If the Single Server instance has ' Infrastructure Double Encryption' enabled, enabling Customer Managed Key (CMK) on target Flexible Server instance is recommended to support similar functionality. You can choose to enable CMK on target server with MySQL Import CLI input parameters or post migration as well.
+- If the Single Server instance has 'Infrastructure Double Encryption' enabled, enabling Customer Managed Key (CMK) on the target Flexible Server instance is recommended to support similar functionality. You can choose to enable CMK on the target server with Azure Database for MySQL Import CLI input parameters or post migration as well.
-## How long does MySQL Import take to migrate my Single Server instance?
+## How long does Azure Database for MySQL Import take to migrate my Single Server instance?
Below is the benchmarked performance based on storage size.
- | Single Server Storage Size | MySQL Import time |
+ | Single Server Storage Size | Import time |
| - |:-:| | 1 GiB | 0 min 23 secs | | 10 GiB | 4 min 24 secs |
From the table above, as the storage size increases, the time required for data
Below is the benchmarked performance based on varying number of tables for 10 GiB storage size.
- | Number of tables in Single Server instance | MySQL Import time |
+ | Number of tables in Single Server instance | Import time |
| - | :-: | | 100 | 4 min 24 secs | | 200 | 4 min 40 secs |
Below is the benchmarked performance based on varying number of tables for 10 Gi
| 28,800 | 19 min 18 secs | | 38,400 | 22 min 50 secs |
- As the number of files increases, each file/table in the database may become very small. This will result in a consistent amount of data being transferred, but there will be more frequent file-related operations, which may impact the performance of Mysql Import.
+ As the number of files increases, each file/table in the database may become very small. This results in a consistent amount of data being transferred, but more frequent file-related operations, which may impact the performance of Azure Database for MySQL Import.
## Post-import steps -- Copy the following properties from the source Single Server to target Flexible Server post MySQL Import operation is completed successfully:
+- Copy the following properties from the source Single Server to the target Flexible Server after the Azure Database for MySQL Import operation completes successfully:
- Read-Replicas - Monitoring page settings (Alerts, Metrics, and Diagnostic settings) - Any Terraform/CLI scripts you host to manage your Single Server instance should be updated with Flexible Server references.
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md
Learn how to migrate from Azure Database for MySQL - Single Server to Azure Data
| Scenario | Tool(s) | Details | |-||| | Offline | Database Migration Service (classic) and the Azure portal | [Tutorial: DMS (classic) with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) |
-| Offline | Azure MySQL Import and the Azure CLI | [Tutorial: Azure MySQL Import with the Azure CLI (offline)](../migrate/migrate-single-flexible-mysql-import-cli.md) |
+| Offline | Azure Database for MySQL Import and the Azure CLI | [Tutorial: Azure Database for MySQL Import with the Azure CLI (offline)](../migrate/migrate-single-flexible-mysql-import-cli.md) |
| Online | Database Migration Service (classic) and the Azure portal | [Tutorial: DMS (classic) with the Azure portal (online)](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) | For more information on migrating from Single Server to Flexible Server using other migration tools, visit [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md). > [!NOTE]
-> In-place auto-migration from Azure Database for MySQL – Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used <= 20 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md).
+> In-place auto-migration from Azure Database for MySQL – Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used <= 20 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure Database for MySQL Import to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md).
## Migration Eligibility
network-watcher Network Watcher Packet Capture Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-portal.md
Title: Manage packet captures in VMs with Azure Network Watcher - Azure portal
-description: Learn how to manage packet captures in virtual machines with the packet capture feature of Network Watcher using the Azure portal.
-
+ Title: Manage packet captures for VMs - Azure portal
+
+description: Learn how to start, stop, download, and delete Azure virtual machines packet captures with the packet capture feature of Network Watcher using the Azure portal.
+ - Previously updated : 01/04/2023-- Last updated : 01/26/2024
+# CustomerIntent: As an administrator, I want to capture IP packets to and from a virtual machine (VM) so I can review and analyze the data to help diagnose and solve network problems.
-# Manage packet captures in virtual machines with Azure Network Watcher using the Azure portal
+# Manage packet captures for virtual machines with Azure Network Watcher using the Azure portal
-> [!div class="op_single_selector"]
-> - [Azure portal](network-watcher-packet-capture-manage-portal.md)
-> - [PowerShell](network-watcher-packet-capture-manage-powershell.md)
-> - [Azure CLI](network-watcher-packet-capture-manage-cli.md)
+The Network Watcher packet capture tool allows you to create capture sessions to record network traffic to and from an Azure virtual machine (VM). Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps in diagnosing network anomalies both reactively and proactively. Its applications extend beyond anomaly detection to include gathering network statistics, acquiring insights into network intrusions, debugging client-server communication, and addressing various other networking challenges. Network Watcher packet capture enables you to initiate packet captures remotely, alleviating the need for manual execution on a specific virtual machine.
-Network Watcher packet capture allows you to create capture sessions to track traffic to and from a virtual machine. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies, both reactively, and proactively. Other uses include gathering network statistics, gaining information on network intrusions, to debug client-server communication, and much more. Being able to remotely trigger packet captures, eases the burden of running a packet capture manually on a desired virtual machine, which saves valuable time.
-
-In this article, you learn to start, stop, download, and delete a packet capture.
+In this article, you learn how to remotely configure, start, stop, download, and delete a virtual machine packet capture using the Azure portal. To learn how to manage packet captures using PowerShell or Azure CLI, see [Manage packet captures for virtual machines using PowerShell](network-watcher-packet-capture-manage-powershell.md) or [Manage packet captures for virtual machines using the Azure CLI](network-watcher-packet-capture-manage-cli.md).
## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- A virtual machine with the following outbound TCP connectivity:
- - to the chosen storage account over port 443
+ - to the storage account over port 443
- to 169.254.169.254 over port 80 - to 168.63.129.16 over port 8037 > [!NOTE]
-> The ports mentioned in the latter two cases are common across all Network Watcher features that involve the Network Watcher extension and might occasionally change.
+> - Azure creates a Network Watcher instance in the virtual machine's region if Network Watcher wasn't enabled for that region. For more information, see [Enable or disable Azure Network Watcher](network-watcher-create.md).
+> - Network Watcher packet capture requires Network Watcher agent VM extension to be installed on the target virtual machine. Whenever you use Network Watcher packet capture, Azure installs the agent on the target VM or scale set if it wasn't previously installed. To update an already installed agent, see [Update Azure Network Watcher extension to the latest version](../virtual-machines/extensions/network-watcher-update.md?toc=/azure/network-watcher/toc.json). To manually install the agent, see [Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md) or [Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md).
+> - The last two IP addresses and ports listed in the **Prerequisites** are common across all Network Watcher tools that use the Network Watcher agent and might occasionally change.
If a network security group is associated to the network interface, or subnet that the network interface is in, ensure that rules exist to allow outbound connectivity over the previous ports. Similarly, ensure outbound connectivity over the previous ports when adding user-defined routes to your network. ## Start a packet capture 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box at the top of the portal, enter *Network Watcher*.
-1. In the search results, select **Network Watcher**.
+
+1. In the search box at the top of the portal, enter *Network Watcher*. Select **Network Watcher** from the search results.
+
+ :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/portal-search.png" alt-text="Screenshot shows how to search for Network Watcher in the Azure portal." lightbox="./media/network-watcher-packet-capture-manage-portal/portal-search.png":::
+ 1. Select **Packet capture** under **Network diagnostic tools**. Any existing packet captures are listed, regardless of their status.+
+ :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/packet-capture.png" alt-text="Screenshot shows Network Watcher packet capture in the Azure portal." lightbox="./media/network-watcher-packet-capture-manage-portal/packet-capture.png":::
+ 1. Select **+ Add** to create a packet capture. In **Add packet capture**, enter or select values for the following settings:
- | Setting | Value |
- | | |
- | **Basic Details** | |
- | Subscription | Select the Azure subscription of the virtual machine. |
- | Resource group | Select the resource group of the virtual machine. |
- | Target type | Select **Virtual machine**. |
- | Target instance | Select the virtual machine. |
- | Packet capture name | Enter a name or leave the default name. |
- | **Packet capture configuration** | |
- | Capture location | Select **Storage account**, **File**, or **Both**. |
- | Storage account | Select your **Standard** storage account. <br> This option is available if you selected **Storage account** or **Both**. |
- | Local file path | Enter a valid local file path where you want the capture to be saved in the target virtual machine. If you're using a Linux machine, the path must start with */var/captures*. <br> This option is available if you selected **File** or **Both**. |
- | Maximum bytes per packet | Enter the maximum number of bytes to be captured per each packet. All bytes are captured if left blank or 0 entered. |
- | Maximum bytes per session | Enter the total number of bytes that are captured. Once the value is reached the packet capture stops. Up to 1 GB is captured if left blank. |
- | Time limit (seconds) | Enter the time limit of the packet capture session in seconds. Once the value is reached the packet capture stops. Up to 5 hours (18,000 seconds) is captured if left blank. |
- | **Filtering (optional)** | |
- | Add filter criteria | Select **Add filter criteria** to add a new filter. |
- | Protocol | Filters the packet capture based on the selected protocol. Available values are **TCP**, **UDP**, or **Any**. |
- | Local IP address | Filters the packet capture for packets where the local IP address matches this value. |
- | Local port | Filters the packet capture for packets where the local port matches this value. |
- | Remote IP address | Filters the packet capture for packets where the remote IP address matches this value. |
- | Remote port | Filters the packet capture for packets where the remote port matches this value. |
-
- > [!NOTE]
- > Premium storage accounts are currently not supported for storing packet captures.
-
- > [!NOTE]
- > Port and IP address values can be a single value, multiple values, or a range, such as 80-1024, for port. You can define as many filters as you need.
+ | Setting | Value |
+ | | |
+ | **Basic Details** | |
+ | Subscription | Select the Azure subscription of the virtual machine. |
+ | Resource group | Select the resource group of the virtual machine. |
+ | Target type | Select **Virtual machine**. |
+ | Target instance | Select the virtual machine. |
+ | Packet capture name | Enter a name or leave the default name. |
+ | **Packet capture configuration** | |
+ | Capture location | Select **Storage account**, **File**, or **Both**. |
+ | Storage account | Select your **Standard** storage account<sup>1</sup>. <br> This option is available if you selected **Storage account** or **Both** as a capture location. |
+ | Local file path | Enter a valid local file path where you want the capture to be saved in the target virtual machine. If you're using a Linux machine, the path must start with */var/captures*. <br> This option is available if you selected **File** or **Both** as a capture location. |
+ | Maximum bytes per packet | Enter the maximum number of bytes to be captured per each packet. All bytes are captured if left blank or 0 entered. |
+ | Maximum bytes per session | Enter the total number of bytes that are captured. Once the value is reached the packet capture stops. Up to 1 GB is captured if left blank. |
+ | Time limit (seconds) | Enter the time limit of the packet capture session in seconds. Once the value is reached the packet capture stops. Up to 5 hours (18,000 seconds) is captured if left blank. |
+ | **Filtering (optional)** | |
+ | Add filter criteria | Select **Add filter criteria** to add a new filter. You can define as many filters as you need. |
+ | Protocol | Filters the packet capture based on the selected protocol. Available values are **TCP**, **UDP**, or **Any**. |
+ | Local IP address<sup>2</sup> | Filters the packet capture for packets where the local IP address matches this value. |
+ | Local port<sup>2</sup> | Filters the packet capture for packets where the local port matches this value. |
+ | Remote IP address<sup>2</sup> | Filters the packet capture for packets where the remote IP address matches this value. |
+ | Remote port<sup>2</sup> | Filters the packet capture for packets where the remote port matches this value. |
+
+ <sup>1</sup> Premium storage accounts are currently not supported for storing packet captures.
+
+ <sup>2</sup> Port and IP address values can be a single value, a range such as 80-1024, or multiple values such as 80, 443.
1. Select **Start packet capture**.
- :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/add-packet-capture.png" alt-text="Screenshot of Add packet capture in Azure portal showing available options.":::
+ :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/add-packet-capture.png" alt-text="Screenshot of Add packet capture in the Azure portal showing available options.":::
-1. Once the time limit set on the packet capture is reached, the packet capture stops and can be reviewed. To manually stop a packet capture session before it reaches its time limit, select the **...** on the right-side of the packet capture in **Packet capture** page, or right-click it, then select **Stop**.
+1. Once the time limit set on the packet capture is reached, the packet capture stops and can be reviewed. To manually stop a packet capture session before it reaches its time limit, select the **...** on the right-side of the packet capture, or right-click it, then select **Stop**.
- :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/stop-packet-capture.png" alt-text="Screenshot showing how to stop a packet capture in Azure portal.":::
+ :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/stop-packet-capture.png" alt-text="Screenshot that shows how to stop a packet capture in the Azure portal.":::
-> [!NOTE]
-> The Azure portal automatically:
-> * Creates a network watcher in the same region as the region of the target virtual machine, if the region doesn't already have a network watcher.
-> * Adds `AzureNetworkWatcherExtension` to [Linux](../virtual-machines/extensions/network-watcher-linux.md) or [Windows](../virtual-machines/extensions/network-watcher-windows.md) virtual machines, if the extension isn't already installed.
+## Download a packet capture
-## Delete a packet capture
+After concluding your packet capture session, the resulting capture file is saved to Azure storage, to a local file on the target virtual machine, or to both. The storage destination for the packet capture is specified during its creation. For more information, see [Start a packet capture](#start-a-packet-capture).
+
+To download a packet capture file saved to Azure storage, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).+ 1. In the search box at the top of the portal, enter *Network Watcher*, then select **Network Watcher** from the search results.+ 1. Select **Packet capture** under **Network diagnostic tools**.
-1. In the **Packet capture** page, select **...** on the right-side of the packet capture that you want to delete, or right-click it, then select **Delete**.
- :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/delete-packet-capture.png" alt-text="Screenshot showing how to delete a packet capture from Network Watcher in Azure portal.":::
+1. In the **Packet capture** page, select the packet capture whose file you want to download.
-1. Select **Yes**.
+1. In the **Details** section, select the packet capture file link.
+
+ :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/packet-capture-file.png" alt-text="Screenshot that shows how to select the packet capture file in the Azure portal.":::
+
+1. In the blob page, select **Download**.
> [!NOTE]
-> Deleting a packet capture does not delete the capture file in the storage account or on the virtual machine.
+> You can also download the capture file from the storage account container using the Azure portal or Storage Explorer<sup>1</sup> at the following path:
+> ```
+> https://{storageAccountName}.blob.core.windows.net/network-watcher-logs/subscriptions/{subscriptionId}/resourcegroups/{storageAccountResourceGroup}/providers/microsoft.compute/virtualmachines/{virtualMachineName}/{year}/{month}/{day}/packetcapture_{UTCcreationTime}.cap
+> ```
+> <sup>1</sup> Storage Explorer is a standalone app that you can conveniently use to access and work with Azure Storage data. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
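If you'd rather script the download, an Azure CLI call along the following lines should work against the blob path shown in the note above; the storage account name, blob path segments, and output file are placeholders you replace with your own values.

```azurecli
az storage blob download \
    --account-name <storageAccountName> \
    --container-name network-watcher-logs \
    --name "subscriptions/<subscriptionId>/resourcegroups/<storageAccountResourceGroup>/providers/microsoft.compute/virtualmachines/<virtualMachineName>/<year>/<month>/<day>/packetcapture_<UTCcreationTime>.cap" \
    --file ./packetcapture.cap \
    --auth-mode login
```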
-## Download a packet capture
+To download a packet capture file saved to the virtual machine (VM), connect to the VM and download the file from the local path specified during the packet capture creation.
+
+## Delete a packet capture
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
-Once your packet capture session has completed, the capture file is saved to a blob storage or a local file on the target virtual machine. The storage location of the packet capture is defined during creation of the packet capture. A convenient tool to access capture files saved to a storage account is Azure Storage Explorer, which you can [download](https://storageexplorer.com/) after selecting the operating system.
+1. In the search box at the top of the portal, enter *Network Watcher*, then select **Network Watcher** from the search results.
-- If a storage account is specified, packet capture files are saved to a storage account at the following location:
+1. Select **Packet capture** under **Network diagnostic tools**.
- ```
- https://{storageAccountName}.blob.core.windows.net/network-watcher-logs/subscriptions/{subscriptionId}/resourcegroups/{storageAccountResourceGroup}/providers/microsoft.compute/virtualmachines/{VMName}/{year}/{month}/{day}/packetCapture_{creationTime}.cap
- ```
+1. In the **Packet capture** page, select **...** on the right-side of the packet capture that you want to delete, or right-click it, then select **Delete**.
+
+ :::image type="content" source="./media/network-watcher-packet-capture-manage-portal/delete-packet-capture.png" alt-text="Screenshot that shows how to delete a packet capture from Network Watcher in Azure portal.":::
+
+1. Select **Yes**.
-- If a file path is specified, the capture file can be viewed on the virtual machine or downloaded.
+> [!IMPORTANT]
+> Deleting a packet capture in Network Watcher doesn't delete the capture file from the storage account or the virtual machine. If you don't need the capture file anymore, you must manually delete it from the storage account to avoid incurring storage costs.
-## Next steps
+## Related content
- To learn how to automate packet captures with virtual machine alerts, see [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md). - To determine whether specific traffic is allowed in or out of a virtual machine, see [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md).
openshift Howto Remotewrite Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-remotewrite-prometheus.md
Title: Remote write to Azure Monitor Managed Service
-description: Describes how to configure remote write to send data from the default Prometheus server running in your ARO cluster.
+ Title: Set up remote write for Azure Monitor managed service for Prometheus
+description: Learn how to set up remote write to send data from the default Prometheus server running in your Azure Red Hat OpenShift cluster to your Azure Monitor workspace.
Last updated 01/02/2023
-# Send data to Azure Monitor workspace from the Prometheus server in your Azure Red Hat OpenShift (ARO) cluster
-Azure Red Hat OpenShift comes preinstalled with a default Prometheus server. As per the [support policy](support-policies-v4.md), this default Prometheus server shouldn't be removed. Some scenarios need to centralize data from self-managed Prometheus clusters for long-term data retention to create a centralized view across your clusters. Azure Monitor managed service for Prometheus allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution based on the [Prometheus](https://aka.ms/azureprometheus-promio) project from the Cloud Native Computing Foundation. You can use [remote_write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from the in-cluster Prometheus servers to the Azure managed service.
+# Send data to an Azure Monitor workspace from your Prometheus server
+
+Azure Red Hat OpenShift is preinstalled with a default Prometheus server. As detailed in the Azure Red Hat OpenShift [support policy](support-policies-v4.md), this default Prometheus server shouldn't be removed.
+
+In some scenarios, you might want to centralize data from self-managed Prometheus clusters for long-term data retention to create a centralized view across your clusters. You can use Azure Monitor managed service for Prometheus to collect and analyze metrics at scale by using a Prometheus-compatible monitoring solution that's based on the [Prometheus](https://aka.ms/azureprometheus-promio) project from the Cloud Native Computing Foundation. You can use [remote write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from Prometheus servers in your cluster to the Azure managed service.
## Prerequisites-- Data for Azure Monitor managed service for Prometheus is stored in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md). You must [create a new workspace](../azure-monitor/essentials/azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace) if you don't already have one.
-## Create Microsoft Entra ID application
-Follow the procedure at [Register an application with Microsoft Entra ID and create a service principal](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) to register an application for Prometheus remote-write and create a service principal.
+To send data from a Prometheus server by using remote write, you need:
-1. Copy the tenant ID and client ID of the created service principal.
- 1. Browse to **Identity > Applications > App registrations**, then select your application.
- 1. On the overview page for the app, copy the Directory (tenant) ID value and store it in your application code.
- 1. Copy the Application (client) ID value and store it in your application code.
-
-1. Create a new client secret as directed in [Create new client secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-client-secret).Copy the value of the created secret.
+- An [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md). If you don't already have a workspace, you must [create a new workspace](../azure-monitor/essentials/azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace).
-1. Set the values of the collected tenant ID, client ID and client secret:
+## Register an application with Microsoft Entra ID
- ```
- export TENANT_ID=<tenant-id>
- export CLIENT_ID=<client-id>
- export CLIENT_SECRET=<client-secret>
- ```
-
-## Assign Monitoring Metrics Publisher role to the application
+1. Complete the steps to [register an application with Microsoft Entra ID](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) and create a service principal.
-The application requires the *Monitoring Metrics Publisher* role on the data collection rule associated with your Azure Monitor workspace.
+1. Copy the tenant ID and client ID of the service principal:
-1. From the menu of your Azure Monitor Workspace account, select the **Data collection rule** to open the **Overview** page for the data collection rule.
+ 1. In the [Microsoft Entra admin center](https://entra.microsoft.com), go to **Identity** > **Applications** > **App registrations**, and then select your application.
+ 1. Go to the **Overview** page for the app.
+ 1. Copy and retain the **Directory (tenant) ID** value.
+ 1. Copy and retain the **Application (client) ID** value.
-2. On the **Overview** page, select **Access control (IAM)**.
+1. Create a [new client secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-client-secret). Copy and retain the value of the secret.
-3. Select **Add**, and then select **Add role assignment**.
+1. In your application code, set the values of the tenant ID, client ID, and client secret that you copied:
-4. Select **Monitoring Metrics Publisher** role, and then select **Next**.
+ ```bash
+ export TENANT_ID=<tenant-id>
+ export CLIENT_ID=<client-id>
+ export CLIENT_SECRET=<client-secret>
+ ```
-5. Select **User, group, or service principal**, and then select **Select members**. Select the application that you created and select **Select**.
+## Assign the Monitoring Metrics Publisher role to the application
-6. Select **Review + assign** to complete the role assignment.
+The application must have the Monitoring Metrics Publisher role for the data collection rule that is associated with your Azure Monitor workspace.
-## Create secret in the ARO cluster
+1. In the Azure portal, go to the instance of Azure Monitor for your subscription.
-To authenticate with a remote write endpoint, the OAuth 2.0 authentication method from the [supported remote write authentication settings](https://docs.openshift.com/container-platform/4.11/monitoring/configuring-the-monitoring-stack.html#supported_remote_write_authentication_settings_configuring-the-monitoring-stack) is used. To facilitate this approach, create a secret with the client ID and client secret:
+1. On the resource menu, select **Data Collection Rules**.
-```
+1. Select the data collection rule that is associated with your Azure Monitor workspace.
+
+1. On the **Overview** page for the data collection rule, select **Access control (IAM)**.
+
+1. Select **Add**, and then select **Add role assignment**.
+
+1. Select the **Monitoring Metrics Publisher** role, and then select **Next**.
+
+1. Select **User, group, or service principal**, and then choose **Select members**. Select the application that you registered, and then choose **Select**.
+
+1. To complete the role assignment, select **Review + assign**.
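If you prefer scripting over the portal, a roughly equivalent role assignment with the Azure CLI might look like the following sketch; the client ID and the data collection rule resource ID are placeholders for your own values.

```azurecli
az role assignment create \
    --assignee "$CLIENT_ID" \
    --role "Monitoring Metrics Publisher" \
    --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Insights/dataCollectionRules/<dataCollectionRuleName>"
```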
+
+## Create a secret in your Azure Red Hat OpenShift cluster
+
+To authenticate by using a remote write endpoint, you use the OAuth 2.0 authentication method from the [supported remote write authentication settings](https://docs.openshift.com/container-platform/4.11/monitoring/configuring-the-monitoring-stack.html#supported_remote_write_authentication_settings_configuring-the-monitoring-stack).
+
+To begin, create a secret by using the client ID and client secret:
+
+```yaml
cat << EOF | oc apply -f - apiVersion: v1 kind: Secret
stringData:
EOF ```
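For reference, a minimal sketch of such a secret is shown below. It assumes the secret is named `rw-secret`, lives in the `openshift-monitoring` namespace, and stores the values under `client-id` and `client-secret` keys; adjust the names to whatever your config map expects.

```yaml
cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: rw-secret
  namespace: openshift-monitoring
stringData:
  client-id: ${CLIENT_ID}
  client-secret: ${CLIENT_SECRET}
EOF
```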
-## Configure remote write
+## Set up remote write
-To [configure](https://docs.openshift.com/container-platform/4.11/monitoring/configuring-the-monitoring-stack.html#configuring_remote_write_storage_configuring-the-monitoring-stack) remote write for default platform monitoring, update the *cluster-monitoring-config* config map in the openshift-monitoring namespace.
+To [set up remote write](https://docs.openshift.com/container-platform/4.11/monitoring/configuring-the-monitoring-stack.html#configuring_remote_write_storage_configuring-the-monitoring-stack) for the default platform monitoring, update the *cluster-monitoring-config* config map YAML file in the openshift-monitoring namespace.
-1. Open the config map for editing:
+1. Open the config map file for editing:
- ```
+ ```bash
oc edit -n openshift-monitoring cm cluster-monitoring-config ```
-
- ```
+
+ ```yaml
data: config.yaml: | prometheusK8s:
To [configure](https://docs.openshift.com/container-platform/4.11/monitoring/con
scopes: - "https://monitor.azure.com/.default" ```
-
-1. Update the configuration.
- 1. Replace `INGESTION-URL` in the configuration with the **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace.
-
- 1. Replace `TENANT_ID` in the configuration with the tenant ID of the service principal.
+1. Update the config map file:
+
+ 1. Replace `INGESTION-URL` in the config map file with the value for **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace.
+ 1. Replace `TENANT_ID` in the config map file with the tenant ID of the service principal.
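Put together, the `remoteWrite` section of the config map typically looks something like the following sketch. It assumes the secret from the previous step is named `rw-secret` with `client-id` and `client-secret` keys and uses the standard Microsoft Entra token endpoint; treat it as an outline to adapt rather than a drop-in configuration.

```yaml
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "<INGESTION-URL>"
        oauth2:
          clientId:
            secret:
              name: rw-secret
              key: client-id
          clientSecret:
            name: rw-secret
            key: client-secret
          tokenUrl: "https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token"
          scopes:
          - "https://monitor.azure.com/.default"
```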
-## Visualize metrics using Azure Managed Grafana Workspace
+## Visualize metrics by using Azure Managed Grafana
-The captured metrics can be visualized using community Grafana dashboards, or you can create contextual dashboards as required.
+You can use community Grafana dashboards to visualize the captured metrics, or you can create contextual dashboards.
1. Create an [Azure Managed Grafana workspace](../managed-grafan).
-1. [Link](../azure-monitor/essentials/azure-monitor-workspace-manage.md?tabs=azure-portal#link-a-grafana-workspace) the created Grafana workspace to the Azure Monitor workspace.
+1. [Link the Azure Managed Grafana workspace](../azure-monitor/essentials/azure-monitor-workspace-manage.md?tabs=azure-portal#link-a-grafana-workspace) to your Azure Monitor workspace.
-1. [Import](../managed-grafan?tabs=azure-portal#import-a-grafana-dashboard) the community Grafana Dashboard with ID 3870 [OpenShift/K8 Cluster Overview](https://grafana.com/grafana/dashboards/3870-openshift-k8-cluster-overview/) into the Grafana workspace.
+1. [Import](../managed-grafan?tabs=azure-portal#import-a-grafana-dashboard) the community Grafana dashboard [Openshift/K8 Cluster Overview](https://grafana.com/grafana/dashboards/3870-openshift-k8-cluster-overview/) (ID 3870) to the Grafana workspace.
-1. Specify the Azure Monitor workspace as the data source.
+1. For the data source, use your Azure Monitor workspace.
1. Save the dashboard.
-1. Access the dashboard from **Home -> Dashboards**.
+To access the dashboard, in your Azure Managed Grafana workspace, go to **Home** > **Dashboards**, and then select the dashboard.
-## Troubleshooting
+## Troubleshoot
-See [Azure Monitor managed service for Prometheus remote write](../azure-monitor/containers/prometheus-remote-write.md#hitting-your-ingestion-quota-limit).
+For troubleshooting information, see [Azure Monitor managed service for Prometheus remote write](../azure-monitor/containers/prometheus-remote-write.md#hitting-your-ingestion-quota-limit).
-## Next steps
+## Related content
-- [Learn more about Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md).
+- [Learn more about Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md)
partner-solutions New Relic How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-manage.md
For each virtual machine, the following info appears:
> [!NOTE] > If a virtual machine shows that an agent is installed, but the option **Uninstall extension** is disabled, the agent was configured through a different New Relic resource in the same Azure subscription. To make any changes, go to the other New Relic resource in the Azure subscription.
+## Monitor virtual machine scale sets by using the New Relic agent
+
+You can install the New Relic agent on virtual machine scale sets as an extension. Select **Virtual Machine Scale Sets** under **New Relic account config** in the Resource menu. In the working pane, you see a list of all virtual machine scale sets in the subscription.
+Virtual Machine Scale Sets (VMSS) is an Azure compute resource that you can use to deploy and manage a set of identical VMs. Familiarize yourself with the Azure resource [here](../../virtual-machine-scale-sets/overview.md) and with the available orchestration modes [here](../../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md).
+
+The native integration can be used to install the agent on both uniform and flexible scale sets. New instances (VMs) of a scale set, in either mode, receive the agent extension during a scale-up. VMSS resources in Uniform orchestration mode support the Automatic, Rolling, and Manual upgrade policies, while resources in Flexible orchestration mode only support manual upgrade today. If a manual upgrade policy is set for a resource, upgrade the instances manually by installing the agent extension on the already scaled-up instances (see the CLI sketch after the notes below). The autoscaling and instance orchestration guide can be found [here](../../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#autoscaling-and-instance-orchestration).
+
+> [!NOTE]
+> With a manual upgrade policy, pre-existing VM instances don't receive the extension automatically, and the agent status shows as **Partially Installed**. Upgrade the VM instances by manually installing the extension on them from the VM extensions blade or by going to the 'VMSS resource/Instances' view.
+
+> [!NOTE]
+> The agent installation dashboard will support the automatic and rolling upgrade policy for Flex orchestration mode in the next release when similar support is available from VMSS Flex resources.
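As mentioned above, for scale sets with a manual upgrade policy you can roll the latest model, including the newly added agent extension, out to existing instances. One way to do that with the Azure CLI is sketched below; the resource group and scale set names are placeholders.

```azurecli
az vmss update-instances \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --instance-ids "*"
```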
+ ## Monitor app services by using the New Relic agent You can install the New Relic agent on app services as an extension. Select **App Services** on the left pane. The working pane shows a list of all app services in the subscription.
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md
When using Azure Database for PostgreSQL flexible server for a busy database wit
### Storage - Once configured, storage size can't be reduced. You have to create a new server with desired storage size, perform manual [dump and restore](../howto-migrate-using-dump-and-restore.md) and migrate your database(s) to the new server.-- Currently, storage auto-grow feature isn't available. You can monitor the usage and increase the storage to a higher size. -- When the storage usage reaches 95% or if the available capacity is less than 5 GiB whichever is more, the server is automatically switched to **read-only mode** to avoid errors associated with disk-full situations. In rare cases, if the rate of data growth outpaces the time it takes switch to read-only mode, your Server may still run out of storage.-- We recommend setting alert rules for `storage used` or `storage percent` when they exceed certain thresholds so that you can proactively take action such as increasing the storage size. For example, you can set an alert if the storage percent exceeds 80% usage.-- If you're using logical replication, then you must drop the logical replication slot in the primary server if the corresponding subscriber no longer exists. Otherwise the WAL files start to get accumulated in the primary filling up the storage. If the storage threshold exceeds certain threshold and if the logical replication slot isn't in use (due to non-available subscriber), Azure Database for PostgreSQL flexible server automatically drops that unused logical replication slot. That action releases accumulated WAL files and avoids your server becoming unavailable due to storage getting filled situation.
+- When the storage usage reaches 95%, or if the available capacity is less than 5 GiB, whichever is more, the server is automatically switched to **read-only mode** to avoid errors associated with disk-full situations. In rare cases, if the rate of data growth outpaces the time it takes to switch to read-only mode, your server may still run out of storage. You can enable storage autogrow to avoid these issues and automatically scale your storage based on your workload demands.
+- We recommend setting alert rules for `storage used` or `storage percent` when they exceed certain thresholds so that you can proactively take action such as increasing the storage size. For example, you can set an alert if the storage percentage exceeds 80% usage.
+- If you're using logical replication, then you must drop the logical replication slot in the primary server if the corresponding subscriber no longer exists. Otherwise, the WAL files accumulate in the primary, filling up the storage. If the storage usage exceeds a certain threshold and the logical replication slot isn't in use (due to a non-available subscriber), Azure Database for PostgreSQL flexible server automatically drops that unused logical replication slot. That action releases accumulated WAL files and avoids your server becoming unavailable due to a storage-full situation.
- We don't support the creation of tablespaces, so if you're creating a database, don't provide a tablespace name. Azure Database for PostgreSQL flexible server uses the default one that is inherited from the template database. It's unsafe to provide a tablespace like the temporary one because we can't ensure that such objects will remain persistent after server restarts, HA failovers, etc. ### Networking
When using Azure Database for PostgreSQL flexible server for a busy database wit
- Moving in and out of VNET is currently not supported. - Combining public access with deployment within a VNET is currently not supported. - Firewall rules aren't supported on VNET, Network security groups can be used instead.-- Public access database servers can connect to public internet, for example through `postgres_fdw`, and this access can't be restricted. VNET-based servers can have restricted outbound access using Network Security Groups.
+- Public access database servers can connect to the public internet, for example through `postgres_fdw`, and this access can't be restricted. VNET-based servers can have restricted outbound access using Network Security Groups.
### High availability (HA)
When using Azure Database for PostgreSQL flexible server for a busy database wit
### Availability zones -- Manually moving servers to a different availability zone is currently not supported. However, you can enable HA using the preferred AZ as the standby zone. Once established, you can fail over to the standby and then disable HA.
+- Manually moving servers to a different availability zone is currently not supported. However, using the preferred AZ as the standby zone, you can enable HA. Once established, you can fail over to the standby and then disable HA.
### Postgres engine, extensions, and PgBouncer
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
select * from pgaadauth_create_principal('Prod DB Readonly', false, false).
When group members sign in, they use their access tokens but specify the group name as the username. > [!NOTE]
-> Azure Database for PostgreSQL flexible server supports managed identities as group members.
+> Azure Database for PostgreSQL flexible server supports managed identities and service principals as group members.
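As an illustration, a member of the group created above might acquire a token and connect roughly as follows. This sketch assumes the `psql` client, the `Prod DB Readonly` group, and a server named `mydemoserver`; substitute your own values.

```bash
# Acquire a Microsoft Entra access token for Azure Database for PostgreSQL and use it as the password.
export PGPASSWORD=$(az account get-access-token --resource https://ossrdbms-aad.database.windows.net --query accessToken --output tsv)

# Connect with the group name as the username.
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user='Prod DB Readonly' sslmode=require"
```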
### Sign in to the user's Azure subscription
postgresql How To Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-portal.md
You can delete your server if you no longer need it.
:::image type="content" source="./media/how-to-manage-server-portal/delete-server.png" alt-text="Delete the Azure Database for PostgreSQL flexible server instance.":::
- > [!IMPORTANT]
- > Deleting a server is irreversible.
> [!div class="mx-imgBorder"] > ![Delete the Azure Database for PostgreSQL flexible server instance](./media/how-to-manage-server-portal/delete-server.png)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. Azure Database
| South Africa West* | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South Central US | :heavy_check_mark: (v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | South India | :heavy_check_mark: (v3/v4/v5 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| Southeast Asia | :heavy_check_mark:(v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
+| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Sweden Central | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Sweden South* | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland North | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
private-link How To Approve Private Link Cross Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/how-to-approve-private-link-cross-subscription.md
Title: Approve private link connections across subscriptions
+ Title: Approve private endpoint connections across subscriptions
-description: Get started learning how to approve and manage private link connections across subscriptions with Azure Private Link.
+description: Get started learning how to approve and manage private endpoint connections across subscriptions by using Azure Private Link.
Last updated 01/11/2024
-#customer intent: As a Network Administrator, I want the approve private link connections across Azure subscriptions.
+#customer intent: As a network administrator, I want to approve Private Link connections across Azure subscriptions.
-# Approve private link connections across subscriptions
+# Approve Private Link connections across subscriptions
Azure Private Link enables you to connect privately to Azure resources. Private Link connections are scoped to a specific subscription. This article shows you how to approve a private endpoint connection across subscriptions. ## Prerequisites -- Two active Azure subscriptions.
+- Two active Azure subscriptions:
- One subscription hosts the Azure resource and the other subscription contains the consumer private endpoint and virtual network.
Resources used in this article:
| Resource | Subscription | Resource group | Location | | | | | |
-| **storage1** *(This name is unique, replace with the name you create)* | subscription-1 | test-rg | East US 2 |
+| **storage1** *(This name is unique. Replace with the name you create.)* | subscription-1 | test-rg | East US 2 |
| **vnet-1** | subscription-2 | test-rg | East US 2 | | **private-endpoint** | subscription-2 | test-rg | East US 2 |
Sign in to **subscription-1** in the [Azure portal](https://portal.azure.com).
## Register the resource providers for subscription-1
-For the private endpoint connection to complete successfully, the `Microsoft.Network` and `Microsoft.Storage` resource providers must be registered in **subscription-1**. Use the following steps to register the resource providers. If the `Microsoft.Network` and `Microsoft.Storage` resource providers are already registered, skip this step.
+For the private endpoint connection to complete successfully, the `Microsoft.Storage` and `Microsoft.Network` resource providers must be registered in **subscription-1**. Use the following steps to register the resource providers. If the `Microsoft.Storage` and `Microsoft.Network` resource providers are already registered, skip this step.
> [!IMPORTANT] > If you're using a different resource type, you must register the resource provider for that resource type if it's not already registered.
For the private endpoint connection to complete successfully, the `Microsoft.Net
1. Select **Register**.
-1. Repeat the previous steps to register the **Microsoft.Network** resource provider.
+1. Repeat the previous steps to register the `Microsoft.Network` resource provider.
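If you prefer the CLI, the following sketch registers the same resource providers. It assumes you're signed in with the Azure CLI and that **subscription-1** is the subscription name used in this article; repeat the same commands later for **subscription-2**.

```bash
# Switch to the subscription that hosts the storage account.
az account set --subscription "subscription-1"

# Register both resource providers. Registration is asynchronous.
az provider register --namespace Microsoft.Storage
az provider register --namespace Microsoft.Network

# Check the state until both report "Registered".
az provider show --namespace Microsoft.Storage --query registrationState --output tsv
az provider show --namespace Microsoft.Network --query registrationState --output tsv
```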
## Create a resource group
For the private endpoint connection to complete successfully, the `Microsoft.Net
1. Select **+ Create**.
-1. In the **Basics** tab of **Create a resource group**, enter or select the following information:
+1. On the **Basics** tab of **Create a resource group**, enter or select the following information:
| Setting | Value | | - | -- |
For the private endpoint connection to complete successfully, the `Microsoft.Net
[!INCLUDE [create-storage-account.md](../../includes/create-storage-account.md)]
-## Obtain storage account resource ID
+## Obtain the storage account resource ID
You need the storage account resource ID to create the private endpoint connection in **subscription-2**. Use the following steps to obtain the storage account resource ID.
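As an alternative to the portal steps, the following Azure CLI sketch returns the same ID. It assumes the example names used in this article; replace `storage1` with your unique storage account name.

```bash
az account set --subscription "subscription-1"

# Print only the resource ID of the storage account.
az storage account show \
    --name storage1 \
    --resource-group test-rg \
    --query id \
    --output tsv
```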
Sign in to **subscription-2** in the [Azure portal](https://portal.azure.com).
## Register the resource providers for subscription-2
-For the private endpoint connection to complete successfully, the `Microsoft.Storage` and `Microsoft.Network` resource provider must be registered in **subscription-2**. Use the following steps to register the resource providers. If the `Microsoft.Storage` and `Microsoft.Network` resource providers are already registered, skip this step.
+For the private endpoint connection to complete successfully, the `Microsoft.Storage` and `Microsoft.Network` resource providers must be registered in **subscription-2**. Use the following steps to register the resource providers. If the `Microsoft.Storage` and `Microsoft.Network` resource providers are already registered, skip this step.
> [!IMPORTANT] > If you're using a different resource type, you must register the resource provider for that resource type if it's not already registered.
For the private endpoint connection to complete successfully, the `Microsoft.Sto
1. Select **Register**.
-1. Repeat the previous steps to register the **Microsoft.Network** resource provider.
+1. Repeat the previous steps to register the `Microsoft.Network` resource provider.
[!INCLUDE [virtual-network-create.md](../../includes/virtual-network-create.md)]
For the private endpoint connection to complete successfully, the `Microsoft.Sto
1. Select **+ Create** in **Private endpoints**.
-1. In the **Basics** tab of **Create a private endpoint**, enter or select the following information:
+1. On the **Basics** tab of **Create a private endpoint**, enter or select the following information:
| Setting | Value | | - | -- | | **Project details** | | | Subscription | Select **subscription-2**. |
- | Resource group | Select **test-rg** |
+ | Resource group | Select **test-rg**. |
| **Instance details** | | | Name | Enter **private-endpoint**. | | Network Interface Name | Leave the default of **private-endpoint-nic**. |
The private endpoint connection is in a **Pending** state until approved. Use th
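For reference, the following hedged CLI sketch creates the cross-subscription private endpoint from **subscription-2** and then approves the resulting pending connection from **subscription-1**. The subnet name, connection name, and `<storage-resource-id>` values are placeholders; adjust them to your environment.

```bash
# In subscription-2: create the private endpoint against the storage account in subscription-1.
# Cross-subscription connections require manual approval by the resource owner.
az account set --subscription "subscription-2"

az network private-endpoint create \
    --resource-group test-rg \
    --name private-endpoint \
    --vnet-name vnet-1 \
    --subnet subnet-1 \
    --private-connection-resource-id "<storage-resource-id>" \
    --group-id blob \
    --connection-name cross-sub-connection \
    --manual-request true

# In subscription-1: approve the pending connection on the storage account.
# If the connection name shown on the storage account differs, list it first with
# 'az network private-endpoint-connection list'.
az account set --subscription "subscription-1"

az network private-endpoint-connection approve \
    --resource-group test-rg \
    --resource-name storage1 \
    --type Microsoft.Storage/storageAccounts \
    --name cross-sub-connection \
    --description "Approved"
```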
In this article, you learned how to approve a private endpoint connection across subscriptions. To learn more about Azure Private Link, continue to the following articles: - [Azure Private Link overview](private-link-overview.md)--- [Azure Private endpoint overview](private-endpoint-overview.md)
+- [Azure private endpoint overview](private-endpoint-overview.md)
role-based-access-control Classic Administrators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/classic-administrators.md
Title: Azure classic subscription administrators
-description: Describes how to add or change the Azure Co-Administrator and Service Administrator roles, and how to view the Account Administrator.
+description: Describes how to remove or change the Azure Co-Administrator and Service Administrator roles, and how to view the Account Administrator.
Previously updated : 06/07/2023 Last updated : 01/26/2024
# Azure classic subscription administrators > [!IMPORTANT]
-> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting February 26, 2024, you won't be able to add new Co-Administrators. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
-Microsoft recommends that you manage access to Azure resources using Azure role-based access control (Azure RBAC). However, if you are still using the classic deployment model, you'll need to use a classic subscription administrator role: Service Administrator and Co-Administrator. For more information, see [Azure Resource Manager vs. classic deployment](../azure-resource-manager/management/deployment-models.md).
+Microsoft recommends that you manage access to Azure resources using Azure role-based access control (Azure RBAC). However, if you are still using the classic deployment model, you'll need to use a classic subscription administrator role: Service Administrator and Co-Administrator. For information about how to migrate your resources from classic deployment to Resource Manager deployment, see [Azure Resource Manager vs. classic deployment](../azure-resource-manager/management/deployment-models.md).
-This article describes how to add or change the Co-Administrator and Service Administrator roles, and how to view the Account Administrator.
+This article describes how to remove or change the Co-Administrator and Service Administrator roles, and how to view the Account Administrator.
+
+## Frequently asked questions
+
+Will Co-Administrators lose access after August 31, 2024?
+
+- Starting on August 31, 2024, Microsoft will start the process to remove access for Co-Administrators.
+
+What is the equivalent Azure role I should assign for Co-Administrators?
+
+- [Owner](built-in-roles.md#owner) role at subscription scope has the equivalent access. However, Owner is a [privileged administrator role](role-assignments-steps.md#privileged-administrator-roles) and grants full access to manage Azure resources. You should consider another Azure role with fewer permissions or reduce the scope.
+
+What should I do if I have a strong dependency on Co-Administrators?
+
+- Email ACARDeprecation@microsoft.com and describe your scenario.
+
+## View Co-Administrators
+
+Follow these steps to view the Co-Administrators for a subscription using the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as an [Owner](built-in-roles.md#owner) of a subscription.
+
+1. Open [Subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and select a subscription.
+
+1. Click **Access control (IAM)**.
+
+1. Click the **Classic administrators** tab to view a list of the Co-Administrators.
+
+ ![Screenshot that opens Classic administrators.](./media/shared/classic-administrators.png)
+
+## Assess Co-Administrators
+
+Use the following table to assess how to remove or re-assign Co-Administrators.
+
+| Assessment | Next steps|
+| | |
+| User no longer needs access | Follow steps to [remove Co-Administrator](#remove-a-co-administrator). |
+| User still needs some access, but not full access | 1. Determine the Azure role the user needs.<br/>2. Determine the scope the user needs.<br/>3. Follow steps to [assign an Azure role to user](role-assignments-portal.md).<br/>4. [Remove Co-Administrator](#remove-a-co-administrator). |
+| User needs the same access as a Co-Administrator | 1. Assign the [Owner role at subscription scope](role-assignments-portal-subscription-admin.md).<br/>2. [Remove Co-Administrator](#remove-a-co-administrator). |
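The following Azure CLI sketch can help with this assessment; the subscription ID and user principal name are placeholders.

```bash
# List classic administrators alongside Azure role assignments for the subscription.
az role assignment list \
    --include-classic-administrators \
    --subscription "<subscription-id>" \
    --output table

# For a user who needs the same access as a Co-Administrator, assign Owner at subscription scope.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Owner" \
    --scope "/subscriptions/<subscription-id>"
```

Remember that Owner is a privileged administrator role, so prefer a more limited role or a narrower scope when the table above indicates only partial access is needed.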
+
+## Remove a Co-Administrator
+
+> [!IMPORTANT]
+> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting February 26, 2024, you won't be able to add new Co-Administrators. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+
+Follow these steps to remove a Co-Administrator.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as an [Owner](built-in-roles.md#owner) of a subscription.
+
+1. Open [Subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and select a subscription.
+
+1. Click **Access control (IAM)**.
+
+1. Click the **Classic administrators** tab to view a list of the Co-Administrators.
+
+1. Add a check mark next to the Co-Administrator you want to remove.
+
+1. Click **Remove**.
+
+1. In the message box that appears, click **Yes**.
+
+ ![Screenshot that removes co-administrator.](./media/classic-administrators/remove-coadmin.png)
## Add a Co-Administrator
-> [!TIP]
+> [!IMPORTANT]
+> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting February 26, 2024, you won't be able to add new Co-Administrators. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+>
> You only need to add a Co-Administrator if the user needs to manage Azure classic deployments by using [Azure Service Management PowerShell Module](/powershell/azure/servicemanagement/install-azure-ps). If the user only uses the Azure portal to manage the classic resources, you won't need to add the classic administrator for the user.
-1. Sign in to the [Azure portal](https://portal.azure.com) as the Service Administrator or a Co-Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as an [Owner](built-in-roles.md#owner) of a subscription.
1. Open [Subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and select a subscription.
Note that the [Azure built-in roles](../role-based-access-control/built-in-roles
For information that compares member users and guest users, see [What are the default user permissions in Microsoft Entra ID?](../active-directory/fundamentals/users-default-permissions.md).
-## Remove a Co-Administrator
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as the Service Administrator or a Co-Administrator.
-
-1. Open [Subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and select a subscription.
-
-1. Click **Access control (IAM)**.
-
-1. Click the **Classic administrators** tab.
-
-1. Add a check mark next to the Co-Administrator you want to remove.
-
-1. Click **Remove**.
-
-1. In the message box that appears, click **Yes**.
-
- ![Screenshot that removes co-administrator](./media/classic-administrators/remove-coadmin.png)
- ## Change the Service Administrator Only the Account Administrator can change the Service Administrator for a subscription. By default, when you sign up for an Azure subscription, the Service Administrator is the same as the Account Administrator.
For more information about Microsoft accounts and Microsoft Entra accounts, see
You might want to remove the Service Administrator, for example, if they are no longer with the company. If you do remove the Service Administrator, you must have a user who is assigned the [Owner](built-in-roles.md#owner) role at subscription scope to avoid orphaning the subscription. A subscription Owner has the same access as the Service Administrator.
-1. Sign in to the [Azure portal](https://portal.azure.com) as a subscription Owner or a Co-Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as an [Owner](built-in-roles.md#owner) of a subscription.
1. Open [Subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and select a subscription.
role-based-access-control Rbac And Directory Admin Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/rbac-and-directory-admin-roles.md
ms.assetid: 174f1706-b959-4230-9a75-bf651227ebf6
Previously updated : 12/01/2023 Last updated : 01/26/2024
Several Microsoft Entra roles span Microsoft Entra ID and Microsoft 365, such as
## Classic subscription administrator roles > [!IMPORTANT]
-> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting February 26, 2024, you won't be able to add new Co-Administrators. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
Account Administrator, Service Administrator, and Co-Administrator are the three classic subscription administrator roles in Azure. Classic subscription administrators have full access to the Azure subscription. They can manage resources using the Azure portal, Azure Resource Manager APIs, and the classic deployment model APIs. The account that is used to sign up for Azure is automatically set as both the Account Administrator and Service Administrator. Then, additional Co-Administrators can be added. The Service Administrator and the Co-Administrators have the equivalent access of users who have been assigned the Owner role (an Azure role) at the subscription scope. The following table describes the differences between these three classic subscription administrative roles.
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
ms.assetid: df42cca2-02d6-4f3c-9d56-260e1eb7dc44
Previously updated : 12/01/2023 Last updated : 01/26/2024
If you're a Microsoft Entra Global Administrator and you don't have access to a
## Classic subscription administrators > [!IMPORTANT]
-> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting February 26, 2024, you won't be able to add new Co-Administrators. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+>
+> For more information, see [Azure classic subscription administrators](classic-administrators.md).
## Next steps
sap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md
For this example configuration, the resource group is `LAB-SECE-DEP05-INFRASTRUC
Select the playbook `1) BoM Downloader` to download the SAP software described in the BOM file into the storage account. Check that the `sapbits` container has all your media for installation.
+ You can run the playbook either by using the configuration menu or directly from the command line.
+
+ ```bash
+
+ cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/BOMS/
+
+ export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+ playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars="@sap-parameters.yaml"
+ --extra-vars="bom_processing=true"
+ "${@}"
+ )
+
+ # Run the playbook to retrieve the ssh key from the Azure key vault
+ ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+ # Run the playbook to download the SAP software described in the BOM file
+ ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_bom_downloader.yaml
+
+
+ ```
+
+ If you want, you can also pass the SAP user credentials as parameters:
+
+ ```bash
+
+ cd ${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/BOMS/
+
+ sap_username=<sap-username>
+ sap_user_password='<sap-password>'
+
+ export ANSIBLE_PRIVATE_KEY_FILE=sshkey
+
+ playbook_options=(
+ --inventory-file="${sap_sid}_hosts.yaml"
+ --private-key=${ANSIBLE_PRIVATE_KEY_FILE}
+ --extra-vars="_workspace_directory=`pwd`"
+ --extra-vars="@sap-parameters.yaml"
+ --extra-vars="s_user=${sap_username}"
+ --extra-vars="s_password=${sap_user_password}"
+ --extra-vars="bom_processing=true"
+ "${@}"
+ )
+
+ # Run the playbook to retrieve the ssh key from the Azure key vault
+ ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/pb_get-sshkey.yaml
+
+ # Run the playbook to download the SAP software described in the BOM file
+ ansible-playbook "${playbook_options[@]}" ~/Azure_SAP_Automated_Deployment/sap-automation/deploy/ansible/playbook_bom_downloader.yaml
+
+
+ ```
+++ ## SAP application installation
search Search Get Started Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-retrieval-augmented-generation.md
+
+ Title: 'Quickstart: RAG app'
+
+description: Use Azure OpenAI Studio to chat with a search index on Azure AI Search. Explore the Retrieval Augmented Generation (RAG) pattern for your search solution.
++++++ Last updated : 01/25/2024++
+# Quickstart: Chat with your search index in Azure OpenAI Studio
+
+Get started with generative search using Azure OpenAI Studio's **Add your own data** option to implement a Retrieval Augmented Generation (RAG) experience powered by Azure AI Search.
+
+**Add your own data** gives you built-in data preprocessing (text extraction and clean up), data chunking, embedding, and indexing. You can stand up a chat experience quickly, experiment with prompts over your own data, and gain important insights as to how your content performs before writing any code.
+
+In this quickstart:
+
+> [!div class="checklist"]
+> + Deploy Azure OpenAI models
+> + Download sample PDFs
+> + Configure data processing
+> + Chat with your data in the Azure OpenAI Studio playground
+> + Test your index with different chat models, configurations, and history
+
+## Prerequisites
+++ [An Azure subscription](https://azure.microsoft.com/free/)+++ [Azure OpenAI](https://aka.ms/oai/access)+++ [Azure Storage](/azure/storage/common/storage-account-create)+++ [Azure AI Search](search-create-app-portal.md), in any region, on a billable tier (Basic and above), preferably with [semantic ranking enabled](semantic-how-to-enable-disable.md)+++ Contributor permissions in the Azure subscription for creating resources+
+## Set up model deployments
+
+1. Start [Azure OpenAI Studio](https://oai.azure.com/portal).
+
+1. Sign in, select your Azure subscription and Azure OpenAI resource, and then select **Use resource**.
+
+1. Under **Management > Deployments**, find or create a deployment for each of the following models:
+
+ + [text-embedding-ada-002](/azure/ai-services/openai/concepts/models#embeddings)
+ + [gpt-35-turbo](/azure/ai-services/openai/concepts/models#gpt-35)
+
+ Deploy more chat models if you want to test them with your data. Note that Text-Davinci-002 isn't supported.
+
+ If you create new deployments, the default configurations are suited for this tutorial. It's helpful to name each deployment after the model. For example, "text-embedding-ada-002" as the deployment name of the text-embedding-ada-002 model.
+
+## Generate a vector store for the playground
+
+1. Download the sample famous-speeches-pdf PDFs in [azure-search-sample-data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/famous-speeches-pdf).
+
+1. Sign in to the [Azure OpenAI Studio](https://oai.azure.com/portal).
+
+1. On the **Chat** page under **Playground**, select **Add your data (preview)**.
+
+1. Select **Add data source**.
+
+1. From the dropdown list, select **Upload files**.
+
+ :::image type="content" source="media/search-get-started-rag/azure-openai-data-source.png" lightbox="media/search-get-started-rag/azure-openai-data-source.png" alt-text="Screenshot of the upload files option.":::
+
+1. In Data source, select your Azure Blob storage resource. Enable cross-origin scripting if prompted.
+
+1. Select your Azure AI Search resource.
+
+1. Provide an index name that's unique in your search service.
+
+1. Check **Add vector search to this search index.**
+
+1. Select **Azure OpenAI - text-embedding-ada-002**.
+
+1. Check the acknowledgment that Azure AI Search is a billable service. If you're using an existing search service, there's no extra charge for the vector store unless you add semantic ranking. If you're creating a new service, Azure AI Search becomes billable upon service creation.
+
+1. Select **Next**.
+
+1. In Upload files, select the four files and then select **Upload**.
+
+1. Select **Next**.
+
+1. In Data Management, choose **Hybrid + semantic** if [semantic ranking is enabled](semantic-how-to-enable-disable.md) on your search service. If semantic ranking is disabled, choose **Hybrid (vector + keyword)**. Hybrid is a better choice because vector (similarity) search and keyword search execute the same query input in parallel, which can produce a more relevant response.
+
+ :::image type="content" source="media/search-get-started-rag/azure-openai-data-manage.png" lightbox="media/search-get-started-rag/azure-openai-data-manage.png" alt-text="Screenshot of the data management options.":::
+
+1. Acknowledge that vectorization of the sample data is billed at the usage rate of the Azure OpenAI embedding model.
+
+1. Select **Next**, and then select **Review and Finish**.
+
+## Chat with your data
+
+1. Review advanced settings that determine how much flexibility the chat model has in supplementing the grounding data, and how many chunks are returned from the query to the vector store.
+
+ Strictness determines whether the model supplements the query with its own information. A level of 5 is no supplementation. Only your grounding data is used, which means the search engine plays a large role in the quality of the response. Semantic ranking can be helpful in this scenario because the ranking models do a better job of interpreting the intent of the query.
+
+ Lower levels of strictness produce more verbose answers, but might also include information that isn't in your index.
+
+ :::image type="content" source="media/search-get-started-rag/azure-openai-studio-advanced-settings.png" alt-text="Screenshot of the advanced settings.":::
+
+1. Start with these settings:
+
+ + Check the **Limit responses to your data content** option.
+ + Strictness set to 3.
+ + Retrieved documents set to 20. Given chunk sizes of 1024 tokens, a setting of 20 gives you roughly 20,000 tokens to use for generating responses. The tradeoff is query latency, but you can experiment with chat replay to find the right balance.
+
+1. Send your first query. The chat models perform best in question and answer exercises. For example, "who gave the Gettysburg speech" or "when was the Gettysburg speech delivered".
+
+ More complex queries, such as "why was Gettysburg important", perform better if the model has some latitude to answer (lower levels of strictness) or if semantic ranking is enabled.
+
+ Queries that require deeper analysis, such as "how many speeches are in the vector store", might fail to return a response. In RAG pattern chat scenarios, information retrieval is keyword and similarity search against the query string, where the search engine looks for chunks having exact or similar terms, phrases, or construction. The payload might have insufficient data for the model to work with.
+
+ Finally, chats are constrained by the number of documents (chunks) returned in the response (limited to 3-20 in Azure OpenAI Studio playground). As you can imagine, posing a question about "all of the titles" requires a full scan of the entire vector store, which means a different approach, or modifying the generated code to allow for [exhaustive search](vector-search-how-to-create-index.md#add-a-vector-search-configuration) in the vector search configuration.
+
+ :::image type="content" source="media/search-get-started-rag/chat-results.png" lightbox="media/search-get-started-rag/chat-results.png" alt-text="Screenshot of a chat session.":::
+
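If you want to inspect the chunks the wizard wrote to your index, you can also query Azure AI Search directly. The following sketch uses a current GA REST API version; the service name, index name, and key are placeholders.

```bash
# Return the top 5 chunks that match a keyword query against the new index.
curl -X POST "https://<search-service-name>.search.windows.net/indexes/<index-name>/docs/search?api-version=2023-11-01" \
  -H "Content-Type: application/json" \
  -H "api-key: <query-or-admin-key>" \
  -d '{"search": "Gettysburg", "top": 5}'
```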
+## Next steps
+
+Now that you're familiar with the benefits of Azure OpenAI Studio for scenario testing, review code samples that demonstrate the full range of APIs for RAG applications. Samples are available in [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet), and [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
+
+## Clean up
+
+Azure AI Search is a billable resource for as long as the service exists. If it's no longer needed, delete it from your subscription to avoid charges.
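For example, you can delete the search service with the Azure CLI; the service and resource group names below are placeholders.

```bash
# Delete the Azure AI Search service to stop billing for it.
az search service delete \
    --name <search-service-name> \
    --resource-group <resource-group-name>
```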
service-fabric Service Fabric Cluster Creation Setup Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-setup-aad.md
Last updated 08/29/2022
> [!WARNING] > At this time, Microsoft Entra client authentication and the Managed Identity Token Service are mutually incompatible on Linux.
-For clusters running on Azure, Microsoft Entra ID is recommended to secure access to management endpoints. This article describes how to setup Microsoft Entra ID to authenticate clients for a Service Fabric cluster.
+For clusters running on Azure, Microsoft Entra ID is recommended to secure access to management endpoints. This article describes how to set up Microsoft Entra ID to authenticate clients for a Service Fabric cluster.
On Linux, you must complete the following steps before you create the cluster. On Windows, you also have the option to [configure Microsoft Entra authentication for an existing cluster](https://github.com/Azure/Service-Fabric-Troubleshooting-Guides/blob/master/Security/Configure%20Azure%20Active%20Directory%20Authentication%20for%20Existing%20Cluster.md).
-In this article, the term "application" will be used to refer to [Microsoft Entra applications](../active-directory/develop/developer-glossary.md#client-application), not Service Fabric applications; the distinction will be made where necessary. Microsoft Entra ID enables organizations (known as tenants) to manage user access to applications.
+In this article, the term "application" refers to [Microsoft Entra applications](../active-directory/develop/developer-glossary.md#client-application), not Service Fabric applications; the distinction is made where necessary. Microsoft Entra ID enables organizations (known as tenants) to manage user access to applications.
-A Service Fabric cluster offers several entry points to its management functionality, including the web-based [Service Fabric Explorer][service-fabric-visualizing-your-cluster] and [Visual Studio][service-fabric-manage-application-in-visual-studio]. As a result, you will create two Microsoft Entra applications to control access to the cluster: one web application and one native application. After the applications are created, you will assign users to read-only and admin roles.
+A Service Fabric cluster offers several entry points to its management functionality, including the web-based [Service Fabric Explorer][service-fabric-visualizing-your-cluster] and [Visual Studio][service-fabric-manage-application-in-visual-studio]. As a result, you'll create two Microsoft Entra applications to control access to the cluster: one web application and one native application. After the applications are created, you'll assign users to read-only and admin roles.
> [!NOTE] > At this time, Service Fabric doesn't support Microsoft Entra authentication for storage. > [!NOTE]
-> It is a [known issue](https://github.com/microsoft/service-fabric/issues/399) that applications and nodes on Linux Microsoft Entra ID-enabled clusters cannot be viewed in Azure Portal.
+> It's a [known issue](https://github.com/microsoft/service-fabric/issues/399) that applications and nodes on Linux Microsoft Entra ID-enabled clusters cannot be viewed in Azure Portal.
> [!NOTE]
-> Microsoft Entra ID now requires an application (app registration) publishers domain to be verified or use of default scheme. See [Configure an application's publisher domain](../active-directory/develop/howto-configure-publisher-domain.md) and [AppId Uri in single tenant applications will require use of default scheme or verified domains](../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains) for additional information.
+> Microsoft Entra ID now requires an application (app registration) publisher's domain to be verified or the use of the default scheme. See [Configure an application's publisher domain](../active-directory/develop/howto-configure-publisher-domain.md) and [AppId Uri in single tenant applications requires use of default scheme or verified domains](../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains) for additional information.
> [!NOTE]
-> Starting in Service Fabric 11.0, Service Fabric Explorer will require a Single-page application Redirect URI instead of a Web Redirect URI.
+> Starting in Service Fabric 11.0, Service Fabric Explorer requires a Single-page application Redirect URI instead of a Web Redirect URI.
## Prerequisites
-In this article, we assume that you have already created a tenant. If you have not, start by reading [How to get a Microsoft Entra tenant][active-directory-howto-tenant].
-To simplify some of the steps involved in configuring Microsoft Entra ID with a Service Fabric cluster, we have created a set of Windows PowerShell scripts. Some actions require administrative level access to Microsoft Entra ID. If script errors with 401/403 'Authorization_RequestDenied', an administrator will need to execute script.
+In this article, we assume that you have already created a tenant. If you haven't, start by reading [How to get a Microsoft Entra tenant][active-directory-howto-tenant].
+To simplify some of the steps involved in configuring Microsoft Entra ID with a Service Fabric cluster, we have created a set of Windows PowerShell scripts. Some actions require administrative-level access to Microsoft Entra ID. If the script experiences a 401 or 403 'Authorization_RequestDenied' error, an administrator needs to execute the script.
1. Authenticate with Azure administrative permissions. 2. [Clone the repo](https://github.com/Azure-Samples/service-fabric-aad-helpers) to your computer.
We'll use the scripts to create two Microsoft Entra applications to control acce
### SetupApplications.ps1
-Run `SetupApplications.ps1` and provide the tenant ID, cluster name, web application URI, and web application reply URL as parameters. Use -remove to remove the app registrations. Using -logFile `<log file path>` will generate a transcript log. See script help (help .\setupApplications.ps1 -full) for additional information. The script creates the web and native applications to represent your Service Fabric cluster. The two new app registration entries will be in the following format:
+Run `SetupApplications.ps1` and provide the tenant ID, cluster name, web application URI, and web application reply URL as parameters. Use -remove to remove the app registrations. Using -logFile `<log file path>` generates a transcript log. See script help (help .\setupApplications.ps1 -full) for additional information. The script creates the web and native applications to represent your Service Fabric cluster. The two new app registration entries are in the following format:
- ClusterName_Cluster - ClusterName_Client
Run `SetupApplications.ps1` and provide the tenant ID, cluster name, web applica
- **tenantId:** You can find your *TenantId* by executing the PowerShell command `Get-AzureSubscription`. Executing this command displays the TenantId for every subscription. -- **clusterName:** *ClusterName* is used to prefix the Microsoft Entra applications that are created by the script. It does not need to match the actual cluster name exactly. It is intended only to make it easier to map Microsoft Entra artifacts to the Service Fabric cluster that they're being used with.
+- **clusterName:** *ClusterName* is used to prefix the Microsoft Entra applications that are created by the script. It doesn't need to match the actual cluster name exactly. It's intended only to make it easier to map Microsoft Entra artifacts to the Service Fabric cluster that they're being used with.
-- **SpaApplicationReplyUrl:** *SpaApplicationReplyUrl* is the default endpoint that Microsoft Entra ID returns to your users after they finish signing in. Set this endpoint as the Service Fabric Explorer endpoint for your cluster. If you are creating Microsoft Entra applications to represent an existing cluster, make sure this URL matches your existing cluster's endpoint. If you are creating applications for a new cluster, plan the endpoint your cluster will have and make sure not to use the endpoint of an existing cluster. By default the Service Fabric Explorer endpoint is: `https://<cluster_domain>:19080/Explorer/https://docsupdatetracker.net/index.html`
+- **SpaApplicationReplyUrl:** *SpaApplicationReplyUrl* is the default endpoint that Microsoft Entra ID returns to your users after they finish signing in. Set this endpoint as the Service Fabric Explorer endpoint for your cluster. If you're creating Microsoft Entra applications to represent an existing cluster, make sure this URL matches your existing cluster's endpoint. If you're creating applications for a new cluster, plan the endpoint for your cluster and make sure not to use the endpoint of an existing cluster. By default the Service Fabric Explorer endpoint is: `https://<cluster_domain>:19080/Explorer/https://docsupdatetracker.net/index.html`
-- **webApplicationUri:** *WebApplicationUri* is either the URI of a 'verified domain' or URI using API scheme format of api://{{tenant Id}}/{{cluster name}}. See [AppId Uri in single tenant applications will require use of default scheme or verified domains](../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains) for additional information.
+- **webApplicationUri:** *WebApplicationUri* is either the URI of a 'verified domain' or URI using API scheme format of API://{{tenant Id}}/{{cluster name}}. See [AppId Uri in single tenant applications requires use of default scheme or verified domains](../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains) for additional information.
- Example API scheme: api://0e3d2646-78b3-4711-b8be-74a381d9890c/mysftestcluster
+ Example API scheme: API://0e3d2646-78b3-4711-b8be-74a381d9890c/mysftestcluster
#### SetupApplications.ps1 example
$tenantId = '0e3d2646-78b3-4711-b8be-74a381d9890c'
$clusterName = 'mysftestcluster' $spaApplicationReplyUrl = 'https://mysftestcluster.eastus.cloudapp.azure.com:19080/Explorer/https://docsupdatetracker.net/index.html' # < client browser redirect url #$webApplicationUri = 'https://mysftestcluster.contoso.com' # < must be verified domain due to AAD changes
-$webApplicationUri = "api://$tenantId/$clusterName" # < does not have to be verified domain
+$webApplicationUri = "API://$tenantId/$clusterName" # < doesn't have to be verified domain
$configObj = .\SetupApplications.ps1 -TenantId $tenantId ` -ClusterName $clusterName `
NativeClientAppId b22cc0e2-7c4e-480c-89f5-25f768ecb439
### SetupUser.ps1
-SetupUser.ps1 is used to add user accounts to the newly created app registration using $configObj output variable from above. Specify username for user account to be configured with app registration and specify 'isAdmin' for administrative permissions. If the user account is new, provide the temporary password for the new user as well. The password will need to be changed on first logon. Using '-remove', will remove the user account not just the app registration.
+SetupUser.ps1 is used to add user accounts to the newly created app registration using the $configObj output variable from above. Specify the username for the user account to be configured with the app registration, and specify 'isAdmin' for administrative permissions. If the user account is new, provide the temporary password for the new user as well. The password needs to be changed on first logon. If you use '-remove', you'll remove the user account, not just the app registration.
#### SetupUser.ps1 user (read) example
Setting up Microsoft Entra ID and using it can be challenging, so here are some
> [!NOTE] > With migration of Identities platforms (ADAL to MSAL), deprecation of AzureRM in favor of Azure AZ, and supporting multiple versions of PowerShell, dependencies may not always be correct or up to date causing errors in script execution. Running PowerShell commands and scripts from Azure Cloud Shell reduces the potential for errors with session auto authentication and managed identity.
-[![Button that will launch Cloud Shell](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com/powershell)
+[![Button that launches Cloud Shell](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com/powershell)
### **Request_BadRequest**
VERBOSE: received -byte response of content type application/json
confirm-graphApiRetry returning:True VERBOSE: invoke-graphApiCall status: 400 exception:
-Response status code does not indicate success: 400 (Bad Request).
+Response status code doesn't indicate success: 400 (Bad Request).
Invoke-WebRequest: /home/<user>/clouddrive/service-fabric-aad-helpers/Common.ps1:239 Line |
confirm-graphApiRetry returning:True
#### **Reason**
-Configuration changes have not propagated. Scripts will retry on certain requests with HTTP status codes 400 and 404.
+Configuration changes haven't propagated. Scripts retry on certain requests with HTTP status codes 400 and 404.
#### **Solution**
-Scripts will retry on certain requests with HTTP status codes 400 and 404 upto provided '-timeoutMin' which is by default 5 minutes. Script can be re-executed as needed.
+Scripts retry on certain requests with HTTP status codes 400 and 404 up to the provided '-timeoutMin' value, which is 5 minutes by default. The script can be re-executed as needed.
### **Service Fabric Explorer prompts you to select a certificate**
After you sign in successfully to Microsoft Entra ID in Service Fabric Explorer,
![SFX certificate dialog][sfx-select-certificate-dialog] #### **Reason**
-The user is not assigned a role in the Microsoft Entra ID cluster application. Thus, Microsoft Entra authentication fails on Service Fabric cluster. Service Fabric Explorer falls back to certificate authentication.
+The user isn't assigned a role in the Microsoft Entra ID cluster application. Thus, Microsoft Entra authentication fails on Service Fabric cluster. Service Fabric Explorer falls back to certificate authentication.
#### **Solution** Follow the instructions for setting up Microsoft Entra ID, and assign user roles. Also, we recommend that you turn on "User assignment required to access app," as `SetupApplications.ps1` does.
This solution is the same as the preceding one.
### **Service Fabric Explorer returns a failure when you sign in: "AADSTS50011"** #### **Problem**
-When you try to sign in to Microsoft Entra ID in Service Fabric Explorer, the page returns a failure: "AADSTS50011: The reply address &lt;url&gt; does not match the reply addresses configured for the application: &lt;guid&gt;."
+When you try to sign in to Microsoft Entra ID in Service Fabric Explorer, the page returns a failure: "AADSTS50011: The reply address &lt;url&gt; doesn't match the reply addresses configured for the application: &lt;guid&gt;."
-![SFX reply address does not match][sfx-reply-address-not-match]
+![SFX reply address doesn't match][sfx-reply-address-not-match]
#### **Reason**
-The cluster (web) application that represents Service Fabric Explorer attempts to authenticate against Microsoft Entra ID, and as part of the request it provides the redirect return URL. But the URL is not listed in the Microsoft Entra application **REPLY URL** list.
+The cluster (web) application that represents Service Fabric Explorer attempts to authenticate against Microsoft Entra ID, and as part of the request it provides the redirect return URL. But the URL isn't listed in the Microsoft Entra application **REPLY URL** list.
#### **Solution** On the Microsoft Entra app registration page for your cluster, select **Authentication**, and under the **Redirect URIs** section, add the Service Fabric Explorer URL to the list. Save your change.
On the Microsoft Entra app registration page for your cluster, select **Authenti
### **Connecting to the cluster using Microsoft Entra authentication via PowerShell gives an error when you sign in: "AADSTS50011"** #### **Problem**
-When you try to connect to a Service Fabric cluster using Microsoft Entra ID via PowerShell, the sign-in page returns a failure: "AADSTS50011: The reply url specified in the request does not match the reply urls configured for the application: &lt;guid&gt;."
+When you try to connect to a Service Fabric cluster using Microsoft Entra ID via PowerShell, the sign-in page returns a failure: "AADSTS50011: The reply url specified in the request doesn't match the reply urls configured for the application: &lt;guid&gt;."
#### **Reason** Similar to the preceding issue, PowerShell attempts to authenticate against Microsoft Entra ID, which provides a redirect URL that isn't listed in the Microsoft Entra application **Reply URLs** list.
This error is returned when the user account executing the script doesn't have t
#### **Solution**
-Work with an Administrator of Azure tenant/Microsoft Entra ID to complete all remaining actions. The scripts provided are idempotent so can be re-executed to complete the process.
+Work with an administrator of the Azure tenant or Microsoft Entra ID to complete all remaining actions. The scripts provided are idempotent, so they can be re-executed to complete the process.
<a name='connect-the-cluster-by-using-azure-ad-authentication-via-powershell'></a>
After setting up Microsoft Entra applications and setting roles for users, [conf
[active-directory-howto-tenant]:../active-directory/develop/quickstart-create-new-tenant.md [service-fabric-visualizing-your-cluster]: service-fabric-visualizing-your-cluster.md [service-fabric-manage-application-in-visual-studio]: service-fabric-manage-application-in-visual-studio.md
-[sf-aad-ps-script-download]:http://servicefabricsdkstorage.blob.core.windows.net/publicrelease/MicrosoftAzureServiceFabric-AADHelpers.zip
[x509-certificates-and-service-fabric]: service-fabric-cluster-security.md#x509-certificates-and-service-fabric <!-- Images -->
service-fabric Service Fabric Cluster Creation Setup Azure Ad Via Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-setup-azure-ad-via-portal.md
After you set up Microsoft Entra applications and set roles for users, [configur
[active-directory-howto-tenant]:../active-directory/develop/quickstart-create-new-tenant.md [service-fabric-visualizing-your-cluster]: service-fabric-visualizing-your-cluster.md [service-fabric-manage-application-in-visual-studio]: service-fabric-manage-application-in-visual-studio.md
-[sf-azure-ad-ps-script-download]:http://servicefabricsdkstorage.blob.core.windows.net/publicrelease/MicrosoftAzureServiceFabric-AADHelpers.zip
[x509-certificates-and-service-fabric]: service-fabric-cluster-security.md#x509-certificates-and-service-fabric <!-- Images -->
spring-apps Application Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/application-observability.md
To use Application Insights to investigate the performance issues, use the follo
> [Set up a staging environment](../spring-apps/how-to-staging-environment.md) > [!div class="nextstepaction"]
-> [Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md)
+> [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md)
> [!div class="nextstepaction"] > [Use TLS/SSL certificates](./how-to-use-tls-certificate.md)
spring-apps Concept App Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-app-customer-responsibilities.md
Title: Version support for Java, Spring Boot, and more
-description: This article describes customer responsibilities developing Azure Spring Apps.
+description: This article describes the version support for Java, Spring Boot, and Spring Cloud, and the customer responsibilities when developing Azure Spring Apps.
spring-apps Concept Understand App And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-understand-app-and-deployment.md
*App* and *Deployment* are the two key concepts in the resource model of Azure Spring Apps. In Azure Spring Apps, an *App* is an abstraction of one business app. One version of code or binary deployed as the *App* runs in a *Deployment*. Apps run in an *Azure Spring Apps service instance*, or simply *service instance*, as shown next. You can have multiple service instances within a single Azure subscription, but the Azure Spring Apps Service is easiest to use when all of the Apps that make up a business app reside within a single service instance. One reason is that the Apps are likely to communicate with each other. They can easily do that by using Eureka service registry in the service instance.
spring-apps Concepts Blue Green Deployment Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concepts-blue-green-deployment-strategies.md
Suppose your application has two deployments: `deployment1` and `deployment2`. C
This makes `deployment2` the staging deployment. Thus, when the Continuous Delivery (CD) pipeline is ready to run, it deploys the next version of the app, version `v4`, onto the staging deployment `deployment2`.
-![Two deployments: deployment1 receives production traffic](media/spring-cloud-blue-green-patterns/alternating-deployments-1.png)
+![Two deployments: deployment1 receives production traffic](media/concepts-blue-green-deployment-strategies/alternating-deployments-1.png)
After `v4` has started up on `deployment2`, you can run automated and manual tests against it through a private test endpoint to ensure `v4` meets all expectations.
-![V4 is now deployed on deployment2 and undergoes testing](media/spring-cloud-blue-green-patterns/alternating-deployments-2.png)
+![V4 is now deployed on deployment2 and undergoes testing](media/concepts-blue-green-deployment-strategies/alternating-deployments-2.png)
When you have confidence in `v4`, you can set `deployment2` as the production deployment so that it receives all production traffic. `v3` will remain running on `deployment1` in case you discover a critical issue that requires rolling back.
-![V4 on deployment2 now receives production traffic](media/spring-cloud-blue-green-patterns/alternating-deployments-3.png)
+![V4 on deployment2 now receives production traffic](media/concepts-blue-green-deployment-strategies/alternating-deployments-3.png)
Now, `deployment1` is the staging deployment. So the next run of the deployment pipeline deploys onto `deployment1`.
-![V5 deployed on deployment1](media/spring-cloud-blue-green-patterns/alternating-deployments-4.png)
+![V5 deployed on deployment1](media/concepts-blue-green-deployment-strategies/alternating-deployments-4.png)
You can now test `V5` on `deployment1`'s private test endpoint.
-![V5 tested on deployment1](media/spring-cloud-blue-green-patterns/alternating-deployments-5.png)
+![V5 tested on deployment1](media/concepts-blue-green-deployment-strategies/alternating-deployments-5.png)
Finally, after `v5` meets all your expectations, you set `deployment1` as the production deployment once again, so that `v5` receives all production traffic.
-![V5 receives traffic on deployment1](media/spring-cloud-blue-green-patterns/alternating-deployments-6.png)
+![V5 receives traffic on deployment1](media/concepts-blue-green-deployment-strategies/alternating-deployments-6.png)
### Tradeoffs of the alternating deployments approach
The staging deployment always remains running, and thus consuming resources of t
Suppose in the above application, the release pipeline requires manual approval before each new version of the application can receive production traffic. This creates the risk that while one version (`v6`) awaits manual approval on the staging deployment, the deployment pipeline will run again and overwrite it with a newer version (`v7`). Then, when the approval for `v6` is granted, the pipeline that deployed `v6` will set the staging deployment as production. But now it will be the unapproved `v7`, not the approved `v6`, that is deployed on that deployment and receives traffic.
-![The approval race condition](media/spring-cloud-blue-green-patterns/alternating-deployments-race-condition.png)
+![The approval race condition](media/concepts-blue-green-deployment-strategies/alternating-deployments-race-condition.png)
You may be able to prevent the race condition by ensuring that the deployment flow for one version can't begin until the deployment flow for all previous versions is complete or aborted. Another way to prevent the approval race condition is to use the Named Deployments approach described below.
In the named deployments approach, a new deployment is created for each new vers
In the illustration below, version `v5` is running on the deployment `deployment-v5`. The deployment name now contains the version because the deployment was created specifically for this version. There's no other deployment at the outset. Now, to deploy version `v6`, the deployment pipeline creates a new deployment `deployment-v6` and deploys app version `v6` there.
-![Deploying new version on a named deployment](media/spring-cloud-blue-green-patterns/named-deployment-1.png)
+![Deploying new version on a named deployment](media/concepts-blue-green-deployment-strategies/named-deployment-1.png)
There's no risk of another version being deployed in parallel. First, Azure Spring Apps doesn't allow the creation of a third deployment while two deployments already exist. Second, even if it was possible to have more than two deployments, each deployment is identified by the version of the application it contains. Thus, the pipeline orchestrating the deployment of `v6` would only attempt to set `deployment-v6` as the production deployment.
-![New version receives production traffic named deployment](media/spring-cloud-blue-green-patterns/named-deployment-2.png)
+![New version receives production traffic named deployment](media/concepts-blue-green-deployment-strategies/named-deployment-2.png)
After the deployment created for the new version receives production traffic, you'll need to remove the deployment containing the previous version to make room for future deployments. You may wish to postpone by some number of minutes or hours so you can roll back to the previous version if you discover a critical issue in the new version.
-![After a fallback period, deleting the previous deployment](media/spring-cloud-blue-green-patterns/named-deployment-3.png)
+![After a fallback period, deleting the previous deployment](media/concepts-blue-green-deployment-strategies/named-deployment-3.png)
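A hedged CLI sketch of this flow follows; it assumes the `az spring` extension is installed and uses placeholder service, app, and artifact names.

```bash
# Create a named deployment for the new version and deploy the artifact to it.
az spring app deployment create \
    --service demo-service \
    --resource-group demo-rg \
    --app demo-app \
    --name deployment-v6 \
    --artifact-path app-v6.jar

# After validating v6 on its private test endpoint, promote it to production.
az spring app set-deployment \
    --service demo-service \
    --resource-group demo-rg \
    --name demo-app \
    --deployment deployment-v6

# Once the fallback window has passed, delete the previous deployment.
az spring app deployment delete \
    --service demo-service \
    --resource-group demo-rg \
    --app demo-app \
    --name deployment-v5
```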
### Tradeoffs of the named deployments approach
spring-apps How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-application-insights.md
zone_pivot_groups: spring-apps-tier-selection
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+>
+> With Spring Boot Native Image applications, use the [Azure Monitor OpenTelemetry Distro / Application Insights in Spring Boot native image Java application](https://aka.ms/AzMonSpringNative) project instead of the Application Insights Java agent.
+ **This article applies to:** ✔️ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ❌️ Enterprise
spring-apps How To Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-cicd.md
To deploy using a pipeline, follow these steps:
Your pipeline settings should match the following image.
- :::image type="content" source="media/spring-cloud-how-to-cicd/pipeline-task-setting.jpg" alt-text="Screenshot of pipeline settings." lightbox="media/spring-cloud-how-to-cicd/pipeline-task-setting.jpg":::
+ :::image type="content" source="media/how-to-cicd/pipeline-task-setting.jpg" alt-text="Screenshot of pipeline settings." lightbox="media/how-to-cicd/pipeline-task-setting.jpg":::
You can also build and deploy your projects using following pipeline template. This example first defines a Maven task to build the application, followed by a second task that deploys the JAR file using the Azure Spring Apps task for Azure Pipelines.
The following steps show you how to enable a blue-green deployment from the **Re
1. Add a new pipeline, and select **Empty job** to create a job. 1. Under **Stages** select the line **1 job, 0 task**
- :::image type="content" source="media/spring-cloud-how-to-cicd/create-new-job.jpg" alt-text="Screenshot of where to select to add a task to a job.":::
+ :::image type="content" source="media/how-to-cicd/create-new-job.jpg" alt-text="Screenshot of where to select to add a task to a job." lightbox="media/how-to-cicd/create-new-job.jpg":::
1. Select the **+** to add a task to the job. 1. Search for the **Azure Spring Apps** template, then select **Add** to add the task to the job.
The following steps show you how to enable a blue-green deployment from the **Re
1. Navigate to the **Azure Spring Apps Deploy** task in **Stage 1**, then select the ellipsis next to **Package or folder**. 1. Select *spring-boot-complete-0.0.1-SNAPSHOT.jar* in the dialog, then select **OK**.
- :::image type="content" source="media/spring-cloud-how-to-cicd/change-artifact-path.jpg" alt-text="Screenshot of the 'Select a file or folder' dialog box.":::
+ :::image type="content" source="media/how-to-cicd/change-artifact-path.jpg" alt-text="Screenshot of the 'Select a file or folder' dialog box." lightbox="media/how-to-cicd/change-artifact-path.jpg":::
1. Select the **+** to add another **Azure Spring Apps** task to the job.
-2. Change the action to **Set Production Deployment**.
-3. Select **Save**, then **Create release** to automatically start the deployment.
+1. Change the action to **Set Production Deployment**.
+1. Select **Save**, then **Create release** to automatically start the deployment.
To verify your app's current release status, select **View release**. After this task is finished, visit the Azure portal to verify your app status.
spring-apps How To Configure Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-enterprise-spring-cloud-gateway.md
The following list shows the options available for Autoscale demand management:
On the Azure portal, choose how you want to scale. The following screenshot shows the **Custom autoscale** option and mode settings:
-#### [Azure CLI](#tab/Azure-CLI)
+#### [Azure CLI](#tab/Azure-CLI)
Use the following command to create an autoscale setting:
spring-apps How To Configure Health Probes Graceful Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-health-probes-graceful-termination.md
Use the following steps to customize your application using Azure CLI.
Use the following best practices when adding health probes to Azure Spring Apps: -- Use liveness and readiness probes together. Azure Spring Apps provides two approaches for service discovery at the same time. When the readiness probe fails, the app instance is removed only from Kubernetes service discovery. A properly configured liveness probe can remove the issued app instance from Eureka service discovery to avoid unexpected cases. For more information about service discovery, see [Discover and register your Spring Boot applications](how-to-service-registration.md).
+- Use liveness and readiness probes together. Azure Spring Apps provides two approaches for service discovery at the same time. When the readiness probe fails, the app instance is removed only from Kubernetes service discovery. A properly configured liveness probe can remove an app instance that has issues from Eureka service discovery to avoid unexpected behavior. For more information about service discovery, see [Discover and register your Spring Boot applications](how-to-service-registration.md). For more information about service discovery with the Enterprise plan, see [Use Tanzu Service Registry](how-to-enterprise-service-registry.md).
- When an app instance starts, the first check occurs after the delay specified by `initialDelaySeconds`. Subsequent checks occur periodically, according to the period specified by `periodSeconds`. If the app fails to respond to the checks the number of times specified by `failureThreshold`, the app instance is restarted. Make sure your application can start fast enough, or update these parameters, so that the total timeout `initialDelaySeconds + periodSeconds * failureThreshold` is longer than the start time of your application.
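As an illustration of that timing calculation, here's a minimal sketch using the Azure CLI. It assumes the probe flags documented for `az spring app update` (`--enable-liveness-probe`, `--liveness-probe-config`) and a hypothetical *liveness-probe.json* file that sets `initialDelaySeconds: 30`, `periodSeconds: 10`, and `failureThreshold: 3`; confirm the flag names against `az spring app update --help` before relying on them.

```azurecli
# With initialDelaySeconds=30, periodSeconds=10, and failureThreshold=3 in liveness-probe.json,
# a restart is triggered no sooner than 30 + 10 * 3 = 60 seconds, so the app must start within that window.
az spring app update \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --enable-liveness-probe true \
    --liveness-probe-config ./liveness-probe.json
```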
spring-apps How To Configure Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-palo-alto.md
crl.microsoft.com,
crl3.digicert.com ```
-Name the third file *AzureMonitorAddresses.csv*. This file should contain all addresses and IP ranges to be made available for metrics and monitoring via Azure Monitor, if you're using Azure monitor. The values in the following example are for demonstration purposes only. For up-to-date values, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md).
+Name the third file *AzureMonitorAddresses.csv*. If you're using Azure Monitor, this file should contain all addresses and IP ranges to be made available for metrics and monitoring via Azure Monitor. The values in the following example are for demonstration purposes only. For up-to-date values, see [IP addresses used by Azure Monitor](../azure-monitor/ip-addresses.md).
```CSV name,type,address,tag
spring-apps How To Custom Persistent Storage With Standard Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-custom-persistent-storage-with-standard-consumption.md
You can also mount your own persistent storage not only to Azure Spring Apps but
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. - An Azure Spring Apps Standard consumption and dedicated plan service instance. For more information, see [Quickstart: Provision an Azure Spring Apps Standard consumption and dedicated plan service instance](quickstart-provision-standard-consumption-service-instance.md).-- A Spring app deployed to Azure Spring Apps. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md).
+- A Spring app deployed to Azure Spring Apps.
## Set up the environment
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-with-custom-container-image.md
The following matrix shows what features are supported in each application type.
| Custom domain | ✔️ | ✔️ | | | Scaling - auto scaling | ✔️ | ✔️ | | | Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | |
-| Managed Identity | ✔️ | ✔️ | |
+| Managed identity | ✔️ | ✔️ | |
| Spring Cloud Eureka & Config Server | ✔️ | ❌ | | | API portal for VMware Tanzu | ✔️ | ✔️ | Enterprise plan only. | | Spring Cloud Gateway for VMware Tanzu | ✔️ | ✔️ | Enterprise plan only. |
spring-apps How To Enable Ingress To App Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-ingress-to-app-tls.md
This article describes secure communications in Azure Spring Apps. The article a
The following picture shows the overall secure communication support in Azure Spring Apps. ## Secure communication model within Azure Spring Apps This section explains the secure communication model shown in the overview diagram above. 1. The request from the client to the application in Azure Spring Apps comes into the ingress controller. The request can be either HTTP or HTTPS. The TLS certificate returned by the ingress controller is issued by the Microsoft Azure TLS issuing CA.
-
+ If the app has been mapped to an existing custom domain and is configured as HTTPS only, the request to the ingress controller can only be HTTPS. The TLS certificate returned by the ingress controller is the SSL binding certificate for that custom domain. The server side SSL/TLS verification for the custom domain is done in the ingress controller.
-2. The secure communication between the ingress controller and the applications in Azure Spring Apps are controlled by the ingress-to-app TLS. You can also control the communication through the portal or CLI, which will be explained later in this article. If ingress-to-app TLS is disabled, the communication between the ingress controller and the apps in Azure Spring Apps is HTTP. If ingress-to-app TLS is enabled, the communication will be HTTPS and has no relation to the communication between the clients and the ingress controller. The ingress controller won't verify the certificate returned from the apps because the ingress-to-app TLS encrypts the communication.
+1. The secure communication between the ingress controller and the applications in Azure Spring Apps is controlled by ingress-to-app TLS. You can also control this communication through the portal or CLI, as explained later in this article. If ingress-to-app TLS is disabled, the communication between the ingress controller and the apps in Azure Spring Apps is HTTP. If ingress-to-app TLS is enabled, the communication is HTTPS and has no relation to the communication between the clients and the ingress controller. The ingress controller doesn't verify the certificate returned from the apps because the ingress-to-app TLS encrypts the communication.
-3. Communication between the apps and the Azure Spring Apps services is always HTTPS and handled by Azure Spring Apps. Such services include config server, service registry, and Eureka server.
+1. Communication between the apps and the Azure Spring Apps services is always HTTPS and handled by Azure Spring Apps. Such services include config server, service registry, and Eureka server.
-4. You manage the communication between the applications. You can also take advantage of Azure Spring Apps features to load certificates into the application's trust store. For more information, see [Use TLS/SSL certificates in an application](./how-to-use-tls-certificate.md).
+1. You manage the communication between the applications. You can also take advantage of Azure Spring Apps features to load certificates into the application's trust store. For more information, see [Use TLS/SSL certificates in an application](./how-to-use-tls-certificate.md).
-5. You manage the communication between applications and external services. To reduce your development effort, Azure Spring Apps helps you manage your public certificates and loads them into your application's trust store. For more information, see [Use TLS/SSL certificates in an application](./how-to-use-tls-certificate.md).
+1. You manage the communication between applications and external services. To reduce your development effort, Azure Spring Apps helps you manage your public certificates and loads them into your application's trust store. For more information, see [Use TLS/SSL certificates in an application](./how-to-use-tls-certificate.md).
## Enable ingress-to-app TLS for an application
To enable ingress-to-app TLS in the [Azure portal](https://portal.azure.com/), f
3. Select **Ingress-to-app TLS**. 4. Switch **Ingress-to-app TLS** to *Yes*.
-![Screenshot showing where to enable Ingress-to-app TLS in portal.](./media/enable-end-to-end-tls/enable-i2a-tls.png)
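If you prefer to script this instead of using the portal, a hedged sketch of the equivalent command follows. The `--enable-ingress-to-app-tls` flag is an assumption based on the verification command shown in the next section, so confirm it with `az spring app update --help` before relying on it.

```azurecli
# Enable ingress-to-app TLS for an app (flag name assumed; confirm with az spring app update --help).
az spring app update \
    --resource-group resource_group_name \
    --service service_name \
    --name app_name \
    --enable-ingress-to-app-tls
```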
### Verify ingress-to-app TLS status
az spring app show -n app_name -s service_name -g resource_group_name
## Next steps
-* [Access Config Server and Service Registry](how-to-access-data-plane-azure-ad-rbac.md)
+[Access Config Server and Service Registry](how-to-access-data-plane-azure-ad-rbac.md)
spring-apps How To Enterprise Configure Apm Integration And Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-configure-apm-integration-and-ca-certificates.md
For information about using Web servers, see [Deploy web static files](how-to-en
Tanzu Build Service is enabled by default in Azure Spring Apps Enterprise. If you choose to disable the build service, you can deploy applications but only by using a custom container image. This section provides guidance for both build service enabled and disabled scenarios.
-#### Supported APM types
+### Supported APM types
This section lists the supported languages and required environment variables for the APMs that you can use for your integrations.
Use the following steps to show, add, edit, or delete an APM configuration:
:::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/add-apm.png" alt-text="Screenshot of the Azure portal showing the APM configuration page with the Add button highlighted." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/add-apm.png":::
-1. To view or edit an APM configuration, select the ellipsis (**...**) button for the configuration, then select **Edit APM**.
+1. To view or edit an APM configuration, select the ellipsis (**...**) button for the configuration, then select **Edit APM**.
:::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/show-apm.png" alt-text="Screenshot of the Azure portal showing the APM configuration page with the Edit APM option selected." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/show-apm.png":::
Use the following steps to view the APM configurations bound to the build:
1. Navigate to the **Build Service** page for your Azure Spring Apps instance. :::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/build-service-build.png" alt-text="Screenshot of the Azure portal showing the build service page with the current build in the list." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/build-service-build.png":::
-
+ 1. On the navigation pane, in the **Settings** section, select **APM bindings**.
-
+ 1. On the **APM bindings** page, view the APM configurations bound to the build. :::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/build-apm-bindings.png" alt-text="Screenshot of the APM bindings page showing the APM configurations bound to the build." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/build-apm-bindings.png":::
Use the following steps to view the APM configurations bound to the build:
Use the following steps to view the APM configurations bound to the deployment: 1. Navigate to your application page.
-
+ 1. On the navigation pane, in the **Settings** section, select **APM bindings**.
-
+ 1. On the **APM bindings** page, view the APM configurations bound to the deployment. :::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/deployment-apm-bindings.png" alt-text="Screenshot of the APM bindings page showing the APM configurations bound to the deployment." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/deployment-apm-bindings.png":::
The following list shows you the Azure CLI commands you can use to manage APM co
```azurecli az spring apm list \ --resource-group <resource-group-name> \
- --service <Azure-Spring-Apps-instance-name>
+ --service <Azure-Spring-Apps-instance-name>
``` - Use the following command to list all the supported APM types:
The following list shows you the Azure CLI commands you can use to manage APM co
```azurecli az spring apm list-enabled-globally \ --resource-group <resource-group-name> \
- --service <Azure-Spring-Apps-instance-name>
+ --service <Azure-Spring-Apps-instance-name>
``` - Use the following command to delete an APM configuration.
The following list shows you the Azure CLI commands you can use to manage APM co
--service <Azure-Spring-Apps-instance-name> \ --name <your-APM-name> \ ```
-
+ For more information on the `properties` and `secrets` parameters for your buildpack, see the [Supported Scenarios - APM and CA Certificates Integration](#supported-scenariosapm-and-ca-certificates-integration) section.
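For context, a hedged sketch of creating an APM configuration with the Azure CLI might look like the following. The exact `--properties` and `--secrets` key names depend on the APM type you choose; the keys and values shown here are placeholders, not values from this article.

```azurecli
# Create an Application Insights APM configuration (property and secret keys are illustrative).
az spring apm create \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <your-APM-name> \
    --type ApplicationInsights \
    --properties sampling-percentage=10 \
    --secrets connection-string=<your-connection-string>
```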
Use the following command to build an image and configure APM:
az spring build-service build <create|update> \ --resource-group <resource-group-name> \ --service <Azure-Spring-Apps-instance-name> \
- --name <app-name> \
+ --name <app-name> \
--builder <builder-name> \ --apms <APM-name> \ --artifact-path <path-to-your-JAR-file>
When you use your own container registry for the build service or disable the bu
az spring build-service build <create|update> \ --resource-group <resource-group-name> \ --service <Azure-Spring-Apps-instance-name> \
- --name <app-name> \
+ --name <app-name> \
--builder <builder-name> \ --certificates <CA certificate-name> \ --artifact-path <path-to-your-JAR-file>
Use the following steps to view the CA certificates bound to the build:
1. Navigate to your build page. :::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/build-service-build.png" alt-text="Screenshot of the Azure portal showing the build service page with the current build in the list." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/build-service-build.png":::
-
+ 1. On the navigation pane, in the **Settings** section, select **Certificate bindings**.
-
+ 1. On the **Certificate bindings** page, view the CA certificates bound to the build. :::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/build-certificate-bindings.png" alt-text="Screenshot of the certificate bindings page showing CA certificates bound to the build." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/build-certificate-bindings.png":::
spring-apps How To Enterprise Deploy App At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-app-at-scale.md
Spring Cloud Gateway supports rolling restarts to ensure zero downtime and disru
## Next steps -- [Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md) - [Scale an application in Azure Spring Apps](how-to-scale-manual.md)
spring-apps How To Enterprise Deploy Polyglot Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-polyglot-apps.md
The following table indicates the features supported for each language.
| Custom domain | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | Scaling - auto scaling | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | ✔️ | | Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Managed Identity | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ️ | ✔️ |
+| Managed identity | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ️ | ✔️ |
| API portal for VMware Tanzu | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | Spring Cloud Gateway for VMware Tanzu | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | | Application Configuration Service for VMware Tanzu | ✔️ | | | | | | ✔️ | |
If you want to build the native image into a smaller size container image, then
The following table lists the features supported in Azure Spring Apps:
-| Feature description | Comment | Environment variable | Usage |
-||--|--||
-| Integrate with Bellsoft OpenJDK. | Configures the JDK version. Currently supported: JDK 8, 11, 17, and 20. | `BP_JVM_VERSION` | `--build-env BP_JVM_VERSION=17` |
-| Configure arguments for the `native-image` command. | Arguments to pass directly to the native-image command. These arguments must be valid and correctly formed or the native-image command fails. | `BP_NATIVE_IMAGE_BUILD_ARGUMENTS` | `--build-env BP_NATIVE_IMAGE_BUILD_ARGUMENTS="--no-fallback"` |
-| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md#use-ca-certificates) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | Not applicable. | Not applicable. |
-| Enable configuration of labels on the created image | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
-| Support building Maven-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_MAVEN_BUILT_MODULE` | `--build-env BP_MAVEN_BUILT_MODULE=./gateway` |
+| Feature description | Comment | Environment variable | Usage |
+|||-||
+| Integrate with Bellsoft OpenJDK. | Configures the JDK version. Currently supported: JDK 8, 11, 17, and 20. | `BP_JVM_VERSION` | `--build-env BP_JVM_VERSION=17` |
+| Configure arguments for the `native-image` command. | Arguments to pass directly to the native-image command. These arguments must be valid and correctly formed or the native-image command fails. | `BP_NATIVE_IMAGE_BUILD_ARGUMENTS` | `--build-env BP_NATIVE_IMAGE_BUILD_ARGUMENTS="--no-fallback"` |
+| Add CA certificates to the system trust store at build and runtime. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | Not applicable. | Not applicable. |
+| Enable configuration of labels on the created image | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Support building Maven-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_MAVEN_BUILT_MODULE` | `--build-env BP_MAVEN_BUILT_MODULE=./gateway` |
There are some limitations for Java Native Image. For more information, see the [Java Native Image limitations](#java-native-image-limitations) section.
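To make the usage column concrete, the following sketch shows how these build environment variables might be passed on deployment with the Enterprise plan's `--build-env` option; the app name, builder name, and artifact path are placeholders.

```azurecli
# Deploy with JDK 17 and extra native-image arguments passed as build environment variables (values are illustrative).
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --builder <builder-name> \
    --build-env BP_JVM_VERSION=17 BP_NATIVE_IMAGE_BUILD_ARGUMENTS="--no-fallback" \
    --artifact-path <path-to-your-JAR-file>
```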
spring-apps How To Enterprise Deploy Static File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-static-file.md
This article shows you how to deploy your static files to an Azure Spring Apps E
## Prerequisites - An already provisioned Azure Spring Apps Enterprise plan instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md).-- One or more applications running in Azure Spring Apps. For more information on creating apps, see [How to Deploy Spring Boot applications from Azure CLI](./how-to-launch-from-source.md).
+- One or more applications running in Azure Spring Apps.
- [Azure CLI](/cli/azure/install-azure-cli), version 2.45.0 or higher. - Your static files or dynamic front-end application - for example, a React app.
spring-apps How To Enterprise Large Cpu Memory Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-large-cpu-memory-applications.md
This article shows how to deploy large CPU and memory applications in Azure Spri
## Prerequisites - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.-- An Azure Spring Apps service instance. For more information, see [Quickstart: Provision an Azure Spring Apps service instance](quickstart-provision-service-instance.md).
+- An Azure Spring Apps service instance.
- The [Azure CLI](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`. ## Create a large CPU and memory application
az spring app scale \
## Next steps -- [Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md) - [Scale an application in Azure Spring Apps](how-to-scale-manual.md)
spring-apps How To Integrate Azure Load Balancers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-integrate-azure-load-balancers.md
Add endpoints in traffic
1. Input fully qualified domain name (FQDN) of each Azure Spring Apps public endpoint. 1. Select **OK**.
- ![Traffic Manager 1](media/spring-cloud-load-balancers/traffic-manager-1.png)
- ![Traffic Manager 2](media/spring-cloud-load-balancers/traffic-manager-2.png)
+ :::image type="content" source="media/how-to-integrate-azure-load-balancers/traffic-manager-1.png" alt-text="Screenshot of the Azure portal that shows the Add endpoint page with an eastus FQDN with Priority 1." lightbox="media/how-to-integrate-azure-load-balancers/traffic-manager-1.png":::
+
+ :::image type="content" source="media/how-to-integrate-azure-load-balancers/traffic-manager-2.png" alt-text="Screenshot of the Azure portal that shows the Add endpoint page with a westus FQDN with Priority 2." lightbox="media/how-to-integrate-azure-load-balancers/traffic-manager-2.png":::
### Configure Custom Domain
To integrate with Azure Spring Apps service, complete the following configuratio
1. Specify **Target type** as *IP address* or *FQDN*. 1. Enter your Azure Spring Apps public endpoints.
- ![App Gateway 1](media/spring-cloud-load-balancers/app-gateway-1.png)
+ :::image type="content" source="media/how-to-integrate-azure-load-balancers/app-gateway-1.png" alt-text="Screenshot of the Azure portal that shows the Add backend pool page with the Backend targets values highlighted." lightbox="media/how-to-integrate-azure-load-balancers/app-gateway-1.png":::
### Add Custom Probe 1. Select **Health Probes** then **Add** to open custom **Probe** dialog.
-1. The key point is to select *No* for **Pick host name from backend HTTP settings** option and explicitly specify the host name. For more information, see [Application Gateway configuration for host name preservation](/azure/architecture/best-practices/host-name-preservation#application-gateway).
+1. The key point is to select **No** for **Pick host name from backend HTTP settings** option and explicitly specify the host name. For more information, see [Application Gateway configuration for host name preservation](/azure/architecture/best-practices/host-name-preservation#application-gateway).
- ![App Gateway 2](media/spring-cloud-load-balancers/app-gateway-2.png)
+ :::image type="content" source="media/how-to-integrate-azure-load-balancers/app-gateway-2.png" alt-text="Screenshot of the Azure portal that shows the probe page." lightbox="media/how-to-integrate-azure-load-balancers/app-gateway-2.png":::
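If you script the gateway instead of using the portal, a CLI equivalent of this probe could look like the following sketch; the probe name, gateway name, and the app's `.azuremicroservices.io` FQDN are placeholders you'd replace with your own values.

```azurecli
# Create a custom health probe that sends an explicit host header to the Azure Spring Apps endpoint.
az network application-gateway probe create \
    --resource-group <resource-group-name> \
    --gateway-name <application-gateway-name> \
    --name asa-probe \
    --protocol Https \
    --host <app-name>.azuremicroservices.io \
    --path /
```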
### Configure Backend Setting 1. Select **Backend settings** then **Add** to add a backend setting.
-1. **Override with new host name:** select *No*.
-1. **Use custom probe**: select *Yes* and pick the custom probe created above.
+1. **Override with new host name:** select **No**.
+1. **Use custom probe**: select **Yes** and pick the custom probe created above.
- ![App Gateway 3](media/spring-cloud-load-balancers/app-gateway-3.png)
+ :::image type="content" source="media/how-to-integrate-azure-load-balancers/app-gateway-3.png" alt-text="Screenshot of the Azure portal that shows the Add Backend setting page." lightbox="media/how-to-integrate-azure-load-balancers/app-gateway-3.png":::
## Integrate Azure Spring Apps with Azure Front Door
To integrate with Azure Spring Apps service and configure an origin group, use t
1. **Add origin group**. 1. Specify the backend endpoints by adding origins for the different Azure Spring Apps instances.
- ![Front Door 1](media/spring-cloud-load-balancers/front-door-1.png)
+ :::image type="content" source="media/how-to-integrate-azure-load-balancers/front-door-1.png" alt-text="Screenshot of the Azure portal that shows the Add an origin group page with the Add an origin button highlighted." lightbox="media/how-to-integrate-azure-load-balancers/front-door-1.png":::
1. Specify **origin type** as *Azure Spring Apps*. 1. Select your Azure Spring Apps instance for the **host name**. 1. Keep the **origin host header** empty, so that the incoming host header will be used towards the backend. For more information, see [Azure Front Door configuration for host name preservation](/azure/architecture/best-practices/host-name-preservation#azure-front-door).
- ![Front Door 2](media/spring-cloud-load-balancers/front-door-2.png)
+ :::image type="content" source="media/how-to-integrate-azure-load-balancers/front-door-2.png" alt-text="Screenshot of the Azure portal that shows the Add an origin page." lightbox="media/how-to-integrate-azure-load-balancers/front-door-2.png":::
## Next steps
spring-apps How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-permissions.md
The Developer role includes permissions to restart apps and see their log stream
### [Portal](#tab/Azure-portal) 1. In the Azure portal, open the subscription where you want to assign the custom role.
-2. Open **Access control (IAM)**.
-3. Select **Add**.
-4. Select **Add custom role**.
-5. Select **Next**:
- ![Screenshot that shows the Basics tab of the Create a custom role window.](media/spring-cloud-permissions/create-custom-role.png)
+1. Open **Access control (IAM)**.
-6. Select **Add permissions**:
+1. Select **Add**.
- ![Screenshot that shows the Add permissions button.](media/spring-cloud-permissions/add-permissions.png)
+1. Select **Add custom role**.
-7. In the search box, search for **Microsoft.app**. Select **Microsoft Azure Spring Apps**:
+1. Select **Next**:
- ![Screenshot that shows the results of searching for Microsoft.app.](media/spring-cloud-permissions/spring-cloud-permissions.png)
+ :::image type="content" source="media/how-to-permissions/create-custom-role.png" alt-text="Screenshot that shows the Basics tab of the Create a custom role window." lightbox="media/how-to-permissions/create-custom-role.png":::
-8. Select the permissions for the Developer role.
+1. Select **Add permissions**:
+
+ :::image type="content" source="media/how-to-permissions/add-permissions.png" alt-text="Screenshot that shows the Add permissions button." lightbox="media/how-to-permissions/add-permissions.png":::
+
+1. In the search box, search for **Microsoft.app**. Select **Microsoft Azure Spring Apps**:
+
+ :::image type="content" source="media/how-to-permissions/spring-cloud-permissions.png" alt-text="Screenshot that shows the results of searching for Microsoft.app." lightbox="media/how-to-permissions/spring-cloud-permissions.png":::
+
+1. Select the permissions for the Developer role.
Under **Microsoft.AppPlatform/Spring**, select:
The Developer role includes permissions to restart apps and see their log stream
* **Read : Read operation status**
- [![Screenshot of Azure portal that shows the selections for Developer permissions.](media/spring-cloud-permissions/developer-permissions-box.png)](media/spring-cloud-permissions/developer-permissions-box.png#lightbox)
+ :::image type="content" source="media/how-to-permissions/developer-permissions-box.png" alt-text="Screenshot of Azure portal that shows the selections for Developer permissions." lightbox="media/how-to-permissions/developer-permissions-box.png":::
-9. Select **Add**.
+1. Select **Add**.
-10. Review the permissions.
+1. Review the permissions.
-11. Select **Review and create**.
+1. Select **Review and create**.
### [JSON](#tab/JSON) 1. In the Azure portal, open the subscription where you want to assign the custom role.
-2. Open **Access control (IAM)**.
-3. Select **Add**.
-4. Select **Add custom role**.
-5. Select **Next**.
-6. Select the **JSON** tab.
+1. Open **Access control (IAM)**.
+
+1. Select **Add**.
+
+1. Select **Add custom role**.
+
+1. Select **Next**.
-7. Select **Edit**, and then delete the default text:
+1. Select the **JSON** tab.
- ![Screenshot that shows the default JSON text.](media/spring-cloud-permissions/create-custom-role-edit-json.png)
+1. Select **Edit**, and then delete the default text:
-8. Paste in the following JSON to define the Developer role:
+ :::image type="content" source="media/how-to-permissions/create-custom-role-edit-json.png" alt-text="Screenshot that shows the default JSON text." lightbox="media/how-to-permissions/create-custom-role-edit-json.png":::
+
+1. Paste in the following JSON to define the Developer role:
* Basic/Standard plan
The Developer role includes permissions to restart apps and see their log stream
} ```
- ![Screenshot that shows the JSON for the Developer role.](media/spring-cloud-permissions/create-custom-role-json.png)
+ :::image type="content" source="media/how-to-permissions/create-custom-role-json.png" alt-text="Screenshot that shows the JSON for the Developer role." lightbox="media/how-to-permissions/create-custom-role-json.png":::
-9. Select **Save**.
+1. Select **Save**.
-10. Review the permissions.
+1. Review the permissions.
-11. Select **Review and create**.
+1. Select **Review and create**.
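If you'd rather script the role creation than paste JSON into the portal, one possible approach is to save the definition above to a local file (the *developer-role.json* name here is hypothetical) and create it with the Azure CLI.

```azurecli
# Create the custom role from a local JSON definition file (the file name is illustrative).
az role definition create --role-definition developer-role.json
```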
This procedure defines a role that has permissions to deploy, test, and restart
1. Repeat steps 1 through 4 in the procedure for adding the Developer role.
-2. Select the permissions for the DevOps Engineer role:
+1. Select the permissions for the DevOps Engineer role:
Under **Microsoft.AppPlatform/Spring**, select:
This procedure defines a role that has permissions to deploy, test, and restart
* **Read : List available skus**
- [![Screenshot of Azure portal that shows the selections for DevOps permissions.](media/spring-cloud-permissions/dev-ops-permissions.png)](media/spring-cloud-permissions/dev-ops-permissions.png#lightbox)
+ :::image type="content" source="media/how-to-permissions/dev-ops-permissions.png" alt-text="Screenshot of Azure portal that shows the selections for DevOps permissions." lightbox="media/how-to-permissions/dev-ops-permissions.png":::
-3. Select **Add**.
+1. Select **Add**.
-4. Review the permissions.
+1. Review the permissions.
-5. Select **Review and create**.
+1. Select **Review and create**.
### [JSON](#tab/JSON) 1. Repeat steps 1 through 4 from the procedure for adding the Developer role.
-2. Select **Next**.
-3. Select the **JSON** tab.
+1. Select **Next**.
+
+1. Select the **JSON** tab.
-4. Select **Edit**, and then delete the default text:
+1. Select **Edit**, and then delete the default text:
- ![Screenshot that shows the default JSON text.](media/spring-cloud-permissions/create-custom-role-edit-json.png)
+ :::image type="content" source="media/how-to-permissions/create-custom-role-edit-json.png" alt-text="Screenshot that shows the default JSON text." lightbox="media/how-to-permissions/create-custom-role-edit-json.png":::
-5. Paste in the following JSON to define the DevOps Engineer role:
+1. Paste in the following JSON to define the DevOps Engineer role:
* Basic/Standard plan
This procedure defines a role that has permissions to deploy, test, and restart
} ```
-6. Review the permissions.
+1. Review the permissions.
-7. Select **Review and create**.
+1. Select **Review and create**.
This procedure defines a role that has permissions to deploy, test, and restart
### [Portal](#tab/Azure-portal) 1. Repeat steps 1 through 4 from the procedure for adding the Developer role.
-2. Select the permissions for the Ops - Site Reliability Engineering role:
+
+1. Select the permissions for the Ops - Site Reliability Engineering role:
Under **Microsoft.AppPlatform/Spring**, select:
This procedure defines a role that has permissions to deploy, test, and restart
* **Read : Read operation status**
- [![Screenshot of Azure portal that shows the selections for Ops - Site Reliability Engineering permissions.](media/spring-cloud-permissions/ops-sre-permissions.png)](media/spring-cloud-permissions/ops-sre-permissions.png#lightbox)
+ :::image type="content" source="media/how-to-permissions/ops-sre-permissions.png" alt-text="Screenshot of Azure portal that shows the selections for Ops - Site Reliability Engineering permissions." lightbox="media/how-to-permissions/ops-sre-permissions.png":::
-3. Select **Add**.
+1. Select **Add**.
-4. Review the permissions.
+1. Review the permissions.
-5. Select **Review and create**.
+1. Select **Review and create**.
### [JSON](#tab/JSON) 1. Repeat steps 1 through 4 from the procedure for adding the Developer role.
-2. Select **Next**.
-3. Select the **JSON** tab.
+1. Select **Next**.
+
+1. Select the **JSON** tab.
-4. Select **Edit**, and then delete the default text:
+1. Select **Edit**, and then delete the default text:
- ![Screenshot that shows the default JSON text.](media/spring-cloud-permissions/create-custom-role-edit-json.png)
+ :::image type="content" source="media/how-to-permissions/create-custom-role-edit-json.png" alt-text="Screenshot that shows the default JSON text." lightbox="media/how-to-permissions/create-custom-role-edit-json.png":::
-5. Paste in the following JSON to define the Ops - Site Reliability Engineering role:
+1. Paste in the following JSON to define the Ops - Site Reliability Engineering role:
* Enterprise/Basic/Standard plan
This procedure defines a role that has permissions to deploy, test, and restart
} ```
-6. Review the permissions.
+1. Review the permissions.
-7. Select **Review and create**.
+1. Select **Review and create**.
This role can create and configure everything in Azure Spring Apps and apps with
### [Portal](#tab/Azure-portal) 1. Repeat steps 1 through 4 from the procedure for adding the Developer role.
-2. Open the **Permissions** options.
-3. Select the permissions for the Azure Pipelines / Jenkins / GitHub Actions role:
+1. Open the **Permissions** options.
+
+1. Select the permissions for the Azure Pipelines / Jenkins / GitHub Actions role:
Under **Microsoft.AppPlatform/Spring**, select:
This role can create and configure everything in Azure Spring Apps and apps with
* **Other : Disable Azure Spring Apps service instance test endpoint** * **Other : List Azure Spring Apps service instance test keys** * **Other : Regenerate Azure Spring Apps service instance test key**
-
+ (For Enterprise plan only) Under **Microsoft.AppPlatform/Spring/buildServices**, select: * **Read : Read Microsoft Azure Spring Apps Build Services**
This role can create and configure everything in Azure Spring Apps and apps with
* **Read : List available skus**
- [![Screenshot of Azure portal that shows the selections for Azure Pipelines / Jenkins / GitHub Actions permissions.](media/spring-cloud-permissions/pipelines-permissions-box.png)](media/spring-cloud-permissions/pipelines-permissions-box.png#lightbox)
+ :::image type="content" source="media/how-to-permissions/pipelines-permissions-box.png" alt-text="Screenshot of Azure portal that shows the selections for Azure Pipelines / Jenkins / GitHub Actions permissions." lightbox="media/how-to-permissions/pipelines-permissions-box.png":::
-4. Select **Add**.
+1. Select **Add**.
-5. Review the permissions.
+1. Review the permissions.
-6. Select **Review and create**.
+1. Select **Review and create**.
### [JSON](#tab/JSON) 1. Repeat steps 1 through 4 from the procedure for adding the Developer role.
-2. Select **Next**.
+1. Select **Next**.
-3. Select the **JSON** tab.
+1. Select the **JSON** tab.
-4. Select **Edit**, and then delete the default text:
+1. Select **Edit**, and then delete the default text:
- ![Screenshot that shows the default JSON text.](media/spring-cloud-permissions/create-custom-role-edit-json.png)
+ :::image type="content" source="media/how-to-permissions/create-custom-role-edit-json.png" alt-text="Screenshot that shows the default JSON text." lightbox="media/how-to-permissions/create-custom-role-edit-json.png":::
-5. Paste in the following JSON to define the Azure Pipelines / Jenkins / GitHub Actions role:
+1. Paste in the following JSON to define the Azure Pipelines / Jenkins / GitHub Actions role:
* Basic/Standard plan
This role can create and configure everything in Azure Spring Apps and apps with
} ```
-6. Select **Add**.
+1. Select **Add**.
-7. Review the permissions.
+1. Review the permissions.
spring-apps How To Setup Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-setup-autoscale.md
There are two options for Autoscale demand management:
In the Azure portal, choose how you want to scale. The following figure shows the **Custom autoscale** option and mode settings. ## Set up Autoscale settings for your application in Azure CLI
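As a rough sketch of what the CLI setup can look like (the setting name, resource ID, and the metric in the rule condition are placeholders, not values from this article):

```azurecli
# Create an autoscale setting scoped to the app's deployment (IDs and names are illustrative).
az monitor autoscale create \
    --resource-group <resource-group-name> \
    --name <autoscale-setting-name> \
    --resource <resource-id-of-the-app-deployment> \
    --min-count 1 \
    --max-count 5 \
    --count 1

# Add a scale-out rule; replace <metric-name> with a metric supported by your deployment.
az monitor autoscale rule create \
    --resource-group <resource-group-name> \
    --autoscale-name <autoscale-setting-name> \
    --scale out 1 \
    --condition "<metric-name> > 70 avg 5m"
```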
spring-apps How To Use Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-tls-certificate.md
You need to grant Azure Spring Apps access to your key vault before you import y
1. In the left navigation pane, select **Access policies**, then select **Create**. 1. Select **Certificate permissions**, then select **Get** and **List**.
- :::image type="content" source="media/use-tls-certificates/grant-key-vault-permission.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Permission pane showing and Get and List permissions highlighted." lightbox="media/use-tls-certificates/grant-key-vault-permission.png":::
+ :::image type="content" source="media/how-to-use-tls-certificates/grant-key-vault-permission.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Permission pane showing and Get and List permissions highlighted." lightbox="media/how-to-use-tls-certificates/grant-key-vault-permission.png":::
1. Under **Principal**, select your **Azure Spring Cloud Resource Provider**.
- :::image type="content" source="media/use-tls-certificates/select-service-principal.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Principal pane showing and Azure Spring Apps Resource Provider highlighted." lightbox="media/use-tls-certificates/select-service-principal.png":::
+ :::image type="content" source="media/how-to-use-tls-certificates/select-service-principal.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Principal pane showing and Azure Spring Apps Resource Provider highlighted." lightbox="media/how-to-use-tls-certificates/select-service-principal.png":::
1. Select **Review + Create**, then select **Create**.
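A scripted alternative to these portal steps might look like the following sketch. The object ID of the Azure Spring Cloud Resource Provider service principal isn't shown in this article, so look it up in your tenant before running the command.

```azurecli
# Grant the Azure Spring Cloud Resource Provider get and list access to certificates in your key vault.
az keyvault set-policy \
    --name <key-vault-name> \
    --object-id <object-id-of-the-Azure-Spring-Cloud-Resource-Provider> \
    --certificate-permissions get list
```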
To load a certificate into your application in Azure Spring Apps, start with the
1. From the left navigation pane of your app, select **Certificate management**. 1. Select **Add certificate** to choose certificates accessible for the app. ### Load a certificate from code
spring-apps Quickstart Analyze Logs And Metrics Standard Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-analyze-logs-and-metrics-standard-consumption.md
This article shows you how to analyze logs and metrics in the Azure Spring Apps
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - An Azure Spring Apps Standard consumption and dedicated plan service instance. For more information, see [Quickstart: Provision an Azure Spring Apps Standard consumption and dedicated plan service instance](quickstart-provision-standard-consumption-service-instance.md).-- A Spring app deployed to Azure Spring Apps. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md).
+- A Spring app deployed to Azure Spring Apps.
## Analyze logs
spring-apps Quickstart Apps Autoscale Standard Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-apps-autoscale-standard-consumption.md
For more information, see [Azure Container Apps documentation](../container-apps
- An Azure subscription. If you don't have an Azure subscription, see [Azure free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - An Azure Spring Apps Standard consumption and dedicated plan service instance. For more information, see [Quickstart: Provision an Azure Spring Apps Standard consumption and dedicated plan service instance](quickstart-provision-standard-consumption-service-instance.md).-- A Spring app deployed to Azure Spring Apps. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md).
+- A Spring app deployed to Azure Spring Apps.
## Scale definition
Use the following steps to define autoscale settings and rules.
1. Set up the instance limits of your deployment. 1. Select **Add** to add your scale rules. ### [Azure CLI](#tab/azure-cli)
spring-apps Quickstart Deploy Event Driven App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-event-driven-app.md
Use the following steps to confirm that the event-driven app works correctly. Yo
> [Structured application log for Azure Spring Apps](./structured-app-log.md) > [!div class="nextstepaction"]
-> [Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md)
+> [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md)
> [!div class="nextstepaction"] > [Set up Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
spring-apps Quickstart Deploy Java Native Image App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-java-native-image-app.md
az group delete --name ${RESOURCE_GROUP}
> [Structured application log for Azure Spring Apps](./structured-app-log.md) > [!div class="nextstepaction"]
-> [Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md)
+> [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md)
> [!div class="nextstepaction"] > [Set up Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
az group delete --name ${RESOURCE_GROUP}
> [!div class="nextstepaction"] > [Run the polyglot ACME fitness store apps on Azure Spring Apps](./quickstart-sample-app-acme-fitness-store-introduction.md)
+> [!div class="nextstepaction"]
+> [Monitor your Spring Boot Native Image application](https://aka.ms/AzMonSpringNative)
+ For more information, see the following articles: - [Azure Spring Apps Samples](https://github.com/Azure-Samples/azure-spring-apps-samples).
spring-apps Quickstart Deploy Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-web-app.md
az group delete --name ${RESOURCE_GROUP}
> [Structured application log for Azure Spring Apps](./structured-app-log.md) > [!div class="nextstepaction"]
-> [Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md)
+> [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md)
> [!div class="nextstepaction"] > [Set up Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md)
spring-apps Quickstart Integrate Azure Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-mysql.md
Use [Service Connector](../service-connector/overview.md) to connect the app hos
1. Under **Settings**, select **Apps**, and then select the `customers-service` application from the list. 1. Select **Service Connector** from the left table of contents and select **Create**.
- :::image type="content" source="./media\quickstart-integrate-azure-database-mysql\create-service-connection.png" alt-text="Screenshot of the Azure portal, in the Azure Spring Apps instance, create a connection with Service Connector.":::
+ :::image type="content" source="./media/quickstart-integrate-azure-database-mysql/create-service-connection.png" alt-text="Screenshot of the Azure portal, in the Azure Spring Apps instance, create a connection with Service Connector.":::
1. Select or enter the following settings in the table.
Use [Service Connector](../service-connector/overview.md) to connect the app hos
| **MySQL database** | *petclinic* | Select the database you created earlier. | | **Client type** | *SpringBoot* | Select the application stack that works with the target service you selected. |
- :::image type="content" source="./media\quickstart-integrate-azure-database-mysql\basics-tab.png" alt-text="Screenshot of the Azure portal, filling out the basics tab in Service Connector.":::
+ :::image type="content" source="./media/quickstart-integrate-azure-database-mysql/basics-tab.png" alt-text="Screenshot of the Azure portal, filling out the basics tab in Service Connector.":::
1. Select **Next: Authentication** to select the authentication type. Then select **Connection string > Database credentials** and enter your database username and password.
Username and password validated. success
Azure Spring Apps connections are displayed under **Settings > Service Connector**. Select **Validate** to check your connection status, and select **Learn more** to review the connection validation details.
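If you prefer the CLI to the portal for this connection, a hedged sketch with Service Connector's `az spring connection create` command follows; the resource names are placeholders and the secret values are your own database credentials.

```azurecli
# Connect the customers-service app to the petclinic database by using database credentials (names are illustrative).
az spring connection create mysql-flexible \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --app customers-service \
    --target-resource-group <mysql-resource-group-name> \
    --server <mysql-server-name> \
    --database petclinic \
    --client-type springboot \
    --secret name=<username> secret=<password>
```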
spring-apps Quickstart Logs Metrics Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-logs-metrics-tracing.md
az config set defaults.group=
To explore more monitoring capabilities of Azure Spring Apps, see: > [!div class="nextstepaction"]
->
> [Analyze logs and metrics with diagnostics settings](diagnostic-services.md)
->
> [Stream Azure Spring Apps app logs in real-time](./how-to-log-streaming.md)
spring-apps Quickstart Provision Standard Consumption App Environment With Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-provision-standard-consumption-app-environment-with-virtual-network.md
Use the following steps to create an Azure Spring Apps instance in an Azure Cont
1. In the search box, search for *Azure Spring Apps*, and then select **Azure Spring Apps** in the results.
- :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/azure-spring-apps-start.png" alt-text="Screenshot of the Azure portal showing Azure Spring Apps in search results, with Azure Spring Apps highlighted in the search bar and in the results." lightbox="media/quickstart-provision-app-environment-with-virtual-network/azure-spring-apps-start.png":::
+ :::image type="content" source="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/azure-spring-apps-start.png" alt-text="Screenshot of the Azure portal showing Azure Spring Apps in search results, with Azure Spring Apps highlighted in the search bar and in the results." lightbox="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/azure-spring-apps-start.png":::
1. On the Azure Spring Apps page, select **Create**.
- :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/azure-spring-apps-create.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps page with the Create button highlighted." lightbox="media/quickstart-provision-app-environment-with-virtual-network/azure-spring-apps-create.png":::
+ :::image type="content" source="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/azure-spring-apps-create.png" alt-text="Screenshot of the Azure portal showing the Azure Spring Apps page with the Create button highlighted." lightbox="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/azure-spring-apps-create.png":::
1. Fill out the **Basics** form on the Azure Spring Apps **Create** page using the following guidelines:
Use the following steps to create an Azure Spring Apps instance in an Azure Cont
- Select **Create new** to create a new Azure Container Apps environment or select an existing environment from the dropdown menu.
- :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/select-azure-container-apps-environment.png" alt-text="Screenshot of the Azure portal showing the Create Container Apps environment page with Consumption and Dedicated workload profiles selected for the plan." lightbox="media/quickstart-provision-app-environment-with-virtual-network/select-azure-container-apps-environment.png":::
+ :::image type="content" source="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/select-azure-container-apps-environment.png" alt-text="Screenshot of the Azure portal showing the Create Container Apps environment page with Consumption and Dedicated workload profiles selected for the plan." lightbox="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/select-azure-container-apps-environment.png":::
1. Fill out the **Basics** form on the **Create Container Apps environment** page. Use the default value `asa-standard-consumption-app-env` for the **Environment name** and choose **Consumption and Dedicated workload profiles** for the **Plan**.
- :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/create-azure-container-apps-environment.png" alt-text="Screenshot of the Azure portal showing the Create Container Apps environment page with the Basics tab selected." lightbox="media/quickstart-provision-app-environment-with-virtual-network/create-azure-container-apps-environment.png":::
+ :::image type="content" source="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/create-azure-container-apps-environment.png" alt-text="Screenshot of the Azure portal showing the Create Container Apps environment page with the Basics tab selected." lightbox="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/create-azure-container-apps-environment.png":::
1. At this point, you've created an Azure Container Apps environment with a default standard consumption workload profile. If you wish to add a dedicated workload profile to the same Azure Container Apps environment, you can select the **Workload profiles** tab and then select **Add workload profile**.
- :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/create-workload-profiles.png" alt-text="Screenshot of the Azure portal showing the Create Workload Profiles tab." lightbox="media/quickstart-provision-app-environment-with-virtual-network/create-workload-profiles.png":::
+ :::image type="content" source="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/create-workload-profiles.png" alt-text="Screenshot of the Azure portal showing the Create Workload Profiles tab." lightbox="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/create-workload-profiles.png":::
1. Select **Networking** and then specify the settings using the following guidelines:
Use the following steps to create an Azure Spring Apps instance in an Azure Cont
- Select the names for **Virtual network** and for **Infrastructure subnet** from the dropdown menus or use **Create new** as needed. - Set **Virtual IP** to **External**. You can set the value to **Internal** if you prefer to use only internal IP addresses available in the virtual network instead of a public static IP.
- :::image type="content" source="media/quickstart-provision-app-environment-with-virtual-network/create-azure-container-apps-environment-virtual-network.png" alt-text="Screenshot of the Azure portal showing the Create Container Apps environment page with the Networking tab selected." lightbox="media/quickstart-provision-app-environment-with-virtual-network/create-azure-container-apps-environment-virtual-network.png":::
+ :::image type="content" source="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/create-azure-container-apps-environment-virtual-network.png" alt-text="Screenshot of the Azure portal showing the Create Container Apps environment page with the Networking tab selected." lightbox="media/quickstart-provision-standard-consumption-app-environment-with-virtual-network/create-azure-container-apps-environment-virtual-network.png":::
>[!NOTE] > The subnet associated with an Azure Container Apps environment requires a CIDR prefix of `/23` or higher.
spring-apps Quickstart Sample App Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-sample-app-introduction.md
To follow the Azure Spring Apps deployment examples, you only need the location
The following diagram shows the architecture of the PetClinic application.
-![Architecture of PetClinic](media/build-and-deploy/microservices-architecture-diagram.jpg)
> [!NOTE] > When the application is hosted in Azure Spring Apps Enterprise plan, the managed Application Configuration Service for VMware Tanzu assumes the role of Spring Cloud Config Server and the managed VMware Tanzu Service Registry assumes the role of Eureka Service Discovery without any code changes to the application. For more information, see the [Infrastructure services hosted by Azure Spring Apps](#infrastructure-services-hosted-by-azure-spring-apps) section later in this article.
spring-apps Quickstart Standard Consumption Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-standard-consumption-custom-domain.md
The mapping secures the custom domain with a certificate and enforces Transport
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Azure CLI](/cli/azure/install-azure-cli) - An Azure Spring Apps Standard consumption and dedicated plan service instance. For more information, see [Quickstart: Provision an Azure Spring Apps Standard consumption and dedicated plan service instance](quickstart-provision-standard-consumption-service-instance.md).-- A Spring app deployed to Azure Spring Apps. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md).
+- A Spring app deployed to Azure Spring Apps.
- A domain name registered in the DNS registry as provided by a web hosting or domain provider. - A certificate resource created under an Azure Container Apps environment. For more information, see [Add certificate in Container App](../container-apps/custom-domains-certificates.md).
spring-apps Structured App Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/structured-app-log.md
Use the following procedure:
4. Application logs return as shown in the following image:
- ![Json Log show](media/spring-cloud-structured-app-log/json-log-query.png)
+ :::image type="content" source="media/structured-app-log/json-log-query.png" alt-text="Screenshot of the Azure portal showing the log Results pane." lightbox="media/structured-app-log/json-log-query.png":::
### Show log entries containing errors
spring-apps Tools To Troubleshoot Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tools-to-troubleshoot-memory-issues.md
In the Azure portal, you can find **Memory Usage** under **Diagnose and solve pr
### Metrics
-The following sections describe metrics that cover issues including high memory usage, heap memory that's too large, and abnormal garbage collection abnormal (too frequent or not frequent enough). For more information, see [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](quickstart-logs-metrics-tracing.md?tabs=Azure-CLI&pivots=programming-language-java).
+The following sections describe metrics that cover issues including high memory usage, heap memory that's too large, and abnormal garbage collection (too frequent or not frequent enough). For more information, see [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](quickstart-logs-metrics-tracing.md?pivots=programming-language-java).
#### App memory usage
spring-apps Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot.md
Before you onboard your application, ensure that it meets the following criteria
* The application can run locally with the specified Java runtime version. * The environment config (CPU/RAM/Instances) meets the minimum requirement set by the application provider.
-* The configuration items have their expected values. For more information, see [Set up a Spring Cloud Config Server instance for your service](./how-to-config-server.md). For Enterpriseplan, see [Use Application Configuration Service](./how-to-enterprise-application-configuration-service.md).
+* The configuration items have their expected values. For more information, see [Set up a Spring Cloud Config Server instance for your service](./how-to-config-server.md). For the Enterprise plan, see [Use Application Configuration Service](./how-to-enterprise-application-configuration-service.md).
* The environment variables have their expected values. * The JVM parameters have their expected values. * We recommend that you disable or remove the embedded *Config Server* and *Spring Service Registry* services from the application package.
spring-apps Tutorial Circuit Breaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-circuit-breaker.md
Verify using public endpoints or private test endpoints.
Access hystrix-turbine with the path `https://<SERVICE-NAME>-hystrix-turbine.azuremicroservices.io/hystrix` from your browser. The following figure shows the Hystrix dashboard running in this app. Copy the Turbine stream url `https://<SERVICE-NAME>-hystrix-turbine.azuremicroservices.io/turbine.stream?cluster=default` into the text box, and select **Monitor Stream**. This action displays the dashboard. If nothing shows in the viewer, hit the `user-service` endpoints to generate streams. > [!NOTE] > In production, the Hystrix dashboard and metrics stream should not be exposed to the Internet.
Copy the Turbine stream url `https://<SERVICE-NAME>-hystrix-turbine.azuremicrose
Hystrix metrics streams are also accessible from `test-endpoint`. As a backend service, we didn't assign a public endpoint for `recommendation-service`, but we can show its metrics with the test endpoint at `https://primary:<KEY>@<SERVICE-NAME>.test.azuremicroservices.io/recommendation-service/default/actuator/hystrix.stream`. As a web app, the Hystrix dashboard should work on `test-endpoint`. If it isn't working properly, there may be two reasons: first, using `test-endpoint` changed the base URL from `/` to `/<APP-NAME>/<DEPLOYMENT-NAME>`, or, second, the web app uses an absolute path for static resources. To get it working on `test-endpoint`, you might need to manually edit the `<base>` element in the front-end files.
spring-apps Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md
Azure Firewall provides the FQDN tag `AzureKubernetesService` to simplify the fo
## Azure Spring Apps optional FQDN for Application Insights
-You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK or the Application Insights Agent to send data to the portal. For more information, see the [outgoing ports](../azure-monitor/app/ip-addresses.md#outgoing-ports) section of [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md).
+You need to open some outgoing ports in your server's firewall to allow the Application Insights SDK or the Application Insights Agent to send data to the portal. For more information, see the [outgoing ports](../azure-monitor/ip-addresses.md#outgoing-ports) section of [IP addresses used by Azure Monitor](../azure-monitor/ip-addresses.md).
## Next steps
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
Previously updated : 11/15/2023 Last updated : 01/26/2024
The following table summarizes the available attributes by source:
| | [Subnet](#subnet) | The subnet over which an object is accessed | | | [UTC now](#utc-now) | The current date and time in Coordinated Universal Time | | **Request** | | |
-| | [Blob index tags [Keys]](#blob-index-tags-keys) | Index tags on a blob resource (keys) |
-| | [Blob index tags [Values in key]](#blob-index-tags-values-in-key) | Index tags on a blob resource (values in key) |
+| | [Blob index tags [Keys]](#blob-index-tags-keys) | Index tags on a blob resource (keys); available only for storage accounts where hierarchical namespace is not enabled |
+| | [Blob index tags [Values in key]](#blob-index-tags-values-in-key) | Index tags on a blob resource (values in key); available only for storage accounts where hierarchical namespace is not enabled |
| | [Blob prefix](#blob-prefix) | Allowed prefix of blobs to be listed |
-| | [Snapshot](#snapshot) | The Snapshot identifier for the Blob snapshot |
-| | [Version ID](#version-id) | The version ID of the versioned Blob |
+| | [Snapshot](#snapshot) | The Snapshot identifier for the Blob snapshot |
+| | [Version ID](#version-id) | The version ID of the versioned blob; available only for storage accounts where hierarchical namespace is not enabled |
| **Resource** | | | | | [Account name](#account-name) | The storage account name | | | [Blob index tags [Keys]](#blob-index-tags-keys) | Index tags on a blob resource (keys) |
The following table summarizes the available attributes by source:
> | Property | Value | > | | | > | **Display name** | Blob index tags [Keys] |
-> | **Description** | Index tags on a blob resource.<br/>Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check the key in blob index tags. |
+> | **Description** | Index tags on a blob resource.<br/>Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check the key in blob index tags.<br/>*Available only for storage accounts where hierarchical namespace is not enabled.* |
> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&` | > | **Attribute source** | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes)<br/>[Request](../../role-based-access-control/conditions-format.md#request-attributes) | > | **Attribute type** | [StringList](../../role-based-access-control/conditions-format.md#cross-product-comparison-operators) |
The following table summarizes the available attributes by source:
> | Property | Value | > | | | > | **Display name** | Blob index tags [Values in key] |
-> | **Description** | Index tags on a blob resource.<br/>Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check both the key (case-sensitive) and value in blob index tags. |
+> | **Description** | Index tags on a blob resource.<br/>Arbitrary user-defined key-value properties that you can store alongside a blob resource. Use when you want to check both the key (case-sensitive) and value in blob index tags.<br/>*Available only for storage accounts where hierarchical namespace is not enabled.* |
> | **Attribute** | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags` | > | **Attribute source** | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes)<br/>[Request](../../role-based-access-control/conditions-format.md#request-attributes) | > | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) |
The following table summarizes the available attributes by source:
> | Property | Value | > | | | > | **Display name** | Encryption scope name |
-> | **Description** | Name of the encryption scope used to encrypt data.<br/>*Available only for storage accounts where hierarchical namespace is not enabled.* |
+> | **Description** | Name of the encryption scope used to encrypt data. |
> | **Attribute** | `Microsoft.Storage/storageAccounts/encryptionScopes:name` | > | **Attribute source** | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) | > | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) |
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md
Previously updated : 11/15/2023 Last updated : 01/26/2024
The [Azure role assignment condition format](../../role-based-access-control/con
## Status of condition features in Azure Storage
-Currently, Azure attribute-based access control (Azure ABAC) is generally available (GA) for controlling access only to Azure Blob Storage, Azure Data Lake Storage Gen2, and Azure Queues using `request` and `resource` attributes in the standard storage account performance tier. It's either not available or in PREVIEW for other storage account performance tiers, resource types, and attributes.
+Currently, Azure attribute-based access control (Azure ABAC) is generally available (GA) for controlling access only to Azure Blob Storage, Azure Data Lake Storage Gen2, and Azure Queues using `request`, `resource`, and `principal` attributes in the standard storage account performance tier. It's either not available or in PREVIEW for other storage account performance tiers, resource types, and attributes.
See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The following table shows the current status of ABAC by storage account performa
| Standard | Blobs<br/>Data Lake Storage Gen2<br/>Queues | environment | All attributes | Preview | | Premium | Blobs<br/>Data Lake Storage Gen2<br/>Queues | environment<br/>principal<br/>request<br/>resource | All attributes | Preview | +
+> [!NOTE]
+> Some storage features aren't supported for Data Lake Storage Gen2 storage accounts, which use a hierarchical namespace (HNS). To learn more, see [Blob storage feature support](storage-feature-support-in-storage-accounts.md).
+>
+>The following ABAC attributes aren't supported when hierarchical namespace is enabled for a storage account:
+>
+> - [Blob index tags [Keys]](storage-auth-abac-attributes.md#blob-index-tags-keys)
+> - [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key)
+> - [Version ID](storage-auth-abac-attributes.md#version-id)
+ ## Next steps - [Prerequisites for Azure role assignment conditions](../../role-based-access-control/conditions-prerequisites.md)
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
The following table describes key parameters for each redundancy option:
| Parameter | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS | |:-|:-|:-|:-|:-| | Percent durability of objects over a given year | at least 99.999999999% (11 9's) | at least 99.9999999999% (12 9's) | at least 99.99999999999999% (16 9's) | at least 99.99999999999999% (16 9's) |
-| Availability for read requests | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool or archive access tiers) for GRS<br/><br/>At least 99.99% (99.9% for cool or archive access tiers) for RA-GRS | At least 99.9% (99% for cool access tier) for GZRS<br/><br/>At least 99.99% (99.9% for cool access tier) for RA-GZRS |
-| Availability for write requests | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) |
+| Availability for read requests | At least 99.9% (99% for cool/cold/archive access tiers) | At least 99.9% (99% for cool/cold access tier) | At least 99.9% (99% for cool/cold/archive access tiers) for GRS<br/><br/>At least 99.99% (99.9% for cool/cold/archive access tiers) for RA-GRS | At least 99.9% (99% for cool/cold access tier) for GZRS<br/><br/>At least 99.99% (99.9% for cool/cold access tier) for RA-GZRS |
+| Availability for write requests | At least 99.9% (99% for cool/cold/archive access tiers) | At least 99.9% (99% for cool/cold access tier) | At least 99.9% (99% for cool/cold/archive access tiers) | At least 99.9% (99% for cool/cold access tier) |
| Number of copies of data maintained on separate nodes | Three copies within a single region | Three copies across separate availability zones within a single region | Six copies total, including three in the primary region and three in the secondary region | Six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region | For more information, see the [SLA for Storage Accounts](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from
`--include-before` (string) Include only those files modified before or on the given date/time. The value should be in ISO8601 format. If no timezone is specified, the value is assumed to be in the local timezone of the machine running AzCopy. For example, `2020-08-19T15:04:00Z` for a UTC time, or `2020-08-19` for midnight (00:00) in the local timezone. As of AzCopy 10.7, this flag applies only to files, not folders, so folder properties won't be copied when using this flag with `--preserve-smb-info` or `--preserve-smb-permissions`.
-`--include-directory-stub` False by default to ignore directory stubs. Directory stubs are blobs with metadata `hdi_isfolder:true`. Setting value to true will preserve directory stubs during transfers.
+`--include-directory-stub` False by default to ignore directory stubs. Directory stubs are blobs with metadata `hdi_isfolder:true`. Setting the value to true preserves directory stubs during transfers. Including this flag with no value defaults to true (*e.g.,* `azcopy copy --include-directory-stub` is the same as `azcopy copy --include-directory-stub=true`).
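For illustration, here's a minimal sketch of the flag in use. The account and container names are placeholders, and the SAS token or authentication you'd normally supply is omitted.

```azurepowershell
# Copy a container and preserve directory stubs (blobs with hdi_isfolder:true) at the destination.
azcopy copy 'https://<account>.blob.core.windows.net/<source-container>' `
            'https://<account>.blob.core.windows.net/<destination-container>' `
            --recursive --include-directory-stub=true
```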
`--include-path` (string) Include only these paths when copying. This option doesn't support wildcard characters (*). Checks relative path prefix (For example: myFolder;myFolder/subDirName/file.pdf).
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md
Title: Use Azurite emulator for local Azure Storage development
description: The Azurite open-source emulator provides a free local environment for testing your Azure storage applications. Previously updated : 12/05/2023 Last updated : 01/26/2024
The steps in the video are also described in the following sections. Select any
### [Visual Studio](#tab/visual-studio)
-Azurite is automatically available with [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). The Azurite executable is updated as part of Visual Studio new version releases. If you're running an earlier version of Visual Studio, you can install Azurite by using either Node Package Manager, DockerHub, or by cloning the Azurite GitHub repository.
+Azurite is automatically available with [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). The Azurite executable is updated as part of Visual Studio new version releases. If you're running an earlier version of Visual Studio, you can install Azurite by using either Node Package Manager (npm), DockerHub, or by cloning the Azurite GitHub repository.
### [Visual Studio Code](#tab/visual-studio-code)
-Within Visual Studio Code, select the **EXTENSIONS** pane and search for *Azurite* in the **EXTENSIONS:MARKETPLACE**.
+In Visual Studio Code, select the **Extensions** icon and search for **Azurite**. Select the **Install** button to install the Azurite extension.
-![Visual Studio Code extensions marketplace](media/storage-use-azurite/azurite-vs-code-extension.png)
You can also navigate to [Visual Studio Code extension market](https://marketplace.visualstudio.com/items?itemName=Azurite.azurite) in your browser. Select the **Install** button to open Visual Studio Code and go directly to the Azurite extension page.
-To configure Azurite within Visual Studio Code, select the extensions pane. Select the **Manage** (gear) icon for **Azurite**. Select **Extension Settings**.
+#### Configure Azurite extension settings
-![Azurites configure extension settings](media/storage-use-azurite/azurite-configure-extension-settings.png)
+To configure Azurite settings within Visual Studio Code, select the **Extensions** icon. Select the **Manage** gear button for the **Azurite** entry. Select **Extension Settings**.
+ The following settings are supported:
The following settings are supported:
- **azurite.skipApiVersionCheck** - Skip the request API version check. The default value is **false**. - **azurite.disableProductStyleUrl** Force the parsing of the storage account name from request Uri path, instead of from request Uri host. - ### [npm](#tab/npm) This installation method requires that you have [Node.js version 8.0 or later](https://nodejs.org) installed. Node Package Manager (npm) is the package management tool included with every Node.js installation. After installing Node.js, execute the following `npm` command to install Azurite.
This configuration option can be changed later by modifying the project's **Conn
### [Visual Studio Code](#tab/visual-studio-code) > [!NOTE]
-> Azurite cannot be run from the command line if you only installed the Visual Studio Code extension. Instead, use the Visual Studio Code command palette.
+> Azurite cannot be run from the command line if you only installed the Visual Studio Code extension. Instead, use the Visual Studio Code command palette to run commands. Configuration settings are detailed at [Configure Azurite extension settings](#configure-azurite-extension-settings).
-The extension supports the following Visual Studio Code commands. To open the command palette, press F1 in Visual Studio Code.
+The Azurite extension supports the following Visual Studio Code commands. To open the command palette, press **F1** in Visual Studio Code.
- **Azurite: Clean** - Reset all Azurite services persistency data - **Azurite: Clean Blob Service** - Clean blob service
stream-analytics Streaming Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/streaming-technologies.md
Azure Stream Analytics supports user-defined functions (UDF) or user-defined agg
### Your solution is in a multi-cloud or on-premises environment
-Azure Stream Analytics is Microsoft's proprietary technology and is only available on Azure. If you need your solution to be portable across Clouds or on-premises, consider open-source technologies such as Spark Structured Streaming or Storm.
+Azure Stream Analytics is Microsoft's proprietary technology and is only available on Azure. If you need your solution to be portable across Clouds or on-premises, consider open-source technologies such as Spark Structured Streaming or [Apache Flink](/azure/hdinsight-aks/flink/flink-overview).
## Next steps
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.1 (EOLA)
+ Title: Azure Synapse Runtime for Apache Spark 3.1 (unsupported)
description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.1.
Last updated 11/28/2022
-# Azure Synapse Runtime for Apache Spark 3.1 (EOLA)
+# Azure Synapse Runtime for Apache Spark 3.1 (unsupported)
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.1. > [!IMPORTANT]
-> * End of life announced (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 has been announced January 26, 2023.
-> * End of life announced (EOLA) runtime will not have bug and feature fixes. Security fixes will be backported based on risk assessment.
-> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.1 will be retired and disabled as of January 26, 2024. After the EOL date, the retired runtimes are unavailable for new Spark pools and existing workflows can't execute. Metadata will temporarily remain in the Synapse workspace.
-> * We recommend that you upgrade your Apache Spark 3.1 workloads to version 3.3 at your earliest convenience.
+> * End of life announced (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 has been announced January 26, 2023.
+> * Effective January 26, 2024, Azure Synapse has stopped official support for Spark 3.1 Runtimes.
+> * Post January 26, 2024, we will not be addressing any support tickets related to Spark 3.1. There will be no release pipeline in place for bug or security fixes for Spark 3.1. Utilizing Spark 3.1 post the support cutoff date is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.
+> * Recognizing that certain customers may need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 3.1, but we will not provide any official support for it.
+> * We strongly advise you to proactively upgrade your workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.3 (GA)](./apache-spark-33-runtime.md)).
+ ## Component versions
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
The runtimes have the following advantages:
## Supported Azure Synapse runtime releases > [!WARNING]
-> End of Support Notification for Azure Synapse Runtime for Apache Spark 2.4
-> * Effective September 29, 2023, the Azure Synapse will discontinue official support for Spark 2.4 Runtimes.
-> * Post September 29, we will not be addressing any support tickets related to Spark 2.4. There will be no release pipeline in place for bug or security fixes for Spark 2.4. Utilizing Spark 2.4 post the support cutoff date is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.
-> * Recognizing that certain customers might need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 2.4, but we will not provide any official support for it.
+> End of Support Notification for Azure Synapse Runtime for Apache Spark 2.4 and Apache Spark 3.1.
+> * Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 Runtimes.
+> * Effective January 26, 2024, Azure Synapse will discontinue official support for Spark 3.1 Runtimes.
+> * Post these dates, we will not be addressing any support tickets related to Spark 2.4 or 3.1. There will be no release pipeline in place for bug or security fixes for Spark 2.4 and 3.1. Utilizing Spark 2.4 or 3.1 post the support cutoff dates is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.
> * We strongly advise you to proactively upgrade your workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.3 (GA)](./apache-spark-33-runtime.md)). The following table lists the runtime name, Apache Spark version, and release date for supported Azure Synapse Runtime releases.
The following table lists the runtime name, Apache Spark version, and release da
| [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | Public Preview (GA expected in Q1 2024) | | [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q1/Q2 2024 | Q1 2025 | | [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __End of Life Announced (EOLA)__ | July 8, 2023 | July 8, 2024 |
-| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Life Announced (EOLA)__ | January 26, 2023 | January 26, 2024 |
-| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life (EOL)__ | __July 29, 2022__ | __September 29, 2023__ |
+| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Life (EOL) unsupported__ | January 26, 2023 | January 26, 2024 |
+| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life (EOL) unsupported__ | __July 29, 2022__ | __September 29, 2023__ |
## Runtime release stages
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
Title: Create and assign an autoscale scaling plan for Azure Virtual Desktop
description: How to create and assign an autoscale scaling plan to optimize deployment costs. Previously updated : 01/16/2024 Last updated : 01/10/2024 + # Create and assign an autoscale scaling plan for Azure Virtual Desktop
To use scaling plans, make sure you follow these guidelines:
> [!IMPORTANT] > Hibernation is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+- If you're using PowerShell to create and assign your scaling plan, you need the [Az.DesktopVirtualization](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/) module version 4.2.0 or later, as shown in the example after this list.
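As a quick check, the following sketch verifies the installed module version and installs or updates it if needed; the minimum version comes from the prerequisite above.

```azurepowershell
# Check which version of Az.DesktopVirtualization is installed, then install at least 4.2.0 if it's missing or outdated.
Get-InstalledModule -Name Az.DesktopVirtualization -ErrorAction SilentlyContinue

Install-Module -Name Az.DesktopVirtualization -MinimumVersion 4.2.0 -Scope CurrentUser
```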
## Assign the Desktop Virtualization Power On Off Contributor role with the Azure portal
To learn how to assign the *Desktop Virtualization Power On Off Contributor* rol
## Create a scaling plan
-Now that you've assigned the *Desktop Virtualization Power On Off Contributor* role to the service principal on your subscriptions, you can create a scaling plan. To create a scaling plan:
+### [Portal](#tab/portal)
+
+Now that you've assigned the *Desktop Virtualization Power On Off Contributor* role to the service principal on your subscriptions, you can create a scaling plan. To create a scaling plan using the portal:
1. Sign in to the [Azure portal](https://portal.azure.com).
Now that you've assigned the *Desktop Virtualization Power On Off Contributor* r
1. Select **Next**, which should take you to the **Schedules** tab. Schedules let you define when autoscale turns VMs on and off throughout the day. The schedule parameters are different based on the **Host pool type** you chose for the scaling plan.
- #### [Pooled host pools](#tab/pooled-autoscale)
+ #### Pooled host pools
In each phase of the schedule, autoscale only turns off VMs when in doing so the used host pool capacity won't exceed the capacity threshold. The default values you'll see when you try to create a schedule are the suggested values for weekdays, but you can change them as needed.
Now that you've assigned the *Desktop Virtualization Power On Off Contributor* r
- Load-balancing algorithm. We recommend choosing **depth-first** to gradually reduce the number of session hosts based on sessions on each VM. - Just like peak hours, you can't configure the capacity threshold here. Instead, the value you entered in **Ramp-down** will carry over.
- #### [Personal host pools](#tab/personal-autoscale)
+ #### Personal host pools
In each phase of the schedule, define whether VMs should be deallocated based on the user session state.
Now that you've assigned the *Desktop Virtualization Power On Off Contributor* r
> [!NOTE] > If you change resource settings on other tabs after creating tags, your tags will be automatically updated.
-1. Once you're done, go to the **Review + create** tab and select **Create** to deploy your host pool.
+1. Once you're done, go to the **Review + create** tab and select **Create** to create and assign your scaling plan to the host pools you selected.
+
+### [PowerShell](#tab/powershell)
+
+Here's how to create a scaling plan using the Az.DesktopVirtualization PowerShell module. The following examples show you how to create a scaling plan and scaling plan schedule.
+
+> [!IMPORTANT]
+> In the following examples, you'll need to change the `<placeholder>` values for your own.
++
+2. Create a scaling plan for your pooled or personal host pool(s) using the [New-AzWvdScalingPlan](/powershell/module/az.desktopvirtualization/new-azwvdscalingplan) cmdlet:
+
+ ```azurepowershell
+ $scalingPlanParams = @{
+ ResourceGroupName = '<resourceGroup>'
+ Name = '<scalingPlanName>'
+ Location = '<AzureRegion>'
+ Description = '<Scaling plan description>'
+ FriendlyName = '<Scaling plan friendly name>'
+ HostPoolType = '<Pooled or personal>'
+ TimeZone = '<Time zone, such as Pacific Standard Time>'
+ HostPoolReference = @(@{'hostPoolArmPath' = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/<resourceGroup>/providers/Microsoft.DesktopVirtualization/hostPools/<hostPoolName>'; 'scalingPlanEnabled' = $true;})
+ }
+
+ $scalingPlan = New-AzWvdScalingPlan @scalingPlanParams
+ ```
+
+++
+3. Create a scaling plan schedule.
+
+ * For pooled host pools, use the [New-AzWvdScalingPlanPooledSchedule](/powershell/module/az.desktopvirtualization/new-azwvdscalingplanpooledschedule) cmdlet. This example creates a pooled scaling plan that runs on Monday through Friday, ramps up at 6:30 AM, starts peak hours at 8:30 AM, ramps down at 4:00 PM, and starts off-peak hours at 10:45 PM.
++
+ ```azurepowershell
+ $scalingPlanPooledScheduleParams = @{
+ ResourceGroupName = 'resourceGroup'
+ ScalingPlanName = 'scalingPlanPooled'
+ ScalingPlanScheduleName = 'pooledSchedule1'
+ DaysOfWeek = 'Monday','Tuesday','Wednesday','Thursday','Friday'
+ RampUpStartTimeHour = '6'
+ RampUpStartTimeMinute = '30'
+ RampUpLoadBalancingAlgorithm = 'BreadthFirst'
+ RampUpMinimumHostsPct = '20'
+ RampUpCapacityThresholdPct = '20'
+ PeakStartTimeHour = '8'
+ PeakStartTimeMinute = '30'
+ PeakLoadBalancingAlgorithm = 'DepthFirst'
+ RampDownStartTimeHour = '16'
+ RampDownStartTimeMinute = '0'
+ RampDownLoadBalancingAlgorithm = 'BreadthFirst'
+ RampDownMinimumHostsPct = '20'
+ RampDownCapacityThresholdPct = '20'
+ RampDownForceLogoffUser = $true
+ RampDownWaitTimeMinute = '30'
+ RampDownNotificationMessage = '"Log out now, please."'
+ RampDownStopHostsWhen = 'ZeroSessions'
+ OffPeakStartTimeHour = '22'
+ OffPeakStartTimeMinute = '45'
+ OffPeakLoadBalancingAlgorithm = 'DepthFirst'
+ }
+
+ $scalingPlanPooledSchedule = New-AzWvdScalingPlanPooledSchedule @scalingPlanPooledScheduleParams
+ ```
+
+
+ * For personal host pools, use the [New-AzWvdScalingPlanPersonalSchedule](/powershell/module/az.desktopvirtualization/new-azwvdscalingplanpersonalschedule) cmdlet. The following example creates a personal scaling plan that runs on Monday, Tuesday, and Wednesday, ramps up at 6:00 AM, starts peak hours at 8:15 AM, ramps down at 4:30 PM, and starts off-peak hours at 6:45 PM.
++
+ ```azurepowershell
+ $scalingPlanPersonalScheduleParams = @{
+ ResourceGroupName = 'resourceGroup'
+ ScalingPlanName = 'scalingPlanPersonal'
+ ScalingPlanScheduleName = 'personalSchedule1'
+ DaysOfWeek = 'Monday','Tuesday','Wednesday'
+ RampUpStartTimeHour = '6'
+ RampUpStartTimeMinute = '0'
+ RampUpAutoStartHost = 'WithAssignedUser'
+ RampUpStartVMOnConnect = 'Enable'
+ RampUpMinutesToWaitOnDisconnect = '30'
+ RampUpActionOnDisconnect = 'Deallocate'
+ RampUpMinutesToWaitOnLogoff = '3'
+ RampUpActionOnLogoff = 'Deallocate'
+ PeakStartTimeHour = '8'
+ PeakStartTimeMinute = '15'
+ PeakStartVMOnConnect = 'Enable'
+ PeakMinutesToWaitOnDisconnect = '10'
+ PeakActionOnDisconnect = 'Hibernate'
+ PeakMinutesToWaitOnLogoff = '15'
+ PeakActionOnLogoff = 'Deallocate'
+ RampDownStartTimeHour = '16'
+ RampDownStartTimeMinute = '30'
+ RampDownStartVMOnConnect = 'Disable'
+ RampDownMinutesToWaitOnDisconnect = '10'
+ RampDownActionOnDisconnect = 'None'
+ RampDownMinutesToWaitOnLogoff = '15'
+ RampDownActionOnLogoff = 'Hibernate'
+ OffPeakStartTimeHour = '18'
+ OffPeakStartTimeMinute = '45'
+ OffPeakStartVMOnConnect = 'Disable'
+ OffPeakMinutesToWaitOnDisconnect = '10'
+ OffPeakActionOnDisconnect = 'Deallocate'
+ OffPeakMinutesToWaitOnLogoff = '15'
+ OffPeakActionOnLogoff = 'Deallocate'
+ }
+
+ $scalingPlanPersonalSchedule = New-AzWvdScalingPlanPersonalSchedule @scalingPlanPersonalScheduleParams
+ ```
+
+ >[!NOTE]
+ > We recommend that `RampUpStartVMOnConnect` be enabled for the ramp-up phase of the schedule if you opt out of having autoscale start session host VMs. For more information, see [Start VM on Connect](start-virtual-machine-connect.md).
+
+4. Use [Get-AzWvdScalingPlan](/powershell/module/az.desktopvirtualization/get-azwvdscalingplan) to get the host pool(s) that your scaling plan is assigned to.
+
+ ```azurepowershell
+ $params = @{
+ ResourceGroupName = 'resourceGroup'
+ Name = 'scalingPlanPersonal'
+ }
+
+ (Get-AzWvdScalingPlan @params).HostPoolReference | FL HostPoolArmPath,ScalingPlanEnabled
+ ```
+
+
+ You have now created a new scaling plan and one or more schedules, assigned the plan to your pooled or personal host pools, and enabled autoscale.
++++ ## Edit an existing scaling plan
+### [Portal](#tab/portal)
+ To edit an existing scaling plan: 1. Sign in to the [Azure portal](https://portal.azure.com).
To edit an existing scaling plan:
1. To edit the plan's friendly name, description, time zone, or exclusion tags, go to the **Properties** tab.
+### [PowerShell](#tab/powershell)
+
+Here's how to update a scaling plan using the Az.DesktopVirtualization PowerShell module. The following examples show you how to update a scaling plan and scaling plan schedule.
+
+* Update a scaling plan using [Update-AzWvdScalingPlan](/powershell/module/az.desktopvirtualization/update-azwvdscalingplan). This example updates the scaling plan's timezone.
+
+ ```azurepowershell
+ $scalingPlanParams = @{
+ ResourceGroupName = 'resourceGroup'
+ Name = 'scalingPlanPersonal'
+ Timezone = 'Eastern Standard Time'
+ }
+
+ Update-AzWvdScalingPlan @scalingPlanParams
+ ```
+
+* Update a scaling plan schedule using [Update-AzWvdScalingPlanPersonalSchedule](/powershell/module/az.desktopvirtualization/update-azwvdscalingplanpersonalschedule). This example updates the ramp up start time.
+
+ ```azurepowershell
+ $scalingPlanPersonalScheduleParams = @{
+ ResourceGroupName = 'resourceGroup'
+ ScalingPlanName = 'scalingPlanPersonal'
+ ScalingPlanScheduleName = 'personalSchedule1'
+ RampUpStartTimeHour = '5'
+ RampUpStartTimeMinute = '30'
+ }
+
+ Update-AzWvdScalingPlanPersonalSchedule @scalingPlanPersonalScheduleParams
+ ```
+
+* Update a pooled scaling plan schedule using [Update-AzWvdScalingPlanPooledSchedule](/powershell/module/az.desktopvirtualization/update-azwvdscalingplanpooledschedule). This example updates the peak hours start time.
+
+ ```azurepowershell
+ $scalingPlanPooledScheduleParams = @{
+ ResourceGroupName = 'resourceGroup'
+ ScalingPlanName = 'scalingPlanPooled'
+ ScalingPlanScheduleName = 'pooledSchedule1'
+ PeakStartTimeHour = '9'
+ PeakStartTimeMinute = '15'
+ }
+
+ Update-AzWvdScalingPlanPooledSchedule @scalingPlanPooledScheduleParams
+ ```
+++ ## Assign scaling plans to existing host pools
-You can assign a scaling plan to any existing host pools in your deployment. When you assign a scaling plan to your host pool, the plan will apply to all session hosts within that host pool. The scaling plan also automatically applies to any new session hosts you create in the assigned host pool.
+You can assign a scaling plan to any existing host pools of the same type in your deployment. When you assign a scaling plan to your host pool, the plan will apply to all session hosts within that host pool. The scaling plan also automatically applies to any new session hosts you create in the assigned host pool.
If you disable a scaling plan, all assigned resources will remain in the state they were in at the time you disabled it.
-### Assign a scaling plan to a single existing host pool
-To assign a scaling plan to an existing host pool:
+### [Portal](#tab/portal)
+
+To assign a scaling plan to existing host pools:
1. Open the [Azure portal](https://portal.azure.com). 1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
-1. Select **Host pools**, and select the host pool you want to assign the scaling plan to.
+1. Select **Scaling plans**, and select the scaling plan you want to assign to host pools.
-1. Under the **Settings** heading, select **Scaling plan**, and then select **+ Assign**. Select the scaling plan you want to assign and select **Assign**. The scaling plan must be in the same Azure region as the host pool and the scaling plan's host pool type must match the type of host pool that you're trying to assign it to.
+1. Under the **Manage** heading, select **Host pool assignments**, and then select **+ Assign**. Select the host pools you want to assign the scaling plan to and select **Assign**. The host pools must be in the same Azure region as the scaling plan and the scaling plan's host pool type must match the type of host pools you're trying to assign it to.
> [!TIP] > If you've enabled the scaling plan during deployment, then you'll also have the option to disable the plan for the selected host pool in the **Scaling plan** menu by unselecting the **Enable autoscale** checkbox, as shown in the following screenshot.
To assign a scaling plan to an existing host pool:
> [!div class="mx-imgBorder"] > ![A screenshot of the scaling plan window. The "enable autoscale" check box is selected and highlighted with a red border.](media/enable-autoscale.png)
-### Assign a scaling plan to multiple existing host pools
+### [PowerShell](#tab/powershell)
-To assign a scaling plan to multiple existing host pools at the same time:
+1. Assign a scaling plan to existing host pools using [Update-AzWvdScalingPlan](/powershell/module/az.desktopvirtualization/update-azwvdscalingplan). The following example assigns a personal scaling plan to two existing personal host pools.
-1. Open the [Azure portal](https://portal.azure.com).
+ ```azurepowershell
+ $scalingPlanParams = @{
+ ResourceGroupName = 'resourceGroup'
+ Name = 'scalingPlanPersonal'
+ HostPoolReference = @(
+ @{
+ 'hostPoolArmPath' = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroup/providers/Microsoft.DesktopVirtualization/hostPools/scalingPlanPersonal';
+ 'scalingPlanEnabled' = $true;
+ },
+ @{
+ 'hostPoolArmPath' = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroup/providers/Microsoft.DesktopVirtualization/hostPools/scalingPlanPersonal2';
+ 'scalingPlanEnabled' = $true;
+ }
+ )
+ }
+
+ $scalingPlan = Update-AzWvdScalingPlan @scalingPlanParams
+ ```
-1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+2. Use [Get-AzWvdScalingPlan](/powershell/module/az.desktopvirtualization/get-azwvdscalingplan) to get the host pool(s) that your scaling plan is assigned to.
-1. Select **Scaling plans**, and select the scaling plan you want to assign to host pools.
+ ```azurepowershell
+ $params = @{
+ ResourceGroupName = 'resourceGroup'
+ Name = 'scalingPlanPersonal'
+ }
+
+ (Get-AzWvdScalingPlan @params).HostPoolReference | FL HostPoolArmPath,ScalingPlanEnabled
+ ```
++
-1. Under the **Manage** heading, select **Host pool assignments**, and then select **+ Assign**. Select the host pools you want to assign the scaling plan to and select **Assign**. The host pools must be in the same Azure region as the scaling plan and the scaling plan's host pool type must match the type of host pools you're trying to assign it to.
## Next steps
Now that you've created your scaling plan, here are some things you can do:
- [Enable diagnostics for your scaling plan](autoscale-diagnostics.md)
-If you'd like to learn more about terms used in this article, check out our [autoscale glossary](autoscale-glossary.md). For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md). You can also look at our [Autoscale FAQ](autoscale-faq.yml) if you have other questions.
+If you'd like to learn more about terms used in this article, check out our [autoscale glossary](autoscale-glossary.md). For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md). You can also look at our [Autoscale FAQ](autoscale-faq.yml) if you have other questions.
virtual-desktop Create Custom Image Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-custom-image-templates.md
description: Learn how to use custom image templates to create custom images whe
Previously updated : 09/08/2023 Last updated : 01/24/2024 # Use custom image templates to create custom images in Azure Virtual Desktop
To create a custom image using the Azure portal:
| Parameter | Value/Description | |--|--|
- | Build timeout (minutes) | Enter the [maximum duration to wait](../virtual-machines/linux/image-builder-json.md#properties-buildtimeoutinminutes) while building the image template (includes all customizations, validations, and distributions). |
+ | Build timeout (minutes) | Enter the [maximum duration to wait](../virtual-machines/linux/image-builder-json.md#properties-buildtimeoutinminutes) while building the image template (includes all customizations, validations, and distributions). <br /><br />Customizations like Language Pack installation or Configure Windows Optimization require Windows Update and we recommend a higher build timeout. Windows Update is automatically triggered for those built-in scripts. |
| Build VM size | Select a size for the temporary VM created and used to build the template. You need to select a [VM size that matches the generation](../virtual-machines/generation-2.md) of your source image. | | OS disk size (GB) | Select the resource group you assigned the managed identity to.<br /><br />Alternatively, if you assigned the managed identity to the subscription, you can create a new resource group here. | | Staging group | Enter a name for a new resource group you want Azure Image Builder to use to create the Azure resources it needs to create the image. If you leave this blank Azure Image Builder creates its own default resource group. |
virtual-desktop Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-overview.md
When a user connects to Azure Virtual Desktop over Private Link, and Azure Virtu
1. Your private DNS zone for **privatelink.wvd.microsoft.com** returns the private IP address for the Remote Desktop gateway to use for the host pool providing the remote session.
-## Limitations
+## Known issues and limitations
Private Link with Azure Virtual Desktop has the following limitations:
Private Link with Azure Virtual Desktop has the following limitations:
- After you've changed a private endpoint to a host pool, you must restart the *Remote Desktop Agent Loader* (*RDAgentBootLoader*) service on each session host in the host pool. You also need to restart this service whenever you change a host pool's network configuration. Instead of restarting the service, you can restart each session host. -- Using both Private Link and [RDP Shortpath](./shortpath.md) at the same time isn't currently supported.
+- Using both Private Link and [RDP Shortpath](./shortpath.md) has the following limitations:
+
+ - Using Private Link and [RDP Shortpath for public networks](rdp-shortpath.md?tabs=public-networks) isn't supported.
+ - Using Private Link and [RDP Shortpath for managed networks](rdp-shortpath.md?tabs=managed-networks) isn't supported, but they can work together. You can use Private Link and RDP Shortpath for managed networks at your own risk. You can follow the steps to [Disable RDP Shortpath for managed networks](configure-rdp-shortpath.md?tabs=managed-networks#disable-rdp-shortpath).
- Early in the preview of Private Link with Azure Virtual Desktop, the private endpoint for the initial feed discovery (for the *global* sub-resource) shared the private DNS zone name of `privatelink.wvd.microsoft.com` with other private endpoints for workspaces and host pools. In this configuration, users are unable to establish private endpoints exclusively for host pools and workspaces. Starting September 1, 2023, sharing the private DNS zone in this configuration will no longer be supported. You need to create a new private endpoint for the *global* sub-resource to use the private DNS zone name of `privatelink-global.wvd.microsoft.com`. For the steps to do this, see [Initial feed discovery](private-link-setup.md#initial-feed-discovery).
virtual-desktop Troubleshoot Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-insights.md
If your data isn't displaying properly, check the following common solutions:
- Read-access to the subscription's resource groups that hold your Azure Virtual Desktop session hosts - Read-access to whichever Log Analytics workspaces you're using - You may need to open outgoing ports in your server's firewall to allow Azure Monitor and Log Analytics to send data to the portal. To learn how to do this, see the following articles:
- - [Azure Monitor Outgoing ports](../azure-monitor/app/ip-addresses.md)
+ - [Azure Monitor Outgoing ports](../azure-monitor/ip-addresses.md)
- [Log Analytics Firewall Requirements](../azure-monitor/agents/log-analytics-agent.md#firewall-requirements). - Not seeing data from recent activity? You may want to wait for 15 minutes and refresh the feed. Azure Monitor has a 15-minute latency period for populating log data. To learn more, see [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md).
virtual-machine-scale-sets Virtual Machine Scale Sets Attach Detach Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-attach-detach-vm.md
Update-AzVM -ResourceGroupName $resourceGroupName -VM $vm -VirtualMachin
- The scale set must use Flexible orchestration mode. - The scale set must have a `platformFaultDomainCount` of **1**. - VMs created by the scale set must be `Stopped` prior to being detached.-- Detach of VMs created by the scale set is currently not supported in West Central US, East Asia, UK South, and North Europe.
+- Detach of VMs created by the scale set is currently not supported in UK South and North Europe.
## Moving VMs between scale sets (Preview)
virtual-machines Sizes B Series Burstable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-b-series-burstable.md
The B-series comes in the following VM sizes:
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Base CPU Performance of VM (%) | Initial Credits | Credits banked/hour | Max Banked Credits | Max data disks | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps1 | Max NICs | |-||-||--|--||--|-|--||-|
-| Standard_B1ls2 | 1 | 0.5 | 4 | 10 | 30 | 6 | 144 | 2 | 160/10 | 4000/100 | 2 |
-| Standard_B1s | 1 | 1 | 4 | 20 | 30 | 12 | 288 | 2 | 320/10 | 4000/100 | 2 |
-| Standard_B1ms | 1 | 2 | 4 | 40 | 30 | 24 | 576 | 2 | 640/10 | 4000/100 | 2 |
-| Standard_B2s | 2 | 4 | 8 | 40 | 60 | 48 | 1152 | 4 | 1280/15 | 4000/100 | 3 |
-| Standard_B2ms | 2 | 8 | 16 | 60 | 60 | 72 | 1728 | 4 | 1920/22.5 | 4000/100 | 3 |
-| Standard_B4ms | 4 | 16 | 32 | 45 | 120 | 108 | 2592 | 8 | 2880/35 | 8000/200 | 4 |
-| Standard_B8ms | 8 | 32 | 64 | 33 | 240 | 158 | 3800 | 16 | 4320/50 | 8000/200 | 4 |
-| Standard_B12ms | 12 | 48 | 96 | 36 | 360 | 258 | 6220 | 16 | 4320/50 | 16000/400 | 6 |
-| Standard_B16ms | 16 | 64 | 128 | 40 | 480 | 384 | 9216 | 32 | 4320/50 | 16000/400 | 8 |
-| Standard_B20ms | 20 | 80 | 160 | 40 | 600 | 480 | 11520 | 32 | 4320/50 | 16000/400 | 8 |
+| Standard_B1ls2 | 1 | 0.5 | 4 | 5 | 30 | 3 | 72 | 2 | 160/10 | 4000/100 | 2 |
+| Standard_B1s | 1 | 1 | 4 | 10 | 30 | 6 | 144 | 2 | 320/10 | 4000/100 | 2 |
+| Standard_B1ms | 1 | 2 | 4 | 20 | 30 | 12 | 288 | 2 | 640/10 | 4000/100 | 2 |
+| Standard_B2s | 2 | 4 | 8 | 20 | 60 | 24 | 576 | 4 | 1280/15 | 4000/100 | 3 |
+| Standard_B2ms | 2 | 8 | 16 | 30 | 60 | 36 | 864 | 4 | 1920/22.5 | 4000/100 | 3 |
+| Standard_B4ms | 4 | 16 | 32 | 20 | 120 | 54 | 1296 | 8 | 2880/35 | 8000/200 | 4 |
+| Standard_B8ms | 8 | 32 | 64 | 17 | 240 | 81 | 1994 | 16 | 4320/50 | 8000/200 | 4 |
+| Standard_B12ms | 12 | 48 | 96 | 17 | 360 | 121 | 2908 | 16 | 4320/50 | 16000/400 | 6 |
+| Standard_B16ms | 16 | 64 | 128 | 17 | 480 | 162 | 3888 | 32 | 4320/50 | 16000/400 | 8 |
+| Standard_B20ms | 20 | 80 | 160 | 17 | 600 | 202 | 4867 | 32 | 4320/50 | 16000/400 | 8 |
<sup>1</sup> B-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
The following commands can be used in Azure CLI and Azure PowerShell to begin th
|Tool|Command| ||| |Azure portal|Use the **Decommission** option in the Overview section of a Custom IP Prefix |
-|CLI|[az network custom-ip prefix update](/cli/azure/network/public-ip/prefix#az-network-custom-ip-prefix-update) with the flag to `-Decommission` |
-|PowerShell|[Update-AzCustomIpPrefix](/powershell/module/az.network/update-azcustomipprefix) with the `--state` flag set to decommission |
+|CLI|[az network custom-ip prefix update](/cli/azure/network/public-ip/prefix#az-network-custom-ip-prefix-update) with `--state` flag set to decommission |
+|PowerShell|[Update-AzCustomIpPrefix](/powershell/module/az.network/update-azcustomipprefix) with the flag to `-Decommission` |
Alternatively, a custom IP prefix can be decommissioned via the Azure portal using the **Decommission** button in the **Overview** section of the custom IP prefix.
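For example, here's a minimal PowerShell sketch of the decommission step; the resource group and prefix names are placeholders.

```azurepowershell
# Begin decommissioning a provisioned custom IP prefix (names are placeholders).
Update-AzCustomIpPrefix -ResourceGroupName 'myResourceGroup' -Name 'myCustomIpPrefix' -Decommission
```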
The following commands can be used in Azure CLI and Azure PowerShell to deprovis
|Tool|Command| ||| |Azure portal|Use the **Deprovision** option in the Overview section of a Custom IP Prefix |
-|CLI|[az network custom-ip prefix update](/cli/azure/network/public-ip/prefix#az-network-custom-ip-prefix-update) with the flag to `-Deprovision` <br>[az network custom-ip prefix delete](/cli/azure/network/public-ip/prefix#az-network-custom-ip-prefix-delete) to remove|
-|PowerShell|[Update-AzCustomIpPrefix](/powershell/module/az.network/update-azcustomipprefix) with the `--state` flag set to deprovision<br>[Remove-AzCustomIpPrefix](/powershell/module/az.network/update-azcustomipprefix) to remove|
+|CLI|[az network custom-ip prefix update](/cli/azure/network/public-ip/prefix#az-network-custom-ip-prefix-update) with the `--state` flag set to deprovision <br>[az network custom-ip prefix delete](/cli/azure/network/public-ip/prefix#az-network-custom-ip-prefix-delete) to remove|
+|PowerShell|[Update-AzCustomIpPrefix](/powershell/module/az.network/update-azcustomipprefix)with the flag to `-Deprovision` <br>[Remove-AzCustomIpPrefix](/powershell/module/az.network/remove-azcustomipprefix) to remove|
Alternatively, a custom IP prefix can be decommissioned via the Azure portal using the **Deprovision** button in the **Overview** section of the custom IP prefix, and then deleted using the **Delete** button in the same section.
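Similarly, here's a minimal sketch of deprovisioning and then deleting the prefix, again with placeholder names.

```azurepowershell
# Deprovision the custom IP prefix, then remove the resource once deprovisioning completes.
Update-AzCustomIpPrefix -ResourceGroupName 'myResourceGroup' -Name 'myCustomIpPrefix' -Deprovision
Remove-AzCustomIpPrefix -ResourceGroupName 'myResourceGroup' -Name 'myCustomIpPrefix'
```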
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub
* Configure AS-Path as the Hub Routing Preference for your Virtual Hub. This ensures traffic between the 2 hubs traverses through the Virtual hub router in each hub and uses the hub-to-hub path instead of the ExpressRoute path (which traverses through the Microsoft Edge routers). For more information, see [Configure virtual hub routing preference](howto-virtual-hub-routing-preference.md).
-### When there's an ExpressRoute circuit connected as a bow-tie to a Virtual WAN hub and a non Virtual WAN VNet, what is the path for the non Virtual WAN VNet to reach the Virtual WAN hub?
+### When there's an ExpressRoute circuit connected as a bow-tie to a Virtual WAN hub and a standalone VNet, what is the path for the standalone VNet to reach the Virtual WAN hub?
-The current behavior is to prefer the ExpressRoute circuit path for non Virtual WAN VNet to Virtual WAN connectivity. It's recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non Virtual WAN VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
+The current behavior is to prefer the ExpressRoute circuit path for standalone (non-Virtual WAN) VNet to Virtual WAN connectivity. It's recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the standalone VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN hub router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
+
+In Azure portal, the **Allow traffic from remote Virtual WAN networks** and **Allow traffic from non Virtual WAN networks** toggles allow connectivity between the standalone virtual network (VNet 4) and the spoke virtual networks directly connected to the Virtual WAN hub (VNet 2 and VNet 3). To allow this connectivity, both toggles need to be enabled: the **Allow traffic from remote Virtual WAN networks** toggle for the ExpressRoute gateway in the standalone virtual network and the **Allow traffic from non Virtual WAN networks** for the ExpressRoute gateway in the Virtual WAN hub. In the diagram below, if both of these toggles are enabled, then connectivity would be allowed between the standalone VNet 4 and the VNets directly connected to hub 2 (VNet 2 and VNet 3). If an Azure Route Server is deployed in standalone VNet 4, and the Route Server has [branch-to-branch](../route-server/quickstart-configure-route-server-portal.md#configure-route-exchange) enabled, then connectivity will be blocked between VNet 1 and standalone VNet 4.
+
### Can hubs be created in different resource groups in Virtual WAN?
web-application-firewall Ag Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/ag-overview.md
description: This article provides an overview of Web Application Firewall (WAF)
Previously updated : 11/08/2022 Last updated : 01/26/2024 # What is Azure Web Application Firewall on Azure Application Gateway?
-Azure Web Application Firewall (WAF) on Azure Application Gateway provides centralized protection of your web applications from common exploits and vulnerabilities. Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. SQL injection and cross-site scripting are among the most common attacks.
+The Azure Web Application Firewall (WAF) on Azure Application Gateway actively safeguards your web applications against common exploits and vulnerabilities. As web applications become more frequent targets for malicious attacks, these attacks often exploit well-known vulnerabilities such as SQL injection and cross-site scripting.
WAF on Application Gateway is based on the [Core Rule Set (CRS)](application-gateway-crs-rulegroups-rules.md) from the Open Web Application Security Project (OWASP).
-All of the WAF features listed below exist inside of a WAF policy. You can create multiple policies, and they can be associated with an Application Gateway, to individual listeners, or to path-based routing rules on an Application Gateway. This way, you can have separate policies for each site behind your Application Gateway if needed. For more information on WAF policies, see [Create a WAF Policy](create-waf-policy-ag.md).
+All of the following WAF features exist inside of a WAF policy. You can create multiple policies, and they can be associated with an Application Gateway, to individual listeners, or to path-based routing rules on an Application Gateway. This way, you can have separate policies for each site behind your Application Gateway if needed. For more information on WAF policies, see [Create a WAF Policy](create-waf-policy-ag.md).
> [!Note]
> Application Gateway has two versions of the WAF SKU: Application Gateway WAF_v1 and Application Gateway WAF_v2. WAF policy associations are only supported for the Application Gateway WAF_v2 SKU.
All of the WAF features listed below exist inside of a WAF policy. You can creat
Application Gateway operates as an application delivery controller (ADC). It offers Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), termination, cookie-based session affinity, round-robin load distribution, content-based routing, ability to host multiple websites, and security enhancements.
-Application Gateway security enhancements include TLS policy management and end-to-end TLS support. Application security is strengthened by WAF integration into Application Gateway. The combination protects your web applications against common vulnerabilities. And it provides an easy-to-configure central location to manage.
+Application Gateway enhances security through TLS policy management and end-to-end TLS support. By integrating WAF into Application Gateway, it fortifies application security. This combination actively defends your web applications against common vulnerabilities and offers a centrally manageable, easy-to-configure location.
## Benefits
You can configure a WAF policy and associate that policy to one or more applicat
- Custom rules that you create
-- Managed rule sets that are a collection of Azure-managed pre-configured set of rules
+- Managed rule sets, which are Azure-managed, preconfigured collections of rules
When both are present, custom rules are processed before the rules in a managed rule set. A rule is made of a match condition, a priority, and an action. Action types supported are: ALLOW, BLOCK, and LOG. You can create a fully customized policy that meets your specific application protection requirements by combining managed and custom rules.
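As a rough sketch of how a custom rule combines a match condition, a priority, and an action, the following Azure CLI commands could be used; the policy name, rule name, resource group, and IP range are all assumed placeholders rather than values from this article.

```azurecli
# Create a WAF policy (placeholder names)
az network application-gateway waf-policy create \
  --name myWafPolicy \
  --resource-group myResourceGroup

# Add a custom rule with priority 10 and a Block action
az network application-gateway waf-policy custom-rule create \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --name BlockExampleRange \
  --priority 10 \
  --rule-type MatchRule \
  --action Block

# Give the rule a match condition: client addresses in an example range
az network application-gateway waf-policy custom-rule match-condition add \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --name BlockExampleRange \
  --match-variables RemoteAddr \
  --operator IPMatch \
  --values "203.0.113.0/24"
```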
Three bot categories are supported:
- **Bad**
- Bad bots include bots from malicious IP addresses and bots that have falsified their identities. Bad bots with malicious IPs are sourced from the Microsoft Threat Intelligence feed's high confidence IP Indicators of Compromise.
+ Bad bots include bots from malicious IP addresses and bots that falsify their identities. Bad bots with malicious IPs are sourced from the Microsoft Threat Intelligence feed's high confidence IP Indicators of Compromise.
- **Good**
  Good bots include validated search engines such as Googlebot, bingbot, and other trusted user agents.
- **Unknown**
- Unknown bots are classified via published user agents without additional validation. For example, market analyzer, feed fetchers, and data collection agents. Unknown bots also include malicious IP addresses that are sourced from Microsoft Threat Intelligence feed's medium confidence IP Indicators of Compromise.
+ Unknown bots are classified via published user agents without more validation. For example, market analyzer, feed fetchers, and data collection agents. Unknown bots also include malicious IP addresses that are sourced from the Microsoft Threat Intelligence feed's medium confidence IP Indicators of Compromise.
-Bot signatures are managed and dynamically updated by the WAF platform.
+The WAF platform actively manages and dynamically updates bot signatures.
:::image type="content" source="../media/ag-overview/bot-rule-set.png" alt-text="Screenshot of bot rule set.":::
-You may assign Microsoft_BotManagerRuleSet_1.0 by using the **Assign** option under **Managed Rulesets**:
+You can assign Microsoft_BotManagerRuleSet_1.0 by using the **Assign** option under **Managed Rulesets**:
:::image type="content" source="../media/ag-overview/assign-managed-rule-sets.png" alt-text="Screenshot of Assign managed rule sets.":::
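The same assignment can also be sketched with the Azure CLI; the policy and resource group names below are placeholders.

```azurecli
# Assign the bot manager rule set to an existing WAF policy (placeholder names)
az network application-gateway waf-policy managed-rule rule-set add \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --type Microsoft_BotManagerRuleSet \
  --version 1.0
```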
-If Bot protection is enabled, incoming requests that match bot rules are blocked, allowed, or logged based on the configured action. Malicious bots are blocked, verified search engine crawlers are allowed, unknown search engine crawlers are blocked, and unknown bots are logged by default. You can set custom actions to block, allow, or log for different types of bots.
+When Bot protection is enabled, it blocks, allows, or logs incoming requests that match bot rules based on the action you've configured. It blocks malicious bots, allows verified search engine crawlers, blocks unknown search engine crawlers, and logs unknown bots by default. You have the option to set custom actions to block, allow, or log different types of bots.
You can access WAF logs from a storage account, event hub, log analytics, or send logs to a partner solution.
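As one hedged example of routing those logs, a diagnostic setting can send firewall logs to a Log Analytics workspace; the gateway, workspace, and setting names here are assumptions, not values from the article.

```azurecli
# Look up the Application Gateway resource ID (placeholder names)
APPGW_ID=$(az network application-gateway show \
  --name myAppGateway \
  --resource-group myResourceGroup \
  --query id --output tsv)

# Route WAF firewall logs to a Log Analytics workspace
az monitor diagnostic-settings create \
  --name send-waf-logs \
  --resource "$APPGW_ID" \
  --workspace myLogAnalyticsWorkspace \
  --logs '[{"category":"ApplicationGatewayFirewallLog","enabled":true}]'
```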
The Application Gateway WAF can be configured to run in the following two modes:
### WAF engines
-The Azure web application firewall (WAF) engine is the component that inspects traffic and determines whether a request includes a signature that represents a potential attack. When you use CRS 3.2 or later, your WAF runs the new [WAF engine](waf-engine.md), which gives you higher performance and an improved set of features. When you use earlier versions of the CRS, your WAF runs on an older engine. New features will only be available on the new Azure WAF engine.
+The Azure web application firewall (WAF) engine is the component that inspects traffic and determines whether a request includes a signature that represents a potential attack. When you use CRS 3.2 or later, your WAF runs the new [WAF engine](waf-engine.md), which gives you higher performance and an improved set of features. When you use earlier versions of the CRS, your WAF runs on an older engine. New features are only available on the new Azure WAF engine.
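For example, assuming an existing WAF policy (names are placeholders), CRS 3.2 can be selected so the policy runs on the newer engine:

```azurecli
# Use CRS 3.2 on the policy; remove an older OWASP rule set version first if one is already assigned (placeholder names)
az network application-gateway waf-policy managed-rule rule-set add \
  --policy-name myWafPolicy \
  --resource-group myResourceGroup \
  --type OWASP \
  --version 3.2
```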
### WAF actions
-WAF customers can choose which action is run when a request matches a rules conditions. Below is the listed of supported actions.
+You can choose which action is run when a request matches a rule condition. The following actions are supported:
-* Allow: Request passes through the WAF and is forwarded to back-end. No further lower priority rules can block this request. Allow actions are only applicable to the Bot Manager ruleset, and are not applicable to the Core Rule Set.
+* Allow: Request passes through the WAF and is forwarded to the back-end. No further lower priority rules can block this request. Allow actions are only applicable to the Bot Manager ruleset, and aren't applicable to the Core Rule Set.
* Block: The request is blocked and WAF sends a response to the client without forwarding the request to the back-end. * Log: Request is logged in the WAF logs and WAF continues evaluating lower priority rules.
-* Anomaly score: This is the default action for CRS ruleset where total anomaly score is incremented when a rule with this action is matched. Anomaly scoring is not applicable for the Bot Manager ruleset.
+* Anomaly score: This is the default action for the CRS ruleset, where the total anomaly score is incremented when a rule with this action is matched. Anomaly scoring isn't applicable for the Bot Manager ruleset.
### Anomaly Scoring mode
You can configure and deploy all WAF policies using the Azure portal, REST APIs,
### WAF monitoring
-Monitoring the health of your application gateway is important. Monitoring the health of your WAF and the applications that it protects are supported by integration with Microsoft Defender for Cloud, Azure Monitor, and Azure Monitor logs.
+It's important to monitor the health of your application gateway. You can support this by integrating your WAF and the applications it protects with Microsoft Defender for Cloud, Azure Monitor, and Azure Monitor logs.
![Diagram of Application Gateway WAF diagnostics](../media/ag-overview/diagnostics.png)
Application Gateway logs are integrated with [Azure Monitor](../../azure-monitor
Microsoft Sentinel is a scalable, cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution. Microsoft Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response.
-With the built-in Azure WAF firewall events workbook, you can get an overview of the security events on your WAF. This includes events, matched and blocked rules, and everything else that gets logged in the firewall logs. See more on logging below.
+With the built-in Azure WAF firewall events workbook, you can get an overview of the security events on your WAF. This includes events, matched and blocked rules, and everything else that gets logged in the firewall logs. More information on logging follows.
![Azure WAF firewall events workbook](../media/ag-overview/sentinel.png)
Application Gateway WAF provides detailed reporting on each threat that it detec
## Application Gateway WAF SKU pricing
-The pricing models are different for the WAF_v1 and WAF_v2 SKUs. Please see the [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/) page to learn more.
+The pricing models are different for the WAF_v1 and WAF_v2 SKUs. See the [Application Gateway pricing](https://azure.microsoft.com/pricing/details/application-gateway/) page to learn more.
## What's new