Updates from: 01/26/2024 02:10:38
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector-token-enrichment.md
Last updated 01/17/2023
zone_pivot_groups: b2c-policy-type+
+# Customer intent: I want to enrich tokens with claims from external identity data sources using APIs or outbound webhooks.
+ # Enrich tokens with claims from external sources using API connectors
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md
description: Configure an API connector to be used in a sign-up user flow.
Previously updated : 12/20/2022 Last updated : 01/24/2024 zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer or IT administrator, I want to use API connectors to integrate sign-up user flows with REST APIs to customize the sign-up experience and integrate with external systems.
+ # Add an API connector to a sign-up user flow
active-directory-b2c Add Identity Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-identity-provider.md
+
+#Customer Intent: As a developer integrating Azure AD B2C into my application, I want to add an identity provider, so that users can sign in with their existing social or enterprise accounts without creating a new account.
+ # Add an identity provider to your Azure Active Directory B2C tenant
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
zone_pivot_groups: b2c-policy-type
-#Customer intent: As a developer, I want to enable my users to reset their passwords without the need for admin intervention, so that they can recover their accounts if they forget their passwords.
+#Customer Intent: As an Azure AD B2C administrator, I want to set up a password reset flow for local accounts, so that users can reset their passwords if they forget them.
# Set up a password reset flow in Azure Active Directory B2C
active-directory-b2c Add Sign Up And Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-sign-up-and-sign-in-policy.md
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating Azure Active Directory B2C into my application, I want to set up a sign-up and sign-in flow, so that users can sign up and sign in with local or social accounts, and reset their passwords if needed.
+ # Set up a sign-up and sign-in flow in Azure Active Directory B2C
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-user-input.md
zone_pivot_groups: b2c-policy-type
-#Customer intent: As a developer or IT administrator, I want to add a new attribute to the sign-up journey, customize it's input type, and define whether it's mandatory or optional.
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to add a new attribute to the sign-up journey, customize the input type, and define whether it's required, so that I can collect specific user information during the sign-up process.
# Add user attributes and customize user input in Azure Active Directory B2C
active-directory-b2c Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-protection-investigate-risk.md
description: Learn how to investigate risky users and detections in Azure AD B2C
Previously updated : 09/16/2021 Last updated : 01/24/2024 zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As an Azure AD B2C administrator, I want to investigate and mitigate identity-based risks using Identity Protection, so that I can identify at-risk users, view risk detections, and take appropriate actions to secure the Azure AD B2C tenant.
+ # Investigate risk with Identity Protection in Azure AD B2C
active-directory-b2c Identity Provider Adfs Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs-saml.md
Previously updated : 09/16/2021 Last updated : 01/24/2024 zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As an Azure AD B2C administrator, I want to add AD FS as a SAML identity provider using custom policies, so that users can sign in with their AD FS accounts and access Azure AD B2C resources.
+ # Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs.md
Previously updated : 06/08/2022 Last updated : 01/24/2024 zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating Azure AD B2C with AD FS, I want to configure AD FS as an OpenID Connect identity provider, so that users can sign in with their AD FS accounts in Azure AD B2C.
+ # Add AD FS as an OpenID Connect identity provider using custom policies in Azure Active Directory B2C
active-directory-b2c Identity Provider Amazon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-amazon.md
Last updated 09/16/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with an Amazon account, so that users can authenticate using their Amazon credentials.
+ # Set up sign-up and sign-in with an Amazon account using Azure Active Directory B2C
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-apple-id.md
Last updated 11/02/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with an Apple ID, so that users can authenticate using their Apple ID.
+ # Set up sign-up and sign-in with an Apple ID using Azure Active Directory B2C
active-directory-b2c Identity Provider Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As an Azure AD B2C administrator, I want to set up federation with another Azure AD B2C tenant, so that users from the other tenant can sign in to applications protected by my tenant using their existing accounts.
+ # Set up sign-up and sign-in with an Azure AD B2C account from another Azure AD B2C tenant
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
zone_pivot_groups: b2c-policy-type
-#Customer intent: As a developer, I want to enable sign-in for users using the multitenant endpoint for Microsoft Entra ID. Allowing users from multiple Microsoft Entra tenants to sign in using Azure AD B2C, without me having to configure an identity provider for each tenant.
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-in for multi-tenant Microsoft Entra ID, so that users from multiple Entra tenants can sign in without configuring an identity provider for each tenant.
+
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
Previously updated : 02/07/2023 Last updated : 01/27/2024 zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-in for a single Microsoft Entra ID tenant, so that users from that organization can authenticate and access my application securely.
+ # Set up sign-in for a specific Microsoft Entra organization in Azure Active Directory B2C
active-directory-b2c Identity Provider Ebay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ebay.md
Last updated 09/16/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating Azure Active Directory B2C, I want to set up sign-in with eBay as an identity provider, so that users can sign in with their eBay accounts.
+ # Set up sign-up and sign-in with an eBay account using Azure Active Directory B2C
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
Last updated 03/10/2022
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-in with a Facebook account, so that users can authenticate with their Facebook credentials and access my application.
+ # Set up sign-up and sign-in with a Facebook account using Azure Active Directory B2C
active-directory-b2c Identity Provider Generic Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-openid-connect.md
Last updated 12/28/2022
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating Azure AD B2C with a custom OpenID Connect identity provider, I want to understand the steps to add the identity provider and configure the necessary settings, so that users can sign in securely using the custom identity provider.
+ # Set up sign-up and sign-in with generic OpenID Connect using Azure Active Directory B2C
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml-options.md
Last updated 03/20/2023
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating Azure AD B2C with a SAML identity provider, I want to understand how to configure the SAML identity provider options, so that I can enable sign-in with the identity provider and map the claims correctly.
+
active-directory-b2c Identity Provider Generic Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml.md
Previously updated : 09/16/2021 Last updated : 01/24/2024 zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating Azure AD B2C with a SAML identity provider, I want to configure the SAML technical profile and map the claims, so that users can sign in to my application using an existing social or enterprise identity.
+
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-github.md
Last updated 03/10/2022
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to integrate GitHub as an identity provider, so that users can sign up and sign in with their GitHub accounts.
+ # Set up sign-up and sign-in with a GitHub account using Azure Active Directory B2C
active-directory-b2c Identity Provider Id Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-id-me.md
Last updated 09/16/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer, I want to set up sign-up and sign-in with an ID.me account using Azure Active Directory B2C, so that I can enable users to authenticate with their ID.me accounts.
+ # Set up sign-up and sign-in with an ID.me account using Azure Active Directory B2C
active-directory-b2c Identity Provider Linkedin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-linkedin.md
Last updated 09/16/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with a LinkedIn account, so that users can authenticate using their LinkedIn credentials.
+ # Set up sign-up and sign-in with a LinkedIn account using Azure Active Directory B2C
active-directory-b2c Identity Provider Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-local.md
Title: Set up Azure AD B2C local account identity provider description: Define the identity types users can use to sign up or sign in (email, username, phone number) in your Azure Active Directory B2C tenant. Previously updated : 09/02/2022 Last updated : 01/24/2024 zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As an Azure AD B2C administrator, I want to configure the sign-in methods for local accounts, so that users can sign up and sign in to the application using their preferred method (email, username, or phone number).
+ # Set up the local account identity provider
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-microsoft-account.md
Last updated 05/01/2023
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with a Microsoft account, so that users can authenticate using their Microsoft account credentials.
+ # Set up sign-up and sign-in with a Microsoft account using Azure Active Directory B2C
active-directory-b2c Identity Provider Mobile Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-mobile-id.md
Last updated 04/08/2022
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating Azure Active Directory B2C, I want to set up sign-up and sign-in with Mobile ID, so that I can provide a strong multi-factor authentication solution for my customers and protect access to company data and applications.
+ # Set up sign-up and sign-in with Mobile ID using Azure Active Directory B2C
active-directory-b2c Identity Provider Ping One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ping-one.md
Last updated 12/2/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating PingOne with Azure Active Directory B2C, I want to set up sign-up and sign-in with a PingOne account, so that users can authenticate using their PingOne credentials.
# Set up sign-up and sign-in with a PingOne account using Azure Active Directory B2C
active-directory-b2c Identity Provider Qq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-qq.md
Last updated 09/16/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with a QQ account, so that users can authenticate with their QQ accounts in my application.
+ # Set up sign-up and sign-in with a QQ account using Azure Active Directory B2C
active-directory-b2c Identity Provider Salesforce Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce-saml.md
Last updated 09/16/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-in with a Salesforce SAML provider, so that users from my Salesforce organization can sign in to Azure AD B2C using their Salesforce accounts.
+ # Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C
active-directory-b2c Identity Provider Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce.md
Last updated 09/16/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating Salesforce with Azure Active Directory B2C, I want to set up sign-up and sign-in with a Salesforce account using Azure AD B2C, so that users can authenticate with their Salesforce credentials in my application.
+ # Set up sign-up and sign-in with a Salesforce account using Azure Active Directory B2C
active-directory-b2c Identity Provider Swissid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-swissid.md
Last updated 12/07/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating SwissID accounts with Azure Active Directory B2C, I want to set up sign-up and sign-in functionality for customers with SwissID accounts, so that they can easily access my application using their existing credentials.
+ # Set up sign-up and sign-in with a SwissID account using Azure Active Directory B2C
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
Last updated 07/20/2022
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer setting up sign-up and sign-in with a Twitter account using Azure Active Directory B2C, I want to configure Twitter as an identity provider so that I can enable users to sign in with their Twitter accounts.
+ # Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C
active-directory-b2c Identity Provider Wechat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-wechat.md
Last updated 09/16/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with a WeChat account, so that users can authenticate using their WeChat credentials.
+ # Set up sign-up and sign-in with a WeChat account using Azure Active Directory B2C
active-directory-b2c Identity Provider Weibo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-weibo.md
Last updated 09/16/2021
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with a Weibo account, so that users can authenticate with their Weibo credentials and access my application.
+ # Set up sign-up and sign-in with a Weibo account using Azure Active Directory B2C
active-directory-b2c Integrate With App Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/integrate-with-app-code-samples.md
Previously updated : 02/21/2023 Last updated : 01/24/2024 +
+#Customer Intent: As a developer, I want to access code samples for Azure Active Directory B2C, so that I can learn how to integrate authentication and user management into my web, mobile, and desktop applications using Azure AD B2C.
+ # Azure Active Directory B2C code samples
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer customizing the user interface of an application in Azure Active Directory B2C, I want to enable JavaScript and page layout versions, so that I can create a more interactive and customized user experience for my users.
+ # Enable JavaScript and page layout versions in Azure Active Directory B2C
active-directory-b2c Language Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/language-customization.md
Last updated 12/28/2022
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer integrating Azure Active Directory B2C into my application, I want to customize the language of a user flow, so that I can provide a localized experience for my customers in different locales.
+ # Language customization in Azure Active Directory B2C
active-directory-b2c Manage Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-users-portal.md
Last updated 05/26/2023
+
+#Customer Intent: As an Azure AD B2C administrator, I want to manually create and delete consumer users in the Azure portal, so that I can manage user accounts for my applications.
+ # Use the Azure portal to create and delete consumer users in Azure AD B2C
active-directory-b2c Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/overview.md
Previously updated : 11/08/2023 Last updated : 01/24/2024
-# Customer intent: As a technical or non-technical customer, I need to understand at a high level what Azure AD B2C is and how it can help me build a customer-facing application.
+#Customer Intent: As an IT administrator or developer, I want to understand what Azure Active Directory B2C is and how it can be used for customer identity access management, so that I can determine if it is the right solution for authenticating end users to my web/mobile applications and managing access to API resources.
active-directory-b2c Partner Akamai Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai-secure-hybrid-access.md
Last updated 11/23/2022
zone_pivot_groups: b2c-policy-type+
+#Customer Intent: As a developer building a desktop app, I want to set up sign-in functionality using Azure Active Directory B2C, so that I can authenticate users with social and enterprise accounts and protect my application and customer data.
+ # Configure Azure Active Directory B2C with Akamai Enterprise Application Access for SSO and secure hybrid access
active-directory-b2c Quickstart Native App Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-native-app-desktop.md
Previously updated : 01/13/2022 Last updated : 01/24/2023 +
+#Customer Intent: As a developer building a desktop app, I want to set up sign-in functionality using Azure Active Directory B2C, so that I can authenticate users with social and enterprise accounts and protect my application and customer data.
+ # Quickstart: Set up sign in for a desktop app using Azure Active Directory B2C
active-directory-b2c Quickstart Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-single-page-app.md
Last updated 02/23/2023
+
+#Customer Intent: As a developer building a single-page app, I want to set up sign-in functionality using Azure Active Directory B2C, so that I can authenticate users with social accounts and call a protected web API to retrieve user information.
+ # Quickstart: Set up sign in for a single-page app using Azure Active Directory B2C
active-directory-b2c Quickstart Web App Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-web-app-dotnet.md
Previously updated : 01/17/2023 Last updated : 01/24/2023 +
+#Customer Intent: As a developer building an ASP.NET application, I want to set up sign-in functionality using Azure Active Directory B2C, so that I can authenticate users with social or enterprise accounts and protect my application and customer data.
+ # Quickstart: Set up sign in for an ASP.NET application using Azure Active Directory B2C
active-directory-b2c Roles Resource Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/roles-resource-access-control.md
Last updated 02/24/2023 +
+#Customer Intent: As an Azure AD B2C administrator, I want to assign users the least privileged role required to access resources, so that I can ensure proper access control and security within my tenant.
+ # Roles and resource access control
active-directory-b2c Security Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/security-architecture.md
Last updated 05/09/2023 +
+#Customer intent: As a developer implementing Azure Active Directory B2C, I want to know the best practices for securing my identity solution, so that I can protect my users from bot attacks, fraudulent activities, and resource exhaustion.
++ # How to secure your Azure Active Directory B2C identity solution
active-directory-b2c Sign In Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/sign-in-options.md
Last updated 02/08/2023
+#Customer Intent: As a developer integrating Azure AD B2C into my application, I want to understand the different sign-in options available so that I can choose the appropriate method for my users and configure the sign-in flow accordingly.
++ # Sign-in options in Azure AD B2C
active-directory-b2c Tenant Management Directory Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-directory-quota.md
+
+#Customer intent: As an Azure AD B2C tenant administrator, I want to monitor and manage the directory size quota, so that I can ensure that I don't exceed the maximum number of objects allowed in the directory and take necessary actions such as removing inactive users or requesting a quota increase.
+ # Manage directory size quota of your Azure Active Directory B2C tenant
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/threat-management.md
Last updated 09/20/2021 +
+#Customer Intent: As an Azure AD B2C administrator, I want to mitigate credential attacks by using smart lockout, so that I can protect user accounts from unauthorized access.
+ # Mitigate credential attacks in Azure AD B2C with smart lockout
active-directory-b2c User Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-overview.md
Last updated 11/09/2023
-# Customer intent: As a developer, I want to understand the difference between user flows and custom policies, so that I can choose the best method for my business needs. I want to understand the scenarios that can be enabled with each method, and how to integrate them with my applications.
+#Customer intent: As a developer, I want to understand the difference between user flows and custom policies, so that I can choose the best method for my business needs. I want to understand the scenarios that can be enabled with each method, and how to integrate them with my applications.
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/custom/quickstart.md
Previously updated : 12/19/2023 Last updated : 01/25/2024 zone_pivot_groups: usage-custom-language-features
-# Quickstart: Custom sentiment analysis
+# Quickstart: Custom sentiment analysis (preview)
Use this article to get started with creating a Custom sentiment analysis project where you can train custom models for detecting the sentiment of text. A model is artificial intelligence software that's trained to do a certain task. For this system, the models classify text, and are trained by learning from tagged data.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/overview.md
Previously updated : 12/19/2023 Last updated : 01/25/2024
Opinion mining is a feature of sentiment analysis. Also known as aspect-based se
[!INCLUDE [Developer reference](../includes/reference-samples-text-analytics.md)]
-#### [Custom model](#tab/custom)
+#### [Custom model (preview)](#tab/custom)
Custom sentiment analysis enables users to build custom AI models to classify text into sentiments pre-defined by the user. By creating a Custom sentiment analysis project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/quickstart.md
Previously updated : 12/19/2023 Last updated : 01/25/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, python
ai-services Provisioned Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-throughput.md
az cognitiveservices account deployment create \
Provisioned throughput quota represents a specific amount of total throughput you can deploy. Quota in the Azure OpenAI Service is managed at the subscription level. All Azure OpenAI resources within the subscription share this quota.
-Quota is specific to a (deployment type, model, region) triplet and isn't interchangeable. Meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. You can raise a support request to move quota across deployment types, models, or regions but the swap isn't guaranteed.
+Quota is specified in Provisioned throughput units and is specific to a (deployment type, model, region) triplet. Quota isn't interchangeable, meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. You can raise a support request to move quota across deployment types, models, or regions, but the swap isn't guaranteed.
While we make every attempt to ensure that quota is deployable, quota doesn't represent a guarantee that the underlying capacity is available. The service assigns capacity during the deployment operation and if capacity is unavailable the deployment fails with an out of capacity error.
+### Determining the number of PTUs needed for a workload
+
+PTUs represent an amount of model processing capacity. Similar to your computer or databases, different workloads or requests to the model will consume different amounts of underlying processing capacity. The conversion from call shape characteristics (prompt size, generation size and call rate) to PTUs is complex and non-linear. To simplify this process, you can use the [Azure OpenAI Capacity calculator](https://oai.azure.com/portal/calculator) to size specific workload shapes.
+
+A few high-level considerations:
+- Generations require more capacity than prompts
+- Larger calls are progressively more expensive to compute. For example, 100 calls with a 1,000-token prompt size will require less capacity than 1 call with 100,000 tokens in the prompt. This also means that the distribution of these call shapes is important in overall throughput. Traffic patterns with a wide distribution that includes some very large calls may experience lower throughput per PTU than a narrower distribution with the same average prompt and completion token sizes.
++ ### How utilization enforcement works Provisioned deployments provide you with an allocated amount of model processing capacity to run a given model. The `Provisioned-Managed Utilization` metric in Azure Monitor measures a given deployment's utilization on 1-minute increments. Provisioned-Managed deployments are optimized to ensure that accepted calls are processed with a consistent model processing time (actual end-to-end latency is dependent on a call's characteristics). When the workload exceeds the allocated PTU capacity, the service returns a 429 HTTP status code until the utilization drops below 100%.
We use a variation of the leaky bucket algorithm to maintain utilization below 1
4. The overall utilization is decremented down at a continuous rate based on the number of PTUs deployed.
-Since calls are accepted until utilization reaches 100%, you're allowed to burst over 100% utilization when first increasing traffic. For sizeable calls and small sized deployments, you might then be over 100% utilization for up to several minutes.
+> [!NOTE]
+> Calls are accepted until utilization reaches 100%. Bursts just over 100% may be permitted for short periods, but over time, your traffic is capped at 100% utilization.
:::image type="content" source="../media/provisioned/utilization.jpg" alt-text="Diagram showing how subsequent calls are added to the utilization." lightbox="../media/provisioned/utilization.jpg":::
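To make the leaky-bucket mechanics above concrete, here's a toy C# sketch of a utilization tracker; the drain rate and per-call costs are invented for illustration and aren't the service's actual accounting.

```csharp
using System;

// Toy model of the leaky-bucket utilization behavior described above.
var bucket = new UtilizationBucket(drainPerSecond: 5.0);
Console.WriteLine(bucket.TryAcceptCall(estimatedCost: 30)); // accepted
Console.WriteLine(bucket.TryAcceptCall(estimatedCost: 80)); // accepted: bursts just over 100%
Console.WriteLine(bucket.TryAcceptCall(estimatedCost: 10)); // rejected until utilization drains below 100%

class UtilizationBucket
{
    private double _utilization;                 // percent of allocated PTU capacity
    private DateTime _lastUpdate = DateTime.UtcNow;
    private readonly double _drainPerSecond;     // continuous decay, scaled by deployed PTUs

    public UtilizationBucket(double drainPerSecond) => _drainPerSecond = drainPerSecond;

    public bool TryAcceptCall(double estimatedCost)
    {
        Drain();
        if (_utilization >= 100) return false;   // the service returns HTTP 429 here
        _utilization += estimatedCost;           // an accepted call can push just past 100%
        return true;
    }

    private void Drain()
    {
        var now = DateTime.UtcNow;
        _utilization = Math.Max(0, _utilization - _drainPerSecond * (now - _lastUpdate).TotalSeconds);
        _lastUpdate = now;
    }
}
```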
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md
Follow these steps to set up a video retrieval system and integrate it with your
{ "type": "AzureComputerVisionVideoIndex", "parameters": {
- "computerVisionBaseUrl": "<your_computer_vision_endpoint>",
+ "endpoint": "<your_computer_vision_endpoint>",
"computerVisionApiKey": "<your_computer_vision_key>", "indexName": "<name_of_your_index>", "videoUrls": ["<your_video_SAS_URL>"]
Follow these steps to set up a video retrieval system and integrate it with your
{ "type": "text", "text": "Describe this video:"
- }
- ]
- },
- {
- "role": "user",
- "content": [
+ },
{ "type": "acv_document_id", "acv_document_id": "<your_video_ID>"
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
| ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. The API will always return the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. | | ```suffix```| string | Optional | null | The suffix that comes after a completion of inserted text. | | ```echo``` | boolean | Optional | False | Echo back the prompt in addition to the completion. This parameter cannot be used with `gpt-35-turbo`. |
-| ```stop``` | string or array | Optional | null | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence. |
+| ```stop``` | string or array | Optional | null | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence. For GPT-4 Turbo with Vision, up to two sequences are supported. |
| ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | | ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | | ```best_of``` | integer | Optional | 1 | Generates best_of completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. This parameter cannot be used with `gpt-35-turbo`. |
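As a quick illustration of the parameters in this table, a small C# sketch serializing a completions request body; the values are placeholders, not recommendations.

```csharp
using System;
using System.Text.Json;

// Illustrative completions request body exercising the documented parameters.
var body = new
{
    prompt = "Once upon a time",
    max_tokens = 50,
    stop = new[] { "\n", "END" },  // up to four sequences (two for GPT-4 Turbo with Vision)
    presence_penalty = 0.5,
    frequency_penalty = 0.5,
    best_of = 3,                   // must be greater than n
    n = 1
};

Console.WriteLine(JsonSerializer.Serialize(body));
```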
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
Last updated 12/21/2022 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes-+ # Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
aks Howto Deploy Java Quarkus App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-quarkus-app.md
Last updated 07/26/2023-+ # external contributor: danieloh30
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
This article shows you how to deploy the [NGINX ingress controller][nginx-ingres
To create a basic NGINX ingress controller without customizing the defaults, you'll use Helm. The following configuration uses the default configuration for simplicity. You can add parameters for customizing the deployment, like `--set controller.replicaCount=3`.
+> [!NOTE]
+> If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When you're using an ingress controller with client source IP preservation enabled, TLS pass-through won't work.
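To see where the preserved client IP surfaces, here's a minimal sketch for an app running behind the ingress; it assumes an ASP.NET Core minimal API and simplified header handling.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

// Minimal app that echoes the client IP forwarded by ingress-nginx; with
// externalTrafficPolicy=Local the original source IP is preserved and
// carried in the X-Forwarded-For request header.
var app = WebApplication.Create(args);

app.MapGet("/", (HttpContext ctx) =>
{
    var forwardedFor = ctx.Request.Headers["X-Forwarded-For"].ToString();
    return $"X-Forwarded-For: {forwardedFor}";
});

app.Run();
```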
+ ### [Azure CLI](#tab/azure-cli) ```console
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \ --create-namespace \ --namespace $NAMESPACE \
- --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
+ --set controller.service.externalTrafficPolicy=Local
``` ### [Azure PowerShell](#tab/azure-powershell)
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx ` --create-namespace ` --namespace $Namespace `
- --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz `
+ --set controller.service.externalTrafficPolicy=Local
```
helm install ingress-nginx ingress-nginx/ingress-nginx \
--set controller.image.digest="" \ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
+ --set controller.service.externalTrafficPolicy=Local \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \ --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \ --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
helm install ingress-nginx ingress-nginx/ingress-nginx `
--set controller.image.digest="" ` --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux ` --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz `
+ --set controller.service.externalTrafficPolicy=Local `
--set controller.admissionWebhooks.patch.image.registry=$AcrUrl ` --set controller.admissionWebhooks.patch.image.image=$PatchImage ` --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://github.com/kubern
| 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA | | 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA | | 1.28 | Aug 2023 | Sep 2023 | Nov 2023 | Nov 2024 | Until 1.32 GA|
-| 1.29 | Dec 2023 | Jan 2024 | Feb 2024 | | Until 1.33 GA |
+| 1.29 | Dec 2023 | Feb 2024 | Mar 2024 | | Until 1.33 GA |
*\* Indicates the version is designated for Long Term Support*
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Microsoft Entra pod-managed identities use Kubernetes primitives to associate [m
> Kubernetes native capabilities to federate with any external identity providers on behalf of the > application. >
-> The open source Microsoft Entra pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2024.
+> The open source Microsoft Entra pod-managed identity (preview) in Azure Kubernetes Service was deprecated on 10/24/2022, and the project archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on was deprecated in Sept. 2024.
> > To disable the AKS Managed add-on, use the following command: `az feature unregister --namespace "Microsoft.ContainerService" --name "EnablePodIdentityPreview"`.
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
- devx-track-azurecli - ignite-2023 Previously updated : 01/24/2024 Last updated : 01/25/2024 # Use a managed identity in Azure Kubernetes Service (AKS)
Azure Kubernetes Service (AKS) clusters require an identity to access Azure reso
AKS doesn't automatically create a [service principal](kubernetes-service-principal.md), so you have to create one. Clusters that use a service principal eventually expire, and the service principal must be renewed to avoid impacting cluster authentication with the identity. Managing service principals adds complexity, so it's easier to use managed identities instead. The same permission requirements apply for both service principals and managed identities. Managed identities use certificate-based authentication. Each managed identity's credentials have an expiration of *90 days* and are rolled after *45 days*. AKS uses both system-assigned and user-assigned managed identity types, and these identities are immutable.
-> [!NOTE]
-> If you're considering implementing [Microsoft Entra pod-managed identity][aad-pod-identity] on your AKS cluster, we recommend you first review the [Microsoft Entra Workload ID overview][workload-identity-overview]. This authentication method replaces Microsoft Entra pod-managed identity (preview) and is the recommended method.
+> [!IMPORTANT]
+> The open source [Microsoft Entra pod-managed identity][entra-id-pod-managed-identity] (preview) in Azure Kubernetes Service was deprecated on 10/24/2022, and the project archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on was deprecated in Sept. 2024.
+>
+> We recommend you first review [Microsoft Entra Workload ID][workload-identity-overview] overview. This authentication method replaces Microsoft Entra pod-managed identity (preview) and is the recommended method.
## Before you begin
AKS doesn't automatically create a [service principal](kubernetes-service-princi
* If the cluster has Microsoft Entra pod-managed identity (`aad-pod-identity`) enabled, Node-Managed Identity (NMI) pods modify the iptables of the nodes to intercept calls to the Azure Instance Metadata (IMDS) endpoint. This configuration means any request made to the Metadata endpoint is intercepted by NMI, even if the pod doesn't use `aad-pod-identity`. The AzurePodIdentityException CRD can be configured to inform `aad-pod-identity` that requests to the Metadata endpoint originating from a pod that matches the labels defined in the CRD should be proxied without any processing in NMI. The system pods with the `kubernetes.azure.com/managedby: aks` label in the *kube-system* namespace should be excluded in `aad-pod-identity` by configuring the AzurePodIdentityException CRD. * For more information, see [Disable Microsoft Entra ID-pod-identity for a specific pod or application](./use-azure-ad-pod-identity.md#clean-up). * To configure an exception, install the [mic-exception YAML](https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/mic-exception.yaml).
-* AKS doesn't support the use of a system-assigned managed identity if using a custom private DNS zone.
+* AKS doesn't support the use of a system-assigned managed identity when using a custom private DNS zone.
## Summary of managed identities
AKS uses several managed identities for built-in services and add-ons.
> AKS creates a user-assigned kubelet identity in the node resource group if you don't [specify your own kubelet managed identity][use-a-pre-created-kubelet-managed-identity]. > [!NOTE]
-> If your cluster is already using managed identity and the identity was changed, for example you update the cluster identity type from system-assigned to user-assigned, there will be a delay for control plane components to switch to the new identity. Control plane components keep using the old identity until its token expires. After the token is refreshed, they switch to the new identity. This process can take several hours.
+> If your cluster is already using managed identity and the identity was changed, for example you update the cluster identity type from system-assigned to user-assigned, there is a delay for control plane components to switch to the new identity. Control plane components keep using the old identity until its token expires. After the token is refreshed, they switch to the new identity. This process can take several hours.
1. Create an Azure resource group using the [`az group create`][az-group-create] command.
A custom user-assigned managed identity for the control plane enables access to
### Update managed identity on an existing cluster > [!NOTE]
-> Migrating a managed identity for the control plane, from system-assigned to user-assigned, doesn't cause any downtime for control plane and agent pools. Meanwhile, control plane components will keep using the old system-assigned identity for several hours until the next token refresh.
+> Migrating a managed identity for the control plane, from system-assigned to user-assigned, doesn't cause any downtime for control plane and agent pools. Meanwhile, control plane components keep using the old system-assigned identity for several hours until the next token refresh.
* If you don't have a managed identity, create one using the [`az identity create`][az-identity-create] command.
Now you can create your AKS cluster with your existing identities. Make sure to
### Update an existing cluster using kubelet identity > [!WARNING]
-> Updating kubelet managed identity upgrades node pools, which causes downtime for your AKS cluster as the nodes in the node pools will be cordoned/drained and reimaged.
+> Updating kubelet managed identity upgrades node pools, which causes downtime for your AKS cluster as the nodes in the node pools are cordoned/drained and reimaged.
> [!NOTE] > If your cluster was using `--attach-acr` to pull images from Azure Container Registry, you need to run the `az aks update --resource-group myResourceGroup --name myAKSCluster --attach-acr <ACR Resource ID>` command after updating your cluster to let the newly created kubelet managed identity get permission to pull from ACR. Otherwise, you won't be able to pull from ACR after the upgrade.
Use [Azure Resource Manager templates][aks-arm-template] to create a managed ide
<!-- LINKS - external --> [aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters
+[entra-id-pod-managed-identity]: https://github.com/furkanyildiz/azure-docs/blob/feature/inform-deprecated-aks-identity/articles/aks/use-azure-ad-pod-identity.md
<!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli
Use [Azure Resource Manager templates][aks-arm-template] to create a managed ide
[use-a-pre-created-kubelet-managed-identity]: use-managed-identity.md#use-a-pre-created-kubelet-managed-identity [update-managed-identity-on-an-existing-cluster]: use-managed-identity.md#update-managed-identity-on-an-existing-cluster [workload-identity-overview]: workload-identity-overview.md
-[aad-pod-identity]: use-azure-ad-pod-identity.md
[add-role-assignment-for-managed-identity]: use-managed-identity.md#add-role-assignment-for-managed-identity [az-group-create]: /cli/azure/group#az_group_create [az-aks-create]: /cli/azure/aks#az_aks_create
api-center Use Vscode Extension Copilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/use-vscode-extension-copilot.md
Your API center resources appear in the tree view on the left-hand side. Expand
## Search for APIs using GitHub Copilot Chat
-Use GitHub Copilot Chat to find APIs in your APIs based on semantic search queries.
+Use GitHub Copilot Chat to search for APIs and API definitions based on semantic search queries.
1. In Visual Studio Code, in the Activity Bar, select GitHub Copilot Chat. 1. Type `@apicenter /` to see available commands:
Use GitHub Copilot Chat to find APIs in your APIs based on semantic search queri
* [Azure API Center - key concepts](key-concepts.md) * [Get started with the Azure API Center extension for Visual Studio Code](use-vscode-extension.md)
-* [Getting started with GitHub Copilot](https://docs.github.com/copilot/using-github-copilot/getting-started-with-github-copilot)
+* [Getting started with GitHub Copilot](https://docs.github.com/copilot/using-github-copilot/getting-started-with-github-copilot)
azure-app-configuration Enable Dynamic Configuration Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md
Finish the quickstart [Create a .NET app with App Configuration](./quickstart-do
Open the `Program.cs` file and update the code configurations to match the following:
-### [ASP.NET Core 6.0+](#tab/core6x)
+### [.NET 6.0+](#tab/core6x)
```csharp using Microsoft.Extensions.Configuration;
if (_refresher != null)
} ```
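Pieced together, the refresh wiring this article configures looks roughly like the following sketch; the key name and cache window are assumptions, not the article's exact values.

```csharp
using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureAppConfiguration;

IConfigurationRefresher? refresher = null;

var builder = new ConfigurationBuilder();
builder.AddAzureAppConfiguration(options =>
{
    options.Connect(Environment.GetEnvironmentVariable("ConnectionString"))
           .ConfigureRefresh(refresh =>
           {
               // Re-read this key when the local cache window expires.
               refresh.Register("TestApp:Settings:Message")
                      .SetCacheExpiration(TimeSpan.FromSeconds(10));
           });
    refresher = options.GetRefresher();
});

var config = builder.Build();

// Later, before reading configuration, trigger a refresh if needed.
if (refresher != null)
{
    await refresher.TryRefreshAsync();
}
Console.WriteLine(config["TestApp:Settings:Message"]);
```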
-### [ASP.NET Core 3.x](#tab/core3x)
+### [.NET Core 3.x](#tab/core3x)
```csharp using Microsoft.Extensions.Configuration;
azure-app-configuration Howto Feature Filters Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-feature-filters-aspnet-core.md
Title: Use feature filters to enable conditional feature flags
-description: Learn how to use feature filters to enable conditional feature flags
+description: Learn how to use feature filters in Azure App Configuration to enable conditional feature flags for your app.
ms.devlang: csharp - Previously updated : 3/9/2020+ Last updated : 01/12/2024
+#Customerintent: As a developer, I want to create a feature filter to activate a feature flag depending on a specific scenario.
+ # Use feature filters to enable conditional feature flags Feature flags allow you to activate or deactivate functionality in your application. A simple feature flag is either on or off. The application always behaves the same way. For example, you could roll out a new feature behind a feature flag. When the feature flag is enabled, all users see the new feature. Disabling the feature flag hides the new feature.
The `Microsoft.FeatureManagement` library includes three feature filters:
You can also create your own feature filter that implements the Microsoft.FeatureManagement.IFeatureFilter interface.
-## Registering a feature filter
+## Prerequisites
+
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
+
+## Register a feature filter
You register a feature filter by calling the `AddFeatureFilter` method, specifying the type name of the desired feature filter. For example, the following code registers `PercentageFilter`:
public void ConfigureServices(IServiceCollection services)
} ```
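For reference, a minimal registration sketch assuming the `Microsoft.FeatureManagement` package:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.FeatureManagement;
using Microsoft.FeatureManagement.FeatureFilters;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register feature management and the built-in percentage filter.
        services.AddFeatureManagement()
                .AddFeatureFilter<PercentageFilter>();
    }
}
```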
-## Configuring a feature filter in Azure App Configuration
+## Configure a feature filter in Azure App Configuration
Some feature filters have additional settings. For example, `PercentageFilter` activates a feature based on a percentage. It has a setting defining the percentage to use.
You can configure these settings for feature flags defined in Azure App Configur
1. Follow the instructions in [Quickstart: Add feature flags to an ASP.NET Core app](./quickstart-feature-flag-aspnet-core.md) to create a web app with a feature flag.
-1. In the Azure portal, go to your configuration store and click **Feature manager**.
-
-1. Click on the context menu for the *Beta* feature flag that you created in the quickstart. Click **Edit**.
+1. In the Azure portal, go to your configuration store and select **Feature manager**.
- > [!div class="mx-imgBorder"]
- > ![Edit Beta feature flag](./media/edit-beta-feature-flag.png)
+ :::image type="content" source="./media/feature-filters/edit-beta-feature-flag.png" alt-text="Screenshot of the Azure portal, selecting the Edit option for the Beta feature flag, under Feature manager.":::
-1. In the **Edit** screen, check the **Enable feature flag** checkbox if it isn't already enabled. Then check the **Use feature filter** checkbox and select **Custom**.
+1. On the line with the Beta feature flag you created in the quickstart, select the context menu and then **Edit**.
-1. In the **Name** field, select *Microsoft.Percentage*.
+1. In the **Edit feature flag** pane that opens, check the **Enable feature flag** checkbox if it isn't already enabled. Then check the **Use feature filter** checkbox and select **Create**.
- > [!div class="mx-imgBorder"]
- > ![Add feature filter](./media/feature-flag-add-filter.png)
+ :::image type="content" source="./media/feature-filters/edit-a-feature-flag.png" alt-text="Screenshot of the Azure portal, filling out the form 'Edit feature flag'.":::
-1. Click the context menu next to the feature filter name. Click **Edit filter parameters**.
+1. The pane **Create a new filter** opens. Under **Filter type**, select **Targeting filter** to enable a new filter for specific users or a group.
- > [!div class="mx-imgBorder"]
- > ![Edit feature filter parameters](./media/feature-flags-edit-filter-parameters.png)
+ :::image type="content" source="./media/feature-filters/add-targeting-filter.png" alt-text="Screenshot of the Azure portal, creating a new targeting filter.":::
-1. Enter a **Name** of *Value* and a **Value** of 50. The **Value** field indicates the percentage of requests for which to enable the feature filter.
+1. Optionally expand the **Evaluation flow** menu to see a graph showing how the targeting filter is evaluated in the selected scenario. Leave the **Default Percentage** at 50. The options **Override by Groups** and **Override by Users** let you enable or disable the feature flag for select groups or users. These options are disabled by default.
+1. Select **Add** to save the new feature filter and return to the **Edit feature flag** screen.
- > [!div class="mx-imgBorder"]
- > ![Set feature filter parameters](./media/feature-flag-set-filter-parameters.png)
+1. The feature filter you created is now listed in the feature flag details. Select **Apply** to save the new feature flag settings.
-1. Click **Apply** to return to the **Edit feature flag** screen. Then click **Apply** again to save the feature flag settings.
+ :::image type="content" source="./media/feature-filters/feature-flag-edit-apply-filter.png" alt-text="Screenshot of the Azure portal, applying new targeting filter.":::
-1. On the **Feature manager** page, the feature flag now has a **Feature filter** value of *Custom*.
+1. On the **Feature manager** page, the feature flag now has a **Feature filter(s)** value of **1**.
- > [!div class="mx-imgBorder"]
- > ![Feature flag listed with a Feature filter value of "Custom"](./media/feature-flag-filter-custom.png)
+ :::image type="content" source="./media/feature-filters/updated-feature-flag.png" alt-text="Screenshot of the Azure portal, displaying updated feature flag.":::
## Feature filters in action To see the effects of this feature flag, launch the application and hit the **Refresh** button in your browser multiple times. You'll see that the *Beta* item appears on the toolbar about 50% of the time. It's hidden the rest of the time, because the `PercentageFilter` deactivates the *Beta* feature for a subset of requests. The following video shows this behavior in action.
-> [!div class="mx-imgBorder"]
-> ![TargetingFilter in action](./media/feature-flags-percentagefilter.gif)
## Next steps
azure-app-configuration Quickstart Dotnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-core-app.md
You use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to create a
4. Use App Configuration by calling the `builder.AddAzureAppConfiguration()` method in the `Program.cs` file.
- ### [ASP.NET Core 6.0+](#tab/core6x)
+ ### [.NET 6.0+](#tab/core6x)
```csharp var builder = new ConfigurationBuilder();
You use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to create a
Console.WriteLine(config["TestApp:Settings:Message"] ?? "Hello world!"); ```
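Pieced together, the .NET 6.0+ fragment above looks roughly like this; the connection-string environment variable name is an assumption.

```csharp
using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureAppConfiguration;

// Connect to App Configuration and read a single key.
var builder = new ConfigurationBuilder();
builder.AddAzureAppConfiguration(Environment.GetEnvironmentVariable("ConnectionString"));

var config = builder.Build();
Console.WriteLine(config["TestApp:Settings:Message"] ?? "Hello world!");
```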
- ### [ASP.NET Core 3.x](#tab/core3x)
+ ### [.NET Core 3.x](#tab/core3x)
```csharp static void Main(string[] args)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
In order to use Arc resource bridge in a region, Arc resource bridge and the Arc
Arc resource bridge supports the following Azure regions: -- * East US - East US - East US 2
-* West US 2
-* West US 3
+- West US 2
+- West US 3
- Central US - North Central US-
-* South Central US
-* West Europe
-* North Europe
-* UK South
-* UK West
-* Sweden Central
-- * Canada Central
+- South Central US
+- Canada Central
- Australia East
+- West Europe
+- North Europe
+- UK South
+- UK West
+- Sweden Central
- Japan East-
-* Southeast Asia
+- Southeast Asia
- East Asia - Central India
azure-cache-for-redis Cache Azure Active Directory For Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md
Because most Azure Cache for Redis clients assume that a password/access key is
### Microsoft Entra Client Workflow
-1. Configure your client application to acquire a Microsoft Entra token for scope `acca5fbb-b7e4-4009-81f1-37e38fd66d78/.default` using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview).
+1. Configure your client application to acquire a Microsoft Entra token for scope `https://redis.azure.com/.default` or `acca5fbb-b7e4-4009-81f1-37e38fd66d78/.default` using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview).
<!-- (ADD code snippet) -->
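For illustration, a minimal sketch of step 1 in C# using MSAL's confidential client flow (the tenant, client ID, and secret values are placeholder assumptions; prefer managed identities in production):

```csharp
using System;
using Microsoft.Identity.Client;

// Placeholders below (tenant, client ID, secret) are illustrative only.
var app = ConfidentialClientApplicationBuilder
    .Create("<client-id>")
    .WithAuthority(AzureCloudInstance.AzurePublic, "<tenant-id>")
    .WithClientSecret("<client-secret>")
    .Build();

// Acquire a token for the Azure Cache for Redis scope.
AuthenticationResult result = await app
    .AcquireTokenForClient(new[] { "https://redis.azure.com/.default" })
    .ExecuteAsync();

// The access token is then presented as the password when the client
// connects to the cache.
Console.WriteLine($"Token expires on: {result.ExpiresOn}");
```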
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-csharp.md
In Azure Functions, a function project is a container for one or more individual
If desired, you can skip to [Run the function locally](#run-the-function-locally) and examine the file contents later. #### HttpExample.cs-
-The function code generated from the template depends on the type of compiled C# project.
-
-*HttpExample.cs* contains a `Run` method that receives request data in the `req` variable is an [HttpRequestData](/dotnet/api/microsoft.azure.functions.worker.http.httprequestdata) object that's decorated with the **HttpTriggerAttribute**, which defines the trigger behavior. Because of the isolated worker process model, `HttpRequestData` is a representation of the actual `HttpRequest`, and not the request object itself.
--
-The return object is an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object that contains the data that's handed back to the HTTP response.
+
+*HttpExample.cs* contains a `Run` method that receives request data in the `req` variable as an [HttpRequest](/dotnet/api/microsoft.aspnetcore.http.httprequest) object. That parameter is decorated with the **HttpTriggerAttribute**, to define the trigger behavior.
+
+```csharp
+using System.Net;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.Logging;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+
+namespace Company.Function
+{
+ public class HttpExample
+ {
+ private readonly ILogger<HttpExample> _logger;
+
+ public HttpExample(ILogger<HttpExample> logger)
+ {
+ _logger = logger;
+ }
+
+ [Function("HttpExample")]
+ public IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
+ {
+ _logger.LogInformation("C# HTTP trigger function processed a request.");
+
+ return new OkObjectResult("Welcome to Azure Functions!");
+ }
+ }
+}
+```
+
+The return object is an [IActionResult](/dotnet/api/microsoft.aspnetcore.mvc.iactionresult) object that contains the data that's handed back to the HTTP response.
To learn more, see [Azure Functions HTTP triggers and bindings](./functions-bindings-http-webhook.md?tabs=csharp).
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
|--|--| |**Select a language for your function project**|Choose `C#`.| | **Select a .NET runtime** | Choose `.NET 8.0 Isolated (LTS)`.|
- |**Select a template for your project's first function**|Choose `HTTP trigger`.|
+ |**Select a template for your project's first function**|Choose `HTTP trigger`.<sup>1</sup>|
|**Provide a function name**|Type `HttpExample`.| |**Provide a namespace** | Type `My.Functions`. | |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).| |**Select how you would like to open your project**|Select `Open in current window`.|
+ <sup>1</sup> Depending on your VS Code settings, you may need to use the `Change template filter` option to see the full list of templates.
+ 1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=csharp#generated-project-files). [!INCLUDE [functions-run-function-test-local-vs-code-csharp](../../includes/functions-run-function-test-local-vs-code-csharp.md)]
azure-functions Functions Create First Quarkus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-quarkus.md
Last updated 01/10/2023 ms.devlang: java-+ # Deploy serverless Java apps with Quarkus on Azure Functions
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
Your function definition should now look like the following code:
```csharp [Function("HttpExample")]
-public static HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
- FunctionContext executionContext)
+public IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req)
+{
+ return new OkObjectResult("Welcome to Azure Functions!");
+}
``` Now that you've renamed the function, you can test it on your local computer.
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
The Functions extension lets you create a function app project, along with your
:::image type="content" source="./media/functions-develop-vs-code/select-http-trigger.png" alt-text="Screenshot for selecting H T T P trigger.":::
+ > [!TIP]
+ > You can view additional templates by selecting the `Change template filter` option and setting it to "Core" or "All".
+ 1. Type **HttpExample** for the function name and select Enter, and then select **Function** authorization. This authorization level requires you to provide a [function key](functions-bindings-http-webhook-trigger.md#authorization-keys) when you call the function endpoint. :::image type="content" source="./media/functions-develop-vs-code/create-function-auth.png" alt-text="Screenshot for creating function authorization.":::
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
For Functions languages that use `function.json`, the `LagThreshold` parameter i
"protocol": "saslSsl", "authenticationMode": "plain", "lagThreshold": "100"
- "FUNCTIONS_RUNTIME_SCALE_MONITORING_ENABLED" : 1,
- "TARGET_BASED_SCALING_ENABLED" : 1
} ] }
azure-government Documentation Government Plan Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-plan-security.md
Microsoft takes strong measures to protect your data from inappropriate access o
Microsoft engineers can be granted access to customer data using temporary credentials via **Just-in-Time (JIT)** access. There must be an incident logged in the Azure Incident Management system that describes the reason for access, approval record, what data was accessed, etc. This approach ensures that there's appropriate oversight for all access to customer data and that all JIT actions (consent and access) are logged for audit. Evidence that procedures have been established for granting temporary access for Azure personnel to customer data and applications upon appropriate approval for customer support or incident handling purposes is available from the Azure [SOC 2 Type 2 attestation report](/azure/compliance/offerings/offering-soc-2) produced by an independent third-party auditing firm.
-JIT access works with multi-factor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published guidance on [securing privileged access](/security/compass/overview). Use of SAWs for access to production systems is required by Microsoft policy and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed – only select activities are allowed and users cannot accidentally circumvent the SAW design since they don't have admin privileges on these machines. Access is permitted only with a smartcard and access to each SAW is limited to specific set of users.
+JIT access works with multi-factor authentication that requires Microsoft engineers to use a smartcard to confirm their identity. All access to production systems is performed using Secure Admin Workstations (SAWs) that are consistent with published guidance on securing privileged access. Use of SAWs for access to production systems is required by Microsoft policy, and compliance with this policy is closely monitored. These workstations use a fixed image with all software fully managed – only select activities are allowed and users cannot accidentally circumvent the SAW design since they don't have admin privileges on these machines. Access is permitted only with a smartcard and access to each SAW is limited to a specific set of users.
### Customer Lockbox
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
Last updated 06/01/2023 + # Install the Log Analytics agent on Linux computers
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
## Benefits
-In addition to consolidating and improving on the legacy Log Analytics agents, Azure Monitor Agent provides [a variety of immediate benefits](./azure-monitor-agent-overview.md#benefits), including **cost savings, a simplified management experience, and enhanced security and performance.**
+In addition to consolidating and improving on the legacy Log Analytics agents, Azure Monitor Agent provides [various immediate benefits](./azure-monitor-agent-overview.md#benefits), including **cost savings, a simplified management experience, and enhanced security and performance.**
## Migration guidance
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
1. Use the [DCR generator](./azure-monitor-agent-migration-tools.md#installing-and-using-dcr-config-generator) to convert your legacy agent configuration into [data collection rules](./data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule) automatically.<sup>1</sup>
- Review the generated rules before you create them, to leverage benefits like [filtering](../essentials/data-collection-transformations.md), granular targeting (per machine), and other optimizations. There are special steps needed to[ migrate MMA custom logs to AMA custom logs](./azure-monitor-agent-custom-text-log-migration.md)
+ Review the generated rules before you create them and take advantage of advanced options, such as [filtering](../essentials/data-collection-transformations.md), granular targeting (per machine), and other optimizations. There are special steps needed to [migrate MMA custom logs to AMA custom logs](./azure-monitor-agent-custom-text-log-migration.md).
1. Test the new agent and data collection rules on a few nonproduction machines:
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
1. Check your workbooks, dashboards, and alerts for variances from typical behavior following the migration. 1. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, **disable or uninstall the legacy Log Analytics agents**.
- - If you have need to continue using both agents, [disable data collection with the Log Analytics agent](./agent-data-sources.md#configure-data-sources).
- - If you've migrated to Azure Monitor Agent for all your requirements, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent. Continue using the legacy Log Analytics for features and solutions that Azure Monitor Agent doesn't support.
+
+ - If you've migrated to Azure Monitor Agent for all your requirements, [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent. Continue using the legacy Log Analytics for features and solutions that Azure Monitor Agent doesn't support.
+
+ Use the [MMA removal tool](../agents/azure-monitor-agent-mma-removal-tool.md) to discover and remove the Log Analytics agent extension from all machines within your tenant.
- Don't uninstall the legacy agent if you need to use it to upload data to System Center Operations Manager.
-<sup>1</sup> The DCR generator only converts the configurations for Windows event logs, Linux syslog and performance counters. Support for more features and solutions will be available soon
+<sup>1</sup> The DCR generator only converts the configurations for Windows event logs, Linux syslog and performance counters. Support for more features and solutions will be available soon.
<sup>2</sup> You might need to deploy [extensions required for specific solutions](#migrate-additional-services-and-features) in addition to the Azure Monitor Agent extension. ## Migrate additional services and features
The following features and services now have an Azure Monitor Agent version (som
| Service or feature | Migration recommendation | Current state | More information | | : | : | : | : |
-| [VM insights, Service Map, and Dependency agent](../vm/vminsights-overview.md) | Migrate to Azure Monitor Agent | Generally available | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
+| [VM insights, Service Map, and Dependency agent](../vm/vminsights-overview.md) | Migrate to Azure Monitor Agent | Generally Available | [Enable VM Insights](../vm/vminsights-enable-overview.md) |
| [Container insights](../containers/container-insights-overview.md) | Migrate to Azure Monitor Agent | **Linux**: Generally available<br>**Windows**: Public preview | [Enable Container Insights](../containers/container-insights-onboard.md) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | Migrate to Azure Monitor Agent | Public preview | See [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md). |
-| [Change Tracking and Inventory](../../automation/change-tracking/overview-monitoring-agent.md) | Migrate to Azure Monitor Agent | Generally available | [Migration guidance from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](../../automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md) |
-| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | Generally available | [Monitor network connectivity using Azure Monitor agent with connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
-| Azure Stack HCI Insights | Migrate to Azure Monitor Agent | Generally available| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) |
-| [Azure Virtual Desktop (AVD) Insights](../../virtual-desktop/insights.md) | Migrate to Azure Monitor Agent |Generally available | [Use Azure Virtual Desktop Insights to monitor your deployment](../../virtual-desktop/insights.md#session-host-data-settings) |
+| [Microsoft Sentinel](../../sentinel/overview.md) | Migrate to Azure Monitor Agent | Public Preview | See [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md). |
+| [Change Tracking and Inventory](../../automation/change-tracking/overview-monitoring-agent.md) | Migrate to Azure Monitor Agent | Generally Available | [Migration guidance from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](../../automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md) |
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | Generally Available | [Monitor network connectivity using Azure Monitor agent with connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
+| Azure Stack HCI Insights | Migrate to Azure Monitor Agent | Generally Available| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) |
+| [Azure Virtual Desktop (AVD) Insights](../../virtual-desktop/insights.md) | Migrate to Azure Monitor Agent |Generally Available | [Use Azure Virtual Desktop Insights to monitor your deployment](../../virtual-desktop/insights.md#session-host-data-settings) |
| [Container Monitoring Solution](../containers/containers.md) | Migrate to new service called Container Insights with Azure Monitor Agent | Generally Available | [Enable Container Insights](../containers/container-insights-transition-solution.md) |
+| [DNS Collector](../../sentinel/connect-dns-ama.md) | Use new Sentinel Connector | Generally Available | [Enable DNS Connector](../../sentinel/connect-dns-ama.md)|
> [!NOTE] > Features and services listed above in preview **may not be available in Azure Government and China clouds**. They will be available typically within a month *after* the features/services become generally available.
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
For information about pricing for supported countries/regions, see [Azure Monito
> > If you use the webhook action, your target webhook endpoint must be able to process the various JSON payloads that different alert sources emit. You can't pass security certificates through a webhook action. To use basic authentication, you must pass your credentials through the URI. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations.
-Webhook action groups use the following rules:
+Webhook action groups generally follow these rules when called:
- When a webhook is invoked, if the first call fails, it is retried at least 1 more time, and up to 5 times (5 retries) at various delay intervals (5, 20, 40 seconds). - The delay between 1st and 2nd attempt is 5 seconds - The delay between 2nd and 3rd attempt is 20 seconds
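Since the target endpoint must tolerate varied JSON payloads and acknowledge quickly to avoid this retry sequence, a receiver might look like the following minimal ASP.NET Core sketch (the `/alert` route and the `schemaId` probe are illustrative assumptions):

```csharp
// Minimal sketch of a tolerant webhook receiver (ASP.NET Core minimal API,
// Microsoft.NET.Sdk.Web project). Returning 2xx promptly prevents retries.
using System.Text.Json;

var app = WebApplication.CreateBuilder(args).Build();

app.MapPost("/alert", async (HttpRequest request) =>
{
    using JsonDocument payload = await JsonDocument.ParseAsync(request.Body);

    // Alert sources emit different schemas, so probe properties defensively.
    if (payload.RootElement.TryGetProperty("schemaId", out JsonElement schemaId))
    {
        Console.WriteLine($"Received payload with schemaId: {schemaId}");
    }

    return Results.Ok();
});

app.Run();
```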
azure-monitor Alerts Create Rule Cli Powershell Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-rule-cli-powershell-arm.md
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
## Create a new alert rule using PowerShell - To create a metric alert rule using PowerShell, use the [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2) cmdlet.
+ > [!NOTE]
+ > When you create a metric alert on a single resource, the syntax uses the `TargetResourceId`. When you create a metric alert on multiple resources, the syntax contains the `TargetResourceScope`, `TargetResourceType`, and `TargetResourceRegion`.
- To create a log alert rule using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet. - To create an activity log alert rule using PowerShell, use the [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) cmdlet.
You can use an [Azure Resource Manager template (ARM template)](../../azure-reso
1. Create a new resource, using the following resource types: - For metric alerts: `Microsoft.Insights/metricAlerts`
+ > [!NOTE]
+ > - We recommend that you create the metric alert using the same resource group as your target resource.
+ > - Metric alerts for an Azure Log Analytics workspace resource type (`Microsoft.OperationalInsights/workspaces`) are configured differently than other metric alerts. For more information, see [Resource Template for Metric Alerts for Logs](alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs).
+ > - If you are creating a metric alert for a single resource, the template uses the `ResourceId` of the target resource. If you are creating a metric alert for multiple resources, the template uses the `scope`, `TargetResourceType`, and `TargetResourceRegion` for the target resources.
- For log alerts: `Microsoft.Insights/scheduledQueryRules` - For activity log, service health, and resource health alerts: `microsoft.Insights/activityLogAlerts`
- > [!NOTE]
- > - Metric alerts for an Azure Log Analytics workspace resource type (`Microsoft.OperationalInsights/workspaces`) are configured differently than other metric alerts. For more information, see [Resource Template for Metric Alerts for Logs](alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs).
- > - We recommend that you create the metric alert using the same resource group as your target resource.
+ 1. Copy one of the templates from these sample ARM templates. - For metric alerts: [Resource Manager template samples for metric alert rules](resource-manager-alerts-metric.md) - For log alerts: [Resource Manager template samples for log alert rules](resource-manager-alerts-log.md)
azure-monitor Resource Manager Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-metric.md
This section will describe Azure Resource Manager templates for three scenarios
> [!NOTE] >
-> In a metric alert rule that monitors multiple resources, only one condition is allowed.
+> - In a metric alert rule that monitors multiple resources, only one condition is allowed.
+> - If you are creating a metric alert for a single resource, the template uses the `ResourceId` of the target resource. If you are creating a metric alert for multiple resources, the template uses the `scope`, `TargetResourceType`, and `TargetResourceRegion` for the target resources.
### Static threshold alert on all virtual machines in one or more resource groups
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
After setting it, restart your application. It now sends telemetry to Applicatio
### [Python](#tab/python) > [!NOTE]
-> Microsoft Entra authentication is only available for Python v2.7, v3.6, and v3.7. Support for Microsoft Entra ID in the Application Insights Opencensus Python SDK
+> Microsoft Entra authentication is only available for Python v2.7, v3.6, and v3.7. Support for Microsoft Entra ID in the Application Insights OpenCensus Python SDK
is included starting with beta version [opencensus-ext-azure 1.1b0](https://pypi.org/project/opencensus-ext-azure/1.1b0/).
+> [!NOTE]
+> [OpenCensus Python SDK is deprecated](https://opentelemetry.io/blog/2023/sunsetting-opencensus/), but Microsoft supports it until retirement on September 30, 2024. We now recommend the [OpenTelemetry-based Python offering](./opentelemetry-enable.md?tabs=python) and provide [migration guidance](./opentelemetry-python-opencensus-migrate.md?tabs=aspnetcore).
+ Construct the appropriate [credentials](/python/api/overview/azure/identity-readme#credentials) and pass them into the constructor of the Azure Monitor exporter. Make sure your connection string is set up with the instrumentation key and ingestion endpoint of your resource. The `OpenCensus` Azure Monitor exporters support these authentication types. We recommend using managed identities in production environments.
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
The following table shows the current state of autoinstrumentation availability.
Links are provided to more information for each supported scenario.
+> [!NOTE]
+> If your hosting environment or resource provider is not listed in the following table, autoinstrumentation is not supported. You can manually instrument your code using Application Insights SDKs or Azure Monitor OpenTelemetry Distros. For more information, see [Data Collection Basics of Azure Monitor Application Insights](opentelemetry-overview.md).
+ |Environment/Resource provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python | |-|-|-|--|-|--| |Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) ┬╣ | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) ┬╣ | [ :white_check_mark: :link: ](azure-web-apps-java.md) ┬╣ | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) ┬╣ | :x: |
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Follow the steps in this section to instrument your application with OpenTelemet
### [Python](#tab/python) -- Python Application using Python 3.7+
+- Python Application using Python 3.8+
azure-monitor Opentelemetry Python Opencensus Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-python-opencensus-migrate.md
page to onboard onto the Azure Monitor OpenTelemetry Distro.
The following changes and limitations may be encountered when migrating from OpenCensus to OpenTelemetry.
-### Python < 3.7 support
+### Python < 3.8 support
-OpenTelemetry's Python-based monitoring solutions only support Python 3.7 and greater, excluding the previously supported Python versions 2.7, 3.4, 3.5, and 3.6 from OpenCensus. We suggest upgrading for users who are on the older versions of Python since, as of writing this document, those versions have already reached [end of life](https://devguide.python.org/versions/). Users who are adamant about not upgrading may still use the OpenTelemetry solutions, but may find unexpected or breaking behavior that is unsupported. In any case, the last supported version of [opencensus-ext-azure](https://pypi.org/project/opencensus-ext-azure/) always exists, and stills work for those versions, but no new releases are made for that project.
+OpenTelemetry's Python-based monitoring solutions only support Python 3.8 and greater, excluding the previously supported Python versions 2.7, 3.4, 3.5, 3.6, and 3.7 from OpenCensus. We suggest that users on older versions of Python upgrade since, as of writing this document, those versions have already reached [end of life](https://devguide.python.org/versions/). Users who are adamant about not upgrading may still use the OpenTelemetry solutions, but may find unexpected or breaking behavior that is unsupported. In any case, the last supported version of [opencensus-ext-azure](https://pypi.org/project/opencensus-ext-azure/) always exists and still works for those versions, but no new releases are made for that project.
### Configurations
azure-monitor Kubernetes Monitoring Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md
As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus addon con
* If onboarding using the CLI, include the option `--enable-windows-recording-rules`. * If onboarding using an ARM template, Bicep, or Azure Policy, set `enableWindowsRecordingRules` to `true` in the parameters file.
- * If the cluster is already onboarded, use [this ARM template](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRules.json) and [this parameter file](https://github.com/Azure/prometheus-collector/blob/kaveesh/windows_recording_rules/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRulesParameters.json) to create the rule groups.
+ * If the cluster is already onboarded, use [this ARM template](https://github.com/Azure/prometheus-collector/blob/main/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRules.json) and [this parameter file](https://github.com/Azure/prometheus-collector/blob/main/AddonArmTemplate/WindowsRecordingRuleGroupTemplate/WindowsRecordingRulesParameters.json) to create the rule groups.
azure-monitor Cost Meters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-meters.md
The following table lists the meters used to bill for data ingestion in your Log
| Standalone (legacy tier) | Log Analytics | Pay-as-you-go Data Analyzed | no | | Standard (legacy tier) | Log Analytics | Standard Data Analyzed | no | | Premium (legacy tier) | Log Analytics | Premium Data Analyzed | no |
+| (any) | Azure Monitor | Free Benefit - M365 Defender Data Ingestion | yes |
-The *Standard Data Included per Node* meter is used both for the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance, and also the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud), for workspaces in any pricing tier.
+The **Standard Data Included per Node** meter is used both for the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance, and also the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud), for workspaces in any pricing tier.
+The **Free Benefit - M365 Defender Data Ingestion** meter is used to record the benefit from the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
## Other Azure Monitor logs meters
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
description: Overview of how Azure Monitor is billed and how to analyze billable
Previously updated : 11/13/2023 Last updated : 01/24/2024 # Azure Monitor cost and usage
Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also
### Automated mails and alerts Rather than manually analyzing your costs in the Azure portal, you can automate delivery of information using the following methods.
- - **Daily cost analysis emails.** Once you've configured your Cost Analysis view, you should click **Subscribe** at the top of the screen to receive regular email updates from Cost Analysis.
+- **Daily cost analysis emails.** Once you've configured your Cost Analysis view, you should click **Subscribe** at the top of the screen to receive regular email updates from Cost Analysis.
- **Budget alerts.** To be notified if there are significant increases in your spending, create a [budget alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md) for a single workspace or group of workspaces. ### Export usage details To gain deeper understanding of your usage and costs, create exports using **Cost Analysis**. See [Tutorial: Create and manage exported data](../cost-management-billing/costs/tutorial-export-acm-data.md) to learn how to automatically create a daily export you can use for regular analysis.
-These exports are in CSV format and will contain a list of daily usage (billed quantity and cost) by resource, billing meter, and several other fields such as [AdditionalInfo](../cost-management-billing/automate/understand-usage-details-fields.md#list-of-fields-and-descriptions). You can use Microsoft Excel to do rich analyses of your usage not possible in the **Cost Analytics** experiences in the portal.
-
-The usage export has both the number of units of usage and their cost. Consequently, you can use this export to see the amount of benefits you are receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
+These exports are in CSV format and will contain a list of daily usage (billed quantity and cost) by resource, [billing meter](cost-meters.md), and several other fields such as [AdditionalInfo](../cost-management-billing/automate/understand-usage-details-fields.md#list-of-fields-and-descriptions). You can use Microsoft Excel to do rich analyses of your usage not possible in the **Cost Analytics** experiences in the portal.
For example, usage from Log Analytics can be found by first filtering on the **Meter Category** column to show
For example, usage from Log Analytics can be found by first filtering on the **M
Add a filter on the **Instance ID** column for **contains workspace** or **contains cluster**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column.
+#### View data benefits used
+
+Since the usage export has both the number of units of usage and their cost, you can use this export to see the amount of benefits you are receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). In the usage export, to see the benefits, look for the meters named:
+
+- **Standard Data Included per Node**: this meter is under the service "Insight and Analytics" and tracks the benefits received when a workspace is either in the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) (which includes a data allowance) and/or has [Defender for Servers](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) enabled.
+- **Free Benefit - M365 Defender Data Ingestion**: this meter, under the service "Azure Monitor", tracks the benefit from the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
+ > [!NOTE] > See [Azure Monitor billing meter names](cost-meters.md) for a reference of the billing meter names used by Azure Monitor in Azure Cost Management + Billing.
B. Billable data ingestion by table from the past month.
To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named *Data point volume*, and then select the *Apply splitting* option to split the data by "Telemetry item type". - ## View data allocation benefits To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to [export your usage details](#export-usage-details).
To view data allocation benefits from sources such as [Microsoft Defender for Se
2. Filter the *ResourceRate* column to show only rows where this is equal to zero. Now you will see the data allocations from these various sources. > [!NOTE]
-> Data allocations from Defender for Servers 500 MB/server/day will appear in rows with the meter name "Data Included per Node" and the meter category to "Insight and Analytics" (the name of a legacy offer still used with this meter.) If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter will also include the data allocations from this Log Analytics pricing tier.
+> Data allocations from Defender for Servers 500 MB/server/day will appear in rows with the meter name "Standard Data Included per Node" and the meter category to "Insight and Analytics" (the name of a legacy offer still used with this meter.) If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter will also include the data allocations from this Log Analytics pricing tier.
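If you prefer code over Excel for this filtering, the following is a naive C# sketch of the same tally (the column positions and flat comma-separated layout are assumptions; a real export warrants a proper CSV parser):

```csharp
using System;
using System.IO;
using System.Linq;

// Naive sketch: sum benefit quantities (rows with ResourceRate == 0) from a
// usage-details CSV export, grouped by meter name. Column indexes below are
// assumptions; adjust them to match your export's actual layout.
class BenefitTally
{
    const int MeterNameColumn = 0;
    const int ResourceRateColumn = 1;
    const int QuantityColumn = 2;

    static void Main(string[] args)
    {
        var benefitRows = File.ReadLines(args[0])
            .Skip(1)                          // skip the header row
            .Select(line => line.Split(','))  // naive split; no quoted fields
            .Where(cols => cols[ResourceRateColumn] == "0");

        foreach (var group in benefitRows.GroupBy(cols => cols[MeterNameColumn]))
        {
            double quantity = group.Sum(cols => double.Parse(cols[QuantityColumn]));
            Console.WriteLine($"{group.Key}: {quantity}");
        }
    }
}
```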
## Operations Management Suite subscription entitlements
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Title: Azure Monitor Logs cost calculations and options
description: Cost details for data stored in a Log Analytics workspace in Azure Monitor, including commitment tiers and data size calculation. Previously updated : 01/23/2024 Last updated : 01/25/2024 ms.reviwer: dalek git
Azure Commitment Discounts, such as discounts received from [Microsoft Enterpris
## Dedicated clusters
-An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features, such as [customer-managed keys](customer-managed-keys.md), and use the same commitment-tier pricing model as workspaces, although they must have a commitment level of at least 500 GB per day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There's no pay-as-you-go option for clusters.
+An [Azure Monitor Logs dedicated cluster](logs-dedicated-clusters.md) is a collection of workspaces in a single managed Azure Data Explorer cluster. Dedicated clusters support advanced features, such as [customer-managed keys](customer-managed-keys.md), and use the same commitment-tier pricing model as workspaces, although they must have a commitment level of at least 100 GB per day. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. There's no pay-as-you-go option for clusters.
The cluster commitment tier has a 31-day commitment period after the commitment level is increased. During the commitment period, the commitment tier level can't be reduced, but it can be increased at any time. When workspaces are associated to a cluster, the data ingestion billing for those workspaces is done at the cluster level by using the configured commitment tier level.
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Title: Azure Monitor Logs Dedicated Clusters
description: Customers meeting the minimum commitment tier could use dedicated clusters Previously updated : 12/01/2023 Last updated : 01/25/2024 # Create and manage a dedicated cluster in Azure Monitor Logs
-Linking a Log Analytics workspace to a dedicated cluster in Azure Monitor provides advanced capabilities and higher query utilization. You can link and unlink workspaces from a dedicated cluster without any data loss or service interruption.
+Linking a Log Analytics workspace to a dedicated cluster in Azure Monitor provides advanced capabilities and higher query utilization. Clusters require a minimum ingestion commitment of 100 GB per day. You can link and unlink workspaces from a dedicated cluster without any data loss or service interruption.
## Advanced capabilities Capabilities that require dedicated clusters:
eligible for commitment tier discount.
- **[Ingest from Azure Event Hubs](../logs/ingest-logs-event-hub.md)** - Lets you ingest data directly from an event hub into a Log Analytics workspace. A dedicated cluster lets you use this capability when the combined ingestion from all linked workspaces meets the commitment tier.

## Cluster pricing model
-Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
+Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 100 GB/day. Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected.
## Prerequisites
Provide the following properties when creating new dedicated cluster:
- **ClusterName**: Must be unique for the resource group. - **ResourceGroupName**: Use a central IT resource group because many teams in the organization usually share clusters. For more design considerations, review [Design a Log Analytics workspace configuration](../logs/workspace-design.md). - **Location**-- **SkuCapacity**: You can set the commitment tier (formerly called capacity reservations) to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicate clusters](./cost-logs.md#dedicated-clusters).
+- **SkuCapacity**: You can set the commitment tier to 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000, 25000, or 50000 GB per day. For more information on cluster costs, see [Dedicated clusters](./cost-logs.md#dedicated-clusters).
- **Managed identity**: Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): - System-assigned managed identity - Generated automatically with the cluster creation when identity `type` is set to "*SystemAssigned*". This identity can be used later to grant storage access to your Key Vault for wrap and unwrap operations.
Content-type: application/json
}, "sku": { "name": "capacityReservation",
- "Capacity": 500
+ "Capacity": 100
}, "properties": { "billingType": "Cluster",
Send a GET request on the cluster resource and look at the *provisioningState* v
}, "sku": { "name": "capacityreservation",
- "capacity": 500
+ "capacity": 100
}, "properties": { "provisioningState": "ProvisioningAccount",
Send a GET request on the cluster resource and look at the *provisioningState* v
"isAvailabilityZonesEnabled": false, "capacityReservationProperties": { "lastSkuUpdate": "last-sku-modified-date",
- "minCapacity": 500
+ "minCapacity": 100
} }, "id": "/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.OperationalInsights/clusters/cluster-name",
Authorization: Bearer <token>
}, "sku": { "name": "capacityreservation",
- "capacity": 500
+ "capacity": 100
}, "properties": { "provisioningState": "Succeeded",
Authorization: Bearer <token>
"isAvailabilityZonesEnabled": false, "capacityReservationProperties": { "lastSkuUpdate": "last-sku-modified-date",
- "minCapacity": 500
+ "minCapacity": 100
} }, "id": "/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.OperationalInsights/clusters/cluster-name",
The same as for 'clusters in a resource group', but in subscription scope.
## Update commitment tier in cluster
-When the data volume to linked workspaces changes over time, you can update the Commitment Tier level appropriately to optimize cost. The tier is specified in units of Gigabytes (GB) and can have values of 500, 1000, 2000 or 5000 GB per day. You don't have to provide the full REST request body, but you must include the sku.
+When the data volume to linked workspaces changes over time, you can update the Commitment Tier level appropriately to optimize cost. The tier is specified in units of Gigabytes (GB) and can have values of 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000, 25000, or 50000 GB per day. You don't have to provide the full REST request body, but you must include the sku.
During the commitment period, you can change to a higher commitment tier, which restarts the 31-day commitment period. You can't move back to pay-as-you-go or to a lower commitment tier until after you finish the commitment period.
N/A
```azurecli az account set --subscription "cluster-subscription-id"
-az monitor log-analytics cluster update --resource-group "resource-group-name" --name "cluster-name" --sku-capacity 500
+az monitor log-analytics cluster update --resource-group "resource-group-name" --name "cluster-name" --sku-capacity 100
``` #### [PowerShell](#tab/powershell)
az monitor log-analytics cluster update --resource-group "resource-group-name" -
```powershell Select-AzSubscription "cluster-subscription-id"
-Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -SkuCapacity 500
+Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -SkuCapacity 100
``` #### [REST API](#tab/restapi)
Authorization: Bearer <token>
- 400--The body of the request is null or in bad format. - 400--SKU name is invalid. Set SKU name to capacityReservation. - 400--Capacity was provided but SKU isn't capacityReservation. Set SKU name to capacityReservation.-- 400--Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day.
+- 400--Missing Capacity in SKU. Set Capacity value to 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000, 25000, or 50000 GB per day.
- 400--Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.-- 400--No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.
+- 400--No SKU was set. Set the SKU name to capacityReservation and Capacity value to 100, 200, 300, 400, 500, 1000, 2000, 5000, 10000, 25000, or 50000 GB per day.
- 400--Identity is null or empty. Set Identity with systemAssigned type. - 400--KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation. - 400--Operation can't be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
azure-netapp-files Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/tools-reference.md
# Azure NetApp Files tools
-Azure NetApp Files offers [multiple tools](https://azure.github.io/azure-netapp-files/) to estimate costs, understand features and availability, and monitor your Azure NetApp Files deployment.
+Azure NetApp Files offers [multiple tools](https://aka.ms/anftools) to estimate costs, understand features and availability, and monitor your Azure NetApp Files deployment.
* [**Azure NetApp Files Performance Calculator**](https://aka.ms/anfcalc)
azure-resource-manager Bicep Functions Lambda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-lambda.md
Title: Bicep functions - lambda
description: Describes the lambda functions to use in a Bicep file. Previously updated : 11/03/2023 Last updated : 01/25/2024 # Lambda functions for Bicep
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
| Parameter | Required | Type | Description | |: |: |: |: | | inputArray |Yes |array |The array to reduce.|
-| initialValue |No |any |Initial value.|
+| initialValue |Yes |any |Initial value.|
| lambda expression |Yes |expression |The lambda expression used to aggregate the current value and the next value.| ### Return value
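For intuition about why `initialValue` is now required: `reduce` performs a left fold seeded by that value. LINQ's `Aggregate` in C# behaves the same way (an analogy for illustration, not Bicep syntax):

```csharp
using System;
using System.Linq;

class ReduceAnalogy
{
    static void Main()
    {
        int[] inputArray = { 5, 3, 2, 7 };

        // Bicep equivalent: reduce([5, 3, 2, 7], 0, (cur, next) => cur + next)
        int total = inputArray.Aggregate(0, (cur, next) => cur + next);

        Console.WriteLine(total); // 17
    }
}
```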
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
az deployment group create \
## Versioning
-When you create a template spec, you provide a version name for it. As you iterate on the template code, you can either update an existing version (for hotfixes) or publish a new version. The version is a text string. You can choose to follow any versioning system, including semantic versioning. Users of the template spec can provide the version name they want to use when deploying it.
+When you create a template spec, you provide a version name for it. As you iterate on the template code, you can either update an existing version (for hotfixes) or publish a new version. The version is a text string. You can choose to follow any versioning system, including semantic versioning. Users of the template spec can provide the version name they want to use when deploying it. You can have an unlimited number of versions.
## Use tags
azure-resource-manager Template Functions Lambda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-lambda.md
Title: Template functions - lambda
description: Describes the lambda functions to use in an Azure Resource Manager template (ARM template) Previously updated : 06/22/2023 Last updated : 01/25/2024 # Lambda functions for ARM templates
In Bicep, use the [reduce](../bicep/bicep-functions-lambda.md#reduce) function.
| Parameter | Required | Type | Description | |: |: |: |: | | inputArray |Yes |array |The array to reduce.|
-| initialValue |No |any |Initial value.|
+| initialValue |Yes |any |Initial value.|
| lambda function |Yes |expression |The lambda function used to aggregate the current value and the next value.| ### Return value
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
az deployment group create \
## Versioning
-When you create a template spec, you provide a version name for it. As you iterate on the template code, you can either update an existing version (for hotfixes) or publish a new version. The version is a text string. You can choose to follow any versioning system, including semantic versioning. Users of the template spec can provide the version name they want to use when deploying it.
+When you create a template spec, you provide a version name for it. As you iterate on the template code, you can either update an existing version (for hotfixes) or publish a new version. The version is a text string. You can choose to follow any versioning system, including semantic versioning. Users of the template spec can provide the version name they want to use when deploying it. You can have an unlimited number of versions.
## Use tags
azure-vmware Concepts Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-vulnerability-management.md
Title: Concepts - How AVS Addresses Vulnerabilities in the Infrastructure
description: The process that AVS follows to address security vulnerabilities. Previously updated : 01/18/2024 Last updated : 01/24/2024
AVS takes a defense in depth approach to vulnerability and risk management. We f
- Details within the signal are adjudicated and assigned a CVSS score and risk rating according to compensating controls within the service. - The risk rating is used against internal bug bars, internal policies and regulations to establish a timeline for implementing a fix. - Internal engineering teams partner with appropriate parties to qualify and roll out any fixes, patches and other configuration updates necessary.-- Communications are drafted and published according to the risk rating assigned.
+- Communications are drafted and published according to the risk rating assigned.
+> [!TIP]
+> Communications are surfaced through the [Azure Service Health Portal](/azure/service-health/service-health-portal-update), [Known Issues](/azure/azure-vmware/azure-vmware-solution-known-issues), and email.
### Subset of regulations governing vulnerability and risk management
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
# Deploy Arc-enabled Azure VMware Solution
-In this article, learn how to deploy Arc for Azure VMware Solution. Once you set up the components needed, you're ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Arc-enabled Azure VMware Solution allows you to do the actions:
+In this article, learn how to deploy Arc for Azure VMware Solution. Once you set up the components needed, you're ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Arc-enabled Azure VMware Solution allows you to perform the following actions:
- Identify your VMware vSphere resources (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register them with Arc at scale. - Perform different virtual machine (VM) operations directly from Azure like; create, resize, delete, and power cycle operations (start/stop/restart) on VMware VMs consistently with Azure.
In this article, learn how to deploy Arc for Azure VMware Solution. Once you set
- Browse your VMware vSphere resources (vms, templates, networks, and storage) in Azure
-## How Arc-enabled VMware vSphere differs from Arc-enabled servers
+## Deployment considerations
-You have the flexibility to start with either option, Arc-enabled servers or Arc-enabled VMware vSphere. With both options, you receive the same consistent experience. Regardless of the initial option chosen, you can incorporate the other one later without disruption. The following information helps you understand the difference between both options:
+Running software in Azure VMware Solution, as a private cloud in Azure, offers benefits that aren't available when you operate your environment outside of Azure. For software running in a VM, such as SQL Server and Windows Server, Azure VMware Solution provides additional value, such as free Extended Security Updates (ESUs).
-**Arc-enabled servers**
-Azure Arc-enabled servers interact on the guest operating system level. They do that with no awareness of the underlying infrastructure or the virtualization platform they're running on. Since Arc-enabled servers support bare-metal machines, there might not be a host hypervisor in some cases.
+To take advantage of these benefits when running in Azure VMware Solution, it's important to enable Arc by following this article so that the experience is fully integrated with the AVS private cloud. Arc-enabling VMs through any of the following mechanisms doesn't create the attributes needed to register the VM and its software as part of Azure VMware Solution, and therefore results in billing for SQL Server ESUs:
-**Arc-enabled VMware vSphere**
-Arc-enabled VMware vSphere is a superset of Arc-enabled servers that extends management capabilities beyond the quest operating system to the VM itself that provides lifecycle management and CRUD (Create, Read, Update, Delete) operations on a VMware vSphere VM. These lifecycle management capabilities are exposed in the Azure portal with a look and feel just like a regular Azure VM. Azure Arc-enabled VMware vSphere provides guest operating system management that uses the same components as Azure Arc-enabled servers.
+- Arc-enabled servers,
+
+- Arc-enabled VMware vSphere
+
+- SQL Server enabled by Azure Arc
+
+## How to manually integrate an Arc-enabled VM into Azure VMware Solution
+
+When a VM in an Azure VMware Solution private cloud is Arc-enabled using a method distinct from the one outlined in this document, use the following steps to refresh the integration between the Arc-enabled VMs and Azure VMware Solution.
+
+These steps change the VM machine type from _Machine – Azure Arc_ to _Machine – Azure Arc (AVS)_, which has the necessary integrations with Azure VMware Solution.
+
+There are two ways to refresh the integration between the Arc-enabled VMs and Azure VMware Solution:
+
+1. In the Azure VMware Solution private cloud, navigate to the vCenter Server inventory and Virtual Machines section within the portal. Locate the virtual machine that requires updating and follow the process to **Enable in Azure**. If the option is grayed out, you must first **Remove from Azure** and then proceed to **Enable in Azure**.
+
+2. Run the [az connectedvmware vm create](/cli/azure/connectedvmware/vm#az-connectedvmware-vm-create) Azure CLI command on the VM in Azure VMware Solution to update the machine type.
+
+```azurecli
+az connectedvmware vm create --subscription <subscription-id> --location <Azure region of the machine> --resource-group <resource-group-name> --custom-location /providers/microsoft.extendedlocation/customlocations/<custom-location-name> --name <machine-name> --inventory-item /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ConnectedVMwarevSphere/VCenters/<vcenter-name>/InventoryItems/<machine-name>
+```
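To confirm that the machine type changed, one option is to query the machine resource and inspect its kind. This is a sketch, not a documented verification step, and it assumes the AVS integration is surfaced through the `kind` property of the `Microsoft.HybridCompute/machines` resource:

```azurecli
# Show the kind of the Arc machine resource; the expectation (an assumption
# to verify) is that it reads AVS after the integration is refreshed.
az resource show \
  --resource-group <resource-group-name> \
  --resource-type Microsoft.HybridCompute/machines \
  --name <machine-name> \
  --query kind
```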
## Deploy Arc+ The following requirements must be met in order to use Azure Arc-enabled Azure VMware Solution. ### Prerequisites
You need the following items to ensure you're set up to begin the onboarding pro
- A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs. - A datastore with a minimum of 100 GB of free disk space is available through the resource pool or cluster. - On the vCenter Server, allow inbound connections on TCP port 443. This action ensures that the Arc resource bridge and VMware vSphere cluster extension can communicate with the vCenter Server.+ > [!NOTE] > - Private endpoint is currently not supported. > - DHCP support isn't available to customers at this time, only static IP addresses are currently supported.
Before you can install an extension, ensure your target machine meets the follow
- Is able to connect through the firewall to communicate over the internet and these [URLs](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked. - Has VMware tools installed and running. - Is powered on and the resource bridge has network connectivity to the host running the VM.
+- Is Enabled in Azure.
### Enable guest management
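Once the target machine meets the checks above, guest management can also be enabled from the CLI. A minimal sketch using the `connectedvmware` extension, with placeholder names and hypothetical guest OS credentials:

```azurecli
# Install the guest agent on the VM so guest OS management
# (for example, extensions) can be used from Azure.
az connectedvmware vm guest-agent enable \
  --resource-group <resource-group-name> \
  --vm-name <machine-name> \
  --username <guest-os-admin-user> \
  --password <guest-os-admin-password>
```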
backup Backup Azure Recovery Services Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-recovery-services-vault-overview.md
Title: Overview of Recovery Services vaults description: An overview of Recovery Services vaults. Previously updated : 01/24/2023 Last updated : 01/25/2024 -+ # Recovery Services vaults overview
Recovery Services vaults are based on the Azure Resource Manager model of Azure,
- **Azure role-based access control (Azure RBAC)**: Azure RBAC provides fine-grained access management control in Azure. [Azure provides various built-in roles](../role-based-access-control/built-in-roles.md), and Azure Backup has three [built-in roles to manage recovery points](backup-rbac-rs-vault.md). Recovery Services vaults are compatible with Azure RBAC, which restricts backup and restore access to the defined set of user roles. [Learn more](backup-rbac-rs-vault.md) -- **Soft Delete**: With soft delete, even if a malicious actor deletes a backup (or backup data is accidentally deleted), the backup data is retained for 14 additional days, allowing the recovery of that backup item with no data loss. The additional 14 days of retention for backup data in the "soft delete" state don't incur any cost to you. [Learn more](backup-azure-security-feature-cloud.md).
+- **Soft Delete**: With soft delete, even if a malicious actor deletes a backup (or backup data is accidentally deleted), the backup data is retained for 14 additional days, allowing the recovery of that backup item with no data loss. The additional 14 days of retention for backup data in the "soft delete" state don't incur any cost to you. Additionally, Azure Backup provides *Enhanced soft delete*, an improvement to the soft delete feature. With enhanced soft delete, you can *customize the soft delete retention period* and make *soft delete always-on*, protecting it from being disabled by malicious actors. Learn more about [Soft delete](backup-azure-security-feature-cloud.md) and [Enhanced soft delete](backup-azure-enhanced-soft-delete-about.md), or see the CLI sketch after this list.
- **Cross Region Restore**: Cross Region Restore (CRR) allows you to restore Azure VMs in a secondary region, which is an Azure paired region. By enabling this feature at the [vault level](backup-create-rs-vault.md#set-cross-region-restore), you can restore the replicated data in the secondary region any time, when you choose. This enables you to restore the secondary region data for audit-compliance, and during outage scenarios, without waiting for Azure to declare a disaster (unlike the GRS settings of the vault). [Learn more](backup-azure-arm-restore-vms.md#cross-region-restore).
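As a quick illustration of the soft delete behavior described above, a vault's backup properties can be inspected and set from the CLI. A minimal sketch, assuming a hypothetical vault `myVault` in resource group `myRG`:

```azurecli
# Inspect the vault's current backup properties, including the soft delete state.
az backup vault backup-properties show --name myVault --resource-group myRG

# Explicitly keep soft delete enabled (the default) on the vault.
az backup vault backup-properties set --name myVault --resource-group myRG \
  --soft-delete-feature-state Enable
```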
backup Backup Mabs Whats New Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-whats-new-mabs.md
The following table lists the included features in MABS V4:
## What's new in MABS V3 UR2 Hotfix?
-This update contains the following enhancement to improve the backup time. For more information on the enhancements and the installation, see the [KB article](https://help.microsoft.com/support/5031799).
+This update contains the following enhancement to improve the backup time. For more information on the enhancements and the installation, see the [KB article](https://support.microsoft.com/topic/hotfix-for-update-rollup-2-for-microsoft-azure-backup-server-v3-3ef522f7-c307-47e2-827d-8e14f1e84017).
**Removed File Catalog dependency for online backup of file/folder workloads**: This update removes the dependency of MABS V3 on File Catalog (list of files in a recovery point maintained in the cloud) which was needed to restore individual files and folders from the online recovery points. This Hotfix allows MABS V3 UR2 to use a modern *iSCSI mount* method to provide individual file restoration.
Learn how to prepare your server or begin protecting a workload:
* [Prepare Backup Server workloads](backup-azure-microsoft-azure-backup.md) * [Use Backup Server to back up a VMware server](backup-azure-backup-server-vmware.md) * [Use Backup Server to back up SQL Server](backup-azure-sql-mabs.md)
-* [Use Modern Backup Storage with Backup Server](backup-mabs-add-storage.md)
+* [Use Modern Backup Storage with Backup Server](backup-mabs-add-storage.md)
cloud-services-extended-support Non Vnet Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/non-vnet-migration.md
Last updated 01/24/2024
Some legacy cloud services are still running without Vnet support. While there's a process for migrating directly through the portal, there are certain considerations to weigh before migration. This article walks you through the process of migrating a Cloud Service without Vnet support to one with Vnet support.
-## Advantages to direct migration
-- No custom changes are needed from platform side.
+## Advantages of this approach
+ - No changes required in your configurations. Use the same configurations to deploy a staging deployment. - Your DNS and public IP address are preserved. - Less downtime.
-## Disadvantages to direct migration
-- If using external sources, you may need to move the state of staging slot. -- Potential downtime while manually rebuilding network.--
-## Migration procedure using the Azure Portal
-1. Create a non vnet classic cloud service in the same region as the vnet you want to migrate to.
+## Migration procedure using the Azure portal
-1. In the Azure portal, select the 'Staging' drop-down.
+1. Create a non-vnet classic cloud service in the same region as the vnet you want to migrate to. In the Azure portal, select the 'Staging' drop-down.
![Screenshot of the staging drop-down in the Azure portal.](./media/vnet-migrate-staging.png) 1. Create a deployment with the same configuration as the existing deployment by selecting 'Upload' next to the staging drop-down. The platform creates a Default Vnet deployment in the staging slot.
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/known-issues.md
Firefox desktop browser support is now available in public preview. Known issues
### iOS Chrome Known Issues iOS Chrome browser support is now available in public preview. Known issues are:-- No outgoing and incoming audio when switching browser to background or locking the device
+- No outgoing and incoming audio when switching browser to background or locking the device. This issue has been fixed in iOS version 16.4+.
- No incoming/outgoing audio coming from a Bluetooth headset. When a user connects a Bluetooth headset in the middle of an Azure Communication Services call, the audio still comes out of the speaker until the user locks and unlocks the phone. We have seen this issue on older iOS versions (15.6, 15.7), and it isn't reproducible on iOS 16. ### iOS 16 introduced bugs when putting browser in the background during a call
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/notifications.md
Learn more about [event handling in Azure Communication Services](../../event-gr
## Deliver push notifications via Azure Notification Hubs
+> [!IMPORTANT]
+> Azure Notification Hubs has announced that the Firebase Cloud Messaging (FCM) legacy API will be deprecated in July 2024. This will affect all Azure Communication Services Calling applications that send Android push notifications. Impacted customers will need to migrate their registrations from FCM legacy to FCM v1, and can start doing so in March 2024. For more information, including migration steps, see the [Notification Hubs documentation](../../notification-hubs/notification-hubs-gcm-to-fcm.md).
+ You can connect an Azure Notification Hub to your Communication Services resource in order to automatically send push notifications to a user's mobile device when they receive an incoming call or to notify them about missed chat activity. You should use these push notifications to wake up your application from the background and display UI that lets the user accept or decline the call or read the newly received chat message. :::image type="content" source="./media/notifications/acs-anh-int.png" alt-text="Diagram showing how communication services integrates with Azure Notification Hubs.":::
container-instances Container Instances Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-log-analytics.md
ContainerInstanceLog_CL
|_ResourceId|string|A unique identifier for the resource that the record is associated with| |_SubscriptionId|string|A unique identifier for the subscription that the record is associated with|
+## Using Diagnostic Settings
+
+Diagnostic Settings for container groups is a preview feature that can be enabled through the preview features options in the Azure portal. Once the feature is enabled for a subscription, Diagnostic Settings can be applied to a container group. Applying Diagnostic Settings causes the container group to restart.
+
+For example, here's how you can use the New-AzDiagnosticSetting command to apply a Diagnostic Settings object to a container group:
+
+```azurepowershell
+# Build a list containing one log settings object for the ContainerInstanceLog
+# category, retained for seven days.
+$log = @()
+$log += New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category ContainerInstanceLog -RetentionPolicyDay 7 -RetentionPolicyEnabled $true
+
+# Apply the diagnostic setting to the container group, sending logs to the workspace.
+New-AzDiagnosticSetting -Name test-setting -ResourceId <container-group-resource-id> -WorkspaceId <log-analytics-workspace-id> -Log $log
+```
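If you prefer the Azure CLI, an equivalent sketch (using the same placeholder IDs as above, and assuming the `ContainerInstanceLog` category is accepted there as well) is:

```azurecli
az monitor diagnostic-settings create \
  --name test-setting \
  --resource <container-group-resource-id> \
  --workspace <log-analytics-workspace-id> \
  --logs '[{"category":"ContainerInstanceLog","enabled":true}]'
```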
+ ## Next steps ### Azure Monitor logs
container-instances Tutorial Docker Compose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/tutorial-docker-compose.md
In this tutorial, you use [Docker Compose](https://docs.docker.com/compose/) to
Run containers in Azure Container Instances on-demand when you develop cloud-native apps with Docker and you want to switch seamlessly from local development to cloud deployment. This capability is enabled by [integration between Docker and Azure](https://docs.docker.com/engine/context/aci-integration/). You can use native Docker commands to run either [a single container instance](quickstart-docker-cli.md) or multi-container group in Azure.
+> [!IMPORTANT]
+> Docker Compose's integration for ACI was retired in November 2023. See also: [Retirement Date Pending](https://github.com/docker/compose-cli?tab=readme-ov-file#warning-retirement-date-pending).
+ > [!IMPORTANT] > Not all features of Azure Container Instances are supported. Provide feedback about the Docker-Azure integration by creating an issue in the [Docker ACI Integration](https://github.com/docker/aci-integration-beta) GitHub repository.
container-registry Container Registry Tasks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-overview.md
description: An introduction to ACR Tasks, a suite of features in Azure Containe
Previously updated : 10/31/2023 Last updated : 01/24/2024
Containers provide new levels of virtualization, isolating application and devel
**ACR Tasks** is a suite of features within Azure Container Registry. It provides cloud-based container image building for [platforms](#image-platforms) including Linux, Windows, and ARM, and can automate [OS and framework patching](#automate-os-and-framework-patching) for your Docker containers. ACR Tasks not only extends your "inner-loop" development cycle to the cloud with on-demand container image builds, but also enables automated builds triggered by source code updates, updates to a container's base image, or timers. For example, with base image update triggers, you can automate your OS and application framework patching workflow, maintaining secure environments while adhering to the principles of immutable containers.
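For instance, an on-demand build in the cloud can be started with `az acr build`. A short sketch, assuming a hypothetical registry `myregistry` and a Dockerfile in the current directory:

```azurecli
# Build the image in ACR and push it to the registry on success;
# {{.Run.ID}} tags the image with the unique ID of this task run.
az acr build --registry myregistry --image myapp:{{.Run.ID}} .
```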
+ACR is temporarily pausing ACR Tasks runs for subscriptions using Azure free credits. This pause might affect existing Tasks runs. If you encounter problems, open a [support case](../azure-portal/supportability/how-to-create-azure-support-request.md) for our team to provide additional guidance. We'll remove this note when the pause is lifted.
+ ## Task scenarios ACR Tasks supports several scenarios to build and maintain container images and other artifacts. See the following sections in this article for details.
cosmos-db How To Setup Customer Managed Keys Existing Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys-existing-accounts.md
For enabling CMK on existing account that has continuous backup and point in tim
- Enabling CMK is available only at a Cosmos DB account level and not at collections. - We don't support enabling CMK on existing Azure Cosmos DB for Apache Cassandra accounts.-- We don't support enabling CMK on existing accounts that are enabled for Materialized Views and Full Fidelity Change Feed (FFCF) as well.
+- We don't support enabling CMK on existing accounts that are enabled for Materialized Views and [all versions and deletes change feed mode](nosql/change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview).
- Ensure the account doesn't have documents with large IDs greater than 990 bytes before enabling CMK. Otherwise, you'll get an error because of the maximum supported limit of 1024 bytes after encryption. - During encryption of existing data, [control plane](./audit-control-plane-logs.md) actions such as "add region" are blocked. These actions are unblocked and can be used right after the encryption is complete. A CLI sketch for checking whether CMK is already enabled follows this list.
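As a quick check before attempting the migration, you can see whether an account already uses CMK by looking for a key vault key URI on the account. A sketch, assuming a hypothetical account `myaccount` in resource group `myRG`:

```azurecli
# An empty result suggests the account still uses service-managed keys.
az cosmosdb show --name myaccount --resource-group myRG \
  --query keyVaultKeyUri
```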
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
description: This article explains how to explore preview features and provides a list of the recent previews you might be interested in. Previously updated : 08/07/2023 Last updated : 01/25/2024
You can explore preview features from the Cost Management overview.
2. From there, enable the features you'd like to use and select **Close** at the bottom of the page. :::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-management-labs.png" alt-text="Screenshot showing the Cost Management labs preview options." lightbox="./media/enable-preview-features-cost-management-labs/cost-management-labs.png" ::: 3. To see the features enabled, close and reopen Cost Management. You can reopen Cost Management by selecting the link in the notification in the top-right corner.
- :::image type="content" source="./media/enable-preview-features-cost-management-labs/reopen-cost-management.png" alt-text="Screenshot showing the Reopen Cost Management notification." :::
+ :::image type="content" source="./media/enable-preview-features-cost-management-labs/reopen-cost-management.png" alt-text="Screenshot showing a Reopen Cost Management notification." :::
If you're interested in getting preview features even earlier:
View your non-USD charges in USD or switch between the currencies you have charg
Enable the currency switcher on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Select **How would you rate cost analysis?** at the bottom of the page to share feedback about the preview.
+## Exports (preview)
+
+The improved exports experience is designed to streamline your FinOps practice. It includes automatic exports of more cost-impacting datasets, optimized to handle large datasets while enhancing the user experience. You can export more datasets, including price sheets, reservation recommendations, reservation details, and reservation transactions. Furthermore, you can download cost and usage details using the open-source [FinOps Open Cost and Usage Specification (FOCUS)](https://focus.finops.org) format. The format combines actual and amortized costs and reduces data processing times and storage and compute costs.
+
+The enhanced user interface allows you to easily create multiple exports for various cost management datasets using a single, simplified creation experience. You can also selectively rerun an existing Export job for a historical date range instead of creating a new one-time export of the required date range.
+
+You can easily handle large datasets through features like file partitioning, which breaks the file into manageable chunks; file overwrite for daily exports, which replaces the previous day's file with an updated file each day; and the upcoming support for Parquet format and file compression. These optimizations improve file manageability, reduce download latency, and save on storage and network charges.
+
+You can choose the latest or any of the previous dataset schema versions during the export creation. Selecting a version ensures that the data processing layers that you build on top of the datasets can be reused without compromising on the latest API functionality.
+
+And, you can enhance security and compliance by configuring exports to storage accounts behind a firewall, which provides access control for the public endpoint of the storage account.
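For context, basic exports can already be created from the CLI through the `costmanagement` extension; the preview capabilities described above are configured through the portal experience, and whether they surface in this command is an assumption to verify. A sketch with hypothetical names:

```azurecli
# Create a daily export of actual month-to-date costs to a storage container.
az costmanagement export create \
  --name DailyCostExport \
  --scope "subscriptions/00000000-0000-0000-0000-000000000000" \
  --storage-account-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mystorage" \
  --storage-container exports \
  --storage-directory daily \
  --type ActualCost \
  --timeframe MonthToDate \
  --recurrence Daily \
  --recurrence-period from="2024-02-01T00:00:00Z" to="2024-12-31T00:00:00Z" \
  --schedule-status Active
```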
++ ## How to share feedback We're always listening and making constant improvements based on your feedback, so we welcome it. Here are a few ways to share your feedback with the team:
databox Data Box Disk Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-limits.md
For the latest information on Azure storage service limits and best practices fo
## Data copy and upload caveats
+- Importing data into NFS Azure file shares isn't supported by Azure Data Box. Copying data from Data Box into an existing NFS Azure file share with an identical name as your source folder creates a conflict. To resolve this conflict, Data Box renames the source share to `databox-<GUID>` and uploads it to the target storage account as an SMB Azure file share.
- Don't copy data directly into the disks. Copy data to pre-created *BlockBlob*, *PageBlob*, and *AzureFile* folders. - A folder under the *BlockBlob* and *PageBlob* is a container. For instance, containers are created as *BlockBlob/container* and *PageBlob/container*. - If a folder has the same name as an existing container, the folder's contents are merged with the container's contents. Files or blobs that aren't already in the cloud are added to the container. If a file or blob has the same name as a file or blob that's already in the container, the existing file or blob is overwritten.
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
DDoS Network Protection and DDoS IP Protection have the following limitations:
- Protecting a public IP resource attached to a NAT Gateway isn't supported. - Virtual machines in Classic/RDFE deployments aren't supported. - VPN gateway or Virtual network gateway is protected by a fixed DDoS policy. Adaptive tuning isn't supported at this stage. -- Disabling DDoS protection for a public IP address is currently a preview feature. If you disable DDoS protection for a public IP resource that is linked to a virtual network with an active DDoS protection plan, you'll still be billed for DDoS Network Protection. However, the following functionalities will be suspended: mitigation of DDoS attacks, telemetry, and logging of DDoS mitigation events. - Partially supported: the Azure DDoS Protection service can protect a public load balancer with a public IP address prefix linked to its frontend. It effectively detects and mitigates DDoS attacks. However, telemetry and logging for the protected public IP addresses within the prefix range are currently unavailable.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | | | | :-: | - | | **A logon from a malicious IP has been detected. [seen multiple times]** | A successful remote authentication for the account [account] and process [process] occurred, however the logon IP address (x.x.x.x) has previously been reported as malicious or highly unusual. A successful attack has probably occurred. Files with the .scr extension are screen saver files and normally reside and execute from the Windows system directory. | - | High |
+| **Adaptive application control policy violation was audited**<br>VM_AdaptiveApplicationControlWindowsViolationAudited | The below users ran applications that are violating the application control policy of your organization on this machine. It can possibly expose the machine to malware or application vulnerabilities. | Execution | Informational |
| **Addition of Guest account to Local Administrators group** | Analysis of host data has detected the addition of the built-in Guest account to the Local Administrators group on %{Compromised Host}, which is strongly associated with attacker activity. | - | Medium | | **An event log was cleared** | Machine logs indicate a suspicious event log clearing operation by user: '%{user name}' in Machine: '%{CompromisedEntity}'. The %{log channel} log was cleared. | - | Informational | | **Antimalware Action Failed** | Microsoft Antimalware has encountered an error when taking an action on malware or other potentially unwanted software. | - | Medium |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | |||:-:|| | **a history file has been cleared** | Analysis of host data indicates that the command history log file has been cleared. Attackers may do this to cover their traces. The operation was performed by user: '%{user name}'. | - | Medium |
+| **Adaptive application control policy violation was audited**<br>(VM_AdaptiveApplicationControlLinuxViolationAudited) | The below users ran applications that are violating the application control policy of your organization on this machine. It can possibly expose the machine to malware or application vulnerabilities. | Execution | Informational |
| **Antimalware broad files exclusion in your virtual machine**<br>(VM_AmBroadFilesExclusion) | Files exclusion from antimalware extension with broad exclusion rule was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such exclusion practically disabling the Antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | - | Medium | | **Antimalware disabled and code execution in your virtual machine**<br>(VM_AmDisablementAndCodeExecution) | Antimalware disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware. | - | High | | **Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Anonymity network activity**<br>(AzureDNS_DarkWeb) | Analysis of DNS transactions from %{CompromisedEntity} detected anonymity network activity. Such activity, while possibly legitimate user behavior, is frequently employed by attackers to evade tracking and fingerprinting of network communications. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low | | **Anonymity network activity using web proxy**<br>(AzureDNS_DarkWebProxy) | Analysis of DNS transactions from %{CompromisedEntity} detected anonymity network activity. Such activity, while possibly legitimate user behavior, is frequently employed by attackers to evade tracking and fingerprinting of network communications. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low | | **Attempted communication with suspicious sinkholed domain**<br>(AzureDNS_SinkholedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected request for sinkholed domain. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. | Exfiltration | Medium |
-| **Communication with possible phishing domain**<br>(AzureDNS_PhishingDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a request for a possible phishing domain. Such activity, while possibly benign, is frequently performed by attackers to harvest credentials to remote services. Typical related attacker activity is likely to include the exploitation of any credentials on the legitimate service. | Exfiltration | Low |
-| **Communication with suspicious algorithmically generated domain**<br>(AzureDNS_DomainGenerationAlgorithm) | Analysis of DNS transactions from %{CompromisedEntity} detected possible usage of a domain generation algorithm. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low |
+| **Communication with possible phishing domain**<br>(AzureDNS_PhishingDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a request for a possible phishing domain. Such activity, while possibly benign, is frequently performed by attackers to harvest credentials to remote services. Typical related attacker activity is likely to include the exploitation of any credentials on the legitimate service. | Exfiltration | Informational |
+| **Communication with suspicious algorithmically generated domain**<br>(AzureDNS_DomainGenerationAlgorithm) | Analysis of DNS transactions from %{CompromisedEntity} detected possible usage of a domain generation algorithm. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Informational |
| **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access | Medium |
-| **Communication with suspicious random domain name**<br>(AzureDNS_RandomizedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected usage of a suspicious randomly generated domain name. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low |
+| **Communication with suspicious random domain name**<br>(AzureDNS_RandomizedDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected usage of a suspicious randomly generated domain name. Such activity, while possibly benign, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Informational |
| **Digital currency mining activity**<br>(AzureDNS_CurrencyMining) | Analysis of DNS transactions from %{CompromisedEntity} detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | Low | | **Network intrusion detection signature activation**<br>(AzureDNS_SuspiciousDomain) | Analysis of DNS transactions from %{CompromisedEntity} detected a known malicious network signature. Such activity, while possibly legitimate user behavior, is frequently an indication of the download or execution of malicious software. Typical related attacker activity is likely to include the download and execution of further malicious software or remote administration tools. | Exfiltration | Medium | | **Possible data download via DNS tunnel**<br>(AzureDNS_DataInfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | Low |
Learn more about [Defender for Cloud latest protections against the abuse of Azu
| **Fileless Attack Technique Detected**<br>(AppServices_FilelessAttackTechniqueDetection) | The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software.<br>Specific behaviors include: {list of observed behaviors}<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | High | | **Fileless Attack Toolkit Detected**<br>(AppServices_FilelessAttackToolkitDetection) | The memory of the process specified below contains a fileless attack toolkit: {ToolKitName}. Fileless attack toolkits typically do not have a presence on the filesystem, making detection by traditional anti-virus software difficult.<br>Specific behaviors include: {list of observed behaviors}<br>(Applies to: App Service on Windows and App Service on Linux) | Defense Evasion, Execution | High | | **Microsoft Defender for Cloud test alert for App Service (not a threat)**<br>(AppServices_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed.<br>(Applies to: App Service on Windows and App Service on Linux) | - | High |
-| **NMap scanning detected**<br>(AppServices_Nmap) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with NMAP. Attackers often use this tool for probing the web application to find vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
+| **NMap scanning detected**<br>(AppServices_Nmap) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with NMAP. Attackers often use this tool for probing the web application to find vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Informational |
| **Phishing content hosted on Azure Webapps**<br>(AppServices_PhishingContent) | URL used for phishing attack found on the Azure AppServices website. This URL was part of a phishing attack sent to Microsoft 365 customers. The content typically lures visitors into entering their corporate credentials or financial information into a legitimate looking website.<br>(Applies to: App Service on Windows and App Service on Linux) | Collection | High | | **PHP file in upload folder**<br>(AppServices_PhpInUploadFolder) | Azure App Service activity log indicates an access to a suspicious PHP page located in the upload folder.<br>This type of folder doesn't usually contain PHP files. The existence of this type of file might indicate an exploitation taking advantage of arbitrary file upload vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium | | **Possible Cryptocoinminer download detected**<br>(AppServices_CryptoCoinMinerDownload) | Analysis of host data has detected the download of a file normally associated with digital currency mining.<br>(Applies to: App Service on Linux) | Defense Evasion, Command and Control, Exploitation | Medium |
Learn more about [Defender for Cloud latest protections against the abuse of Azu
| **Suspicious process executed**<br>(AppServices_KnownCredential AccessTools) | Machine logs indicate that the suspicious process: '%{process path}' was running on the machine, often associated with attacker attempts to access credentials.<br>(Applies to: App Service on Windows) | Credential Access | High | | **Suspicious process name detected**<br>(AppServices_ProcessWithKnownSuspiciousExtension) | Analysis of host data on {NAME} detected a process whose name is suspicious, for example corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised.<br>(Applies to: App Service on Windows) | Persistence, Defense Evasion | Medium | | **Suspicious SVCHOST process executed**<br>(AppServices_SVCHostFromInvalidPath) | The system process SVCHOST was observed running in an abnormal context. Malware often use SVCHOST to mask its malicious activity.<br>(Applies to: App Service on Windows) | Defense Evasion, Execution | High |
-| **Suspicious User Agent detected**<br>(AppServices_UserAgentInjection) | Azure App Service activity log indicates requests with suspicious user agent. This behavior can indicate on attempts to exploit a vulnerability in your App Service application.<br>(Applies to: App Service on Windows and App Service on Linux) | Initial Access | Medium |
+| **Suspicious User Agent detected**<br>(AppServices_UserAgentInjection) | Azure App Service activity log indicates requests with a suspicious user agent. This behavior can indicate attempts to exploit a vulnerability in your App Service application.<br>(Applies to: App Service on Windows and App Service on Linux) | Initial Access | Informational |
| **Suspicious WordPress theme invocation detected**<br>(AppServices_WpThemeInjection) | Azure App Service activity log indicates a possible code injection activity on your App Service resource.<br>The suspicious activity detected resembles that of a manipulation of WordPress theme to support server side execution of code, followed by a direct web request to invoke the manipulated theme file.<br>This type of activity was seen in the past as part of an attack campaign over WordPress.<br>If your App Service resource isn't hosting a WordPress site, it isn't vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | High | | **Vulnerability scanner detected**<br>(AppServices_DrupalScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting a content management system (CMS).<br>If your App Service resource isn't hosting a Drupal site, it isn't vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows) | PreAttack | Low | | **Vulnerability scanner detected**<br>(AppServices_JoomlaScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting Joomla applications.<br>If your App Service resource isn't hosting a Joomla site, it isn't vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Low |
Microsoft Defender for Containers provides security alerts on the cluster level
| | | :-: | - | | **Exposed Postgres service with trust authentication configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresTrustAuth) | Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer. The service is configured with trust authentication method, which doesn't require credentials. | InitialAccess | Medium | | **Exposed Postgres service with risky configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresBroadIPRange) | Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer with a risky configuration. Exposing the service to a wide range of IP addresses poses a security risk. | InitialAccess | Medium |
-| **Attempt to create a new Linux namespace from a container detected**<br>(K8S.NODE_NamespaceCreation) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container in Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker tries to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium |
+| **Attempt to create a new Linux namespace from a container detected**<br>(K8S.NODE_NamespaceCreation) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container in Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker tries to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Informational |
| **A history file has been cleared**<br>(K8S.NODE_HistoryFileCleared) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium | | **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiActivity) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn\'t consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium | | **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation, which isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium | | **An uncommon connection attempt detected**<br>(K8S.NODE_SuspectConnection) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
-| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relations to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
-| **Anomalous secret access (Preview)**<br>(K8S_AnomalousSecretAccess) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected secret access request which is anomalous based on previous secret access activity. This activity is considered an anomaly when taking into account how the different features seen in the secret access operation are in relations to one another. The features monitored by this analytics include the user name used, the name of the secret, the name of the namespace, user agent used in the operation, or other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert extended properties. | CredentialAccess | Medium |
| **Attempt to stop apt-daily-upgrade.timer service detected**<br>(K8S.NODE_TimerServiceDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational | | **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
-| **Command within a container running with high privileges**<br>(K8S.NODE_PrivilegedExecutionInContainer) <sup>[1](#footnote1)</sup> | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low |
-| **Container running in privileged mode**<br>(K8S.NODE_PrivilegedContainerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a Docker command that is running a privileged container. The privileged container has full access to the hosting pod or host resource. If compromised, an attacker may use the privileged container to gain access to the hosting pod or host. | PrivilegeEscalation, Execution | Low |
-| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium |
+| **Command within a container running with high privileges**<br>(K8S.NODE_PrivilegedExecutionInContainer) <sup>[1](#footnote1)</sup> | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Informational |
+| **Container running in privileged mode**<br>(K8S.NODE_PrivilegedContainerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a Docker command that is running a privileged container. The privileged container has full access to the hosting pod or host resource. If compromised, an attacker may use the privileged container to gain access to the hosting pod or host. | PrivilegeEscalation, Execution | Informational |
+| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Informational |
| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
-| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
+| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Informational |
| **Detected file download from a known malicious source**<br>(K8S.NODE_SuspectDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium |
-| **Detected suspicious file download**<br>(K8S.NODE_SuspectDownloadArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious download of a remote file. | Persistence | Low |
+| **Detected suspicious file download**<br>(K8S.NODE_SuspectDownloadArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious download of a remote file. | Persistence | Informational |
| **Detected suspicious use of the nohup command**<br>(K8S.NODE_SuspectNohup) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It's rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium | | **Detected suspicious use of the useradd command**<br>(K8S.NODE_SuspectUserAddition) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the useradd command. | Persistence | Medium | | **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High | | **Digital currency mining related behavior detected**<br>(K8S.NODE_DigitalCurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an execution of a process or command normally associated with digital currency mining. | Execution | High |
-| **Docker build operation detected on a Kubernetes node**<br>(K8S.NODE_ImageBuildOnNode) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low |
-| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) <sup>[3](#footnote3)</sup> | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low |
+| **Docker build operation detected on a Kubernetes node**<br>(K8S.NODE_ImageBuildOnNode) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Informational |
| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: <https://aka.ms/exposedkubeflow-blog> | Initial Access | Medium | | **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High | | **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low | | **Manipulation of host firewall detected**<br>(K8S.NODE_FirewallDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium | | **Microsoft Defender for Cloud test alert (not a threat).**<br>(K8S.NODE_EICAR) <sup>[1](#footnote1)</sup> | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
-| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespaces shouldn't contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
-| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespaces shouldn't contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Informational |
+| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Informational |
| **Possible attack tool detected**<br>(K8S.NODE_KnownLinuxAttackTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious tool invocation. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
| **Possible backdoor detected**<br>(K8S.NODE_LinuxBackdoorArtifact) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
| **Possible command line exploitation attempt**<br>(K8S.NODE_ExploitAttempt) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
| **Possible password change using crypt-method detected**<br>(K8S.NODE_SuspectPasswordChange) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
| **Potential port forwarding to external IP address**<br>(K8S.NODE_SuspectPortForwarding) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium |
| **Potential reverse shell detected**<br>(K8S.NODE_ReverseShell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
-| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low |
+| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Informational |
| **Process associated with digital currency mining detected**<br>(K8S.NODE_CryptoCoinMinerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
-| **Process seen accessing the SSH authorized keys file in an unusual way**<br>(K8S.NODE_SshKeyAccess) <sup>[1](#footnote1)</sup> | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low |
-| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **Process seen accessing the SSH authorized keys file in an unusual way**<br>(K8S.NODE_SshKeyAccess) <sup>[1](#footnote1)</sup> | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Informational |
+| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Informational |
| **Security-related process termination detected**<br>(K8S.NODE_SuspectProcessTermination) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
-| **SSH server is running inside a container**<br>(K8S.NODE_ContainerSSH) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium |
+| **SSH server is running inside a container**<br>(K8S.NODE_ContainerSSH) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Informational |
| **Suspicious file timestamp modification**<br>(K8S.NODE_TimestampTampering) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
| **Suspicious request to Kubernetes API**<br>(K8S.NODE_KubernetesAPI) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium |
| **Suspicious request to the Kubernetes Dashboard**<br>(K8S.NODE_KubernetesDashboard) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium |
| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium |
| **PREVIEW - Suspicious management session using PowerShell detected**<br>(ARM_UnusedAppPowershellPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal that doesn't regularly use PowerShell to manage the subscription environment is now using PowerShell, and performing actions that can secure persistence for an attacker. | Persistence | Medium |
| **PREVIEW - Suspicious management session using Azure portal detected**<br>(ARM_UnusedAppIbizaPersistence) | Analysis of your subscription activity logs has detected suspicious behavior. A principal that doesn't regularly use the Azure portal (Ibiza) to manage the subscription environment (hasn't used the Azure portal to manage in the last 45 days, or a subscription that it actively manages) is now using the Azure portal and performing actions that can secure persistence for an attacker. | Persistence | Medium |
-| **Privileged custom role created for your subscription in a suspicious way (Preview)**<br>(ARM_PrivilegedRoleDefinitionCreation) | Microsoft Defender for Resource Manager detected a suspicious creation of privileged custom role definition in your subscription. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to create a privileged role to use in the future to evade detection. | Privilege Escalation, Defense Evasion | Low |
+| **Privileged custom role created for your subscription in a suspicious way (Preview)**<br>(ARM_PrivilegedRoleDefinitionCreation) | Microsoft Defender for Resource Manager detected a suspicious creation of privileged custom role definition in your subscription. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to create a privileged role to use in the future to evade detection. | Privilege Escalation, Defense Evasion | Informational |
| **Suspicious Azure role assignment detected (Preview)**<br>(ARM_AnomalousRBACRoleAssignment) | Microsoft Defender for Resource Manager identified a suspicious Azure role assignment / performed using PIM (Privileged Identity Management) in your tenant which might indicate that an account in your organization was compromised. The identified operations are designed to allow administrators to grant principals access to Azure resources. While this activity may be legitimate, a threat actor might utilize role assignment to escalate their permissions allowing them to advance their attack. | Lateral Movement, Defense Evasion | Low (PIM) / High |
| **Suspicious invocation of a high-risk 'Credential Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Credential Access | Medium |
| **Suspicious invocation of a high-risk 'Data Collection' operation detected (Preview)**<br>(ARM_AnomalousOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Collection | Medium |
|--|--|:--:|--|
| **Network communication with a malicious machine detected**<br>(Network_CommunicationWithC2) | Network traffic analysis indicates that your machine (IP %{Victim IP}) has communicated with what is possibly a Command and Control center. When the compromised resource is a load balancer or an application gateway, the suspected activity might indicate that one or more of the resources in the backend pool (of the load balancer or application gateway) has communicated with what is possibly a Command and Control center. | Command and Control | Medium |
| **Possible compromised machine detected**<br>(Network_ResourceIpIndicatedAsMalicious) | Threat intelligence indicates that your machine (at IP %{Machine IP}) may have been compromised by malware of type Conficker. Conficker was a computer worm that targeted the Microsoft Windows operating system and was first detected in November 2008. Conficker infected millions of computers, including government, business, and home computers in over 200 countries/regions, making it the largest known computer worm infection since the 2003 Welchia worm. | Command and Control | Medium |
-| **Possible incoming %{Service Name} brute force attempts detected**<br>(Generic_Incoming_BF_OneToOne) | Network traffic analysis detected incoming %{Service Name} communication to %{Victim IP}, associated with your resource %{Compromised Host} from %{Attacker IP}. When the compromised resource is a load balancer or an application gateway, the suspected incoming traffic has been forwarded to one or more of the resources in the backend pool (of the load balancer or application gateway). Specifically, sampled network data shows suspicious activity between %{Start Time} and %{End Time} on port %{Victim Port}. This activity is consistent with brute force attempts against %{Service Name} servers. | PreAttack | Medium |
+| **Possible incoming %{Service Name} brute force attempts detected**<br>(Generic_Incoming_BF_OneToOne) | Network traffic analysis detected incoming %{Service Name} communication to %{Victim IP}, associated with your resource %{Compromised Host} from %{Attacker IP}. When the compromised resource is a load balancer or an application gateway, the suspected incoming traffic has been forwarded to one or more of the resources in the backend pool (of the load balancer or application gateway). Specifically, sampled network data shows suspicious activity between %{Start Time} and %{End Time} on port %{Victim Port}. This activity is consistent with brute force attempts against %{Service Name} servers. | PreAttack | Informational |
| **Possible incoming SQL brute force attempts detected**<br>(SQL_Incoming_BF_OneToOne) | Network traffic analysis detected incoming SQL communication to %{Victim IP}, associated with your resource %{Compromised Host}, from %{Attacker IP}. When the compromised resource is a load balancer or an application gateway, the suspected incoming traffic has been forwarded to one or more of the resources in the backend pool (of the load balancer or application gateway). Specifically, sampled network data shows suspicious activity between %{Start Time} and %{End Time} on port %{Port Number} (%{SQL Service Type}). This activity is consistent with brute force attempts against SQL servers. | PreAttack | Medium |
| **Possible outgoing denial-of-service attack detected**<br>(DDOS) | Network traffic analysis detected anomalous outgoing activity originating from %{Compromised Host}, a resource in your deployment. This activity may indicate that your resource was compromised and is now engaged in denial-of-service attacks against external endpoints. When the compromised resource is a load balancer or an application gateway, the suspected activity might indicate that one or more of the resources in the backend pool (of the load balancer or application gateway) was compromised. Based on the volume of connections, we believe that the following IPs are possibly the targets of the DOS attack: %{Possible Victims}. Note that it is possible that the communication to some of these IPs is legitimate. | Impact | Medium |
| **Suspicious incoming RDP network activity from multiple sources**<br>(RDP_Incoming_BF_ManyToOne) | Network traffic analysis detected anomalous incoming Remote Desktop Protocol (RDP) communication to %{Victim IP}, associated with your resource %{Compromised Host}, from multiple sources. When the compromised resource is a load balancer or an application gateway, the suspected incoming traffic has been forwarded to one or more of the resources in the backend pool (of the load balancer or application gateway). Specifically, sampled network data shows %{Number of Attacking IPs} unique IPs connecting to your resource, which is considered abnormal for this environment. This activity may indicate an attempt to brute force your RDP end point from multiple hosts (Botnet). | PreAttack | Medium |
| **Suspicious outgoing RDP network activity**<br>(RDP_Outgoing_BF_OneToOne) | Network traffic analysis detected anomalous outgoing Remote Desktop Protocol (RDP) communication to %{Victim IP} originating from %{Compromised Host} (%{Attacker IP}), a resource in your deployment. When the compromised resource is a load balancer or an application gateway, the suspected outgoing traffic originated from one or more of the resources in the backend pool (of the load balancer or application gateway). Specifically, sampled network data shows %{Number of Connections} outgoing connections from your resource, which is considered abnormal for this environment. This activity may indicate that your machine was compromised and is now used to brute force external RDP end points. Note that this type of activity could possibly cause your IP to be flagged as malicious by external entities. | Lateral Movement | High |
| **Suspicious outgoing SSH network activity to multiple destinations**<br>(SSH_Outgoing_BF_OneToMany) | Network traffic analysis detected anomalous outgoing SSH communication to multiple destinations originating from %{Compromised Host} (%{Attacker IP}), a resource in your deployment. When the compromised resource is a load balancer or an application gateway, the suspected outgoing traffic originated from one or more of the resources in the backend pool (of the load balancer or application gateway). Specifically, sampled network data shows your resource connecting to %{Number of Attacked IPs} unique IPs, which is considered abnormal for this environment. This activity may indicate that your resource was compromised and is now used to brute force external SSH end points. Note that this type of activity could possibly cause your IP to be flagged as malicious by external entities. | Discovery | Medium |
| **Suspicious outgoing SSH network activity**<br>(SSH_Outgoing_BF_OneToOne) | Network traffic analysis detected anomalous outgoing SSH communication to %{Victim IP} originating from %{Compromised Host} (%{Attacker IP}), a resource in your deployment. When the compromised resource is a load balancer or an application gateway, the suspected outgoing traffic originated from one or more of the resources in the backend pool (of the load balancer or application gateway). Specifically, sampled network data shows %{Number of Connections} outgoing connections from your resource, which is considered abnormal for this environment. This activity may indicate that your resource was compromised and is now used to brute force external SSH end points. Note that this type of activity could possibly cause your IP to be flagged as malicious by external entities. | Lateral Movement | Medium |
-| **Traffic detected from IP addresses recommended for blocking** | Microsoft Defender for Cloud detected inbound traffic from IP addresses that are recommended to be blocked. This typically occurs when this IP address doesn't communicate regularly with this resource. Alternatively, the IP address has been flagged as malicious by Defender for Cloud's threat intelligence sources. | Probing | Low |
+| **Traffic detected from IP addresses recommended for blocking** <br>(Network_TrafficFromUnrecommendedIP) | Microsoft Defender for Cloud detected inbound traffic from IP addresses that are recommended to be blocked. This typically occurs when this IP address doesn't communicate regularly with this resource. Alternatively, the IP address has been flagged as malicious by Defender for Cloud's threat intelligence sources. | Probing | Informational |
## <a name="alerts-azurekv"></a>Alerts for Azure Key Vault
defender-for-cloud Configure Servers Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-servers-coverage.md
+
+ Title: Configure monitoring coverage
+description: Learn how to configure the different monitoring components that are available in Defender for Servers in Microsoft Defender for Cloud.
+ Last updated : 01/25/2024++
+# Configure monitoring coverage
+
+Microsoft Defender for Cloud's Defender for Servers plans contain components that monitor your environments to provide extended coverage on your servers. Each component can be enabled, disabled, or configured to meet your specific requirements.
+
+| Component | Availability | Description | Learn more |
+|--|--|--|--|
+| [Log Analytics agent](plan-defender-for-servers-agents.md) | Plan 1 and Plan 2 | Collects security-related configurations and event logs from the machine and stores the data in your Log Analytics default or custom workspace for analysis. | [Learn more](../azure-monitor/agents/log-analytics-agent.md) about the Log Analytics agent. |
+| [Vulnerability assessment for machines](deploy-vulnerability-assessment-defender-vulnerability-management.md) | Plan 1 and Plan 2 |Enables vulnerability assessment on your Azure and hybrid machines. | [Learn more](monitoring-components.md) about how Defender for Cloud collects data. |
+| [Endpoint protection](integration-defender-for-endpoint.md) | Plan 1 and Plan 2 | Enables protection powered by Microsoft Defender for Endpoint, including automatic agent deployment to your servers, and security data integration with Defender for Cloud | [Learn more](integration-defender-for-endpoint.md) about endpoint protection. |
+| [Agentless scanning for machines](concept-agentless-data-collection.md) | Plan 2 | Scans your machines for installed software and vulnerabilities without relying on agents or impacting machine performance. | [Learn more](concept-agentless-data-collection.md) about agentless scanning for machines. |
+
+When you enable Defender for Servers plan 2, all of these components are toggled to **On** by default.
+
+## Configure Log Analytics agent
+
+After you enable the Log Analytics agent, you can select which workspace it should use.
+
+**To configure the Log Analytics agent**:
+
+1. Select **Edit configuration**.
+
+ :::image type="content" source="media/configure-servers-coverage/edit-configuration-log.png" alt-text="Screenshot that shows you where on the screen you need to select edit configuration, to edit the log analytics agent/azure monitor agent." lightbox="media/configure-servers-coverage/edit-configuration-log.png":::
+
+1. Select either the **Default workspace(s)** or a **Custom workspace**, depending on your need.
+
+ :::image type="content" source="media/configure-servers-coverage/auto-provisioning-screen.png" alt-text="Screenshot of the auto provisioning configuration screen with the available options to select." lightbox="media/configure-servers-coverage/auto-provisioning-screen.png":::
+
+1. Select **Apply**.
+
+1. Select **Continue**.
+
+## Configure vulnerability assessment for machines
+
+Vulnerability assessment for machines allows you to select between two vulnerability assessment solutions:
+
+- Microsoft Defender Vulnerability Management
+- Microsoft Defender for Cloud integrated Qualys scanner
+
+**To select either of the vulnerability assessment solutions**:
+
+1. Select **Edit configuration**.
+
+ :::image type="content" source="media/configure-servers-coverage/vulnerability-edit.png" alt-text="Screenshot that shows you where to select edit for vulnerabilities assessment for machines." lightbox="media/configure-servers-coverage/vulnerability-edit.png":::
+
+1. In the Extension deployment configuration window, select either of the solutions depending on your need.
+
+1. Select **Apply**.
+
+1. Select **Continue**.
+
+## Configure endpoint protection
+
+With Microsoft Defender for Servers, you enable the protections provided by [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide) for your server resources. Defender for Endpoint includes automatic agent deployment to your servers, and security data integration with Defender for Cloud.
+
+**To configure endpoint protection**:
+
+1. Toggle the switch to **On**.
+
+1. Select **Continue**.
+
+## Configure agentless scanning for machines
+
+Defender for Cloud can scan your Azure machines for installed software and vulnerabilities without installing agents, requiring network connectivity, or affecting your machine's performance.
+
+**To configure agentless scanning for machines**:
+
+1. Select **Edit configuration**.
+
+ :::image type="content" source="media/configure-servers-coverage/agentless-scanning-edit.png" alt-text="Screenshot that shows where you need to select to edit the configuration of the agentless scanner." lightbox="media/configure-servers-coverage/agentless-scanning-edit.png":::
+
+1. Enter a tag name and tag value for any machines to be excluded from scans.
+
+1. Select **Apply**.
+
+1. Select **Continue**.
+
+Learn more about agentless scanning and how to [enable agentless scanning](enable-agentless-scanning-vms.md) on other cloud environments.
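Because exclusion is driven by the tag pair you enter in the configuration above, you can also apply that tag to machines programmatically. The following Python sketch merges a tag onto a VM with the Azure SDK; the subscription, resource group, VM name, and the tag name/value (`AgentlessScanExclude`/`true`) are placeholders or hypothetical values that must match whatever you configured.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import Tags, TagsPatchResource

# Assumptions: IDs are placeholders; the tag name/value pair is hypothetical
# and must match the exclusion tag you entered in the portal configuration.
SUB = "<subscription-id>"
VM_SCOPE = (
    f"/subscriptions/{SUB}/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)

client = ResourceManagementClient(DefaultAzureCredential(), SUB)

# Merge the exclusion tag into the VM's existing tags without replacing them.
client.tags.begin_update_at_scope(
    VM_SCOPE,
    TagsPatchResource(
        operation="Merge",
        properties=Tags(tags={"AgentlessScanExclude": "true"}),
    ),
).result()
```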
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 01/16/2024 Last updated : 01/25/2024 # What's new in Microsoft Defender for Cloud?
If you're looking for items older than six months, you can find them in the [Arc
## January 2024
-| Date | Update |
-|--|--|
+|Date | Update |
+|-|-|
+| January 25 | [Deprecation of security alerts and update of security alerts to informational severity level](#deprecation-of-security-alerts-and-update-of-security-alerts-to-informational-severity-level) |
| January 24 | [Agentless container posture for GCP in Defender for Containers and Defender CSPM (Preview)](#agentless-container-posture-for-gcp-in-defender-for-containers-and-defender-cspm-preview) |
| January 16 | [Public preview of agentless malware scanning for servers](#public-preview-of-agentless-malware-scanning-for-servers) |
| January 15 | [General availability of Defender for Cloud's integration with Microsoft Defender XDR](#general-availability-of-defender-for-clouds-integration-with-microsoft-defender-xdr) |
| January 12 | [DevOps security Pull Request annotations are now enabled by default for Azure DevOps connectors](#devops-security-pull-request-annotations-are-now-enabled-by-default-for-azure-devops-connectors) |
| January 4 | [Recommendations released for preview: Nine new Azure security recommendations](#recommendations-released-for-preview-nine-new-azure-security-recommendations) |
+### Deprecation of security alerts and update of security alerts to informational severity level
+
+January 25, 2024
+
+This announcement includes container security alerts that are deprecated, and security alerts whose severity level is updated to **Informational**.
+
+- The following container security alerts are deprecated:
+
+ - `Anomalous pod deployment (Preview) (K8S_AnomalousPodDeployment)`
+ - `Excessive role permissions assigned in Kubernetes cluster (Preview) (K8S_ServiceAcountPermissionAnomaly)`
+ - `Anomalous access to Kubernetes secret (Preview) (K8S_AnomalousSecretAccess)`
+
+The following security alerts are updated to the **informational** severity level:
+
+- **Alerts for Windows machines**:
+
+ - `Adaptive application control policy violation was audited (VM_AdaptiveApplicationControlWindowsViolationAudited)`
+ - `Adaptive application control policy violation was audited (VM_AdaptiveApplicationControlLinuxViolationAudited)`
+
+- **Alerts for containers**:
+
+ - `Attempt to create a new Linux namespace from a container detected (K8S.NODE_NamespaceCreation)`
+ - `Attempt to stop apt-daily-upgrade.timer service detected (K8S.NODE_TimerServiceDisabled)`
+ - `Command within a container running with high privileges (K8S.NODE_PrivilegedExecutionInContainer)`
+ - `Container running in privileged mode (K8S.NODE_PrivilegedContainerArtifacts)`
+ - `Container with a sensitive volume mount detected (K8S_SensitiveMount)`
+ - `Creation of admission webhook configuration detected (K8S_AdmissionController)`
+ - `Detected suspicious file download (K8S.NODE_SuspectDownloadArtifacts)`
+ - `Docker build operation detected on a Kubernetes node (K8S.NODE_ImageBuildOnNode)`
+ - `New container in the kube-system namespace detected (K8S_KubeSystemContainer)`
+ - `New high privileges role detected (K8S_HighPrivilegesRole)`
+ - `Privileged container detected (K8S_PrivilegedContainer)`
+ - `Process seen accessing the SSH authorized keys file in an unusual way (K8S.NODE_SshKeyAccess)`
+ - `Role binding to the cluster-admin role detected (K8S_ClusterAdminBinding)`
+ - `SSH server is running inside a container (K8S.NODE_ContainerSSH)`
+
+- **Alerts for DNS**:
+
+ - `Communication with suspicious algorithmically generated domain (AzureDNS_DomainGenerationAlgorithm)`
+ - `Communication with suspicious algorithmically generated domain (DNS_DomainGenerationAlgorithm)`
+ - `Communication with suspicious random domain name (Preview) (DNS_RandomizedDomain)`
+ - `Communication with suspicious random domain name (AzureDNS_RandomizedDomain)`
+ - `Communication with possible phishing domain (AzureDNS_PhishingDomain)`
+ - `Communication with possible phishing domain (Preview) (DNS_PhishingDomain)`
+
+- **Alerts for Azure App Service**:
+
+ - `NMap scanning detected (AppServices_Nmap)`
+ - `Suspicious User Agent detected (AppServices_UserAgentInjection)`
+
+- **Alerts for Azure network layer**:
+
+ - `Possible incoming SMTP brute force attempts detected (Generic_Incoming_BF_OneToOne)`
+ - `Traffic detected from IP addresses recommended for blocking (Network_TrafficFromUnrecommendedIP)`
+
+- **Alerts for Azure Resource Manager**:
+
+  - `Privileged custom role created for your subscription in a suspicious way (Preview) (ARM_PrivilegedRoleDefinitionCreation)`
+
+See the full [list of security alerts](alerts-reference.md).
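To gauge how this change affects your environment, you can pull current alerts from the Microsoft.Security alerts API and tally them by severity. The following Python sketch is a minimal example, assuming a subscription ID and an ARM bearer token; pagination via `nextLink` is omitted, and the api-version should be checked against the current alerts API reference.

```python
import requests
from collections import Counter

# Assumptions: SUBSCRIPTION_ID and TOKEN are placeholders you supply;
# verify the api-version against the latest Microsoft.Security alerts API.
SUBSCRIPTION_ID = "<subscription-id>"
TOKEN = "<arm-bearer-token>"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Security/alerts"
)
resp = requests.get(
    url,
    params={"api-version": "2022-01-01"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Count alerts per severity (High/Medium/Low/Informational).
severities = Counter(
    alert["properties"]["severity"] for alert in resp.json().get("value", [])
)
print(severities)
```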
+
### Agentless container posture for GCP in Defender for Containers and Defender CSPM (Preview)

January 24, 2024
defender-for-cloud Tutorial Enable Servers Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-servers-plan.md
You can enable the Defender for Servers plan from the Environment settings page
:::image type="content" source="media/tutorial-enable-servers-plan/enable-servers-plan.png" alt-text="Screenshot that shows you how to toggle the Defender for Servers plan to on." lightbox="media/tutorial-enable-servers-plan/enable-servers-plan.png":::
+Once the plan is enabled, you can [configure the monitoring settings](configure-servers-coverage.md) to suit your needs.
+
### Select a Defender for Servers plan

When you enable the Defender for Servers plan, you choose which plan to enable - Plan 1 or Plan 2. The two plans offer different levels of protection for your resources.
When you enable the Defender for Servers plan, you're then given the option to s
1. Select **Save**.
-### Configure monitoring coverage
-
-There are components that can be enabled and configured to provide extra protections to your environments in the Defender for Servers plans.
-
-| Component | Description | Learn more |
-|:--:|:--:|:--:|
-| [Log Analytics agent](plan-defender-for-servers-agents.md) | Collects security-related configurations and event logs from the machine and stores the data in your Log Analytics workspace for analysis. | [Learn more](../azure-monitor/agents/log-analytics-agent.md) about the Log Analytics agent. |
-| [Vulnerability assessment for machines](deploy-vulnerability-assessment-defender-vulnerability-management.md) | Enables vulnerability assessment on your Azure and hybrid machines. | [Learn more](monitoring-components.md) about how Defender for Cloud collects data. |
-| [Endpoint protection](integration-defender-for-endpoint.md) | Enables protection powered by Microsoft Defender for Endpoint, including automatic agent deployment to your servers, and security data integration with Defender for Cloud | [Learn more](integration-defender-for-endpoint.md) about endpoint protection |
-| [Agentless scanning for machines](concept-agentless-data-collection.md) | Scans your machines for installed software and vulnerabilities without relying on agents or impacting machine performance. | [Learn more](concept-agentless-data-collection.md) about agentless scanning for machines. |
-
-Toggle the corresponding switch to **On**, to enable any of these options.
-
-### Configure Log Analytics agent
-
-After enabling the Log Analytics agent, you'll be presented with the option to select which workspace should be utilized.
-
-**To configure the Log Analytics agent**:
-
-1. Select **Edit configuration**.
-
- :::image type="content" source="media/tutorial-enable-servers-plan/edit-configuration-log.png" alt-text="Screenshot that shows you where on the screen you need to select edit configuration, to edit the log analytics agent/azure monitor agent." lightbox="media/tutorial-enable-servers-plan/edit-configuration-log.png":::
-
-1. Select either a **Default workspace(s)** or a **Custom workspace** depending on your need.
-
- :::image type="content" source="media/tutorial-enable-servers-plan/auto-provisioning-screen.png" alt-text="Screenshot of the auto provisioning configuration screen with the available options to select." lightbox="media/tutorial-enable-servers-plan/auto-provisioning-screen.png":::
-
-1. Select **Apply**.
-
-### Configure vulnerability assessment for machines
-
-Vulnerability assessment for machines allows you to select between two vulnerability assessment solutions:
-
-- Microsoft Defender Vulnerability Management
-- Microsoft Defender for Cloud integrated Qualys scanner
-
-**To select either of the vulnerability assessment solutions**:
-
-1. Select **Edit configuration**.
-
- :::image type="content" source="media/tutorial-enable-servers-plan/vulnerability-edit.png" alt-text="Screenshot that shows you where to select edit for vulnerabilities assessment for machines." lightbox="media/tutorial-enable-servers-plan/vulnerability-edit.png":::
-
-1. In the Extension deployment configuration window, select either of the solutions depending on your need.
-
-1. Select **Apply**.
-
-## Configure endpoint protection
-
-With Microsoft Defender for Servers, you enable the protections provided by [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide) to your server resources. Defender for Endpoint includes automatic agent deployment to your servers, and security data integration with Defender for Cloud.
-
-**To configure endpoint protection**:
-
-1. Toggle the switch to **On**.
-
-### Configure agentless scanning for machines
-
-Defender for Cloud has the ability to scan your Azure machines for installed software and vulnerabilities without requiring you to install agents, have network connectivity or affect your machine's performance.
-
-**To configure agentless scanning for machines**:
-
-1. Select **Edit configuration**.
-
- :::image type="content" source="media/tutorial-enable-servers-plan/agentless-scanning-edit.png" alt-text="Screenshot that shows where you need to select to edit the configuration of the agentless scanner." lightbox="media/tutorial-enable-servers-plan/agentless-scanning-edit.png":::
-
-1. Enter a tag name and tag value for any machines to be excluded from scans.
-
-1. Select **Apply**.
-
-Learn more about agentless scanning and how to [enable agentless scanning](enable-agentless-scanning-vms.md) on other cloud environments.
+Once the plan is enabled, you can [configure the monitoring settings](configure-servers-coverage.md) to suit your needs.
## Enable the plan at the resource level
Supported resource types include:
- On-premises with Azure Arc - Azure Virtual Machine Scale Sets Flex
+Once the plan is enabled, you can [configure the monitoring settings](configure-servers-coverage.md) to suit your needs.
+
### Enablement via REST API

The ability to enable or disable Defender for Servers at the resource level is available exclusively via REST API. Learn how to [interact with the API](/rest/api/defenderforcloud/pricings) to manage Defender for Servers at the resource or subscription level.
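As a rough sketch of what the resource-level call looks like, the following Python example sends a `PUT` to the pricings endpoint for a single VM. The resource ID and token are placeholders, and the api-version and `subPlan` value are assumptions to verify against the linked Pricings API reference.

```python
import requests

# Assumptions: the resource ID, bearer token, and api-version are placeholders;
# check the Pricings API reference for the latest supported version.
RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)
TOKEN = "<arm-bearer-token>"

url = (
    f"https://management.azure.com{RESOURCE_ID}"
    "/providers/Microsoft.Security/pricings/VirtualMachines"
)
# "Standard" enables the plan at this scope; subPlan selects Plan 1 or Plan 2.
body = {"properties": {"pricingTier": "Standard", "subPlan": "P1"}}

resp = requests.put(
    url,
    params={"api-version": "2024-01-01"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
resp.raise_for_status()
print(resp.json()["properties"])
```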
Since Microsoft Defender for Endpoint deployment for eligible machines is a near
- If you plan to roll out and enable Servers Plan 1/Plan 2 at the subscription level and exclude individual existing VMs, make sure you exclude the VMs before (or at the same time) you enable the plan at the subscription level.
- When you plan to exclude new VMs created under a subscription already enabled for P1/P2, make sure you exclude them during or shortly after creation time, to avoid unintentional deployment of Microsoft Defender for Endpoint.
+Once the plan is enabled, you can [configure the monitoring settings](configure-servers-coverage.md) to suit your needs.
+
### Enablement at scale

Use the following base script file to customize it for your specific needs.
Use the following base script file to customize it for your specific needs.
1. Select whether to set pricing by **tag** or by **resource group**.
1. Follow the onscreen instructions.
+Once the plan is enabled, you can [configure the monitoring settings](configure-servers-coverage.md) to suit your needs.
+
### Monitoring coverage status

To monitor your coverage status, you can use the inventory. In the main menu, select **Inventory** and then check the **plan status** in the "Defender for cloud" column:
To monitor your coverage status, you can use the inventory. In the main menu, se
## Next steps
+[Configure monitoring coverage](configure-servers-coverage.md)
[Overview of Microsoft Defender for Servers](defender-for-servers-introduction.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 01/11/2024 Last updated : 01/21/2024 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| [Retirement of the Defender for Cloud Containers Vulnerability Assessment powered by Qualys](#retirement-of-the-defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys) | January 9, 2023 | March 2024 |
| [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) | January 4, 2024 | February 2024 |
| [Upcoming change for the Defender for Cloud's multicloud network requirements](#upcoming-change-for-the-defender-for-clouds-multicloud-network-requirements) | January 3, 2024 | May 2024 |
-| [Deprecation and severity changes to security alerts](#deprecation-and-severity-changes-to-security-alerts) | December 27, 2023 | January 2024 |
| [Deprecation of two DevOps security recommendations](#deprecation-of-two-devops-security-recommendations) | November 30, 2023 | January 2024 |
| [Consolidation of Defender for Cloud's Service Level 2 names](#consolidation-of-defender-for-clouds-service-level-2-names) | November 1, 2023 | December 2023 |
| [Changes to how Microsoft Defender for Cloud's costs are presented in Microsoft Cost Management](#changes-to-how-microsoft-defender-for-clouds-costs-are-presented-in-microsoft-cost-management) | October 25, 2023 | November 2023 |
The list is applicable to all plans and sufficient for full capability of the CS
- Central US (cus): 20.44.10.240/28
- East US 2 (eus2): 20.44.19.128/28
-## Deprecation and severity changes to security alerts
-
-**Announcement date: December 27, 2023**
-
-**Estimated date for change: January 2024**
-
-The following security alerts are set for deprecation or are set for update to the **informational** severity level.
-- The following container security alerts are set for deprecation:
-
- - `Anomalous pod deployment (Preview) (K8S_AnomalousPodDeployment)`
- - `Excessive role permissions assigned in Kubernetes cluster (Preview) (K8S_ServiceAcountPermissionAnomaly)`
- - `Anomalous access to Kubernetes secret (Preview) (K8S_AnomalousSecretAccess)`
-- The following security alerts are set to be updated to the **informational** severity level:
-
- - **Alerts for Windows machines**:
-
- - `Adaptive application control policy violation was audited (VM_AdaptiveApplicationControlWindowsViolationAudited)`
- - `Adaptive application control policy violation was audited (VM_AdaptiveApplicationControlLinuxViolationAudited)`
-
- - **Alerts for containers**:
-
- - `Attempt to create a new Linux namespace from a container detected (K8S.NODE_NamespaceCreation)`
- - `Attempt to stop apt-daily-upgrade.timer service detected (K8S.NODE_TimerServiceDisabled)`
- - `Command within a container running with high privileges (K8S.NODE_PrivilegedExecutionInContainer)`
- - `Container running in privileged mode (K8S.NODE_PrivilegedContainerArtifacts)`
- - `Container with a sensitive volume mount detected (K8S_SensitiveMount)`
- - `Creation of admission webhook configuration detected (K8S_AdmissionController)`
- - `Detected suspicious file download (K8S.NODE_SuspectDownloadArtifacts)`
- - `Docker build operation detected on a Kubernetes node (K8S.NODE_ImageBuildOnNode)`
- - `New container in the kube-system namespace detected (K8S_KubeSystemContainer)`
- - `New high privileges role detected (K8S_HighPrivilegesRole)`
- - `Privileged container detected (K8S_PrivilegedContainer)`
- - `Process seen accessing the SSH authorized keys file in an unusual way (K8S.NODE_SshKeyAccess)`
- - `Role binding to the cluster-admin role detected (K8S_ClusterAdminBinding)`
- - `SSH server is running inside a container (K8S.NODE_ContainerSSH)`
-
- - **Alerts for DNS**:
-
- - `Communication with suspicious algorithmically generated domain (AzureDNS_DomainGenerationAlgorithm)`
- - `Communication with suspicious algorithmically generated domain (DNS_DomainGenerationAlgorithm)`
- - `Communication with suspicious random domain name (Preview) (DNS_RandomizedDomain)`
- - `Communication with suspicious random domain name (AzureDNS_RandomizedDomain)`
- - `Communication with possible phishing domain (AzureDNS_PhishingDomain)`
- - `Communication with possible phishing domain (Preview) (DNS_PhishingDomain)`
-
- - **Alerts for Azure App Service**:
-
- - `NMap scanning detected (AppServices_Nmap)`
- - `Suspicious User Agent detected (AppServices_UserAgentInjection)`
-
- - **Alerts for Azure network layer**
-
- - `Possible incoming SMTP brute force attempts detected (Generic_Incoming_BF_OneToOne)`
- - `Traffic detected from IP addresses recommended for blocking (Network_TrafficFromUnrecommendedIP)`
-
- - **Alerts for Azure Resource Manager**:
-
- - `Privileged custom role created for your subscription in a suspicious way (Preview)(ARM_PrivilegedRoleDefinitionCreation)`
-
-See the full [list of security alerts](alerts-reference.md).
- ## Deprecation of two DevOps security recommendations **Announcement date: November 30, 2023**
event-grid Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authentication-overview.md
Authentication for clients publishing events to Event Grid is supported using th
- Microsoft Entra ID
- Access key or shared access signature (SAS)
-> [!IMPORTANT]
-> Microsoft Entra authentication isn't supported for namespace topics.
-
## Authenticate using Microsoft Entra ID

Microsoft Entra integration for Event Grid resources provides Azure role-based access control (RBAC) for fine-grained control over a client's access to resources. You can use Azure RBAC to grant permissions to a security principal, which may be a user, a group, or an application service principal. Microsoft Entra authenticates the security principal and returns an OAuth 2.0 token. The token can be used to authorize a request to access Event Grid resources (topics, domains, or partner namespaces). For detailed information, see [Authenticate and authorize with the Microsoft identity platform](authenticate-with-microsoft-entra-id.md).
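As an illustration of the token-based flow, the following Python sketch publishes an event using the `azure-identity` and `azure-eventgrid` SDKs. The endpoint, subject, and event type are placeholders, and the calling identity is assumed to hold a data-sending role (such as EventGrid Data Sender) on the topic.

```python
from azure.identity import DefaultAzureCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

# Assumption: the topic endpoint is a placeholder for your topic's endpoint.
endpoint = "https://<topic-name>.<region>-1.eventgrid.azure.net/api/events"

# DefaultAzureCredential obtains a Microsoft Entra OAuth 2.0 token, which the
# client attaches to each publish request instead of an access key or SAS.
client = EventGridPublisherClient(endpoint, DefaultAzureCredential())

client.send(
    EventGridEvent(
        subject="orders/42",                    # hypothetical subject
        event_type="Contoso.Orders.Created",    # hypothetical event type
        data={"orderId": 42},
        data_version="1.0",
    )
)
```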
event-grid Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/transport-layer-security-configure-minimum-version.md
Last updated 01/24/2024
Azure Event Grid topics or domains permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Event Grid topic or domain to require that clients send and receive data with a newer version of TLS. If an Event Grid topic or domain requires a minimum version of TLS, then any requests made with an older version fail. For conceptual information about this feature, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Grid topic or domain](transport-layer-security-enforce-minimum-version.md).
+When creating a Web Hook event subscription, you can configure it to use the same TLS version as the topic or explicitly specify the minimum TLS version. If you do, Event Grid fails to deliver events to any Web Hook that doesn't support at least that TLS version.
+
You can configure the minimum TLS version using the Azure portal or an Azure Resource Manager (ARM) template.

> [!NOTE]
> The screenshots and the sample Resource Manager templates are for Event Grid topics. The screenshots and template for domains are similar.
-## Specify the minimum TLS version in the Azure portal
-You can specify the minimum TLS version when creating an Event Grid topic or a domain in the Azure portal on the **Security** tab.
+## Configure minimum TLS version for a topic or a domain in the Azure portal
+You can specify the minimum TLS version when creating an Event Grid topic or a domain in the Azure portal on the **Security** tab. The screenshots are for an Event Grid topic; the user interface for configuring this property for a domain is similar.
:::image type="content" source="./media/transport-layer-security-configure-minimum-version/create-topic-tls.png" alt-text="Screenshot showing the page to set the minimum TLS version when creating an Event Grid topic.":::
You can also specify the minimum TLS version for an existing topic on the **Conf
:::image type="content" source="./media/transport-layer-security-configure-minimum-version/existing-topic-tls.png" alt-text="Screenshot showing the page to set the minimum TLS version for an existing Event Grid topic.":::
-## Create a template to configure the minimum TLS version
+## Configure minimum TLS version for a topic or a domain using Resource Manager template
-To configure the minimum TLS version for an Event Grid topic or domain with a template, create a template with the `MinimumTlsVersion` property set to 1.0, 1.1, or 1.2. When you create an Event Grid topic or domain with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version.
+To configure the minimum TLS version for an Event Grid topic or domain with a template, create a template with the `MinimumTlsVersion` property set to 1.0, 1.1, or 1.2. When you create an Event Grid topic or domain with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version. The sample template in this article is for an Event Grid topic; the template for configuring the TLS version for a domain is similar.
The following steps describe how to create a template in the Azure portal.
The following steps describe how to create a template in the Azure portal.
> After you update the minimum TLS version for the Event Grid topic or domain, it may take up to 30 seconds before the change is fully propagated.
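If you'd rather script the property change than deploy a template from the portal, the following Python sketch sends the same payload directly to Resource Manager. The IDs, token, and api-version are placeholders, and the JSON property name shown (`minimumTlsVersionAllowed`, the ARM spelling of the `MinimumTlsVersion` property described above) should be verified against the template schema for your API version.

```python
import requests

# Assumptions: placeholders for IDs and token; verify the property name and
# api-version against the Event Grid topic template schema you deploy.
SUB, RG, TOPIC = "<sub-id>", "<resource-group>", "<topic-name>"
TOKEN = "<arm-bearer-token>"

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.EventGrid/topics/{TOPIC}"
)
body = {
    "location": "westus2",  # hypothetical region
    "properties": {"minimumTlsVersionAllowed": "1.2"},
}

resp = requests.put(
    url,
    params={"api-version": "2023-12-15-preview"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
resp.raise_for_status()
```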
+## Configure minimum TLS version for a Web Hook event subscription
+When creating a Web Hook event subscription, you can configure it to use the same TLS version as the topic or explicitly specify the minimum TLS version.
++
+For an existing Web Hook event subscription, navigate to the **Event Subscription** page and update the TLS setting on the **Additional features** tab. By default, the event subscription uses the same TLS version as the topic.
+
+
## Next steps

For more information, see the following article: [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Grid topic or domain](transport-layer-security-enforce-minimum-version.md).
event-grid Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/transport-layer-security-enforce-minimum-version.md
Title: Enforce a minimum TLS version for requests to an Azure Event Grid topic or domain
-description: Configure an Azure Event Grid topic or domain to require a minimum version of Transport Layer Security (TLS) for clients making requests against the topic or domain.
+ Title: Enforce a minimum TLS version for requests to an Azure Event Grid topic, domain, or subscription
+description: Configure an Azure Event Grid topic or domain to require a minimum version of Transport Layer Security (TLS) for clients making requests against the topic, domain, or subscription.
-+ Last updated 01/22/2024
-# Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Grid topic or domain
+# Enforce a minimum required version of Transport Layer Security (TLS) for an Event Grid topic, domain, or subscription
-Communication between a client application and an Azure Grid topic or domain is encrypted using Transport Layer Security (TLS). TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients and services over the Internet. For more information about TLS, see [Transport Layer Security](https://datatracker.ietf.org/wg/tls/about/).
+Communication between a client application and an Azure Event Grid topic, domain, or subscription is encrypted using Transport Layer Security (TLS). For information about TLS in general, see [Transport Layer Security](https://datatracker.ietf.org/wg/tls/about/).
-Azure Event Grid supports choosing a specific TLS version for topics or domains. Currently Azure Event Grid uses TLS 1.2 on public endpoints by default, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
+Azure Event Grid supports choosing a specific TLS version for topics, domains, or subscriptions (when using a Web Hook destination). Currently Azure Event Grid uses TLS 1.2 on public endpoints by default, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
-Azure Event Grid topics or domains permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Event Grid topic or domain to require that clients send and receive data with a newer version of TLS. If an Event Grid topic or domain requires a minimum version of TLS, then any requests made with an older version fail.
+Azure Event Grid topics or domains permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Event Grid topic or domain to require that clients send and receive data with a newer version of TLS. If an Event Grid topic or domain requires a minimum version of TLS, then any requests made with an older version fail.
+
+When creating a Web Hook event subscription, you can configure it to use the same TLS version as the topic or explicitly specify the minimum TLS version. If you do, Event Grid fails to deliver events to any Web Hook that doesn't support at least that TLS version.
> [!IMPORTANT] > If the client is a service, ensure that the service uses the appropriate version of TLS to send requests to Event Grid before you set the required minimum version for an Event Grid topic or domain.
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
Title: 'Quickstart: Create and modify a circuit with ExpressRoute - Azure portal'
-description: In this quickstart, you learn how to create, provision, verify, update, delete, and deprovision an ExpressRoute circuit by using the Azure portal.
+ Title: 'Quickstart: Create and modify ExpressRoute circuits - Azure portal'
+description: In this quickstart, you learn how to create, provision, verify, update, delete, and deprovision ExpressRoute circuits by using the Azure portal.
Previously updated : 08/31/2023 Last updated : 01/24/2024
+zone_pivot_groups: expressroute-experience
-# Quickstart: Create and modify an ExpressRoute circuit
+# Quickstart: Create and modify ExpressRoute circuits
This quickstart shows you how to create an ExpressRoute circuit using the Azure portal and the Azure Resource Manager deployment model. You can also check the status, update, delete, or deprovision a circuit.
This quickstart shows you how to create an ExpressRoute circuit using the Azure
## <a name="create"></a>Create and provision an ExpressRoute circuit
+
+### Sign in to the Azure portal
+
+Sign in to the Azure portal with this [Preview link](https://aka.ms/expressrouteguidedportal). This link is required to access the new preview create experience for an ExpressRoute circuit.
+
+### Create a new ExpressRoute circuit (Preview)
+
+> [!IMPORTANT]
+> Your ExpressRoute circuit is billed from the moment a service key is issued. Ensure that you perform this operation when the connectivity provider is ready to provision the circuit.
+
+1. On the Azure portal menu, select **+ Create a resource**. Search for **ExpressRoute (preview)** and then select **Create**.
+
+ :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/create-expressroute-circuit-new.png" alt-text="Screenshot of preview create experience of ExpressRoute circuit resource.":::
+
+1. Select the **Subscription** and **Resource Group** for the circuit. Then select the type of **Resiliency** for your setup.
+
+   **Maximum Resiliency** - This option provides the highest level of resiliency: two ExpressRoute circuits with local redundancy, in two different ExpressRoute locations.
+
+ :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/maximum-resiliency.png" alt-text="Diagram of maximum resiliency for an ExpressRoute connection.":::
+
+ **Standard Resiliency** - This option provides a single ExpressRoute circuit with local redundancy at a single ExpressRoute location.
+
+ :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/standard-resiliency.png" alt-text="Diagram of standard resiliency for an ExpressRoute connection.":::
+
+1. Enter or select the following information for the respective resiliency type.
+
+ :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/new-configuration.png" alt-text="Screenshot of the new ExpressRoute circuit configuration page.":::
+
+ **Maximum Resiliency**
+
+ | Setting | Value |
+ | | |
+   | Use existing circuit or create new | You can augment resiliency of an existing standard circuit by selecting **Use existing circuit** and selecting an existing circuit for the first location. If you select **Use existing circuit**, you only need to configure the second circuit. If you select **Create new**, enter the following information for both new ExpressRoute circuits. |
+ | Region | Select the region closest to the peering location of the circuit. |
+ | Name | Enter a name for the ExpressRoute circuit. |
+ | Port type | Select if you're connecting to a service provider or directly into Microsoft's global network at a peering location. |
+ | Peering Location (Provider port type) | Select the physical location where you're peering with Microsoft. |
+   | Provider (Provider port type) | Select the internet service provider from whom you're requesting service. |
+ | ExpressRoute Direct resource (Direct port type) | Select the ExpressRoute Direct resource that you want to use. |
+ | Bandwidth | Select the bandwidth for the ExpressRoute circuit. |
+   | SKU | Select the SKU for the ExpressRoute circuit. You can specify **Local** to get the local SKU, **Standard** to get the standard SKU, or **Premium** for the premium add-on. You can change between Local, Standard, and Premium. |
+ | Billing model | Select the billing type for egress data charge. You can specify **Metered** for a metered data plan and **Unlimited** for an unlimited data plan. You can change the billing type from **Metered** to **Unlimited**. |
+
+ > [!IMPORTANT]
+ > * The Peering Location indicates the [physical location](expressroute-locations.md) where you're peering with Microsoft. This is **not** linked to the "Location" property, which refers to the geography where the Azure Network Resource Provider is located. Although the two aren't related, it's a good practice to choose a Network Resource Provider geographically close to the Peering Location of the circuit.
+ > * You can't change the SKU from **Standard/Premium** to **Local** in the Azure portal. To downgrade the SKU to **Local**, use [Azure PowerShell](expressroute-howto-circuit-arm.md) or [Azure CLI](howto-circuit-cli.md); a CLI sketch follows these steps.
+ > * You can't change the type from **Unlimited** to **Metered**.
+
+ Complete the same information for the second ExpressRoute circuit. When you select an ExpressRoute location for the second circuit, you're shown distance information from the first ExpressRoute location to help you choose the second location.
+
+ :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/peering-location-distance.png" alt-text="Screenshot of distance information from first ExpressRoute circuit.":::
+
+ **Standard Resiliency**
+
+ For standard resiliency, you only need to enter information for one ExpressRoute circuit.
+
+1. Select **Review + create** and then select **Create** to deploy the ExpressRoute circuit.
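For readers who script deployments, here's a rough Azure CLI equivalent of the portal flow above — a minimal sketch, not the article's own steps. The resource group, circuit name, provider, and peering location are placeholder values; substitute your own. The final command shows the **Local** downgrade that, per the note above, isn't possible in the portal.

```azurecli
# Create an ExpressRoute circuit through a connectivity provider
# (all names and values below are placeholders).
az network express-route create \
  --resource-group MyResourceGroup \
  --name MyCircuit \
  --location westus2 \
  --peering-location "Silicon Valley" \
  --provider "Equinix" \
  --bandwidth 200 \
  --sku-tier Standard \
  --sku-family MeteredData

# Downgrade the SKU to Local -- supported in CLI/PowerShell but not in the portal.
az network express-route update \
  --resource-group MyResourceGroup \
  --name MyCircuit \
  --sku-tier Local
```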
+++
### Sign in to the Azure portal

From a browser, sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
| Region | West US 2 |
| Name | TestERCircuit |
- :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-create-basic.png" alt-text=" Screenshot of how to configure the resource group and region.":::
+ :::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-create-basic.png" alt-text="Screenshot of how to configure the resource group and region.":::
-1. When you're filling in the values on this page, make sure that you specify the correct SKU tier (Local, Standard, or Premium) and data metering billing model (Unlimited or Metered).
+1. When you enter the values on this page, make sure that you specify the correct SKU tier (Local, Standard, or Premium) and data metering billing model (Unlimited or Metered).
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/expressroute-create-configuration.png" alt-text="Screenshot of how to configure the circuit.":::
From a browser, sign in to the [Azure portal](https://portal.azure.com) and sign
1. Select **Review + create** and then select **Create** to deploy the ExpressRoute circuit.
+
### View the circuits and properties

**View all the circuits**
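From the command line, the equivalent check is straightforward — a hedged sketch reusing the placeholder names from the earlier example; `serviceKey` and `serviceProviderProvisioningState` are standard circuit properties:

```azurecli
# List every ExpressRoute circuit in the current subscription.
az network express-route list --output table

# Show one circuit's service key and provider provisioning state.
az network express-route show \
  --resource-group MyResourceGroup \
  --name MyCircuit \
  --query "{serviceKey:serviceKey, providerState:serviceProviderProvisioningState}"
```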
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| [Count of routes advertised to peer](#advertisedroutes) | Availability | Count | Maximum | Count Of Routes Advertised To Peer by ExpressRouteGateway | roleInstance | Yes |
| [Count of routes learned from peer](#learnedroutes) | Availability | Count | Maximum | Count Of Routes Learned From Peer by ExpressRouteGateway | roleInstance | Yes |
| [Frequency of routes changed](#frequency) | Availability | Count | Total | Frequency of Routes change in ExpressRoute Gateway | roleInstance | Yes |
-| [Number of VMs in virtual network](#vm) | Availability | Count | Maximum | Number of VMs in the Virtual Network | No Dimensions | Yes |
+| [Number of VMs in virtual network](#vm) | Availability | Count | Maximum | Estimated number of VMs in the virtual network | No Dimensions | Yes |
| [Active flows](#activeflows) | Scalability | Count | Average | Number of active flows on ExpressRoute Gateway | roleInstance | Yes |
| [Max flows created per second](#maxflows) | Scalability | FlowsPerSecond | Maximum | Maximum number of flows created per second on ExpressRoute Gateway | roleInstance, direction | Yes |
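These gateway metrics can also be queried programmatically. A minimal Azure CLI sketch, assuming a placeholder ExpressRoute gateway resource ID; the metric name is illustrative, so take exact names from the table above:

```azurecli
# Pull a gateway availability metric at 5-minute granularity
# (resource ID and metric name are placeholders).
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworkGateways/<gateway-name>" \
  --metric "ExpressRouteGatewayCountOfRoutesAdvertisedToPeer" \
  --interval PT5M \
  --aggregation Maximum
```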
external-attack-surface-management Understanding Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md
The following data is specific to the issuer of an SSL certificate.
## Data tabs
-On the rightmost pane of the asset details page, users can access more expansive data related to the selected asset. This data is organized in a series of categorized tabs. The available metadata tabs change depending on the type of asset you're viewing.
+On the rightmost pane of the asset details page, users can access more expansive data related to the selected asset. This data is organized in a series of categorized tabs. The available metadata tabs change depending on the type of asset you're viewing.
+
+Certain tabs display a "Recent only" toggle in the upper-right corner. By default, Defender EASM displays all data collected for each asset, including historical observations that may no longer be active on your current attack surface. While this historical context is valuable for certain use cases, the "Recent only" toggle limits all results on the Asset Details page to those most recently observed on the asset. Use the "Recent only" toggle when you want to view only the data that represents the current state of the asset for remediation purposes.
+
+![Screenshot that highlights the "Recent only" toggle from the Overview page of Asset Details.](media/inventory-1b.png)
+ ### Overview
The **Overview** tab provides more context to ensure that significant insights a
This section can also include dashboard widgets that visualize insights that are relevant to the asset type in question.
-![Screenshot that shows the asset details page Overview pane.](media/Inventory_2.png)
+![Screenshot that shows the asset details page Overview pane.](media/inventory-2.png)
### Discovery chain
external-attack-surface-management Using And Managing Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/using-and-managing-discovery.md
Custom discoveries are organized into discovery groups. They're independent seed
![Screenshot that shows the first page of the discovery group setup.](media/Discovery_4.png)
-5. Select the seeds that you want to use for this discovery group. Seeds are known assets that belong to your organization. The Defender EASM platform scans these entities and maps their connections to other online infrastructure to create your attack surface.
+5. Select the seeds that you want to use for this discovery group. Seeds are known assets that belong to your organization. The Defender EASM platform scans these entities and maps their connections to other online infrastructure to create your attack surface. Since Defender EASM is intended to monitor your attack surface from an external perspective, private IP addresses cannot be included as discovery seeds.
![Screenshot that shows the seed selection page of the discovery group setup.](media/Discovery_5.png)
frontdoor Integrate Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/integrate-storage-account.md
A storage account gives access to the Azure Storage services. The storage accoun
:::image type="content" source="./media/integrate-storage-account/security-settings.png" alt-text="Screenshot of the caching, WAF and private link settings for an endpoint.":::

> [!NOTE]
- > * With Standard tier, you can only use custom rules with WAF.To deploy managed rules and bot protection, choose Premium tier. For detailed comparison, see [Azure Front Door tier comparison](./standard-premium/tier-comparison.md).
+ > * With Standard tier, you can only use custom rules with WAF. To deploy managed rules and bot protection, choose Premium tier. For detailed comparison, see [Azure Front Door tier comparison](./standard-premium/tier-comparison.md).
> * Private Link feature is **only** available with Premium tier.

1. Select **Create** to create the new endpoint. After the endpoint is created, it appears in the endpoint list.

   :::image type="content" source="./media/integrate-storage-account/endpoint-created.png" alt-text="Screenshot of new Front Door endpoint created from Storage account.":::
+> [!NOTE]
+> * The endpoint list shows only Front Door and CDN profiles within the same subscription.
++
## Extra features

From the storage account **Front Door and CDN** page, select the endpoint from the list to open the Front Door endpoint configuration page. On this page, you can enable more Front Door features for your delivery, such as the [rules engine](front-door-rules-engine.md), and configure how traffic gets [load balanced](routing-methods.md).
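If you'd rather script this setup than use the storage account's **Front Door and CDN** page, a rough Azure CLI sketch follows; the profile and endpoint names are placeholders, and Premium is chosen only because the notes above tie Private Link and managed WAF rules to that tier:

```azurecli
# Create a Front Door profile (Premium tier for Private Link and managed rules).
az afd profile create \
  --resource-group MyResourceGroup \
  --profile-name my-afd-profile \
  --sku Premium_AzureFrontDoor

# Add an endpoint to the profile; origins pointing at the storage
# account are configured separately.
az afd endpoint create \
  --resource-group MyResourceGroup \
  --profile-name my-afd-profile \
  --endpoint-name my-storage-endpoint \
  --enabled-state Enabled
```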
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/index.md
quality and ready to deploy today to assist you in meeting your various complian
| [ISO 27001](./iso-27001-2013.md) | Provides guardrails for compliance with ISO 27001. |
| [ISO 27001 Shared Services](./iso27001-shared/index.md) | Provides a set of compliant infrastructure patterns and policy guardrails that help toward ISO 27001 attestation. |
| [ISO 27001 App Service Environment/SQL Database workload](./iso27001-ase-sql-workload/index.md) | Provides more infrastructure to the [ISO 27001 Shared Services](./iso27001-shared/index.md) blueprint sample. |
-| [New Zealand ISM Restricted](./new-zealand-ism.md) | Assigns policies to address specific New Zealand Information Security Manual controls. |
| [SWIFT CSP-CSCF v2020](./swift-2020/index.md) | Aids in SWIFT CSP-CSCF v2020 compliance. |
| [UK OFFICIAL and UK NHS Governance](./ukofficial-uknhs.md) | Provides a set of compliant infrastructure patterns and policy guardrails that help toward UK OFFICIAL and UK NHS attestation. |
| [CAF Foundation](./caf-foundation/index.md) | Provides a set of controls to help you manage your cloud estate in alignment with the [Microsoft Cloud Adoption Framework for Azure (CAF)](/azure/architecture/cloud-adoption/governance/journeys/index). |
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/new-zealand-ism.md
- Title: New Zealand ISM Restricted blueprint sample
-description: Overview of the New Zealand ISM Restricted blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 09/07/2023--
-# New Zealand ISM Restricted blueprint sample
--
-The New Zealand ISM Restricted blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific
-[New Zealand Information Security Manual](https://www.nzism.gcsb.govt.nz/) controls. This blueprint
-helps customers deploy a core set of policies for any Azure-deployed architecture that must
-implement controls for New Zealand ISM Restricted.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/new-zealand-ism.md) provides details
-on policy definitions included within this blueprint and how these policy definitions map to the
-**controls** in the New Zealand Information Security Manual. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policy definitions. For
-more information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints New Zealand ISM Restricted blueprint sample,
-the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **New Zealand ISM Restricted** blueprint sample under _Other
- Samples_ and select **Use this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the New Zealand ISM Restricted blueprint
- sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with New Zealand ISM Restricted controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
-   **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the New
- Zealand ISM Restricted blueprint sample." Then select **Publish** at the bottom of the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since
-   they're defined during the assignment of the blueprint. For a full list of artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|New Zealand ISM Restricted|Policy Assignment|List of users that must be included in Windows VM Administrators group|A semicolon-separated list of users that should be included in the Administrators local group; Ex: Administrator; myUser1; myUser2|
-|New Zealand ISM Restricted|Policy Assignment|List of users that must be excluded from Windows VM Administrators group|A semicolon-separated list of users that should be excluded from the Administrators local group; Ex: Administrator; myUser1; myUser2|
-|New Zealand ISM Restricted|Policy Assignment|List of users that Windows VM Administrators group must only include|A semicolon-separated list of all the expected members of the Administrators local group; Ex: Administrator; myUser1; myUser2|
-|New Zealand ISM Restricted|Policy Assignment|Log Analytics workspace ID for VM agent reporting|ID (GUID) of the Log Analytics workspace where VMs agents should report|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should be enabled for Azure Front Door Service|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Adaptive network hardening recommendations should be applied on internet facing virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: There should be more than one owner assigned to your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Disk encryption should be applied on virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Remote debugging should be turned off for Function Apps|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should use the specified mode for Application Gateway|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|WAF mode requirement for Application Gateway|The Prevention or Detection mode must be enabled on the Application Gateway service|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Transparent Data Encryption on SQL databases should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: Deploy - Configure Dependency agent to be enabled on Windows virtual machines|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: An Azure Active Directory administrator should be provisioned for SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Only secure connections to your Azure Cache for Redis should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Endpoint protection solution should be installed on virtual machine scale sets|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines missing any of specified members in the Administrators group|By selecting 'true', you agree to be charged monthly per Arc connected machine|
-|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: [Preview]: Log Analytics Agent should be enabled for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md)|
-|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Linux OS to add to scope additional to the images in the gallery for policy: [Preview]: Log Analytics Agent should be enabled for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Storage accounts should restrict network access|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: Deploy - Configure Dependency agent to be enabled on Windows virtual machine scale sets|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your virtual machine scale sets should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines that have extra accounts in the Administrators group|By selecting 'true', you agree to be charged monthly per Arc connected machine|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Secure transfer to storage accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should use the specified mode for Azure Front Door Service|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|WAF mode requirement for Azure Front Door Service|The Prevention or Detection mode must be enabled on the Azure Front Door service|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Adaptive application controls for defining safe applications should be enabled on your machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: A maximum of 3 owners should be designated for your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: [Preview]: Storage account public access should be disallowed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: A vulnerability assessment solution should be enabled on your virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should be enabled for Application Gateway|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: CORS should not allow every resource to access your Web Applications|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows web servers that are not using secure communication protocols|By selecting 'true', you agree to be charged monthly per Arc connected machine|
-|New Zealand ISM Restricted|Policy Assignment|Minimum TLS version for Windows web servers|Windows web servers with lower TLS versions will be assessed as non-compliant|
-|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Linux OS to add to scope additional to the images in the gallery for policy: Log Analytics agent should be enabled in virtual machine scale sets for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md)|
-|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: Log Analytics agent should be enabled in virtual machine scale sets for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: External accounts with write permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines that have the specified members in the Administrators group|By selecting 'true', you agree to be charged monthly per Arc connected machine|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Deprecated accounts should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Function App should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Azure subscriptions should have a log profile for Activity Log|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|List of resource types that should have diagnostic logs enabled||
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: System updates should be installed on your machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Latest TLS version should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: MFA should be enabled accounts with write permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Microsoft IaaSAntimalware extension should be deployed on Windows servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Azure DDoS Protection should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: MFA should be enabled on accounts with owner permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Advanced data security should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Advanced data security should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Monitor missing Endpoint Protection in Azure Security Center|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Activity log should be retained for at least one year|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Management ports of virtual machines should be protected with just-in-time network access control|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Service Fabric clusters should only use Azure Active Directory for client authentication|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: API App should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Audit Windows machines on which Windows Defender Exploit Guard is not enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines on which Windows Defender Exploit Guard is not enabled|By selecting 'true', you agree to be charged monthly per Arc connected machine|
-|New Zealand ISM Restricted|Policy Assignment|Compliance state to report for Windows machines on which Windows Defender Exploit Guard is not available|Windows Defender Exploit Guard is only available starting with Windows 10/Windows Server with update 1709. Setting this value to 'Non-Compliant' shows machines with older versions on which Windows Defender Exploit Guard is not available (such as Windows Server 2012 R2) as non-compliant. Setting this value to 'Compliant' shows these machines as compliant.|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: System updates on virtual machine scale sets should be installed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Remote debugging should be turned off for Web Applications|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your machines should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: MFA should be enabled on accounts with read permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerabilities in container security configurations should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Remote debugging should be turned off for API Apps|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Audit Linux machines that allow remote connections from accounts without passwords|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Linux machines that allow remote connections from accounts without passwords|By selecting 'true', you agree to be charged monthly per Arc connected machine|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Deprecated accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Latest TLS version should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'Security Settings - Account Policies'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Enforce password history for Windows VM local accounts|Specifies limits on password reuse - how many times a new password must be created for a user account before the password can be repeated|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Windows machines should meet requirements for 'Security Settings - Account Policies'|By selecting 'true', you agree to be charged monthly per Arc connected machine|
-|New Zealand ISM Restricted|Policy Assignment|Maximum password age for Windows VM local accounts|Specifies the maximum number of days that may elapse before a user account password must be changed; the format of the value is two integers separated by a comma, denoting an inclusive range|
-|New Zealand ISM Restricted|Policy Assignment|Minimum password age for Windows VM local accounts|Specifies the minimum number of days that must elapse before a user account password can be changed|
-|New Zealand ISM Restricted|Policy Assignment|Minimum password length for Windows VM local accounts|Specifies the minimum number of characters that a user account password may contain|
-|New Zealand ISM Restricted|Policy Assignment|Password must meet complexity requirements for Windows VM local accounts|Specifies whether a user account password must be complex; if required, a complex password must not contain part of the user's account name or full name; be at least 6 characters long; contain a mix of uppercase, lowercase, number, and non-alphabetic characters|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Internet-facing virtual machines should be protected with network security groups|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Audit Linux machines that have accounts without passwords|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Linux machines that have accounts without passwords|By selecting 'true', you agree to be charged monthly per Arc connected machine|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: External accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Latest TLS version should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: [Preview]: All Internet traffic should be routed via your deployed Azure Firewall|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerabilities on your SQL databases should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
Title: Details of the policy assignment structure
description: Describes the policy assignment definition used by Azure Policy to relate policy definitions and parameters to resources for evaluation. Last updated 10/03/2022 -- # Azure Policy assignment structure
governance Attestation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md
Title: Details of the Azure Policy attestation structure
description: Describes the components of the Azure Policy attestation JSON object. Last updated 09/23/2022 -- # Azure Policy attestation structure
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
Title: Details of the policy definition structure
description: Describes how policy definitions are used to establish conventions for Azure resources in your organization. Last updated 08/15/2023 -- # Azure Policy definition structure
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
Title: Understand how effects work description: Azure Policy definitions have various effects that determine how compliance is managed and reported.- Last updated 12/19/2023 - # Understand Azure Policy effects
governance Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/event-overview.md
Title: Reacting to Azure Policy state change events
description: Use Azure Event Grid to subscribe to Azure Policy events, which allow applications to react to state changes without the need for complicated code. Last updated 07/12/2022 -- # Reacting to Azure Policy state change events
governance Exemption Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/exemption-structure.md
Title: Details of the policy exemption structure
description: Describes the policy exemption definition used by Azure Policy to exempt resources from evaluation of initiatives or definitions. Last updated 11/03/2022 -- # Azure Policy exemption structure
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
Title: Azure Policy applicability logic
description: Describes the rules Azure Policy uses to determine whether the policy is applied to its assigned resources. Last updated 09/22/2022 -- # What is applicability in Azure Policy?
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
description: Learn how Azure Policy uses Rego and Open Policy Agent to manage cl
Last updated 08/29/2023 -- # Understand Azure Policy for Kubernetes clusters
governance Determine Non Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/determine-non-compliance.md
Title: Determine causes of non-compliance
description: When a resource is non-compliant, there are many possible reasons. Discover what caused the non-compliance with the policy. Last updated 11/30/2023 -- # Determine causes of non-compliance
governance Export Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/export-resources.md
Last updated 04/18/2022
ms.devlang: azurecli-- # Export Azure Policy resources
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources.- Last updated 11/03/2022 - # Get compliance data of Azure resources
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
description: This guide walks you through the remediation of resources that are
Last updated 07/29/2022 -- # Remediate non-compliant resources with Azure Policy
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md
Title: Overview of Azure Policy
description: Azure Policy is a service in Azure that you use to create, assign, and manage policy definitions in your Azure environment. Last updated 06/15/2023 -- # What is Azure Policy?
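As a quick illustration of the create-assign-manage loop this overview describes, a hedged Azure CLI sketch; the rule and names are invented for the example:

```azurecli
# Define a custom policy that denies resources in one region
# (definition name and rule are illustrative).
az policy definition create \
  --name deny-eastus-example \
  --display-name "Deny resources in eastus (example)" \
  --rules '{
    "if": { "field": "location", "equals": "eastus" },
    "then": { "effect": "deny" }
  }'

# Assign the definition at resource group scope.
az policy assignment create \
  --name deny-eastus-assignment \
  --policy deny-eastus-example \
  --resource-group MyResourceGroup
```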
governance Policy Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/policy-glossary.md
Title: Azure Policy glossary description: A glossary defining the terminology used throughout Azure Policy-- Last updated 07/13/2022
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/index.md
Title: Index of policy samples
description: Index of built-ins for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Last updated 1/3/2024 -- # Azure Policy Samples
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **New Zealand ISM Restricted** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[New Zealand ISM Restricted blueprint sample](../../blueprints/samples/new-zealand-ism.md).
-
> [!IMPORTANT]
> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
governance Pattern Deploy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pattern-deploy-resources.md
Title: "Pattern: Deploy resources with a policy definition"
description: This Azure Policy pattern provides an example of how to deploy resources with a deployIfNotExists policy definition. Last updated 05/16/2022 -- # Azure Policy pattern: deploy resources
governance Route State Change Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/route-state-change-events.md
Title: "Tutorial: Route policy state change events to Event Grid with Azure CLI" description: In this tutorial, you configure Event Grid to listen for policy state change events and call a webhook.- Last updated 07/19/2022 - # Tutorial: Route policy state change events to Event Grid with Azure CLI
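The tutorial behind this entry boils down to a single Event Grid subscription. A sketch under assumptions — subscription-scope source and a placeholder webhook endpoint; `Microsoft.PolicyInsights.PolicyStateChanged` is the policy state change event type:

```azurecli
# Route policy state change events for a subscription to a webhook
# (subscription ID and endpoint URL are placeholders).
az eventgrid event-subscription create \
  --name policy-state-changes \
  --source-resource-id "/subscriptions/<subscription-id>" \
  --endpoint "https://contoso.example.com/api/policy-events" \
  --included-event-types Microsoft.PolicyInsights.PolicyStateChanged
```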
governance Guidance For Throttled Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/guidance-for-throttled-requests.md
Title: Guidance for throttled requests in Azure Resource Graph description: Learn to group, stagger, paginate, and query in parallel to avoid requests being throttled in Azure Resource Graph.-- Last updated 01/04/2024
governance Work With Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/work-with-data.md
Title: Work with large data sets description: Understand how to get, format, page, and skip records in large data sets while working with Azure Resource Graph.- Last updated 11/04/2022 - # Working with large Azure resource data sets
governance First Query Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-portal.md
Title: 'Quickstart: Your first portal query' description: In this quickstart, you follow the steps to run your first query from Azure portal using Azure Resource Graph Explorer.-- Last updated 10/12/2022
governance First Query Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-powershell.md
description: In this quickstart, you follow the steps to enable the Resource Gra
Last updated 06/15/2022 -- # Quickstart: Run your first Resource Graph query using Azure PowerShell
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
description: Understand how the Azure Resource Graph service enables complex que
Last updated 01/20/2024 -- # What is Azure Resource Graph?
governance Paginate Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/paginate-powershell.md
Title: 'Paginate Azure Resource Graph query results using Azure PowerShell'
description: In this quickstart, you control the volume Azure Resource Graph query output by using pagination in Azure PowerShell. Last updated 11/11/2022 -- # Quickstart: Paginate Azure Resource Graph query results using Azure PowerShell
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/advanced.md
description: Use Azure Resource Graph to run some advanced queries, including wo
Last updated 12/18/2023 -- # Advanced Resource Graph query samples
governance Starter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/starter.md
Title: Starter query samples description: Use Azure Resource Graph to run some starter queries, including counting resources, ordering resources, or by a specific tag.-- Last updated 12/19/2023
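A couple of the starter patterns named here (counting resources, ordering them) translate directly to the CLI. This sketch assumes the `resource-graph` extension and also shows the `--first`/`--skip` flags that the pagination quickstart above covers:

```azurecli
# One-time: install the Resource Graph CLI extension.
az extension add --name resource-graph

# Count resources by type, largest first.
az graph query -q "Resources | summarize count() by type | order by count_ desc"

# Page through results 100 records at a time.
az graph query -q "Resources | project name, type | order by name asc" --first 100 --skip 100
```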
governance Create Share Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/tutorials/create-share-query.md
Title: "Tutorial: Manage queries in the Azure portal" description: In this tutorial, you create a Resource Graph Query and share the new query with others in the Azure portal.-- Last updated 10/06/2022
lighthouse View Manage Service Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/view-manage-service-providers.md
Title: View and manage service providers description: Customers can view info about Azure Lighthouse service providers, service provider offers, and delegated resources in the Azure portal. Previously updated : 12/18/2023 Last updated : 01/25/2024
To add an offer from a template, select **Add offer** near the top of the page a
## Update service provider offers
-After a customer has added an offer, a service provider may publish an updated version of the same offer to Azure Marketplace, such as to add a new role definition. If a new version of the offer has been published, the **Service provider offers** page will show an "update" icon in the row for that offer. Select this icon to see the differences between the current version of the offer and the new one.
+After a customer has added an offer, a service provider may publish an updated version of the same offer to Azure Marketplace, such as to add a new role definition. If a new version of the offer has been published, the **Service provider offers** page shows an "update" icon in the row for that offer. Select this icon to see the differences between the current version of the offer and the new one.
![Update offer icon](../media/update-offer.jpg)
After you confirm the deletion, that service provider will no longer have access
## Delegate resources
-Before a service provider can access and manage a customer's resources, one or more specific subscriptions and/or resource groups must be delegated. If a customer has added an offer but has not yet delegated any resources, a note will appear at the top of the **Service provider offers** section. The service provider won't be able to work on any resources in the customer's tenant until the delegation is completed.
+Before a service provider can access and manage a customer's resources, one or more specific subscriptions and/or resource groups must be delegated. When a customer adds an offer without delegating any resources, a note appears at the top of the **Service provider offers** section. The service provider can't work on any resources in the customer's tenant until the delegation is completed.
To delegate subscriptions or resource groups:
To delegate subscriptions or resource groups:
1. In the **Offer details** section of the **Delegate resources** page, review the details about the service provider and offer. To review role assignments for the offer, select **Click here to see the details of the selected offer**.

1. In the **Delegate** section, select **Delegate subscriptions** or **Delegate resource groups**.

1. Choose the subscriptions and/or resource groups you'd like to delegate for this offer, then select **Add**.
-1. Select the checkbox at the bottom of the page to confirm that you want to grant this service provider access to the resources that you've selected, then select **Delegate**.
+1. Select the checkbox at the bottom of the page to confirm that you want to grant this service provider access to these resources, then select **Delegate**.
## View delegations
Delegations represent an association of specific customer resources (subscriptio
Filters at the top of the page let you sort and group your delegation information. You can also filter by specific service providers, offers, or keywords.

> [!NOTE]
-> When [viewing role assignments for the delegated scope in the Azure portal](../../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-at-a-scope) or via APIs, customers won't see role assignments or any users from the service provider tenant who have access through Azure Lighthouse. Similarly, users in the service provider tenant won't see role assignments for users in a customer's tenant, regardless of the role they've been assigned.
+> When [viewing role assignments for the delegated scope in the Azure portal](../../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-at-a-scope) or via APIs, customers won't see role assignments for users from the service provider tenant who have access through Azure Lighthouse. Similarly, users in the service provider tenant won't see role assignments for users in a customer's tenant, regardless of the role they've been assigned.
+>
+> Note that [classic administrator](../../role-based-access-control/classic-administrators.md) assignments in a customer tenant may be visible to users in the managing tenant, or the other way around, because classic administrator roles don't use the Resource Manager deployment model.
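From the customer tenant, the delegations described here can also be listed from the command line; a small sketch using the managed services commands:

```azurecli
# List Azure Lighthouse delegations in the customer tenant, including
# the role definitions each one grants.
az managedservices assignment list --include-definition true
```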
## Audit and restrict delegations in your environment

Customers may want to review all subscriptions and/or resource groups that have been delegated to Azure Lighthouse. This is especially useful for those customers with a large number of subscriptions, or who have many users who perform management tasks.
-We provide an [Azure Policy built-in policy definition](../../governance/policy/samples/built-in-policies.md#lighthouse) to [audit delegation of scopes to a managing tenant](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Lighthouse/Lighthouse_Delegations_Audit.json). You can assign this policy to a management group that includes all of the subscriptions that you want to audit. When you check for compliance with this policy, any delegated subscriptions and/or resource groups (within the management group to which the policy is assigned) will be shown in a noncompliant state. You can then review the results and confirm that there are no unexpected delegations.
+We provide an [Azure Policy built-in policy definition](../../governance/policy/samples/built-in-policies.md#lighthouse) to [audit delegation of scopes to a managing tenant](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Lighthouse/Lighthouse_Delegations_Audit.json). You can assign this policy to a management group that includes all of the subscriptions that you want to audit. When you check for compliance with this policy, any delegated subscriptions and/or resource groups (within the management group to which the policy is assigned) are shown in a noncompliant state. You can then review the results and confirm that there are no unexpected delegations.
Another [built-in policy definition](../../governance/policy/samples/built-in-policies.md#lighthouse) lets you [restrict delegations to specific managing tenants](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Lighthouse/AllowCertainManagingTenantIds_Deny.json). This policy can be assigned to a management group that includes any subscriptions for which you want to limit delegations. After the policy is deployed, any attempts to delegate a subscription to a tenant outside of the ones you specify will be denied.
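An assignment of that restriction policy might look like the sketch below. The definition ID and the parameter name are placeholders — confirm both against the built-in definition linked above before assigning:

```azurecli
# Assign the "restrict delegations to specific managing tenants" built-in
# policy at management group scope (IDs and parameter name are placeholders).
az policy assignment create \
  --name restrict-lighthouse-tenants \
  --scope "/providers/Microsoft.Management/managementGroups/<mg-id>" \
  --policy "<built-in-definition-id>" \
  --params '{ "listOfAllowedTenants": { "value": ["<managing-tenant-id>"] } }'
```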
load-balancer Tutorial Add Lb Existing Scale Set Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-add-lb-existing-scale-set-portal.md
Previously updated : 12/29/2022 Last updated : 01/23/2024 # Tutorial: Add Azure Load Balancer to an existing Virtual Machine Scale Set using the Azure portal
-The need may arise when an Azure Load Balancer isn't associated with a Virtual Machine Scale Set.
-
-You may have an existing Virtual Machine Scale Set deployed with an Azure Load Balancer that requires updating.
-
-The Azure portal can be used to add or update an Azure Load Balancer associated with a Virtual Machine Scale Set.
+You might have a Virtual Machine Scale Set that was deployed without an Azure Load Balancer, or an existing Virtual Machine Scale Set with an Azure Load Balancer that requires updating. The Azure portal can be used to add or update an Azure Load Balancer associated with a Virtual Machine Scale Set.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-## Create a virtual network
-
-In this section, you'll create a virtual network for the scale set and the other resources used in the tutorial.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the search box at the top of the portal, enter **Virtual network** and select **Virtual networks** from the search results.
-
-1. Select **Create**.
-
-1. In the **Basics** tab of the **Create virtual network**, enter or select the following information:
-
- | Setting | Value |
- | - | |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet**. |
- | Region | Select **(US) West US 2**. |
-
-1. Select the **IP addresses** tab, or the **Next: IP Addresses** button at the bottom of the page.
-
-1. In the **IP addresses** tab, select **default** under **Subnet name**.
-
-1. In the **Edit subnet** pane, enter **myBackendSubnet** under **Subnet name**.
-
-1. Select **Save**.
-
-1. Select the **Security** tab, or the **Next: Security** button at the bottom of the page.
-
-1. In the **Security** tab, in **BastionHost** select **Enable**.
-
-1. Enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Bastion name | Enter **myBastionHost**. |
- | AzureBastionSubnet address space | Enter **10.1.1.0/26**. |
- | Public IP address | Select **Create new**. </br> Enter **myBastionIP** in **Name**. |
-
-1. Select the **Review + create** tab, or the blue **Review + create** button at the bottom of the page.
-
-1. Select **Create**.
-> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
->
## Create Virtual Machine Scale Set
-In this section, you'll create a Virtual Machine Scale Set that will be attached to a load balancer created later.
+In this section, you create a Virtual Machine Scale Set that is attached to a load balancer created later.
1. In the search box at the top of the portal, enter **Virtual machine scale**, and select **Virtual machine scale sets** from the search results. 1. Select **Create**.
In this section, you'll create a Virtual Machine Scale Set that will be attached
| - | -- | | **Project details** | | | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
+ | Resource group | Select **load-balancer-rg**. |
| **Scale set details** | |
- | Virtual machine scale set name | Enter **myVMScaleSet**. |
- | Region | Select **(US) West US 2**. |
+ | Virtual machine scale set name | Enter **lb-vmss**. |
+ | Region | Select **(US) East US**. |
| Availability zone | Leave the default of **None**. | | **Orchestration** | | | Orchestration mode | Leave the default of **Uniform: optimized for large-scale stateless workloads with identical instances**. |
In this section, you'll create a Virtual Machine Scale Set that will be attached
| Setting | Value | | - | -- | | **Virtual network configuration** | |
- | Virtual network | Select **myVNet**. |
+ | Virtual network | Select **lb-vnet**. |
1. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page. 1. Select **Create**.
-## Create NAT gateway
-
-In this section, you'll create a NAT gateway for outbound connectivity of the virtual machines.
-
-1. In the search box at the top of the portal, enter **NAT gateway**, and select **NAT gateways** in the search results.
-
-1. Select **+ Create**.
-
-1. In the **Basics** tab of **Create network address translation (NAT) gateway** page, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | NAT gateway name | Enter **myNATgateway**. |
- | Region | Select **(US) West US 2**. |
- | Availability zone | Select **None**. |
- | Idle timeout (minutes) | Enter **15**. |
-
-1. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
-
-1. Select **Create a new public IP address** next to **Public IP addresses** in the **Outbound IP** tab.
-
-1. Enter **myPublicIP-nat** in **Name**.
-
-1. Select **OK**.
-
-1. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
-
-1. Select **myVNet** in the pull-down menu under **Virtual network**.
-
-1. Select the check box next to **myBackendSubnet**.
-
-1. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-
-1. Select **Create**.
-## Create load balancer
-
-You'll create a load balancer in this section. The frontend IP, backend pool, load-balancing, and inbound NAT rules are configured as part of the creation.
-
-1. In the search box at the top of the portal, enter **Load balancer**.
-
-1. Select **Load balancers** in the search results.
-
-1. Select **+ Create**.
-
-1. In the **Basics** tab of **Create load balancer**, enter, or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **myLoadBalancer**. |
- | Region | Select **(US) West US 2**. |
- | SKU | Leave the default of **Standard**. |
- | Type | Select **Public**. |
- | Tier | Leave the default of **Regional**. |
-
-1. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-1. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-
-1. Enter **myFrontend** in **Name**.
-
-1. Select **IPv4** or **IPv6** for the **IP version**.
-
- > [!NOTE]
- > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
-
-1. Select **IP address** for the **IP type**.
-
- > [!NOTE]
- > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md).
-
-1. Select **Create new** in **Public IP address**.
-
-1. In **Add a public IP address**, enter **myPublicIP** for **Name**.
-
-1. Select **Zone-redundant** in **Availability zone**.
-
- > [!NOTE]
- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
-
-1. Select **Add**.
-
-1. Select **Next: Backend pools** at the bottom of the page.
-
-1. In the **Backend pools** tab, select **+ Add a backend pool**.
-
-1. Enter or select the following information in **Add backend pool**.
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myBackendPool**. |
- | Virtual network | Select **myVNet (myResourceGroup)**. |
- | Backend Pool Configuration | Select **NIC**. |
- | IP version | Select **IPv4**.
-
-1. Select **Save**.
-
-1. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-
-1. Select **Create**.
### Configure load balancer settings
-In this section, you'll create a backend pool for **myLoadBalancer**.
-
-You'll create a health probe to monitor **HTTP** and **Port 80**. The health probe will monitor the health of the virtual machines in the backend pool.
-
-You'll create a load-balancing rule for **Port 80** with outbound SNAT disabled. The NAT gateway you created earlier will handle the outbound connectivity of the virtual machines.
+In this section, you create a backend pool for **load-balancer**. You create a health probe to monitor HTTP and Port 80 to ensure the health of the virtual machines in the backend pool. Additionally, you create a load-balancing rule for Port 80 with outbound SNAT disabled. The NAT gateway created earlier handles the outbound connectivity of the virtual machines.
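If you prefer scripting these steps, the following Azure CLI sketch is roughly equivalent to the portal steps below; the resource names match this tutorial, but the commands are a hedged sketch rather than part of the tutorial itself:

```azurecli
# Create the backend pool on the existing load balancer.
az network lb address-pool create \
  --resource-group load-balancer-rg \
  --lb-name load-balancer \
  --name lb-backend-pool

# Create an HTTP health probe on port 80.
az network lb probe create \
  --resource-group load-balancer-rg \
  --lb-name load-balancer \
  --name lb-HTTP-probe \
  --protocol Http \
  --port 80 \
  --path /

# Create the load-balancing rule for port 80 with outbound SNAT disabled;
# the NAT gateway handles outbound connectivity.
az network lb rule create \
  --resource-group load-balancer-rg \
  --lb-name load-balancer \
  --name lb-HTTP-rule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name lb-Frontend-IP \
  --backend-pool-name lb-backend-pool \
  --probe-name lb-HTTP-probe \
  --disable-outbound-snat true \
  --idle-timeout 15 \
  --enable-tcp-reset true
```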
1. In the search box at the top of the portal, enter **Load balancer**. 1. Select **Load balancers** in the search results.
-1. Select **myLoadBalancer**.
+1. Select **load-balancer**.
-1. In **myLoadBalancer**, select **Backend pools** in **Settings**.
+1. In **load-balancer**, select **Backend pools** in **Settings**.
1. Select **+ Add** in **Backend pools**.
You'll create a load-balancing rule for **Port 80** with outbound SNAT disabled.
| Setting | Value | | - | -- |
- | Name | Enter **myBackendPool**. |
- | Virtual network | Select **myVNet**. |
+ | Name | Enter **lb-backend-pool**. |
+ | Virtual network | Select **lb-vnet**. |
| Backend Pool Configuration | Leave the default of **NIC**. | 1. Select **Save**.
You'll create a load-balancing rule for **Port 80** with outbound SNAT disabled.
| Setting | Value | | - | -- |
- | Name | Enter **myHTTPRule**. |
+ | Name | Enter **lb-HTTP-rule**. |
| IP Version | Leave the default of **IPv4**. |
- | Frontend IP address | Select **myFrontEnd**. |
- | Backend pool | Select **myBackendPool**. |
+ | Frontend IP address | Select **lb-Frontend-IP**. |
+ | Backend pool | Select **lb-backend-pool**. |
| Protocol | Select the default of **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |
- | Health probe | Select **Create new**.<br/> Enter **myHTTPProbe** for **Name**.</br><br/>Select **HTTP** for **Protocol**.</br><br/> Select **Ok**.</br>|
+ | Health probe | Select **Create new**.<br/>Enter **lb-HTTP-probe** for **Name**.<br/><br/>Select **HTTP** for **Protocol**.<br/><br/>Select **OK**.<br/>|
| Session persistence | Leave the default of **None**. | | Idle timeout (minutes) | Change the slider to **15**. | | TCP reset | Select **Enabled**. |
You'll create a load-balancing rule for **Port 80** with outbound SNAT disabled.
1. Select **Add**. 1. Select **Review + Create** and **Create**.+ ## Add load balancer to scale set
-In this section, you'll go to the scale set in the Azure portal and add a load balancer to the scale set.
+In this section, you add a load balancer to the scale set in the Azure portal.
1. In the search box at the top of the portal, enter **Virtual machine scale**.
-2. In the search results, select **Virtual machine scale sets**.
+1. In the search results, select **Virtual machine scale sets**.
-3. Select **myVMScaleSet**.
+1. Select **lb-vmss**.
-4. In the **Settings** section of **myVMScaleSet**, select **Networking**.
+1. In the **Settings** section of **lb-vmss**, select **Networking**.
-5. Select the **Load balancing** tab in the **Overview** page of the **Networking** settings of **myVMScaleSet**.
+1. Select the **Load balancing** tab in the **Overview** page of the **Networking** settings of **lb-vmss**.
:::image type="content" source="./media/tutorial-add-lb-existing-scale-set-portal/load-balancing-tab.png" alt-text="Select the load balancing tab in networking." border="true":::
-6. Select the blue **Add load balancing** button.
+1. Select the blue **Add load balancing** button.
-7. In **Add load balancing**, enter or select the following information:
+1. In **Add load balancing**, enter or select the following information:
| Setting | Value | | - | -- | | Load balancing options | Select **Azure load balancer**. |
- | Select a load balancer | Select **myLoadBalancer**. |
+ | Select a load balancer | Select **load-balancer**. |
| Backend pool | Select **Use existing**. |
- | Select a backend pool | Select **myBackendPool**. |
+ | Select a backend pool | Select **lb-backend-pool**. |
-8. Select **Save**.
+1. Select **Save**.
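Behind the scenes, saving this association adds the load balancer's backend pool to the scale set's IP configurations. A hedged Azure CLI sketch of the same association follows; the generic `--add` path and the resource IDs are illustrative assumptions:

```azurecli
# Add the backend pool reference to the scale set's first IP configuration.
az vmss update \
  --resource-group load-balancer-rg \
  --name lb-vmss \
  --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerBackendAddressPools \
  '{"id": "/subscriptions/<subscription-id>/resourceGroups/load-balancer-rg/providers/Microsoft.Network/loadBalancers/load-balancer/backendAddressPools/lb-backend-pool"}'

# Roll the updated model out to existing instances (Uniform orchestration mode).
az vmss update-instances \
  --resource-group load-balancer-rg \
  --name lb-vmss \
  --instance-ids '*'
```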
## Clean up resources
If you're not going to continue to use this application, delete
the load balancer and the supporting resources with the following steps: 1. In the search box at the top of the portal, enter **Resource group**.-
-2. Select **Resource groups** in the search results.
-
-3. Select **myResourceGroup**.
-
-4. In the overview page of **myResourceGroup**, select **Delete resource group**.
-
-5. Enter **myResourceGroup** in **TYPE THE RESOURCE GROUP NAME**.
-
-6. Select **Delete**.
+1. Select **Resource groups** in the search results.
+1. Select **load-balancer-rg**.
+1. In the overview page of **load-balancer-rg**, select **Delete resource group**.
+1. Enter **load-balancer-rg** in **TYPE THE RESOURCE GROUP NAME**.
+1. Select **Delete**.
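Alternatively, a single Azure CLI command deletes the resource group and everything in it; `--no-wait` returns immediately while deletion continues in the background:

```azurecli
az group delete --name load-balancer-rg --yes --no-wait
```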
## Next steps
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
Previously updated : 08/31/2022 Last updated : 01/19/2024
In this article, you can learn:
## Prerequisites
-* An AKS cluster running in Azure. If you have not previously used cluster extensions, you need to [register the KubernetesConfiguration service provider](../aks/dapr.md#register-the-kubernetesconfiguration-service-provider).
+* An AKS cluster running in Azure. If you haven't previously used cluster extensions, you need to [register the KubernetesConfiguration service provider](../aks/dapr.md#register-the-kubernetesconfiguration-service-provider).
* Or an Arc Kubernetes cluster is up and running. Follow instructions in [connect existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md). * If the cluster is an Azure RedHat OpenShift Service (ARO) cluster or OpenShift Container Platform (OCP) cluster, you must satisfy other prerequisite steps as documented in the [Reference for configuring Kubernetes cluster](./reference-kubernetes.md#prerequisites-for-aro-or-ocp-clusters) article. * For production purposes, the Kubernetes cluster must have a minimum of **4 vCPU cores and 14-GB memory**. For more information on resource detail and cluster size recommendations, see [Recommended resource planning](./reference-kubernetes.md).
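If you haven't yet registered the KubernetesConfiguration service provider mentioned in the first prerequisite, a minimal Azure CLI sketch:

```azurecli
# One-time registration of the resource provider used by cluster extensions.
az provider register --namespace Microsoft.KubernetesConfiguration

# Confirm the registration state shows "Registered" before continuing.
az provider show --namespace Microsoft.KubernetesConfiguration --query registrationState
```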
In this article, you can learn:
- When your AKS cluster used service principal is converted to use Managed Identity, before installing the extension, all node pools need to be deleted and recreated, rather than updated directly. - [Disabling local accounts](../aks/manage-local-accounts-managed-azure-ad.md#disable-local-accounts) for AKS is **not supported** by Azure Machine Learning. When the AKS Cluster is deployed, local accounts are enabled by default. - If your AKS cluster has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the Azure Machine Learning control plane IP ranges for the AKS cluster. The Azure Machine Learning control plane is deployed across paired regions. Without access to the API server, the machine learning pods can't be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.-- Azure Machine Learning does not support attaching an AKS cluster cross subscription. If you have an AKS cluster in a different subscription, you must first [connect it to Azure-Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md) and specify in the same subscription as your Azure Machine Learning workspace.-- Azure Machine Learning does not guarantee support for all preview stage features in AKS. For example, [Microsoft Entra pod identity](../aks/use-azure-ad-pod-identity.md) is not supported.-- If you've previously followed the steps from [Azure Machine Learning AKS v1 document](./v1/how-to-create-attach-kubernetes.md?view=azureml-api-1&preserve-view=true) to create or attach your AKS as inference cluster, use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md?view=azureml-api-1&preserve-view=true#delete-azureml-fe-related-resources) before you continue the next step.
+- Azure Machine Learning doesn't support attaching an AKS cluster from a different subscription. If you have an AKS cluster in a different subscription, you must first [connect it to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md), specifying the same subscription as your Azure Machine Learning workspace.
+- Azure Machine Learning doesn't guarantee support for all preview stage features in AKS. For example, [Microsoft Entra pod identity](../aks/use-azure-ad-pod-identity.md) isn't supported.
+- If you've followed the steps from [Azure Machine Learning AKS v1 document](./v1/how-to-create-attach-kubernetes.md?view=azureml-api-1&preserve-view=true) to create or attach your AKS as inference cluster, use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md?view=azureml-api-1&preserve-view=true#delete-azureml-fe-related-resources) before you continue the next step.
## Review Azure Machine Learning extension configuration settings
You can use Azure Machine Learning CLI command `k8s-extension create` to deploy
| `internalLoadBalancerProvider` | This config is only applicable for Azure Kubernetes Service (AKS) clusters now. Set to `azure` to allow the inference router to use an internal load balancer. | N/A| Optional | Optional | |`sslSecret`| The name of the Kubernetes secret in the `azureml` namespace. This config is used to store `cert.pem` (PEM-encoded TLS/SSL cert) and `key.pem` (PEM-encoded TLS/SSL key), which are required for inference HTTPS endpoint support when ``allowInsecureConnections`` is set to `False`. For a sample YAML definition of `sslSecret`, see [Configure sslSecret](./how-to-secure-kubernetes-online-endpoint.md). Use this config or a combination of `sslCertPemFile` and `sslKeyPemFile` protected config settings. |N/A| Optional | Optional | |`sslCname` |A TLS/SSL CNAME is used by the inference HTTPS endpoint. **Required** if `allowInsecureConnections=False` | N/A | Optional | Optional|
- | `inferenceRouterHA` |`True` or `False`, default `True`. By default, Azure Machine Learning extension will deploy three inference router replicas for high availability, which requires at least three worker nodes in a cluster. Set to `False` if your cluster has fewer than three worker nodes, in this case only one inference router service is deployed. | N/A| Optional | Optional |
+ | `inferenceRouterHA` |`True` or `False`, default `True`. By default, Azure Machine Learning extension deploys three inference router replicas for high availability, which requires at least three worker nodes in a cluster. Set to `False` if your cluster has fewer than three worker nodes, in this case only one inference router service is deployed. | N/A| Optional | Optional |
|`nodeSelector` | By default, the deployed kubernetes resources and your machine learning workloads are randomly deployed to one or more nodes of the cluster, and DaemonSet resources are deployed to ALL nodes. If you want to restrict the extension deployment and your training/inference workloads to specific nodes with label `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1`, `nodeSelector.key2=value2` correspondingly. | Optional| Optional | Optional | |`installNvidiaDevicePlugin` | `True` or `False`, default `False`. [NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on NVIDIA GPU hardware. By default, Azure Machine Learning extension deployment won't install the NVIDIA Device Plugin, regardless of whether the Kubernetes cluster has GPU hardware. You can set this setting to `True` to install it, but make sure to fulfill the [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites). | Optional |Optional |Optional |
- |`installPromOp`|`True` or `False`, default `True`. Azure Machine Learning extension needs prometheus operator to manage prometheus. Set to `False` to reuse the existing prometheus operator. For more information about reusing the existing prometheus operator, refer to [reusing the prometheus operator](./how-to-troubleshoot-kubernetes-extension.md#prometheus-operator)| Optional| Optional | Optional |
- |`installVolcano`| `True` or `False`, default `True`. Azure Machine Learning extension needs volcano scheduler to schedule the job. Set to `False` to reuse existing volcano scheduler. For more information about reusing the existing volcano scheduler, refer to [reusing volcano scheduler](./how-to-troubleshoot-kubernetes-extension.md#volcano-scheduler) | Optional| N/A | Optional |
- |`installDcgmExporter` |`True` or `False`, default `False`. Dcgm-exporter can expose GPU metrics for Azure Machine Learning workloads, which can be monitored in Azure portal. Set `installDcgmExporter` to `True` to install dcgm-exporter. But if you want to utilize your own dcgm-exporter, refer to [DCGM exporter](./how-to-troubleshoot-kubernetes-extension.md#dcgm-exporter) |Optional |Optional |Optional |
+ |`installPromOp`|`True` or `False`, default `True`. Azure Machine Learning extension needs prometheus operator to manage prometheus. Set to `False` to reuse the existing prometheus operator. For more information about reusing the existing prometheus operator, see [reusing the prometheus operator](./how-to-troubleshoot-kubernetes-extension.md#prometheus-operator)| Optional| Optional | Optional |
+ |`installVolcano`| `True` or `False`, default `True`. Azure Machine Learning extension needs volcano scheduler to schedule the job. Set to `False` to reuse existing volcano scheduler. For more information about reusing the existing volcano scheduler, see [reusing volcano scheduler](./how-to-troubleshoot-kubernetes-extension.md#volcano-scheduler) | Optional| N/A | Optional |
+ |`installDcgmExporter` |`True` or `False`, default `False`. Dcgm-exporter can expose GPU metrics for Azure Machine Learning workloads, which can be monitored in Azure portal. Set `installDcgmExporter` to `True` to install dcgm-exporter. But if you want to utilize your own dcgm-exporter, see [DCGM exporter](./how-to-troubleshoot-kubernetes-extension.md#dcgm-exporter) |Optional |Optional |Optional |
|Configuration Protected Setting Key Name |Description |Training |Inference |Training and Inference |--|--|--|--|--| | `sslCertPemFile`, `sslKeyPemFile` |Path to TLS/SSL certificate and key file (PEM-encoded), required for Azure Machine Learning extension deployment with inference HTTPS endpoint support, when ``allowInsecureConnections`` is set to False. **Note** PEM file with pass phrase protected isn't supported | N/A| Optional | Optional |
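For instance, assuming your PEM files are named `cert.pem` and `key.pem` locally, a Kubernetes secret matching the `sslSecret` description above could be created like this; the secret name is a placeholder:

```azurecli
# Create the namespace first if it doesn't already exist:
#   kubectl create namespace azureml
# Create the secret in the azureml namespace with the cert.pem and key.pem
# keys that the extension expects.
kubectl create secret generic azureml-ssl-secret \
  --namespace azureml \
  --from-file=cert.pem=./cert.pem \
  --from-file=key.pem=./key.pem
```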
-As you can see from above configuration settings table, the combinations of different configuration settings allow you to deploy Azure Machine Learning extension for different ML workload scenarios:
+As you can see from the configuration settings table, the combinations of different configuration settings allow you to deploy Azure Machine Learning extension for different ML workload scenarios:
* For training job and batch inference workload, specify `enableTraining=True` * For inference workload only, specify `enableInference=True`
We list four typical extension deployment scenarios for reference. To deploy ext
``` - **Enable an AKS cluster in Azure for production training and inference workload**
- For Azure Machine Learning extension deployment on AKS, make sure to specify `managedClusters` value for `--cluster-type` parameter. Assuming your cluster has more than three nodes, and you'll use an Azure public load balancer and HTTPS for inference workload support. Run the following Azure CLI command to deploy Azure Machine Learning extension:
+ For Azure Machine Learning extension deployment on AKS, make sure to specify `managedClusters` value for `--cluster-type` parameter. Assuming your cluster has more than three nodes and you use an Azure public load balancer and HTTPS for inference workload support, run the following Azure CLI command to deploy the Azure Machine Learning extension:
```azurecli az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer sslCname=<ssl cname> --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster ``` - **Enable an [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster anywhere for production training and inference workload using NVIDIA GPUs**
- For Azure Machine Learning extension deployment on [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster, make sure to specify `connectedClusters` value for `--cluster-type` parameter. Assuming your cluster has more than three nodes, you'll use a NodePort service type and HTTPS for inference workload support, run following Azure CLI command to deploy Azure Machine Learning extension:
+ For Azure Machine Learning extension deployment on [Arc Kubernetes](../azure-arc/kubernetes/overview.md) cluster, make sure to specify `connectedClusters` value for `--cluster-type` parameter. Assuming your cluster has more than three nodes and you use a NodePort service type and HTTPS for inference workload support, run the following Azure CLI command to deploy the Azure Machine Learning extension:
```azurecli az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=NodePort sslCname=<ssl cname> installNvidiaDevicePlugin=True installDcgmExporter=True --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-RG-name> --scope cluster ```
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
Previously updated : 05/25/2021 Last updated : 01/25/2024 - contperf-fy20q4 - cliv2
In this tutorial, you learn the following tasks:
- Azure subscription. If you don't have one, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). If you're using the free subscription, only CPU clusters are supported. - Install [Visual Studio Code](https://code.visualstudio.com/docs/setup/setup-overview), a lightweight, cross-platform code editor.-- Azure Machine Learning Studio Visual Studio Code extension. For install instructions see the [Setup Azure Machine Learning Visual Studio Code extension guide](./how-to-setup-vs-code.md)
+- Azure Machine Learning studio Visual Studio Code extension. For install instructions, see the [Setup Azure Machine Learning Visual Studio Code extension guide](./how-to-setup-vs-code.md)
- CLI (v2). For installation instructions, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md) - Clone the community driven repository ```bash
In this tutorial, you learn the following tasks:
## Understand the code
-The code for this tutorial uses TensorFlow to train an image classification machine learning model that categorizes handwritten digits from 0-9. It does so by creating a neural network that takes the pixel values of 28 px x 28 px image as input and outputs a list of 10 probabilities, one for each of the digits being classified. Below is a sample of what the data looks like.
+The code for this tutorial uses TensorFlow to train an image classification machine learning model that categorizes handwritten digits from 0-9. It does so by creating a neural network that takes the pixel values of a 28 px x 28 px image as input and outputs a list of 10 probabilities, one for each of the digits being classified. The following is a sample of what the data looks like.
![MNIST Digits](./media/tutorial-train-deploy-image-classification-model-vscode/digits.png) ## Create a workspace
-The first thing you have to do to build an application in Azure Machine Learning is to create a workspace. A workspace contains the resources to train models as well as the trained models themselves. For more information, see [what is a workspace](./concept-workspace.md).
+The first thing you have to do to build an application in Azure Machine Learning is to create a workspace. A workspace contains the resources to train models and the trained models themselves. For more information, see [what is a workspace](./concept-workspace.md).
1. Open the *azureml-examples/cli/jobs/single-step/tensorflow/mnist* directory from the community driven repository in Visual Studio Code. 1. On the Visual Studio Code activity bar, select the **Azure** icon to open the Azure Machine Learning view.
For more information on workspaces, see [how to manage resources in VS Code](how
During the training process, a TensorFlow model is trained by processing the training data and learning patterns embedded within it for each of the respective digits being classified.
-Like workspaces and compute targets, training jobs are defined using resource templates. For this sample, the specification is defined in the *job.yml* file which looks like the following:
+Like workspaces and compute targets, training jobs are defined using resource templates. For this sample, the specification is defined in the *job.yml* file, which looks like the following:
```yml
When the model is done training, the status label next to the run node updates t
## Next steps
-In this tutorial, you learn the following tasks:
-
-> [!div class="checklist"]
-> * Understand the code
-> * Create a workspace
-> * Train a model
-
-For next steps, see:
- * [Launch Visual Studio Code integrated with Azure Machine Learning (preview)](how-to-launch-vs-code-remote.md) * For a walkthrough of how to edit, run, and debug code locally, see the [Python hello-world tutorial](https://code.visualstudio.com/docs/Python/Python-tutorial). * [Run Jupyter Notebooks in Visual Studio Code](how-to-manage-resources-vscode.md) using a remote Jupyter server.
managed-grafana How To Connect Azure Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-azure-monitor-workspace.md
Previously updated : 11/10/2023 Last updated : 01/22/2024 # Add an Azure Monitor workspace to Azure Managed Grafana to collect Prometheus data In this guide, learn how to connect an Azure Monitor workspace to Grafana directly from an Azure Managed Grafana workspace. This feature is designed to provide a quick way to collect Prometheus metrics stored in an Azure Monitor workspace and enables you to monitor your Azure Kubernetes Service (AKS) clusters in Grafana.
-> [!IMPORTANT]
-> The integration of Azure Monitor workspaces within Azure Managed Grafana workspaces is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
To assign the Monitoring Data Reader role:
## Add an Azure Monitor workspace 1. Open your Azure Managed Grafana workspace.
-1. In the left menu, select **Integrations** > **Azure Monitor workspaces (Preview**).
+1. In the left menu, select **Integrations** > **Azure Monitor workspaces**.
:::image type="content" source="media\monitor-integration\add-azure-monitor.png" alt-text="Screenshot of the Grafana roles in the Azure platform.":::
To build a brand new dashboard with Prometheus metrics:
If you no longer need it, you can remove an Azure Monitor workspace from your Azure Managed Grafana workspace:
-1. In your Azure Managed Grafana workspace, select **Integrations** > **Azure Monitor workspaces (Preview**) from the left menu.
+1. In your Azure Managed Grafana workspace, select **Integrations** > **Azure Monitor workspaces** from the left menu.
1. Select the row with the resource to delete and select **Delete** > **Yes**. Optionally also remove the role assignment that was previously added in the Azure Monitor workspace:
managed-grafana How To Manage Plugins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-manage-plugins.md
Previously updated : 10/26/2023 Last updated : 01/22/2024
-# How to manage Grafana plugins (Preview)
+# How to manage Grafana plugins
Grafana supports data source, panel, and app plugins. When you create a new Grafana instance, some plugins, such as Azure Monitor, are installed by default. In the following guide, learn how you can add or remove optional plugins. > [!NOTE] > Installing and removing plugins isn't available from the Grafana UI or the Azure CLI at this stage. Plugin management is done from the Azure Managed Grafana workspace in the Azure portal.
-> [!IMPORTANT]
-> Plugin management is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Prerequisites [An Azure Managed Grafana instance](./how-to-permissions.md)
Grafana supports data source, panel, and app plugins. When you create a new Graf
To install Grafana plugins, follow the process below. 1. Open your Azure Managed Grafana instance in the Azure portal.
-1. Select **Plugin management (Preview)**. This page shows a table with three columns containing checkboxes, plugin names, and plugin IDs. Review the checkboxes. A checked box indicates that the corresponding plugin is already installed and can be removed, an unchecked box indicates that the corresponding plugin isn't installed and can be added.
+1. Select **Plugin management**. This page shows a table with three columns containing checkboxes, plugin names, and plugin IDs. Review the checkboxes. A checked box indicates that the corresponding plugin is already installed and can be removed; an unchecked box indicates that the corresponding plugin isn't installed and can be added.
> [!NOTE] > This page only shows optional plugins. Core Grafana plugins that are included in your pricing plan by default aren't listed here.
To install Grafana plugins, follow the process below.
To remove a plugin that isn't part of the Grafana built-in core plugins, follow the steps below: 1. Open your Azure Managed Grafana instance in the Azure portal.
-1. Select **Plugin management (Preview)**. This page displays a table with data source plugins. It contains three columns including checkboxes, plugin names, and plugin IDs. Review the checkboxes. A checked box indicates that the corresponding plugin is already installed and can be removed, an unchecked box indicates that the corresponding plugin can be added.
+1. Select **Plugin management**. This page displays a table with data source plugins. It contains three columns including checkboxes, plugin names, and plugin IDs. Review the checkboxes. A checked box indicates that the corresponding plugin is already installed and can be removed; an unchecked box indicates that the corresponding plugin can be added.
1. Select a plugin to remove from your Grafana instance by unchecking its checkbox. A refresh icon appears in the table next to the plugin you selected, indicating that a change is pending. :::image type="content" source="media/plugin-management/remove-plugin.png" alt-text="Screenshot of the Plugin management feature data source page. Remove plugin.":::
To remove a plugin that isn't part of the Grafana built-in core plugins, follow
1. A notification appears, indicating if the update operation has been successful. 1. Select **Refresh** above the table to get an updated list of installed plugins.
-> [!IMPORTANT]
-> Plugin management is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!CAUTION] > Removing a data source that is used in a dashboard will make the dashboard unable to collect the corresponding data and will trigger an error or result in no data being shown in the panel.
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-portal.md
In the screenshot below it has been increased to 34 days.
The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the following section. ## Point-in-time restore
-Azure Database for MySQL allows you to restore the server back to a point-in-time and into to a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server.
+Azure Database for MySQL allows you to restore the server back to a point-in-time and into a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server.
-For example, if a table was accidentally dropped at noon today, you could restore to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level.
+For example, if a table was accidentally dropped at noon today, you could restore the server to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level.
The following steps restore the sample server to a point-in-time: 1. In the Azure portal, select your Azure Database for MySQL server.
The following steps restore the sample server to a point-in-time:
:::image type="content" source="./media/how-to-restore-server-portal/3-restore.png" alt-text="Azure Database for MySQL - Restore information"::: - **Restore point**: Select the point-in-time you want to restore to. - **Target server**: Provide a name for the new server.
- - **Location**: You cannot select the region. By default it is same as the source server.
- - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. It is same as the source server.
+ - **Location**: You cannot select the region. By default, it is the same as the source server.
+ - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. It is the same as the source server.
4. Click **OK** to restore the server to the selected point-in-time.
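The same restore can also be scripted. A minimal Azure CLI sketch, where the server names and timestamp are placeholders:

```azurecli
# Restore mydemoserver to a new server at the given UTC point in time.
az mysql server restore \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2024-01-25T11:59:00Z"
```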
Additionally, after the restore operation finishes, there are two server paramet
* time_zone - This value is set to the DEFAULT value **SYSTEM** * event_scheduler - The event_scheduler is set to **OFF** on the restored server
-You will need to copy over the value from teh primary server and set it on the restored server by reconfiguring the [server parameter](how-to-server-parameters.md)
+You will need to copy over the value from the primary server and set it on the restored server by reconfiguring the [server parameter](how-to-server-parameters.md).
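For example, assuming the primary server used the default values noted above, a hedged sketch of reapplying them on the restored server with the Azure CLI:

```azurecli
# Reapply the time_zone value from the primary server (SYSTEM shown as an example).
az mysql server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver-restored \
  --name time_zone \
  --value SYSTEM

# Re-enable the event scheduler if it was ON on the primary server.
az mysql server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver-restored \
  --name event_scheduler \
  --value ON
```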
The new server created during a restore does not have the VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. Firewall rules from the original server are restored.
If you configured your server for geographically redundant backups, a new server
2. Provide the subscription, resource group, and name of the new server.
-3. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo redundant backups enabled.
+3. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo-redundant backups enabled.
:::image type="content" source="./media/how-to-restore-server-portal/3-geo-restore.png" alt-text="Select data source.":::
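Geo-restore is also available from the Azure CLI. A minimal sketch, with placeholder names and an assumed target region and SKU:

```azurecli
# Create a new server in the target region from geo-redundant backups.
az mysql server georestore \
  --resource-group myresourcegroup \
  --name mydemoserver-georestored \
  --source-server mydemoserver \
  --location eastus \
  --sku-name GP_Gen5_8
```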
notification-hubs Notification Hubs Gcm To Fcm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-gcm-to-fcm.md
Previously updated : 12/06/2023 Last updated : 01/25/2024
-ms.lastreviewed: 04/10/2019
+ms.lastreviewed: 01/25/2024
# Azure Notification Hubs and Google Firebase Cloud Messaging migration
openshift Azure Redhat Openshift Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/azure-redhat-openshift-release-notes.md
Azure Red Hat OpenShift receives improvements on an ongoing basis. To stay up to
## Version 4.13 - December 2023
-We're pleased to announce the launch of OpenShift 4.13 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.13](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html). Version 4.11 will be outside of support after January 26, 2024. Existing clusters version 4.11 and below should be upgraded before then.
+We're pleased to announce the launch of OpenShift 4.13 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.13](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html). Version 4.11 will be outside of support after February 10, 2024. Existing clusters version 4.11 and below should be upgraded before then.
## Update - September 2023
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
description: Shows you how to quickly stand up Red Hat JBoss EAP on Azure Red Ha
Previously updated : 05/09/2023 Last updated : 01/25/2024
This article uses the Azure Marketplace offer for JBoss EAP to accelerate your j
> [!NOTE] > You can also execute this guidance from a local developer command line with the Azure CLI installed. To learn how to install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+ >
+ > If you are using a local developer command line, you must install the `mysql` CLI. For instructions, see [How To Install MySQL](https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-ubuntu-20-04).
- Ensure the Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role and the [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
Next, use the following steps to connect to the OpenShift cluster using the Open
```azurecli-interactive oc login \ $(az aro show \
- --resource-group eaparo033123rg \
+ --resource-group ${RG_NAME} \
--name aro-cluster \ --query apiserverProfile.url \ --output tsv) \ -u $(az aro list-credentials \
- --resource-group eaparo033123rg \
+ --resource-group ${RG_NAME} \
--name aro-cluster \ --query kubeadminUsername \ --output tsv) \ -p $(az aro list-credentials \
- --resource-group eaparo033123rg \
+ --resource-group ${RG_NAME} \
--name aro-cluster \ --query kubeadminPassword \ --output tsv)
+ ```
This command produces output similar to the following example:
Use the following steps to deploy the app to the cluster. The app is hosted in t
EOF ```
- You must see `secret/eaparo-sample-pull-secret created` to indicate successful creation of the secret. If you don't see this output, troubleshoot and resolve the problem before proceeding. Finally, link the secret.
+ You must see `secret/eaparo-sample-pull-secret created` to indicate successful creation of the secret. If you don't see this output, troubleshoot and resolve the problem before proceeding. Finally, link the secret to the default service account for downloading container images so the cluster can run them.
```azurecli-interactive oc secrets link default ${CON_REG_SECRET_NAME} --for=pull
Next, use the following steps to create a secret:
javaee-cafe-0 1/1 Running 0 30s ```
- It may take a few minutes to reach the proper state.
+ It may take a few minutes to reach the proper state. You may even see `STATUS` column values including `ErrImagePull` and `ImagePullBackOff` before `Running` is shown.
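To watch the pod work through those intermediate states, you can stream status updates instead of polling:

```azurecli-interactive
# Stream pod status changes until you press Ctrl+C; expect transient
# ErrImagePull / ImagePullBackOff states before Running appears.
oc get pods --watch
```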
1. Run the following command to return the URL of the application. You can use this URL to access the deployed sample app. Copy the output to the clipboard.
Next, use the following steps to create a secret:
1. Paste the output into an Internet-connected web browser, and then press <kbd>Enter</kbd>. You should see the UI of **Java EE Cafe** app similar to the following screenshot: :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/javaee-cafe-ui.png" alt-text="Screenshot of Java EE Cafe app UI." lightbox="media/howto-deploy-java-enterprise-application-platform-app/javaee-cafe-ui.png":::
+
+1. Add and delete some rows to verify that the database connectivity is functioning correctly.
## Clean up resources
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
Last updated 06/24/2023-+ # Deploy IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift
openshift Support Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-lifecycle.md
See the following guide for the [past Red Hat OpenShift Container Platform (upst
|4.8|July 2021| Sept 15 2021|4.10 GA| |4.9|November 2021| February 1 2022|4.11 GA| |4.10|March 2022| June 21 2022|4.12 GA|
-|4.11|August 2022| March 2 2023|January 26 2024|
+|4.11|August 2022| March 2 2023|February 10 2024|
|4.12|January 2023| August 19 2023|October 19 2024| |4.13|May 2023| December 15 2023|February 15 2025|
oracle Oracle Database Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/oracle-database-support.md
In this article, you learn how to get information and support for Oracle Databas
## Oracle Support scope and contact information
-Oracle Support can help you with the following types of Oracle Database@Azure issues:
+Oracle Support is your first line of support for all Oracle Database@Azure issues. Oracle Support can help you with the following types of Oracle Database@Azure issues:
- Database connection issues (Oracle TNS) - Oracle Database performance issues
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
Previously updated : 7/25/2023 Last updated : 1/25/2024 # PgBouncer in Azure Database for PostgreSQL - Flexible Server
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-performance-insight.md
Previously updated : 4/1/2023 Last updated : 1/25/2024 # Query Performance Insight for Azure Database for PostgreSQL - Flexible Server
postgresql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-server-parameters.md
Previously updated : 12/21/2023 Last updated : 1/25/2024 # Server parameters in Azure Database for PostgreSQL - Flexible Server
postgresql Create Automation Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/create-automation-tasks.md
Previously updated : 01/02/2024 Last updated : 01/24/2024 # Manage Azure Database for PostgreSQL - Flexible Server using automation tasks (preview)
postgresql How To Alert On Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-alert-on-metrics.md
-+ Last updated 7/12/2023
postgresql How To Configure And Access Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-and-access-logs.md
Previously updated : 4/3/2023 Last updated : 1/25/2024 # Configure and access logs in Azure Database for PostgreSQL - Flexible Server
postgresql How To Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-server-parameters-using-cli.md
Title: Configure parameters description: This article describes how to configure Postgres parameters in Azure Database for PostgreSQL - Flexible Server using the Azure CLI.--++ ms.devlang: azurecli
postgresql How To Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-server-parameters-using-portal.md
Previously updated : 8/14/2023 Last updated : 1/25/2024 # Configure server parameters in Azure Database for PostgreSQL - Flexible Server via the Azure portal
postgresql How To Cost Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-cost-optimization.md
Previously updated : 4/13/2023 Last updated : 1/25/2024 # How to optimize costs in Azure Database for PostgreSQL - Flexible Server
postgresql How To Server Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-portal.md
-+ Last updated 1/16/2024
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview-postgres-choose-server-options.md
Previously updated : 01/02/2024 Last updated : 01/25/2024 # Choose the right Azure Database for PostgreSQL - Flexible Server hosting option in Azure
search Hybrid Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/hybrid-search-how-to-query.md
A hybrid query combines full text search and vector search, where the `"search"`
Hybrid queries are useful because they add support for all query capabilities, including orderby and [semantic ranking](semantic-how-to-query-request.md). For example, in addition to the vector query, you could search over people or product names or titles, scenarios for which similarity search isn't a good fit.
-The following example is from the [Postman collection of REST APIs](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) that demonstrate hybrid query configurations.
+The following example is from the [Postman collection of REST APIs](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python) that demonstrate hybrid query configurations.
```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-11-01
In this section, compare the responses between single vector search and simple h
## Next steps
-As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript).
+As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
search Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-dotnet.md
- devx-track-dotnet - ignite-2023 Previously updated : 08/02/2023 Last updated : 01/25/2024 # C# samples for Azure AI Search
Code samples from the Azure SDK development team demonstrate API usage. You can
| [FieldBuilderIgnore](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample04_FieldBuilderIgnore.md) | Demonstrates a technique for working with unsupported data types. | | [Indexing documents (push model)](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md) | "Push" model indexing, where you send a JSON payload to an index on a service. | | [Encryption key sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample06_EncryptedIndex.md) | Demonstrates using a customer-managed encryption key to add an extra layer of protection over sensitive content. |
-| [Vector search sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample07_VectorSearch.md) | Shows you how to index a vector field and perform vector search using the Azure SDK for .NET. Vector search is in preview. |
+| [Vector search sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample07_VectorSearch.md) | Shows you how to index a vector field and perform vector search using the Azure SDK for .NET. |
+| [Semantic ranking sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample08_SemanticSearch.md) | Shows you how to configure semantic ranking in an index and invoke semantic queries using the Azure SDK for .NET. |
## Doc samples
Code samples from the Azure AI Search team demonstrate features and workflows. A
| Code sample | Related article | Purpose | |-|||
+| [create-mvc-app](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/create-mvc-app) | [Tutorial: Add search to an ASP.NET Core (MVC) app](tutorial-csharp-create-mvc-app.md) | While most samples are console applications, this MVC sample uses a web page to front the sample Hotels index, demonstrating basic search, pagination, and other server-side behaviors.|
| [quickstart](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/quickstart/v11) | [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md) | Covers the basic workflow for creating, loading, and querying a search index in C# using sample data. |
+| [quickstart-semantic-search](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/main/quickstart-semantic-search/) | [Quickstart: Semantic ranking using the Azure SDKs](search-get-started-semantic.md) | Shows the index schema and query request for invoking semantic ranking. |
| [search-website](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/search-website-functions-v4) | [Tutorial: Add search to web apps](tutorial-csharp-overview.md) | Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.|
+| [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/tutorial-ai-enrichment) | [Tutorial: AI-generated searchable content from Azure blobs](cognitive-search-tutorial-blob-dotnet.md) | Shows how to configure an indexer and skillset. |
+| [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-data-sources) | [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md). | Merges content from two data sources into one search index.
+| [Optimize-data-indexing](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing) | [Tutorial: Optimize indexing with the push API](tutorial-optimize-indexing-push-api.md).| Demonstrates optimization techniques for pushing data into a search index. |
| [DotNetHowTo](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo) | [How to use the .NET client library](search-howto-dotnet-sdk.md) | Steps through the basic workflow, but in more detail and with discussion of API usage. |
| [DotNetHowToSynonyms](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToSynonyms) | [Example: Add synonyms in C#](search-synonyms-tutorial-sdk.md) | Synonym lists are used for query expansion, providing matchable terms that are external to an index. |
| [DotNetToIndexers](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToIndexers) | [Tutorial: Index Azure SQL data](search-indexer-tutorial.md) | Shows how to configure an Azure SQL indexer that has a schedule, field mappings, and parameters. |
| [DotNetHowToEncryptionUsingCMK](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToEncryptionUsingCMK) | [How to configure customer-managed keys for data encryption](search-security-manage-encryption-keys.md) | Shows how to create objects that are encrypted with a customer-managed key. |
-| [multiple-data-sources](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-data-sources) | [Tutorial: Index from multiple data sources](tutorial-multiple-data-sources.md). | Merges content from two data sources into one search index.
-| [Optimize-data-indexing](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing) | [Tutorial: Optimize indexing with the push API](tutorial-optimize-indexing-push-api.md).| Demonstrates optimization techniques for pushing data into a search index. |
-| [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/tutorial-ai-enrichment) | [Tutorial: AI-generated searchable content from Azure blobs](cognitive-search-tutorial-blob-dotnet.md) | Shows how to configure an indexer and skillset. |
-| [create-mvc-app](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/main/create-mvc-app) | [Tutorial: Add search to an ASP.NET Core (MVC) app](tutorial-csharp-create-mvc-app.md) | While most samples are console applications, this MVC sample uses a web page to front the sample Hotels index, demonstrating basic search, pagination, and other server-side behaviors.|
+| [DotNetVectorDemo](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet/DotNetVectorDemo) | [readme](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet/DotNetVectorDemo/readme.md) | Create, load, and query a vector store. |
+| [DotNetIntegratedVectorizationDemo](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet/DotNetIntegratedVectorizationDemo) | [readme](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet/DotNetIntegratedVectorizationDemo/readme.md) | Extends the vector workflow to include skills-based automation for data chunking and embedding. |
## Accelerators
The following samples are also published by the Azure AI Search team, but aren't
| Samples | Repository | Description |
|||-|
-| [DotNetVectorDemo](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-dotnet/DotNetVectorDemo/readme.md) | [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr) | Calls Azure OpenAI to generate embeddings and Azure AI Search to create, load, and query an index. |
| [Query multiple services](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-search-services) | [azure-search-dotnet-scale](https://github.com/Azure-Samples/azure-search-dotnet-samples) | Issue a single query across multiple search services and combine the results into a single page. |
| [Check storage](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/check-storage-usage/README.md) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | Invokes an Azure function that checks search service storage on a schedule. |
| [Export an index](https://github.com/Azure-Samples/azure-search-dotnet-utilities/blob/main/export-dat) | [azure-search-dotnet-utilities](https://github.com/Azure-Samples/azure-search-dotnet-utilities) | C# console app that partitions and exports a large index. |
search Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-java.md
- devx-track-extended-java - ignite-2023 Previously updated : 07/27/2023 Last updated : 01/25/2024 # Java samples for Azure AI Search
search Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-javascript.md
- devx-track-js - ignite-2023 Previously updated : 08/01/2023 Last updated : 01/25/2024 # JavaScript samples for Azure AI Search
Code samples from the Azure SDK development team demonstrate API usage. You can
| [indexers](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md). |
| [skillSet](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached to indexers and that perform AI-based enrichment during indexing. |
| [synonymMaps](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). |
-| [VectorSearch](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v12-bet). |
+| [VectorSearch](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v12-bet). |
### TypeScript samples
Code samples from the Azure SDK development team demonstrate API usage. You can
| [indexers](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/typescript/src) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md). |
| [skillSet](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/typescript/src/skillSetOperations.ts) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached to indexers and that perform AI-based enrichment during indexing. |
| [synonymMaps](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/typescript/src/synonymMapOperations.ts) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). |
+| [VectorSearch](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v12/typescript/src/vectorSearch.ts) | Demonstrates how to index vectors and send a [vector query](vector-search-how-to-query.md). |
## Doc samples
The following samples are also published by the Azure AI Search team, but aren't
| Samples | Description |
||-|
-| [azure-search-vector-sample.js](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript) | Vector search sample using the Azure SDK for JavaScript |
+| [azure-search-vector-sample.js](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript/JavaScriptVectorDemo) | Vector search sample using the Azure SDK for JavaScript |
| [azure-search-react-template](https://github.com/dereklegenzoff/azure-search-react-template) | React template for Azure AI Search (github.com) |
search Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-python.md
- devx-track-python - ignite-2023 Previously updated : 08/02/2023 Last updated : 01/25/2024 # Python samples for Azure AI Search
Code samples from the Azure SDK development team demonstrate API usage. You can
| [Simple query](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/samples/sample_simple_query.py) | Demonstrates how to set up a [basic query](search-query-overview.md). |
| [Filter query](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/samples/sample_filter_query.py) | Demonstrates setting up a [filter expression](search-filters.md). |
| [Facet query](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/samples/sample_facet_query.py) | Demonstrates working with [facets](search-faceted-navigation.md). |
+| [Semantic ranking sample](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/samples/sample_semantic_search.py) | Shows you how to configure semantic ranking in an index and invoke semantic queries. |
| [Vector search](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/samples/sample_vector_search.py) | Demonstrates how to get embeddings from a description field and then send vector queries against the data. |
## Doc samples
Code samples from the Azure AI Search team demonstrate features and workflows. M
| Samples | Article |
|||
| [quickstart](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart/v11) | Source code for the Python portion of [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md). This article covers the basic workflow for creating, loading, and querying a search index using sample data. |
+| [quickstart-semantic-search](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/quickstart-semantic-search/) | Source code for the Python portion of [Quickstart: Semantic ranking using the Azure SDKs](search-get-started-semantic.md). It shows the index schema and query request for invoking semantic ranking. |
| [search-website-functions-v4](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-python-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests. |
| [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Tutorial-AI-Enrichment) | Source code for [Tutorial: Use Python and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob-python.md). This article shows how to create a blob indexer with a cognitive skillset, where the skillset creates and transforms raw content to make it searchable or consumable. |
A demo repo provides proof-of-concept source code for examples or scenarios show
| Repository | Description |
||-|
-| [**azure-search-vector-python-sample.ipynb**](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python/code/azure-search-vector-image-python-sample.ipynb) | Uses the **azure.search.documents** library in the Azure SDK for Python to generate embeddings, create and load an index, and run several vector queries. For more vector search Python demos, see [cognitive-search-vector-pr/demo-python](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python). |
-| [**ChatGPT + Enterprise data with Azure OpenAI and Cognitive Search**](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md) | Python code showing how to use Azure AI Search with the large language models in Azure OpenAI. For background, see this Tech Community blog post: [Revolutionize your Enterprise Data with ChatGPT](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/revolutionize-your-enterprise-data-with-chatgpt-next-gen-apps-w/ba-p/3762087). |
+| [azure-search-vector-python-sample.ipynb](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-vector-python-sample.ipynb) | Uses the **azure.search.documents** library in the Azure SDK for Python to create, load, and query a vector store. |
+| [azure-search-integrated-vectorization-sample.ipynb](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-integrated-vectorization-sample.ipynb) | Extends the vector store workflow to include integrated data chunking and embedding. |
+| [azure-search-vector-image-index-creation-python-sample.ipynb](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-vector-image-index-creation-python-sample.ipynb) | Demonstrates image embedding. |
+| [azure-search-vector-image-python-sample.ipynb](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-vector-image-python-sample.ipynb) | Demonstrates image retrieval. |
+| [azure-search-vector-python-huggingface-model-sample.ipynb](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-vector-python-huggingface-model-sample.ipynb) | Hugging Face integration. |
+| [azure-search-vector-python-langchain-sample.ipynb](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-vector-python-langchain-sample.ipynb) | LangChain integration. |
+| [azure-search-vector-python-llamaindex-sample.ipynb](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-vector-python-llamaindex-sample.ipynb) | LlamaIndex integration. |
+| [azure-search-openai-demo](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md) | ChatGPT + Enterprise data with Azure OpenAI. Python code showing how to use Azure AI Search with the large language models in Azure OpenAI. For background, see this Tech Community blog post: [Revolutionize your Enterprise Data with ChatGPT](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/revolutionize-your-enterprise-data-with-chatgpt-next-gen-apps-w/ba-p/3762087). |
+## Other samples
+
+The following samples are also published by the Azure AI Search team, but aren't referenced in documentation. Associated readme files provide usage instructions.
+
+| Repository | Description |
+||-|
+| [azure-search-backup-and-restore.ipynb](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-backup-and-restore.ipynb) | Uses the **azure.search.documents** library in the Azure SDK for Python to make a local copy of the retrievable fields of a search index, and then push those fields to a new search index. |
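+
+The backup-and-restore pattern the notebook demonstrates is small enough to sketch. The following is a minimal, hypothetical outline of that flow using the **azure.search.documents** library; the endpoint, key, and index names are placeholders, and a production script would batch uploads and verify per-document results.
+
+```python
+# Minimal sketch (not the notebook itself): copy retrievable fields from one
+# index into another. All names below are placeholders.
+from azure.core.credentials import AzureKeyCredential
+from azure.search.documents import SearchClient
+
+endpoint = "https://<service-name>.search.windows.net"
+credential = AzureKeyCredential("<admin-api-key>")
+
+source = SearchClient(endpoint, "<source-index>", credential)
+target = SearchClient(endpoint, "<target-index>", credential)
+
+# Pull every document's retrievable fields; the SDK pages through results.
+# Keys that start with "@" are search metadata, not index fields, so drop them.
+documents = [
+    {k: v for k, v in doc.items() if not k.startswith("@")}
+    for doc in source.search(search_text="*")
+]
+
+# Push the copied documents into the new index.
+target.upload_documents(documents=documents)
+```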
> [!TIP]
> Try the [Samples browser](/samples/browse/?languages=python&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language.
search Search Get Started Portal Import Vectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal-import-vectors.md
Azure AI Search is a billable resource. If it's no longer needed, delete it from
## Next steps
-This quickstart introduced you to the **Import and vectorize data** wizard that creates all of the objects necessary for integrated vectorization. If you want to explore each step in detail, try an [integrated vectorization sample](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python/code/azure-search-integrated-vectorization-sample.ipynb).
+This quickstart introduced you to the **Import and vectorize data** wizard that creates all of the objects necessary for integrated vectorization. If you want to explore each step in detail, try an [integrated vectorization sample](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-integrated-vectorization-sample.ipynb).
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
Azure AI Search is a billable resource. If it's no longer needed, delete it from
## Next steps
-As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet), or [JavaScript](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript).
+As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet), or [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
REST API version [**2023-07-01-Preview**](/rest/api/searchservice/index-preview)
+ Use the [**Azure.Search.Documents**](https://www.nuget.org/packages/Azure.Search.Documents) package for vector scenarios.
-+ See the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) GitHub repository for .NET code samples.
++ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet) GitHub repository for .NET code samples.
### [**Python**](#tab/python-add-config)
+ Use the [**Azure.Search.Documents**](https://pypi.org/project/azure-search-documents) package for vector scenarios.
-+ See the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) GitHub repository for Python code samples.
++ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python) GitHub repository for Python code samples.
### [**JavaScript**](#tab/js-add-config)
+ Use the [**@azure/search-documents 12.0.0-beta.2**](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2) package for vector scenarios.
-+ See the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript) GitHub repository for JavaScript code samples.
++ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript) GitHub repository for JavaScript code samples.
Although you can add a field to an index, there's no portal (Import data wizard)
+ Use the [**Azure.Search.Documents**](https://www.nuget.org/packages/Azure.Search.Documents) package for vector scenarios.
-+ See the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) GitHub repository for .NET code samples.
++ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet) GitHub repository for .NET code samples.
### [**Python**](#tab/python-add-field)
+ Use the [**Azure.Search.Documents 11.4.0b8**](https://pypi.org/project/azure-search-documents/11.4.0b8/) package for vector scenarios.
-+ See the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) GitHub repository for Python code samples.
++ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python) GitHub repository for Python code samples.
### [**JavaScript**](#tab/js-add-field)
+ Use the [**@azure/search-documents 12.0.0-beta.2**](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2) package for vector scenarios.
-+ See the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript) GitHub repository for JavaScript code samples.
++ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript) GitHub repository for JavaScript code samples.
api-key: {{admin-api-key}}
As a next step, we recommend [Query vector data in a search index](vector-search-how-to-query.md).
-Code samples in the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr) repository demonstrate end-to-end workflows that include schema definition, vectorization, indexing, and queries.
+Code samples in the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples) repository demonstrate end-to-end workflows that include schema definition, vectorization, indexing, and queries.
-There's demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet), and [JavaScript](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript).
+There's demo code for [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet), and [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
search Vector Search How To Generate Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-generate-embeddings.md
Last updated 10/30/2023
# Generate embeddings for search queries and documents
-Azure AI Search doesn't host vectorization models, so one of your challenges is creating embeddings for query inputs and outputs. You can use any embedding model, but this article assumes Azure OpenAI embeddings models. Demos in the [sample repository](https://github.com/Azure/cognitive-search-vector-pr/tree/main) tap the [similarity embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) of Azure OpenAI.
+Azure AI Search doesn't host vectorization models, so one of your challenges is creating embeddings for query inputs and outputs. You can use any embedding model, but this article assumes Azure OpenAI embeddings models. Demos in the [sample repository](https://github.com/Azure/azure-search-vector-samples/tree/main) tap the [similarity embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) of Azure OpenAI.
Dimension attributes have a minimum of 2 and a maximum of 2048 dimensions per vector field.
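To make that constraint concrete, here's a hedged sketch of a vector field definition using the **azure-search-documents** Python SDK. The field name, profile name, and dimension count are illustrative, and the property names assume the 11.4 GA release of the SDK.

```python
# Illustrative only: a vector field whose dimension count must fall in the
# 2-2048 range and match the embedding model's output size
# (1536 for text-embedding-ada-002).
from azure.search.documents.indexes.models import SearchField, SearchFieldDataType

content_vector = SearchField(
    name="contentVector",  # placeholder field name
    type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
    searchable=True,
    vector_search_dimensions=1536,  # must be between 2 and 2048
    vector_search_profile_name="my-vector-profile",  # placeholder profile
)
```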
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
In Azure AI Search, if you [added vector fields](vector-search-how-to-create-ind
> + [Query multiple vector fields at once](#multiple-vector-fields)
> + [Query with integrated vectorization (preview)](#query-with-integrated-vectorization-preview)
-Code samples in the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr) repository demonstrate end-to-end workflows that include schema definition, vectorization, indexing, and queries.
+Code samples in the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples) repository demonstrate end-to-end workflows that include schema definition, vectorization, indexing, and queries.
## Prerequisites
This section applies to the generally available version of vector search (**2023
To query a vector field, the query itself must be a vector. To convert a text query string provided by a user into a vector representation, your application must call an embedding library or API endpoint that provides this capability. **Use the same embedding that you used to generate embeddings in the source documents.**
-You can find multiple instances of query string conversion in the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr/) repository for each of the Azure SDKs.
+You can find multiple instances of query string conversion in the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples) repository for each of the Azure SDKs.
Here's a REST API example of a query string submitted to a deployment of an Azure OpenAI model:
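The example body isn't reproduced in this digest, but the request shape is simple. The following Python sketch shows an equivalent call to the embeddings REST endpoint; the resource name, deployment name, key, and query text are placeholders for your own values.

```python
# Hedged sketch of the embeddings request; resource, deployment, and key
# are placeholders for your own Azure OpenAI values.
import requests

url = (
    "https://<your-resource>.openai.azure.com/openai/deployments/"
    "<embedding-deployment>/embeddings?api-version=2023-05-15"
)
response = requests.post(
    url,
    headers={"api-key": "<azure-openai-key>"},
    json={"input": "historic hotel with good food"},
)
response.raise_for_status()

# The resulting vector is what you pass in the vector query; its length must
# match the dimensions configured on the vector field you're querying.
query_vector = response.json()["data"][0]["embedding"]
```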
Be sure to select the **JSON view** and formulate the vector query in JSON. The search
+ Use the [**Azure.Search.Documents 11.5.0**](https://www.nuget.org/packages/Azure.Search.Documents/11.5.0) package for vector scenarios.
-+ See the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) GitHub repository for .NET code samples.
++ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet) GitHub repository for .NET code samples.
### [**Python**](#tab/python-vector-query)
+ Use the [**Azure.Search.Documents**](https://pypi.org/project/azure-search-documents) package for vector scenarios.
-+ See the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python) GitHub repository for Python code samples.
++ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python) GitHub repository for Python code samples.
### [**JavaScript**](#tab/js-vector-query)
+ Use the [**@azure/search-documents 12.0.0-beta.4**](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.4) package for vector scenarios.
-+ See the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript) GitHub repository for JavaScript code samples.
++ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript) GitHub repository for JavaScript code samples.
Multiple sets are created if the query targets multiple vector fields, or if the
## Next steps
-As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript).
+As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet), or [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
We recommend this article for background, but if you'd rather get started, follo
> + [Load vector data](search-what-is-data-import.md) into an index using push or pull methodologies.
> + [Query vector data](vector-search-how-to-query.md) using the Azure portal, REST APIs, or Azure SDK packages.
-You could also begin with the [vector quickstart](search-get-started-vector.md) or the [code samples on GitHub](https://github.com/Azure/cognitive-search-vector-pr).
+You could also begin with the [vector quickstart](search-get-started-vector.md) or the [code samples on GitHub](https://github.com/Azure/azure-search-vector-samples).
Vector search is in the Azure portal and the Azure SDKs for [.NET](https://www.nuget.org/packages/Azure.Search.Documents), [Python](https://pypi.org/project/azure-search-documents), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2).
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
|--||--|
| [**Vector demo (Azure SDK for JavaScript)**](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-javascript/JavaScriptVectorDemo/readme.md) | Sample | Uses Node.js and the **@azure/search-documents 12.0.0-beta.2** library to generate embeddings, create and load an index, and run several vector queries. |
| [**Vector demo (Azure SDK for .NET)**](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-dotnet/DotNetVectorDemo/readme.md) | Sample | Uses the **Azure.Search.Documents 11.5.0-beta.3** library to generate embeddings, create and load an index, and run several vector queries. You can also try [this sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample07_VectorSearch.md) from the Azure SDK team. |
-| [**Vector demo (Azure SDK for Python)**](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python/code/azure-search-vector-image-python-sample.ipynb) | Sample | Uses the latest beta release of the **azure.search.documents** to generate embeddings, create and load an index, and run several vector queries. Visit the [cognitive-search-vector-pr/demo-python](https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python) repo for more vector search demos. |
+| [**Vector demo (Azure SDK for Python)**](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-vector-image-python-sample.ipynb) | Sample | Uses the latest beta release of the **azure.search.documents** to generate embeddings, create and load an index, and run several vector queries. Visit the [azure-search-vector-samples/demo-python](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python) repo for more vector search demos. |
## June 2023
security Backup Plan To Protect Against Ransomware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/backup-plan-to-protect-against-ransomware.md
This article addresses what to do before an attack to protect your critical busi
Ransomware is a type of extortion attack that encrypts files and folders, preventing access to important data and systems. Attackers use ransomware to extort money from victims by demanding money, usually in the form of cryptocurrencies, in exchange for a decryption key or in exchange for not releasing sensitive data to the dark web or the public internet.
-While early ransomware mostly used malware that spread with phishing or between devices, [human-operated ransomware](/security/compass/human-operated-ransomware) has emerged where a gang of active attackers, driven by human attack operators, target all systems in an organization (rather than a single device or set of devices). An attack can:
+While early ransomware mostly used malware that spread with phishing or between devices, human-operated ransomware has emerged where a gang of active attackers, driven by human attack operators, target all systems in an organization (rather than a single device or set of devices). An attack can:
- Encrypt your data
- Exfiltrate your data
After a ransomware attack or an incident response simulation, take the following
## Next steps
-In this article, you learned how to improve your backup and restore plan to protect against ransomware. For best practices on deploying ransomware protection, see [Rapidly protect against ransomware and extortion](/security/compass/protect-against-ransomware).
+In this article, you learned how to improve your backup and restore plan to protect against ransomware. For best practices on deploying ransomware protection, see Rapidly protect against ransomware and extortion.
Key industry information:
security Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management.md
Using a hardened workstation configuration for administering your Azure cloud se
## Next steps
The following resources are available to provide more general information about Azure and related Microsoft
-* [Securing Privileged Access](/windows-server/identity/securing-privileged-access/securing-privileged-access) - get the technical details for designing and building a secure administrative workstation for Azure management
+* Securing Privileged Access - get the technical details for designing and building a secure administrative workstation for Azure management
* [Microsoft Trust Center](https://microsoft.com/trustcenter/cloudservices/azure) - learn about Azure platform capabilities that protect the Azure fabric and the workloads that run on Azure
* [Microsoft Security Response Center](https://www.microsoft.com/msrc) - where Microsoft security vulnerabilities, including issues with Azure, can be reported online or via email to [secure@microsoft.com](mailto:secure@microsoft.com)
security Ransomware Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-prepare.md
To achieve this, organizations should identify and execute quick wins to strengt
Ensure you have strong controls (prevent, detect, respond) for privileged accounts like IT Admins and other roles with control of business-critical systems. This slows and/or blocks attackers from gaining complete access to your resources to steal and encrypt them. Taking away the attackers' ability to use IT Admin accounts as a shortcut to resources will drastically lower the chances they are successful at attacking you and demanding payment / profiting.
-Organizations should have elevated security for privileged accounts (tightly protect, closely monitor, and rapidly respond to incidents related to these roles). See Microsoft's [Security rapid modernization plan](/security/compass/security-rapid-modernization-plan), which covers:
+Organizations should have elevated security for privileged accounts (tightly protect, closely monitor, and rapidly respond to incidents related to these roles). See Microsoft's Security rapid modernization plan, which covers:
- End to End Session Security (including multifactor authentication (MFA) for admins)
- Protect and Monitor Identity Systems
- Mitigate Lateral Traversal
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
We recommend the following actions to ensure your general security posture:
- **Ensure that your organization has extended detection and response (XDR) and security information and event management (SIEM) solutions in place**, such as [Microsoft Defender XDR for Endpoint](/microsoft-365/security/defender/microsoft-365-defender), [Microsoft Sentinel](../../sentinel/overview.md), and [Microsoft Defender for IoT](../../defender-for-iot/organizations/index.yml).
-- **Review Microsoft's [Enterprise access model](/security/compass/privileged-access-access-model)**.
+- **Review Microsoft's Enterprise access model**.
### Improve identity security posture
This section provides possible methods and steps to consider when building your
> [!IMPORTANT]
> The exact steps required in your organization will depend on what persistence you've discovered in your investigation, and how confident you are that your investigation was complete and has discovered all possible entry and persistence methods.
>
-> Ensure that any actions taken are performed from a trusted device, built from a [clean source](/security/compass/privileged-access-access-model). For example, use a fresh, [privileged access workstation](/security/compass/privileged-access-deployment).
+> Ensure that any actions taken are performed from a trusted device, built from a clean source. For example, use a fresh, privileged access workstation.
> The following sections include the following types of recommendations for remediating and retaining administrative control:
In addition to the recommendations listed earlier in this article, we also recom
|Activity |Description |
|||
|**Rebuild affected systems** | Rebuild systems that were identified as compromised by the attacker during your investigation. |
-|**Remove unnecessary admin users** | Remove unnecessary members from Domain Admins, Backup Operators, and Enterprise Admin groups. For more information, see [Securing Privileged Access](/security/compass/overview). |
+|**Remove unnecessary admin users** | Remove unnecessary members from Domain Admins, Backup Operators, and Enterprise Admin groups. For more information, see Securing Privileged Access. |
|**Reset passwords to privileged accounts** | Reset passwords of all privileged accounts in the environment. <br><br>**Note**: Privileged accounts are not limited to built-in groups, but can also be groups that are delegated access to server administration, workstation administration, or other areas of your environment. |
|**Reset the krbtgt account** | Reset the **krbtgt** account twice using the [New-KrbtgtKeys](https://github.com/microsoft/New-KrbtgtKeys.ps1/blob/master/New-KrbtgtKeys.ps1) script. <br><br>**Note**: If you are using Read-Only Domain Controllers, you will need to run the script separately for Read-Write Domain Controllers and for Read-Only Domain Controllers. |
|**Schedule a system restart** | After you validate that no persistence mechanisms created by the attacker exist or remain on your system, schedule a system restart to assist with removing memory-resident malware. |
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/steps-secure-identity.md
Microsoft Entra built-in and custom roles operate on concepts similar to roles f
Both systems contain similarly used role definitions and role assignments. However, Microsoft Entra role permissions can't be used in Azure custom roles and vice versa. As part of deploying your privileged account process, follow the best practice to create at least two emergency accounts to make sure you still have access to Microsoft Entra ID if you lock yourself out.
-For more information, see the article [Plan a Privileged Identity Management deployment](../../active-directory/privileged-identity-management/pim-deployment-plan.md) and [securing privileged access](/security/compass/overview).
+For more information, see the article [Plan a Privileged Identity Management deployment](../../active-directory/privileged-identity-management/pim-deployment-plan.md) and securing privileged access.
### Restrict user consent operations
service-fabric Service Fabric Service Model Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-service-model-schema.md
- Last updated 07/11/2022
def writeOverviewFile(filename, namedElements, namedComplexTypes, namedElementGr
file.write('\n')
file.write(' Title: Azure Service Fabric service model XML schema descriptions | Microsoft Docs')
file.write('description: Describes the XML schema of the Service Fabric service model.\n')
- file.write('documentationcenter: na\n')
- file.write('editor: ''\n')
- file.write('ms.assetid: \n')
- file.write(' na\n')
file.write(' Last updated %s\n' % datetime.datetime.today().strftime('%m/%d/%Y'))
def writeElementGroupsFile(filename, namedElementGroups):
file.write('\n')
file.write(' Title: Azure Service Fabric service model XML schema element groups | Microsoft Docs')
file.write('description: Describes the element groups in the XML schema of the Service Fabric service model.\n')
- file.write('documentationcenter: na\n')
- file.write('editor: ''\n')
- file.write('ms.assetid: \n')
- file.write(' na\n')
file.write(' Last updated %s\n' % datetime.datetime.today().strftime('%m/%d/%Y'))
def writeElementsFile(filename, namedElements):
file.write('\n')
file.write(' Title: Azure Service Fabric service model XML schema elements | Microsoft Docs')
file.write('description: Describes the elements in the XML schema of the Service Fabric service model.\n')
- file.write('documentationcenter: na\n')
- file.write('editor: ''\n')
- file.write('ms.assetid: \n')
- file.write(' na\n')
file.write(' Last updated %s\n' % datetime.datetime.today().strftime('%m/%d/%Y'))
def writeComplexTypesFile(filename, namedComplexTypes):
file.write('\n')
file.write(' Title: Azure Service Fabric service model XML schema complex types | Microsoft Docs\n')
file.write('description: Describes the complex types in the XML schema of the Service Fabric service model.\n')
- file.write('documentationcenter: na\n')
- file.write('editor: ''\n')
- file.write('ms.assetid: \n')
- file.write(' na\n')
file.write(' Last updated %s\n' % datetime.datetime.today().strftime('%m/%d/%Y'))
def writeAttributesFile(filename, namedAttributeGroups):
file.write('\n')
file.write(' Title: Azure Service Fabric service model XML schema attribute groups | Microsoft Docs\n')
file.write('description: Describes the attribute groups in the XML schema of the Service Fabric service model.\n')
- file.write('documentationcenter: na\n')
- file.write('editor: ''\n')
- file.write('ms.assetid: \n')
- file.write(' na\n')
file.write(' Last updated %s\n' % datetime.datetime.today().strftime('%m/%d/%Y'))
def writeSimpleTypesFile(filename, namedSimpleTypes):
file.write('\n')
file.write(' Title: Azure Service Fabric service model XML schema simple types | Microsoft Docs\n')
file.write('description: Describes the simple types in the XML schema of the Service Fabric service model.\n')
- file.write('documentationcenter: na\n')
- file.write('editor: ''\n')
- file.write('ms.assetid: \n')
- file.write(' na\n')
file.write(' Last updated %s\n' % datetime.datetime.today().strftime('%m/%d/%Y'))
spring-apps Access App Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/access-app-virtual-network.md
Use the following steps to find the IP address for your application.
1. In the **General info** section, find **Endpoint** and copy the **IP Address** value. The example in the following screenshot uses the IP address `10.0.1.6`:
- :::image type="content" source="media/spring-cloud-access-app-vnet/find-ip-address.png" alt-text="Screenshot of the Azure portal that shows the Vnet injection Endpoint information." lightbox="media/spring-cloud-access-app-vnet/find-ip-address.png":::
+ :::image type="content" source="media/access-app-virtual-network/find-ip-address.png" alt-text="Screenshot of the Azure portal that shows the Vnet injection Endpoint information." lightbox="media/access-app-virtual-network/find-ip-address.png":::
### [Azure CLI](#tab/azure-CLI)
Use the following steps to link the private DNS zone you created to the virtual
1. For **Virtual network**, select the virtual network you created previously.
- :::image type="content" source="media/spring-cloud-access-app-vnet/add-virtual-network-link.png" alt-text="Screenshot of the Azure portal that shows the Add virtual network link page." lightbox="media/spring-cloud-access-app-vnet/add-virtual-network-link.png":::
+ :::image type="content" source="media/access-app-virtual-network/add-virtual-network-link.png" alt-text="Screenshot of the Azure portal that shows the Add virtual network link page." lightbox="media/access-app-virtual-network/add-virtual-network-link.png":::
1. Select **OK**.
Use the following steps to use the private DNS zone to translate/resolve DNS.
| TTL unit | Select **Hours**. |
| IP address | Enter the [IP address](#find-the-ip-for-your-application). The following screenshot uses the IP address *10.1.0.7*. |
- :::image type="content" source="media/spring-cloud-access-app-vnet/private-dns-zone-add-record.png" alt-text="Screenshot of the Azure portal that shows the Add record set page." lightbox="media/spring-cloud-access-app-vnet/private-dns-zone-add-record.png":::
+ :::image type="content" source="media/access-app-virtual-network/private-dns-zone-add-record.png" alt-text="Screenshot of the Azure portal that shows the Add record set page." lightbox="media/access-app-virtual-network/private-dns-zone-add-record.png":::
1. Select **OK**.
Use the following steps to assign a private FQDN:
1. Select **Assign Endpoint** to assign a private FQDN to your application. Assigning an FQDN can take a few minutes.
- :::image type="content" source="media/spring-cloud-access-app-vnet/assign-private-endpoint.png" alt-text="Screenshot of the Azure portal that shows the Overview page with Assign endpoint highlighted." lightbox="media/spring-cloud-access-app-vnet/assign-private-endpoint.png":::
+ :::image type="content" source="media/access-app-virtual-network/assign-private-endpoint.png" alt-text="Screenshot of the Azure portal that shows the Overview page with Assign endpoint highlighted." lightbox="media/access-app-virtual-network/assign-private-endpoint.png":::
1. The assigned private FQDN (labeled **URL**) is now available. You can only access the URL within the private network, but not on the internet.
az spring app update \
After the assignment, you can access the application's private FQDN in the private network. For example, you can create a jumpbox machine in the same virtual network or in a peered virtual network. Then, on that jumpbox or virtual machine, you can access the private FQDN.
## Clean up resources
spring-apps Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/connect-managed-identity-to-azure-sql.md
Rebuild the app and deploy it to the Azure Spring Apps provisioned in the second
* [How to access Storage blob with managed identity in Azure Spring Apps](https://github.com/Azure-Samples/azure-spring-apps-samples/tree/main/managed-identity-storage-blob)
* [How to enable system-assigned managed identity for applications in Azure Spring Apps](./how-to-enable-system-assigned-managed-identity.md)
-* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
+* [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
* [Authenticate Azure Spring Apps with Key Vault in GitHub Actions](./github-actions-key-vault.md)
spring-apps Expose Apps Gateway End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/expose-apps-gateway-end-to-end-tls.md
az network public-ip create \
--sku Standard
```
-## Create a Managed Identity for Application Gateway
+## Create a managed identity for Application Gateway
-Application Gateway will need to be able to access Key Vault to read the certificate. To do so, it will use a User-assigned [Managed Identity](../active-directory/managed-identities-azure-resources/overview.md). Create the Managed Identity by using the following command:
+Application Gateway will need to be able to access Key Vault to read the certificate. To do so, it will use a user-assigned [managed identity](/entra/identity/managed-identities-azure-resources/overview). Create the managed identity by using the following command:
```azurecli
export APPGW_IDENTITY_NAME='name-for-appgw-managed-identity'
az identity create \
--name $APPGW_IDENTITY_NAME
```
-Then fetch the objectId for the Managed Identity as it will be used later on to give rights to access the certificate in Key Vault:
+Then fetch the objectId for the managed identity as it will be used later on to give rights to access the certificate in Key Vault:
```azurecli
export APPGW_IDENTITY_CLIENTID=$(az identity show \
export APPGW_IDENTITY_OID=$(az ad sp show \
## Set policy on Key Vault
-Configure Key Vault using the following command so that the Managed Identity for Application Gateway is allowed to access the certificate stored in Key Vault:
+Configure Key Vault using the following command so that the managed identity for Application Gateway is allowed to access the certificate stored in Key Vault:
```azurecli
az keyvault set-policy \
az keyvault set-policy \
## Create Application Gateway
-Create an application gateway using `az network application-gateway create` and specify your application's private fully qualified domain name (FQDN) as servers in the backend pool. Make sure to use the user-assigned Managed Identity and to point to the certificate in Key Vault using the certificate's Secret ID. Then update the HTTP setting using `az network application-gateway http-settings update` to use the public host name.
+Create an application gateway using `az network application-gateway create` and specify your application's private fully qualified domain name (FQDN) as servers in the backend pool. Make sure to use the user-assigned managed identity and to point to the certificate in Key Vault using the certificate's Secret ID. Then update the HTTP setting using `az network application-gateway http-settings update` to use the public host name.
```azurecli
export APPGW_NAME='name-for-application-gateway'
spring-apps Expose Apps Gateway Tls Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/expose-apps-gateway-tls-termination.md
az network public-ip create \
### Create a managed identity for the application gateway
-Your application gateway will need to be able to access Key Vault to read the certificate. To do this, the application gateway will use a user-assigned managed identity. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md). Create the managed identity by using the following command, replacing the *\<...>* placeholder:
+Your application gateway will need to be able to access Key Vault to read the certificate. To do this, the application gateway will use a user-assigned managed identity. For more information, see [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview) Create the managed identity by using the following command, replacing the *\<...>* placeholder:
```azurecli
export APPGW_IDENTITY_NAME='<name-for-appgw-managed-identity>'
spring-apps How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-system-assigned-managed-identity.md
Managed identities for Azure resources provide an automatically managed identity
## Prerequisites
-If you're unfamiliar with managed identities for Azure resources, see the [Managed identities for Azure resources overview section](../active-directory/managed-identities-azure-resources/overview.md).
+If you're unfamiliar with managed identities for Azure resources, see [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
::: zone pivot="sc-enterprise"
az spring app identity assign \
An app can use its managed identity to get tokens to access other resources protected by Microsoft Entra ID, such as Azure Key Vault. These tokens represent the application accessing the resource, not any specific user of the application.
-You may need to [configure the target resource to allow access from your application](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). For example, if you request a token to access Key Vault, make sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault are rejected, even if they include the token. To learn more about which resources support Microsoft Entra tokens, see [Azure services that support Microsoft Entra authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
+You may need to [configure the target resource to allow access from your application](/entra/identity/managed-identities-azure-resources/howto-assign-access-portal). For example, if you request a token to access Key Vault, make sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault are rejected, even if they include the token. To learn more about which resources support Microsoft Entra tokens, see [Azure services that can use managed identities to access other services](/entra/identity/managed-identities-azure-resources/managed-identities-status).
-Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machine. We recommend using Java SDK or spring boot starters to acquire a token. See [How to use VM token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md) for various code and script examples and guidance on important topics such as handling token expiration and HTTP errors.
+Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using the Java SDK or Spring Boot starters to acquire a token. See [How to use VM token](/entra/identity/managed-identities-azure-resources/how-to-use-vm-token) for various code and script examples and guidance on important topics such as handling token expiration and HTTP errors.
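+
+For comparison, here's what that token acquisition looks like with the **azure-identity** library in Python. This is an illustration only, since the article's recommendation is the Java SDK or Spring Boot starters, and the Key Vault scope below is just an example resource.
+
+```python
+# Illustration only: system-assigned managed identity token acquisition via
+# azure-identity. The Key Vault scope below is an example resource.
+from azure.identity import ManagedIdentityCredential
+
+credential = ManagedIdentityCredential()
+
+# The credential calls the shared token endpoint and handles caching and
+# refresh; the token represents the app, not a signed-in user.
+token = credential.get_token("https://vault.azure.net/.default")
+```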
## Disable system-assigned identity from an app
az ad sp show --id <object-ID> --query appId
## Next steps
-* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
+* [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
* [How to use managed identities with Java SDK](https://github.com/Azure-Samples/azure-spring-apps-samples)
spring-apps How To Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-manage-user-assigned-managed-identities.md
Managed identities for Azure resources provide an automatically managed identity
## Prerequisites
-- If you're unfamiliar with managed identities for Azure resources, see the [Managed identities for Azure resources overview section](../active-directory/managed-identities-azure-resources/overview.md).
+- If you're unfamiliar with managed identities for Azure resources, see [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
::: zone pivot="sc-enterprise"
- An already provisioned Azure Spring Apps Enterprise plan instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md).
- [Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)]
-- At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+- At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities).
::: zone-end
Managed identities for Azure resources provide an automatically managed identity
- An already provisioned Azure Spring Apps instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
- [Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)]
-- At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+- At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities).
::: zone-end
az spring app identity assign \
An application can use its managed identity to get tokens to access other resources protected by Microsoft Entra ID, such as Azure Key Vault. These tokens represent the application accessing the resource, not any specific user of the application.
-You may need to configure the target resource to allow access from your application. For more information, see [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). For example, if you request a token to access Key Vault, be sure you've added an access policy that includes your application's identity. Otherwise, your calls to Key Vault are rejected, even if they include the token. To learn more about which resources support Microsoft Entra tokens, see [Azure services that support Microsoft Entra authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md)
+You may need to configure the target resource to allow access from your application. For more information, see [Assign a managed identity access to a resource by using the Azure portal](/entra/identity/managed-identities-azure-resources/howto-assign-access-portal). For example, if you request a token to access Key Vault, be sure you've added an access policy that includes your application's identity. Otherwise, your calls to Key Vault are rejected, even if they include the token. To learn more about which resources support Microsoft Entra tokens, see [Azure services that support Microsoft Entra authentication](/entra/identity/managed-identities-azure-resources/services-id-authentication-support).
-Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using Java SDK or Spring Boot starters to acquire a token. For various code and script examples, and guidance on important topics such as handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
+Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using Java SDK or Spring Boot starters to acquire a token. For various code and script examples, and guidance on important topics such as handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](/entra/identity/managed-identities-azure-resources/how-to-use-vm-token).
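As a minimal sketch of that recommendation (assuming the `com.azure:azure-identity` library; the Key Vault scope shown is the standard one, but the class name and output are illustrative, not from this article), an app can acquire such a token like this:

```java
import com.azure.core.credential.AccessToken;
import com.azure.core.credential.TokenRequestContext;
import com.azure.identity.ManagedIdentityCredential;
import com.azure.identity.ManagedIdentityCredentialBuilder;

public class AcquireToken {
    public static void main(String[] args) {
        // Uses the app's system-assigned identity; call clientId(...) on the
        // builder to target a specific user-assigned identity instead.
        ManagedIdentityCredential credential = new ManagedIdentityCredentialBuilder().build();

        // Request a token for Azure Key Vault. The token represents the
        // application itself, not any signed-in user.
        AccessToken token = credential
                .getToken(new TokenRequestContext().addScopes("https://vault.azure.net/.default"))
                .block();

        System.out.println("Token expires at: " + token.getExpiresAt());
    }
}
```

In practice, the Spring Boot starters hide this exchange entirely; the direct call is shown only to make the token acquisition visible.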
## Remove user-assigned managed identities from an existing app
For user-assigned managed identity limitations, see [Quotas and service plans fo
## Next steps -- [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
+- [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
- [How to use managed identities with Java SDK](https://github.com/Azure-Samples/azure-spring-apps-samples)
spring-apps How To Outbound Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-outbound-public-ip.md
az spring show --resource-group <group_name> --name <service_name> --query prope
## Next steps
-* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
+* [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
* [Learn more about key vault in Azure Spring Apps](./tutorial-managed-identities-key-vault.md)
spring-apps How To Set Up Sso With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-set-up-sso-with-azure-ad.md
Register your application to establish a trust relationship between your app and
1. In *Redirect URI (optional)* select **Web**, then enter the URL from the above section in the text box. The redirect URI is the location where Microsoft Entra ID redirects your client and sends security tokens after authentication. 1. Select **Register** to finish registering the application. When registration finishes, you'll see the *Application (client) ID* on the **Overview** screen of the **App registrations** page.
You can also add redirect URIs after app registration by following these steps:
1. Select **Web**, then select **Add URI** under *Redirect URIs*. 1. Add a new redirect URI, then select **Save**.
-For more information on Application Registration, see [Quickstart: Register an app in the Microsoft identity platform ](../active-directory/develop/quickstart-register-app.md#quickstart-register-an-application-with-the-microsoft-identity-platform).
+For more information on Application Registration, see [Quickstart: Register an application with the Microsoft identity platform](/entra/identity-platform/quickstart-register-app).
## Add a client secret
The application uses a client secret to authenticate itself in SSO workflow. You
## Configure scope
-The `scope` property of SSO is a list of scopes to be included in JWT identity tokens. They're often referred to permissions. Identity platform supports several [OpenID Connect scopes](../active-directory/develop/v2-permissions-and-consent.md#openid-connect-scopes), such as `openid`, `email` and `profile`.
+The `scope` property of SSO is a list of scopes to be included in JWT identity tokens. They're often referred to as permissions. The Microsoft identity platform supports several OpenID Connect scopes, such as `openid`, `email`, and `profile`. For more information, see the [OpenID Connect scopes](/entra/identity-platform/scopes-oidc#openid-connect-scopes) section of [Scopes and permissions in the Microsoft identity platform](/entra/identity-platform/scopes-oidc).
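One way to verify which scopes actually landed in an issued token (an illustrative sketch, not a step from this article; it assumes you pass a raw JWT as the first argument) is to base64url-decode the token's payload segment and look for the scope claims:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class InspectScopes {
    public static void main(String[] args) {
        // A JWT has three base64url-encoded segments: header.payload.signature.
        String jwt = args[0];
        String payloadSegment = jwt.split("\\.")[1];

        // Decode the payload; granted scopes typically appear in the "scp" claim.
        String payloadJson = new String(
                Base64.getUrlDecoder().decode(payloadSegment), StandardCharsets.UTF_8);
        System.out.println(payloadJson);
    }
}
```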
## Configure issuer URI
spring-apps How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-managed-identities.md
For user-assigned managed identities, see [How to assign and remove user-assigne
An application can use its managed identity to get tokens to access other resources protected by Microsoft Entra ID, such as Azure Key Vault. These tokens represent the application accessing the resource, not any specific user of the application.
-You can configure the target resource to allow access from your application. For more information, see [Assign a managed identity access to a resource by using the Azure portal](/entra/identity/managed-identities-azure-resources/howto-assign-access-portal). For example, if you request a token to access Key Vault, be sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault will be rejected, even if they include the token. To learn more about which resources support Microsoft Entra tokens, see [Azure services that support Microsoft Entra authentication](/entra/identity/managed-identities-azure-resources/services-id-authentication-support.
+You can configure the target resource to allow access from your application. For more information, see [Assign a managed identity access to a resource by using the Azure portal](/entra/identity/managed-identities-azure-resources/howto-assign-access-portal). For example, if you request a token to access Key Vault, be sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault will be rejected, even if they include the token. To learn more about which resources support Microsoft Entra tokens, see [Azure services that support Microsoft Entra authentication](/entra/identity/managed-identities-azure-resources/services-id-authentication-support).
Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using Java SDK or Spring Boot starters to acquire a token. For various code and script examples, as well as guidance on important topics like handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](/entra/identity/managed-identities-azure-resources/how-to-use-vm-token).
The following table shows the mappings between concepts in Managed Identity scop
## Next steps -- [Learn more about managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview)
+- [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
- [How to use managed identities with Java SDK](https://github.com/Azure-Samples/azure-spring-apps-samples)
spring-apps Quickstart Deploy Restful Api App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-restful-api-app.md
zone_pivot_groups: spring-apps-enterprise-or-consumption-plan-selection
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-This article describes how to deploy a RESTful API application protected by [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) to Azure Spring Apps. The sample project is a simplified version based on the [Simple Todo](https://github.com/Azure-Samples/ASA-Samples-Web-Application) web application, which only provides the backend service and uses Microsoft Entra ID to protect the RESTful APIs.
+This article describes how to deploy a RESTful API application protected by [Microsoft Entra ID](/entra/fundamentals/whatis) to Azure Spring Apps. The sample project is a simplified version based on the [Simple Todo](https://github.com/Azure-Samples/ASA-Samples-Web-Application) web application, which only provides the backend service and uses Microsoft Entra ID to protect the RESTful APIs.
These RESTful APIs are protected by applying role-based access control (RBAC). Anonymous users can't access any data and aren't allowed to control access for different users. Authenticated users only have the following three permissions:
The following diagram shows the architecture of the system:
- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
+- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant).
### [Azure CLI](#tab/Azure-CLI)
The following diagram shows the architecture of the system:
- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
+- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant).
- [Azure CLI](/cli/azure/install-azure-cli) version 2.53.1 or higher.
The following diagram shows the architecture of the system:
For more information, see [Grant tenant-wide admin consent to an application](/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal). - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
+- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant).
### [Azure Developer CLI](#tab/Azure-Developer-CLI)
The following diagram shows the architecture of the system:
For more information, see [grant admin consent](/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal#prerequisites). - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
+- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant).
- [Azure Developer CLI (AZD)](https://aka.ms/azd-install), version 1.0.2 or higher.
The following diagram shows the architecture of the system:
#### Obtain the access token
-Use the following steps to use [OAuth 2.0 authorization code flow](../active-directory/develop/v2-oauth2-auth-code-flow.md) method to obtain an access token with Microsoft Entra ID, then access the RESTful APIs of the `ToDo` app:
+Use the following steps to obtain an access token with Microsoft Entra ID through the [OAuth 2.0 authorization code flow](/entra/identity-platform/v2-oauth2-auth-code-flow), and then access the RESTful APIs of the `ToDo` app. A sketch of the final API call follows these steps:
1. Open the URL exposed by the app, then select **Authorize** to prepare the OAuth2 authentication.
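Once the flow completes and you hold a bearer token, the API call itself is plain HTTP. The following sketch uses only the JDK's `java.net.http` client; the token and the app URL are passed in as arguments because both are deployment-specific:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CallProtectedApi {
    public static void main(String[] args) throws Exception {
        String accessToken = args[0]; // token from the authorization code flow
        String apiUrl = args[1];      // e.g. the URL of a protected ToDo endpoint

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiUrl))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```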
spring-apps Secure Communications End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/secure-communications-end-to-end.md
You can use any type of TLS/SSL certificate. For example, you can use certificat
Zero Trust is based on the principle of "never trust, always verify, and credential-free". Zero Trust helps to secure all communications by eliminating unknown and unmanaged certificates. Zero Trust involves trusting only certificates that are shared by verifying identity prior to granting access to those certificates. For more information, see the [Zero Trust Guidance Center](/security/zero-trust/).
-To securely load certificates from [Azure Key Vault](../key-vault/index.yml), Spring Boot apps use [managed identities](../active-directory/managed-identities-azure-resources/overview.md) and [Azure role-based access control (RBAC)](../role-based-access-control/index.yml). Azure Spring Apps uses a provider [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) and Azure role-based access control. This secure loading is powered using the Azure Key Vault Java Cryptography Architecture (JCA) Provider. For more information, see [Azure Key Vault JCA client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/keyvault/azure-security-keyvault-jca).
+To securely load certificates from [Azure Key Vault](../key-vault/index.yml), Spring Boot apps use [managed identities](/entra/identity/managed-identities-azure-resources/overview) and [Azure role-based access control (RBAC)](../role-based-access-control/index.yml). Azure Spring Apps uses a provider [service principal](/entra/identity-platform/app-objects-and-service-principals#service-principal-object) and Azure role-based access control. This secure loading is powered by the Azure Key Vault Java Cryptography Architecture (JCA) provider. For more information, see [Azure Key Vault JCA client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/keyvault/azure-security-keyvault-jca).
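The JCA provider surfaces those certificates through the standard Java `KeyStore` API. As a rough illustration of the underlying retrieval only, the sketch below deliberately uses the `azure-security-keyvault-certificates` client directly instead of the JCA provider; the vault URL and certificate name are placeholders:

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.certificates.CertificateClient;
import com.azure.security.keyvault.certificates.CertificateClientBuilder;
import com.azure.security.keyvault.certificates.models.KeyVaultCertificateWithPolicy;

public class FetchCertificate {
    public static void main(String[] args) {
        // DefaultAzureCredential resolves to the app's managed identity when
        // running in Azure; Azure RBAC on the vault gates the access.
        CertificateClient client = new CertificateClientBuilder()
                .vaultUrl("https://contoso-vault.vault.azure.net") // placeholder
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        KeyVaultCertificateWithPolicy cert = client.getCertificate("tls-cert"); // placeholder name
        System.out.println("Retrieved certificate: " + cert.getName());
    }
}
```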
With Azure Key Vault, you control the storage and distribution of certificates to reduce accidental leakage. Applications and services can securely access certificates. Key Vault uses Azure role-based access control to lock down access to only those requiring access, such as an admin, but also apps, using the principle of least privilege. Applications and services authenticate and authorize, using Microsoft Entra ID and Azure role-based access control, to access certificates. You can monitor the access and use of certificates in Key Vault through its full audit trail.
spring-apps Tutorial Authenticate Client With Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-authenticate-client-with-gateway.md
**This article applies to:** ✔️ Standard consumption and dedicated (Preview)
-This quickstart shows you how to secure communication between a client application and a microservice application that is hosted on Azure Spring Apps and shielded with a Spring Cloud Gateway app. The client application is verified as a security principal to initiate contact with the microservice deployed on Azure Spring Apps, using the app built with [Spring Cloud Gateway](https://docs.spring.io/spring-cloud-gateway/docs/current/reference/html/). This method employs Spring Cloud Gateway's Token Relay and Spring Security's Resource Server features for the processes of authentication and authorization, realized through the execution of the [OAuth 2.0 client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
+This quickstart shows you how to secure communication between a client application and a microservice application that is hosted on Azure Spring Apps and shielded with a Spring Cloud Gateway app. The client application is verified as a security principal to initiate contact with the microservice deployed on Azure Spring Apps, using the app built with [Spring Cloud Gateway](https://docs.spring.io/spring-cloud-gateway/docs/current/reference/html/). This method employs Spring Cloud Gateway's Token Relay and Spring Security's Resource Server features for the processes of authentication and authorization, realized through the execution of the [OAuth 2.0 client credentials flow](/entra/identity-platform/v2-oauth2-client-creds-grant-flow).
The following list shows the composition of the sample project:
The following list shows the composition of the sample project:
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- A Microsoft Entra tenant. For more information on how to create a Microsoft Entra tenant, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
+- A Microsoft Entra tenant. For more information on how to create a Microsoft Entra tenant, see [Quickstart: Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant).
- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. - Install [Node.js](https://nodejs.org).
spring-apps Tutorial Managed Identities Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-functions.md
az spring app create \
## Build a sample Spring Boot app to invoke the Function
-This sample invokes the HTTP triggered function by first requesting an access token from the MSI endpoint and using that token to authenticate the function HTTP request. For more information, see the [Get a token using HTTP](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http) section of [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
+This sample invokes the HTTP triggered function by first requesting an access token from the MSI endpoint and using that token to authenticate the function HTTP request. For more information, see the [Get a token using HTTP](/entra/identity/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-http) section of [How to use managed identities for Azure resources on an Azure VM to acquire an access token](/entra/identity/managed-identities-azure-resources/how-to-use-vm-token).
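Sketched with only JDK classes (the function URL and token audience are passed as arguments because they're deployment-specific, and real code would use a JSON parser or the Azure SDK instead of the hand-rolled extraction below), the flow looks roughly like this:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class InvokeFunction {
    public static void main(String[] args) throws Exception {
        String functionUrl = args[0]; // the HTTP-triggered function's invoke URL
        String audience = args[1];    // the resource/audience the function expects

        HttpClient http = HttpClient.newHttpClient();

        // 1. Request a token from the managed identity endpoint (shared with Azure VMs).
        HttpRequest tokenRequest = HttpRequest.newBuilder()
                .uri(URI.create("http://169.254.169.254/metadata/identity/oauth2/token"
                        + "?api-version=2018-02-01&resource="
                        + URLEncoder.encode(audience, StandardCharsets.UTF_8)))
                .header("Metadata", "true")
                .GET()
                .build();
        String tokenJson = http.send(tokenRequest, HttpResponse.BodyHandlers.ofString()).body();

        // Naive extraction of the access_token field from the JSON response.
        String token = tokenJson.replaceAll(".*\"access_token\"\\s*:\\s*\"([^\"]+)\".*", "$1");

        // 2. Call the function, presenting the token as a bearer credential.
        HttpRequest call = HttpRequest.newBuilder()
                .uri(URI.create(functionUrl))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
        System.out.println(http.send(call, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```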
1. Use the following command to clone the sample project:
This sample invokes the HTTP triggered function by first requesting an access to
## Next steps - [How to enable system-assigned managed identity for applications in Azure Spring Apps](./how-to-enable-system-assigned-managed-identity.md)-- [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
+- [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
- [Configure client apps to access your App Service](../app-service/configure-authentication-provider-aad.md#configure-client-apps-to-access-your-app-service)
spring-apps Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-key-vault.md
spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Cli
* [How to access Storage blob with managed identity in Azure Spring Apps](https://github.com/Azure-Samples/azure-spring-apps-samples/tree/main/managed-identity-storage-blob) * [How to enable system-assigned managed identity for applications in Azure Spring Apps](./how-to-enable-system-assigned-managed-identity.md)
-* [Learn more about managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md)
+* [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
* [Authenticate Azure Spring Apps with Key Vault in GitHub Actions](./github-actions-key-vault.md)
spring-apps Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/whats-new.md
Title: What's new in Azure Spring Apps
+ Title: What's new in Azure Spring Apps?
description: Learn about the new features and recent improvements in Azure Spring Apps.
Last updated 05/23/2023
-# What's new in Azure Spring Apps
+# What's new in Azure Spring Apps?
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
Microsoft's in-house antivirus solutions, Windows Defender and System Center End
If cloud tiering is enabled, solutions that directly back up the server endpoint or a VM on which the server endpoint is located shouldn't be used. Cloud tiering causes only a subset of your data to be stored on the server endpoint, with the full dataset residing in your Azure file share. Depending on the backup solution used, tiered files will either be skipped and not backed up (because they have the `FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS` attribute set), or they will be recalled to disk, resulting in high egress charges. We recommend using a cloud backup solution to back up the Azure file share directly. For more information, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json) or contact your backup provider to see if they support backing up Azure file shares.
-If you prefer to use an on-premises backup solution, backups should be performed on a server in the sync group that has cloud tiering disabled. When performing a restore, use the volume-level or file-level restore options. Files restored using the file-level restore option will be synced to all endpoints in the sync group, and existing files will be replaced with the version restored from backup. Volume-level restores won't replace newer file versions in the Azure file share or other server endpoints.
+If you prefer to use an on-premises backup solution, backups should be performed on a server in the sync group that has cloud tiering disabled, and you should verify that the server contains no tiered files. When performing a restore, use the volume-level or file-level restore options. Files restored using the file-level restore option will be synced to all endpoints in the sync group, and existing files will be replaced with the version restored from backup. Volume-level restores won't replace newer file versions in the Azure file share or other server endpoints.
> [!NOTE]
-> Bare-metal (BMR) restore can cause unexpected results and isn't currently supported. VSS snapshots (including Previous Versions tab) are supported on volumes which have cloud tiering enabled. However, you must enable previous version compatibility through PowerShell. [Learn how](file-sync-deployment-guide.md#optional-self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
+> Bare-metal (BMR) restore, VM restore, system restore (the Windows built-in OS restore), and file-level restore of a tiered file (which occurs when backup software backs up a tiered file instead of the full file) can cause unexpected results and aren't currently supported when cloud tiering is enabled. VSS snapshots (including the Previous Versions tab) are supported on volumes that have cloud tiering enabled. However, you must enable previous version compatibility through PowerShell. [Learn how](file-sync-deployment-guide.md#optional-self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
## Data Classification
storage Storage Files Migration Nas Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-hybrid.md
Title: On-premises NAS migration to Azure File Sync
-description: Learn how to migrate files from an on-premises Network Attached Storage (NAS) location to a hybrid cloud deployment with Azure File Sync and Azure file shares.
+description: Learn how to migrate SMB file shares from on-premises Network Attached Storage (NAS) to a hybrid cloud deployment with Azure File Sync and Azure file shares.
Previously updated : 03/28/2023 Last updated : 01/25/2024
Azure File Sync works on Direct Attached Storage (DAS) locations and doesn't sup
This fact makes a migration of your files necessary, and this article guides you through the planning and execution of such a migration. ## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
This fact makes a migration of your files necessary, and this article guides you
## Migration goals
-The goal is to move the shares on your NAS appliance to a Windows Server, then utilize Azure File Sync for a hybrid cloud deployment. Generally, migrations need to be done in a way that guarantee the integrity of the production data and its availability during the migration. The latter requires keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.
+The goal is to move the SMB file shares on your NAS appliance to a Windows Server, then utilize Azure File Sync for a hybrid cloud deployment. Generally, migrations need to be done in a way that guarantees the integrity of the production data and its availability during the migration. The latter requires keeping downtime to a minimum, so that it can fit into or only slightly exceed regular maintenance windows.
## Migration overview
-As mentioned in the Azure Files [migration overview article](storage-files-migration-overview.md), using the correct copy tool and approach is important. Your NAS appliance is exposing SMB shares directly on your local network. RoboCopy, built into Windows Server, is the best way to move your files in this migration scenario.
+As mentioned in [Migrate to SMB Azure file shares](storage-files-migration-overview.md), using the correct copy tool and approach is important. Your NAS appliance is exposing SMB shares directly on your local network. You can either use Azure Storage Mover or RoboCopy to move your files.
- Phase 1: [Identify how many Azure file shares you need](#phase-1-identify-how-many-azure-file-shares-you-need) - Phase 2: [Provision a suitable Windows Server on-premises](#phase-2-provision-a-suitable-windows-server-on-premises)
As mentioned in the Azure Files [migration overview article](storage-files-migra
- Phase 4: [Deploy Azure storage resources](#phase-4-deploy-azure-storage-resources) - Phase 5: [Deploy the Azure File Sync agent](#phase-5-deploy-the-azure-file-sync-agent) - Phase 6: [Configure Azure File Sync on the Windows Server](#phase-6-configure-azure-file-sync-on-the-windows-server)-- Phase 7: [RoboCopy](#phase-7-robocopy)
+- Phase 7: [Copy data using Azure Storage Mover or RoboCopy](#phase-7-copy-data-using-azure-storage-mover-or-robocopy)
- Phase 8: [User cut-over](#phase-8-user-cut-over) ## Phase 1: Identify how many Azure file shares you need
As mentioned in the Azure Files [migration overview article](storage-files-migra
1. Move a set of files that fits onto the disk 2. Let file sync and cloud tiering engage
- 3. When more free space is created on the volume, proceed with the next batch of files. Alternatively, review the RoboCopy command in the [RoboCopy section](#phase-7-robocopy) of this article for use of the new `/LFSM` switch. Using `/LFSM` can significantly simplify your RoboCopy jobs, but it isn't compatible with some other RoboCopy switches you might depend on. Only use the `/LFSM` switch when the migration destination is local storage. It's not supported when the destination is a remote SMB share.
+ 3. When more free space is created on the volume, proceed with the next batch of files. Alternatively, review the RoboCopy command in the [RoboCopy section](#phase-7-copy-data-using-azure-storage-mover-or-robocopy) of this article for use of the new `/LFSM` switch. Using `/LFSM` can significantly simplify your RoboCopy jobs, but it isn't compatible with some other RoboCopy switches you might depend on. Only use the `/LFSM` switch when the migration destination is local storage. It's not supported when the destination is a remote SMB share.
You can avoid this batching approach by provisioning the equivalent space on the Windows Server that your files occupy on the NAS appliance. Consider deduplication on NAS / Windows. If you don't want to permanently commit this high amount of storage to your Windows Server, you can reduce the volume size after the migration and before you adjust the cloud tiering policies. That creates a smaller on-premises cache of your Azure file shares.
After the creation of all server endpoints, sync is working. You can create a te
Both locations, the server folders and the Azure file shares, are otherwise empty and awaiting data in either location. In the next step, you'll begin to copy files into the Windows Server for Azure File Sync to move them up to the cloud. In case you've enabled cloud tiering, the server will then begin to tier files, should you run out of capacity on the local volume(s).
-## Phase 7: RoboCopy
+## Phase 7: Copy data using Azure Storage Mover or RoboCopy
-The basic migration approach is a RoboCopy from your NAS appliance to your Windows Server, and Azure File Sync to Azure file shares.
+Now you can use Azure Storage Mover or RoboCopy to copy data from your NAS appliance to your Windows Server, and use Azure File Sync to move the data to Azure file shares. This guide uses RoboCopy for the initial copy. To use Azure Storage Mover instead, see [Migrate to SMB Azure file shares using Azure Storage Mover](migrate-files-storage-mover.md).
Run the first local copy to your Windows Server target folder: * Identify the first location on your NAS appliance. * Identify the matching folder on the Windows Server that already has Azure File Sync configured on it.
-* Start the copy using RoboCopy.
+* Start the copy.
The following RoboCopy command will copy files from your NAS storage to your Windows Server target folder. The Windows Server will sync them to the Azure file share(s). If you provisioned less storage on your Windows Server than your files take up on the NAS appliance, then you have configured cloud tiering. As the local Windows Server volume gets full, [cloud tiering](../file-sync/file-sync-cloud-tiering-overview.md) will kick in and tier files that have successfully synced already. Cloud tiering will generate enough space to continue the copy from the NAS appliance. Cloud tiering checks once an hour to see what has synced and to free up disk space to reach the 99% volume free space.
-It's possible that RoboCopy moves files faster than you can sync to the cloud and tier locally, thus running out of local disk space. In this case, RoboCopy will fail. We recommend that you work through the shares in a sequence that prevents this - for example, not starting RoboCopy jobs for all shares at the same time, or only moving shares that fit on the current amount of free space on the Windows Server.
+
+It's possible that RoboCopy moves files faster than you can sync to the cloud and tier locally, thus running out of local disk space. In this case, RoboCopy will fail. We recommend that you work through the shares in a sequence that prevents this - for example, not starting copy jobs for all shares at the same time, or only moving shares that fit on the current amount of free space on the Windows Server.
[!INCLUDE [storage-files-migration-robocopy](../../../includes/storage-files-migration-robocopy.md)]
The following articles will help you understand deployment options, best practic
* [Azure File Sync overview](../file-sync/file-sync-planning.md) * [Deploy Azure File Sync](../file-sync/file-sync-deployment-guide.md)
-* [Azure File Sync troubleshooting](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json)
+* [Azure File Sync troubleshooting](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json)
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
description: Learn about the capacity, IOPS, and throughput rates for Azure file
Previously updated : 01/11/2024 Last updated : 01/25/2024
File scale targets apply to individual files stored in Azure file shares.
<sup>2 Subject to machine network limits, available bandwidth, I/O sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./smb-performance.md).</sup>
-<sup>3 Azure Files supports 10,000 open handles on the root directory and 2,000 open handles per file and directory within the share. The number of active users supported per share is dependent on the applications that are accessing the share. If you're using Azure Files to store disk images for large-scale virtual desktop workloads, you might run out of handles for the root directory or per file/directory. In this case, you might need to use two Azure file shares. If your applications aren't opening a handle on the root directory, Azure Files can support more than 10,000 active users per share.</sup>
+<sup>3 Azure Files supports 10,000 open handles on the root directory and 2,000 open handles per file and directory within the share. The number of active users supported per share is dependent on the applications that are accessing the share. If your applications aren't opening a handle on the root directory, Azure Files can support more than 10,000 active users per share. However, if you're using Azure Files to store disk images for large-scale virtual desktop workloads, you might run out of handles for the root directory or per file/directory. In this case, you might need to use multiple Azure file shares. For more information, see [Azure Files sizing guidance for Azure Virtual Desktop](#azure-files-sizing-guidance-for-azure-virtual-desktop).</sup>
+
+### Azure Files sizing guidance for Azure Virtual Desktop
+
+A popular use case for Azure Files is storing user profile containers and disk images for Azure Virtual Desktop, using either FSLogix or App attach. In large-scale Azure Virtual Desktop deployments, you might run out of handles for the root directory or per file/directory if you're using a single Azure file share. This section describes how handles are consumed by various types of disk images, and provides sizing guidance depending on the technology you're using.
+
+#### FSLogix
+
+If you're using [FSLogix with Azure Virtual Desktop](../../virtual-desktop/fslogix-containers-azure-files.md), your user profile containers are either Virtual Hard Disk (VHD) or Hyper-V Virtual Hard Disk (VHDX) files, and they're mounted in a user context, not a system context. Each user opens a single handle on the root directory of the file share. Azure Files can support a maximum of 10,000 users, assuming the layout is the file share (`\\storageaccount.file.core.windows.net\sharename`) + the profile directory (`%sid%_%username%`) + the profile container (`profile_%username.vhd(x)`).
+
+If you're hitting the limit of 10,000 concurrent handles for the root directory or users are seeing poor performance, try using an additional Azure file share and distributing the containers between the shares.
+
+> [!WARNING]
+> While Azure Files can support up to 10,000 concurrent users from a single file share, it's critical to properly test your workloads against the size and type of file share you've created. Your requirements might vary based on users, profile size, and workload.
+
+For example, if you have 2,400 concurrent users, you'd need 2,400 handles on the root directory (one for each user), which is below the limit of 10,000 open handles. For FSLogix users, reaching the limit of 2,000 open file and directory handles is extremely unlikely. If you have a single FSLogix profile container per user, you'd only consume two file/directory handles: one for the profile directory and one for the profile container file. If users have two containers each (profile and ODFC), you'd need one additional handle for the ODFC file.
+
+#### App attach with CimFS
+
+If you're using [MSIX App attach or App attach](../../virtual-desktop/app-attach-overview.md) to dynamically attach applications, you can use Composite Image File System (CimFS) or VHD/VHDX files for [disk images](../../virtual-desktop/app-attach-overview.md#application-images). Either way, the scale limits are per VM mounting the image, not per user. The number of users is irrelevant when calculating scale limits. When a VM is booted, it mounts the disk image, even if there are zero users.
+
+If you're using App attach with CimFS, the disk images only consume handles on the disk image files. They don't consume handles on the root directory or the directory containing the disk image. However, because a CimFS image is a combination of the .cim file and at least two other files, for every VM mounting the disk image, you'll need one handle each for three files in the directory. So if you have 100 VMs, you'll need 300 file handles.
+
+You might run out of file handles if the number of VMs per app exceeds 2,000. In this case, use an additional Azure file share.
+
+#### App attach with VHD/VHDX
+
+If you're using App attach with VHD/VHDX files, the files are mounted in a system context, not a user context, and they're shared and read-only. More than one handle on the VHDX file can be consumed by a connecting system. To stay within Azure Files scale limits, the number of VMs multiplied by the number of apps must be less than 10,000, and the number of VMs per app can't exceed 2,000. The binding constraint is whichever limit you hit first.
+
+In this scenario, you could hit the per file/directory limit with 2,000 mounts of a single VHD/VHDX. Or, if the share contains multiple VHD/VHDX files, you could hit the root directory limit first. For example, 100 VMs mounting 100 shared VHDX files will hit the 10,000 handle root directory limit.
+
+In another example, 100 VMs accessing 20 apps will require 2,000 root directory handles (100 x 20 = 2,000), which is well within the 10,000 limit for root directory handles. You'll also need a file handle and a directory/folder handle for every VM mounting the VHD(X) image, so 200 handles in this case (100 file handles + 100 directory handles), which is comfortably below the 2,000 handle limit per file/directory.
+
+If you're hitting the limits on maximum concurrent handles for the root directory or per file/directory, use an additional Azure file share.
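The arithmetic in this section is easy to encode as a quick sanity check before you commit to a single share. The following is a sketch only; 10,000 and 2,000 are the documented root directory and per-file/directory handle limits, and the workload numbers are the examples from the text:

```java
public class HandleLimitCheck {
    static final int ROOT_HANDLE_LIMIT = 10_000;    // open handles on the share root
    static final int PER_FILE_HANDLE_LIMIT = 2_000; // open handles per file or directory

    public static void main(String[] args) {
        // FSLogix: one root directory handle per concurrently signed-in user.
        int users = 2_400;
        System.out.println("FSLogix fits: " + (users <= ROOT_HANDLE_LIMIT));

        // App attach with VHD/VHDX: VMs x apps root handles are consumed, and
        // each shared image is mounted once per VM.
        int vms = 100, apps = 20;
        boolean fits = vms * apps <= ROOT_HANDLE_LIMIT && vms <= PER_FILE_HANDLE_LIMIT;
        System.out.println("App attach fits: " + fits);
    }
}
```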
## Azure File Sync scale targets
The following table indicates which targets are soft, representing the Microsoft
| File size | 100 GiB | No | | Minimum file size for a file to be tiered | Based on file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size will be 8 KiB. | Yes |
-> [!Note]
-> An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync will not be able to operate.
+> [!NOTE]
+> An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync won't be able to operate.
## Azure File Sync performance metrics
stream-analytics Stream Analytics Define Inputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-inputs.md
Previously updated : 11/9/2022 Last updated : 01/25/2024 # Stream data as input into Stream Analytics
These input resources can live in the same Azure subscription as your Stream Ana
### Compression
-Stream Analytics supports compression across all data stream input sources. Supported compression types are: None, Gzip, and Deflate compression. Support for compression isn't available for reference data. If the input format is Avro data that is compressed, it's handled transparently. You don't need to specify compression type with Avro serialization.
+Stream Analytics supports compression for all input sources. Supported compression types are: None, Gzip, and Deflate. Support for compression isn't available for reference data. If the input data is compressed Avro data, Stream Analytics handles it transparently. You don't need to specify the compression type with Avro serialization.
## Create, edit, or test inputs
-You can use the [Azure portal](stream-analytics-quick-create-portal.md), [Visual Studio](stream-analytics-quick-create-vs.md), and [Visual Studio Code](quick-create-visual-studio-code.md) to add and view or edit existing inputs on your streaming job. You can also test input connections and test queries from sample data from the Azure portal, [Visual Studio](stream-analytics-vs-tools-local-run.md), and [Visual Studio Code](visual-studio-code-local-run.md). When you write a query, you list the input in the FROM clause. You can get the list of available inputs from the **Query** page in the portal. If you wish to use multiple inputs, you can `JOIN` them or write multiple `SELECT` queries.
+You can use the [Azure portal](stream-analytics-quick-create-portal.md), [Visual Studio](stream-analytics-quick-create-vs.md), and [Visual Studio Code](quick-create-visual-studio-code.md) to add and view or edit existing inputs on your streaming job. You can also test input connections and test queries from sample data from the Azure portal, [Visual Studio](stream-analytics-vs-tools-local-run.md), and [Visual Studio Code](visual-studio-code-local-run.md). When you write a query, you list the input in the FROM clause. You can get the list of available inputs from the **Query** page in the portal. If you wish to use multiple inputs, `JOIN` them or write multiple `SELECT` queries.
> [!NOTE]
-> We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0) and it won't be improved going forward.
+> We strongly recommend that you use [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for the best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0), and the tools won't be improved going forward.
## Stream data from Event Hubs
-Azure Event Hubs provides highly scalable publish-subscribe event investors. An event hub can collect millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Together, Event Hubs and Stream Analytics provide an end-to-end solution for real-time analytics. Event Hubs lets you feed events into Azure in real-time, and Stream Analytics jobs can process those events in real-time. For example, you can send web clicks, sensor readings, or online log events to Event Hubs. You can then create Stream Analytics jobs to use Event Hubs as the input data streams for real-time filtering, aggregating, and correlation.
+Azure Event Hubs is a highly scalable publish-subscribe event ingestor. An event hub can collect millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Together, Event Hubs and Stream Analytics can provide an end-to-end solution for real-time analytics. Event Hubs lets you feed events into Azure in real time, and Stream Analytics jobs can process those events in real time. For example, you can send web clicks, sensor readings, or online log events to Event Hubs. You can then create Stream Analytics jobs that use Event Hubs as the input for real-time filtering, aggregation, and correlation.
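As one hedged illustration of the producing side (assuming the `com.azure:azure-messaging-eventhubs` client library; the connection string and hub name are placeholders), an application can publish events that a Stream Analytics job then reads as its input stream:

```java
import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;

import java.util.List;

public class SendEvents {
    public static void main(String[] args) {
        // Placeholder connection string and event hub name.
        EventHubProducerClient producer = new EventHubClientBuilder()
                .connectionString("<event-hubs-namespace-connection-string>", "<event-hub-name>")
                .buildProducerClient();

        // Each event lands with an EventEnqueuedUtcTime, which Stream Analytics
        // uses as the default timestamp.
        producer.send(List.of(new EventData("{\"sensorId\":1,\"reading\":21.5}")));
        producer.close();
    }
}
```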
`EventEnqueuedUtcTime` is the timestamp of an event's arrival in an event hub and is the default timestamp of events coming from Event Hubs to Stream Analytics. To process the data as a stream using a timestamp in the event payload, you must use the [TIMESTAMP BY](/stream-analytics-query/timestamp-by-azure-stream-analytics) keyword. ### Event Hubs Consumer groups
-You should configure each Stream Analytics event hub input to have its own consumer group. When a job contains a self-join or has multiple inputs, some inputs might be read by more than one reader downstream. This situation impacts the number of readers in a single consumer group. To avoid exceeding the Event Hubs limit of five readers per consumer group per partition, it's a best practice to designate a consumer group for each Stream Analytics job. There's also a limit of 20 consumer groups for a Standard tier event hub. For more information, see [Troubleshoot Azure Stream Analytics inputs](stream-analytics-troubleshoot-input.md).
+You should configure each event hub input to have its own consumer group. When a job contains a self-join or has multiple inputs, some inputs might be read by more than one reader downstream. This situation impacts the number of readers in a single consumer group. To avoid exceeding the Event Hubs limit of five readers per consumer group per partition, it's a best practice to designate a consumer group for each Stream Analytics job. There's also a limit of 20 consumer groups for a Standard tier event hub. For more information, see [Troubleshoot Azure Stream Analytics inputs](stream-analytics-troubleshoot-input.md).
### Create an input from Event Hubs
The following table explains each property in the **New input** page in the Azur
| Property | Description | | | |
-| **Input alias** |A friendly name that you use in the job's query to reference this input. |
-| **Subscription** | Choose the subscription in which the Event hub resource exists. |
-| **Event Hub namespace** | The Event Hubs namespace is a container for a set of messaging entities. When you create a new event hub, you also create the namespace. |
+| **Input alias** | A friendly name that you use in the job's query to reference this input. |
+| **Subscription** | Choose the Azure subscription in which the Event hub resource exists. |
+| **Event Hub namespace** | The Event Hubs namespace is a container for event hubs. When you create an event hub, you also create the namespace. |
| **Event Hub name** | The name of the event hub to use as input. |
+| **Event Hub consumer group** (recommended) | We recommend that you use a distinct consumer group for each Stream Analytics job. This string identifies the consumer group to use to ingest data from the event hub. If no consumer group is specified, the Stream Analytics job uses the `$Default` consumer group. |
+| **Authentication mode** | Specify the type of authentication you want to use to connect to the event hub. You can use a connection string or a managed identity. For the managed identity option, you can either create a system-assigned managed identity for the Stream Analytics job or use a user-assigned managed identity to authenticate with the event hub. When you use a managed identity, it must be a member of the [Azure Event Hubs Data Receiver or Azure Event Hubs Data Owner roles](../event-hubs/authenticate-application.md#built-in-roles-for-azure-event-hubs). |
| **Event Hub policy name** | The shared access policy that provides access to the Event Hubs. Each shared access policy has a name, permissions that you set, and access keys. This option is automatically populated, unless you select the option to provide the Event Hubs settings manually.|
-| **Event Hub consumer group** (recommended) | It's highly recommended to use a distinct consumer group for each Stream Analytics job. This string identifies the consumer group to use to ingest data from the event hub. If no consumer group is specified, the Stream Analytics job uses the $Default consumer group. |
-| **Partition key** | It is an optional field that is available only if your job is configured to use [compatibility level](./stream-analytics-compatibility-level.md) 1.2 or higher. If your input is partitioned by a property, you can add the name of this property here. It is used for improving the performance of your query if it includes a PARTITION BY or GROUP BY clause on this property. If this job uses compatibility level 1.2 or higher, this field defaults to "PartitionId". |
-| **Event serialization format** | The serialization format (JSON, CSV, Avro, Parquet, or [Other (Protobuf, XML, proprietary...)](custom-deserializer.md)) of the incoming data stream. Ensure the JSON format aligns with the specification and doesn't include leading 0 for decimal numbers. |
+| **Partition key** | An optional field that is available only if your job is configured to use [compatibility level](./stream-analytics-compatibility-level.md) 1.2 or higher. If your input is partitioned by a property, you can add the name of this property here. It's used to improve the performance of your query if it includes a `PARTITION BY` or `GROUP BY` clause on this property. If this job uses compatibility level 1.2 or higher, this field defaults to `PartitionId`. |
+| **Event serialization format** | The serialization format (JSON, CSV, Avro, Parquet, or [Other (Protobuf, XML, proprietary...)](custom-deserializer.md)) of the incoming data stream. Ensure the JSON format aligns with the specification and doesn't include leading 0 for decimal numbers. |
| **Encoding** | UTF-8 is currently the only supported encoding format. | | **Event compression type** | The compression type used to read the incoming data stream, such as None (default), Gzip, or Deflate. |
+| **Schema registry (preview)** | Select a schema registry that contains schemas for the event data received from the event hub. |
When your data comes from an Event Hubs stream input, you have access to the following metadata fields in your Stream Analytics query: | Property | Description | | | |
-| **EventProcessedUtcTime** |The date and time that the event was processed by Stream Analytics. |
-| **EventEnqueuedUtcTime** |The date and time that the event was received by Event Hubs. |
+| **EventProcessedUtcTime** |The date and time when Stream Analytics processes the event. |
+| **EventEnqueuedUtcTime** |The date and time when Event Hubs receives the event. |
| **PartitionId** |The zero-based partition ID for the input adapter. | For example, using these fields, you can write a query like the following example:
FROM Input
``` > [!NOTE]
-> When using Event Hub as an endpoint for IoT Hub Routes, you can access to the IoT Hub metadata using the [GetMetadataPropertyValue function](/stream-analytics-query/getmetadatapropertyvalue).
+> When using Event Hubs as an endpoint for IoT Hub Routes, you can access the IoT Hub metadata using the [GetMetadataPropertyValue function](/stream-analytics-query/getmetadatapropertyvalue).
> ## Stream data from IoT Hub
The following table explains each property in the **New input** page in the Azur
| **Input alias** | A friendly name that you use in the job's query to reference this input.| | **Subscription** | Choose the subscription in which the IoT Hub resource exists. | | **IoT Hub** | The name of the IoT Hub to use as input. |
-| **Endpoint** | The endpoint for the IoT Hub.|
+| **Consumer group** | We recommend that you use a different consumer group for each Stream Analytics job. The consumer group is used to ingest data from the IoT Hub. Stream Analytics uses the $Default consumer group unless you specify otherwise. |
| **Shared access policy name** | The shared access policy that provides access to the IoT Hub. Each shared access policy has a name, permissions that you set, and access keys. |
-| **Shared access policy key** | The shared access key used to authorize access to the IoT Hub. This option is automatically populated in unless you select the option to provide the Iot Hub settings manually. |
-| **Consumer group** | It's highly recommended that you use a different consumer group for each Stream Analytics job. The consumer group is used to ingest data from the IoT Hub. Stream Analytics uses the $Default consumer group unless you specify otherwise. |
-| **Partition key** | It is an optional field that is available only if your job is configured to use [compatibility level](./stream-analytics-compatibility-level.md) 1.2 or higher. If your input is partitioned by a property, you can add the name of this property here. It is used for improving the performance of your query if it includes a PARTITION BY or GROUP BY clause on this property. If this job uses compatibility level 1.2 or higher, this field defaults to "PartitionId". |
-| **Event serialization format** | The serialization format (JSON, CSV, Avro, Parquet, or [Other (Protobuf, XML, proprietary...)](custom-deserializer.md)) of the incoming data stream. Ensure the JSON format aligns with the specification and doesnΓÇÖt include leading 0 for decimal numbers. |
+| **Shared access policy key** | The shared access key used to authorize access to the IoT Hub. This option is automatically populated unless you select the option to provide the IoT Hub settings manually. |
+| **Endpoint** | The endpoint for the IoT Hub. |
+| **Partition key** | An optional field that is available only if your job is configured to use [compatibility level](./stream-analytics-compatibility-level.md) 1.2 or higher. If your input is partitioned by a property, you can add the name of this property here. It's used to improve the performance of your query if it includes a `PARTITION BY` or `GROUP BY` clause on this property. If this job uses compatibility level 1.2 or higher, this field defaults to `PartitionId`. |
+| **Event serialization format** | The serialization format (JSON, CSV, Avro, Parquet, or [Other (Protobuf, XML, proprietary...)](custom-deserializer.md)) of the incoming data stream. Ensure the JSON format aligns with the specification and doesn't include leading 0 for decimal numbers. |
| **Encoding** | UTF-8 is currently the only supported encoding format. | | **Event compression type** | The compression type used to read the incoming data stream, such as None (default), Gzip, or Deflate. |
When you use stream data from an IoT Hub, you have access to the following metad
| Property | Description | | | | | **EventProcessedUtcTime** | The date and time that the event was processed. |
-| **EventEnqueuedUtcTime** | The date and time that the event was received by the IoT Hub. |
+| **EventEnqueuedUtcTime** | The date and time when the IoT Hub receives the event. |
| **PartitionId** | The zero-based partition ID for the input adapter. | | **IoTHub.MessageId** | An ID that's used to correlate two-way communication in IoT Hub. | | **IoTHub.CorrelationId** | An ID that's used in message responses and feedback in IoT Hub. |
-| **IoTHub.ConnectionDeviceId** | The authentication ID used to send this message. This value is stamped on servicebound messages by the IoT Hub. |
+| **IoTHub.ConnectionDeviceId** | The authentication ID used to send this message. This value is stamped on service-bound messages by the IoT Hub. |
| **IoTHub.ConnectionDeviceGenerationId** | The generation ID of the authenticated device that was used to send this message. This value is stamped on service-bound messages by the IoT Hub. |
-| **IoTHub.EnqueuedTime** | The time when the message was received by the IoT Hub. |
+| **IoTHub.EnqueuedTime** | The time when the IoT Hub receives the message. |
## Stream data from Blob storage or Data Lake Storage Gen2
-For scenarios with large quantities of unstructured data to store in the cloud, Azure Blob storage or Azure Data Lake Storage Gen2 (ADLS Gen2) offers a cost-effective and scalable solution. Data in Blob storage or ADLS Gen2 is considered data at rest; however, this data can be processed as a data stream by Stream Analytics.
+For scenarios with large quantities of unstructured data to store in the cloud, Azure Blob storage or Azure Data Lake Storage Gen2 offers a cost-effective and scalable solution. Data in Blob storage or Azure Data Lake Storage Gen2 is considered data at rest. However, this data can be processed as a data stream by Stream Analytics.
-Log processing is a commonly used scenario for using such inputs with Stream Analytics. In this scenario, telemetry data files have been captured from a system and need to be parsed and processed to extract meaningful data.
+Log processing is a commonly used scenario for using such inputs with Stream Analytics. In this scenario, telemetry data files are captured from a system and need to be parsed and processed to extract meaningful data.
-The default timestamp of a Blob storage or ADLS Gen2 event in Stream Analytics is the timestamp that it was last modified, which is `BlobLastModifiedUtcTime`. If a blob is uploaded to a storage account at 13:00, and the Azure Stream Analytics job is started using the option *Now* at 13:01, it will not be picked up as its modified time falls outside the job run period.
+The default timestamp of a Blob storage or Azure Data Lake Storage Gen2 event in Stream Analytics is the timestamp that it was last modified, which is `BlobLastModifiedUtcTime`. If a blob is uploaded to a storage account at 13:00, and the Azure Stream Analytics job is started using the option *Now* at 13:01, it will not be picked up as its modified time falls outside the job run period.
If a blob is uploaded to a storage account container at 13:00, and the Azure Stream Analytics job is started using *Custom Time* at 13:00 or earlier, the blob will be picked up as its modified time falls inside the job run period.
-If an Azure Stream Analytics job is started using *Now* at 13:00, and a blob is uploaded to the storage account container at 13:01, Azure Stream Analytics will pick up the blob. The timestamp assigned to each blob is based only on `BlobLastModifiedTime`. The folder the blob is in has no relation to the timestamp assigned. For example, if there's a blob *2019/10-01/00/b1.txt* with a `BlobLastModifiedTime` of 2019-11-11, then the timestamp assigned to this blob is 2019-11-11.
+If an Azure Stream Analytics job is started using *Now* at 13:00, and a blob is uploaded to the storage account container at 13:01, Azure Stream Analytics picks up the blob. The timestamp assigned to each blob is based only on `BlobLastModifiedTime`. The folder the blob is in has no relation to the timestamp assigned. For example, if there's a blob `2019/10-01/00/b1.txt` with a `BlobLastModifiedTime` of `2019-11-11`, then the timestamp assigned to this blob is `2019-11-11`.
-To process the data as a stream using a timestamp in the event payload, you must use the [TIMESTAMP BY](/stream-analytics-query/stream-analytics-query-language-reference) keyword. A Stream Analytics job pulls data from Azure Blob storage or ADLS Gen2 input every second if the blob file is available. If the blob file is unavailable, there's an exponential backoff with a maximum time delay of 90 seconds.
+To process the data as a stream using a timestamp in the event payload, you must use the [TIMESTAMP BY](/stream-analytics-query/stream-analytics-query-language-reference) keyword. A Stream Analytics job pulls data from Azure Blob storage or Azure Data Lake Storage Gen2 input every second if the blob file is available. If the blob file is unavailable, there's an exponential backoff with a maximum time delay of 90 seconds.
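For example, here's a minimal sketch of a query that uses a field from the event payload, rather than `BlobLastModifiedUtcTime`, as the event timestamp; the field name `EventTime` and the input and output names are assumptions for illustration:

```SQL
-- Use the payload's EventTime field as the event timestamp
-- instead of the blob's last-modified time
SELECT
    *
INTO Output
FROM BlobInput TIMESTAMP BY EventTime
```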
CSV-formatted inputs require a header row to define fields for the data set, and all header row fields must be unique.
The following table explains each property in the **New input** page in the Azur
| **Storage account** | The name of the storage account where the blob files are located. | | **Storage account key** | The secret key associated with the storage account. This option is automatically populated unless you select the option to provide the settings manually. | | **Container** | Containers provide a logical grouping for blobs. You can choose either **Use existing** container or **Create new** to have a new container created.|
+| **Authentication mode** | Specify the type of authentication you want to use to connect to the storage account. You can use a connection string or a managed identity to authenticate with the storage account. For the managed identity option, you can either create a system-assigned managed identity for the Stream Analytics job or use a user-assigned managed identity to authenticate with the storage account. When you use a managed identity, the managed identity must be a member of an [appropriate role](/azure/role-based-access-control/built-in-roles#storage) on the storage account. |
| **Path pattern** (optional) | The file path used to locate the blobs within the specified container. If you want to read blobs from the root of the container, don't set a path pattern. Within the path, you can specify one or more instances of the following three variables: `{date}`, `{time}`, or `{partition}`<br/><br/>Example 1: `cluster1/logs/{date}/{time}/{partition}`<br/><br/>Example 2: `cluster1/logs/{date}`<br/><br/>The `*` character isn't an allowed value for the path prefix. Only valid <a href="/rest/api/storageservices/Naming-and-Referencing-Containers--Blobs--and-Metadata">Azure blob characters</a> are allowed. Don't include container names or file names. | | **Date format** (optional) | If you use the date variable in the path, the date format in which the files are organized. Example: `YYYY/MM/DD` <br/><br/> When blob input has `{date}` or `{time}` in its path, the folders are looked at in ascending time order.| | **Time format** (optional) | If you use the time variable in the path, the time format in which the files are organized. Currently the only supported value is `HH` for hours. |
-| **Partition key** | It is an optional field that is available only if your job is configured to use [compatibility level](./stream-analytics-compatibility-level.md) 1.2 or higher. If your input is partitioned by a property, you can add the name of this property here. It is used for improving the performance of your query if it includes a PARTITION BY or GROUP BY clause on this property. If this job uses compatibility level 1.2 or higher, this field defaults to "PartitionId". |
+| **Partition key** | It's an optional field that is available only if your job is configured to use [compatibility level](./stream-analytics-compatibility-level.md) 1.2 or higher. If your input is partitioned by a property, you can add the name of this property here. It's used for improving the performance of your query if it includes a PARTITION BY or GROUP BY clause on this property. If this job uses compatibility level 1.2 or higher, this field defaults to "PartitionId". |
| **Count of input partitions** | This field is present only when {partition} is present in path pattern. The value of this property is an integer >=1. Wherever {partition} appears in pathPattern, a number between 0 and the value of this field -1 will be used. |
-| **Event serialization format** | The serialization format (JSON, CSV, Avro, Parquet, or [Other (Protobuf, XML, proprietary...)](custom-deserializer.md)) of the incoming data stream. Ensure the JSON format aligns with the specification and doesn't include leading 0 for decimal numbers. |
+| **Event serialization format** | The serialization format (JSON, CSV, Avro, Parquet, or [Other (Protobuf, XML, proprietary...)](custom-deserializer.md)) of the incoming data stream. Ensure the JSON format aligns with the specification and doesn't include leading 0 for decimal numbers. |
| **Encoding** | For CSV and JSON, UTF-8 is currently the only supported encoding format. | | **Compression** | The compression type used to read the incoming data stream, such as None (default), Gzip, or Deflate. |
When your data comes from a Blob storage source, you have access to the followin
| Property | Description | | | | | **BlobName** |The name of the input blob that the event came from. |
-| **EventProcessedUtcTime** |The date and time that the event was processed by Stream Analytics. |
+| **EventProcessedUtcTime** |The date and time when Stream Analytics processes the event. |
| **BlobLastModifiedUtcTime** |The date and time that the blob was last modified. | | **PartitionId** |The zero-based partition ID for the input adapter. |
SELECT
FROM Input ```
+## Stream data from Apache Kafka
+Azure Stream Analytics lets you connect directly to Apache Kafka clusters to ingest data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The Kafka input is backward compatible and supports all Kafka versions from 0.10 onward with the latest client release. Users can connect to Kafka clusters inside a virtual network or Kafka clusters with a public endpoint, depending on the configuration. The configuration relies on existing Kafka configuration conventions. Supported compression types are None, Gzip, Snappy, LZ4, and Zstd.
+
+For more information, see [Stream data from Kafka into Azure Stream Analytics (Preview)](stream-analytics-define-kafka-input.md).
++ ## Next steps > [!div class="nextstepaction"] > [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
stream-analytics Stream Analytics Define Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-outputs.md
Previously updated : 01/18/2023 Last updated : 01/25/2024 # Outputs from Azure Stream Analytics
An Azure Stream Analytics job consists of an input, query, and an output. There
To create, edit, and test Stream Analytics job outputs, you can use the [Azure portal](stream-analytics-quick-create-portal.md#configure-job-output), [Azure PowerShell](stream-analytics-quick-create-powershell.md#configure-output-to-the-job), [.NET API](/dotnet/api/microsoft.azure.management.streamanalytics.ioutputsoperations), [REST API](/rest/api/streamanalytics/), [Visual Studio](stream-analytics-quick-create-vs.md), and [Visual Studio Code](./quick-create-visual-studio-code.md). > [!NOTE]
-> We strongly recommend using [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0) and it won't be improved going forward.
+> We strongly recommend that you use [**Stream Analytics tools for Visual Studio Code**](./quick-create-visual-studio-code.md) for the best local development experience. There are known feature gaps in Stream Analytics tools for Visual Studio 2019 (version 2.6.3000.0), and the tool won't be improved going forward.
-Some outputs types support [partitioning](#partitioning), and [output batch sizes](#output-batch-size) vary to optimize throughput. The following table shows features that are supported for each output type:
+Some output types support [partitioning](#partitioning) as shown in the following table.
+
+All outputs support batching, but only some support setting the output batch size explicitly. For more information, see the [output batch sizes](#output-batch-size) section.
| Output type | Partitioning | Security | |-|--|-|
-|[Azure Data Lake Storage Gen 1](azure-data-lake-storage-gen1-output.md)|Yes|Microsoft Entra user </br> , Managed Identity|
|[Azure Data Explorer](azure-database-explorer-output.md)|Yes|Managed Identity|
-|[Azure Database for PostgreSQL](postgresql-database-output.md)|Yes|Username and password auth|
-|[Azure SQL Database](sql-database-output.md)|Yes, optional.|SQL user auth, </br> Managed Identity|
+|[Azure Functions](azure-functions-output.md)|Yes|Access key|
|[Azure Synapse Analytics](azure-synapse-analytics-output.md)|Yes|SQL user auth, </br> Managed Identity| |[Blob storage and Azure Data Lake Gen 2](blob-storage-azure-data-lake-gen2-output.md)|Yes|Access key, </br> Managed Identity|
+|[Azure Cosmos DB](azure-cosmos-db-output.md)|Yes|Access key, </br> Managed Identity|
+|[Azure Data Lake Storage Gen 1](azure-data-lake-storage-gen1-output.md)|Yes|Microsoft Entra user, </br> Managed Identity|
|[Azure Event Hubs](event-hubs-output.md)|Yes, need to set the partition key column in output configuration.|Access key, </br> Managed Identity|
+|[Kafka (preview)](kafka-output.md)|Yes, need to set the partition key column in output configuration.|Access key, </br> Managed Identity|
+|[Azure Database for PostgreSQL](postgresql-database-output.md)|Yes|Username and password auth|
|[Power BI](power-bi-output.md)|No|Microsoft Entra user, </br> Managed Identity|
-|[Azure Table storage](table-storage-output.md)|Yes|Account key|
|[Azure Service Bus queues](service-bus-queues-output.md)|Yes|Access key, </br> Managed Identity| |[Azure Service Bus topics](service-bus-topics-output.md)|Yes|Access key, </br> Managed Identity|
-|[Azure Cosmos DB](azure-cosmos-db-output.md)|Yes|Access key, </br> Managed Identity|
-|[Azure Functions](azure-functions-output.md)|Yes|Access key|
+|[Azure SQL Database](sql-database-output.md)|Yes, optional.|SQL user auth, </br> Managed Identity|
+|[Azure Table storage](table-storage-output.md)|Yes|Account key|
+ > [!IMPORTANT] > Azure Stream Analytics uses Insert or Replace API by design. This operation replaces an existing entity or inserts a new entity if it does not exist in the table. ## Partitioning
-Stream Analytics supports partitions for all outputs except for Power BI. For more information on partition keys and the number of output writers, see the article for the specific output type you're interested in. All output articles are linked in the previous section.
+Stream Analytics supports partitions for all outputs except for Power BI. For more information on partition keys and the number of output writers, see the article for the specific output type you're interested in. Articles for output types are linked in the previous section.
-Additionally, for more advanced tuning of the partitions, the number of output writers can be controlled using an `INTO <partition count>` (see [INTO](/stream-analytics-query/into-azure-stream-analytics#into-shard-count)) clause in your query, which can be helpful in achieving a desired job topology. If your output adapter is not partitioned, lack of data in one input partition causes a delay up to the late arrival amount of time. In such cases, the output is merged to a single writer, which might cause bottlenecks in your pipeline. To learn more about late arrival policy, see [Azure Stream Analytics event order considerations](./stream-analytics-time-handling.md).
+Additionally, for more advanced tuning of the partitions, the number of output writers can be controlled using an `INTO <partition count>` (see [INTO](/stream-analytics-query/into-azure-stream-analytics#into-shard-count)) clause in your query, which can be helpful in achieving a desired job topology. If your output adapter isn't partitioned, lack of data in one input partition causes a delay up to the late arrival amount of time. In such cases, the output is merged to a single writer, which might cause bottlenecks in your pipeline. To learn more about late arrival policy, see [Azure Stream Analytics event order considerations](./stream-analytics-time-handling.md).
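For illustration, here's a minimal sketch of repartitioning with `INTO <partition count>`; the key name `DeviceId` and the input and output names are assumptions:

```SQL
-- Repartition the input on DeviceId into 10 partitions,
-- which controls the number of parallel writers downstream
WITH RepartitionedInput AS
(
    SELECT * FROM Input PARTITION BY DeviceId INTO 10
)
SELECT *
INTO Output
FROM RepartitionedInput
```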
## Output batch size
-All outputs support batching, but only some support batch size explicitly. Azure Stream Analytics uses variable-size batches to process events and write to outputs. Typically the Stream Analytics engine doesn't write one message at a time, and uses batches for efficiency. When the rate of both the incoming and outgoing events is high, Stream Analytics uses larger batches. When the egress rate is low, it uses smaller batches to keep latency low.
+All outputs support batching, but only some support setting the batch size explicitly. Azure Stream Analytics uses variable-size batches to process events and write to outputs. Typically, the Stream Analytics engine doesn't write one message at a time and uses batches for efficiency. When the rate of both the incoming and outgoing events is high, Stream Analytics uses larger batches. When the egress rate is low, it uses smaller batches to keep latency low.
## Avro and Parquet file splitting behavior A Stream Analytics query can generate multiple schemas for a given output. The list of columns projected, and their type, can change on a row-by-row basis.
-By design, the Avro and Parquet formats do not support variable schemas in a single file.
+By design, the Avro and Parquet formats don't support variable schemas in a single file.
-The following behaviors may occur when directing a stream with variable schemas to an output using these formats:
+The following behaviors might occur when directing a stream with variable schemas to an output using these formats:
-- If the schema change can be detected, the current output file will be closed, and a new one initialized on the new schema. Splitting files as such will severely slow down the output when schema changes happen frequently. With back pressure this will in turn severely impact the overall performance of the job-- If the schema change cannot be detected, the row will most likely be rejected, and the job become stuck as the row can't be output. Nested columns, or multi-type arrays, are situations that won't be discovered and be rejected.
+- If the schema change can be detected, the current output file is closed, and a new one is initialized on the new schema. Splitting files this way severely slows down the output when schema changes happen frequently, and the resulting back pressure can severely impact the overall performance of the job.
+- If the schema change can't be detected, the row is most likely rejected, and the job gets stuck because the row can't be output. Nested columns and multi-type arrays are situations that aren't discovered and are rejected.
-It is highly recommended to consider outputs using the Avro or Parquet format to be strongly typed, or schema-on-write, and queries targeting them to be written as such (explicit conversions and projections for a uniform schema).
+We recommend that you consider outputs using the Avro or Parquet format to be strongly typed, or schema-on-write, and queries targeting them to be written as such (explicit conversions and projections for a uniform schema).
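For example, here's a minimal sketch of such an explicit projection; the field, input, and output names are hypothetical:

```SQL
-- Project a fixed set of columns with explicit types so that
-- every row written to the Parquet output shares one schema
SELECT
    CAST(DeviceId AS bigint) AS DeviceId,
    CAST(Temperature AS float) AS Temperature
INTO ParquetOutput
FROM Input
```

Because every row carries the same columns and types, the output files never need to split on a schema change.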
If multiple schemas need to be generated, consider creating multiple outputs and splitting records into each destination by using a `WHERE` clause. ## Parquet output batching window properties
-When using Azure Resource Manager template deployment or the REST API, the two batching window properties are:
+When you use Azure Resource Manager template deployment or the REST API, the two batching window properties are:
1. *timeWindow*
- The maximum wait time per batch. The value should be a string of Timespan. For example, "00:02:00" for two minutes. After this time, the batch is written to the output even if the minimum rows requirement is not met. The default value is 1 minute and the allowed maximum is 2 hours. If your blob output has path pattern frequency, the wait time cannot be higher than the partition time range.
+ The maximum wait time per batch. The value should be a string in `Timespan` format. For example, `00:02:00` for two minutes. After this time, the batch is written to the output even if the minimum rows requirement isn't met. The default value is 1 minute and the allowed maximum is 2 hours. If your blob output has path pattern frequency, the wait time can't be higher than the partition time range.
2. *sizeWindow* The minimum number of rows per batch. For Parquet, every batch creates a new file. The current default value is 2,000 rows and the allowed maximum is 10,000 rows.
-These batching window properties are only supported by API version **2017-04-01-preview**. Below is an example of the JSON payload for a REST API call:
+These batching window properties are only supported by API version **2017-04-01-preview** or higher. Here's an example of the JSON payload for a REST API call:
```json "type": "stream",
stream-analytics Stream Analytics Stream Analytics Query Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-stream-analytics-query-patterns.md
Title: Common query patterns in Azure Stream Analytics
description: This article describes several common query patterns and designs that are useful in Azure Stream Analytics jobs. Previously updated : 08/29/2022 Last updated : 01/23/2024
This article outlines solutions to several common query patterns based on real-w
## Supported Data Formats
-Azure Stream Analytics supports processing events in CSV, JSON, and Avro data formats.
-
-Both JSON and Avro might contain complex types such as nested objects (records) or arrays. For more information on working with these complex data types, see the [Parsing JSON and AVRO data](stream-analytics-parsing-json.md) article.
+Azure Stream Analytics supports processing events in CSV, JSON, and Avro data formats. The JSON and Avro formats can contain complex types such as nested objects (records) or arrays. For more information on working with these complex data types, see [Parsing JSON and AVRO data](stream-analytics-parsing-json.md).
## Send data to multiple outputs
-Multiple **SELECT** statements can be used to output data to different output sinks. For example, one **SELECT** can output a threshold-based alert while another one can output events to blob storage.
+Multiple **SELECT** statements can be used to output data to different output sinks. For example, one **SELECT** statement can output a threshold-based alert while another one can output events to blob storage.
-**Input**:
+Consider the following **input**:
``` | Make | Time | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |
-| Make1 |2015-01-01T00:00:02.0000000Z |
-| Make2 |2015-01-01T00:00:01.0000000Z |
-| Make2 |2015-01-01T00:00:02.0000000Z |
-| Make2 |2015-01-01T00:00:03.0000000Z |
+| Make1 |2023-01-01T00:00:01.0000000Z |
+| Make1 |2023-01-01T00:00:02.0000000Z |
+| Make2 |2023-01-01T00:00:01.0000000Z |
+| Make2 |2023-01-01T00:00:02.0000000Z |
+| Make2 |2023-01-01T00:00:03.0000000Z |
```
-**Output ArchiveOutput**:
+You want the following two outputs from the query:
+
+**ArchiveOutput**:
``` | Make | Time | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |
-| Make1 |2015-01-01T00:00:02.0000000Z |
-| Make2 |2015-01-01T00:00:01.0000000Z |
-| Make2 |2015-01-01T00:00:02.0000000Z |
-| Make2 |2015-01-01T00:00:03.0000000Z |
+| Make1 |2023-01-01T00:00:01.0000000Z |
+| Make1 |2023-01-01T00:00:02.0000000Z |
+| Make2 |2023-01-01T00:00:01.0000000Z |
+| Make2 |2023-01-01T00:00:02.0000000Z |
+| Make2 |2023-01-01T00:00:03.0000000Z |
```
-**Output AlertOutput**:
+**AlertOutput**:
``` | Make | Time | Count | | | | |
-| Make2 |2015-01-01T00:00:10.0000000Z |3 |
+| Make2 |2023-01-01T00:00:10.0000000Z |3 |
```
-**Query**:
+**Query with two SELECT statements that write to ArchiveOutput and AlertOutput**:
```SQL SELECT
HAVING
[Count] >= 3 ```
-The **INTO** clause tells Stream Analytics which of the outputs to write the data to. The first **SELECT** defines a pass-through query that receives data from the input and sends it to the output named **ArchiveOutput**. The second query does some simple aggregation and filtering before sending the results to a downstream alerting system output called **AlertOutput**.
-Note that the **WITH** clause can be used to define multiple subquery blocks. This option has the benefit of opening fewer readers to the input source.
+The **INTO** clause tells the Stream Analytics service which of the outputs to write the data to. The first **SELECT** defines a pass-through query that receives data from the input and sends it to the output named **ArchiveOutput**. The second query aggregates and filters data before sending the results to a downstream alerting system output called **AlertOutput**.
+
+The **WITH** clause can be used to define multiple subquery blocks. This option has the benefit of opening fewer readers to the input source.
**Query**:
For more information, see [**WITH** clause](/stream-analytics-query/with-azure-s
A simple pass-through query can be used to copy the input stream data into the output. For example, if a stream of data containing real-time vehicle information needs to be saved in an SQL database for later analysis, a simple pass-through query does the job.
-**Input**:
+Consider the following **input**:
``` | Make | Time | Weight | | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |"1000" |
-| Make1 |2015-01-01T00:00:02.0000000Z |"2000" |
+| Make1 |2023-01-01T00:00:01.0000000Z |"1000" |
+| Make1 |2023-01-01T00:00:02.0000000Z |"2000" |
```
-**Output**:
+You want the **output** to be the same as the input:
``` | Make | Time | Weight | | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |"1000" |
-| Make1 |2015-01-01T00:00:02.0000000Z |"2000" |
+| Make1 |2023-01-01T00:00:01.0000000Z |"1000" |
+| Make1 |2023-01-01T00:00:02.0000000Z |"2000" |
```
-**Query**:
+Here's the **query**:
```SQL SELECT
INTO Output
FROM Input ```
-A **SELECT** * query projects all the fields of an incoming event and sends them to the output. The same way, **SELECT** can also be used to only project required fields from the input. In this example, if vehicle *Make* and *Time* are the only required fields to be saved, those fields can be specified in the **SELECT** statement.
+This **SELECT** * query projects **all** the fields of an incoming event and sends them to the output. Instead, you can project only the required fields in a **SELECT** statement. In the following example, the **SELECT** statement projects only the *Make* and *Time* fields from the input data.
-**Input**:
+Consider the following **input**:
``` | Make | Time | Weight | | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |1000 |
-| Make1 |2015-01-01T00:00:02.0000000Z |2000 |
-| Make2 |2015-01-01T00:00:04.0000000Z |1500 |
+| Make1 |2023-01-01T00:00:01.0000000Z |1000 |
+| Make1 |2023-01-01T00:00:02.0000000Z |2000 |
+| Make2 |2023-01-01T00:00:04.0000000Z |1500 |
```
-**Output**:
+You want the **output** to have only the Make and Time fields:
``` | Make | Time | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |
-| Make1 |2015-01-01T00:00:02.0000000Z |
-| Make2 |2015-01-01T00:00:04.0000000Z |
+| Make1 |2023-01-01T00:00:01.0000000Z |
+| Make1 |2023-01-01T00:00:02.0000000Z |
+| Make2 |2023-01-01T00:00:04.0000000Z |
```
-**Query**:
+Here's the **query** that projects only the required fields:
```SQL SELECT
FROM Input
## String matching with LIKE and NOT LIKE
-**LIKE** and **NOT LIKE** can be used to verify if a field matches a certain pattern. For example, a filter can be created to return only the license plates that start with the letter 'A' and end with the number 9.
+**LIKE** and **NOT LIKE** can be used to verify if a field matches a certain pattern. For example, you can use a filter to return only the license plates that start with the letter `A` and end with the number `9`.
-**Input**:
+Consider the following **input**:
``` | Make | License_plate | Time | | | | |
-| Make1 |ABC-123 |2015-01-01T00:00:01.0000000Z |
-| Make2 |AAA-999 |2015-01-01T00:00:02.0000000Z |
-| Make3 |ABC-369 |2015-01-01T00:00:03.0000000Z |
+| Make1 |ABC-123 |2023-01-01T00:00:01.0000000Z |
+| Make2 |AAA-999 |2023-01-01T00:00:02.0000000Z |
+| Make3 |ABC-369 |2023-01-01T00:00:03.0000000Z |
```
-**Output**:
+You want the **output** to have the license plates that start with the letter `A` and end with the number `9`:
``` | Make | License_plate | Time | | | | |
-| Make2 |AAA-999 |2015-01-01T00:00:02.0000000Z |
-| Make3 |ABC-369 |2015-01-01T00:00:03.0000000Z |
+| Make2 |AAA-999 |2023-01-01T00:00:02.0000000Z |
+| Make3 |ABC-369 |2023-01-01T00:00:03.0000000Z |
```
-**Query**:
+Here's the **query** that uses the **LIKE** operator:
```SQL SELECT
WHERE
License_plate LIKE 'A%9' ```
-Use the **LIKE** statement to check the **License_plate** field value. It should start with the letter 'A', then have any string of zero or more characters, ending with the number 9.
+Use the **LIKE** statement to check the **License_plate** field value. It should start with the letter `A`, then have any string of zero or more characters, and end with the number `9`.
## Calculation over past events
-The **LAG** function can be used to look at past events within a time window and compare them against the current event. For example, the current car make can be outputted if it's different from the last car that went through the toll.
+The **LAG** function can be used to look at past events within a time window and compare them against the current event. For example, the make of the current car can be output if it's different from the make of the last car that passed through the toll booth.
-**Input**:
+Sample **input**:
``` | Make | Time | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |
-| Make2 |2015-01-01T00:00:02.0000000Z |
+| Make1 |2023-01-01T00:00:01.0000000Z |
+| Make2 |2023-01-01T00:00:02.0000000Z |
```
-**Output**:
+Sample **output**:
``` | Make | Time | | | |
-| Make2 |2015-01-01T00:00:02.0000000Z |
+| Make2 |2023-01-01T00:00:02.0000000Z |
```
-**Query**:
+Sample **query**:
```SQL SELECT
For more information, see [**LAG**](/stream-analytics-query/lag-azure-stream-ana
## Return the last event in a window
-As events are consumed by the system in real time, there's no function that can determine if an event will be the last one to arrive for that window of time. To achieve this, the input stream needs to be joined with another where the time of an event is the maximum time for all events at that window.
+As events are consumed by the system in real time, there's no function that can determine if an event is the last one to arrive for that time window. To achieve this, the input stream needs to be joined with another one where the time of an event is the maximum time for all events in that window.
-**Input**:
+Sample **input**:
``` | License_plate | Make | Time | | | | |
-| DXE 5291 |Make1 |2015-07-27T00:00:05.0000000Z |
-| YZK 5704 |Make3 |2015-07-27T00:02:17.0000000Z |
-| RMV 8282 |Make1 |2015-07-27T00:05:01.0000000Z |
-| YHN 6970 |Make2 |2015-07-27T00:06:00.0000000Z |
-| VFE 1616 |Make2 |2015-07-27T00:09:31.0000000Z |
-| QYF 9358 |Make1 |2015-07-27T00:12:02.0000000Z |
-| MDR 6128 |Make4 |2015-07-27T00:13:45.0000000Z |
+| DXE 5291 |Make1 |2023-07-27T00:00:05.0000000Z |
+| YZK 5704 |Make3 |2023-07-27T00:02:17.0000000Z |
+| RMV 8282 |Make1 |2023-07-27T00:05:01.0000000Z |
+| YHN 6970 |Make2 |2023-07-27T00:06:00.0000000Z |
+| VFE 1616 |Make2 |2023-07-27T00:09:31.0000000Z |
+| QYF 9358 |Make1 |2023-07-27T00:12:02.0000000Z |
+| MDR 6128 |Make4 |2023-07-27T00:13:45.0000000Z |
```
-**Output**:
+Sample **output** with information about the last car in each of two 10-minute time windows:
``` | License_plate | Make | Time | | | | |
-| VFE 1616 |Make2 |2015-07-27T00:09:31.0000000Z |
-| MDR 6128 |Make4 |2015-07-27T00:13:45.0000000Z |
+| VFE 1616 |Make2 |2023-07-27T00:09:31.0000000Z |
+| MDR 6128 |Make4 |2023-07-27T00:13:45.0000000Z |
```
-**Query**:
+Sample **query**:
```SQL WITH LastInWindow AS
For more information on joining streams, see [**JOIN**](/stream-analytics-query/
## Data aggregation over time
-To compute information over a time window, data can be aggregated together. In this example, a count is computed over the last 10 seconds of time for every specific car make.
+To compute information over a time window, you can aggregate the data. In this example, the statement computes a count over the last 10 seconds for each car make.
-**Input**:
+Sample **input**:
``` | Make | Time | Weight | | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |1000 |
-| Make1 |2015-01-01T00:00:02.0000000Z |2000 |
-| Make2 |2015-01-01T00:00:04.0000000Z |1500 |
+| Make1 |2023-01-01T00:00:01.0000000Z |1000 |
+| Make1 |2023-01-01T00:00:02.0000000Z |2000 |
+| Make2 |2023-01-01T00:00:04.0000000Z |1500 |
```
-**Output**:
+Sample **output**:
``` | Make | Count |
GROUP BY
TumblingWindow(second, 10) ```
-This aggregation groups the cars by *Make* and counts them every 10 seconds. The output has the *Make* and *Count* of cars that went through the toll.
+This aggregation groups the cars by *Make* and counts them every 10 seconds. The output has the *Make* and *Count* of cars that went through the toll booth.
TumblingWindow is a windowing function used to group events together. An aggregation can be applied over all grouped events. For more information, see [windowing functions](stream-analytics-window-functions.md).
For more information on aggregation, see [aggregate functions](/stream-analytics
## Periodically output values
-In case of irregular or missing events, a regular interval output can be generated from a more sparse data input. For example, generate an event every 5 seconds that reports the most recently seen data point.
+When events are missing or irregular, a regular-interval output can be generated from a sparser data input. For example, generate an event every 5 seconds that reports the most recently seen data point.
-**Input**:
+Sample **input**:
``` | Time | Value |
In case of irregular or missing events, a regular interval output can be generat
| "2014-01-01T06:01:30" |5 | | "2014-01-01T06:01:35" |6 | ```
-**Output (first 10 rows)**:
+Sample **output (first 10 rows)**:
``` | Window_end | Last_event.Time | Last_event.Value |
In case of irregular or missing events, a regular interval output can be generat
| 2014-01-01T14:01:45.000Z |2014-01-01T14:01:35.000Z |6 | ```
-**Query**:
+Sample **query**:
```SQL SELECT
For more information, see [Hopping window](/stream-analytics-query/hopping-windo
## Correlate events in a stream
-Correlating events in the same stream can be done by looking at past events using the **LAG** function. For example, an output can be generated every time two consecutive cars from the same *Make* go through the toll for the last 90 seconds.
+Correlating events in the same stream can be done by looking at past events using the **LAG** function. For example, an output can be generated every time two consecutive cars from the same *Make* pass through the toll booth within 90 seconds of each other.
-**Input**:
+Sample **input**:
``` | Make | License_plate | Time | | | | |
-| Make1 |ABC-123 |2015-01-01T00:00:01.0000000Z |
-| Make1 |AAA-999 |2015-01-01T00:00:02.0000000Z |
-| Make2 |DEF-987 |2015-01-01T00:00:03.0000000Z |
-| Make1 |GHI-345 |2015-01-01T00:00:04.0000000Z |
+| Make1 |ABC-123 |2023-01-01T00:00:01.0000000Z |
+| Make1 |AAA-999 |2023-01-01T00:00:02.0000000Z |
+| Make2 |DEF-987 |2023-01-01T00:00:03.0000000Z |
+| Make1 |GHI-345 |2023-01-01T00:00:04.0000000Z |
```
-**Output**:
+Sample **output**:
``` | Make | Time | Current_car_license_plate | First_car_license_plate | First_car_time | | | | | | |
-| Make1 |2015-01-01T00:00:02.0000000Z |AAA-999 |ABC-123 |2015-01-01T00:00:01.0000000Z |
+| Make1 |2023-01-01T00:00:02.0000000Z |AAA-999 |ABC-123 |2023-01-01T00:00:01.0000000Z |
```
-**Query**:
+Sample **query**:
```SQL SELECT
For more information, see [LAG](/stream-analytics-query/lag-azure-stream-analyti
The duration of an event can be computed by looking at the last Start event once an End event is received. This query can be useful to determine the time a user spends on a page or a feature.
-**Input**:
+Sample **input**:
``` | User | Feature | Event | Time | | | | | |
-| user@location.com |RightMenu |Start |2015-01-01T00:00:01.0000000Z |
-| user@location.com |RightMenu |End |2015-01-01T00:00:08.0000000Z |
+| user@location.com |RightMenu |Start |2023-01-01T00:00:01.0000000Z |
+| user@location.com |RightMenu |End |2023-01-01T00:00:08.0000000Z |
```
-**Output**:
+Sample **output**:
``` | User | Feature | Duration |
The duration of an event can be computed by looking at the last Start event once
| user@location.com |RightMenu |7 | ```
-**Query**:
+Sample **query**:
```SQL SELECT
WHERE
Event = 'end' ```
-The **LAST** function can be used to retrieve the last event within a specific condition. In this example, the condition is an event of type Start, partitioning the search by **PARTITION BY** user and feature. This way, every user and feature are treated independently when searching for the Start event. **LIMIT DURATION** limits the search back in time to 1 hour between the End and Start events.
+The **LAST** function can be used to retrieve the last event within a specific condition. In this example, the condition is an event of type Start, partitioning the search by **PARTITION BY** user and feature. This way, every user and feature is treated independently when searching for the Start event. **LIMIT DURATION** limits the search back in time to 1 hour between the End and Start events.
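Putting the pieces together, the query described above has roughly this shape (field names assumed from the sample input):

```SQL
-- For each 'end' event, compute the seconds elapsed since the last
-- matching 'start' event for the same user and feature (within 1 hour)
SELECT
    [user],
    feature,
    DATEDIFF(second,
        LAST(Time) OVER (PARTITION BY [user], feature
                         LIMIT DURATION(hour, 1)
                         WHEN Event = 'start'),
        Time) AS duration
INTO Output
FROM Input TIMESTAMP BY Time
WHERE Event = 'end'
```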
## Count unique values
-**COUNT** and **DISTINCT** can be used to count the number of unique field values that appear in the stream within a time window. A query can be created to calculate how many unique *Makes* of cars passed through the toll booth in a 2-second window.
+**COUNT** and **DISTINCT** can be used to count the number of unique field values that appear in the stream within a time window. You can create a query to calculate how many unique *Makes* of cars have passed through the toll booth in a 2-second window.
-**Input**:
+Sample **input**:
``` | Make | Time | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |
-| Make1 |2015-01-01T00:00:02.0000000Z |
-| Make2 |2015-01-01T00:00:01.0000000Z |
-| Make2 |2015-01-01T00:00:02.0000000Z |
-| Make2 |2015-01-01T00:00:03.0000000Z |
+| Make1 |2023-01-01T00:00:01.0000000Z |
+| Make1 |2023-01-01T00:00:02.0000000Z |
+| Make2 |2023-01-01T00:00:01.0000000Z |
+| Make2 |2023-01-01T00:00:02.0000000Z |
+| Make2 |2023-01-01T00:00:03.0000000Z |
```
-**Output:**
+Sample **output:**
``` | Count_make | Time | | | |
-| 2 |2015-01-01T00:00:02.000Z |
-| 1 |2015-01-01T00:00:04.000Z |
+| 2 |2023-01-01T00:00:02.000Z |
+| 1 |2023-01-01T00:00:04.000Z |
```
-**Query:**
+Sample **query:**
```SQL SELECT
GROUP BY
TumblingWindow(second, 2) ```
-**COUNT(DISTINCT Make)** returns the count of distinct values in the **Make** column within a time window.
-For more information, see [**COUNT** aggregate function](/stream-analytics-query/count-azure-stream-analytics).
+**COUNT(DISTINCT Make)** returns the count of distinct values in the **Make** column within a time window. For more information, see [**COUNT** aggregate function](/stream-analytics-query/count-azure-stream-analytics).
## Retrieve the first event in a window
-**IsFirst** can be used to retrieve the first event in a time window. For example, outputting the first car information at every 10-minute interval.
+You can use `IsFirst` to retrieve the first event in a time window. For example, you can output the first car's information in every 10-minute interval.
-**Input**:
+Sample **input**:
``` | License_plate | Make | Time | | | | |
-| DXE 5291 |Make1 |2015-07-27T00:00:05.0000000Z |
-| YZK 5704 |Make3 |2015-07-27T00:02:17.0000000Z |
-| RMV 8282 |Make1 |2015-07-27T00:05:01.0000000Z |
-| YHN 6970 |Make2 |2015-07-27T00:06:00.0000000Z |
-| VFE 1616 |Make2 |2015-07-27T00:09:31.0000000Z |
-| QYF 9358 |Make1 |2015-07-27T00:12:02.0000000Z |
-| MDR 6128 |Make4 |2015-07-27T00:13:45.0000000Z |
+| DXE 5291 |Make1 |2023-07-27T00:00:05.0000000Z |
+| YZK 5704 |Make3 |2023-07-27T00:02:17.0000000Z |
+| RMV 8282 |Make1 |2023-07-27T00:05:01.0000000Z |
+| YHN 6970 |Make2 |2023-07-27T00:06:00.0000000Z |
+| VFE 1616 |Make2 |2023-07-27T00:09:31.0000000Z |
+| QYF 9358 |Make1 |2023-07-27T00:12:02.0000000Z |
+| MDR 6128 |Make4 |2023-07-27T00:13:45.0000000Z |
```
-**Output**:
+Sample **output**:
``` | License_plate | Make | Time | | | | |
-| DXE 5291 |Make1 |2015-07-27T00:00:05.0000000Z |
-| QYF 9358 |Make1 |2015-07-27T00:12:02.0000000Z |
+| DXE 5291 |Make1 |2023-07-27T00:00:05.0000000Z |
+| QYF 9358 |Make1 |2023-07-27T00:12:02.0000000Z |
```
-**Query**:
+Sample **query**:
```SQL SELECT
WHERE
**IsFirst** can also partition the data and calculate the first event for each specific car *Make* found in every 10-minute interval.
-**Output**:
+Sample **output**:
``` | License_plate | Make | Time | | | | |
-| DXE 5291 |Make1 |2015-07-27T00:00:05.0000000Z |
-| YZK 5704 |Make3 |2015-07-27T00:02:17.0000000Z |
-| YHN 6970 |Make2 |2015-07-27T00:06:00.0000000Z |
-| QYF 9358 |Make1 |2015-07-27T00:12:02.0000000Z |
-| MDR 6128 |Make4 |2015-07-27T00:13:45.0000000Z |
+| DXE 5291 |Make1 |2023-07-27T00:00:05.0000000Z |
+| YZK 5704 |Make3 |2023-07-27T00:02:17.0000000Z |
+| YHN 6970 |Make2 |2023-07-27T00:06:00.0000000Z |
+| QYF 9358 |Make1 |2023-07-27T00:12:02.0000000Z |
+| MDR 6128 |Make4 |2023-07-27T00:13:45.0000000Z |
```
-**Query**:
+Sample **query**:
```SQL SELECT
For more information, see [**IsFirst**](/stream-analytics-query/isfirst-azure-st
## Remove duplicate events in a window
-When performing an operation such as calculating averages over events in a given time window, duplicate events should be filtered. In the following example, the second event is a duplicate of the first.
+When you perform an operation such as calculating averages over events in a given time window, duplicate events should be filtered. In the following example, the second event is a duplicate of the first.
-**Input**:
+Sample **input**:
``` | DeviceId | Time | Attribute | Value |
When performing an operation such as calculating averages over events in a given
| 1 |2018-07-27T00:00:10.0000000Z |Temperature |100 | ```
-**Output**:
+Sample **output**:
``` | AverageValue | DeviceId |
When performing an operation such as calculating averages over events in a given
|45 | 2 | ```
-**Query**:
+Sample **query**:
```SQL WITH Temp AS (
When the first statement executes, the duplicate records are combined into one a
## Specify logic for different cases/values (CASE statements)
-**CASE** statements can provide different computations for different fields, based on particular criterion. For example, assign lane 'A' to cars of `Make1` and lane `B` to any other make.
+**CASE** statements can provide different computations for different fields, based on particular criteria. For example, assign lane `A` to cars of `Make1` and lane `B` to cars of any other make.
-**Input**:
+Sample **input**:
``` | Make | Time | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |
-| Make2 |2015-01-01T00:00:02.0000000Z |
-| Make2 |2015-01-01T00:00:03.0000000Z |
+| Make1 |2023-01-01T00:00:01.0000000Z |
+| Make2 |2023-01-01T00:00:02.0000000Z |
+| Make2 |2023-01-01T00:00:03.0000000Z |
```
-**Output**:
+Sample **output**:
``` | Make |Dispatch_to_lane | Time | | | | |
-| Make1 |"A" |2015-01-01T00:00:01.0000000Z |
-| Make2 |"B" |2015-01-01T00:00:02.0000000Z |
+| Make1 |"A" |2023-01-01T00:00:01.0000000Z |
+| Make2 |"B" |2023-01-01T00:00:02.0000000Z |
```
-**Solution**:
+Sample **query**:
```SQL SELECT
For more information, see [case expression](/stream-analytics-query/case-azure-s
## Data conversion
-Data can be cast in real time using the **CAST** method. For example, car weight can be converted from type **nvarchar(max)** to type **bigint** and be used on a numeric calculation.
+Data can be cast in real time using the **CAST** method. For example, car weight can be converted from type **nvarchar(max)** to type **bigint** and be used in a numeric calculation.
-**Input**:
+Sample **input**:
``` | Make | Time | Weight | | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |"1000" |
-| Make1 |2015-01-01T00:00:02.0000000Z |"2000" |
+| Make1 |2023-01-01T00:00:01.0000000Z |"1000" |
+| Make1 |2023-01-01T00:00:02.0000000Z |"2000" |
```
-**Output**:
+Sample **output**:
``` | Make | Weight |
Data can be cast in real time using the **CAST** method. For example, car weight
| Make1 |3000 | ```
-**Query**:
+Sample **query**:
```SQL SELECT
For more information on [data conversion functions](/stream-analytics-query/conv
For conditions that span multiple events, the **LAG** function can be used to identify the duration of that condition. For example, suppose that a bug resulted in all cars having an incorrect weight (above 20,000 pounds), and the duration of that bug must be computed.
-**Input**:
+Sample **input**:
``` | Make | Time | Weight | | | | |
-| Make1 |2015-01-01T00:00:01.0000000Z |2000 |
-| Make2 |2015-01-01T00:00:02.0000000Z |25000 |
-| Make1 |2015-01-01T00:00:03.0000000Z |26000 |
-| Make2 |2015-01-01T00:00:04.0000000Z |25000 |
-| Make1 |2015-01-01T00:00:05.0000000Z |26000 |
-| Make2 |2015-01-01T00:00:06.0000000Z |25000 |
-| Make1 |2015-01-01T00:00:07.0000000Z |26000 |
-| Make2 |2015-01-01T00:00:08.0000000Z |2000 |
+| Make1 |2023-01-01T00:00:01.0000000Z |2000 |
+| Make2 |2023-01-01T00:00:02.0000000Z |25000 |
+| Make1 |2023-01-01T00:00:03.0000000Z |26000 |
+| Make2 |2023-01-01T00:00:04.0000000Z |25000 |
+| Make1 |2023-01-01T00:00:05.0000000Z |26000 |
+| Make2 |2023-01-01T00:00:06.0000000Z |25000 |
+| Make1 |2023-01-01T00:00:07.0000000Z |26000 |
+| Make2 |2023-01-01T00:00:08.0000000Z |2000 |
```
-**Output**:
+Sample **output**:
``` | Start_fault | End_fault | | | |
-| 2015-01-01T00:00:02.000Z |2015-01-01T00:00:07.000Z |
+| 2023-01-01T00:00:02.000Z |2023-01-01T00:00:07.000Z |
```
-**Query**:
+Sample **query**:
```SQL WITH SelectPreviousEvent AS
WHERE
``` The first **SELECT** statement correlates the current weight measurement with the previous measurement, projecting it together with the current measurement. The second **SELECT** looks back to the last event where the *previous_weight* is less than 20000, where the current weight is smaller than 20000 and the *previous_weight* of the current event was bigger than 20000.
-The End_fault is the current non-faulty event where the previous event was faulty, and the Start_fault is the last non-faulty event before that.
+The End_fault is the current nonfaulty event where the previous event was faulty, and the Start_fault is the last nonfaulty event before that.
## Process events with independent time (Substreams) Events can arrive late or out of order due to clock skews between event producers, clock skews between partitions, or network latency. For example, the device clock for *TollID* 2 is five seconds behind *TollID* 1, and the device clock for *TollID* 3 is 10 seconds behind *TollID* 1. A computation can happen independently for each toll, considering only its own clock data as a timestamp.
-**Input**:
+Sample **input**:
``` | LicensePlate | Make | Time | TollID | | | | | |
-| DXE 5291 |Make1 |2015-07-27T00:00:01.0000000Z | 1 |
-| YHN 6970 |Make2 |2015-07-27T00:00:05.0000000Z | 1 |
-| QYF 9358 |Make1 |2015-07-27T00:00:01.0000000Z | 2 |
-| GXF 9462 |Make3 |2015-07-27T00:00:04.0000000Z | 2 |
-| VFE 1616 |Make2 |2015-07-27T00:00:10.0000000Z | 1 |
-| RMV 8282 |Make1 |2015-07-27T00:00:03.0000000Z | 3 |
-| MDR 6128 |Make3 |2015-07-27T00:00:11.0000000Z | 2 |
-| YZK 5704 |Make4 |2015-07-27T00:00:07.0000000Z | 3 |
+| DXE 5291 |Make1 |2023-07-27T00:00:01.0000000Z | 1 |
+| YHN 6970 |Make2 |2023-07-27T00:00:05.0000000Z | 1 |
+| QYF 9358 |Make1 |2023-07-27T00:00:01.0000000Z | 2 |
+| GXF 9462 |Make3 |2023-07-27T00:00:04.0000000Z | 2 |
+| VFE 1616 |Make2 |2023-07-27T00:00:10.0000000Z | 1 |
+| RMV 8282 |Make1 |2023-07-27T00:00:03.0000000Z | 3 |
+| MDR 6128 |Make3 |2023-07-27T00:00:11.0000000Z | 2 |
+| YZK 5704 |Make4 |2023-07-27T00:00:07.0000000Z | 3 |
```
-**Output**:
+Sample **output**:
``` | TollID | Count |
For example, the device clock for *TollID* 2 is five seconds behind *TollID* 1,
| 3 | 1 | ```
-**Query**:
+Sample **query**:
```SQL SELECT
For more information, see [TIMESTAMP BY OVER](/stream-analytics-query/timestamp-
## Session Windows
-A Session Window is a window that keeps expanding as events occur and closes for computation if no event is received after a specific amount of time or if the window reaches its maximum duration.
-This window is particularly useful when computing user interaction data. A window starts when a user starts interacting with the system and closes when no more events are observed, meaning, the user has stopped interacting.
-For example, a user is interacting with a web page where the number of clicks is logged, a Session Window can be used to find out how long the user interacted with the site.
+A session window is a window that keeps expanding as events occur and closes for computation if no event is received after a specific amount of time or if the window reaches its maximum duration. This window is particularly useful when computing user interaction data. A window starts when a user starts interacting with the system and closes when no more events are observed, meaning the user has stopped interacting. For example, if a user is interacting with a web page where the number of clicks is logged, a session window can be used to find out how long the user interacted with the site.
-**Input**:
+Sample **input**:
``` | User_id | Time | URL |
For example, a user is interacting with a web page where the number of clicks is
| 1 | 2017-01-26T00:01:15.0000000Z | "www.example.com/e.html" | ```
-**Output**:
+Sample **output**:
``` | User_id | StartTime | EndTime | Duration_in_seconds |
For example, a user is interacting with a web page where the number of clicks is
| 1 | 2017-01-26T00:00:55.0000000Z | 2017-01-26T00:01:15.0000000Z | 20 | ```
-**Query**:
+Sample **query**:
``` SQL SELECT
The **SELECT** projects the data relevant to the user interaction, together with
For more information on SessionWindow, see [Session Window](/stream-analytics-query/session-window-azure-stream-analytics).
-## Language extensibility with User Defined Function in JavaScript and C#
+## User-defined functions in JavaScript and C#
Azure Stream Analytics query language can be extended with custom functions written in either JavaScript or C#. User-defined functions (UDFs) are custom/complex computations that can't be easily expressed using the **SQL** language. These UDFs can be defined once and used multiple times within a query. For example, a UDF can be used to convert a hexadecimal *nvarchar(max)* value to a *bigint* value.
-**Input**:
+Sample **input**:
``` | Device_id | HexValue |
Azure Stream Analytics query language can be extended with custom functions writ
| 3 | "121" | ```
-**Output**:
+Sample **output**:
``` | Device_id | Decimal |
From
Input ```
-The User-Defined Function will compute the *bigint* value from the HexValue on every event consumed.
+The User-Defined Function computes the *bigint* value from the HexValue on every event consumed.
For more information, see [JavaScript](./stream-analytics-javascript-user-defined-functions.md) and [C#](./stream-analytics-edge-csharp-udf.md).
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
Previously updated : 10/08/2021 Last updated : 01/25/2024
To automate the deployment of an Azure Synapse workspace to multiple environment
- Set up a blank workspace to deploy to: 1. Create a new Azure Synapse workspace.
- 1. Grant the VM agent and the service principal Contributor permission to the resource group in which the new workspace is hosted.
- 1. In the workspace, don't configure the Git repository connection.
- 1. In the Azure portal, find the new Azure Synapse workspace, and then grant Owner permission to yourself and to the user that will run the Azure DevOps pipeline Azure Synapse workspace.
- 1. Add the Azure DevOps VM agent and the service principal to the Contributor role for the workspace. (The role should have been inherited, but verify that it is.)
- 1. In the Azure Synapse workspace, go to **Studio** > **Manage** > **Access Control**. Add the Azure DevOps VM agent and the service principal to the workspace admin group.
- 1. Open the storage account that's used for the workspace. On the **Identity and access management** pane, add the VM agent and the service principal to the Storage Blob Data Contributor role.
- 1. Create a key vault in the support subscription, and ensure that both the existing workspace and the new workspace have at least GET and LIST permissions to the vault.
- 1. For the automated deployment to work, ensure that any connection strings that are specified in your linked services are in the key vault.
+ 2. Grant the service principal the following permissions to the new Synapse workspace:
+ - Microsoft.Synapse/workspaces/integrationruntimes/write
+ - Microsoft.Synapse/workspaces/operationResults/read
+ - Microsoft.Synapse/workspaces/read
+ 3. In the workspace, don't configure the Git repository connection.
+ 4. In the Azure Synapse workspace, go to **Studio** > **Manage** > **Access Control**. Assign the "Synapse Artifact Publisher" role to the service principal. If the deployment pipeline needs to deploy managed private endpoints, assign the "Synapse Administrator" role instead.
+ 5. When you use linked services whose connection information is stored in Azure Key Vault, it's recommended to keep separate key vaults for different environments. You can also configure separate permission levels for each key vault. For example, you might not want your team members to have permissions to production secrets. If you follow this approach, we recommend that you keep the same secret names across all stages. If you keep the same secret names, you don't need to parameterize each connection string across CI/CD environments because the only thing that changes is the key vault name, which is a separate parameter.
### Other prerequisites
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
Title: Introduction to Microsoft Spark utilities description: "Tutorial: MSSparkutils in Azure Synapse Analytics notebooks"-+ Last updated 09/10/2020-+ zone_pivot_groups: programming-languages-spark-all-minus-sql
mssparkutils.fs.cp('source file or directory', 'destination file or directory',
``` ::: zone-end
+### Performant copy file
+
+This method provides a faster way of copying or moving files, especially large volumes of data.
+
+```python
+mssparkutils.fs.fastcp('source file or directory', 'destination file or directory', True) # Set the third parameter as True to copy all files and directories recursively
+```
+
+> [!NOTE]
+> The method is only supported in Spark 3.3 and Spark 3.4.
+ ### Preview file content Returns up to the first 'maxBytes' bytes of the given file as a String encoded in UTF-8.
After the run finishes, you'll see a snapshot link named '**View notebook run:
![Screenshot of a snap link python](./media/microsoft-spark-utilities/spark-utilities-run-notebook-snap-link-sample-python.png)
+### Reference run multiple notebooks in parallel
+
+The method `mssparkutils.notebook.runMultiple()` allows you to run multiple notebooks in parallel or with a predefined topological structure. The API uses a multi-thread implementation within a Spark session, which means the compute resources are shared by the reference notebook runs.
+
+With `mssparkutils.notebook.runMultiple()`, you can:
+
+- Execute multiple notebooks simultaneously, without waiting for each one to finish.
+
+- Specify the dependencies and order of execution for your notebooks, using a simple JSON format.
+
+- Optimize the use of Spark compute resources and reduce the cost of your Synapse projects.
+
+- View the snapshots of each notebook run record in the output, and debug or monitor your notebook tasks conveniently.
+
+- Get the exit value of each executed activity and use it in downstream tasks.
+
+You can also run `mssparkutils.notebook.help("runMultiple")` to find examples and detailed usage.
+
+Here's a simple example of running a list of notebooks in parallel using this method:
+
+```python
+
+mssparkutils.notebook.runMultiple(["NotebookSimple", "NotebookSimple2"])
+
+```
+
+The execution result from the root notebook is as follows:
++
+The following is an example of running notebooks with topological structure using `mssparkutils.notebook.runMultiple()`. Use this method to easily orchestrate notebooks through a code experience.
+
+```python
+# run multiple notebooks with parameters
+DAG = {
+ "activities": [
+ {
+ "name": "NotebookSimple", # activity name, must be unique
+ "path": "NotebookSimple", # notebook path
+ "timeoutPerCellInSeconds": 90, # max timeout for each cell, default to 90 seconds
+ "args": {"p1": "changed value", "p2": 100}, # notebook parameters
+ },
+ {
+ "name": "NotebookSimple2",
+ "path": "NotebookSimple2",
+ "timeoutPerCellInSeconds": 120,
+ "args": {"p1": "changed value 2", "p2": 200}
+ },
+ {
+ "name": "NotebookSimple2.2",
+ "path": "NotebookSimple2",
+ "timeoutPerCellInSeconds": 120,
+ "args": {"p1": "changed value 3", "p2": 300},
+ "retry": 1,
+ "retryIntervalInSeconds": 10,
+ "dependencies": ["NotebookSimple"] # list of activity names that this activity depends on
+ }
+ ]
+}
+mssparkutils.notebook.runMultiple(DAG)
+
+```
+
+> [!NOTE]
+>
+> - This method is only supported in Spark 3.3 and Spark 3.4.
+> - The parallelism degree of the multiple notebook run is restricted to the total available compute resource of a Spark session.
+ ### Exit a notebook Exits a notebook with a value. You can run nested function calls in a notebook interactively or in a pipeline.
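A minimal sketch of the round trip (the notebook name and exit value are hypothetical; `run` returns the child notebook's exit value as a string):

```python
# In the child notebook ("NotebookSimple"): stop execution at this cell
# and return a value to the caller.
mssparkutils.notebook.exit("my return value")

# In the calling notebook: run the child with a 90-second timeout and
# capture whatever it passed to exit().
result = mssparkutils.notebook.run("NotebookSimple", 90)
print(result)  # my return value
```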
virtual-desktop App Attach Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-overview.md
Any disaster recovery plans for Azure Virtual Desktop must include replicating t
### Azure Files
-Azure Files has limits on the number of open handles per root directory, directory, and file. When using MSIX app attach or app attach, VHDX or CimFS disk images are mounted using the computer account of the session host, meaning one handle is opened per session host per disk image, rather than per user. For more information on the limits, see [Azure Files scalability and performance targets](../storage/files/storage-files-scale-targets.md#file-scale-targets).
+Azure Files has limits on the number of open handles per root directory, directory, and file. When using MSIX app attach or app attach, VHDX or CimFS disk images are mounted using the computer account of the session host, meaning one handle is opened per session host per disk image, rather than per user. For more information on the limits and sizing guidance, see [Azure Files scalability and performance targets](../storage/files/storage-files-scale-targets.md#file-scale-targets) and [Azure Files sizing guidance for Azure Virtual Desktop](../storage/files/storage-files-scale-targets.md#azure-files-sizing-guidance-for-azure-virtual-desktop).
## Next steps
virtual-desktop App Attach Test Msix Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-test-msix-packages.md
description: Learn how to mount disk images for testing and troubleshooting outs
Previously updated : 11/27/2023 Last updated : 01/25/2024 # Test MSIX packages for app attach
Staging and destaging are machine-level operations, while registering and deregi
## Prepare to stage an MSIX package
-The staging script prepares your machine to receive the MSIX package and mounts the relevant package to your machine. You only need to run the following commands once per machine.
-
-However, if you're using an image in CimFS format, or a version of PowerShell greater than 5.1, the instructions are different. Later versions of PowerShell are multi-platform, which means the Windows application parts are split off into their own package called [Windows Runtime](/windows/uwp/winrt-components/). You need to use a variation of the commands to install a package with a multi-platform version of PowerShell.
+The staging script prepares your machine to receive the MSIX package and mounts the relevant package to your machine.
Select the relevant tab for the version of PowerShell you're using. # [PowerShell 6 and later](#tab/posh6)
-To stage packages at boot using PowerShell 6 or later, you need to run the following commands before the staging operations to bring the capabilities of the Windows Runtime package to PowerShell.
+To stage packages using PowerShell 6 or later, you need to run the following commands before the staging operations to bring the capabilities of the Windows Runtime package to PowerShell.
1. Open a PowerShell prompt as an administrator.
-1. Run the following command to download and install the Windows Runtime Package:
+1. Run the following command to download and install the Windows Runtime Package. You only need to run these commands once per machine.
```powershell #Required for PowerShell 6 and later
To stage packages at boot using PowerShell 6 or later, you need to run the follo
#Required for PowerShell 6 and later $nuGetPackageName = 'Microsoft.Windows.SDK.NET.Ref' $winRT = Get-Package $nuGetPackageName
- $dllWinRT = Get-Childitem (Split-Path -Parent $winRT.Source) -Recurse -File WinRT.Runtime.dll
- $dllSdkNet = Get-Childitem (Split-Path -Parent $winRT.Source) -Recurse -File Microsoft.Windows.SDK.NET.dll
+ $dllWinRT = Get-ChildItem (Split-Path -Parent $winRT.Source) -Recurse -File WinRT.Runtime.dll
+ $dllSdkNet = Get-ChildItem (Split-Path -Parent $winRT.Source) -Recurse -File Microsoft.Windows.SDK.NET.dll
Add-Type -AssemblyName $dllWinRT.FullName Add-Type -AssemblyName $dllSdkNet.FullName ``` # [PowerShell 5.1 and earlier](#tab/posh5)
-To stage packages at boot with PowerShell version 5.1 or earlier, you need to run the following command before the staging operations to bring the capabilities of the Windows Runtime package to PowerShell.
+To stage packages with PowerShell version 5.1 or earlier, you need to run the following command before the staging operations to bring the capabilities of the Windows Runtime package to PowerShell.
1. Open a PowerShell prompt as an administrator.
To mount a CimFS disk image:
1. In the same PowerShell session, run the following command: ```powershell
- $diskImage = "<UNC path to the Disk Image>"
+ $diskImage = "<Local or UNC path to the disk image>"
- $mount = Mount-CimDiskimage -ImagePath $diskImage -PassThru -NoMountPath
+ $mount = Mount-CimDiskImage -ImagePath $diskImage -PassThru -NoMountPath
#We can now get the Device Id for the mounted volume, this will be useful for the destage step. $deviceId = $mount.DeviceId
To mount a VHDX or VHD disk image:
1. In the same PowerShell session, run the following command: ```powershell
- $diskImage = "<UNC path to the Disk Image>"
+ $diskImage = "<Local or UNC path to the disk image>"
- $mount = Mount-Diskimage -ImagePath $diskImage -PassThru -NoDriveLetter -Access ReadOnly
+ $mount = Mount-DiskImage -ImagePath $diskImage -PassThru -NoDriveLetter -Access ReadOnly
#We can now get the Device Id for the mounted volume, this will be useful for the destage step. $partition = Get-Partition -DiskNumber $mount.Number
Finally, you need to run the following commands for all image formats to complet
1. In the same PowerShell session, retrieve the application information by running the following commands: ```powershell
- $manifest = Get-Childitem -LiteralPath $deviceId -Recurse -File AppxManifest.xml
+ $manifest = Get-ChildItem -LiteralPath $deviceId -Recurse -File AppxManifest.xml
$manifestFolder = $manifest.DirectoryName ```
Finally, you need to run the following commands for all image formats to complet
$packageManager = New-Object -TypeName Windows.Management.Deployment.PackageManager $asyncOperation = $packageManager.StagePackageAsync($folderAbsoluteUri, $null, "StageInPlace")
- $stagingResult = $asTaskAsyncOperation.Invoke($null, @($asyncOperation))
```
-1. Check the `$stagingResult` variable to monitor the staging progress for the application package by running the following command:
+1. Monitor the staging progress for the application package by running the following commands. The time it takes to stage the package depends on its size. The `Status` property of the `$stagingResult` variable will be `RanToCompletion` when the staging is complete.
```powershell
+ $stagingResult = $asTaskAsyncOperation.Invoke($null, @($asyncOperation))
+
+ while ($stagingResult.Status -eq "WaitingForActivation") {
+ Write-Output "Waiting for activation..."
+ Start-Sleep -Seconds 5
+ }
+ Write-Output $stagingResult ```
$manifestPath = Join-Path (Join-Path $Env:ProgramFiles 'WindowsApps') (Join-Path
Add-AppxPackage -Path $manifestPath -DisableDevelopmentMode -Register ```
-Now that your MSIX package is registered, your application should be available for use in your session. You can now open the application for testing and troubleshooting. Once you're finished, you can deregister and destage your MSIX package.
+Now that your MSIX package is registered, your application should be available for use in your session. You can now open the application for testing and troubleshooting. Once you're finished, you need to deregister and destage your MSIX package.
## Deregister an MSIX package
-Once you're finished with your MSIX package and are ready to remove it, you need to deregister it. To deregister an MSIX package, run the following command in the same PowerShell session. This command uses the `$msixPackageFullName` variable created in a previous section.
+Once you're finished with your MSIX package and are ready to remove it, first you need to deregister it. To deregister the MSIX package, run the following commands in the same PowerShell session. These commands get the disk's `DeviceId` parameter again, and remove the package using the `$msixPackageFullName` variable created in a previous section.
```powershell
+$appPath = Join-Path (Join-Path $Env:ProgramFiles 'WindowsApps') $msixPackageFullName
+$folderInfo = Get-Item $appPath
+$deviceId = '\\?\' + $folderInfo.Target.Split('\')[0] +'\'
+Write-Output $deviceId #Save this for later
+ Remove-AppxPackage $msixPackageFullName -PreserveRoamableApplicationData ``` ## Destage an MSIX package
-To destage an MSIX package, run the following commands in the same PowerShell session to get the disk's `DeviceId` parameter. This command uses the `$msixPackageFullName` variable created in a previous section.
+Finally, to destage the MSIX package, you need to dismount your disk image. First, run the following command in the same PowerShell session to ensure that the package isn't still registered for any user. This command uses the `$msixPackageFullName` variable created in a previous section.
- ```powershell
-$appPath = Join-Path (Join-Path $Env:ProgramFiles 'WindowsApps') $msixPackageFullName
-$folderInfo = Get-Item $appPath
-$deviceId = '\\?\' + $folderInfo.LinkTarget.Split('\')[0] +'\'
-Write-Output $deviceId #Save this for later
-
-Remove-AppxPackage -AllUsers -Package $msixPackageFullName
-Remove-AppxPackage -Package $msixPackageFullName
+```powershell
+Remove-AppxPackage -AllUsers -Package $msixPackageFullName -ErrorAction SilentlyContinue
```
-### Dismount the disks from the system
+### Dismount the disk image
To finish the destaging process, you need to dismount the disks from the system. The command you need to use depends on the format of your disk image. Select the relevant tab for the format you're using.
To finish the destaging process, you need to dismount the disks from the system.
To dismount a CimFS disk image, run the following commands in the same PowerShell session: ```powershell
-DisMount-CimDiskimage -DeviceId $deviceId
+Dismount-CimDiskImage -DeviceId $deviceId
``` ### [VHDX or VHD](#tab/vhdx)
DisMount-CimDiskimage -DeviceId $deviceId
To dismount a VHDX or VHD disk image, run the following command in the same PowerShell session: ```powershell
-DisMount-DiskImage -DevicePath $deviceId.TrimEnd('\')
+Dismount-DiskImage -DevicePath $deviceId.TrimEnd('\')
```
Once you finished dismounting your disks, you've safely removed your MSIX packag
## Set up simulation scripts for the MSIX app attach agent
-If you want to add and remove MSIX packages to your device automatically, you can use the PowerShell commands in this article to create scripts that run at startup, logon, logoff, and shutdown. To learn more, see [Using startup, shutdown, logon, and logoff scripts in Group Policy](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn789196(v=ws.11)/).
+If you want to add and remove MSIX packages on your device automatically, you can use the PowerShell commands in this article to create scripts that run at startup, logon, logoff, and shutdown. To learn more, see [Using startup, shutdown, logon, and logoff scripts in Group Policy](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn789196(v=ws.11)/). You need to make sure that any variables required for each phase are available in each script.
You create a script for each phase:
Here's how to set up a license for offline use:
} catch [Exception] {
- Write-Host $_ | out-string
+ Write-Host $_ | Out-String
} ```
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
Azure Virtual Desktop for Azure Stack HCI has the following limitations:
- [Autoscale](autoscale-scaling-plan.md) - [Session host scaling with Azure Automation](set-up-scaling-script.md) - [Start VM On Connect](start-virtual-machine-connect.md)
- - [Multimedia redirection](multimedia-redirection.md)
- [Per-user access pricing](./remote-app-streaming/licensing.md) - Each host pool must only contain session hosts on Azure or on Azure Stack HCI. You can't mix session hosts on Azure and on Azure Stack HCI in the same host pool.
virtual-desktop Fslogix Containers Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-containers-azure-files.md
To ensure your Azure Virtual Desktop environment follows best practices:
- For optimal performance, the storage solution and the FSLogix profile container should be in the same data center location. - The storage account containing the master image must be in the same region and subscription where the VMs are being provisioned.
+Azure Files has limits on the number of open handles per root directory, directory, and file. For more information on the limits and sizing guidance, see [Azure Files scalability and performance targets](../storage/files/storage-files-scale-targets.md#file-scale-targets) and [Azure Files sizing guidance for Azure Virtual Desktop](../storage/files/storage-files-scale-targets.md#azure-files-sizing-guidance-for-azure-virtual-desktop).
+ ## Next steps - Learn more about storage options for FSLogix profile containers, see [Storage options for FSLogix profile containers in Azure Virtual Desktop](store-fslogix-profile.md).
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 12/14/2023 Last updated : 01/25/2024
A rollout may take several weeks before the agent is available in all environmen
| Release | Latest version | |--|--| | Production | 1.0.7909.2600 |
-| Validation | 1.0.7909.1200 |
+| Validation | 1.0.8297.400 |
> [!TIP] > The Azure Virtual Desktop Agent is automatically installed when adding session hosts in most scenarios. If you need to install the agent manually, you can download it at [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool), together with the steps to install it.
+## Version 1.0.8297.400 (validation)
+
+*Published: January 2024*
+
+In this update, we've made the following changes:
+
+- General improvements and bug fixes.
+ ## Version 1.0.7909.2600 *Published: December 2023*
In this update, we've made the following changes:
- General improvements and bug fixes.
-## Version 1.0.7909.1200 (validation)
+## Version 1.0.7909.1200
*Published: November 2023*
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 01/10/2024 Last updated : 01/24/2024 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download | ||-|-| | Public | 1.2.5105 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
-| Insider | 1.2.5105 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+| Insider | 1.2.5126 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+
+## Updates for version 1.2.5126 (Insider)
+
+*Published: January 24, 2024*
+
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+
+In this release, we've made the following changes:
+
+- Fixed the regression that caused a display issue when a user selects monitors for their session.
+- Made the following accessibility improvements:
+ - Improved screen reader experience.
+ - Greater contrast for background color of the connection bar remote commands drop-down menu.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
## Updates for version 1.2.5105
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Title: Deploy a Premium SSD v2 managed disk
description: Learn how to deploy a Premium SSD v2 and about its regional availability. Previously updated : 01/23/2024 Last updated : 01/25/2024
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 01/10/2024 Last updated : 01/25/2024
The following table provides a comparison of disk sizes and performance caps to
### Ultra disk performance
- Ultra disks are designed to provide low sub millisecond latencies and provisioned IOPS and throughput 99.99% of the time. Ultra disks also feature a flexible performance configuration model that allows you to independently configure IOPS and throughput, before and after you provision the disk. Ultra disks come in several fixed sizes, ranging from 4 GiB up to 64 TiB.
+Ultra disks are designed to provide low sub millisecond latencies and provisioned IOPS and throughput 99.99% of the time. Ultra disks also feature a flexible performance configuration model that allows you to independently configure IOPS and throughput, before and after you provision the disk. Ultra disks come in several fixed sizes, ranging from 4 GiB up to 64 TiB.
### Ultra disk IOPS
Premium SSD v2 disks are designed to provide sub millisecond latencies and provi
Premium SSD v2 capacities range from 1 GiB to 64 TiBs, in 1-GiB increments. You're billed on a per GiB ratio, see the [pricing page](https://azure.microsoft.com/pricing/details/managed-disks/) for details.
-Premium SSD v2 offers up to 32 TiBs per region per subscription by default, but supports higher capacity by request. To request an increase in capacity, request a quota increase or contact Azure Support.
+Premium SSD v2 offers up to 100 TiBs per region per subscription by default, but supports higher capacity by request. To request an increase in capacity, request a quota increase or contact Azure Support.
#### Premium SSD v2 IOPS
Premium SSD v2 supports a 4k physical sector size by default. A 512E sector size
#### Summary
-The following table provides a comparison of disk capacities and performance maximums to help you decide which to use.
+The following table provides an overview of disk capacities and performance maximums to help you decide which to use.
-|Disk Size |Maximum available IOPS |Maximum available throughput (MB/s) |
-||||
-|1 GiB-64 TiBs |3,000-80,000 (Increases by 500 IOPS per GiB) |125-1,200 (increases by 0.25 MB/s per set IOPS) |
To deploy a Premium SSD v2, see [Deploy a Premium SSD v2](disks-deploy-premium-v2.md).
virtual-machines Enable Nvme Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md
NVMe can be enabled during VM creation using various methods such as: Azure port
                    }                 }             ],
-            "diskControllerType": "NVME"
+            "diskControllerTypes": "NVME"
        },         "securityProfile": {             "encryptionAtHost": "[parameters('encryptionAtHost')]"
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md
By using standardized virtual machine (VM) images, your organization can more easily migrate to the cloud and help ensure consistency in your deployments. Images ordinarily include predefined security, configuration settings, and any necessary software. Setting up your own imaging pipeline requires time, infrastructure, and many other details. With Azure VM Image Builder, you need only create a configuration that describes your image and submit it to the service, where the image is built and then distributed.
-With VM Image Builder, you can migrate your existing image customization pipeline to Azure as you continue to use existing scripts, commands, and processes. You can integrate your core applications into a VM image, so that your VMs can take on workloads after the images are created. You can even add configurations to build images for Azure Virtual Desktop, as virtual hard discs (VHDs) for use in Azure Stack, or for ease of exporting.
+With VM Image Builder, you can migrate your existing image customization pipeline to Azure as you continue to use existing scripts, commands, and processes. You can integrate your core applications into a VM image, so that your VMs can take on workloads after the images are created. You can even add configurations to build images for Azure Virtual Desktop, as virtual hard disks (VHDs) for use in Azure Stack, or for ease of exporting.
VM Image Builder lets you start with Windows or Linux images either from Azure Marketplace or as existing custom images, and then add your own customizations. You can also specify where you want your resulting images to be hosted in [Azure Compute Gallery](shared-image-galleries.md) (formerly Shared Image Gallery), as managed images or as VHDs.
virtual-network Public Ip Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md
We recommend the following approach to upgrade to Standard SKU public IP address
| | | | Virtual Machine or Virtual Machine Scale Sets (flex model) | Disassociate IP(s) and utilize the upgrade options detailed after the table. For virtual machines, you can use the [upgrade script](public-ip-upgrade-vm.md). | | Load Balancer (Basic SKU) | New LB SKU required. Use the upgrade script [Upgrade Basic Load Balancer to Standard SKU](../../load-balancer/upgrade-basic-standard-with-powershell.md) to upgrade to Standard Load Balancer |
- | VPN Gateway (Basic SKU or VpnGw1-5 SKU using Basic IPs) | No action required for existing VPN gateways that use Basic SKU public IP addresses. For new VPN gateways, we recommend that you use Standard SKU public IP addresses.|
-| ExpressRoute Gateway (using Basic IPs) | New ExpressRoute Gateway required. Create a [new ExpressRoute Gateway with a Standard SKU IP](../../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md). |
+ | VPN Gateway (Basic SKU or VpnGw1-5 SKU using Basic IPs) | New VPN Gateway SKU required. Create a [new VPN Gateway with a Standard SKU IP](../../vpn-gateway/tutorial-create-gateway-portal.md). |
+| ExpressRoute Gateway (using Basic IPs) | New ExpressRoute Gateway required. Create a [new ExpressRoute Gateway with a Standard SKU IP](../../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md). For non-production workloads, use this [migration script (Preview)](../../expressroute/gateway-migration.md). |
| Application Gateway (v1 SKU) | New AppGW SKU required. Use this [migration script to migrate from v1 to v2](../../application-gateway/migrate-v1-v2.md). | > [!NOTE]
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureDevOps** | Azure DevOps. | Inbound | Yes | Yes | | **AzureDigitalTwins** | Azure Digital Twins.<br/><br/>**Note**: This tag or the IP addresses covered by this tag can be used to restrict access to endpoints configured for event routes. | Inbound | No | Yes | | **AzureEventGrid** | Azure Event Grid. | Both | No | Yes |
-| **AzureFrontDoor.Frontend** <br/> **AzureFrontDoor.Backend** <br/> **AzureFrontDoor.FirstParty** | Azure Front Door. | Both | Yes | Yes |
+| **AzureFrontDoor.Frontend** <br/> **AzureFrontDoor.Backend** <br/> **AzureFrontDoor.FirstParty** | The *Frontend* service tag contains the IP addresses that clients use to reach Front Door. You can apply the **AzureFrontDoor.Frontend** service tag when you want to control the outbound traffic that can connect to services behind Azure Front Door. The *Backend* service tag contains the IP addresses that Azure Front Door uses to access your origins. You can apply this service tag when you [configure security for your origins](../frontdoor/origin-security.md). *FirstParty* is a special tag reserved for a select group of Microsoft services hosted on Azure Front Door. | Both | Yes | Yes |
| **AzureHealthcareAPIs** | The IP addresses covered by this tag can be used to restrict access to Azure Health Data Services. | Both | No | Yes | | **AzureInformationProtection** | Azure Information Protection.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory**, **AzureFrontDoor.Frontend** and **AzureFrontDoor.FirstParty** tags. | Outbound | No | Yes | | **AzureIoTHub** | Azure IoT Hub. | Outbound | Yes | Yes |
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
Previously updated : 01/17/2024 Last updated : 01/25/2024
Before beginning the workflow, verify that you're on the correct article. The fo
[!INCLUDE [All client articles](../../includes/vpn-gateway-vpn-client-install-articles.md)]
->[!IMPORTANT]
->[!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
- ## Workflow In this article, we start with generating VPN client configuration files and client certificates:
-1. [Generate files to configure the VPN client](#1-generate-vpn-client-configuration-files)
-1. [Generate certificates for the VPN client](#2-generate-client-certificates)
-
-After the steps in these sections are completed, continue on to [3. Configure the VPN client](#3-configure-the-vpn-client). The steps you use to configure your VPN client depend on the tunnel type for your P2S VPN gateway, and the VPN client on the client computer.
+1. [Generate files to configure the VPN client](#1-generate-vpn-client-configuration-files).
+1. [Generate certificates for the VPN client](#2-generate-client-certificates).
+1. [Configure the VPN client](#3-configure-the-vpn-client). The steps you use to configure your VPN client depend on the tunnel type for your P2S VPN gateway, and the VPN client on the client computer.
-* **IKEv2 and SSTP - native VPN client steps** - If your P2S VPN gateway is configured to use IKEv2/SSTP and certificate authentication, you can connect to your VNet using the native VPN client that's part of your Windows operating system. This configuration doesn't require additional client software. See [IKEv2 and SSTP - native VPN client](#ike).
-* **OpenVPN** - If your P2S VPN gateway is configured to use an OpenVPN tunnel and certificate authentication, you have the option of using either the [Azure VPN Client](#openvpn), or the [OpenVPN client](#azurevpn).
+ * **IKEv2 and SSTP - native VPN client steps** - If your P2S VPN gateway is configured to use IKEv2/SSTP and certificate authentication, you can connect to your VNet using the native VPN client that's part of your Windows operating system. This configuration doesn't require additional client software. For steps, see [IKEv2 and SSTP - native VPN client](point-to-site-vpn-client-certificate-windows-native.md).
+ * **OpenVPN** - If your P2S VPN gateway is configured to use an OpenVPN tunnel and certificate authentication, you have the option of using either the [Azure VPN Client](#azurevpn) or the [OpenVPN client](#openvpn) steps in this article.
## 1. Generate VPN client configuration files
In many cases, you can install the client certificate directly on the client com
Next, configure the VPN client. Select from the following instructions:
-* [IKEv2 and SSTP - native VPN client steps](#ike)
+* [IKEv2 and SSTP - native VPN client steps](point-to-site-vpn-client-certificate-windows-native.md)
* [OpenVPN - OpenVPN client steps](#openvpn) * [OpenVPN - Azure VPN Client steps](#azurevpn)
-## <a name="ike"></a>Native VPN client steps - IKEv2/SSTP
-
-If your P2S VPN gateway is configured to use IKEv2/SSTP and certificate authentication, you can connect to your VNet using the native VPN client that's part of your Windows operating system. This configuration doesn't require additional client software.
-
-### <a name="view-ike"></a>View configuration files
-
-Unzip the VPN client profile configuration file to view the following folders:
-
-* **WindowsAmd64** and **WindowsX86**, which contain the Windows 64-bit and 32-bit installer packages, respectively. The **WindowsAmd64** installer package is for all supported 64-bit Windows clients, not just Amd.
-* **Generic**, which contains general information used to create your own VPN client configuration. The Generic folder is provided if IKEv2 or SSTP+IKEv2 was configured on the gateway. If only SSTP is configured, then the Generic folder isn't present.
-
-### <a name="install"></a>Configure VPN client profile
-
-You can use the same VPN client configuration package on each Windows client computer, as long as the version matches the architecture for the client. For the list of client operating systems that are supported, see the point-to-site section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).
-
->[!NOTE]
->You must have Administrator rights on the Windows client computer from which you want to connect.
-
-1. Select the VPN client configuration files that correspond to the architecture of the Windows computer. For a 64-bit processor architecture, choose the 'VpnClientSetupAmd64' installer package. For a 32-bit processor architecture, choose the 'VpnClientSetupX86' installer package.
-
-1. Double-click the package to install it. If you see a SmartScreen popup, click **More info**, then **Run anyway**.
-
-1. Install the client certificate. Typically, you can do this by double-clicking the certificate file and providing a password if required. For more information, see [Install client certificates](point-to-site-how-to-vpn-client-install-azure-cert.md).
-
-1. Connect to your VPN. Go to the **VPN** settings and locate the VPN connection that you created. It's the same name as your virtual network. Select **Connect**. A pop-up message might appear. Select **Continue** to use elevated privileges.
-1. On the **Connection status** page, select **Connect** to start the connection. If you see a **Select Certificate** screen, verify that the client certificate showing is the one that you want to use to connect. If it isn't, use the drop-down arrow to select the correct certificate, and then select **OK**.
- ## <a name="azurevpn"></a>Azure VPN Client steps - OpenVPN If your P2S VPN gateway is configured to use an OpenVPN tunnel type and certificate authentication, you can connect using the Azure VPN Client.
vpn-gateway Point To Site Vpn Client Certificate Windows Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-certificate-windows-native.md
+
+ Title: 'Configure P2S VPN clients: certificate authentication: Windows native client'
+
+description: Learn how to configure the native VPN client on a Windows computer for point-to-site certificate authentication connections.
+ Last updated : 01/25/2024
+# Configure the Windows native VPN client for P2S Certificate Authentication connections
+
+If your point-to-site (P2S) VPN gateway is configured to use IKEv2/SSTP and certificate authentication, you can connect to your virtual network using the native VPN client that's part of your Windows operating system. This article walks you through the steps to configure the native VPN client and connect to your virtual network.
+
+## Before you begin
+
+This article assumes that you've already performed the following prerequisites:
+
+* You created and configured your VPN gateway for point-to-site certificate authentication and an IKEv2/SSTP tunnel type. See [Configure server settings for P2S VPN Gateway connections - certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md) for steps.
+* You generated client certificates and downloaded the VPN client configuration files. See [Point-to-site VPN clients: certificate authentication - Windows](point-to-site-vpn-client-cert-windows.md).
+
+Before beginning the workflow, verify that you're on the correct VPN client configuration article. The following table shows the configuration articles available for VPN Gateway point-to-site VPN clients. Steps differ, depending on the authentication type, tunnel type, and the client OS.
++
+## View configuration files
+
+The VPN client profile configuration package contains specific folders. The files within the folders contain the settings needed to configure the VPN client profile on the client computer. The files and the settings they contain are specific to the VPN gateway and the type of authentication and tunnel your VPN gateway is configured to use.
+
+Locate and unzip the VPN client profile configuration package you generated. For certificate authentication and IKEv2/SSTP, you'll see the following folders:
+
+* **WindowsAmd64** and **WindowsX86** contain the Windows 64-bit and 32-bit installer packages, respectively. The **WindowsAmd64** installer package is for all supported 64-bit Windows clients, not just AMD.
+* **Generic** contains general information used to create your own VPN client configuration. The Generic folder is provided if IKEv2 or SSTP+IKEv2 was configured on the gateway. If only SSTP is configured, then the Generic folder isn't present.
+
+## Configure the VPN client profile
+
+To connect, you'll first need to configure the VPN client with the required settings. You do this by configuring the VPN client profile using the settings contained in the VPN client configuration package. The settings in the package are specific to the VPN gateway to which you connect.
+
+You can use the same VPN client configuration package on each Windows client computer, as long as the version matches the architecture for the client. For the list of client operating systems that are supported, see the point-to-site section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).
+
+>[!NOTE]
+>You must have Administrator rights on the Windows client computer from which you want to connect.
+
+1. Select the VPN client configuration files that correspond to the architecture of the Windows computer. For a 64-bit processor architecture, choose the 'VpnClientSetupAmd64' installer package. For a 32-bit processor architecture, choose the 'VpnClientSetupX86' installer package.
+
+1. Double-click the package to install it. If you see a SmartScreen popup, select **More info**, then **Run anyway**.
+
+1. Install the client certificate. Typically, you can do this by double-clicking the certificate file and providing a password (if required). For more information, see [Install client certificates](point-to-site-how-to-vpn-client-install-azure-cert.md).
+
+## Connect
+
+Connect to your virtual network via point-to-site VPN.
+
+1. Go to the **VPN** settings and locate the VPN connection that you created. It's the same name as your virtual network. Select **Connect**. A pop-up message might appear. Select **Continue** to use elevated privileges.
+1. On the **Connection status** page, select **Connect** to start the connection. If you see a **Select Certificate** screen, verify that the client certificate showing is the one that you want to use to connect. If it isn't, use the drop-down arrow to select the correct certificate, and then select **OK**.
+
+## Next steps
+
+[Point-to-site configuration steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md)
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
description: Learn how to configure VPN Gateway server settings for P2S configur
Previously updated : 08/11/2023 Last updated : 01/25/2024
For this exercise, select **Azure certificate** for the authentication type. If
In this section, you upload public root certificate data to Azure. Once the public certificate data is uploaded, Azure can use it to authenticate clients that have installed a client certificate generated from the trusted root certificate.
-1. Navigate to your **Virtual network gateway -> Point-to-site configuration** page in the **Root certificate** section. This section is only visible if you have selected **Azure certificate** for the authentication type.
1. Make sure that you exported the root certificate as a **Base-64 encoded X.509 (.CER)** file in the previous steps. You need to export the certificate in this format so you can open the certificate with a text editor. You don't need to export the private key.
- :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/export-base-64.png" alt-text="Screenshot showing export as Base-64 encoded X.509." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/export-base-64-expand.png" :::
1. Open the certificate with a text editor, such as Notepad. When copying the certificate data, make sure that you copy the text as one continuous line without carriage returns or line feeds. You may need to modify your view in the text editor to 'Show Symbol/Show all characters' to see the carriage returns and line feeds. Copy only the following section as one continuous line: :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/notepad-root-cert.png" alt-text="Screenshot showing root certificate information in Notepad." border="false" lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/notepad-root-cert-expand.png":::
+1. Navigate to your **Virtual network gateway -> Point-to-site configuration** page in the **Root certificate** section. This section is only visible if you have selected **Azure certificate** for the authentication type.
1. In the **Root certificate** section, you can add up to 20 trusted root certificates. * Paste the certificate data into the **Public certificate data** field.
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
description: This page provides information on web application firewall CRS rule
Previously updated : 11/08/2022 Last updated : 01/25/2024 # Web Application Firewall DRS and CRS rule groups and rules
-Application Gateway web application firewall (WAF) protects web applications from common vulnerabilities and exploits. Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Since such rule sets are managed by Azure, the rules are updated as needed to protect against new attack signatures. Default rule set also includes the Microsoft Threat Intelligence Collection rules that are written in partnership with the Microsoft Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
+The Azure-managed rule sets in the Application Gateway web application firewall (WAF) actively protect web applications from common vulnerabilities and exploits. These rule sets, managed by Azure, receive updates as necessary to guard against new attack signatures. The default rule set also incorporates the Microsoft Threat Intelligence Collection rules. The Microsoft Intelligence team collaborates in writing these rules, ensuring enhanced coverage, specific vulnerability patches, and improved false positive reduction.
-Customers also have the option of using rules that are defined based on the OWASP core rule sets 3.2, 3.1, 3.0, or 2.2.9.
+You also have the option of using rules that are defined based on the OWASP core rule sets 3.2, 3.1, 3.0, or 2.2.9.
-Rules can be disabled on a rule-by-rule basis, or you can set specific actions by individual rule. This article contains the current rules and rule sets offered. In the rare occasion that a published ruleset needs to be updated, it will be documented here.
+You can disable rules individually, or set specific actions for each rule. This article lists the current rules and rule sets available. If a published rule set requires an update, we'll document it here.
> [!NOTE]
-> When changing from one ruleset version to another all disabled and enabled rule settings will return to the default for the ruleset your are migrating to. This means that if you previously disabled or enabled a rule, you will need to disable or enable it again once you've moved to the new ruleset version.
+> When changing from one ruleset version to another all disabled and enabled rule settings will return to the default for the ruleset you're migrating to. This means that if you previously disabled or enabled a rule, you will need to disable or enable it again once you've moved to the new ruleset version.
## Default rule sets
The version number of the DRS increments when new attack signatures are added to
The Microsoft Threat Intelligence Collection rules are written in partnership with the Microsoft Threat Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction. > [!NOTE]
-> Please follow the below guidance to tune WAF while you get started with 2.1 on Application Gateway WAF. Details of the rules are as below.
+> Please use the following guidance to tune WAF while you get started with 2.1 on Application Gateway WAF. Details of the rules are described next.
|Rule ID |Rule Group|Description |Details| |||||
The Microsoft Threat Intelligence Collection rules are written in partnership wi
## Core rule sets
-The Application Gateway WAF comes pre-configured with CRS 3.2 by default, but you can choose to use any other supported CRS version.
+The Application Gateway WAF comes preconfigured with CRS 3.2 by default, but you can choose to use any other supported CRS version.
CRS 3.2 offers a new engine and new rule sets defending against Java injections, an initial set of file upload checks, and fewer false positives compared with earlier versions of CRS. You can also [customize rules to suit your needs](application-gateway-customize-waf-rules-portal.md). Learn more about the new [Azure WAF engine](waf-engine.md).
Both DRS and CRS are enabled by default in Detection mode in your WAF policies.
Sometimes you might need to omit certain request attributes from a WAF evaluation. A common example is Active Directory-inserted tokens that are used for authentication. You can configure exclusions to apply when specific WAF rules are evaluated, or to apply globally to the evaluation of all WAF rules. Exclusion rules apply to your whole web application. For more information, see [Web Application Firewall (WAF) with Application Gateway exclusion lists](application-gateway-waf-configuration.md).
-By default, DRS version 2.1 / CRS version 3.2 and above will leverage anomaly scoring when a request matches a rule, CRS 3.1 and below will block matching requests by default. Additionally, custom rules can be configured in the same WAF policy if you wish to bypass any of the pre-configured rules in the Core Rule Set.
+By default, DRS version 2.1 / CRS version 3.2 and above uses anomaly scoring when a request matches a rule. CRS 3.1 and below blocks matching requests by default. Additionally, custom rules can be configured in the same WAF policy if you want to bypass any of the preconfigured rules in the Core Rule Set.
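To illustrate the anomaly-scoring model, here's a simplified sketch (not the WAF engine itself), assuming the documented CRS severity weights (Critical = 5, Error = 4, Warning = 3, Notice = 2) and the default inbound anomaly threshold of 5; the matched severities are hypothetical:

```python
# Simplified anomaly-scoring aggregation: severity weights of matched rules
# are summed, and the request is blocked (in Prevention mode) when the total
# reaches the threshold.
SEVERITY_WEIGHTS = {"Critical": 5, "Error": 4, "Warning": 3, "Notice": 2}
INBOUND_THRESHOLD = 5

def should_block(matched_severities):
    score = sum(SEVERITY_WEIGHTS[s] for s in matched_severities)
    return score >= INBOUND_THRESHOLD

print(should_block(["Critical"]))           # True: a single Critical match meets the threshold
print(should_block(["Warning"]))            # False: 3 < 5
print(should_block(["Warning", "Notice"]))  # True: 3 + 2 = 5
```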
Custom rules are always applied before rules in the Core Rule Set are evaluated. If a request matches a custom rule, the corresponding rule action is applied. The request is either blocked or passed through to the back-end. No other custom rules or the rules in the Core Rule Set are processed.
For example, a single *Critical* rule match is enough for the WAF to block a req
### DRS 2.1
-DRS 2.1 rules offer better protection than earlier versions of the DRS. It includes additional rules developed by the Microsoft Threat Intelligence team and updates to signatures to reduce false positives. It also supports transformations beyond just URL decoding.
+DRS 2.1 rules offer better protection than earlier versions of the DRS. It includes more rules developed by the Microsoft Threat Intelligence team and updates to signatures to reduce false positives. It also supports transformations beyond just URL decoding.
DRS 2.1 includes 17 rule groups, as shown in the following table. Each group contains multiple rules, and you can customize behavior for individual rules, rule groups, or entire rule set.
The following rule groups and rules are available when using Web Application Fir
### <a name="drs911-21"></a> METHOD ENFORCEMENT |RuleId|Description| |||
-|911100|Method is not allowed by policy|
+|911100|Method isn't allowed by policy|
### <a name="drs920-21"></a> PROTOCOL-ENFORCEMENT |RuleId|Description|
The following rule groups and rules are available when using Web Application Fir
|920100|Invalid HTTP Request Line| |920120|Attempted multipart/form-data bypass| |920121|Attempted multipart/form-data bypass|
-|920160|Content-Length HTTP header is not numeric.|
+|920160|Content-Length HTTP header isn't numeric.|
|920170|GET or HEAD Request with Body Content.| |920171|GET or HEAD Request with Transfer-Encoding.| |920180|POST request missing Content-Length Header.|
The following rule groups and rules are available when using Web Application Fir
|920340|Request Containing Content, but Missing Content-Type header| |920341|Request containing content requires Content-Type header| |920350|Host header is a numeric IP address|
-|920420|Request content type is not allowed by policy|
-|920430|HTTP protocol version is not allowed by policy|
+|920420|Request content type isn't allowed by policy|
+|920430|HTTP protocol version isn't allowed by policy|
|920440|URL file extension is restricted by policy| |920450|HTTP header is restricted by policy| |920470|Illegal Content-Type header|
-|920480|Request content type charset is not allowed by policy|
+|920480|Request content type charset isn't allowed by policy|
|920500|Attempt to access a backup or working file| ### <a name="drs921-21"></a> PROTOCOL-ATTACK
The following rule groups and rules are available when using Web Application Fir
### <a name="crs911-32"></a> REQUEST-911-METHOD-ENFORCEMENT |RuleId|Description| |||
-|911100|Method is not allowed by policy|
+|911100|Method isn't allowed by policy|
### <a name="crs913-32"></a> REQUEST-913-SCANNER-DETECTION |RuleId|Description|
The following rule groups and rules are available when using Web Application Fir
|920100|Invalid HTTP Request Line| |920120|Attempted multipart/form-data bypass| |920121|Attempted multipart/form-data bypass|
-|920160|Content-Length HTTP header is not numeric.|
+|920160|Content-Length HTTP header isn't numeric.|
|920170|GET or HEAD Request with Body Content.| |920171|GET or HEAD Request with Transfer-Encoding.| |920180|POST request missing Content-Length Header.|
The following rule groups and rules are available when using Web Application Fir
|920340|Request Containing Content, but Missing Content-Type header| |920341|Request containing content requires Content-Type header| |920350|Host header is a numeric IP address|
-|920420|Request content type is not allowed by policy|
-|920430|HTTP protocol version is not allowed by policy|
+|920420|Request content type isn't allowed by policy|
+|920430|HTTP protocol version isn't allowed by policy|
|920440|URL file extension is restricted by policy| |920450|HTTP header is restricted by policy (%{MATCHED_VAR})| |920460|Abnormal Escape Characters|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| |||
-|911100|Method is not allowed by policy|
+|911100|Method isn't allowed by policy|
### <a name="crs913-31"></a> REQUEST-913-SCANNER-DETECTION
The following rule groups and rules are available when using Web Application Fir
|920121|Attempted multipart/form-data bypass| |920130|Failed to parse request body.| |920140|Multipart request body failed strict validation|
-|920160|Content-Length HTTP header is not numeric.|
+|920160|Content-Length HTTP header isn't numeric.|
|920170|GET or HEAD Request with Body Content.| |920171|GET or HEAD Request with Transfer-Encoding.| |920180|POST request missing Content-Length Header.|
The following rule groups and rules are available when using Web Application Fir
|920340|Request Containing Content but Missing Content-Type header| |920341|Request containing content requires Content-Type header| |920350|Host header is a numeric IP address|
-|920420|Request content type is not allowed by policy|
-|920430|HTTP protocol version is not allowed by policy|
+|920420|Request content type isn't allowed by policy|
+|920430|HTTP protocol version isn't allowed by policy|
|920440|URL file extension is restricted by policy| |920450|HTTP header is restricted by policy (%@{MATCHED_VAR})| |920460|Abnormal Escape Characters|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| |||
-|911100|Method is not allowed by policy|
+|911100|Method isn't allowed by policy|
### <a name="crs913-30"></a> REQUEST-913-SCANNER-DETECTION
The following rule groups and rules are available when using Web Application Fir
|920100|Invalid HTTP Request Line| |920130|Failed to parse request body.| |920140|Multipart request body failed strict validation|
-|920160|Content-Length HTTP header is not numeric.|
+|920160|Content-Length HTTP header isn't numeric.|
|920170|GET or HEAD Request with Body Content.| |920180|POST request missing Content-Length Header.| |920190|Range = Invalid Last Byte Value.|
The following rule groups and rules are available when using Web Application Fir
|920390|Total arguments size exceeded| |920400|Uploaded file size too large| |920410|Total uploaded files size too large|
-|920420|Request content type is not allowed by policy|
-|920430|HTTP protocol version is not allowed by policy|
+|920420|Request content type isn't allowed by policy|
+|920430|HTTP protocol version isn't allowed by policy|
|920440|URL file extension is restricted by policy| |920450|HTTP header is restricted by policy (%@{MATCHED_VAR})| |920200|Range = Too many fields (6 or more)|
The following rule groups and rules are available when using Web Application Fir
|960912|Failed to parse request body.| |960914|Multipart request body failed strict validation| |960915|Multipart parser detected a possible unmatched boundary.|
-|960016|Content-Length HTTP header is not numeric.|
+|960016|Content-Length HTTP header isn't numeric.|
|960011|GET or HEAD Request with Body Content.| |960012|POST request missing Content-Length Header.| |960902|Invalid Use of Identity Encoding.|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description| |||
-|960032|Method is not allowed by policy|
-|960010|Request content type is not allowed by policy|
-|960034|HTTP protocol version is not allowed by policy|
+|960032|Method isn't allowed by policy|
+|960010|Request content type isn't allowed by policy|
+|960034|HTTP protocol version isn't allowed by policy|
|960035|URL file extension is restricted by policy| |960038|HTTP header is restricted by policy|